I was revisiting the Don Melton episode of the Internet History Podcast and heard them mention ‘Code Rush’ - I kinda vaguely remember watching this years ago, but it’s quite nutty to revisit now. It’s such a crazy time-capsule, covering the tail end of the browser wars as Netscape rushes to meet their own deadline to release the source code of Netscape Communicator, and the formation of Mozilla.org.
I’ve been re-living the dream of the ’90s recently, spending a lot of time listening to this excellent Internet History Podcast -
The site has a full list of all episodes, and the show itself is on pretty much any podcast app. In particular I enjoyed the conversations with most of the early Netscape engineers, such as Lou Montulli, Chris Wilson, Jon Mittelhauser and Aleks Totic, plus the several episodes discussing the birth of Wired magazine, Suck.com and early online culture!
Beautifully shot, mind-blowing short documentary about New York’s early telecoms buildings, the Western Union Building and the AT&T Long Lines Building.
I updated the IP addresses for both my name servers tonite, and was monitoring to see how quickly the new addresses were propagating. First stop was the exceptionally useful Whats My DNS.
At the host level I also wanted to track the incoming DNS queries using tcpdump. I could see them streaming into the new host, and visually there was an obvious difference when viewing the output of the same command on the old host. I googled around for a timer utility which runs a command for a given time, so I could quantify the difference. The perfect answer was here: a simple Perl wrapper function.
Here's how to use it to run the tcpdump command for sixty seconds and count the packets seen:
# doalarm () { perl -e 'alarm shift; exec @ARGV' "$@"; }
# doalarm 60 tcpdump -u -i eth0 port 53 -n | wc -l
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
19504
A month or two back, I saw an interesting figure on Bram Cohen's blog:
“The speed of light in a fiber optic cable around the earth’s circumference is about 200 milliseconds.” (from here)
I clipped it for my ever-expanding Evernote tech tips, thinking it's one of those useful metrics to know. I've referred to it a few times now, but I always like to verify things myself, so this morning I looked up the relevant data -
So - the speed of light in a vacuum is 186,000 miles per second. However, according to this Wikipedia article, the index of refraction for the cladding of an optical fiber is 1.52. “From this information, a good rule of thumb is that signal using optical fiber for communication will travel at around 200 million meters per second”.
Ok, so 200,000,000 meters / second = 200,000 meters / ms
“The circumference of the earth at the equator is 24,901.55 miles (40,075.16 kilometers).” // from here
40,075.16 km ≈ 40,075,000 meters
Putting all the figures together then: the Earth's circumference is 40,075,000 meters, and the speed of light in fiber is 200,000 meters per ms, so 40,075,000 / 200,000 = 200.375 ms
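For the record, here's the same arithmetic as a quick Perl snippet - nothing clever, just a sanity check of the numbers above:

#!/usr/bin/perl
use strict;
use warnings;

# Same figures as above, just typed into a script
my $earth_circumference_m = 40_075_000;   # meters, at the equator
my $fiber_speed_m_per_ms  = 200_000;      # ~200,000 km/s expressed as meters per millisecond

printf "Round-the-world trip in fiber: %.3f ms\n",
    $earth_circumference_m / $fiber_speed_m_per_ms;
# prints: Round-the-world trip in fiber: 200.375 ms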
// or to be even smarter, I could have just followed the Wolfram Alpha link from Bram's blog here - gotta love the Wolfram //
Unfortunately I forget where I found this link - Hacker News? The Edge Newsletter? I dunno, but it's a pretty interesting one -
A debate between an MIT professor, Erik Brynjolfsson, and an economist, Tyler Cowen, about the role of technology in driving economic growth. My views side with the MIT professor, as did most of the audience in the debate.
I won't repeat any of the arguments made in the debate, but what I will add is that the unequal distribution of wealth we see today is not a symptom of a lack of technological growth; it is purely down to good old-fashioned political manipulation and a deep-rooted tradition of cronyism, thousands of years old.
Technology on the other hand: absolutely it's what will drive the economy, but even that view completely misses the big picture, which is the Medium itself, The Universal Network. I believe we have created a whole new dimension, an evolutionary mathematical abstracted form of biology. This is the beginning of History, Year Zero.
One hundred years from now, or two thousand - people will be able to look back in time and know in rich detail what our life is like now. Thousands upon millions of instances of video and audio, images, writings, geolocations and online trails, all readily accessible, interlinked and searchable. This level of detail will only increase as we start recording every aspect of life.
With such archives of data, I can easily imagine the kids of 2123 being able to walk through and interact with a virtual London in the swinging 2020s, or San Francisco's roaring 2030s. Whereas, for future generations, any time predating the late 1990s will essentially be a static foreign place in comparison. We have created time-travel - we just don't know it yet.
This Network has already achieved a basic level of independence from humanity - where now it is possible for a Something to exist outwith a single containing computer system, using techniques like redundancy and geographic load-balancing. I don't mean to imply there is any intelligence there, but there is a level of resilience we've never seen in nature before. To give a more concrete example, I'm referring to something like you as a user interacting with the Amazon website to purchase something; meanwhile the power goes out in the datacentre hosting the server your browser was communicating with, and, if engineered correctly, your interaction could continue, picked up by a secondary datacentre with no loss of data nor interruption of service. This isn't exactly life as we know it, but if you squint your eyes just a little, it's not too hard to see an analogy to biological cell life.
Over the next few years, Society's experience of reality is going to go through the biggest change in history, as our physical world merges completely with this new virtual world of realtime interconnected information and communication, warping our sense of time and geography.
The iPhone was stage one, Google Glasses or something very similar will be stage two, and it's right around the corner.
I finished reading James Gleick's The Information tonite - so good!
Really, the central character is Claude Shannon, who I'm ashamed to admit I didn't previously know much about. Had a quick search when I finished it and found this decent little 30-minute documentary which gives a good overview -
I started reading James Gleick's “The Information” last week and haven't been able to put it down yet - so good! I just found this video of a talk he presented at Google last year about the book - looks ace, I'll save it for watching this evening.
I've been trying to track down problems with really slow network transfer speeds between my servers and several DSPs. I knew it wasn't local I/O, as we could hit around 60Mb/s to some services, whereas the problematic ones were a sluggish 0.30Mb/s; and I knew we weren't hitting our bandwidth limit, as Cacti showed us daily peaks of only around 500Mb/s on our 600Mb/s line.
I was working with the network engineer on the other side, running tcpdump captures while uploading a file and analysing them in Wireshark's IO Graphs - the stream looked absolutely fine: no lost packets, big non-changing TCP receive windows. We were pretty much stumped, and the other engineer recommended I look into HPN-SSH, which does indeed sound very good, but first I started playing around with different ciphers and compression.
Our uploads are all run via a Perl framework, which utilises Net::SFTP to do the transfers. My test program was also written in Perl, using the same library. To try different ciphers I started testing uploads with the interactive command-line sftp client. Boom! 6Mb/s upload speed. Biiiig difference from the Net::SFTP client. I started playing with the Blowfish cipher and trying to enable compression with Net::SFTP - it wasn't really working; it can only do Zlib compression, which my sshd server wouldn't play with until I specifically enabled compression in the sshd_config file.
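For reference, what I was trying with Net::SFTP looked roughly like this - the cipher and compression settings get handed through to Net::SSH::Perl via ssh_args. This is only a rough sketch: the host, credentials and filenames are made up, and the exact cipher name depends on your setup.

use strict;
use warnings;
use Net::SFTP;

# Hypothetical host and credentials, purely to show the shape of the call.
# ssh_args is passed straight through to Net::SSH::Perl.
my $sftp = Net::SFTP->new(
    'sftp.example.com',
    user     => 'deliveries',
    password => 'secret',
    ssh_args => [
        cipher      => 'blowfish-cbc',  # try a lighter cipher
        compression => 1,               # Zlib - the server needs "Compression yes" in sshd_config
    ],
);

$sftp->put('upload_test.bin', '/incoming/upload_test.bin');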
After much more digging around, I came across a reference to Net::SFTP::Foreign, which uses the installed ssh binary on your system for transport rather than relying on the pure-Perl Net::SSH::Perl.
The syntax is very similar, so it was a minor rewrite to switch modules, yet such a massive payback: from 0.30Mb/s up to 6Mb/s.
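The switch ended up looking something like this (again a minimal sketch with made-up host and paths - the real framework obviously does a lot more):

use strict;
use warnings;
use Net::SFTP::Foreign;

# "more" passes extra flags straight to the system ssh binary,
# e.g. to turn on compression for the transfer.
my $sftp = Net::SFTP::Foreign->new(
    'sftp.example.com',
    user => 'deliveries',
    more => [ '-o' => 'Compression=yes' ],
);
$sftp->error
    and die "Unable to connect: " . $sftp->error;

$sftp->put('upload_test.bin', '/incoming/upload_test.bin')
    or die "put failed: " . $sftp->error;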
(It turns out the DSPs I mentioned earlier who could achieve 60Mb/s were actually FTP transfers, not SFTP.)
Ever since reading Neal Stephenson's Mother Earth Mother Board article, I've been quite fascinated with the undersea cables which physically connect the land masses of the world. This map, linked above, is an amazing view into this part of the Internet's current infrastructure.