The Future of the Internet Part 2

OK, Future of the internet part two. Sounds like I should write something really cool and insightful with a headline like that.

Let me just point out a few articles that show where the internet may be headed.

The first is “The Semantic Web” by Tim Berners-Lee, which appeared in Scientific American’s special Internet issue in 2001. In a way, the RSS/XML feed technology that is catching on is the beginning of the semantic web he describes: data structured for machines to read, not just pages laid out for human eyes.
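
Just to make that concrete, here is a little sketch of what makes an RSS feed “semantic”: a program, not a person, can pull the titles and links out of it, because the data is structured. The feed URL below is only a placeholder; substitute any real RSS 2.0 feed.

```python
# A minimal sketch of why RSS hints at the semantic web: the feed is
# structured data a program can read. The URL is a placeholder.
import urllib.request
import xml.etree.ElementTree as ET

def read_feed(url):
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    # RSS 2.0 nests <item> elements under <channel>; each item carries
    # machine-readable metadata about one article.
    for item in tree.iter("item"):
        title = item.findtext("title", default="(no title)")
        link = item.findtext("link", default="")
        print(title, "->", link)

read_feed("http://example.com/feed.rss")  # placeholder URL
```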

The second is an article about self-configuring wireless networks that the military is developing. The idea is to create a self-sustaining peer-to-peer network of mobile computers in places where Wi-Fi hubs are not available: each machine relays traffic for its neighbors, so no central infrastructure is needed. The technology will no doubt spread to civilian use.
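
Here is a toy sketch of the relay idea, with made-up node positions and a made-up radio range. No node knows the whole network, only its own neighbors, yet a message from A still reaches D by hopping peer to peer.

```python
# Toy mesh network: each node only talks to peers within "radio range",
# and a message reaches distant nodes by being relayed hop to hop.
# Positions and range are invented for illustration.
from collections import deque

nodes = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (3, 0)}
RANGE = 1.5  # pretend radio range

def in_range(a, b):
    (x1, y1), (x2, y2) = nodes[a], nodes[b]
    return (x1 - x2) ** 2 + (y1 - y2) ** 2 <= RANGE ** 2

def flood(source):
    """Breadth-first relay: the network 'configures itself' because no
    node needs a map of the whole network, only its own neighbors."""
    reached, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        for peer in nodes:
            if peer not in reached and in_range(node, peer):
                reached.add(peer)
                queue.append(peer)
    return reached

print(flood("A"))  # {'A', 'B', 'C', 'D'} -- D is reached via B and C
```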

A third article that caught my eye is one from CNET about how, as broadband gets faster and faster, the slowest part of the web can become the DNS. The DNS is the rather elaborate system that translates “yahoo.com” into a network-reachable address like 216.109.112.135.
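
In code, that translation is a single call to the system resolver. The address printed will be whatever Yahoo’s DNS returns at the moment, which may not match the one above.

```python
# Ask the operating system's resolver to turn a name into an address.
import socket

print(socket.gethostbyname("yahoo.com"))
```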

If you open a DOS window and type “tracert yahoo.com”, you can see that there are a bunch of hoops to jump through before anything useful happens, and the first of them is the translation itself. Your domain request goes to the root servers, the root servers point you to the name servers for the domain, and the name servers provide the IP addresses where the content actually lives. If any one of them is slow, your internet will seem slow. Your speed is only as fast as the slowest link in the chain.
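
If you want to watch those hoops one at a time, here is a sketch using the third-party dnspython package (my own illustration, not anything from the article). It starts at a.root-servers.net, one of the real root servers, and follows each referral downward. Error handling, the TCP fallback, and CNAME chasing are all omitted, and it assumes each referral comes with glue records.

```python
# Iterative DNS resolution sketch (pip install dnspython): ask a root
# server, follow its referral to the .com servers, then follow that
# referral to the domain's own name servers.
import dns.message
import dns.query
import dns.rdatatype

def resolve_iteratively(name, server="198.41.0.4"):  # a.root-servers.net
    """Follow referrals from the root down, printing each hop."""
    while True:
        query = dns.message.make_query(name, dns.rdatatype.A)
        response = dns.query.udp(query, server, timeout=5)
        print("asked", server)
        if response.answer:  # a server finally knew the address
            return response.answer[0][0].address  # assumes an A record
        # Otherwise it is a referral: find a glue A record telling us
        # which server to ask next.
        server = next(
            rr.address
            for rrset in response.additional
            for rr in rrset
            if rr.rdtype == dns.rdatatype.A
        )

print(resolve_iteratively("yahoo.com"))
```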

But DNS speed is not that big of a problem, actually. While you are in the DOS window, type “ipconfig /displaydns”. When you visit a site, your computer stores that site’s DNS info temporarily. If you go back to the site, the hoops needed to get there are a lot shorter, because your computer remembers the name servers and IP addresses of domains you have recently visited.
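
A toy version of that cache, to show the idea: the first lookup does the real work, the second is answered from memory. The five-minute TTL here is made up; real DNS records carry their own TTLs.

```python
# A toy version of what the OS does behind "ipconfig /displaydns":
# remember answers for a while so repeat visits skip the hoops entirely.
import socket
import time

_cache = {}  # name -> (address, expiry time)
TTL = 300    # cache answers for five minutes (an invented TTL)

def cached_lookup(name):
    address, expires = _cache.get(name, (None, 0.0))
    if time.time() < expires:
        return address          # cache hit: no network traffic at all
    address = socket.gethostbyname(name)  # cache miss: full lookup
    _cache[name] = (address, time.time() + TTL)
    return address

cached_lookup("yahoo.com")  # slow: walks the DNS hierarchy
cached_lookup("yahoo.com")  # fast: answered from the local cache
```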

On the other hand, here is something to think about. The weakness of the WWW currently is that every request your computer has not already cached ultimately traces back to one of the 13 root servers. If those root servers go down, new lookups fail, and for practical purposes the internet goes down with them.

But recently we have developed technology like BitTorrent, which, whatever its reputation for skirting copyright law, works by distributing file hosting and indexing tasks across the internet rather than concentrating them on one central computer. Downloading files via BitTorrent is often faster than downloading from central servers. There are other examples of distributed networks doing cool things, like searching for really big prime numbers or searching for intelligent life in space, that anyone with a computer can participate in.
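
To show what distributing the hosting buys you, here is a toy piece-picking plan in the spirit of BitTorrent’s rarest-first strategy. The peers and the pieces they hold are invented, and a real client does far more (trackers, choking, hash verification).

```python
# Toy BitTorrent-style plan: the file is cut into pieces and each peer
# contributes the pieces it already has. Peers and pieces are invented.
pieces_held = {
    "peer1": {0, 2, 4},
    "peer2": {1, 3},
    "peer3": {2, 3, 4, 5},
}

def plan_download(total_pieces):
    """Pick a source for each piece, preferring the rarest pieces first,
    a simplified version of BitTorrent's rarest-first strategy."""
    availability = {
        piece: [p for p, held in pieces_held.items() if piece in held]
        for piece in range(total_pieces)
    }
    plan = {}
    for piece in sorted(availability, key=lambda p: len(availability[p])):
        sources = availability[piece]
        if sources:
            plan[piece] = sources[0]
    return plan

print(plan_download(6))  # piece -> which peer to fetch it from
```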

So why not handle DNS chores using distributed networks? There is just such an operation at http://www.opendns.com/.
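
Here is one way such a thing could work, sketched with consistent hashing (my own illustration, not how OpenDNS actually operates): hash every domain name onto a ring of peers, and each peer owns the names that land nearest to it. The peer list is invented, and a real system would need replication, signatures, and a way for peers to join and leave, none of which is shown.

```python
# Sketch of DNS chores on a distributed network: hash each domain name
# onto a ring of peer nodes, so every participant can compute which peer
# is responsible for which name -- no root servers involved.
import hashlib

peers = ["peer-a", "peer-b", "peer-c", "peer-d"]  # invented peer list

def ring_position(key):
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

def responsible_peer(domain):
    """Consistent-hashing flavor: the peer whose position follows the
    domain's position most closely on the ring owns that record."""
    target = ring_position(domain)
    return min(peers, key=lambda p: (ring_position(p) - target) % 2**160)

for name in ["yahoo.com", "example.test", "some.madeup.tld"]:
    print(name, "->", responsible_peer(name))
```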

If such a system could be developed so that root servers become optional, it would also be possible to create new top-level domains completely outside the control of ICANN. And getting out from under ICANN means getting out from under potential government interference.

Ah, one can dream.
