So: why doesn’t my web browser detect unlinked URLs in a page and turn them into links for me? Sure, sure, it should be an option I can turn off. However, I want to stop cutting and pasting stuff like http://www.meyerweb.com. For that matter, I wouldn’t mind if it picked up any hostname beginning with www — let it catch www.meyerweb.com too.
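The spotting part, at least, is pretty easy. A minimal sketch of what I mean (my own names and regex, not anything an actual browser does) might look like:

```python
import re

# Catches explicit http:// or https:// URLs, plus bare hostnames
# starting with "www." -- the two cases described above.
URL_PATTERN = re.compile(
    r'(?P<url>https?://[^\s<>"]+)'   # explicit scheme
    r'|(?P<bare>\bwww\.[^\s<>"]+)'   # bare www. hostname
)

def linkify(text):
    """Wrap detected URLs in <a> elements, prepending http://
    to bare www. hostnames so the link actually works."""
    def repl(match):
        if match.group('url'):
            url = match.group('url')
            return f'<a href="{url}">{url}</a>'
        bare = match.group('bare')
        return f'<a href="http://{bare}">{bare}</a>'
    return URL_PATTERN.sub(repl, text)
```

So `linkify("see www.meyerweb.com")` would hand back the same text with the hostname wrapped in a working link.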
Catching anything that registers as a domain name might be a bit much. On the other hand, it might be worth doing a DNS lookup and converting anything that resolves. In a very optimistic world with sufficient computing power, you could do the DNS lookup, check port 80, and if there’s something responding then do the conversion.
Hell, humans are slow readers. Go ahead and fetch the page and cache it in case that’s where I want to go next. At this point you ought to be prefetching allll the links, though.
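The prefetch-and-cache idea reduces to something like this little sketch (class and parameter names are mine; the fetcher is injectable purely so the caching logic is visible on its own):

```python
from urllib.request import urlopen

class PrefetchCache:
    """Toy link-prefetch cache: grab pages ahead of time so a
    click can be served from memory instead of the network."""
    def __init__(self, fetch=None):
        # fetch is swappable so the cache can be exercised offline
        self._fetch = fetch or (lambda url: urlopen(url, timeout=5).read())
        self._cache = {}

    def prefetch(self, url):
        # fetch each URL at most once, ahead of any click
        if url not in self._cache:
            self._cache[url] = self._fetch(url)

    def get(self, url):
        # serve from cache if prefetched, otherwise fetch on demand
        self.prefetch(url)
        return self._cache[url]
```

Point `prefetch` at every link on the current page and `get` becomes nearly free for wherever the reader goes next.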
And they say there’s no reasonable use for more bandwidth. It is to snicker. You’d just keep precaching further and further out, the more bandwidth you get.