As mentioned before, one of the most frustrating things about the internet is the likelihood that following a link will lead to a page that has moved, changed, or vanished since the link was posted. Given the massive increases in bandwidth and storage space that have taken place, I had an idea for combating this: an automated system that saves a cached copy of any linked page, then lets anyone viewing the linking page see the saved version if the original becomes unavailable. A blog post linking to a news story or another blog entry could then still provide access to either, even after they are no longer available in their original contexts.
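To make that concrete, here is a rough sketch of the fallback behaviour in Python – not what a real blogging platform would actually use, but the idea is language-agnostic. The `cached_copy_url()` lookup and the cache address are made up for the example; they just stand in for wherever the site keeps its snapshots.

```python
# Sketch only: cached_copy_url() and the cache address are hypothetical.
import urllib.error
import urllib.parse
import urllib.request


def cached_copy_url(original_url: str) -> str:
    # Hypothetical lookup: where this site stored its snapshot of the page.
    return "https://example-blog.test/cache/" + urllib.parse.quote(original_url, safe="")


def resolve_link(original_url: str) -> str:
    """Return the original URL if it still responds, otherwise the cached copy."""
    try:
        request = urllib.request.Request(original_url, method="HEAD")
        with urllib.request.urlopen(request, timeout=5):
            return original_url  # page still reachable
    except (urllib.error.URLError, TimeoutError):
        return cached_copy_url(original_url)  # moved, changed, or vanished
```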
It would work a lot like Google’s cache: saving text and formatting, with links to any images and video in their original locations. That way, most of the content would be retained without using excessive disk space. To begin with, the system could be integrated into content management systems like WordPress. Eventually, it might be sensible for every link created to exhibit this behaviour by default – at least on websites that choose to enable it.
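As a rough illustration of the caching step, the sketch below fetches a linked page and saves its markup, rewriting relative image and video sources to absolute URLs so the media keeps loading from its original host rather than being copied. The `CACHE_DIR` location and `cache_link()` name are invented for the example; a WordPress integration would of course be a plugin hooked into post saving, but the approach would be the same.

```python
# Sketch only: CACHE_DIR and cache_link() are illustrative, not a real plugin API.
import hashlib
import pathlib
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

CACHE_DIR = pathlib.Path("link_cache")
MEDIA_TAGS = {"img", "video", "audio", "source"}


class KeepMediaRemote(HTMLParser):
    """Re-emit the page's markup, rewriting relative media src attributes to
    absolute URLs so the cached copy still loads images and video from the
    original host instead of storing them locally."""

    def __init__(self, base_url: str):
        super().__init__(convert_charrefs=True)
        self.base_url = base_url
        self.parts = []

    def _render(self, tag, attrs, self_closing=False):
        if tag in MEDIA_TAGS:
            attrs = [(k, urljoin(self.base_url, v)) if k == "src" and v else (k, v)
                     for k, v in attrs]
        rendered = "".join(f' {k}="{v}"' if v is not None else f" {k}" for k, v in attrs)
        self.parts.append(f"<{tag}{rendered}{' /' if self_closing else ''}>")

    def handle_starttag(self, tag, attrs):
        self._render(tag, attrs)

    def handle_startendtag(self, tag, attrs):
        self._render(tag, attrs, self_closing=True)

    def handle_endtag(self, tag):
        self.parts.append(f"</{tag}>")

    def handle_data(self, data):
        self.parts.append(data)


def cache_link(url: str) -> pathlib.Path:
    """Save the text and formatting of a linked page, leaving images and
    video as references to wherever they originally live."""
    with urllib.request.urlopen(url, timeout=10) as response:
        charset = response.headers.get_content_charset() or "utf-8"
        html = response.read().decode(charset, errors="replace")
    parser = KeepMediaRemote(url)
    parser.feed(html)
    CACHE_DIR.mkdir(exist_ok=True)
    destination = CACHE_DIR / (hashlib.sha256(url.encode()).hexdigest() + ".html")
    destination.write_text("".join(parser.parts), encoding="utf-8")
    return destination
```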
Like so many useful things, the system would run up against copyright restrictions. That being said, Google has thus far been successful in defending the legality of their own caching practices. Perhaps the courts would be willing to consider the kind of enhanced links I have described a fair use of potentially copyrighted material.