The Information Superhighway is reaching its speed limit



Gizmodo reported something this week that sent chills through all of us who rely on the Internet for so much of our lives. It seems that we are getting close to reaching the physical capacity of the underlying cable infrastructure that keeps the information superhighway speeding along. According to researchers in London, the fiber optic cables that are so critical to the Internet are reaching their physical limits. One estimate that came to light this week is that we may hit the 100 terabytes per second barrier in FIVE YEARS.

This may not seem like such a crisis, but with 4K video gaining popularity, 8K video coming soon, and our hunger for streaming music, streaming video, streaming gaming, and enterprise cloud computing driving bandwidth demands ever higher, coming up with a solution that meets growing consumer and business demand is no certainty. Researchers are working on many possible solutions, but these are akin to squeezing ever more blood from a stone. All we can do is hope that a solution is found before the Information Superhighway crashes and burns.

Source: Gizmodo

2 thoughts on “The Information Superhighway is reaching its speed limit”

  1. Many moons ago it was common for ISPs to host usenet services within their own network. One of the reasons they did this was so the material would only need to pass the peering connection once rather than every time somebody WANTED something. The bandwidth savings wasn’t huge (as they pulled EVERYTHING), but it was there at a time when bandwidth was expensive.

    Netflix, for example, has floated the idea of a “cache”-type box within an ISP’s network, which would host the most commonly streamed files from WITHIN the ISP’s network (to avoid all that traffic passing across the peer), much like what was done with usenet way back when. If “congestion” does become an issue, there will be solutions using current technology, so we don’t need “magic” to make bits go faster. (A simplified sketch of this edge-cache idea appears after these comments.)

  2. The simplest solution in many cases will be to ‘lay another cable’: two cables carry twice the data of one. (And one ‘cable’ is many bundles of fiber, so they can lay a new cable with more bundles and increase capacity that way.)

    Caching will also help, as jhon mentioned (although the way DRM is implemented will in many cases limit the caching available). Another simple fix in many situations is to switch away from streaming to downloading: many sites now require you to re-stream files that you used to be able to just download to a local cache and watch. (I believe this change is often in the interest of control over, and monitoring of, what users watch.) (As for jhon’s specific example: those caches are actually in place in most cases. Where they aren’t, it’s typically business concerns preventing them, not technical ones, despite Netflix offering them for free.)

    The rise of privacy concerns and the move from HTTP to HTTPS is also an influence: plain HTTP is far easier for intermediaries to cache, though the abuse of trust by some parties has made that benefit less important than the privacy protection afforded by actually encrypting things.

    Still, in most cases I think this is probably non-news. Even if we can’t squeeze more out of a single fiber, there are dark fibers already in the ground, the option to lay more fiber, or a switch to something else, either replacing fiber or in addition to it.
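
To make the “cache box inside the ISP” idea above concrete, here is a minimal, hypothetical sketch in Python: a tiny LRU cache that pulls a popular title across the peering link once and then serves every later request from inside the ISP’s own network. The EdgeCache class and the fetch_from_origin callback are invented for this illustration; this is not Netflix’s or any ISP’s actual software.

    # Hypothetical illustration of an ISP-hosted content cache: popular content
    # crosses the peering link once, then is served locally to every later
    # subscriber. All names here are invented for this sketch.
    from collections import OrderedDict

    class EdgeCache:
        def __init__(self, capacity_items=1000):
            self.capacity = capacity_items
            self.store = OrderedDict()   # content_id -> cached bytes
            self.peering_fetches = 0     # requests that crossed the peering link
            self.local_hits = 0          # requests served from inside the ISP

        def get(self, content_id, fetch_from_origin):
            if content_id in self.store:
                self.store.move_to_end(content_id)   # mark as recently used
                self.local_hits += 1
                return self.store[content_id]
            data = fetch_from_origin(content_id)     # only a miss crosses the peer
            self.peering_fetches += 1
            self.store[content_id] = data
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)       # evict least recently used
            return data

    if __name__ == "__main__":
        cache = EdgeCache(capacity_items=100)
        origin = lambda cid: ("video bytes for " + cid).encode()
        for _ in range(3):                           # three subscribers, same title
            cache.get("popular-show-s01e01", origin)
        print(cache.peering_fetches, "fetch across the peering link")   # 1
        print(cache.local_hits, "requests served from inside the ISP")  # 2

Running it shows one fetch across the peering link and two requests served locally; in practice, the high hit rate on a handful of very popular titles is what makes this kind of box worthwhile.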
