The Physical Layer of the Internet, Part 2: The Privatized Nature of the Internet

Justin James
Jul 16, 2020

Wow, okay, so it turns out that the internet is vast enough for me to write a part two to my original blog post, The Physical Layer of the Internet, except this one is all about, you guessed it, the privatization of the internet.

This article is primarily meant to serve as a nice little tidying-up of some ideas that were not quite clarified in the original post, or that I simply found interesting — I hope you find them interesting, too.

There are plenty of high-quality articles on the privatization of the internet, in the sense that this was a technology that started out as a way of linking high-powered supercomputers on university campuses for research and educational purposes. As the technology became more and more stable, demand increased, and it simply became too much for the government to handle (hey, I understand if this explanation seems a bit rushed; feel free to check out the extra reading at the bottom of this page for more information).

Information Superhighway

Information flows over the internet

What I really want to delve into is the interconnection of different internet service providers, and to explore some of the peculiarities that arise when you request information from servers over the internet. Where are those servers? How were they paid for? Does everyone own their own servers?

An important way to differentiate between the ways information flows is by examining how information changes hands, or doesn't, on its way to the server and back to the client (in case you forgot, the client is the one that requested the data from the server in the first place). As a primer for understanding this, it's important to remember, or realize, that the internet backbone, as a privatized structure, is merely the product of a series of contracts made between the various parties involved (which you will also see below, but which include, among others, internet service providers, companies that need content-hosting services, and everyone willing to provide those services and/or support the infrastructure).

When data travels over the internet and the server and client are both under the jurisdiction of the same internet service provider (ISP), the arrangement is simple: they both pay that ISP for the use of the internet, and the data never has to leave its network.

When data travels over the internet and the client and server have different internet service providers, there is a range of options.

Private Peering

The first of these options is private peering. This is the case in which a bilateral contract is made between the two ISPs to facilitate the transfer of data from one network to the other, generally with no exchange of money. Money only starts to change hands when it becomes clear that one carrier is shouldering a significantly higher proportion of the traffic than the other.
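To make the "roughly balanced traffic" idea concrete, here is a minimal sketch of how a peering arrangement might check whether the exchange has become lopsided enough to trigger payment. This is my own illustration, not any ISP's actual contract logic; the 3:1 threshold is a figure commonly cited in published peering policies, but real contracts vary.

```python
# Hypothetical illustration: checking whether a settlement-free peering
# arrangement has become lopsided. The 3:1 ratio threshold is a commonly
# cited figure in peering policies, but real contracts differ.

def peering_is_still_settlement_free(bytes_a_to_b: int, bytes_b_to_a: int,
                                     max_ratio: float = 3.0) -> bool:
    """Return True if traffic between ISP A and ISP B is balanced enough
    that neither side owes the other money."""
    heavier = max(bytes_a_to_b, bytes_b_to_a)
    lighter = min(bytes_a_to_b, bytes_b_to_a)
    if lighter == 0:
        return heavier == 0  # all traffic flows one way: clearly lopsided
    return heavier / lighter <= max_ratio

# Example: A sends 40 TB toward B while B sends only 10 TB toward A,
# a 4:1 ratio, so the heavier sender would likely be asked to pay.
print(peering_is_still_settlement_free(40_000, 10_000))  # False
```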

Internet Exchange Points

The next option is an internet exchange point (IXP), which is run by a separate organization (otherwise it would just be part of one ISP's infrastructure), where two or more ISPs come together to exchange data and get it to its destination. The member ISPs generally contribute directly to the maintenance of an IXP, or an outside company sees to its organization and upkeep. The largest operator of IXPs is Equinix, with over 200 data centers in many cities (of which only a portion are configured as IXPs).
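As a rough mental model (my own toy sketch, not how a real exchange or route server works), you can think of an IXP as a shared meeting point where each member network announces which address blocks it can reach, and the other members hand traffic for those blocks over directly instead of paying an upstream transit provider to carry it.

```python
# Toy model of an internet exchange point: member networks announce the
# address blocks (prefixes) they can reach, and other members deliver
# traffic for those blocks directly across the shared switching fabric.
# Purely illustrative; real IXPs exchange routes via BGP and route servers.

announcements = {
    "ISP-A": {"203.0.113.0/24"},
    "ISP-B": {"198.51.100.0/24"},
    "CDN-C": {"192.0.2.0/24"},      # a content network peering at the IXP
}

def next_hop_at_ixp(destination_prefix: str) -> str:
    """Return the IXP member that announced the destination, or fall back
    to a paid transit provider if nobody at the exchange can reach it."""
    for member, prefixes in announcements.items():
        if destination_prefix in prefixes:
            return member
    return "upstream transit provider (paid)"

print(next_hop_at_ixp("192.0.2.0/24"))    # handed directly to CDN-C
print(next_hop_at_ixp("10.0.0.0/8"))      # nobody here announces it
```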

Transit Agreements

The final type of agreement — a transit agreement — is similar to private peering, only the relationship is decidedly lopsided: one ISP clearly provides the majority of the service and is compensated by the other, either directly in money or in some other roundabout way.
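Transit is usually billed by usage. A widely used industry convention is "95th percentile" billing, where the provider samples your traffic rate (say, every five minutes), throws away the top 5% of samples, and charges per megabit per second on the highest sample that remains. Here is a minimal sketch of that idea, with a made-up price; any real transit contract will have its own terms.

```python
# Sketch of 95th-percentile transit billing, a common industry convention,
# though the exact terms of any real transit contract will differ.
# Traffic is sampled periodically in Mbps; the top 5% of samples are
# discarded and the customer pays for the highest remaining sample.

def monthly_transit_bill(samples_mbps: list[float],
                         price_per_mbps: float = 0.50) -> float:
    """price_per_mbps is a hypothetical rate, not a quoted market price."""
    ordered = sorted(samples_mbps)
    cutoff_index = int(len(ordered) * 0.95) - 1   # drop the top 5% of samples
    billable_rate = ordered[max(cutoff_index, 0)]
    return billable_rate * price_per_mbps

# Bursty traffic: mostly ~100 Mbps with a few 900 Mbps spikes.
samples = [100.0] * 95 + [900.0] * 5
print(monthly_transit_bill(samples))  # billed at ~100 Mbps; spikes ignored
```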

Content Delivery Networks (CDNs)

CDNs are geographically distributed networks of servers that take over work normally handled by a single origin server. They reduce lag time and are part of the reason why the internet data you request gets to you so fast. Content that would normally be hosted on servers a long (geographic) distance away is instead cached, or "stored," in the CDN, especially the most popular content — think, among other things, of Netflix's top-ten list (not verified).
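The core idea is simple: keep a copy of popular content on servers near the user, and only go all the way back to the distant origin server when the nearby copy is missing. Here is a stripped-down sketch of that cache-or-fetch logic, my own illustration rather than any CDN's actual code.

```python
# Toy edge cache: serve content from a nearby copy when we have one,
# otherwise fetch it from the distant origin server and remember it.
# Illustrative only; real CDNs add expiry, eviction, and many edge sites.

edge_cache: dict[str, bytes] = {}

def fetch_from_origin(path: str) -> bytes:
    """Stand-in for a slow request back to the faraway origin data center."""
    return f"contents of {path}".encode()

def serve(path: str) -> bytes:
    if path in edge_cache:                # cache hit: short, fast trip
        return edge_cache[path]
    body = fetch_from_origin(path)        # cache miss: long trip to origin
    edge_cache[path] = body               # store it for the next viewer
    return body

serve("/popular-show/episode1.mp4")   # first viewer pays for the long trip
serve("/popular-show/episode1.mp4")   # later viewers get the cached copy
```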

Let's take Netflix as a small case study:

CDNs appear to have facilitated the growth of mega-giants like Netflix, among other players in the streaming business. Of course, all of this data, anywhere from one to three gigabytes for streaming a short movie, must live somewhere, and it's certainly not on your phone — hint, hint, it's content delivery networks. This is largely data that has to be available on demand, to a wide audience, and for an extended period of time (that is, it's not enough to grab some passing sliver of traffic capacity).

In fact, as a bit of a deeper dive into Netflix, there were massive tensions in Netflix's relationships with existing ISPs, as the ISPs Netflix had originally tried to contract with wanted paid peering agreements instead of simply hosting Netflix's servers. As we can see from the discussion above, what Netflix really wanted was space at internet exchange points, so that's exactly what they went out and got. They rent space at IXPs and host their content on their own servers — essentially hardware configured to deliver their content — typically much closer to end users. These IXPs are an integral part of what Netflix means by its "content delivery network," alongside other options that still include some ISPs.

A quick finance detour

In other activities, proximity manifests itself in other ways. For example, one of the things that really got me interested in the physical layer of the internet was high-frequency trading, which depends on high-frequency traders being able to set up shop, that is, their servers and software, in extremely close physical proximity to the exchange's servers, and to access market information essentially before the rest of the market, if only by mere milliseconds (and that's plenty of time to get all the information they want; they did pay to set up shop where they did, after all, just one of the many implications of a privatized internet).
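The numbers involved are tiny but decisive. Light travels through optical fiber at roughly 200,000 km/s (about two-thirds of its speed in a vacuum), so every kilometer of distance costs on the order of 5 microseconds each way before you even count routers and switches. A quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope latency from distance alone, ignoring switching,
# routing, and processing delays. Light in fiber travels at roughly
# 200,000 km/s (about 2/3 of c); exact figures vary by cable.

SPEED_IN_FIBER_KM_PER_S = 200_000

def one_way_delay_ms(distance_km: float) -> float:
    return distance_km / SPEED_IN_FIBER_KM_PER_S * 1000  # milliseconds

# A trader colocated in the exchange's data center vs. one across the country.
print(one_way_delay_ms(1))       # ~0.005 ms from a rack in the same building
print(one_way_delay_ms(1000))    # ~5 ms from 1,000 km away
```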

Data centers, large and small

Microsoft Inspire Data Center

(Hint: a data center is still anywhere data is stored at scale, and they range from facilities controlled by ISPs and IXPs to those controlled by larger firms, as we will see below.)

Contracts for data centers run the gamut of arrangements, from buyers that only need storage, so they rent the space, own their servers, and let someone else manage the building, to larger firms like Google, Amazon, Facebook, Apple, and Microsoft, which own and operate their own massive data centers and configure the buildings and services to suit their needs and applications.

Then there are the famous cloud computing providers, companies that rent out services for storage, computing, or applications like databases. These are generally pay-as-you-go services: however much you need, the necessary fixture already exists for your use.

Some of the major players include Amazon Web Services, Microsoft Azure, and Google Cloud.
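As a concrete taste of "the necessary fixture already exists," here is a minimal sketch using Amazon's boto3 SDK to create a storage bucket and upload a file. The bucket and file names are made up for illustration, and you would need AWS credentials configured for this to actually run.

```python
# Minimal sketch of pay-as-you-go cloud storage with AWS S3 via boto3.
# Assumes AWS credentials are already configured; the bucket and file
# names below are hypothetical.

import boto3

s3 = boto3.client("s3")

# Creating a bucket: no servers to buy, no data center to build.
# (Outside us-east-1 you would also pass a CreateBucketConfiguration.)
s3.create_bucket(Bucket="my-example-bucket-name")

# Uploading a file: you are billed only for the storage and requests you use.
s3.upload_file("local-video.mp4", "my-example-bucket-name",
               "videos/local-video.mp4")
```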

Geographic Accessibility

A final, really important consideration on the privatized nature of the internet is geographic availability. It may be no secret that more financially well-off areas and high-density cities tend to attract the most investment from internet service providers. If you can achieve economies of scale with your users, it becomes quite clear that you would prefer to reach the largest market possible, and continually strive to be at the forefront of new technologies to capture an even more profitable market share. Furthermore, in order to finance these new technologies, you need either many customers all willing to pay a little, or fewer customers willing to pay a lot, and when you find both, as is the case in large, relatively affluent cities, that is often where the investment goes. We can also remind ourselves that trying to reach all of the sparser areas outside those cities may not be worth it for those who have to bear the large fixed cost of laying down lines (referenced more in part one of this article). It should be mentioned that this analysis applies mainly to broadband connections, as most rural users who do have internet access use a satellite-enabled connection.
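To put rough numbers on that trade-off, here is a toy break-even calculation with entirely invented costs: laying fiber has a large fixed cost per kilometer, so the fewer households there are along each kilometer, the longer it takes monthly subscriptions to pay it back.

```python
# Toy break-even model for laying broadband lines. All figures are
# made up for illustration; real build-out costs vary enormously.

def years_to_break_even(cost_per_km: float, km_of_line: float,
                        households: int, monthly_fee: float,
                        take_rate: float = 0.5) -> float:
    """How long until subscription revenue covers the fixed build cost."""
    build_cost = cost_per_km * km_of_line
    monthly_revenue = households * take_rate * monthly_fee
    return build_cost / monthly_revenue / 12

# Dense city block: 500 households along 2 km of line.
print(years_to_break_even(30_000, 2, 500, 60))    # pays back within months
# Sparse rural road: 20 households along 15 km of line.
print(years_to_break_even(30_000, 15, 20, 60))    # decades to pay back
```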

Many of the proposed responses involve public policy and the idea of the internet as a public good. This may also be a good time to mention that the internet was the product of a massive outlay of public investment, primarily for social good. However, as we can see from the above, a lot of the current infrastructure was built, and is maintained, by private actors. I will leave my full response to these notions for a later day, but my first impression is that if people vote for it, we can have it; it seems to be a matter of what people want.

Further Reading

Here's a nice article covering more of the background behind the privatization of the internet, along with an argument in favor of the internet as a public good (note: this does not represent my opinion, only one side of the story).

If you have any questions about the article or spot any mistakes, feel free to let me know!
