The Netflix data itself points to a gradual lowering of bandwidth to their service. Now here are two scenarios:
1) Verizon decided to enter some router configurations that affect the packet forwarding to Netflix (well, most likely FROM Netflix, as the internet is not actually symmetrical). What we should see as a result of this is a sudden change in bandwidth.
2) The load on the interfaces in between is changing, either because of more Netflix subscribers, more AWS traffic, or more Netflix traffic. Since these are things happening slowly over time, you'd see a gradual worsening of traffic conditions.
I think it's pretty fucking likely that what's happening here is 2). Now granted, Verizon could solve this problem by cooperating with Netflix. But for them that's an expense with no upside, so they're likely demanding Netflix make it worth their while to upgrade the interconnects.
This is one of the basic problems with the internet today. It used to be the case that bandwidth between providers was much more plentiful than last-mile bandwidth. So we upgraded last-mile bandwidth. Problem: doing that resulted in an exponential increase in load on the core network and the interconnects between different networks. Needless to say, nobody can keep up with exponentially rising demands (especially without subscriber growth). The basic problem is that a linear increase in bandwidth for the customer results in an exponential increase in costs for the ISP (so - surprise - small ISPs don't feel the effects nearly as badly).
So ISPs are desperate. Their costs are spiralling out of control, with everyone demanding they follow this exponential curve, but their customers are of course not willing to pay for it (and all the money in the world would only buy time if thrown at this problem). Demanding the ISPs pay for it is simply not going to work. Right now, yes, they could theoretically pay for it, but that won't last long.
Plus Netflix is being a rather bad netizen themselves. The polite thing to do is to carry the traffic to a point very close to the user on your own network, and only use others' links, especially transit, as a last resort ("cold-potato" routing, for content providers). Netflix has caches, but no network of its own.
There are other problems that result from this cost problem. Transcontinental and other long-haul links are already strained to the limit. This has resulted in a massive degradation of non-local internet traffic, and it's getting worse fast.
It is really a very simple problem. You have consumers, and you have producers. And you get more and more of each. To a first approximation you can assume everyone on the internet is both. To simplify things, let's say everyone uses a tiny amount of everyone else's services. Now, given N participants in the network, what is the load on the core network?
N * (N - 1) =~ N^2 (assuming full duplex links, which would be generally correct for the internet)
So adding more participants to the internet increases the resources required to give every existing participant in the network his "old" speed. These need to be paid for. But unless everyone wants to start paying an amount for their internet connection that rises with the square of the size of the internet, we're going to see progressively worse and worse throttling. Right now the strategy is fast becoming limiting interconnect capacity to prevent the core network overloading, which would have far worse consequences.
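To make the scaling concrete, here's a toy calculation under that all-pairs assumption (the per-pair rate is an invented number, purely for illustration):

    # Toy model of the all-pairs assumption above: every participant
    # exchanges a small, constant trickle with every other participant.
    rate_per_pair_kbps = 1  # invented per-pair flow, illustration only

    for n in (1_000, 10_000, 100_000):
        flows = n * (n - 1)  # ordered pairs (full duplex)
        core_gbps = flows * rate_per_pair_kbps / 1e6
        print(f"{n:7d} participants -> ~{core_gbps:,.0f} Gbps of core load")
    # 10x the participants -> ~100x the core load, under this model.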
While, yes, historically this has mostly been monopolies cheating the market, that's less and less true. This is going to get worse, fast. And since those resources are obviously not going to be made available, there's only one thing to say:
Get used to it.
> The basic problem is that a linear increase in bandwidth for the customer results in an exponential increase in costs for the ISP
> N * (N - 1) =~ N^2 (assuming full duplex links, which would be generally correct for the internet)
1) N^2 is not exponential, it's polynomial.
2) Expanding capacity isn't even N^2, it's just linear.
If you double each residential customer's bandwidth, you "only" have to double the capacity of your uplinks to other networks. The number of endpoints or total nodes in the network is entirely irrelevant, because you have the same 50Mbps whether you're drawing it from one AWS server or a thousand BitTorrent peers. The only way you would get the result you're assuming is if every endpoint had a fully meshed connection to every other endpoint, i.e. each customer gets 50Mbps to each Amazon server, a separate 50Mbps to each Google server and a separate 50Mbps to every single other user on the internet, for a total of several hundred billion Mbps for every customer. That's not how it works.
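A back-of-the-envelope sketch of that aggregation argument (every number here is invented for illustration, including the peak-utilization guess):

    # Uplink capacity scales with what you sell each customer,
    # not with how many endpoints exist on the rest of the internet.
    customers = 1_000_000
    access_mbps = 50      # per-customer access speed (invented)
    peak_share = 0.05     # guessed fraction of access speed drawn at peak

    uplink_gbps = customers * access_mbps * peak_share / 1000
    print(uplink_gbps, "Gbps")  # 2500.0 Gbps

    # Doubling access_mbps doubles the requirement -- linear, with no
    # N^2 term, because flows aggregate onto shared pipes no matter how
    # many distinct peers the traffic comes from.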
And in practice it's even less expensive. It's sub-linear. Because just giving everybody a connection which is twice as fast doesn't mean everybody is going to immediately double their usage. A large fraction of users, given a faster connection, will do with it only what they currently do with the slower connection. Their pages will load faster but they won't load more pages, so the load they put on uplinks to other networks will remain the same.
There are no exponentially increasing costs. There are only the same costs as there ever were: Cisco et al come out with a new router that puts twice as many bits through the same piece of cable and if you want those bits you pay them a fixed cost and swap out your old model for the new one, and in another few years you do it again.
> Because just giving everybody a connection which is twice as fast doesn't mean everybody is going to immediately double their usage.
My understanding is that peak load is the issue, not total data transferred in a month. If Netflix downloads/buffers video content as fast as it can, and people mostly use Netflix at the same time of day, then giving them more bandwidth is potentially a big problem.
Peak load is what you have to design capacity for, but that is hardly unique to Netflix. They seem to have it sorted for FiOS/U-verse/etc, don't they?
The argument they might make is that their services keep the data closer to the users, but that's such a cop out. They could do the same thing for Netflix with transparent proxies if they wanted to (but the result would be to make Netflix more usable instead of less, which is adverse to their interests as competitors). And in any event the uplink to a large peer is a totally inconsequential part of the cost of operating a network. The thing which is expensive is upgrading the last mile to handle a large amount of video traffic, but that cost is the same whether the traffic is Netflix or FiOS TV.
In the UK, I believe the main practice revolves around broadband, whereas in the US it's all around phone (VoIP or otherwise). If you could enable competition within the same last-mile service, you'd a) give a definite incentive to make sure the entire network delivers the best experience, and b) have alternate sources of revenue to maintain the last-mile network.
The current duopoly that exists in most areas of SoCal (1 cable provider and 1 DSL provider) is a really terrible consumer experience. For those who have access to FiOS, it's a tad better, but not by much.
If you had access to 10 providers at once with comparable services, there would be real incentive for them to offer the best possible experience and lowest possible prices to consumers... but of course that would hurt corporate profits and so on and no politician getting those donations wants that...
What we really need is the equivalent of Glass-Steagall for internet services. Prohibit the company that operates the last mile from owning or being owned by anybody that offers end-to-end connectivity or over the top services to end users.
The problem with just local loop unbundling is that the last mile provider has the incentive to favor their own services. They can operate their own ISP division at a loss or with zero margins (more than made up for by profits in the last mile division) which disadvantages competitors who have to lease the last mile and thereby maintains the status quo of no competition.
I think people get confused by the Startup rule of Descriptive Speech which states that all growth is exponential, all interfaces are gorgeous, and all teams are composed of A players.
The thing is, it's not even close to doubling. I recently exchanged my 30Mbps ADSL for 100Mbps fiber. In practice this means a jump from 0.8Mbps (yay long copper links!) to 50Mbps, which is most probably limited by the core. Thus, for vast residential areas (this is near the center of an 80,000-person town in a metro area of about a million), we are talking about multiplying the last mile by a factor of roughly 100x.
Ask yourself whether that ISP allowing speeds to languish for more than a decade such that they provide one ~100x increase instead of six or seven generations of ~2x increases has made their total costs lower or higher. The fact that they've been screwing over their customers to save a buck for a long time is not a thing that should inspire sympathy for their expenditures.
What 100x performance difference do you think exists between 30Mbps ADSL and 100Mbps fiber?
The answer is that there exists a box that mounts on a telephone pole. The purpose of the box is to terminate hundreds or thousands of DSL lines and connect them all to the central office with a single fiber optic cable. It allows you to get 30Mbps out of your 30Mbps DSL instead of getting 0.8Mbps, because the box is close enough to your house to get full speed, unlike the central office. It saves the phone company from having to string a new strand of fiber to the premises of every individual customer who lives too far away from the central office for high speeds over twisted-pair copper.
The first generation would have been actually installing the boxes, which they apparently never even did. Then you periodically upgrade the boxes to provide faster speeds as the technology improves and/or the cost comes down, e.g. from 1.5Mbps DSL to 3Mbps to 6Mbps to 12Mbps etc.
Giving everyone a guaranteed connection is a theoretical networking problem. The brute-force solution, as you've mentioned, is a crossbar switch, which requires O(n^2) crosspoints.
However, theoretical networking problems like this were solved many years ago. The Clos network can provide full duplex links at a significantly better big-O complexity.
I don't fully remember the details, but I believe it grows at approximately N log N. The problem is that the original Clos paper was published before big-O notation was invented... and that this was just "one other homework problem" that a professor gave me about 5 years ago when I was in college.
So my memory is fuzzy, and the math is undocumented on the internet :-(
The internet is built on top of unreliable datagrams, which means the connectivity problem is even simpler. It satisfies the conditions of a rearrangeably nonblocking Clos network. So the big-O is even smaller than for the networking problem above.
Clos wrote his paper specifically for phone connections. You cannot disconnect people for no reason while they're in the middle of a conversation. However, unreliable IP packets can be disconnected and rearranged, allowing you to use a cheaper form of the Clos network.
Either way (like my professor from half a decade ago...), I'm going to leave the asymptotic complexity of "strict Clos networks" and "rearrangeably nonblocking Clos networks" as an exercise for the reader.
Mostly because I don't remember the solution... As a hint, replace the crossbar switches (in the Wikipedia page) with a recursive Clos network and solve the recurrence relation, as sketched below. Use a 2x2 crossbar as the base case.
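For anyone who wants to check my memory, here is a rough reconstruction of that exercise (mine, not from the Clos paper): a rearrangeably nonblocking Benes-style network on n inputs, built recursively from 2x2 crossbars, satisfies S(2) = 1 and S(n) = 2*S(n/2) + n, which solves to n*log2(n) - n/2 switches, i.e. O(n log n):

    # Reconstruction of the homework exercise (not from Clos's paper):
    # count the 2x2 switches in a rearrangeably nonblocking (Benes)
    # network built recursively, versus crosspoints in one n x n crossbar.

    def benes_switches(n):
        """Recurrence: S(2) = 1; S(n) = 2*S(n/2) + n
        (n/2 switches in each outer stage, plus two half-size
        subnetworks). Closed form: n*log2(n) - n/2."""
        if n == 2:
            return 1
        return 2 * benes_switches(n // 2) + n

    for k in (4, 8, 10):
        n = 2 ** k
        print(f"n={n:5d}  crossbar={n * n:8d}  benes={benes_switches(n):6d}")
    # n=1024: a crossbar needs ~1M crosspoints; the Benes network needs
    # 9728 2x2 switches -- O(n log n) instead of O(n^2).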
Have you thought about what such an architecture would mean bandwidth-wise on a WAN? I mean, dear God. This would actually require MORE links than a full mesh.
In other words, core network capacity (meaning total bandwidth across all links) with a Clos network scales even worse than n^2, with n the bandwidth you deliver to users. Even worse: you need as many long-haul links as you have subscribers (because your subscribers are connected to different locations).
Is there any question in your mind as to why ISPs don't do that?
I think you're right about a lot of things in this post, but the one thing I disagree with is that internet traffic is going to continually follow an exponential growth curve.
The main driver behind increased internet traffic is the migration from television to internet for video content. I think that trend is going to continue over the next decade or so until everyone uses the internet for video content. But after that I think you'll see internet traffic level off to an extent.
ISPs need to build out their systems to handle that. I can't see anything else causing a dramatic rise in traffic, so once they build out their systems to handle video content they should be good for a while.
Unfortunately, most ISPs are also cable companies, so naturally they are resistant to spending money just to shoot themselves in the foot. But this trend is going to continue whether they like it or not. It's probably going to be a really painful process for the ISPs and their customers. Netflix is going to be trapped in the middle. The ISPs will want to choke Netflix out and take over their business. With net neutrality gone, they just might. It's going to take some good PR and strategy for Netflix to come out of this fight alive. If I were a betting man, I'd say they'll put up a good fight but eventually be bought out by one of the big ISPs.
That's at best the average cost. You can also deliver a few terabytes for something like $3 in postage if you ship a tape, but that average cost would be deceptively low if you were trying to use that bandwidth to watch Netflix.
The real cost is building the infrastructure to handle higher and higher peak bandwidth. Suppose you're Comcast with 20 million subscribers, half of whom are trying to watch 5Mbit Super-HD Netflix during prime time. That's 50 terabits per second (10 million viewers at 5Mbps each), and even if you could agree with Netflix's providers to divide it up geographically, it's still an outrageous amount of bits to figure out how to ship.
I don't understand: If under total load, with every customer pulling the maximum possible bandwidth simultaneously, Comcast could give you 250Kbps, you want them to sell a product that only gives you 250Kbps all the time just so their advertising will be accurate?
Like everything else, there's fine print: 20Mbps or whatever is described as a maximum speed, and many people actually see burst speeds that high. But as it would be unreasonable to be upset you can't get 20Mbps to Nigeria, it's also somewhat unreasonable to be completely upset at Comcast that you can't get 20Mbps to Netflix if Netflix is paying for cheap ISPs that won't fairly peer. I'm not sufficiently informed to take sides in that dispute, but I can easily imagine that Cogent (for example) is dumping way more traffic into Comcast than Comcast is dumping into it, and trying to get away without paying for that disparity.
> If under total load, with every customer pulling the maximum possible bandwidth simultaneously, Comcast could give you 250Kbps, you want them to sell a product that only gives you 250Kbps all the time just so their advertising will be accurate?
No, their advertising just needs to be accurate! They need to say "250Kbps guaranteed bandwidth, 20Mbps max peak". The real number has to be the lead, not the BS peak number. There's a parallel here to audio amplifiers: advertising the peak wattage and not the RMS is stupid and deceptive.
There are two key differences. The first is, as other people noted, that they should advertise that range rather than always marketing the highest number: "256Kbps guaranteed, up to 20Mbps" isn't any harder to sell than telling people they won't always be able to drive 65MPH.
The bigger issue, however, is whether those limits are based on the underlying limits of the network or artificial caps: this is currently completely opaque. It'd be much better if they were forced to publish any traffic shaping performed so the consumer can actually make an informed decision. This might lead to other questions such as whether they should be refusing to deploy Netflix OpenConnect nodes which would be healthy – and no doubt a key factor in why they won't talk about it unless forced.
It is practically and theoretically impossible for Comcast (a residential ISP with asymmetrical connection speeds) to have a balanced traffic relationship with any Tier 1 ISP. Every typical internet behavior of a Comcast subscriber follows the pattern of sending a small request, and receiving a large amount of data (webpages, music, video, etc.) in return.
The cost of transferring each bit is exponentially decreasing as network equipment tracks Moore's law. Their costs are going down, not up.
What this is really about is damaging competitors to their cable TV operations. Wail about Netflix using a lot of bandwidth, never mind that the cost of providing that bandwidth is falling as fast as the demand for it is rising, and you can put on a good show for the regulators as to why you need to destroy Netflix and push everybody back onto U-verse and cable TV.
That depends entirely on whether you see bandwidth costs as revenue or an expense. For many of these companies it's both, but generally they make most of their money from consumers, and in that case it's an expense (to the degree it has an ongoing price at all, given peering arrangements).
Didn't ISPs in the USA get a whole bunch of money from the government in the past to improve infrastructure, which they then didn't do, leading to the current state of the USA not being an internet leader?
Telcos got subsidies in the form of tax breaks to improve infrastructure. They spent their own money though. They talked about wiring America with fiber when trying to get the subsidies through congress, so that's why people claim they didn't live up to their "bargain."
Today this has been twisted into a false narrative: "the telcos got paid 200 billion to build fiber and never did".
A reduction in debt is a form of income, though. If I owe someone $200 but then only have to give them $100, I have a gain of $100 versus $0 if I had to pay the full amount.
To be fair, politicians will often refer to tax breaks as a cost to the Treasury, as in lost revenues, as if something had been paid out. It's common enough that the confusion over what the narrative actually means is understandable.
Scenarios don't allow you to make blaming statements. We don't actually know anything about the connection between Verizon and AWS, other than it's slow. Making statements like "they're demanding Netflix make it worth their while to upgrade the interconnects" is speaking for others based on a presumption you had.
Saying they are 'desperate' makes them sound fearful. Verizon made $70B in gross profit last year. I can't see how they would be afraid of upgrading their network - it's part of the business.
This would all be correct, if not for the Netflix Open Connect Appliance. Netflix is moving content closer to the user, so only the last mile feels the effects of the traffic.
alphadogg had an interesting, albeit dead, question:
"So, how does this analysis account for the fact that, in many reported cases like this, other services work just fine at high-speed? Your point 2 is that more consumers and producers are flooding the net as a whole, but that would cause widespread issues, not just premier services issues, right?"
This is a boon for indie ISPs who are too small to host a cache, as the bandwidth is cheap and it makes Super HD streams available. I wouldn't call Netflix a bad netizen.
It isn't even just "indie" ISPs who do it. Cablevision's Optimum Online, which is probably the biggest internet provider on Long Island and in Westchester, has a peering relationship with Netflix, and lands at #2 on the speed chart, just behind Google Fiber:
http://ispspeedindex.netflix.com/usa
It is also worth noting that FiOS was #6. I do wonder if that has really changed or not.