"Embrace outages, and build redundancy." — It feels like back in the day this was championed pretty hard especially by places like Netflix (Chaos Monkey) but as downtime has become more expected it seems we are sliding backwards. I have a tendency to rely too much on feelings so I'm sure someone could point me to some data that proves otherwise but for now that's my read on things. Personally, I've been going a lot more in on self-hosting lots of things I used to just mindlessly leave on the cloud.
For me personally, I didn't notice the downtime for the first hour or so. On some websites assets weren't loading, but that's it. The Turnstile outage maybe impacted me most. Could be because I'm EU based and Cloudflare is not "so" widespread here as in other parts of the world.
I wonder what life without Cloudflare would look like. What practices would fill the gaps if a company didn't - or wasn't allowed to - satisfy the concerns that Cloudflare addresses?
Pretty much exactly like it does now, but with fewer captchas.
Fun fact: headless browsers can easily pass Cloudflare captchas automatically. They're not actually doing any captcha-ing - they're just a placebo. You just need to be coming from a residential IP address and using a real browser.
Now just wait til every country on earth really does replace most of its employees with ChatGPT... and then OpenAI's data center goes offline with a fiber cut or something. All work everywhere stops. Cloudflare outage is nothing compared to that.
I'll die on the hill that centralization is more efficient than decentralization and that rare outages of hugely centralized systems that are otherwise highly reliable are much better than full decentralization with much worse reliability.
In other words, when AWS or Cloudflare go down it's catastrophic in the sense that everyone sees the issues at the same time, but smaller providers usually have many more ongoing issues that just happen to be "chronic" rather than "acute" pains.
There are multiple dimensions to this problem. Putting everything behind Cloudflare might give you better uptime, reliability, performance, etc. but it also has the effect of centralizing power into the hands of a single entity. Instead of twisting the arms of ten different CXOs, your local politician now only needs to twist the arm of a single CXO to knock your entire business off the internet.
I live in India, where the government has always been hostile to the ideals of freedom of speech and expression. Complete internet blackouts are common in several states, and major ISPs block websites without due process or an appeals mechanism. Nobody is safe from this, not even Github[1]. In countries like India, decentralization is a preventative measure.
And I'm not even going to talk about abuse of monopoly power and all that. What happens when Cloudflare has their Apple moment? When they jack up their prices 10x, or refuse to serve customers that might use their CDNs to serve "inappropriate" content? When the definition of "inappropriate" is left fuzzy, so that it applies to everything from CSAM to political commentary?
My old employer used Azure. It irritated me to no end when they said we must rename all our resources to match the convention of naming everything in US East with an "eu-" prefix (for "Eastern United States", I guess).
I don't like this argument, since you can apply it to Google, Microsoft, AWS, Facebook, etc.
The tech world is dominated by US companies, and what are the alternatives to most of these services? There are a lot fewer than you might think, and even then you must make a compromise in certain areas.
> They [outages] can force redundancy and resilience into systems.
They won’t until either the monetary pain of outages becomes greater than the inefficiency of holding on to more systems to support that redundancy, or government steps in with clear regulation forcing their hand. And I’m not sure about the latter. So I’m not holding my breath about anything changing. It will continue to be a circus of doing everything on a shoestring, because the line must go up every quarter or a shareholder doesn’t keep their wings.
Centralization has nothing to do with the problems of society and technology. And if you think the internet is all controlled by just a couple companies, you don't actually understand how it works. The internet is wildly decentralized. Even Cloudflare is. It offers tons of services, all of which are completely optional and can be used individually. You can also stop using them at any time, and use any of their competitors (of which there are many).
If, on the off chance, people just get "addicted" to Cloudflare, and Cloudflare's now-obviously-terrible engineering causes society to become less reliable, then people will respond to that. Either competitors will pop up, or people will depend on them less, or governments will (finally!) impose some regulations around the operation of technical infrastructure.
We actually have too much freedom on the internet. Companies are free to build internet systems any way they want - including in very unreliable ways - because we impose no regulations or standards requirements on them. Those people are then free to sell products to real people based on this shoddy design, with no penalty for the products falling apart. So far we haven't had any gigantic disasters (Great Chicago Fire, Triangle Shirtwaist Factory Fire, MGM Grand Hotel Fire), but we have had major disruptions.
We already dealt with this problem in the rest of society. Buildings have building codes, fire codes, electrical codes. They prescribe and require testing procedures, provide standard building methods to ensure strength in extreme weather, resist a spreading fire long enough to allow people to escape, etc. All measures to ensure the safety and reliability of the things we interact with and depend on. You can build anything you want - say, a preschool? - but you aren't allowed to build it in a shoddy manner. We have that for physical infrastructure; now we need it for virtual infrastructure. A software building code.
Centralization means having a single point of failure for everything. If your government, mobile phone or car stops working, it doesn't mean all governments, all cars and all mobile phones stop working.
Centralization makes mass surveillance easier, makes selectively denying of service easier. Centralization also means that once someone hacks into the system, he gains access to all data, not just a part of it.
>It's ironic because the internet was actually designed for decentralisation, a system that governments could use to coordinate their response in the event of nuclear war
This is not true. The internet was never designed to withstand nuclear war.
Perhaps. Perhaps not. But it will survive it. It will survive a complete nuclear winter. It's too useful to die, and will be one of the first things to be fixed after global annihilation.
But the internet is not hosting companies or cloud providers. The internet does not care if they don't build their systems resiliently and let the SPOFs creep in. The internet does its thing and the packets keep flowing. Maybe BGP and DNS could use some additional armoring, but there are ways around both of them in case of an actual emergency.
ARPANET was literally invented during the cold war for the specific and explicit purpose of networked communications resilience for government and military in the event major networking hubs went offline due to one or more successful nuclear attacks against the United States
Your link is talking about work Baran did before ARPANET was created. The timeline doesn't back your point. And when ARPANET was created after Baran's work with Rand:
>Wired: The myth of the Arpanet – which still persists – is that it was developed to withstand nuclear strikes. That's wrong, isn't it?
>Paul Baran: Yes. Bob Taylor had a couple of computer terminals speaking to different machines, and his idea was to have some way of having a terminal speak to any of them and have a network. That's really the origin of the Arpanet. The method used to connect things together was an open issue for a time.
"A preferred alternative would be to have the ability to withstand a first strike and the capability of returning the damage in kind. This reduces the overwhelming advantage by a first strike, and allows much tighter control over nuclear weapons. This is sometimes called Second Strike Capability."
The stated research goals are not necessarily the same as the strategic funding motivations. The DoD clearly recognized packet-switching's survivability and dynamic routing potential when the US Air Force funded the invention of networked packet switching by Paul Baran six years earlier, in 1960, for which the explicit purpose was "nuclear-survivable military communications".
There is zero reason to believe ARPA would've funded the work were it not for internal military recognition of the utility of the underlying technology.
To assume that the project lead was told EVERY motivation of the top secret military intelligence committee that was responsible for 100% of the funding of the project takes either a special kind of naïveté or complete ignorance of compartmentalization practices within military R&D and procurement practices.
ARPANET would never have been were it not for ARPA funding, and ARPA never would've funded it were it not for the existence of packet-switched networking, which itself was invented and funded, again, six years before Bob Taylor even entered the picture, for the SOLE purpose of "nuclear-survivable military communications".
Consider the following sequence of events:
1. US Air Force desires nuclear-survivable military communications, funds Paul Baran's research at RAND
2. Baran proves packet-switching is conceptually viable for nuclear-survivable communications
3. His specific implementation doesn't meet rigorous Air Force deployment standards (their implementation partner, AT&T, refuses - which is entirely expectable for what was then a complex new technology that not a single AT&T engineer understood or had ever interacted with during the course of their education), but the concept is now proven and documented
4. ARPA sees the strategic potential of packet-switched networks for the explicit and sole purpose of nuclear-survivable communications, and decides to fund a more robust development effort
5. They use academic resource-sharing as the development/testing environment (lower stakes, work out the kinks, get future engineers conceptually familiar with the underlying technology paradigms)
6. Researchers, including Bob Taylor, genuinely focus on resource sharing because that's what they're told their actual job is, even though that's not actually the true purpose of their work
7. Once mature, the technology gets deployed for its originally intended strategic purposes (MILNET split-off in 1983)
Under this timeline, the sole true reason for ARPA's funding of ARPANET is nuclear-survivable military communication; Bob Taylor, being the military's R&D pawn, is never told that (standard compartmentalization practice). Bob Taylor can credibly and honestly state that he was tasked with implementing resource sharing across academic networks, which is true, but that was never the actual underlying motivation to fund his research.
...and the myth of "ARPANET wasn't created for nuclear survivability" is born.
> It puzzles me why the hell they build such a function in the first place.
One reason is similar to why most programming languages don't return an Option<T> when indexing into an array/vector/list/etc. There are always tradeoffs to make, especially when your strangeness budget is going to other things.
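For what it's worth, here is a minimal sketch of that tradeoff (in Python, since the thread isn't tied to any one language, with a hypothetical get helper standing in for Option-returning indexing): making every lookup hand back an optional value forces every call site to spell out the "missing" case, which is exactly the ergonomic cost most languages avoid by raising or panicking instead.

    from typing import Optional, Sequence, TypeVar

    T = TypeVar("T")

    def get(seq: Sequence[T], i: int) -> Optional[T]:
        """Option-style indexing: returns None instead of raising IndexError."""
        return seq[i] if 0 <= i < len(seq) else None

    features = ["http2", "ipv6"]

    # Conventional indexing: concise, but an out-of-range index blows up at runtime,
    # much like calling unwrap() on an unexpected None/Err.
    # features[5]  # would raise IndexError

    # Option-style indexing: the caller must handle the missing case explicitly.
    third = get(features, 5)
    if third is None:
        print("no third feature configured")
    else:
        print(third)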
I don't know how many times I need to say this, but I will die on this hill.
Centralized services don't decrease redundancy. They're usually far more redundant than whatever homegrown solution you can come up with.
The difference between centralized and homegrown is mostly psychological. We notice the outages of centralized systems more often, as they affect everything at the same time instead of different systems at different times. This is true even if, in a hypothetical world with no centralization, we'd have more total outage time than we do now.
If your gas station says "closed" due to a problem that only affects their own networks, people usually go "aah they're probably doing repairs or something", and forget about the problem 5 minutes later. If there's a Cloudflare outage... everybody (rightly) blames the Cloudflare outage.
Where this becomes a problem is when correlated failures are actually worse than uncorrelated ones. If Visa goes down, it's better if Mastercard stays up, because many customers have both and can use the other when one doesn't work. In some ways, it's better to have 30 mins of Visa outages today and 30 mins of Mastercard outages tomorrow, than to have just 15 mins of correlated outages in one day.
It would be a good thing, if it would cause anything to change. It obviously won't. As if a single person reading this post wasn't aware that the Internet is centralized, and couldn't name a few specific sources of centralization (Cloudflare, AWS, Gmail, GitHub). As if this were the first time it's happened. As if, after the last time AWS failed (or the one before that, or the one before that…), anybody stopped using AWS. As if anybody could viably stop using them.
> It would be a good thing, if it would cause anything to change. It obviously won't.
I agree wholeheartedly. The only change is internal to these organizations (e.g. Cloudflare, AWS). Improvements will be made to the relevant systems, and some teams will also internally audit for similar behavior, add tests, and fix some bugs.
However, nothing external will change. The cycle of pretending you are going to implement multi-region fades after a week, and each company goes on leveraging all these services to the Nth degree, waiting for the next outage.
Not advocating that organizations should/could do much, it's all pros/cons. But the collective blast radius is still impressive.
The root cause is customers refusing to punish this downtime.
Check out how hard customers punish blackouts from the grid - both via their wallets and via voting/government. It's why grids are now more reliable.
So unless the backbone infrastructure gets the same flak, nothing is going to change. After all, any change is expensive, and the cost of that change needs to be worth it.
Is a little downtime such a bad thing? Trying to avoid some bumps and bruises in your business has diminishing returns.
Even more so when most of the internet is also down.
What are customers going to do? Go to competitor that's also down?
It is extremely annoying and will ruin your day, but as the movie quote goes - if everyone is special, no one is.
They could go to your competitor that's up. If you choose to be up, your competitor's customers could go to you.
If it’s that easy to get the exact same service/product from another vendor, then maybe your competitive advantage isn’t so high. If Amazon were down I’d just wait a few hours, as I don’t want to sign up on another site.
What's "a little downtime" to you might be work ruined and day wasted for someone else.
I remember a Google cloud outage years ago that happened to coincide with one of our customers' massively expensive TV ads. All the people who normally would've gone straight to their website instead got 502. Probably a 1M+ loss for them all things considered.
We got an extremely angry email about it.
It's 2025. That downtime could be the difference between my cat pics not loading fast enough and someone's teleoperated robot surgeon glitching out.
Depends on the business.
Grid reliability depends on where you live. In some places, UPS or even a generator is a must have. So it's a bad example, I would say.
Downtimes happen one way or another. The upside of using Cloudflare is that bringing things back online is their problem, not mine as it is when I self-host. :]
Their infrastructure went down for a pretty good reason (let the one who has never caused that kind of error cast the first stone) and was brought back within a reasonable time.
With the rise in unfriendly bots on the internet as well as DDoS botnets reaching 15 Tbps, I don’t think many people have much of a choice.
The cynic in me wonders how much blame the world's leading vendor of DDoS prevention might share in the creation of that particular problem.
They provide free services to DDoS-for-hire services and do not terminate the services when reported.
It’s just a function of costs vs. benefits. For most people, building redundancy at this layer costs far more than the benefits are worth.
If Cloudflare or AWS go down, the outage is usually so big that smaller players have an excuse and people accept that.
It’s as simple as that.
“Why isn’t your site working?” “Half the internet is down, here read this news article: …” “Oh, okay, let me know when it’s back!”
> As if anybody could viably stop using them.
You can, and even save money.
If anything, centralisation shields companies using a hyperscaler from criticism. You’ll see downtime no matter where you host. If you self host and go down for a few hours, customers blame you. If you host on AWS and “the internet goes down”, then customers treat it akin to an act of God, like a natural disaster that affects everyone.
It’s not great being down for hours, but that will happen regardless. Most companies prefer the option that helps them avoid the ire of their customers.
Where it’s a bigger problem is when a critical industry like retail banking in a country all choose AWS. When AWS goes down all citizens lose access to their money. They can’t pay for groceries or transport. They’re stranded and starving, life grinds to a halt. But even then, this is not the bank’s problem because they’re not doing worse than their competitors. It’s something for the banking regulator and government to worry about. I’m not saying the bank shouldn’t worry about it, I’m saying in practice they don’t worry about it unless the regulator makes them worry.
I completely empathise with people frustrated with this status quo. It’s not great that we’ve normalised a few large outages a year. But for most companies, this is the rational thing to do. And barring a few critical industries like banking, it’s also rational for governments to not intervene.
>If anything, centralisation shields companies using a hyperscaler from criticism. You’ll see downtime no matter where you host. If you self host and go down for a few hours, customers blame you.
What if you host on AWS and only you go down? How does hosting on AWS shield you from criticism?
This discussion is assuming that the outage is entirely out of your control because the underlying datacenter you relied on went down.
Outages because of bad code do happen and the criticism is fully on the company. They can be mitigated by better testing and quick rollbacks, which is good. But outages at the datacenter level - nothing you can do about that. You just wait until the datacenter is fixed.
This discussion started because companies are actually fine with this state of affairs. They are risking major outages but so are all their competitors so it’s fine actually. The juice isn’t worth the squeeze to them, unless an external entity like the banking regulator makes them care.
Same with the big Crowdstrike fail of 2024. Especially when everyone kept repeating the laughable statement that these guys have their shit in order, so it couldn't possibly be a simple fuckup on their end. Guess what, they don't, and it was. And nobody has realized the importance of diversity for resilience, so all the major stuff is still running on Windows and using Crowdstrike.
I wrote https://johannes.truschnigg.info/writing/2024-07-impending_g... in response to the CrowdStrike fallout, and was tempted to repost it for the recent CloudFlare whoopsie. It's just too bad that publishing rants won't change the darned status quo! :')
> It obviously won't.
Here's where we separate the men from the boys, the women from the girls, the Enbys from the enbetts, and the SREs from the DevOps. If you went down when Cloudflare went down, do you go multicloud so that can't happen again, or do you shrug your shoulders and say "well, everyone else is down"? Have some pride in your work, do better, be better, and strive for greatness. Have backup plans for your backup plans, and get out of the pit of mediocrity.
Or not, shit's expensive and kubernetes is too complicated and "no one" needs that.
You make the appropriate cost/benefit decision for your business and ignore apathy on one side and dogma on the other.
Does the author of this post not see the irony of posting this content on Github?
My counterargument is that "centralization" in a technical sense isn't about what company owns things but how services are operated. Cloudflare is very decentralized.
Furthermore, I've seen regional outages caused by things like anchors dropped by ships in the wrong place, or a shark eating a cable, and regional power outages caused by squirrels, etc. Outages happen.
If everyone ran their own server from their own home, AT&T or Level3 could have an outage and still take out similar swathes of the internet.
With CDNs like Cloudflare, if Level3 has an outage, your website won't go down just because your home or VPS host's upstream transit happens to be Level3 (or whatever they call themselves these days): your content (at least the static parts) is cached globally (see the sketch below).
The only real reasonable alternative is something like IPFS, web3, and similar talk.
Cloudflare has always called itself a content transport provider; think of it as such. But also, Cloudflare is just one player, and there are several very big players. Every big cloud provider has a competing product, not to mention companies like Akamai.
People are rage-posting about Cloudflare, especially because it has made CDNs accessible to everyone. You can easily set up a free Cloudflare account and be on your merry way. This isn't something you should be angry about. You're free to pay for any number of other CDNs; many do.
If you don't like that Cloudflare has so much market share, then come up with a similarly competitive alternative and profit. Just this HN thread alone is enough for me to think there is a market for more players. Or just spread the word about the competition that exists today. Use frontdoor, cloudfront, netlify, flycdn, akamai, etc. It's hardly a monopoly.
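On the static-caching point in the comment above: part of what lets a CDN ride out an origin or transit outage is that HTTP already has directives for serving stale content when the origin is unreachable. Here is a minimal sketch of origin response headers; the stale-while-revalidate and stale-if-error extensions come from RFC 5861, support varies by CDN, so treat the values as illustrative.

    # Illustrative origin response headers that let a CDN keep serving a cached copy
    # if the origin later becomes unreachable. Directive support varies by CDN.
    CACHE_HEADERS = {
        "Cache-Control": (
            "public, max-age=300, "          # fresh for 5 minutes
            "stale-while-revalidate=600, "   # serve stale while refetching in the background
            "stale-if-error=86400"           # serve stale for up to a day if the origin errors
        ),
    }

    def apply_cache_headers(response_headers: dict) -> dict:
        """Merge the CDN-friendly caching directives into an origin response."""
        return {**response_headers, **CACHE_HEADERS}

    print(apply_cache_headers({"Content-Type": "text/html"}))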
What happens if you don't use Cloudflare and just host everything on a server?
Can't you run a website like that if you don't host heavy content?
How common are DDoS attacks anyway, and aren't there local (to the server) tools that analyze user behavior to a decent accuracy (at least it can tell they're using a real browser and behaving more or less like a human would, making attacks expensive)?
Can't you buy a list of ISP ranges from a GeoIP provider (you can)? At least then you'd know which addresses belong to real humans.
I don't think botnets are that big of a problem (maybe in some obscure places of the world, but you can temp rangeban a certain IP range, if there's a lot of suspicious traffic coming from there).
If lots of legit networks (as in, belonging to people who are paying an ISP for their network connections) have botnets, that means most PCs are compromised, which is a much more severe issue.
Yeah, you can.
Lots of people use Raspberry Pis for this, which is a smidge anaemic for some decent load (the HN hug of death) - even an Intel N100 is more grunt, for context.
This makes people think that their self-hosting setup can never handle HN load, because whenever they see people talking about self-hosting, the site goes down.
Botnets use real residential connections, not just data centers. So your static list of "real people" doesn't really make a difference.
> What happens if you don't use Cloudflare and just host everything on a server?
It works.
> Can't you run a website like that if you don't host heavy content?
Even with heavy content - the question is how many visitors you have. If there's one an hour, a 100 Mbit unmetered connection will suffice.
> How common are DDoS attacks anyway
Extremely rare. 99% of sites never experience it; 1% have some trouble because somebody nearby is being DDoS'ed.
> and aren't there local (to the server) tools that analyze user behavior to a decent accuracy (at least it can tell they're using a real browser and behaving more or less like a human would, making attacks expensive)?
No point, you can't do anything anyway - it's a denial of service, so there are gigabytes of trash flowing your way.
> Can't you buy a list of ISP ranges from a GeoIP provider (you can)? At least then you'd know which addresses belong to real humans.
No point. If you are not being DDoS'ed, then you just spent money and time (i.e. money) on a useless preventive measure you never use. And when (if) it comes, you can't do anything anyway, because it's a distributed denial of service attack.
> I don't think botnets are that big of a problem (maybe in some obscure places of the world, but you can temp rangeban a certain IP range, if there's a lot of suspicious traffic coming from there).
It's not a DDoS if you can filter at the endpoint.
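To make the "filter at the endpoint" point concrete, here is a minimal sketch (Python standard library only; the ranges are made-up documentation addresses) of the kind of application-level range ban the questions above describe. It is cheap and works fine against ordinary nuisance traffic, but it does nothing against a volumetric DDoS, because the junk traffic saturates your link before your code ever sees a packet.

    import ipaddress

    # Hypothetical ranges for illustration; a real list would come from a GeoIP/ASN
    # feed or from your own logs.
    BANNED_RANGES = [ipaddress.ip_network(n) for n in ("198.51.100.0/24", "203.0.113.0/24")]

    def is_banned(client_ip: str) -> bool:
        """Endpoint-level range ban: cheap, but useless once the pipe itself is full."""
        addr = ipaddress.ip_address(client_ip)
        return any(addr in net for net in BANNED_RANGES)

    print(is_banned("198.51.100.7"))  # True  -> refuse the request
    print(is_banned("192.0.2.10"))    # False -> serve normally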
So we're going backwards to a world where there are basically 5 computers running everything and everyone is basically accessing the world through a dumb terminal. Even though the digital slab in our pockets has more compute than a roomful of the early-gen devices. Hopefully critical infra shifts back to managed metal or private clouds - I don't see it, though; with the last decade of cloud evangelism to move all legacy systems to the cloud, it doesn't look like reversing anytime soon.
Yeah it's crazy to realize it takes a room of electronics for me to get my (g)mail. The more things change, the more they stay the same, eh?
I agree, considering all the Cloudflare/AWS/Azure apologists I see all around... Learning AWS is already the #1 tip on social media to "become employed as a dev in 2025, guaranteed", and I always just sigh when seeing this. I wouldn't touch it with a stick.
"The Cloudflare outage was a good thing [...] they're a warning. They can force redundancy and resilience into systems."
- he says. On Github.
Thanks for doing the meme! https://knowyourmeme.com/memes/we-should-improve-society-som...
You are very intelligent!
That's fair. However, I don't think I would have written that if those thoughts had been shared on a blogging platform.
Most blogging platforms do not qualify as critical infrastructure. GitHub with all its CI/CD and supply chain attacks does.
There is a certain particular irony of this being written on critical (centralized) infrastructure without any apparent need.
Maybe it was intended, maybe not, in any case I found it funny.
I agree. I think the whole point is someone like TFA author has a pretty broad choice of places they can choose to publish this and choosing GitHub is somewhat ironic.
Reminds me of the guy who posted an open letter to Mark Zuckerberg like "we are not for sale" on LinkedIn, a place that literally sells access to its users as their main product.
The problem is far more nuanced than the internet simply becoming too centralised.
I want to host my gas station network’s air machine infrastructure, and I only want people in the US to be able to access it. That simple task is literally impossible with what we have allowed the internet to become.
FWIW I love Cloudflare’s products and make use of a large number of them, but I can’t advocate for using them in my professional job, since we actually require distributed infrastructure that won’t fail globally in random ways we can’t control.
> and I only want people in the US to be able to access it. That simple task is literally impossible with what we have allowed the internet to become.
Is anyone else as confused as I am about how common anti-openness and anti-freedom comments are becoming on HN? I don’t even understand what this comment wants: Banning VPNs? Walling off the rest of the world from US internet? Strict government identity and citizenship verification of people allowed to use the internet?
It’s weird to see these comments get traction after growing up in an internet where tech comments were relentlessly pro-freedom and openness on the web. Now it seems like every day I open HN there are calls to lock things down, shut down websites, and institute age (and therefore identity) verification requirements. It’s all so foreign, and it feels like the vibe shift happened overnight.
> Is anyone else as confused as I am about how common anti-openness and anti-freedom comments are becoming on HN?
In this specific case I don't think it's about being anti-open? It's that a business with only physical presence in one country selling a service that is only accessible physically inside the country.... doesn't.... have any need for selling compressed air to someone who isn't like 15 minutes away from one of their gas stations?
If we're being charitable to GP, that's my read at least.
If it was a digital services company, sure. Meatspace in only one region though, is a different thing?
> In this specific case I don't think it's about being anti-open? It's that a business with only physical presence in one country selling a service that is only accessible physically inside the country.... doesn't.... have any need for selling compressed air to someone who isn't like 15 minutes away from one of their gas stations?
But that person might be physically further away at the time they want to order something or gather information etc. Maybe they are on holidays in Spain and want to access their account to pay a bill. Maybe they are in Mexico on a work trip and want to help their aunt back home to use some service for which they need to log in from abroad.
The other day I helped a neighbor (over here in Europe) prepare for a trip to Canada where he wanted to make adjustments to a car sharing account. The website always timed out. It was geofenced. I helped him set up a VPN. That illustrated how locked in this all has become, geofencing without thinking twice.
"only need US customers to be able to" vs "want non-US customers to be unable to"
you're being obtuse, GP clearly wants a locked down internet
> I want to host my gas station network’s air machine infrastructure, and I only want people in the US to be able to access it. That simple task is literally impossible with what we have allowed the internet to become.
That task was never simple and is unrelated to Cloudflare or AWS. The internet at a fundamental level only knows where the next hop is, not where the source or destination is. And even if it did, it would only know where the machine is, not where the person writing the code that runs on the machine is.
And that is a good thing and we should embrace it instead of giving in to some idiotic ideas from a non-technical C-suite demanding geofencing.
Genuine question - why are you spending time and effort on geofencing when you could spend it on improving your software/service?
It takes time and effort for no gain toward any sensible business goal. People outside of the US won't need it, bad actors will spoof their location, and it might inconvenience your real customers.
And if you want secure communication, just set up a zero-trust network.
Literally impossible? On the contrary; geofencing is easy. I block all kinds of nefarious countries on my firewall, and I don't miss them (no loss not being able to connect to/from a mafia state like Russia). Now, if I were to block FAMAG... or Cloudflare...
Yes, literally impossible. The barrier to entry for anyone on the internet to create a proxy or VPN to bypass your geofencing is significantly lower than your cost to prevent them.
I don’t even understand where this line of reasoning is going. Did you want a separate network blocked off from the world? A ban on VPNs? What are we supposed to believe could have been disallowed to make this happen?
I don't understand why you want to allow any random guy anywhere in the US but not people country hopping on VPNs. For your air machine infrastructure.
It's a bit weird that you can't do this simple thing, but what's the motivation for this simple thing?
Actually, the 140k Tor exit nodes, VPNs, and compromised proxy servers have been indexed.
It takes 24 minutes to compile these firewall rules, but the blacklist, along with tripwires, has proven effective at banning game cheats. For example, dropping connections from TX with a hop count and latency significantly different from their peers.
Preemptively banning all bad-reputation cloud IP ranges except whitelisted hosts has zero impact on clients. =3
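For anyone wondering what "compiling these firewall rules" might look like in practice, here is a rough sketch of the general shape: pull in whatever reputation lists you trust, collapse overlapping ranges, and emit an nftables set. The file names and list sources below are hypothetical stand-ins, not the parent commenter's actual tooling, and it is IPv4-only for brevity.

    import ipaddress
    from pathlib import Path

    # Hypothetical input files: one IPv4 address or CIDR per line, aggregated from
    # whatever reputation feeds you trust (Tor exit lists, VPN/datacenter ranges, ...).
    SOURCES = [Path("tor_exits.txt"), Path("datacenter_ranges.txt")]

    def load_ranges(paths):
        nets = []
        for path in paths:
            for line in path.read_text().splitlines():
                line = line.strip()
                if line and not line.startswith("#"):
                    nets.append(ipaddress.ip_network(line, strict=False))
        # Merge overlapping/adjacent ranges so the resulting rule set stays small.
        return list(ipaddress.collapse_addresses(nets))

    def to_nftables_set(nets, set_name="bad_reputation"):
        elements = ", ".join(str(n) for n in nets)
        return (
            f"set {set_name} {{\n"
            f"    type ipv4_addr\n"
            f"    flags interval\n"
            f"    elements = {{ {elements} }}\n"
            f"}}\n"
        )

    if __name__ == "__main__":
        print(to_nftables_set(load_ranges(SOURCES)))

A single drop rule referencing the set then covers the whole list in one match, which is why regenerating it periodically stays feasible even for large lists.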
Is Cloudflare having more outages than AWS, GCP, or Azure? Honestly curious, I don't know the answer.
Definitely not.
I was a bit shocked when my mother called me for IT help and sent me a screenshot of a Cloudflare error page with Cloudflare shown as the broken link, not the server. I assumed it was a bug in the error page and told her that the server was down.
not a sysadmin here. why wouldn't this be behind a VPN or some kind of whitelist where only confirmed IPs from the offices / gas stations have access to the infrastructure?
In practice, many gas stations have VPNs to various services, typically via multiple VPN links for redundancy. There’s no reason why this couldn’t be yet another service going over a VPN.
Gas stations didn’t stop selling gas during this outage. They have planned for a high degree of network availability for their core services. My guess is this particular station is an independent, or the air-pumping solution is not on anyone’s high-risk list.
Client-side SSL certificates with embedded user account identification are trivial, and work well for publicly exposed systems where IPsec or dynamic frame sizes are problematic (corporate networks often mangle traffic).
Accordingly, connections from unauthorized users are effectively restricted, but access is also not necessarily pigeonholed into a single point of failure.
https://www.rabbitmq.com/docs/ssl
Best of luck =3
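To make the client-certificate idea concrete, here is a minimal sketch of the client side, assuming the `native-tls` crate and a PKCS#12 bundle containing the client certificate and key. The host name, port, password, and file names are placeholders, and this is a generic TLS client rather than the RabbitMQ-specific setup from the link above.

```rust
use native_tls::{Identity, TlsConnector};
use std::fs;
use std::net::TcpStream;

// Sketch of a client presenting a client-side certificate during the TLS
// handshake. A server configured to require client certificates rejects
// connections that don't present a trusted one, regardless of where on the
// internet they come from.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // client.p12 bundles the client certificate and private key (placeholder names).
    let p12 = fs::read("client.p12")?;
    let identity = Identity::from_pkcs12(&p12, "changeit")?;

    let connector = TlsConnector::builder()
        .identity(identity)
        .build()?;

    let tcp = TcpStream::connect("air.example.com:8443")?;
    let _tls = connector.connect("air.example.com", tcp)?;
    // At this point the server has authenticated us from the certificate alone;
    // no IP allow-list or geofence was involved.
    Ok(())
}
```

The appeal over IP allow-listing is that the credential travels with the device, so it keeps working behind NAT, VPNs, or a changed ISP, without any single chokepoint in front of it.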
Spot on article, but without a call to action. What can we do to combat the migration of society to a centralized corpro-government intertwined entity with no regard for unprofitable privacy or individualism?
Individuals are unlikely to be able to do something about the centralization problem except vote for politicians that want to implement countermeasures. I don’t know of any politicians (with a chance to win anything) that have that on their agenda.
There is a crucial step between having an opinion and voting: conversations within society. That's what makes democracy work and facilitates change. If you only form your opinion in isolation from everybody else and vote from that, there isn't much democracy going on and your chance for change is slim. It's when broad conversations are happening that movements have an impact.
And that step is here on HN. That's why it's very relevant to observe that the HN crowd is increasingly happy to support a non-free internet, be it walled gardens, geofencing, etc.
That’s called antitrust, and is absolutely a cause you can vote for. Some of the Biden administration’s biggest achievements were in antitrust, and the head of the FTC for Biden has joined Mamdani’s transition team.
Learn how to host anything, today.
Even if you learn to host, there are many other services you rely on that themselves run on those centralised platforms. So if you are thinking of hosting every single thing on your own, it is going to be more work than you can even imagine, and definitely super hard to organise as well.
Anything.
If you host, you are running on my cPanel software. 70% of the internet is doing that. It's also a kinda centralized point of failure, but I haven't heard of any bugs in the last 14 years.
Have you tried that? I gave up on hosting my own email server seven or eight years ago, after it became clear that there would be an endless fight with various entities to accept my mail. Hosting a webserver without the expectation that you'll need some high-powered DDoS defense seems naive in the current day, and good luck doing that with a server or two.
I had never hosted my own email before. It took me roughly a day to set it up on a vanilla FreeBSD install running on Vultr's free tier plan, and it has been running flawlessly for nearly a year. I did not use AI at all, just the FreeBSD, Postfix, and Dovecot handbooks. I do have a fair bit of Linux admin and development experience, but all in all this has been a weirdly painless experience.
If you don’t love this approach, Mail-in-a-box works incredibly well even if the author of all the Python code behind it insists on using tabs instead of spaces :)
And you can always grab a really good deal from a small hosting company, likely with decades of experience in what they do, via LowEndBox/LowEndTalk. The deal would likely blow AWS/DO/Vultr/Google Cloud out of the water in terms of value. I have been snagging deals from there for ages and I lost a virtual host twice. Once was a new company that turned out to be shady and another was when I rented a VPS in Cairo and a revolution broke out. They brought everything back up after a couple of months.
For example I just bought a lifetime email hosting system with 250GB of storage, email, video, full office suite, calendar, contacts, and file storage for $75. Configuration here is down to setting the DNS records they give you and adding users. Company behind it has been around for ages and is one of the best regarded in the LET community.
It's not insurmountable to set up initially. And when you get email rejected by whatever org (your lawyer, your mom, some random business, whatever), each individual case isn't insurmountable to fix. It does get old after a while.
It also depends on how much you are emailing, and who. If it's always the same set of known entities, you might be totally fine with self-hosting. Someone who's regularly emailing a lot of new people or businesses might incur a lot of overhead, at some point worth more of their time than a Fastmail or ProtonMail subscription or whatever.
We could quibble about the premise.
"Embrace outages, and build redundancy." — It feels like back in the day this was championed pretty hard especially by places like Netflix (Chaos Monkey) but as downtime has become more expected it seems we are sliding backwards. I have a tendency to rely too much on feelings so I'm sure someone could point me to some data that proves otherwise but for now that's my read on things. Personally, I've been going a lot more in on self-hosting lots of things I used to just mindlessly leave on the cloud.
My friend wasn't able to do RTG during the outage. They had to use an ultrasound machine on his broken arm to see inside.
> My friend wasn't able to do RTG during the outage.
What is RTG?
X-ray, in some languages (like Polish) the abbreviation comes from https://en.wikipedia.org/wiki/Roentgen_(unit)
Wilhelm Röntgen, Nobel Prize in 1901, experimentally discovered X-rays.
X-ray
For me personally, I didn't notice the downtime in the first hour or so. When using some websites, assets were not loading, but that's it. The Turnstile outage maybe impacted me most. Could be because I'm EU-based and Cloudflare is not as widespread here as in other parts of the world.
I wonder what life without Cloudflare would look like. What practices would fill the gaps if a company didn't, or wasn't allowed to, satisfy the concerns that Cloudflare addresses?
Pretty much exactly like it does now but with less captchas.
Fun fact: Headless browsers can easily pass cloudflare captchas automatically. They're not actually captchaing - they're just a placebo. You just need to be coming from a residential IP address and using a real browser.
Now just wait til every country on earth really does replace most of its employees with ChatGPT... and then OpenAI's data center goes offline with a fiber cut or something. All work everywhere stops. Cloudflare outage is nothing compared to that.
That was this outage. ChatGPT and Claude are both behind Cloudflare's bot detection. You couldn't log into either web frontend.
And the error message said you were blocking them. We had support tickets coming in demanding to know why ChatGPT was being blocked.
We also couldn’t log into our supplier’s B2B system to place our customer orders.
So all the advice of “just self host” is moot when you’re in a food web.
> goes offline with a fiber cut
If a fiber cut brings your network down then you have fundamental network design issues and need to change hiring practices.
That's why it's better to have redundancy. Hire Claude and Deepseek, too.
The outage wasn't a good thing, since nothing is changing as a result. (How many outages has Cloudflare had?)
It's a tragedy of the commons. Even if you don't use Cloudflare yourself, does it matter if no one can pay for your products?
I'll die on the hill that centralization is more efficient than decentralization and that rare outages of hugely centralized systems that are otherwise highly reliable are much better than full decentralization with much worse reliability.
In other words, when AWS or Cloudflare go down it's catastrophic in the sense that everyone sees the issues at the same time, but smaller providers usually have many more ongoing issues; they just happen to be "chronic" rather than "acute" pains.
Efficient in terms of what, exactly?
There are multiple dimensions to this problem. Putting everything behind Cloudflare might give you better uptime, reliability, performance, etc. but it also has the effect of centralizing power into the hands of a single entity. Instead of twisting the arms of ten different CXOs, your local politician now only needs to twist the arm of a single CXO to knock your entire business off the internet.
I live in India, where the government has always been hostile to the ideals of freedom of speech and expression. Complete internet blackouts are common in several states, and major ISPs block websites without due process or an appeals mechanism. Nobody is safe from this, not even Github[1]. In countries like India, decentralization is a preventative measure.
[1] https://en.wikipedia.org/wiki/Censorship_of_GitHub#India
And I'm not even going to talk about abuse of monopoly power and all that. What happens when Cloudflare has their Apple moment? When they jack up their prices 10x, or refuse to serve customers that might use their CDNs to serve "inappropriate" content? When the definition of "inappropriate" is left fuzzy, so that it applies to everything from CSAM to political commentary?
No thanks.
The fix to government censorship must be political, not technical.
And the irony is that people are pushing for decentralization like microservices and k8s - on centralized platforms like AWS.
>I'll die on hill that hyperoptimized systems are more efficient than anti-fragile.
Of course they are; the issue is what level of failure we're going to accept.
how many people are still on us-east-1
My old employer used Azure. It irritated me to no end when they said we must rename all our resources to match the convention of prefixing everything in US East with "eu-" (for Eastern United States, I guess).
A total clown show
i hate that i cannot just scrape things for my own usage and i have to use things like camoufox instead of curl
I don't like this argument, since you could apply it to Google, Microsoft, AWS, Facebook, etc.
The tech world is dominated by US companies, and what are the alternatives to most of these services? There are a lot fewer than you might think, and even then you must make compromises in certain areas.
> They [outages] can force redundancy and resilience into systems.
They won't, until either the monetary pain of outages becomes greater than the inefficiency of holding on to more systems to support that redundancy, or government steps in with clear regulation forcing their hand. And I'm not sure about the latter. So I'm not holding my breath about anything changing. It will continue to be a circus of doing everything on a shoestring, because the line must go up every quarter or a shareholder doesn't keep their wings.
That's ok though, not every website needs 5 9s
Centralization has nothing to do with the problems of society and technology. And if you think the internet is all controlled by just a couple companies, you don't actually understand how it works. The internet is wildly decentralized. Even Cloudflare is. It offers tons of services, all of which are completely optional and can be used individually. You can also stop using them at any time, and use any of their competitors (of which there are many).
If, on the off chance, people just get "addicted" to Cloudflare, and Cloudflare's now-obviously-terrible engineering causes society to become less reliable, then people will respond to that. Either competitors will pop up, or people will depend on them less, or governments will (finally!) impose some regulations around the operation of technical infrastructure.
We actually have too much freedom on the Internet. Companies are free to build internet systems any way they want, including in very unreliable ways, because we impose no regulations or standards requirements on them. Those people are then free to sell products to real people based on this shoddy design, with no penalty for the products falling apart. So far we haven't had any gigantic disasters (Great Chicago Fire, Triangle Shirtwaist Factory Fire, MGM Grand Hotel Fire), but we have had major disruptions.
We already dealt with this problem in the rest of society. Buildings have building codes, fire codes, electrical codes. They prescribe and require testing procedures, provide standard building methods to ensure strength in extreme weather, resist a spreading fire long enough to allow people to escape, etc. All measures to ensure the safety and reliability of the things we interact with and depend on. You can build anything you want - say, a preschool? - but you aren't allowed to build it in a shoddy manner. We have that for physical infrastructure; now we need it for virtual infrastructure. A software building code.
Centralization means having a single point of failure for everything. If your government, mobile phone or car stops working, it doesn't mean all governments, all cars and all mobile phones stop working.
Centralization makes mass surveillance easier, makes selectively denying of service easier. Centralization also means that once someone hacks into the system, he gains access to all data, not just a part of it.
>It's ironic because the internet was actually designed for decentralisation, a system that governments could use to coordinate their response in the event of nuclear war
This is not true. The internet was never designed to withstand nuclear war.
Arpanet absolutely was designed to be a physically resilient network which could survive the loss of multiple physical switch locations.
Perhaps. Perhaps not. But it will survive it. It will survive a complete nuclear winter. It's too useful to die, and will be one of the first things to be fixed after global annihilation.
But the Internet is not hosting companies or cloud providers. The Internet does not care if they don't build their systems resiliently enough and let SPOFs creep in. The Internet does its thing and the packets keep flowing. Maybe BGP and DNS could use some additional armoring, but there are ways around both of them in case of actual emergency.
ARPANET was literally invented during the cold war for the specific and explicit purpose of networked communications resilience for government and military in the event major networking hubs went offline due to one or more successful nuclear attacks against the United States.
It literally wasn't. It's an urban myth.
>Bob Taylor initiated the ARPANET project in 1966 to enable resource sharing between remote computers.
>The ARPANET was not started to create a Command and Control System that would survive a nuclear attack, as many now claim.
https://en.wikipedia.org/wiki/ARPANET
Per interviews, the initial impetus wasn't to withstand a nuclear attack, but after it was first set up, it was most certainly a major part of the thought process in design. https://web.archive.org/web/20151104224529/https://www.wired...
>but after it was first set up
Your link is talking about work Baran did before ARPANET was created. The timeline doesn't back your point. And when ARPANET was created after Baran's work with Rand:
>Wired: The myth of the Arpanet – which still persists – is that it was developed to withstand nuclear strikes. That's wrong, isn't it?
>Paul Baran: Yes. Bob Taylor had a couple of computer terminals speaking to different machines, and his idea was to have some way of having a terminal speak to any of them and have a network. That's really the origin of the Arpanet. The method used to connect things together was an open issue for a time.
Read the whole article. And peruse the oral history here: https://ethw.org/Oral-History:Paul_Baran - the genesis was most definitely related to the cold war.
"A preferred alternative would be to have the ability to withstand a first strike and the capability of returning the damage in kind. This reduces the overwhelming advantage by a first strike, and allows much tighter control over nuclear weapons. This is sometimes called Second Strike Capability."
The stated research goals are not necessarily the same as the strategic funding motivations. The DoD clearly recognized packet-switching's survivability and dynamic routing potential when the US Air Force funded the invention of networked packet switching by Paul Baran six years earlier, in 1960, for which the explicit purpose was "nuclear-survivable military communications".
There is zero reason to believe ARPA would've funded the work were it not for internal military recognition of the utility of the underlying technology.
To assume that the project lead was told EVERY motivation of the top secret military intelligence committee that was responsible for 100% of the funding of the project takes either a special kind of naïveté or complete ignorance of compartmentalization practices within military R&D and procurement practices.
ARPANET would never have been were it not for ARPA funding, and ARPA never would've funded it were it not for the existence of packet-switched networking, which itself was invented and funded, again, six years before Bob Taylor even entered the picture, for the SOLE purpose of "nuclear-survivable military communications".
Consider the following sequence of events:
1. US Air Force desires nuclear-survivable military communications, funds Paul Baran's research at RAND
2. Baran proves packet-switching is conceptually viable for nuclear-survivable communications
3. His specific implementation doesn't meet rigorous Air Force deployment standards (their implementation partner, AT&T, refuses, which is entirely to be expected for what was then a complex new technology that not a single AT&T engineer understood or had ever interacted with during their education), but the concept is now proven and documented
4. ARPA sees the strategic potential of packet-switched networks for the explicit and sole purpose of nuclear-survivable communications, and decides to fund a more robust development effort
5. They use academic resource-sharing as the development/testing environment (lower stakes, work out the kinks, get future engineers conceptually familiar with the underlying technology paradigms)
6. Researchers, including Bob Taylor, genuinely focus on resource sharing because that's what they're told their actual job is, even though that's not actually the true purpose of their work
7. Once mature, the technology gets deployed for its originally intended strategic purposes (MILNET split-off in 1983)
Under this timeline, the sole true reason for ARPA's funding of ARPANET is nuclear-survivable military communication. Bob Taylor, being the military's R&D pawn, is never told that (standard compartmentalization practice). He can credibly and honestly state that he was tasked with implementing resource sharing across academic networks, which is true, but that was never the actual underlying motivation to fund his research.
...and the myth of "ARPANET wasn't created for nuclear survivability" is born.
The thing I learned from the incident is that Rust offers an unwrap function. It puzzles me why the hell they built such a function in the first place.
> It puzzles me why the hell they built such a function in the first place.
One reason is similar to why most programming languages don't return an Option<T> when indexing into an array/vector/list/etc. There are always tradeoffs to make, especially when your strangeness budget is going to other things.
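To make that tradeoff concrete, here is a small sketch of the two styles Rust offers side by side; the `features` vector is a made-up example, not Cloudflare's code.

```rust
fn main() {
    let features: Vec<&str> = vec![]; // imagine this was expected to be non-empty

    // The panicking shortcuts: concise, and correct as long as the "never empty"
    // invariant actually holds. If it doesn't, the process dies.
    // let first = features[0];                // panics: index out of bounds
    // let first = features.first().unwrap();  // panics: unwrap() called on None

    // The explicit path: the empty case is handled instead of crashing.
    match features.first() {
        Some(first) => println!("first feature: {first}"),
        None => eprintln!("feature list was empty; falling back to defaults"),
    }
}
```

Neither style is wrong in the abstract; the cost shows up when an "impossible" case becomes possible, which is why a lot of post-incident advice boils down to reserving the panicking forms for states that are genuinely unreachable.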