Years ago, when I started working on projects that served billions of users, I saw how infrastructure choices made in the early days can reshape an entire industry’s destiny.
Even platforms launched with the best intentions to be open, neutral, and free of control can slide into forms of centralization.
It’s not because anyone sets out to be “evil”; it’s just the natural gravitational pull of technology and markets when certain design decisions are locked in from the start.
Infrastructure design choices matter from day one.
These core design choices have to ensure that the technology itself enforces fairness and prevents power from consolidating in the first place.
“Power tends to concentrate, even if nobody plans for it”
It’s a subtle yet profound truth I learned firsthand while working on large-scale internet products.
When the ‘decentralized industry’ was born, it seemed like a second chance. We looked at Bitcoin, Ethereum, and others as ways to escape old power structures.
The narrative was straightforward: take back control, cut out middlemen, and let code ensure fairness.
But we have to be honest: over time, the same pressures that centralized the Internet began acting on these ‘decentralized’ systems as well.
But how did the Internet get centralized?
Didn’t the Internet start out as a decentralized P2P network that could withstand even a nuclear war?
To understand why these decentralized systems are succumbing to centralizing pressures, you have to understand what happened with the Internet.
You have to look at how it transitioned from its idealistic beginnings into a highly centralized ecosystem.
“In the beginning, nobody held all the keys, and no single player was calling all the shots”
The earliest version of what we now call the Internet basically started out under the U.S. Department of Defense, with things like ARPANET in the late ’60s.
The whole idea from day one was to avoid that single point of failure, to make sure no one spot going down could take everything else with it.
This network was deliberately designed to be decentralized.
The rationale was strategic: a distributed system could withstand the failure of any single node, making communications more resilient in the face of disruptions like equipment malfunctions or even wartime conditions.
A reliable and decentralized communication network that could withstand even a nuclear attack.
Each node was a “peer” capable of sending and receiving data without relying on a single centralized authority. Any machine, regardless of hardware or operating system, could “speak” TCP/IP and exchange data.
By the ’70s and ’80s, universities and research labs linked up through NSFNET and ARPANET, and suddenly you had this environment where nobody held all the keys and no single player was calling all the shots.
It showed up in the fundamentals:
TCP/IP, FTP, Telnet, Usenet newsgroups, and DNS were not about locking anybody into one spot. There was little incentive to impose strict controls or hierarchies.
Usenet, for example, spread messages in a fully decentralized P2P manner. DNS delegated naming authority in a distributed hierarchy, but every component still acted as both a client and server to some degree.
It all reinforced that original principle:
a network that wasn’t just about one big player setting the rules, but rather a system where anyone could plug in and participate.
But in the early ’90s, the World Wide Web and browsers changed the whole game.
The recreated page of the first website (Image: CERN)
Tim Berners-Lee: The Visionary Behind the World Wide Web
“As the Internet’s user base surged, the original design’s assumptions around open participation and mutual trust began to show cracks”
The World Wide Web, introduced in 1989–1991, was built atop open standards (HTTP, HTML) deliberately placed in the public domain. In its earliest form, the Web made it trivial for individuals, small organizations, or anyone with a modem and hosting to put up a website.
The infrastructure was still largely “flat” and decentralized, with countless independent webpages linked together in a loose federation.
Then ‘web browsing’ became the “killer app.”
Websites became storefronts, news outlets, and entertainment hubs. The average person wasn’t running their own server or hosting their own pages.
Netscape’s home page in 1994, featuring its mascot Mozilla, as seen in NCSA Mosaic 3.0
[Screenshot: Alex Pasternack / OldWeb.today]
They ran web browsers (clients), first over slow modems, then broadband, to fetch content from large, well-known web servers. Suddenly, the advantage shifted to whoever could host huge amounts of data and run e-commerce sites or search engines.
Early search engines like AltaVista, Yahoo!, and later Google emerged to help people navigate the rapidly expanding online world.
The network effect became pronounced: the more people used a search engine, the better it could refine its indexing and advertising models, reinforcing its dominance.
Google’s PageRank algorithm turned it into a singular gateway to the web’s vastness.
That drove money and attention into big data centers, and the ones who could scale up and handle those massive loads came out on top.
As Internet Service Providers emerged to serve millions of new users, the infrastructure naturally optimized for downstream delivery.
Faster download speeds than upload speeds (asymmetric broadband connections like ADSL or cable) made economic sense because most users consumed more than they produced. The network “learned” that most endpoints were clients only.
And as the Internet’s user base surged, the original design’s assumptions around open participation and mutual trust began to show cracks.
“Freedom and openness without safeguards can invite abuses that force us to build higher walls.”
The original protocols hadn’t been built to handle a massive, diverse crowd, many of whom had business interests or motivations that tested the system’s openness.
With no real safeguards, spam became a big issue, exploiting that open environment.
The original, open design made every host reachable from any other host, which was fine when the Internet was a small, trusted community.
But as it grew, attacks, hacking attempts, and malicious activities ballooned.
Similarly, without some way to keep bandwidth usage fair, a few applications learned to push the limits and gain an advantage at others’ expense.
These design gaps nudged the Internet toward more regulation and control.
To protect internal networks, organizations deployed firewalls to block incoming connections. Network Address Translation (NAT) further isolated internal machines behind a single shared IP address.
This curtailed the peer-to-peer nature of communications.
Hosts behind NATs and firewalls were effectively forced into a client-only role, no longer directly addressable from the outside world.
Over time, these infrastructure decisions reinforced each other.
“A handful of companies realized that controlling data centers and owning massive server infrastructures gave them enormous competitive advantages.”
The complexity and cost of running one’s own server from home, coupled with technical barriers like NAT and firewalls, meant fewer individuals participated as true peers.
In other words, the environment pretty much nudged the Net towards a handful of centralized giants.
By the early 2000s, a handful of companies realized that controlling data centers and owning massive server infrastructures gave them enormous competitive advantages.
They could provide faster, more reliable, and more convenient services than a random peer on the network.
This trend was on steroids in the late 2000s.
Search engines like Google, large platforms like Amazon, social media giants like Facebook, and content distribution networks built massive infrastructures that delivered content and applications at unprecedented speed and scale.
These large companies also tapped into the “virtuous cycle” of data and algorithms.
The more users they attracted, the more data they gathered, which let them refine their products, personalize experiences, and target advertisements more accurately. This made their services even more attractive, pulling in more users and more revenue.
Then the Internet’s revenue model shifted heavily toward targeted advertising.
Over time, this feedback loop concentrated power further, as smaller competitors struggled to match the infrastructure investment and data advantages of big players.
Infrastructure that once could be run from a personal server or a local data center increasingly moved into the cloud.
Giants like Amazon (AWS), Microsoft (Azure), and Google Cloud now host the backbone of much of the Internet. This shift occurred because running large-scale, secure, and reliable infrastructure became so complex and capital-intensive that only a handful of companies could do it efficiently.
Startups and even established companies found it cheaper and simpler to rely on these big cloud providers.
Services such as CDNs (like Cloudflare or Akamai) and DNS resolvers also gravitated toward a few big players.
The complexity and cost advantages of these managed solutions meant fewer reasons for organizations to “roll their own” infrastructure.
Gradually, the decentralized underpinnings such as small ISPs, independent hosting, and localized routing gave way to a model where most traffic and services depend on a tiny number of major intermediaries.
“The big players didn’t start out evil; they just optimized for convenience, performance, and profit.
It was the natural outcome of early architectural design choices in the underlying network.”
With scale and centralization came more power and control.
Large platforms set their own terms of service, determining what content users could see or post and how their data would be collected or sold. Users had fewer alternatives if they didn’t like these policies.
Over time, these platforms’ recommendation algorithms and content policies became de facto arbiters of public discourse.
Paradoxically, what began as an open, decentralized network that empowered free exchange of ideas and content now often funneled information through a few corporate gateways.
Now these companies, in some respects, wield power comparable to that of governments: they can shape public discourse, influence commerce, and control entire ecosystems of third-party developers.
A network originally designed for free-form, peer-level interconnection now orbits around powerful corporate hubs that can shape and control much of the online experience.
This wasn’t some grand scheme to concentrate power. Nor did this situation stem from a single “wrong turn”.
The big players didn’t start out evil; they just optimized for convenience, performance, and profit. It was the natural outcome of early architectural design choices in the underlying network.
These choices didn’t anticipate how a much broader and more commercially driven audience would use the system and push it beyond its initial design parameters.
Over time, these choices accrued into a system where a handful of companies dominate.
The same thing is happening before our eyes in the decentralized industry.
“The pull towards centralization isn’t always the result of malicious intent; often, it’s an attempt to fix problems of a system never built to stay decentralized at scale.”
Just as the early Internet slid away from its peer-to-peer ideals and drifted into the hands of a few big players, today’s blockchain and “decentralized” technologies risk following the same path.
This is easiest to see with Ethereum’s attempts to scale.
High fees and slow throughput pushed developers to adopt Layer-2 (L2) solutions: rollups that bundle transactions off-chain and then settle them on Ethereum. In theory, these L2s should retain Ethereum’s trustless nature.
In practice, many depend on a single “sequencer” (a central server that orders transactions) run by one company.
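To see what that trust assumption amounts to, here is a deliberately minimal sketch in plain Rust of what a single-operator sequencer boils down to. It is illustrative only, not any particular rollup’s code: one process collects transactions, picks their order, and posts the batch, and nothing in the protocol stops that one operator from censoring or reordering.

```rust
// Minimal sketch of a centralized rollup sequencer (illustrative only).
// One operator receives transactions, chooses their order, and posts the
// batch to L1. Nothing in the protocol stops it from censoring or reordering.

#[derive(Debug, Clone)]
struct Tx {
    sender: String,
    payload: Vec<u8>,
}

struct Sequencer {
    mempool: Vec<Tx>,
}

impl Sequencer {
    fn submit(&mut self, tx: Tx) {
        // The operator could silently drop (censor) any transaction here.
        self.mempool.push(tx);
    }

    fn build_batch(&mut self) -> Vec<Tx> {
        // Ordering is entirely at the operator's discretion: it could
        // front-run, reorder, or delay without violating any on-chain rule.
        std::mem::take(&mut self.mempool)
    }
}

fn main() {
    let mut seq = Sequencer { mempool: Vec::new() };
    seq.submit(Tx { sender: "alice".into(), payload: vec![1] });
    seq.submit(Tx { sender: "bob".into(), payload: vec![2] });
    let batch = seq.build_batch();
    println!("operator-chosen order: {:?}", batch);
    // Posting this batch to L1 would settle one party's chosen ordering.
}
```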
Right now, one particular L2 solution has the most activity and total value locked, yet it is also the most centralized.
The pitch is that decentralization will come someday, but we’ve heard that before.
Over time, these “temporary” solutions have a way of becoming permanent. The same pattern may emerge with future layered approaches; some might not even bother promising any path to decentralization.
“Social logins” might seem helpful: they make it easy for people to start using a service without juggling private keys or complicated interfaces. But if these logins depend on a centralized component, you’re once again trusting one company to do the right thing.
That’s why, when we built zkLogin, we designed and integrated it in a trustless manner. It was hard, but we would not compromise and introduce centralization for the sake of convenience.
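To make the difference concrete, here is a purely conceptual sketch in Rust; the names and the hash-based derivation are stand-ins, not Sui’s actual zkLogin construction. In a fully custodial social login, the provider holds the signing key and therefore the account. In a zkLogin-style design, the address is derived from the OAuth identity plus a salt only the user holds, with a zero-knowledge proof (elided here) linking the two, so the login provider alone never controls the account.

```rust
// Conceptual contrast between a custodial social login and a zkLogin-style
// derivation. Names and hashing are stand-ins; this is not Sui's actual
// zkLogin construction.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for a cryptographic hash over several inputs.
fn h(parts: &[&str]) -> u64 {
    let mut hasher = DefaultHasher::new();
    for p in parts {
        p.hash(&mut hasher);
    }
    hasher.finish()
}

// Custodial pattern: the provider keeps the key, so the provider *is* the account.
struct CustodialLogin {
    provider_held_key: u64,
}

// zkLogin-style pattern: the address is a function of the OAuth identity
// (issuer + subject) and a salt only the user knows. A ZK proof (not shown)
// demonstrates the user really holds a valid token for that identity, so no
// single company can move the funds on its own.
fn zklogin_style_address(issuer: &str, subject: &str, user_salt: &str) -> u64 {
    h(&[issuer, subject, user_salt])
}

fn main() {
    let custodial = CustodialLogin { provider_held_key: 42 };
    println!("custodial account controlled by provider key {}", custodial.provider_held_key);

    let addr = zklogin_style_address("https://accounts.example.com", "user-123", "secret-salt");
    println!("self-custodied address derived from identity + salt: {addr:x}");
}
```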
A similar pattern emerged in the NFT ecosystem.
A single dominant marketplace became the primary venue for secondary sales, capturing most of the trading volume and effectively becoming the de facto platform.
Not long ago, this marketplace decided to stop enforcing royalty payments on secondary sales.
Yes, it increased trading volume, but it screwed over the creators who relied on those royalties as a key source of income.
This is a clear example of the fallout when centralized platforms can modify the rules any time they want.
Their dominance also extended beyond trading, as many projects depended on their APIs and infrastructure.
When this centralized platform had outages, the entire ecosystem felt the impact, exposing the deep reliance that had formed.
But why does this keep happening?
Because users want quick, cheap, and easy experiences. Developers, under pressure, often turn to familiar and reliable solutions. These choices are simpler and faster but can introduce points of control that undermine decentralization.
None of these steps start out as grand plans to monopolize. They’re just practical responses to tough technical and market challenges.
But over time, these “band-aids” become embedded in the system’s DNA, creating a structure where a few players hold the keys.
That’s why these systems must be designed from the ground up for builders, not just for consumers.
“If I had asked people what they wanted, they would have said faster horses.” - Henry Ford
Most consumers just want a better version of what they already have.
But when we only chase these short-term improvements, we risk ending up with systems that look decentralized on the surface but still have a few key players pulling the strings.
If we really want to avoid repeating the mistakes that led to today’s digital gatekeepers, we need to focus on the creators of the future, the builders, not just the consumers.
This is why I always tell my team: consumers will always ask for a faster horse; it’s the builders who imagine the car.
With the right building blocks, developers can launch platforms that aren’t forced into centralization for the sake of convenience. They can create systems where no single entity can dominate or lock users in, ensuring that benefits flow more evenly to all participants.
That’s why these systems must be designed from the ground up to reinforce decentralization, even as they scale to internet levels.
“Tech debt can be fixed with refactoring; design debt often demands a total reset.”
From my early years working on systems that scaled to billions of users, one lesson stuck with me: Once a system becomes mission-critical, you can’t just tear it all down and rebuild without causing massive disruption.
The moment that millions of users rely on your system’s entrenched behaviors and assumptions, even proposing radical architectural changes becomes a non-starter.
It would break applications, business models, and the trust of entire communities built on top.
This is the concept of “design debt” at its most severe.
This isn’t just about code cleanliness; it’s about fundamental architectural choices that dictate how trust, power, and value flow through the network.
In the early days of this industry, the so-called blockchain trilemma (sometimes called the scalability trilemma), the idea that you can’t have decentralization, security, and scalability all at once, was treated like a law of nature.
People built on top of that assumption, believing it was as unchangeable as gravity. But it wasn’t.
It stemmed from flawed initial architectures: a massive global shared state and limiting data models that made parallelism and modular scaling impossible. Under those constraints, the only way forward was to lump all transactions together, forcing every participant to fight for the same limited resources regardless of what they were doing. The result? Inefficient auctions for block space that drive up costs during spikes in demand and fail to isolate congestion to where it actually occurs.
Under these conditions, adding layers (like L2s that rely on centralized sequencers or compressed assets that depend on centralized storage) only papered over the cracks.
Each patch aimed at fixing short-term issues often adds more complexity and more points of centralized control, drifting further away from the original vision.
This is how design debt accumulates into a form of “technical gravity” that pulls everything toward centralization.
Even systems that never intended to be gatekeepers end up reinforcing hierarchical structures because their fundamental architecture demands it. Once that happens, the road back to a truly decentralized, trustless state is blocked by entrenched interests and infrastructural inertia.
The lesson is clear: you have to get the architecture right from the start.
That means picking data models that don’t bundle everything into a single global state, using storage solutions that are verifiable without trusting a middleman, and choosing a networking layer that doesn’t depend on a handful of powerful intermediaries.
It’s about reimagining the entire tech stack from day one.
“The only real cure for design debt is not to accumulate it in the first place.”
When we talk about building infrastructure that cannot be evil, we’re really talking about making the right architectural choices from day one.
That’s why, when we designed Sui, we baked those foundational principles directly into its core.
This allows developers to build scalable, secure, and user-friendly applications without bending over backwards or relying on centralized crutches.
Consider the programming model itself:
Sui’s object-centric programming model sits at the core of its design philosophy, and it is a deliberate departure from the account-based paradigms that have dominated many blockchains.
In a world where Web2 developers naturally think in terms of objects, such as files, records, and assets, it doesn’t make sense to reduce everything to a monolithic account model.
Doing so forces developers into unnatural thought patterns. It introduces complexity that’s ripe for errors.
The object-centric programming model aligns naturally with how Web2 engineers already reason about software.
Objects are first-class citizens, making it simple to represent assets and define rules while sidestepping common pitfalls such as reentrancy bugs. Instead of writing boilerplate checks or complex guardrails to prevent exploits, developers rely on the Move VM to enforce safety at the runtime level.
As a result, code is more readable, secure, and easier to reason about.
It’s a direct bridge from Web2’s object-oriented mindset to Web3’s trustless environment, made possible by getting the assumptions right from the start.
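A rough illustration of that mental-model difference, with plain Rust standing in for Move and made-up types: in an account-based model, an asset is an entry in one giant shared ledger, while in an object-centric model, the asset is a typed value that is owned and handed over directly.

```rust
// Illustrative contrast between account-based and object-centric state.
// Plain Rust standing in for Move; the types are made up for this sketch.

use std::collections::HashMap;

// Account-based view: one global map of balances. Every transfer mutates
// shared state keyed by address, and nothing in the types says who may
// touch which entry.
struct AccountLedger {
    balances: HashMap<String, u64>,
}

impl AccountLedger {
    fn transfer(&mut self, from: &str, to: &str, amount: u64) -> Result<(), &'static str> {
        let from_bal = self.balances.get_mut(from).ok_or("unknown sender")?;
        if *from_bal < amount {
            return Err("insufficient balance");
        }
        *from_bal -= amount;
        *self.balances.entry(to.to_string()).or_insert(0) += amount;
        Ok(())
    }
}

// Object-centric view: the asset itself is a typed, owned value. Transferring
// it is handing the object over; there is no global map to contend over, and
// ownership is visible in the types rather than buried in runtime checks.
struct Coin {
    id: u64,
    value: u64,
}

struct Owned<T> {
    owner: String,
    inner: T,
}

fn transfer_object(mut coin: Owned<Coin>, new_owner: &str) -> Owned<Coin> {
    coin.owner = new_owner.to_string();
    coin
}

fn main() {
    let mut ledger = AccountLedger { balances: HashMap::from([("alice".into(), 100)]) };
    ledger.transfer("alice", "bob", 40).unwrap();

    let coin = Owned { owner: "alice".into(), inner: Coin { id: 7, value: 40 } };
    let coin = transfer_object(coin, "bob");
    println!(
        "ledger bob = {:?}, object #{} owner = {}, value = {}",
        ledger.balances.get("bob"),
        coin.inner.id,
        coin.owner,
        coin.inner.value
    );
}
```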
But a great programming model means nothing if it crumples under load.
From the beginning, Sui was built to handle real-world load. It was designed to scale horizontally while maintaining synchronous atomic composability.
The system’s object model gives Sui a fine-grained understanding of which parts of the state each transaction touches, enabling parallel execution at scale. This is a stark contrast to EVM-based systems, which must lock the entire global state. This slows everything down and encourages centralized solutions to offload the transaction volume.
With Sui, each object is effectively its own shard. Need more capacity? Add more computational power to handle the load.
The Pilotfish Prototype: https://blog.sui.io/pilotfish-execution-scalability-blockchain/
Developers don’t have to worry about sharding logic, bridging multiple domains, or hacking together infrastructure to achieve scale.
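A toy scheduler makes the idea concrete. This is illustrative only, not Sui’s execution engine: because every transaction declares the objects it touches, transactions with disjoint object sets can be packed into batches that run fully in parallel, and only transactions contending for the same object need to be ordered against each other.

```rust
// Toy scheduler illustrating object-level parallelism (not Sui's engine).
// Each transaction declares which object IDs it touches; transactions with
// disjoint sets can execute concurrently, while conflicting ones are split
// into later batches.

use std::collections::HashSet;

#[derive(Debug)]
struct Tx {
    name: &'static str,
    touched_objects: HashSet<u64>,
}

/// Greedily pack transactions into batches whose members touch disjoint
/// objects. Every batch could be executed fully in parallel.
fn parallel_batches(txs: Vec<Tx>) -> Vec<Vec<Tx>> {
    let mut batches: Vec<(HashSet<u64>, Vec<Tx>)> = Vec::new();
    for tx in txs {
        // Find an existing batch whose transactions touch disjoint objects.
        let slot = batches
            .iter()
            .position(|(used, _)| used.is_disjoint(&tx.touched_objects));
        match slot {
            Some(i) => {
                let (used, batch) = &mut batches[i];
                used.extend(tx.touched_objects.iter().copied());
                batch.push(tx);
            }
            None => {
                let used = tx.touched_objects.clone();
                batches.push((used, vec![tx]));
            }
        }
    }
    batches.into_iter().map(|(_, b)| b).collect()
}

fn main() {
    let txs = vec![
        Tx { name: "mint A", touched_objects: HashSet::from([1]) },
        Tx { name: "mint B", touched_objects: HashSet::from([2]) },
        Tx { name: "swap A", touched_objects: HashSet::from([1, 3]) },
    ];
    for (i, batch) in parallel_batches(txs).iter().enumerate() {
        println!("parallel batch {i}: {:?}", batch.iter().map(|t| t.name).collect::<Vec<_>>());
    }
}
```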
So the system can handle more traffic as the network grows, but how do you ensure fair resource allocation?
If one popular asset or dApp corners the market on state updates, it can drive up costs and degrade the experience for everyone else.
Instead of relying on a single, global auction for block space, where one hot application can jack up prices for everyone, local fee markets let the system price resources at a finer level of granularity.
Each “object” or shard can have its own fee market, ensuring that congestion in one area doesn’t spill over and penalize unrelated parts of the network.
It’s all baked into the platform’s foundational design, ensuring that even as demand grows, the system doesn’t revert to the tired old patterns of gatekeepers and walled gardens.
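As a stripped-down sketch of the pricing idea (illustrative, not Sui’s actual gas mechanism), imagine each object carrying its own congestion signal: a transaction’s fee depends only on the objects it touches, so a hot NFT mint raises costs for its own users without taxing the rest of the network.

```rust
// Illustrative per-object fee market (not Sui's actual gas pricing).
// Each object tracks its own recent demand; the fee a transaction pays
// depends only on the objects it touches, so congestion stays local.

use std::collections::HashMap;

struct LocalFeeMarket {
    base_fee: u64,
    // Recent transaction count per object, used as a crude congestion signal.
    demand: HashMap<u64, u64>,
}

impl LocalFeeMarket {
    fn record_use(&mut self, object_id: u64) {
        *self.demand.entry(object_id).or_insert(0) += 1;
    }

    /// Fee for touching a set of objects: the base fee scaled by the
    /// congestion of the hottest object in the set. Untouched parts of the
    /// network are unaffected.
    fn quote(&self, touched: &[u64]) -> u64 {
        let hottest = touched
            .iter()
            .map(|id| self.demand.get(id).copied().unwrap_or(0))
            .max()
            .unwrap_or(0);
        self.base_fee * (1 + hottest)
    }
}

fn main() {
    let mut market = LocalFeeMarket { base_fee: 10, demand: HashMap::new() };
    for _ in 0..50 {
        market.record_use(1); // object 1 is a hot NFT mint
    }
    println!("tx touching hot object 1 pays {}", market.quote(&[1]));
    println!("tx touching quiet object 2 pays {}", market.quote(&[2]));
}
```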
Designing for decentralization also means building verifiability right into storage and communication layers.
If data storage relies on a single trusted party, you are back to square one. You need storage solutions that let anyone verify data integrity without depending on a middleman.
A truly decentralized application can’t rely on a single cloud provider or a centralized database.
Walrus provides a decentralized, verifiable storage layer comparable in power and scale to centralized offerings like AWS or Google Cloud.
With Walrus, data verifiability isn’t an afterthought but an intrinsic property.
By integrating a storage layer that’s inherently verifiable and tamper-proof, Walrus ensures that developers can run websites, host data, and build fully decentralized applications without slipping back into the centralized patterns we set out to avoid.
In other words, Walrus extends the “correct by construction” philosophy from execution to storage, ensuring your application’s integrity at every layer.
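Here is a minimal illustration of what “verifiable without a middleman” means, using a plain content-hash check that is far simpler than Walrus’s actual erasure-coded scheme: the client keeps a commitment to its data and can validate any copy it gets back, regardless of who served it.

```rust
// Minimal illustration of verifiable retrieval: the client remembers a
// commitment (here, just a hash) and checks every copy it gets back against
// it. Walrus's real scheme is erasure-coded and far more sophisticated; this
// only shows why no trusted middleman is needed to vouch for the data.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for a cryptographic hash / blob commitment.
fn commitment(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

fn verify(expected: u64, retrieved: &[u8]) -> bool {
    commitment(retrieved) == expected
}

fn main() {
    let original = b"site assets, app data, anything".to_vec();
    let expected = commitment(&original);

    // Any storage node (or cache, or mirror) can serve the bytes back...
    let honest_copy = original.clone();
    let tampered_copy = b"silently altered content".to_vec();

    // ...and the client decides validity itself, without trusting the server.
    assert!(verify(expected, &honest_copy));
    assert!(!verify(expected, &tampered_copy));
    println!("honest copy verified, tampered copy rejected");
}
```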
Designing for decentralization shouldn’t stop at the consensus or execution layer; it should extend into the network itself.
Networking layers should not hinge on a handful of powerful ISPs or routing services. That’s also centralization.
Networking is another piece of the puzzle often overlooked in Web3.
Traditional Internet routing is controlled by a few ISPs, introducing potential choke points and vulnerabilities.
SCION is a next-generation networking protocol that challenges this status quo, making routing more secure, reliable, and resistant to centralized control.
It is a secure, multi-path, inter-domain routing architecture that can run side by side with today’s internet. It is a complete reimagining of how data moves across networks, built with security, control, and performance baked right into its core.
By integrating SCION into Sui, we’re ensuring that the underlying network isn’t a single point of failure or control.
No single entity gets to dictate data flow, and users can trust that the underlying routes won’t be manipulated or monopolized.
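The practical difference shows up in how endpoints treat paths. Today’s routing hands you a single route chosen by the network; in a SCION-style model, the endpoint sees several verified path options and chooses among them, so a misbehaving or failed intermediary can simply be routed around. The sketch below is generic multi-path selection with made-up types, not SCION’s actual API.

```rust
// Generic multi-path selection sketch (made-up types, not SCION's API).
// The point: when endpoints see several independently verified paths, no
// single intermediary can become a mandatory choke point.

#[derive(Debug, Clone)]
struct Path {
    hops: Vec<&'static str>,
    latency_ms: u32,
    verified: bool, // stands in for cryptographic path validation
}

/// Pick the fastest verified path that avoids a distrusted or failed hop.
fn choose_path(paths: &[Path], avoid: &str) -> Option<Path> {
    paths
        .iter()
        .filter(|p| p.verified && p.hops.iter().all(|h| *h != avoid))
        .min_by_key(|p| p.latency_ms)
        .cloned()
}

fn main() {
    let paths = vec![
        Path { hops: vec!["isp-a", "exchange-1"], latency_ms: 20, verified: true },
        Path { hops: vec!["isp-b", "exchange-2"], latency_ms: 35, verified: true },
        Path { hops: vec!["isp-a", "shady-transit"], latency_ms: 15, verified: false },
    ];

    // If "exchange-1" misbehaves or fails, the endpoint reroutes on its own.
    println!("preferred: {:?}", choose_path(&paths, "nobody"));
    println!("avoiding exchange-1: {:?}", choose_path(&paths, "exchange-1"));
}
```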
By integrating verifiability and permissionlessness into each layer, including the data model, storage, and networking, you reduce the surface area where central points of control can take hold.
You are not adding decentralization as an afterthought; you are embedding it into the foundation.
This keeps complexity down and closes the door to “convenient” but centralizing workarounds. Most importantly, getting the fundamentals right means never placing your bet on a “we will fix it later” mentality.
“Decentralization isn’t a validator count. True decentralization is about the architecture that keeps power from pooling in one place.”
The point of everything we’ve explored is simple: if you want a system that cannot be evil, you have to start with the right architecture.
If you start with the wrong assumptions, no amount of extra code or clever tricks will save you.
An architecture that rewards gatekeepers, a data model that forces every actor to compete for the same scarce resource, a networking layer designed around centralized hubs: eventually, you’ll slip back into the same old patterns of control and hierarchy.
This is why the architectural groundwork matters so much.
Decentralization isn’t just about counting how many nodes you have. True decentralization means designing at the root level so that trust, fairness, and verifiability are impossible to circumvent.
It means building systems where neither a handful of whales nor a well-resourced company can quietly tilt the playing field. It’s about ensuring that every participant has a fair shot, and that no choke point, no subtle design decision, can snowball into runaway centralization.
Sui is one example of what happens when you design with these principles in mind from day one rather than trying to retrofit them after the fact.
When the entire stack, from the programming model to the consensus layer, and from user onboarding to data availability and the networking layer, reinforces openness and neutrality, you create an environment where builders and users flourish on equal terms.
By starting from first principles and enforcing decentralization at every layer, you can create infrastructure that remains true to its ethos, no matter how big it grows.
Build it right from the start, and you won’t need to promise future fixes or half-measures.
You’ll have a network that’s inherently just, resilient, and ready to serve as the foundation for the next generation of digital experiences.