The Great Consolidation
I’ve noticed something when using GitHub Copilot. If I start writing some code to analyze data, it almost always suggests I use Python and the pandas library. It’s usually a good suggestion. But it is always that suggestion. It feels less like magic and more like I’m walking on a path that has been worn smooth by thousands of people before me.
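For the curious, this is roughly the opening Copilot hands me every time. The file name and columns below are made up, but the shape of the code is always the same:

```python
import pandas as pd

# The well-worn path: load a CSV, summarize it, group and sort.
# ("sales.csv" and the column names are hypothetical placeholders.)
df = pd.read_csv("sales.csv")
print(df.describe())

by_region = df.groupby("region")["revenue"].sum().sort_values(ascending=False)
print(by_region.head())
```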
This isn’t just a quirk. It’s a sign of a much bigger shift that’s happening right now. Large language models, the tech behind tools like ChatGPT and Copilot, are powerful agents of consolidation. They are trained on a vast snapshot of our current world, and by reflecting that world back to us, they amplify its most dominant ideas and habits. They are, in a sense, freezing our culture in digital amber.
You see this most clearly with language. LLMs learn from the internet, books, and articles. So they learn the slang we use, the way we argue, and the stories we tell. When you ask one to write a casual email, it might throw in an “LOL” or use a phrase that feels distinctly like it came from Twitter in 2022. It has captured the linguistic fashion of a specific moment in time.
On the surface, this seems harmless. But it has deeper implications. Most of the internet is in English, and much of its cultural gravity is American. So if you ask an LLM an abstract question about “freedom,” its answer will likely be shaped by Western, and particularly American, philosophy. It’s not going to give you the perspective of, say, an Indigenous community in the Amazon unless you go out of your way to ask for it. The model presents a default view, and that default is whatever is most common in its training data. It’s globalization in overdrive, smoothing the weird and wonderful edges of culture into a single global dialect.
The same thing is happening with code. Copilot suggests pandas because most of the public code it was trained on uses pandas. It suggests React for building a web app because React is all over GitHub. This creates a powerful feedback loop. The most popular tools get suggested by the AI, which makes them even more popular, which ensures they’ll dominate the training data for the next AI.
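To see how fast that loop compounds, here’s a toy simulation. The tool names, starting shares, and probabilities are all invented for illustration, not real measurements. Each round, the “assistant” usually suggests whichever tool currently leads the corpus, and every choice made becomes training data for the next round:

```python
import random

# Toy model of the suggestion feedback loop described above.
# All names and numbers here are illustrative assumptions.
corpus = {"react": 55, "vue": 25, "svelte": 15, "newcomer": 5}

SUGGEST_LEADER = 0.9  # how often the assistant suggests the current leader
ACCEPT = 0.8          # how often the developer takes that suggestion

for year in range(5):
    for _ in range(1_000):  # new projects started this "year"
        leader = max(corpus, key=corpus.get)
        if random.random() < SUGGEST_LEADER and random.random() < ACCEPT:
            choice = leader  # the suggestion wins
        else:
            # otherwise, pick in proportion to existing popularity
            tools, weights = zip(*corpus.items())
            choice = random.choices(tools, weights=weights)[0]
        corpus[choice] += 1  # today's choice is tomorrow's training data

total = sum(corpus.values())
for tool, count in sorted(corpus.items(), key=lambda kv: -kv[1]):
    print(f"{tool:9s}{count / total:6.1%}")
```

Run it and the leader’s share typically climbs past ninety percent within a few simulated years, while “newcomer” barely moves. The loop needs no conspiracy, just a default.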
This could make it harder for new ideas to spread. A breakthrough new programming language or a brilliantly efficient web framework might struggle to get noticed, because the AI assistant that millions of developers use every day never mentions it. It’s like trying to find a new restaurant when your map app only shows you McDonald’s.
So, are we all doomed to use React and say “yeet” forever? I don’t think so. That’s not the whole story.
For one thing, culture moves too fast for the amber to set. By the time a massive model is trained and deployed, the slang it knows already sounds a little dated. New ideas, new memes, and new ways of speaking are always bubbling up from the edges. An LLM is a lagging indicator of culture, not its source.
The same is true for programming. Programmers are restless innovators. The rise of Rust happened because developers were actively looking for something better and safer than C++, and they pushed for it. A good programmer can always ignore Copilot’s suggestion and decide to try that new, experimental library. Human curiosity is a powerful antidote to consolidation.
And the models themselves aren’t perfect monoliths. Their training data is huge and contains multitudes. If you ask, they can often write in a rare dialect or generate code in a niche language like Haskell. The seeds of diversity are in there, even if the main path is paved with whatever is most popular.
What we’re seeing is a new tension. On one side, LLMs are a powerful force for centralization, for creating a shared standard. On the other, human creativity is a constant force for decentralization, for variety, and for change. The future probably won’t be a bland monoculture. It will be more like a landscape with a huge, six-lane superhighway running through it, built by AI. But alongside it, there will still be thousands of smaller, more interesting dirt roads, built and explored by people.
Perhaps the most interesting thing about these models isn’t what they tell us about the future of AI, but what they tell us about ourselves. They are a mirror. What they choose to consolidate is simply what we, collectively, are already doing the most. If we don’t like the reflection, we’re the only ones who can change it.