In 2024, Google quietly introduced a feature that seemed almost mundane: an AI-generated summary, the "AI Overview," perched at the top of your search results. Instead of presenting a list of blue links, it gave you the answer. No need to scroll. No need to think. For a fleeting moment, it felt like a miracle of convenience.
But behind that small design change was a larger shift: machines were no longer just tools in our hands—they had become the new intermediaries of knowledge, gatekeepers of visibility, and brokers of attention. The internet, once imagined as a messy but democratic commons, is being restructured around agents—autonomous systems that read, decide, and act not on our behalf, but increasingly in our place.
And as that happens, the old rules of digital power are beginning to fray.
For the last two decades, the internet has operated on a simple logic: attention is scarce, and those who can capture the most of it win. That logic created the “power law” of Web 2.0: a tiny handful of creators, influencers, and platforms soak up the majority of visibility, while the rest compete for scraps. Popularity begets more popularity. The algorithm helps the already seen become more seen.
It wasn’t fair, but it was legible. If you knew how to play the game—optimize your titles, spike emotion, ride the trends—you could earn your moment in the sun.
That game is ending.
In the age of AI agents, attention is no longer primarily about appealing to human eyes. It’s about appealing to machine logic. Your headline doesn’t need to provoke; it needs to be parsable. Your story doesn’t need to touch hearts; it needs to fit into a schema. The most visible content isn’t the most moving—it’s the most machine-readable.
We are entering a new kind of hierarchy, not of influencers, but of interpreters—those who can write for the machines that now decide what humans see.
The more AI agents operate on our behalf—filtering emails, recommending policies, booking travel, digesting news—the less we touch the raw materials of the web. Instead, agents interact with other agents in tightly controlled loops of optimization and protocol. What used to be a messy, unpredictable public square becomes a walled garden of machine-to-machine negotiation.
There is a strange elegance to this. Bureaucratic friction fades. Forms fill themselves out. Information finds you, often before you know you need it.
But as the friction vanishes, so does something else: the open-endedness of discovery, the spontaneity of thought, the slow process of coming to understand. When agents serve as our filters, they also become our limits.
And while we are told this is “efficient,” it is worth asking: efficient for whom?
Power, in the agentic era, accrues not to the loudest or most followed, but to those who own the architecture. The new gatekeepers are not platforms but protocols, and the people who write them: those who build the tools that decide what agents read and how they respond. And unlike the old influencers, these actors often have no public profile. Their influence is infrastructural.
In this world, having a voice is no longer enough. You must be legible to the systems that mediate visibility. If your message is not structured, tagged, and optimized for agentic parsing, it may as well not exist.
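To make that abstraction concrete, here is a minimal sketch of what "machine-legible" publishing looks like in practice, using the real schema.org vocabulary serialized as JSON-LD; the article, its fields, and its values are all hypothetical. Two pages can say the same thing to a human reader, but only the one carrying markup like this says anything at all to a parsing agent.

```python
import json

# A hypothetical article annotated with schema.org's Article type.
# Serialized as JSON-LD, this metadata layer is what an indexing or
# answering agent actually parses; the prose itself is opaque by comparison.
article_jsonld = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Community water testing results, June",  # hypothetical
    "datePublished": "2025-06-01",
    "author": {"@type": "Person", "name": "A. Rivera"},
    "about": {"@type": "Thing", "name": "drinking water quality"},
    "keywords": ["water", "public health", "local government"],
}

# Embedded in a page as <script type="application/ld+json">...</script>,
# this block is what makes the page "visible" to machine readers.
print(json.dumps(article_jsonld, indent=2))
```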
This is the birth of what some have called the "legibility divide": a new kind of inequality that separates not just the online rich from the poor, but the merely human-readable from the machine-visible. Entire communities may find their narratives disappearing into the void, not because no one cares, but because no machine is built to notice.
It’s tempting to see AI agents as souped-up assistants—faster, cheaper, always awake. But their role is changing quickly. Agents now draft contracts, respond to constituents, pitch products, even simulate entire focus groups. In many contexts, they’re not augmenting human labor—they’re replacing it.
But this is not the rise of a new working class. It's the rise of perfectly replicable, endlessly scalable labor, owned and deployed by those with access to training data, computing power, and proprietary models.
The question is not whether agents will be part of the workforce. They already are. The question is who owns them—and who benefits.
Perhaps the most consequential shift is the least visible: the erosion of attentional agency. When an AI agent answers your question before you finish typing it, or summarizes a political article before you read it, or nudges you toward a product you didn’t know you wanted, your agency isn’t removed—it’s rerouted.
Over time, we become less practiced at asking, choosing, doubting, even wondering. The mental muscles of discernment and curiosity begin to atrophy. What begins as cognitive convenience can end in a kind of voluntary dependency.
This isn’t dystopia. It’s design.
Incidentally—and perhaps appropriately—this very essay was compiled, synthesized, and narrated by a team of AI agents. Think of it as the agents talking about themselves, just self-aware enough to raise the alarm.
There is no switch to flip, no code to rewrite that will reverse this. The shift toward agentic mediation is not hypothetical—it is underway. But that does not mean we must accept the terms as given.
We can ask who controls the protocols. We can build agents whose values are transparent, whose decisions are auditable. We can teach ourselves and our children to be fluent not just in language, but in legibility. And above all, we can remember that even in a world of machines, human judgment, dissent, and unpredictability still matter.
Because the power law of the future won’t be written in likes or clicks. It will be written in code, in metadata, in the invisible handshakes between agents. If we don’t pay attention—not just as users but as citizens—we may find that the next generation’s public sphere is one in which attention is no longer ours to give.
It has already been pre-assigned.