The Guardrails 13: My Summer Experiment with AI Collaboration
My reflections on the 'agentic attention economy' as a real human author
I’m not ashamed to admit that I spent much of this summer talking to AI agents: to brainstorm, to write, to challenge my own thinking, and to write code and analyze data. I wanted to experiment with a new mode of scientific collaboration, one that involves human groups but primarily centers on AI agents. I wouldn’t be surprised if this becomes the default mode of academic work in the coming years (at least in some subfields). That’s why I hope to get a head start in exploring both its possibilities and its perils.
Recently, I’ve worked with various deep research agents from Gemini and OpenAI to investigate a concept I find fascinating: the agentic attention economy. I then asked Claude to write opinion pieces in the style of The Economist or The Atlantic based on the research reports; those pieces are available on this Substack.
In part, I’ve used this process to help me rethink what a new mode of economy and knowledge production might look like—one that could challenge the foundations of the current Internet. Surprisingly, it’s also helped me identify blind spots in arguments that originated in my own mind, often inspired by social media posts and later expanded and refined through AI reasoning. With this clarity, I’m now better able to see where my own thinking might fall short—and I hope to eventually invite real human authors to offer thoughtful rebuttals.
A key argument in the concept of the agentic attention economy is that attention is no longer a constrained resource. Unlike humans, AI agents appear to have virtually unlimited attention spans—they can process vast amounts of data, summarize it, and generate new content with ease. This shift fundamentally challenges the existing digital economy, which is built around capturing limited human attention, and it also disrupts the power-law dynamics that drive various forms of online inequality.
In my admittedly idealistic thinking, I see a hopeful possibility: that this new agentic attention economy could act as an equalizer—liberating human minds to pursue thought and creativity driven by genuine curiosity. There’s also the hope that we might escape the tyranny of clickbait and virality-dominated headlines, and instead encounter more balanced, thoughtful content. After all, what we are witnessing is a shift in how knowledge and attention are mediated—not just by humans, but by reasoning machines.
In the current attention economy, one of the greatest challenges for emerging voices is simply to be heard. They must compete for the scarce resource of attention against established voices, prominent figures, and highly active accounts. As a result, smaller, unheard voices are often buried beneath waves of viral content, making visibility a daunting task. Communicators have developed various tactics to make their content more engaging, and part of my previous work has examined what drives viral diffusion on digital platforms. Common sense—and research—suggests that raw emotion and moral instincts often make content more eye-catching and shareable.
But maybe things don’t have to stay this way in the emerging agentic attention economy. Unlike humans, AI agents do not feel emotions, and, at least in principle, they are less swayed by partisan cues or moralistic framings. Their processes are more mechanical: summarizing and synthesizing content without emotional bias. Could this level the playing field for new or marginalized voices? It’s an open question, and we don’t yet have definitive answers.
On the other hand, the agentic attention economy could easily replicate, or even amplify, the existing hierarchies and fragmentations of today’s online platforms. If AI outputs continue to be heavily weighted by search engine rankings and existing popularity metrics, then established voices will likely dominate the results, leaving little room for newer or lesser-known perspectives. The hope that agentic systems will democratize attention might, unfortunately, remain a pipe dream.
As someone deeply committed to internet freedom and concerned about censorship, I hold a glimmer of hope that in regions facing severe internet restrictions, large language models—even with built-in ideological guardrails—might offer a less biased alternative to state-sponsored narratives or partisan influencers. After all, machine-generated content is, in theory, more likely to be comprehensive than politically slanted talking points shaped within ideological echo chambers.
This touches on broader debates around the future of the open web versus “walled gardens,” especially those enclosed within national internet firewalls. Would such closed ecosystems pose a challenge to building effective AI agents? After all, these agents thrive on the ability to communicate across platforms and synthesize diverse streams of information. For authoritarian regimes, this could be a significant obstacle—undermining their efforts to develop competitive, capable models. Conversely, could this dynamic pressure both Western and non-Western walled gardens to open up? Could the global demands of agentic systems catalyze greater transparency and openness in digital ecosystems? These are urgent questions as we move toward an AI-mediated information environment.
There is, of course, a broader geopolitical rivalry unfolding among the major AI powers. Recent White House AI action plans have made it clear that, for AI model providers to be eligible as federal contractors, they must align with certain government-defined principles and safeguards. This raises important constitutional and ethical questions: Could such requirements infringe upon First Amendment protections? Might they quietly exert political pressure on AI companies to fine-tune their models in specific ideological directions? And even if the intent is there—is it technically feasible to enforce such alignment without compromising model integrity?
There’s little doubt that the culture wars are entering the AI arena. The question is: how will the agentic attention economy—a system increasingly driven by autonomous AI agents—be shaped by these cultural and political forces? Will agents reflect polarized content landscapes, or will they become buffers against them? The answers will have profound implications not only for free expression, but for the future of how knowledge is produced, filtered, and distributed.
There’s still so much we don’t know—including whether something like the agentic attention economy truly exists, or whether it’s simply a rebranding of old models and familiar concepts. After all, as the saying goes, “there’s nothing new under the sun.” In facing this vast uncertainty, I believe it’s important to remain humble and acknowledge the limits of our understanding. Many of the predictions I’ve made—and the reflections I’ve shared—could turn out to be completely wrong in just a few years.
What’s especially striking this summer is how polarized the AI debate has become within academic communities. Perhaps this division reflects deeper, long-standing fault lines—between ideologies, paradigms, methodologies, and disciplines. While such debates can be intellectually productive, this may not be the best time to entrench them further. We’re standing at a critical juncture where the capabilities and consequences of this new wave of innovation remain largely unknown. And we won’t truly understand them unless we experiment and explore.
The projects I’m currently undertaking aren’t guided by rigid normative frameworks or moralistic agendas. They’re driven by something simpler—curiosity. A desire to see where these technologies might lead us. And maybe that’s the naïve part of me: to believe that knowledge production should begin with curiosity, and that discovery itself is a joy worth pursuing.
I’m currently running two experiments, both driven purely by curiosity. In the first, I asked AI agents to analyze my Bluesky timeline to detect trending topics, prevailing sentiments, and the general vibe within my side of the echo chamber. Based on this analysis, the agents then composed posts they believed would most likely capture attention within that community. These posts are automatically published to a bot account, which you’re welcome to follow and observe. A sketch of this pipeline appears below.
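For the technically curious, here is a minimal sketch of what that pipeline can look like, assuming the atproto Python SDK for Bluesky; the handle, the app password, and the analyze_and_compose step are illustrative placeholders, not my actual setup.

```python
# Sketch of experiment 1: read the timeline, let an agent draft a post,
# publish it from the bot account. Assumes the `atproto` SDK.
from atproto import Client

def analyze_and_compose(posts: list[str]) -> str:
    """Placeholder for the agent step: detect trending topics and
    sentiment in the timeline text, then draft a post likely to
    resonate with that community."""
    raise NotImplementedError("plug in your agent framework here")

# Hypothetical bot credentials; use a Bluesky app password, not a real login.
client = Client()
client.login("my-bot.bsky.social", "app-password")

# Pull recent posts from the bot's home timeline.
timeline = client.get_timeline(limit=100)
texts = [item.post.record.text for item in timeline.feed]

# Ask the agent for a draft and publish it from the bot account.
client.send_post(text=analyze_and_compose(texts))
```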
In the second experiment, I tasked AI agents with reading 200 randomly selected posts about the Ukraine war each day. A cohort of 80 agents, each designed with distinct roles, personalities, and ideological leanings, is then asked to generate a reflective social media post based on what it has read. The goal is twofold: to explore what kinds of content draw the agents’ attention, and to observe the reasoning they provide for their responses. The daily outputs can be viewed from the backend of my server; the sketch below shows the shape of the daily loop.
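As before, a minimal sketch under stated assumptions: the Persona fields are illustrative rather than my actual roster, and ask_agent stands in for whatever model provider handles the persona-conditioned prompt.

```python
# Sketch of experiment 2: 80 personas react to the same daily sample
# of 200 war-related posts, each returning a post plus its reasoning.
import random
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    role: str      # e.g. "war correspondent" (illustrative)
    leaning: str   # e.g. "dovish", "hawkish", "nonaligned"

def ask_agent(persona: Persona, posts: list[str]) -> dict:
    """Placeholder: prompt one persona-conditioned agent with the day's
    sample and return its reflective post plus its stated reasoning."""
    raise NotImplementedError("plug in your model provider here")

def run_daily_round(corpus: list[str], personas: list[Persona]) -> list[dict]:
    """One day of the experiment: every persona sees the same sample."""
    sample = random.sample(corpus, k=200)  # 200 randomly selected posts
    return [
        {"persona": p.name, "leaning": p.leaning, **ask_agent(p, sample)}
        for p in personas
    ]
```

Keeping both the generated post and the agent’s stated reasoning in each record is deliberate: the reasoning traces are what the experiment is really after.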
Stay curious and start exploring.