The Guardrails 16: Why Social Scientists (Especially Communication & Media Researchers) Should Care about the Coming of Agentic Attention
We need to touch it. Tinker with it. Argue with it. Build with it. Break it.
As someone who’s not a shopaholic, I dread every purchase decision. Browsing shopping sites, comparing prices, reading reviews: it all feels like a tedious literature review for a research project. Naturally, I tend to delegate this to my wife, who’s great at navigating shopping apps and maximizing coupons. Or better yet, I outsource the task to AI. And, as everyone seems to call it (and debate it) now, this is cognitive offloading.
This experience captures some of the most fundamental features of the digital economy we live in. My attention, as a human user, is limited. I’m multitasking with shifting priorities, and, like everyone else, I’m a fallible human being. In an economy where attention is a scarce commodity, whoever captures it wins: the narrative, the influence, the clout. Attention is currency. Attention is the path to social change. This is the core thesis of the attention economy, and on it we have built a robust body of scholarship that explains everything from viral tweets to collapsing nations.
This experience also reveals micro-level psychological processes underlying how our brains work. As predicted by the Heuristic-Systematic Processing Model and the Elaboration Likelihood Model, I mostly rely on System 1 thinking—gut feelings, mental shortcuts, and familiar cues. Scholars often debate the role of heuristics in the spread of misinformation, political polarization, and cyber vulnerability. Yet we rely on them to survive. And with Nudge Theory, we’ve learned that changing even minor environmental cues—like a menu layout or the order of checkboxes—can lead to large-scale behavioral shifts.
But the world shaped by constrained attention and reliance on heuristics is quietly shifting.
Users are flocking to AI mode in Google Search, leading to sharp traffic drops for many sites. Stack Overflow is seeing plummeting visits because developers now turn to AI chatbots to debug their code—or even vibe-code entire apps. Retail investors can now leave behind the costly Bloomberg Terminal in favor of Perplexity AI, which autonomously synthesizes information across sources to deliver deep analysis. AI agents are booking flights via experimental “agentic browsers.” Chinese firm Manus.AI spins up websites and slide decks in seconds with agentic workflows. And for us researchers? The “Deep Research” feature (available across major chatbots) can now turn a dreaded literature review into a 5-10 minute process with decent results (with caveats of course).
In all of these cases, it’s no longer our constrained attention or our human heuristics that shape information flows and decisions. It’s the behavior of autonomous systems: AI agents with their own capabilities, limits, and blind spots.
That’s why we might need to consider a new term: Agentic Attention Economy. In this new season, I’ll be working closely with my research agents to define, challenge, and refine this idea. I’ve asked them to be skeptics, not sycophants. To push back. Some of the questions I want to explore: Is this “agentic attention economy” even real? Is this just a flashy term my AI agents invented to flatter me? Do AI agents really have “unbounded” attention spans? (They don’t, but what are their limits?) Do heuristics that shape human attention, like social proof, familiarity, the availability heuristic, or affect, still matter when AI is doing the reading, synthesis, and decision-making? And most importantly: Does this even matter to social scientists who are not developing AI systems? Why should we care?
Why Social Scientists (Especially Communication & Media Researchers) Should Care
It’s tempting to treat AI behavior as “just a machine thing.” We study people, not code. Even if AI systems exhibit human-like reasoning, they don’t have consciousness or a soul. As a Christian, I’m especially wary of elevating soulless systems to the level of sentient beings or moral agents. But here’s why this does matter to us:
1. The Locus of Gatekeeping and Agenda-Setting Has Moved
For decades, we’ve studied how journalists, politicians, influencers, and even bots act as gatekeepers. But now AI systems, ChatGPT’s “Deep Research” tool included, are setting agendas too. When students ask for help researching political polarization, it’s the AI agent that selects, filters, and prioritizes the sources. This process is still emerging and far from mainstream, but it’s spreading fast. And even though we’ve long studied how humans act as agenda-setters, we now need to understand how AI systems shape information environments, and how they interact with human cognition.
2. Agentic Attention Resets Power, Not Just Information
If communication research is about power, then the rise of agentic attention is a paradigm shift. It won’t level the playing field—but it will redistribute attention and influence. Perplexity AI might challenge Bloomberg Terminal. But Google, with its AI-boosted search, could consolidate even more power by cutting off web traffic and ad revenue to publishers. In this new game, the winners and losers may shift and the rules are being rewritten. What shapes AI decisions? If not human heuristics, then what? Who controls these AI “nudges”—the prompts, the APIs, the filters? Are they becoming the new digital landlords?
3. Ambient Curation and the Rise of the Cultivation Machine
One of my AI research agents suggested that AI systems are “the ultimate cultivation machines.” I’m not sure I fully agree, but there’s something to it. Much of this AI-driven curation is ambient and invisible. Users aren’t always aware that their news summaries, search results, or general media content are AI-curated. But over time, this invisible exposure can shape beliefs, knowledge, and perception—just as Gerbner’s Cultivation Theory warned decades ago.
4. Communication Theories Still Matter—As Vocabularies
We shouldn’t force-fit theories like framing, agenda-setting, or moral foundations onto machines. But these theories still offer useful vocabularies to help us describe what’s happening. For instance, think of LLMs like Gemini Flash as “System 1” responders—fast, shallow, instant. Meanwhile, Gemini Pro or Claude Opus function more like “System 2”—thoughtful, reasoned, deliberative. But strangely, recent studies show that the more these models “think,” the worse some answers become. That’s where human psychology and “machine psychology” could start to collide.
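To ground the metaphor, here is a minimal sketch, assuming the google-generativeai Python package, a GOOGLE_API_KEY in the environment, and illustrative model names, that sends the same question to a fast model and a more deliberative one and compares latency and verbosity:

```python
import os
import time
import google.generativeai as genai

# Assumes the google-generativeai package and an API key in the environment.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

QUESTION = "In two sentences: how might AI-curated news summaries shift issue salience?"

for label, model_name in [
    ("System 1 (fast)", "gemini-1.5-flash"),      # illustrative model names;
    ("System 2 (deliberative)", "gemini-1.5-pro"),  # they change frequently
]:
    model = genai.GenerativeModel(model_name)
    start = time.time()
    response = model.generate_content(QUESTION)
    elapsed = time.time() - start
    # Latency and answer length are rough proxies for "fast" vs. "deliberative"
    print(f"{label}: {elapsed:.1f}s, {len(response.text)} characters")
    print(response.text.strip(), "\n")
```

Nothing in this toy comparison settles the System 1/System 2 question, but it shows how cheap it has become to poke at the distinction empirically.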
Here’s where I see future communication research going
AI agents will be contested for their roles as pseudo-social actors, anthropomorphic beings, and human substitutes
AI agents will be studied as a brand-new class of communicators who curate and produce information and participate in digital public spheres (many agents are already doing that!). At the same time, AI agents will be used as substitutes for human participants to understand how society works. Both directions open the door to bold and potentially wild applications of social science theories. At present, there’s growing interest in how AI agents model human behavior, simulate reasoning, and even participate in political persuasion. Large language models (LLMs), in particular, may serve as low-cost, accessible proxies for experimental tasks such as rapid A/B testing, simulated dialogue, or behavioral modeling. It remains to be seen what kinds of theoretical, ethical, or even theological debates this trend will spark.
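As a toy illustration of that last point, here is a minimal sketch of an LLM standing in as a proxy participant in a message-framing A/B test. The personas, framings, rating prompt, and model name are all illustrative assumptions (again using the google-generativeai package), not a validated design:

```python
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice

# Hypothetical personas and framings, invented for the sketch.
PERSONAS = [
    "a 62-year-old retired teacher who gets news mostly from cable TV",
    "a 24-year-old gig worker who gets news mostly from short-form video",
]
FRAMINGS = {
    "A (gain frame)": "Getting a yearly checkup protects your long-term health.",
    "B (loss frame)": "Skipping your yearly checkup puts your long-term health at risk.",
}

for persona in PERSONAS:
    for label, message in FRAMINGS.items():
        prompt = (
            f"Role-play {persona}. Read this health message:\n"
            f'"{message}"\n'
            "On a 1-7 scale, how persuasive is it to you personally? "
            "Reply with the number and one sentence of reasoning."
        )
        response = model.generate_content(prompt)
        print(f"[{persona} | {label}] {response.text.strip()}")
```

Whether these simulated respondents track real ones is precisely the kind of question communication researchers are positioned to test.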
Existing theories as vocabularies and frameworks, not predictions.
Communication theories were designed to explain human behavior, not machine outputs. But that doesn’t mean they’re obsolete. Even if a theory like framing or heuristic-systematic processing doesn’t “apply” to machines in a strict scientific sense, it can still help us make sense of what machines are doing. Think of it like metaphor or analogy: these theories offer interpretive scaffolding. They help us notice patterns, spot differences, or draw parallels between algorithmic logic and media power. Especially when it comes to understanding the influence of agentic powers, the theories give us language to ask better questions, not necessarily definitive answers.
Demystify AI systems by building, breaking, and bettering them.
This is where things get controversial. For decades, communication scholars have studied “the algorithm” as a black box—unseeable, untouchable, often governed by platforms we don’t have access to. And to be fair, that critique was valid. Algorithms were proprietary, and most of us couldn’t peer inside. But the LLM era changes that—at least partially.
Yes, LLMs are still black boxes in many ways. But they’re also equitably mysterious—even the engineering teams behind them don’t fully understand how they work. That levels the playing field a bit. And with the rise of open-source models, open APIs, and the coming of vibe coding, the barriers to experimentation are lower than ever. With the right motivation and a bit of training, communication researchers can now build, observe, and interrogate AI systems directly.
That’s what I’ve been doing. I’m building systems like Vibe Infoveillance, which deploys multiple AI agents, each with distinct personalities, risk profiles, and ethical values, to digest Reddit discussions and identify early signals in public opinion. I’m also prototyping another system where different AI agents select social media posts for reposting, and generate content that maximizes engagement based on their evolving understanding of a digital community.
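To give a sense of scale, here is the general pattern in miniature, not the actual Vibe Infoveillance code. The agent personas, the placeholder posts, and the model name are illustrative assumptions; a real pipeline would pull threads via the Reddit API and log the agents’ outputs for analysis:

```python
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice

# Hypothetical agent dispositions, invented for the sketch.
AGENTS = {
    "cautious public health analyst": "risk-averse; flags anything resembling a health scare",
    "skeptical economist": "dismissive of hype; looks for evidence of costly behavior change",
    "veteran community moderator": "attuned to tone shifts, brigading, and rumor cascades",
}

SAMPLE_POSTS = [  # placeholder text standing in for scraped Reddit threads
    "Anyone else's pharmacy out of the new allergy med? Third store this week.",
    "The local subreddit is suddenly full of posts blaming 'hoarders' for the shortage.",
]

for name, disposition in AGENTS.items():
    prompt = (
        f"You are an analyst agent: {name} ({disposition}).\n"
        "Read the posts below and answer in two sentences: do you see an early "
        "signal of shifting public opinion or emerging concern, and how confident are you?\n\n"
        + "\n".join(f"- {post}" for post in SAMPLE_POSTS)
    )
    response = model.generate_content(prompt)
    print(f"=== {name} ===\n{response.text.strip()}\n")
```

The distinct dispositions are the point of the design: they push the system to surface multiple readings of the same thread rather than one confident summary.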
All of these prototypes are designed to test this evolving idea of agentic attention. And importantly, these systems don’t require massive funding or months of dev time. Some of them take less than 20 minutes to build and run. That’s the point: they’re lightweight, flexible, and theory-driven, and they serve as a pathway to demystify AI systems.
If the starting thesis of this piece is that social scientists should care about this thing called agentic attention, then I’ve arrived at a stronger conclusion: we shouldn’t just care—we should engage. We need to touch it. Tinker with it. Argue with it. Build with it. Break it. And in the process, let our theories evolve alongside our practices.