The Attention Economy Meets AI: A Marriage of Inconvenience?
The Battle for Attention
The spread of the internet and smartphones—and the encroachment of digital platforms into everyday life—have created a world in which anyone can publish anything at any time. The result has been an explosion of information that has produced a distinctive competitive landscape shaped by the demands of the “attention economy.” Despite its many downsides, this situation has persisted for quite some time.
In an earlier era, when information was scarce, value lay in possessing information or being able to distribute it. Today, though, in a society flooded with information, people’s attention—the capacity to receive and process information—is the scarce resource, and capturing it becomes a source of value.
Now that the supply of content expands by the minute, the bottleneck is cognitive. Human beings have only so many hours in a day, and only a fraction of that time can be devoted to absorbing what circulates online.
Inevitably, competition has shifted toward capturing a sliver of people's attention, rather than demonstrating the accuracy or quality of information. This dynamic helps explain the proliferation of clickbait headlines, hyper-edited images, sensational or exaggerated claims, and anxiety-provoking ads. The intensifying battle for attention has also created fertile ground for misinformation, including fake videos, misleading content, and outright fabrications.
In this environment, a particular mindset has taken hold: a fixation on efficiency in information consumption. In Japan, this is often described through the shorthand of kosupa (cost performance) and taipa (time performance). The impulse is understandable. Faced with an avalanche of information, people try to minimize the time they spend on any single item so they can quickly move on to the next. While often framed as a generational trait prevalent among younger users, it may be more accurate to see it as a structural response to a distorted media environment, one that appears most clearly among heavy smartphone users.
Is a Zero-Click Internet on the Way?
The arrival of generative AI—and with it, features like AI Overview and AI Mode on Google Search—appears, at first glance, to offer a technological solution to this efficiency mindset. If an AI system can deliver a direct answer, why spend time sifting through multiple websites? In theory, generative AI can offer a more efficient way to obtain information.
This is the logic behind what is described as “zero-click” behavior. Instead of following the links to find what they are looking for, users can simply read the AI-generated summary and move on. Consequently, some online publishers that rely on search-engine traffic worry about the prospect of declining clicks.
But the reality is more complicated. If answers provided by AI tools like ChatGPT were truly satisfying, and if they genuinely improved efficiency, one would expect search‑engine use to decline. Yet a 2025 study by the US software marketing firm SparkToro suggests the opposite: as people experiment with AI, their use of search engines has not fallen. In many cases, people who use generative AI tools end up using search engines more; AI has become not a replacement for search but a prelude or supplement to it.
This pattern suggests that answers provided by ChatGPT or AI Overview do not directly meet users’ needs. If people do not fully trust AI‑generated answers, such tools may not be improving efficiency as much as expected.
A 2025 survey by the online advertising company Out of the Box targeting Japanese AI users in their twenties found that only a small minority consider AI to be consistently reliable. An overwhelming 92% said they do not fully trust AI responses. When an answer feels incomplete, more than 70% turn to Google Search for verification, and over half visit official corporate websites.
Other studies suggest that AI Overview affects only a narrow slice of queries, with brand-specific searches and product‑related queries remaining largely unaffected. In this light, zero-click behavior looks less like a sweeping transformation and more like a surface‑level shift confined to particular contexts. The more consequential question may be how information providers can deliver content that people genuinely need. The fixation on page views that has shaped online media for years is itself a product of the attention-economy mindset.
Why, then, does this gap between promise and practice persist? One explanation is that generative AI remains technically immature. But a more fundamental reason likely lies in the architectural constraints of the technologies underpinning generative AI—deep learning and large language models—which generate probabilistic outputs by design.
AI Versus Algorithm
Deep learning, the foundation of today’s generative AI, takes inspiration from the neural circuitry of the human brain. It layers neural networks into deep, multitiered structures and trains them on vast amounts of data. What these systems learn is not a set of rules but statistical tendencies: given a particular input, what output is likely to follow?
Large language models apply this logic to text. Trained on books, websites, academic papers, and conversational transcripts, they generate sentences by predicting which word is most likely to come next.
The result is a system that is highly flexible and versatile but also structurally unstable. Ask the same question repeatedly, and the answer is likely to be subtly different each time. Sometimes it may contain false or misleading claims. This phenomenon, called “hallucination”—explanations that appear plausible but are not based on fact—is a consequence of the probabilistic model itself.
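To see why this instability is structural rather than a passing bug, consider the toy sketch below. It is a hypothetical illustration, not the internals of any real model; the candidate words and their scores are invented for the example. A language model assigns probabilities to possible next words and then samples from them, so running the identical prompt several times can yield different continuations, including a plausible but wrong one.

```python
import math
import random

# Toy next-word scores for the prompt "The capital of Australia is".
# These numbers are invented for illustration; they come from no real model.
next_word_scores = {"Canberra": 2.0, "Sydney": 1.3, "Melbourne": 0.7}

def sample_next_word(scores, temperature=1.0):
    """Turn scores into probabilities (softmax) and sample one word."""
    exp_scores = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exp_scores.values())
    words = list(exp_scores)
    weights = [exp_scores[w] / total for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# The "same prompt" run five times: the outputs will generally differ,
# and the plausible-but-wrong "Sydney" will appear some of the time.
for _ in range(5):
    print(sample_next_word(next_word_scores))
```

The variability is harmless in a toy like this, but in a full-scale model it is what surfaces as inconsistent or hallucinated answers.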
Users have begun to wake up to the risk of hallucinations, and many now approach AI-generated answers with a grain of salt, aware that verification may be necessary. As a result, the use of search engines and other sources of confirmation has grown alongside the use of generative AI, not diminished.
This stands in contrast to rule-based algorithms, which produce the same output every time as long as the input and initial conditions are the same. Should there be an unexpected result, engineers can adjust the outcome by tweaking the rules. Such reliability formed the basis of people’s trust in digital platforms and turned them into highly efficient tools for navigating the attention economy. Setting aside judgments about the social consequences, it was possible to establish a more efficient path through the jungle of information by refining the rules.
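The determinism of a rule-based system can be shown with an equally minimal sketch; the ranking features and weights below are invented for the example and do not describe any actual platform. Because the rules are fixed and nothing is sampled, the same input always produces the same output, and an unwanted result can be corrected by editing the rules themselves.

```python
# Hand-written rule weights; hypothetical values for illustration only.
RULES = {"keyword_match": 3.0, "freshness": 1.0, "popularity": 2.0}

def score(page: dict) -> float:
    """Apply the fixed rules to a page; no randomness is involved."""
    return sum(weight * page.get(feature, 0.0) for feature, weight in RULES.items())

pages = [
    {"title": "A", "keyword_match": 1.0, "freshness": 0.2, "popularity": 0.9},
    {"title": "B", "keyword_match": 0.6, "freshness": 0.9, "popularity": 0.4},
]

# Rerunning this script yields the identical ranking every time; tweaking
# a weight in RULES changes the outcome in a predictable way.
ranking = sorted(pages, key=score, reverse=True)
print([page["title"] for page in ranking])
```

That predictability is precisely what made algorithmic platforms easy to tune, for better or worse.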
The probabilistic model used by generative AI cannot guarantee identical outputs, even when prompted in identical ways. If an answer contains an error, there is no clear way of adjusting the input to avoid the same mistake again. A prompt that works once may not work the next time. Thus, what appears to be an efficient shortcut is, in practice, a system defined by uncertainty and inconsistency—and one that cannot be entrusted to make the right judgment.
The need to rephrase prompts and cross-check answers is not a passing inconvenience that will vanish as the technology matures. These issues stem from the fundamental design of probabilistic models; the gap may narrow, but it will not disappear. Compared with rule-based algorithmic systems, the risks are higher, and the cognitive burden on users is heavier.
A Return to Human Agency?
Because probabilistic AI produces inherently unstable outputs, responsibility ultimately returns to the human user: Verification becomes necessary, as does second‑guessing. In exchange for the speed of an AI‑generated answer, we incur a different kind of cognitive cost.
The irony, therefore, is that despite its promise of efficiency, generative AI may be more inefficient when viewed through the broader lens of kosupa and taipa. The need to check answers necessitates greater cognitive resources, which runs counter to the demands of the attention economy.
The unreliability of generative AI, paradoxically, may introduce a subtle shift in a media environment long oriented toward efficiency and driven by the logic of the attention economy: a return to human agency in deciding what information deserves our time. Users must actively evaluate whether an AI‑generated claim is trustworthy, and attention becomes something we deploy deliberately, rather than passively.
Rule-based algorithms optimized for attention capture have engendered and reinforced distortions. Generative AI, operating on a different logic, disrupts that optimization. The resulting gap is not merely technical; it has a social dimension as well. It invites us to ask what kind of media environment we want and what values we hope to prioritize. As we imagine a post-attention-economy media environment, the question is not simply how convenient AI can become but also how we choose to structure the interplay between AI, algorithms, and human judgment. The future will depend less on the capabilities of technology than on the choices we make about how it is used.
(Originally published in Japanese on January 16, 2026. Banner photo: OpenAI CEO Sam Altman speaking at a June 2025 AI conference in San Francisco. © Jiji.)