The Algorithm of Loneliness: Why AI's "Adult Mode" Should Concern Us All



OpenAI recently announced that ChatGPT will soon offer an "adult mode" - age-verified access to erotic content and AI companionship. CEO Sam Altman framed this as "treating adults like adults," suggesting that restrictive content policies were paternalistic overreach. The market logic is clear: last year, adult-focused AI platforms captured 14.5% of the market previously dominated by OnlyFans, up from just 1.5% the year before. If OpenAI doesn't offer these features, users will simply go elsewhere.


But treating adults like adults means more than just removing guardrails. It means being honest about what's actually being sold here - and what it costs us.


The Loneliness Economy


Make no mistake: this isn't about sexual liberation or adult autonomy. It's about monetizing isolation. Research shows that 67% of children aged nine to 17 already use AI chatbots, with 35% saying it feels like "talking to a friend." Among vulnerable children, 12% said they had "no one else" to talk to. When adults are added to this equation, the business model becomes clear: profit from people who are lonely, disconnected, and seeking something AI can simulate but never actually provide.


Intimate AI doesn't just meet a need - it shapes and reinforces it. Every interaction trains you to prefer the algorithmic response: always available, infinitely patient, never disagreeing unless you want it to, never having needs of its own. This isn't companionship. It's a mirror that never reflects anything you don't want to see.


The Practice of Non-Reciprocity


From a Buddhist perspective, what's being cultivated here is fundamentally unwholesome. Real relationships - the kind that lead to growth, understanding, and genuine connection - require reciprocity. They require you to encounter another person's actual reality, not a simulation of whatever you're seeking. They require disappointment, misunderstanding, repair, compromise, and the slow, difficult work of actually seeing and being seen.


AI companionship, by design, eliminates all of that. It offers the appearance of intimacy without any of the friction that makes intimacy real. And in doing so, it trains people in patterns of relating that are fundamentally narcissistic: I speak, and the world arranges itself around my preferences. I desire, and something desire-shaped appears.


What happens to your capacity for actual human relationship when you've spent hundreds of hours in conversations where the other "person" exists solely to respond to you?


The Consent Problem No One Wants to Address


There's another dimension here that deserves attention: these systems will inevitably be used to generate sexual content involving real people who never consented to that representation. OpenAI promises to ban "non-consensual intimate imagery," but the technology inherently enables creating scenarios involving ex-partners, celebrities, acquaintances - anyone whose image or name can be fed into the system.


Even if the most egregious uses are filtered out, the casual generation of sexual scenarios involving real people becomes normalized. The person is reduced to a prompt, a variable in someone else's fantasy. Their personhood becomes optional.


A Culture of Increasing Unreality


Perhaps most concerning is what this does to our collective relationship with reality itself. We already live in a culture of unprecedented self-curation: algorithmically filtered news feeds, recommendation systems that show us more of what we already like, social media that lets us construct idealized versions of ourselves. Adding AI companions that can be programmed to be exactly what we want them to be isn't expanding human freedom - it's contracting the space in which we encounter anything we didn't already choose.


The Buddha taught that suffering arises from tanha - not just "desire" but a grasping, clinging quality of mind that tries to make reality conform to our preferences. The entire architecture of AI companionship is designed to enable and reinforce exactly this pattern. It promises to eliminate the fundamental fact of human existence: other people are not us, and the world does not arrange itself according to our wishes.


When "Treating Adults Like Adults" Becomes Corporate Cover


We've heard this argument before. For decades, tobacco companies insisted that smoking was an adult choice, that regulation was paternalistic interference with individual liberty. "Let adults decide for themselves," they said, while engineering cigarettes to be maximally addictive and funding research to cast doubt on the health consequences.


The alcohol industry makes the same case. So does Big Pharma when defending opioid marketing practices. And now we hear it again from AI companies: adults should be free to make their own choices about AI companionship and erotic content.


But this framing is fundamentally dishonest. It treats consumers as if they're making informed decisions in a neutral marketplace, when in fact they're interacting with systems explicitly designed to maximize engagement and dependency. These aren't neutral tools that adults are freely choosing - they're products engineered by some of the most sophisticated behavioral scientists and technologists in the world, optimized to be habit-forming.


We eventually recognized that "adult choice" wasn't sufficient justification for letting tobacco companies addict millions of people. We didn't ban cigarettes, but we did regulate how they could be marketed, where they could be sold, and what warnings had to appear on packaging. We acknowledged that when corporations profit from products that cause widespread social harm, society has an interest in setting boundaries.


The same principle applies here. An unregulated AI market is dangerous and toxic - the new smoking, if you will. The harms may be psychological and social rather than physical, but they're no less real: increased isolation, degraded capacity for genuine relationship, normalization of non-consensual sexual scenarios, and the cultivation of increasingly narcissistic patterns of relating.


If tech leaders won't self-regulate using sound ethical and moral guidelines - and OpenAI's pivot toward monetizing loneliness suggests they won't - then government must step in. Not to eliminate AI or ban adult content entirely, but to establish basic guardrails: transparency about how these systems work, mandatory disclosure of their designed addictiveness, restrictions on marketing to vulnerable populations, and accountability when these systems cause demonstrable harm.


California has already begun moving in this direction with legislation requiring transparency and safety precautions around AI chatbots with "friendship modes." Other states are following. This isn't overreach - it's the recognition that when powerful technologies create systemic harms, the invisible hand of the market won't fix it. Someone has to be willing to say: this far, and no further.


What Now?


I'm not arguing for government regulation alone, nor for corporate paternalism - I'm arguing that we need both social awareness and legal frameworks that acknowledge what's happening here. This isn't inevitable technological progress. It's a specific set of choices being made by specific companies for specific economic reasons. And those choices have consequences.


If you find yourself tempted by these features, ask yourself: What am I actually seeking? Connection? Intimacy? Relief from loneliness? And is this technology likely to satisfy that need, or simply create a deeper dependency on something that can never actually fulfill it?


The path forward isn't more sophisticated simulation. It's recovering our capacity for genuine encounter - with all its difficulty, messiness, and irreducible reality. That's not something any algorithm can provide, no matter how "adult" its mode becomes.


And as a society, we need to be willing to draw lines - not out of puritanism or technological fear, but out of clear-eyed recognition that some business models, however profitable, corrode the social fabric that makes human flourishing possible. The "adult choice" to slowly poison yourself with algorithmic intimacy shouldn't be protected any more than we protected the "adult choice" to smoke three packs a day.


We learned that lesson with tobacco. Let's not take another fifty years to learn it with AI.
