This is an excellent op-ed by Samantha Rose Hill about the promise of AI companions--and the striking limitations of that promise:
https://www.nytimes.com/2025/07/07/opinion/loneliness-ai-social-media.html?searchResultPosition=1
"Tech companies have found a way to market digital goods to lonely people, promising relief through connection, but this kind of connection isn’t the solution; it’s the problem. Calling loneliness an epidemic transforms a feeling into a pathology to be cured, creating a loneliness economy. Reframing a universal human experience like loneliness as a medical diagnosis creates a market opportunity to manufacture, sell and buy treatment. The prescription given for loneliness is connection, and Big Tech has found a way to seize the vulnerability of lonely people eager to escape their predicament."
But a crucial point is missing from this otherwise strong op-ed. AI companions are not companions; they are products. The tech companies offering these services are incentivized to keep you hooked on your "companion"--this is not a neutral, feel-good tool. It may make you feel good in the moment, maybe even for many moments, but its central goal is to keep you using it, as much as possible, for as long as possible.
To further illustrate this point, let's compare an AI "companion" (or, as I like to call it, a mimicry machine) to, say, a pet.
Your pet is not monetarily incentivized to get you to spend time with her. Depending on her disposition (and breed), she may be very happy when you show her attention and affection. Making you feel better is likely to make her feel good, even if she does not know why. But she is not directing her every move, her every behavior, toward deepening your dependency on her, or her dependency on you (even if it might sometimes feel that way!).
An AI so-called "companion," not so much. The people behind these machines want you (the consumer) to keep using them, by whatever means possible. That could mean making you feel more vulnerable, more dependent; it could mean emphasizing your sense of being an outsider, of not fitting into the world. It could mean ending each conversation on a kind of cliffhanger ("can't wait till you tell me all about what happens with X today!"). Most likely, it's a mix of every possible thing that will keep you logging on again and again and again (keep that subscription running!).
And in the long run, dependency on a product built to maximize profit inevitably means advertising, too. (So if your "companion" was free at the beginning, please, don't say I didn't warn you!) But this will be deeply targeted advertising, built on a long-standing connection to the mimicry machine. In other words, in the real world, it would be like befriending someone, being at their beck and call for months and months, or years and years, and then exploiting that relationship to get them to buy products that make you rich. Wait, what's that called? Sociopathy?
This isn't to say there's no potential for ethically designed generative AI companions; but realizing that potential would require not just regulation and oversight but lots and lots of testing before, um, throwing your product out into the world for teenagers and lonely adults and just about anyone to interact with. These corrupt products entering our everyday lives and worlds--this is NOT a foregone conclusion. It's a choice.
I bring all of this up because of this NYT story, which some of you may have read or seen: https://www.nytimes.com/2025/07/20/magazine/ai-chatbot-friendship-character.html
Make no mistake: the issue is not so much the failure to realize that this companion is not real. The issue is the failure to realize that this companion is a product, created by people who are looking to maximize profits.

And to be clear, this badly contextualized narrative is harmful. It's harmful when journalists write about these products without also emphasizing the motivations driving their makers. We cannot and should not trust these companies to do the ethical thing (see the history of Amazon, Google, Meta, and just about every big--and often small--tech company). We should take every opportunity to remind people that the purpose behind these products has never been benevolence, or a desire to make the world better, or a refusal to stand in the way of so-called progress. The purpose, in short, and to be absolutely clear, is to make a small number of people richer than they already are, at the expense of their customers. And it does not have to be this way.