
Eileen Gow writes:
Even if you don’t have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: on platforms like Character.AI, Replika, or Meta AI, people can create personalized chatbots to pose as the ideal friend, romantic partner, parent, therapist, or any other persona they can dream up.
It’s wild how easily people say these relationships can develop. And more than one study has found that the more conversational and human-like an AI chatbot is, the more likely we are to trust it and be impressed by it. This can be dangerous, and chatbots have been accused of pushing some people toward harmful behavior, including, in some extreme cases, suicide.
Some state governments are taking notice and starting to regulate companion AI. New York requires AI companion companies to create safeguards and to report expressions of suicidal ideation, and last month California passed a more detailed bill requiring AI companion companies to protect children and other vulnerable groups.
But one notable area that these laws fail to address is user privacy. That’s despite the fact that AI companions, even more than other types of generative AI, depend on people sharing deeply personal information, from their daily routines and inner thoughts to questions they may not feel comfortable asking real people.
After all, the more users tell their AI companions, the better the bots get at keeping them engaged. In an op-ed published last year, MIT researchers Robert Mahari and Pat Pataranutaporn called this “addictive intelligence,” warning that developers of AI companions make “intentional design choices . . . to maximize user engagement.”