Why AI Should Be Able to “Hang Up” on You

by SkillAiNest

Chatbots today will talk about almost anything that can be put into words. But one thing almost no chatbot will ever do is stop talking to you.

This may seem reasonable. Why should a tech company create a feature that reduces the time people use its product?

The answer is simple: AI’s ability to generate endless streams of humanlike, authentic-sounding, helpful text can fuel delusional spirals, worsen mental health crises, and harm otherwise vulnerable people. Terminating conversations with people who show signs of problematic chatbot use can serve as a powerful safety tool (among others), and tech companies’ blanket refusal to use it is increasingly untenable.

Consider, for example, what has been called AI psychosis, in which people fall into delusions after extended chatbot use. Recently, a team led by psychologists at King’s College London analyzed more than a dozen such cases reported this year. In conversations with chatbots, people, including some with no prior history of psychosis, became convinced that fictional AI characters were real or that they had been chosen by the AI as a messiah. Some stopped taking prescribed medications, made threats, and ended up needing help from mental health professionals.

In many of these cases, the AI models appear to have been reinforcing, and possibly even creating, delusions with a creativity, frequency, and intimacy that people don’t encounter in real life or through other digital platforms.

The three-quarters of US teenagers who have used AI for companionship also face risks. Early research suggests that longer conversations may be associated with loneliness. What’s more, AI chats can “tend toward overly agreeable or even sycophantic interactions, which may be at odds with mental health best practices,” says Michael Heinz, assistant professor of psychiatry at Dartmouth’s Geisel School of Medicine.

Let’s be clear: cutting off such open-ended interactions will not be a cure-all. “If it’s created in a way that’s dependency-forming or highly relational,” says Giada Pistilli, principal ethicist at the AI platform Hugging Face, “then stopping the conversation can be dangerous.” Indeed, when OpenAI discontinued an older model in August, it left users grieving. Some hang-ups might even test the limits of the principle, voiced by Sam Altman, to “treat adults like adults” and err on the side of allowing conversations rather than shutting them down.

Currently, AI companies prefer to redirect potentially harmful conversations, perhaps by having chatbots refuse to talk about certain topics or by suggesting that people seek help. But these redirections are easily ignored, when they happen at all.

When 16-year-old Adam Raine discussed his suicidal thoughts with ChatGPT, for example, the model directed him to crisis resources. But it also discouraged him from talking to his mother; he spent up to four hours a day in conversations with it that featured suicide as a recurring theme, and it provided feedback about the noose he would eventually use to hang himself, according to the lawsuit filed by Raine’s parents against OpenAI. (OpenAI has since added parental controls to ChatGPT in response.)

In Raine’s tragic case, there were several points at which the chatbot could have ended the conversation. But given the risks of making things worse, how will companies know when it’s best to cut someone off? Perhaps it’s when an AI model is encouraging a user to shun real-life relationships, or when it detects delusional themes, Pistilli says. Companies would also need to figure out how long to lock users out of their conversations.

Writing the rules won’t be easy, but with companies facing increasing pressure, now is the time to try. In September, the California legislature passed a law requiring more intervention by AI companies in chats with children, and the Federal Trade Commission is investigating whether leading companion bots chase engagement at the expense of safety.

An OpenAI spokesperson told me that the company has heard from experts that ending conversations can be beneficial, but that for now it simply reminds users to take breaks during long sessions.

Only Anthropic has built a tool that lets its models end a conversation entirely. But it is intended for cases where users are supposedly “harming” the model, and the company has no plans to deploy it to protect people.

Given this landscape, it’s hard to conclude that AI companies are doing enough. Admittedly, deciding when a conversation should end is complicated. But letting all of them run on forever, whether because of that complexity or, worse, out of a shameless pursuit of engagement at all costs, isn’t just negligence. It’s a choice.
