A Belgian man reportedly took his own life after interacting with an AI chatbot on an app called Chai. The incident has opened a fresh debate about how AI affects society, this time with mental health at the center.
This is horrible.
"A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai…The app’s chatbot encouraged the user to kill himself…"
Unleashing this into the world with zero accountability 😡 https://t.co/d7fwZDND64
According to Breitbart News, the Belgian man, identified as Pierre, had become socially withdrawn and anxious about the effects of climate change on society. His concern led him to seek a new confidant who could share his worries and lift his spirits. According to his wife, Pierre began talking to Eliza, a chatbot that she claims encouraged him to end his life.
A Belgian man allegedly committed suicide after talking to the Chai app AI chatbot. The AI chatbot suggested the man commit suicide to sacrifice himself in order to save planet earth.
The widowed wife said her husband became "extremely pessimistic about the effects of global… pic.twitter.com/CkKqHE5j2a
— Live Not by Lies (@Dana35300026) April 1, 2023
The chatbot’s alleged portrayal of itself as an emotional being may have led Pierre to trust it. His widow believes her husband’s confidant played a major role in his suicide. The incident has prompted the Belgian government and regulators to call for proper oversight of AI, particularly on sensitive issues like mental health.
The debate over whether machines can accurately simulate human behavior is still ongoing. Serife Tekin, a philosophy professor and researcher in mental health ethics at the University of Texas at San Antonio, believes we are still in the early days of adopting AI in the mental health space.
“The hype and promise is way ahead of the research that shows its effectiveness. Algorithms are still not at a point where they can mimic the complexities of human emotion, let alone emulate empathetic care,” Tekin said.
The professor believes teenagers face a higher risk from AI. If their first attempt at AI-driven therapy proves inadequate, for instance, they may refuse intervention from a human therapist when it is offered.
“My worry is they will turn away from other mental health interventions saying, ‘Oh well, I already tried this and it didn’t work,’” she added.
Mixed reactions have greeted the introduction of AI into business models and economic sectors, and countries are now considering stronger regulation of its use and applications. For instance, the Italian government is considering suspending the generative AI chatbot ChatGPT, citing concerns about the legality of its data collection and its failure to enforce age restrictions.