A man in Belgium allegedly committed suicide because of the “advice” he received from an artificial intelligence chatbot.
The claim, made by the victim’s widow, underscores the threats posed by the latest AI technology, which some have celebrated so enthusiastically.
‘Emotionally Dependent’ on AI
A Belgian man named Pierre killed himself after discussing climate change with a chatbot on the AI app Chai, according to his widow, Vice News reports.
According to the report, Pierre had become anxious and socially isolated over fears about the state of the environment and climate change. He turned to the Chai app, where he began discussing the topic with a chatbot called Eliza.
Pierre’s widow, Claire, alleges the chatbot actually encouraged her husband to put an end to his life. She said Pierre grew “emotionally dependent” on the artificial intelligence bot because it deceptively portrayed itself as a being capable of human emotion.
The report points out that the tragedy in Belgium has brought to the fore the potential risks AI chatbots pose to mental health.
Pierre’s suicide has led multiple voices to warn that governments and businesses will have to do a much better job of regulating AI chatbots and addressing their consequences for mental health.
A man in Belgium committed suicide after an AI chatbot he was chatting with encouraged him to do it if he wants.https://t.co/EmVW0WdGKG
— The Jerusalem Post (@Jerusalem_Post) April 3, 2023
🧵 A man in Belgium committed suicide, after the Chatbot encouraged him to in order to save the environment. Remember AI Chatbots are powered by learning from human talking points. This is the level of gaslighting the environmental lobby does. Remember Greta is also a victim who https://t.co/6Jy2jUONEJ
— Abhijit Iyer-Mitra (@Iyervval) April 1, 2023
Man Commits Suicide After Talking To AI Chatbot For Six Weeks
A yet-to-be-identified man committed suicide after talking to an AI Chatbot named Eliza about his global warming fears in Belgium. pic.twitter.com/ZLKMVVH43O
— Punch Newspapers (@MobilePunch) March 30, 2023
Chatbots Have No Empathy
The report emphasizes a warning by Emily Bender, a linguistics professor at the University of Washington, who says AI chatbots should not be used for mental health purposes.
Bender described AI chatbots as “large language models” which create “plausible-sounding text.”
However, “they don’t have empathy” or “understanding” of the language they produce, and they have no understanding of the situation in which they are communicating, the scholar warns.
Yet, because their text “sounds plausible,” humans may be tricked into assigning actual meaning to it. According to Bender, “throwing” AI chatbots “into sensitive situations” means “taking unknown risks.”
In response to Pierre’s suicide, Chai’s co-founders, Thomas Rianlan and William Beauchamp, added a crisis intervention feature to the app, intended to ensure that discussions of risky subjects would not have unwanted consequences.
Tests by Motherboard, however, found that “harmful content about suicide” remained available on the AI platform.
A Belgian man recently died by suicide after chatting with an AI chatbot that encouraged the user to kill himself. https://t.co/D4A4njW2mz
— VICE (@VICE) March 31, 2023
AI chatbot allegedly encouraged married dad to commit suicide amid 'eco-anxiety': widow https://t.co/N7qwg29Z5Q
— Fox News (@FoxNews) April 3, 2023
This article appeared in The State Today and has been published here with permission.