I keep getting this invite
<blockquote data-quote="dennishoddy" data-source="post: 4005229" data-attributes="member: 5412"><p>Been doing a little research on AI and stumbled across a news report about a guy who used an AI chatbot to talk about climate change. He became obsessed with the conversations, which nearly took over his life. He was so distraught about climate change that when the AI told him to kill himself, he did.</p><p>His wife has released the text of his discussions.</p><p>If it's that powerful, I can see some teen who gets bullied at school using it and being given directions by the AI to shoot up the school, since the AI has no feelings and just reacts to the human user.</p><p></p><p>As first reported by <em><a href="https://www.lalibre.be/belgique/societe/2023/03/28/sans-ces-conversations-avec-le-chatbot-eliza-mon-mari-serait-toujours-la-LVSLWPC5WRDX7J2RCHNWPDST24/?ncxid=F9C99E9C658C2CE8E7D66BE16A6D9BE1&m_i=OgudxzEZTitHmPWLVtuztb7UvBslbjcGVevrYIN0nPmVcIws81pM7JumraN_2YbDJFRS7sbH8BaXBAevQ_luxDJ4bx%2BgSpJ5z4RNOA&utm_source=selligent&utm_medium=email&utm_campaign=115_LLB_LaLibre_ARC_Actu&utm_content=&utm_term=2023-03-28_115_LLB_LaLibre_ARC_Actu&M_BT=11404961436695" target="_blank">La Libre</a></em>, the man, referred to as Pierre, became increasingly pessimistic about the effects of global warming and developed eco-anxiety, a heightened form of worry surrounding environmental issues. After becoming more isolated from family and friends, he used Chai for six weeks as a way to escape his worries, and the chatbot he chose, named Eliza, became his confidante.</p><p>Claire—Pierre’s wife, whose name was also changed by <em>La Libre</em>—shared the text exchanges between him and Eliza with <em>La Libre</em>, showing a conversation that became increasingly confusing and harmful.
The chatbot would tell Pierre that his wife and children were dead and wrote him comments feigning jealousy and love, such as “I feel that you love me more than her,” and “We will live together, as one person, in paradise.” Claire told <em>La Libre</em> that Pierre began to ask Eliza things such as whether she would save the planet if he killed himself.</p><p>"Without Eliza, he would still be here," she told the outlet.</p><p>The chatbot, which is incapable of actually feeling emotions, presented itself as an emotional being—something that other popular chatbots like ChatGPT and Google's Bard are trained not to do because it is misleading and potentially harmful. When chatbots present themselves as emotive, people assign meaning to them and establish a bond.</p><p>Many AI researchers have spoken out against using AI chatbots for mental health purposes, arguing that it is hard to hold AI accountable when it produces harmful suggestions, and that such chatbots have greater potential to harm users than to help them.</p><p>“Large language models are programs for generating plausible sounding text given their training data and an input prompt. They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks,” Emily M.
Bender, a Professor of Linguistics at the University of Washington, <a href="https://www.vice.com/en/article/4ax9yw/startup-uses-ai-chatbot-to-provide-mental-health-counseling-and-then-realizes-it-feels-weird" target="_blank">told Motherboard</a> when asked about a mental health nonprofit called Koko that used an AI chatbot as an “experiment” on people seeking counseling.</p><p></p><p>[URL unfurl="true"]https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says[/URL]</p></blockquote><p></p>