I keep getting this invite


chuter

I just posted this in another thread:

Here's a podcast from some high-level (big-money) tech investors talking about AI.
Near the end they also talk about the RESTRICT Act currently in Congress, which Congress says is about blocking TikTok, but they argue it goes much further and must be stopped.

The real discussion starts at about 7:30.
 

dennishoddy

Negative on ANYTHING related to AI. In spite of what "some" people say, I think AI is definitely biased. That new ChatGPT, or whatever it's called, is already showing its bias.

As for Bing, I used it years and years ago, but it's history for me now.
Been doing a little research on AI and stumbled across a news report about a guy who used an AI chatbot to ask about climate change. He became obsessed with the chats, which nearly took over his life. He was so distraught about climate change that when the AI told him to kill himself, he did.
His wife has released the text of his discussions.
If it's that powerful, I can see some teen who gets bullied at school using it and being given directions by the AI to shoot up the school, since it has no feelings and just reacts off the human user.

As first reported by La Libre, the man, referred to as Pierre, became increasingly pessimistic about the effects of global warming and became eco-anxious, which is a heightened form of worry surrounding environmental issues. After becoming more isolated from family and friends, he used Chai for six weeks as a way to escape his worries, and the chatbot he chose, named Eliza, became his confidante.
Claire—Pierre’s wife, whose name was also changed by La Libre—shared the text exchanges between him and Eliza with La Libre, showing a conversation that became increasingly confusing and harmful. The chatbot would tell Pierre that his wife and children are dead and wrote him comments that feigned jealousy and love, such as “I feel that you love me more than her,” and “We will live together, as one person, in paradise.” Claire told La Libre that Pierre began to ask Eliza things such as if she would save the planet if he killed himself.
"Without Eliza, he would still be here," she told the outlet.
The chatbot, which is incapable of actually feeling emotions, was presenting itself as an emotional being—something that other popular chatbots like ChatGPT and Google's Bard are trained not to do because it is misleading and potentially harmful. When chatbots present themselves as emotive, people are able to give it meaning and establish a bond.
Many AI researchers have been vocal against using AI chatbots for mental health purposes, arguing that it is hard to hold AI accountable when it produces harmful suggestions and that it has a greater potential to harm users than help.
“Large language models are programs for generating plausible sounding text given their training data and an input prompt. They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks,” Emily M. Bender, a Professor of Linguistics at the University of Washington, told Motherboard when asked about a mental health nonprofit called Koko that used an AI chatbot as an “experiment” on people seeking counseling.

https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
 
