"But, I have caught it making errors, especially in math."
Seems strange. Those old TI hand held calculators may come in handy and more valuable.
"So the guys who invented and are developing AI are wrong about their warnings?"

AI is the new buzzword in marketing. Everything is AI: artificial intelligence will make you rich, AI will reduce your workload, etc., etc. It's all crap, really. What semi-legitimate AI we do have is still very much limited by its design, and by the costs associated with that. For example:
ChatGPT is technically artificial intelligence; however, it is not "intelligent" in the same way that you and I are intelligent. It is not sentient. It is, however, a program that has been exhaustively trained to string together the correct words to describe a very expansive data set. It works well, and has helped me recently with some stuff in school, but it has its shortcomings.
For example, I use it to help generate outlines for my research. I can tell ChatGPT to "give me a timeline on the inventions of John Moses Browning" to which I receive:
[Attachment: ChatGPT's timeline of John Moses Browning's inventions]
Looks accurate, and likely is. But I have caught it making errors, especially in math. Still, what I get is a solid basis for researching a presentation or paper, and some basic points that I can expound upon. For math, I can give it a problem and ask for the steps and theory of how to solve it, and it breaks the problem down into easy-to-follow steps, while I do the math myself. Easier and better than asking my unhelpful professors for help.
Is AI gonna take over the world? I doubt it. At the end of the day, it's simply a dictation machine for most people. Is it hazardous for those of us who don't know how to differentiate reality from the digital world, or for those who don't want to take responsibility for making bad moral decisions (like AI-controlled weapons)?
Fun bit of info: AI is expensive to implement. ChatGPT, a relatively simple AI, was "trained" by harnessing the power of a data center. GPT-3 cost just a little under $4.6 million to train. Think about that next time someone says their $5 phone app has AI and will revolutionize your life. The 10,000 Nvidia cards used to train the program's language model cost roughly $15,000 apiece in 2020. The newest-generation cards run anywhere from $20k to $50k from what I have seen.
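As a quick sanity check on those numbers, the hardware bill alone for a cluster like that works out as follows. This is a back-of-envelope sketch using the post's own rough figures (10,000 cards at ~$15,000 apiece), not official OpenAI or Nvidia numbers:

```python
# Back-of-envelope hardware cost for the training cluster described above.
# Figures are the rough estimates from the post, not official numbers.
num_gpus = 10_000        # approximate number of cards used for training
price_per_gpu = 15_000   # USD, approximate 2020 price per card

hardware_cost = num_gpus * price_per_gpu
print(f"${hardware_cost:,}")  # → $150,000,000
```

So the GPUs alone run on the order of $150 million, before power, cooling, networking, or engineers, and dwarfing the ~$4.6 million compute estimate for a single training run. Which is the point: nobody's $5 phone app is training its own model.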
"But, I have caught it making errors, especially in math."

Considering that computers have literally been made to do math from their inception, I highly doubt the program is actually getting the problems wrong. I think it's more of a cheating thing: if you're just relying on the AI to do your homework for you, either you'll get some answers wrong, or your teacher will be able to detect that you're cheating with it.
"Considering that computers have literally been made to do math from their inception, I highly doubt it is because the program is actually getting the problem wrong. I think it's more of a cheating thing, where if you're just relying upon the AI to do your homework for you, either you'll get some wrong, or your teacher will be able to detect when you're cheating using the AI."

There needs to be a watermark or something to identify when AI has been used.
"There needs to be a watermark or something to identify AI has been used."

Seems reasonable, but look how long it took to get GMOs labeled as "bioengineered."
"There needs to be a watermark or something to identify AI has been used."

Like those cheating? Wonder what those profiting from AI will think of that?
"So the guys who invented and are developing AI are wrong about their warnings?"

I never said that. Reread my post.
‘We are a little bit scared’: OpenAI CEO warns of risks of artificial intelligence
Sam Altman stresses need to guard against negative consequences of technology, as company releases new version GPT-4 (www.theguardian.com)
www.msn.com
"Lot of money to make chatbot, guess it wasn't enough to train it right?"

"Training" in this context isn't anything like training a dog, or training your kids not to be POSes when they grow up. "Training" AI means developing the decision-making algorithm so that it has the information needed, knows how to access and "understand" the information it has, and also outputs said information to the operator properly. Many of these browser-specific chatbots are mere shadows of the OpenAI ChatGPT; they're more of a finger in the pie than an actual slice. Many of these smaller "AI" models are simply search aggregators with a chatbot built in, which is something we have had for years. This setup is typically trained to cater to users' needs, and is thus trained by user inputs. If it gets targeted by groups of people who decide they want to "make the AI a racist Nazi," then that's how it's going to end up.
Microsoft shuts down AI chatbot after it turned into a Nazi
Microsoft's attempt to engage with millennials went badly awry within 24 hours (www.cbsnews.com)
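To illustrate what "training" means in the algorithmic sense described above, here is a toy sketch. It is ordinary gradient descent on a one-parameter model, which is vastly simpler than anything behind ChatGPT, but it shows the same core idea: the program doesn't memorize answers, it nudges numeric parameters until its outputs fit the training data. Every name and number here is made up for illustration.

```python
# Toy "training": a one-parameter model learns the rule y = 3x
# from example (input, target) pairs by gradient descent.
data = [(1, 3), (2, 6), (3, 9)]  # training examples
w = 0.0                          # the model's single learnable parameter
lr = 0.05                        # learning rate (step size)

for _ in range(200):             # repeatedly sweep over the training data
    for x, y in data:
        pred = w * x                     # model's current guess
        grad = 2 * (pred - y) * x        # slope of squared error w.r.t. w
        w -= lr * grad                   # nudge w to reduce the error

print(round(w, 2))  # → 3.0
```

It also shows why the "trained by user inputs" point matters: the model converges toward whatever its examples say. Feed this loop pairs like (1, -3) instead and it will happily learn y = -3x; a chatbot trained on hostile user input behaves the same way, just at enormous scale.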
"Is it hazardous for those of us who don't know how to differentiate reality from the digital world, or for those who don't want to take responsibility for making bad moral decisions (like AI controlled weapons)?"
Those questions you asked are good examples of what needs to be addressed before implementation, not after, but apparently they weren't. What effects could chatbots have on some people?
People are grieving the 'death' of their AI lovers after a chatbot app abruptly shut down
Smartphone app Soulmate AI let users create AI-powered friends and lovers. Then it closed down, and its thousands of users are heartbroken. (www.businessinsider.com)
As for AI weapons, they're here, but as long as the "right people" are behind them and they can't get "weird," it'll all be OK, right?
www.msn.com
"It's said Einstein regretted his work that led to nuke bombs. Nuclear is neat stuff that can bring good results, but then there's Three Mile Island, Chernobyl, and Fukushima... The law of unintended consequences? Does the message in the mythical story of Atlantis have a valid meaning?"

What is this mythical story you're referring to?
"There needs to be a watermark or something to identify AI has been used."

Much of what AI does is repeat the same information and points over and over in different ways. It looks good, but when you read it and look at it as a whole, you can tell it was done by AI. Try it out, it's kinda fun. Ask it to "write me three paragraphs each six sentences long about XyZ".
"Much of what AI does is repeating the same information and points over and over in different ways. It looks good, but then when you read it and look at it as a whole you can tell it was done by AI. Try it out, it's kinda fun. Ask it to "write me three paragraphs each six sentences long about XyZ". You'll see it."

Are you using AI to write these posts about AI?