AI, its fans, and questions


Billybob

Sharpshooter
Special Hen
Joined: Feb 14, 2007
Messages: 4,703
Reaction score: 419
Location: Tulsa
AI is the new buzzword in marketing. Everything is AI, artificial intelligence will make you rich, AI will reduce your workload, etc., etc. It's all crap, really. What semi-legitimate AI we do have is still very much limited by its design and the costs associated with that. For example:

ChatGPT is technically artificial intelligence; however, it is not "intelligent" in the same sense that you and I are intelligent. It is not sentient. It is, however, a program that has been exhaustively trained to string together the right words to describe a very expansive data set. It works well, and has helped me recently with some stuff in school, but it has its shortcomings.
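For the curious, here's a toy sketch of what "stringing words together" means in practice. This is my own illustration, nothing like ChatGPT's actual internals: it just picks whichever word most often followed the previous one in a tiny sample text, whereas the real thing does the same kind of prediction with a huge neural network trained on billions of examples.

```python
# Toy sketch (not how ChatGPT is actually built): predict the next word by
# picking whatever most often followed the previous word in the sample text.
from collections import Counter, defaultdict

training_text = "the gun was made by browning the gun was made in utah".split()

# Count which word tends to follow each word.
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def continue_phrase(start_word, length=5):
    """Chain the most likely next word onto the phrase, one word at a time."""
    words = [start_word]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_phrase("the"))  # -> "the gun was made by browning"
```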

For example, I use it to help generate outlines for my research. I can tell ChatGPT to "give me a timeline of the inventions of John Moses Browning," to which I receive:
[Attachment 432632: screenshot of ChatGPT's timeline response]
Looks accurate, and likely is. But I have caught it making errors, especially in math. Still, what I get is a solid basis for researching a presentation or paper, and some basic points that I can expound upon. For math, I can give it a problem and ask for the steps and theory behind solving it, and it breaks the problem down into easy-to-follow steps while I do the math myself. Easier and better than asking my unhelpful professors for help.
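For what it's worth, you can also script that same kind of prompt instead of typing it into the web page. Below is a minimal sketch using OpenAI's Python client; it assumes you've run `pip install openai` and set an OPENAI_API_KEY environment variable, and the model name is only an example, not a recommendation.

```python
# Minimal sketch: sending the same kind of prompt through OpenAI's Python
# client instead of the chat window. Assumes `pip install openai` and an
# OPENAI_API_KEY environment variable; the model name is just an example.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Give me a timeline of the inventions of John Moses Browning, "
                   "then break down how to solve 2x + 6 = 18 step by step.",
    }],
)

print(response.choices[0].message.content)
```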

Is AI gonna take over the world? I doubt it. At the end of the day, it's simply a dictation machine for most people. Is it hazardous for those of us who don't know how to differentiate reality from the digital world, or for those who don't want to take responsibility for making bad moral decisions (like AI-controlled weapons)?

Fun bit of info: AI is expensive to implement. ChatGPT, a relatively simple AI, was "trained" by harnessing the power of a data center. GPT-3 cost just a little under $4.6 million to train. Think about that next time someone says their $5 phone app has AI and will revolutionize your life. The 10,000 Nvidia processing cards used to train the program's language model cost roughly $15,000 apiece in 2020. The newest-generation cards run anywhere from $20k to $50k from what I have seen.
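To put those figures side by side, here's some back-of-the-napkin math using the numbers above (rough published estimates to begin with, so treat the totals as ballpark): the ~$4.6 million is generally cited as the compute cost of a single training run, while buying the 10,000 cards outright would be on the order of $150 million.

```python
# Back-of-the-napkin math using the figures quoted above; these are rough
# published estimates, not exact accounting.
num_gpus = 10_000                   # cards reportedly used for training
price_per_gpu_2020 = 15_000         # rough 2020 price per card, in dollars
training_run_estimate = 4_600_000   # widely cited estimate for one GPT-3 training run

hardware_cost = num_gpus * price_per_gpu_2020

print(f"Hardware bought outright: ${hardware_cost:,}")          # $150,000,000
print(f"Single training run:      ${training_run_estimate:,}")  # $4,600,000
```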
So the guys who invented and are developing AI are wrong about their warnings?

Lot of money to make a chatbot; guess it wasn't enough to train it right?

(quote)"Is it hazardous for those of us who dont know how to differentiate reality from the digital world, or for those who don't want to take responsibility for making bad moral decisions (like AI controlled weapons)."

Those questions you asked are good examples of what needs addressed before not after implementation but apparently they're not. What effects could chatbots on some people?

As for AI weapons, they're here, but as long as the "right people" are behind them and they can't get "weird," it'll all be OK, right?


It's said Einstein regretted his work that led to nuke bombs. Nuclear is neat stuff that can bring good results, but then there's Three Mile Island, Chernobyl, and Fukushima... the law of unintended consequences?
Does the message in the mythical story of Atlantis have a valid meaning?
 

HoLeChit

Here for Frens
Special Hen
Joined: Sep 26, 2014
Messages: 6,532
Reaction score: 10,508
Location: None
"But, I have caught it making errors, especially in math."

Seems strange. Those old TI hand held calculators may come in handy and more valuable.
Considering that computers have literally been made to do math from their inception, I highly doubt it is because the program is actually getting the problem wrong. I think it's more of a cheating thing, where if you're just relying on the AI to do your homework for you, either you'll get some wrong, or your teacher will be able to detect when you're cheating using the AI.
 
Joined: Dec 9, 2008
Messages: 87,928
Reaction score: 70,791
Location: Ponca City, OK
Considering that computers have literally been made to do math from their inception, I highly doubt it is because the program is actually getting the problem wrong. I think it's more of a cheating thing, where if you're just relying on the AI to do your homework for you, either you'll get some wrong, or your teacher will be able to detect when you're cheating using the AI.
There needs to be a watermark or something to identify that AI has been used.
 

Billybob

Sharpshooter
Special Hen
Joined: Feb 14, 2007
Messages: 4,703
Reaction score: 419
Location: Tulsa

HoLeChit

Here for Frens
Special Hen
Joined: Sep 26, 2014
Messages: 6,532
Reaction score: 10,508
Location: None
So the guys who invented and are developing AI are wrong about their warnings?
I never said that. Reread my post.


Lot of money to make a chatbot; guess it wasn't enough to train it right?
"Training" in this context isn't anything like training a dog, or training your kids to not be POSes when they grow up. "Training" AI is developing the decision-making algorithm so that it has the information needed, knows how to access and "understand" the information it has, and also outputs said information for the operator properly. Many of these browser-specific chatbots are mere shadows of OpenAI's ChatGPT; they're more of a finger in the pie than an actual slice. Many of these smaller "AI" models are simply search aggregators with a chatbot built in, which is something we have had for years. This setup is typically trained to cater to users' needs, and is thus trained by user inputs. If it was targeted by groups of people who decided they wanted to "make the AI a racist Nazi," then it's gonna end up that way.
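To illustrate that last point, here's a stripped-down toy in Python. It's nothing like a real model, but it shows the mechanism: if the system learns from whatever users feed it, a coordinated group can steer what it says.

```python
# Toy "chatbot" that just learns which reply gets the most user feedback for
# a given prompt. If a coordinated group floods it, their reply wins.
from collections import Counter, defaultdict

reply_votes = defaultdict(Counter)

def train_on_user_feedback(prompt, reply):
    """Every interaction nudges the bot toward repeating that reply."""
    reply_votes[prompt][reply] += 1

def respond(prompt):
    votes = reply_votes.get(prompt)
    return votes.most_common(1)[0][0] if votes else "I don't know that one yet."

# A handful of normal users teach it something reasonable...
for _ in range(5):
    train_on_user_feedback("greeting", "Hello, how can I help?")

# ...then a coordinated group floods it with garbage.
for _ in range(500):
    train_on_user_feedback("greeting", "<whatever the trolls wanted it to say>")

print(respond("greeting"))  # the flood wins
```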

Also, in the world of marketing, who is to say that Microsoft didn’t already have that programmed in or staged the entire incident, as a way to generate media coverage? All media coverage is good media coverage in marketing, right?

(quote)"Is it hazardous for those of us who dont know how to differentiate reality from the digital world, or for those who don't want to take responsibility for making bad moral decisions (like AI controlled weapons)."

Those questions you asked are good examples of what needs addressed before not after implementation but apparently they're not. What effects could chatbots on some people?

I’m well aware of all that. Hence my statement.

As for AI weapons, they're here, but as long as the "right people" are behind them and they can't get "weird," it'll all be OK, right?


Again, I never said that. I trust people even less than I trust computers.

It's said Einstein regretted his work that led to nuke bombs. Nuclear is neat stuff that can bring good results, but then there's Three Mile Island, Chernobyl, and Fukushima... the law of unintended consequences?
Does the message in the mythical story of Atlantis have a valid meaning?
What is this mythical story you’re referring to?

I'm open to having a constructive conversation on these subjects; it's fascinating stuff. But with how you come out of the gate swinging and making argumentative counterclaims to anyone's statements on the matter, it seems more like you're looking for an argument and that you already know you're right and everyone else is wrong. If that's the case, you aren't gonna get the argument you want from me.
 

HoLeChit

Here for Frens
Special Hen
Joined: Sep 26, 2014
Messages: 6,532
Reaction score: 10,508
Location: None
There needs to be a watermark or something to identify that AI has been used.
Much of what AI does is repeat the same information and points over and over in different ways. It looks good, but when you read it and look at it as a whole, you can tell it was done by AI. Try it out; it's kinda fun. Ask it to "write me three paragraphs, each six sentences long, about XYZ."

You’ll see it.
 

O4L

Sharpshooter
Staff Member
Special Hen Moderator
Joined: Aug 13, 2012
Messages: 14,805
Reaction score: 19,105
Location: Shawnee
Much of what AI does is repeat the same information and points over and over in different ways. It looks good, but when you read it and look at it as a whole, you can tell it was done by AI. Try it out; it's kinda fun. Ask it to "write me three paragraphs, each six sentences long, about XYZ."

You’ll see it.
Are you using AI to write these posts about AI? :D
 
