Well, if you know nothing about a subject, you can't talk about it. AI knows a little bit about everything. For example, I know nothing about how airplane engines work, so I won't talk about it. Fuel goes in, burns, spins the propeller, pushes air out. End of story. That's all I know. And I'm not going to embellish the story, purposely misleading someone by adding a myriad of details I have no idea about. Yes, people can do that. And if I were to do that, it would be considered either evil or mischief. But as far as AI is concerned, I don't think it is purposely designed to be mischievous or misleading. This is probably a glitch or error in the software, not purposeful design. And AI has no conscience either, so it feels no remorse or guilt. It may say things like "sorry" or "you are right," but even if it does, it's just a piece of man-made software, and at this point it is still a poor imitation of real human intelligence. | [reply] |
In my experience you yourself, more often than not, don't apologise or admit wrongdoing when someone corrects you. In fact, you often double down, whereas you've seen AI do the opposite. You campaigned to have ChatGPT integrated here (Incorporating ChatGPT into PerlMonks), currently the second-worst node of the year.
| [reply] |
I think the "accurate human form" is an illogical fever dream.
LLMs don't think rationally or deduce, they "remember" associations.
This is also part of the human thinking process, but we have higher cognitive functions to correct hallucinations and apply logic.
And we can do this without requiring new nuclear power plants.
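To make "remembering associations" concrete, here is a deliberately crude toy sketch (nothing like a real transformer; the association table and its weights are invented for illustration). It completes a prompt by following the strongest remembered co-occurrence at each step, with no stage that checks whether the resulting chain is logically valid:

```python
# Hypothetical toy association memory: word -> {next word: strength}.
# The weights are made up for this example.
ASSOC = {
    "fuel":      {"burns": 0.9, "tank": 0.6},
    "burns":     {"propeller": 0.7, "out": 0.4},
    "propeller": {"spins": 0.8},
}

def complete(word):
    """Return the most strongly associated next word, or None."""
    links = ASSOC.get(word)
    if not links:
        return None
    return max(links, key=links.get)

# The chain simply follows remembered associations, not deduction:
chain = ["fuel"]
while (nxt := complete(chain[-1])) is not None:
    chain.append(nxt)
print(" -> ".join(chain))  # fuel -> burns -> propeller -> spins
```

A human applying "higher cognitive functions" would be the extra step this sketch lacks: something that inspects the chain and rejects it when it contradicts known facts.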
| [reply] |