in reply to AI in the workplace
What do you think college graduates should know about AI?
Currently, "AI" tools look, at first sight, like the spaceship computers from SciFi TV series, novels, and films (minus the blinking lights). But unfortunately, that's exactly how they are tuned and trained: to produce results that look convincing AT FIRST SIGHT. If you start scratching at the surface, you will find a lot of bullsh*t and plain nonsense.
And unlike in the SciFi series, where it takes a bold starship captain to talk a computer into suicide, our real-life "AI"s can be forced to produce nonsense by any teenager.
How much do you think they should rely on "AI" tools?
Pandora's box is open; we won't get rid of "AI". "AI" is a bubble, and I really hope it bursts real soon. It is a gigantic waste of resources for very little gain.
It was hard enough to get people to learn that all software has errors, sometimes severe ones. Now people happily blame "the computer" or "the software" for anything that goes wrong whenever a computer is nearby. Next, people have to learn that "AI" is just software - and, even more, that it is badly trained software, tuned to look for a minute or two like a 24th-century computer from the movies.
Based on that, ANYBODY should be able to judge for themselves how much they want to rely on "AI" tools.
Is vibe coding a real thing?
Of course it is. People are lazy. I remember a small sign on the wall at my university. It roughly said: "There is hardly anything that people would not do in order not to have to think."
I've seen enough code where you could later reconstruct how it was written: You have a problem. You type it as a question into Google or Stack Overflow. You copy and paste the very first search result into your code, start the compiler, and fix trivial problems like mismatched variable names. Wash, rinse, repeat, for every little step of the problem. We have seen that here, too, several times.
With "AI" tools, you can delegate fixing the variable names to the "AI", so given a sufficiently long "discussion", you may end up with running code, copied and pasted by an "AI" for you.
Mark Dominus has a blog post showing exactly that: Claude and I write a utility program. He has more blog posts about AI experiments, ranging from trivia to math problems.
And he has a very nice summary, Talking dog:
These systems are like a talking dog. It's amazing that anyone could train a dog to talk, and even more amazing that it can talk so well. But you mustn't believe anything it says about chiropractics, because it's just a dog and it doesn't know anything about medicine, or anatomy, or anything else.
I think that's a pretty good summary. We have wasted, and still are wasting, a lot of resources to create a pack of talking dogs simulated by computers.
Alexander
Re^2: AI in the workplace
by LanX (Saint) on May 31, 2025 at 17:04 UTC
by afoken (Chancellor) on May 31, 2025 at 20:21 UTC
by LanX (Saint) on May 31, 2025 at 20:44 UTC