in reply to AI in the workplace

What do you think college graduates should know about AI?

Currently, "AI" tools look like the space ship computers from SciFi tv series, novels, and films (minus the blinking lights) - at first sight. But unfortunately, that's exactly how they are tuned / trained - to produce results that look convincing AT FIRST SIGHT. If you start scratching at the surface, you will find a lot of bullsh*t and plain nonsense.

And unlike in the sci-fi series, where it takes a bold starship captain to talk a computer into suicide, our real-life "AI"s can be forced to produce nonsense by any teenager.

How much do you think they should rely on "AI" tools?

Pandora's box is open, we won't get rid of "AI". "AI" is a bubble, and I really hope it will burst really soon. It is a gigantic waste of resources for only very little gain.

It was hard enough to get people to learn that all software has errors, sometimes severe errors. Now people happily blame "the computer" or "the software" for anything that goes wrong if a computer is nearby. Now people have to learn that "AI" is just software, and more than that, that it is badly trained software, tuned to look for a minute or two like a 24th-century computer from the movies.

Based on that, ANYBODY should be able to judge for themselves how much they want to rely on "AI" tools.

Is vibe coding a real thing?

Of course it is. People are lazy. I remember a small sign on the wall at my university. It roughly said: "There is hardly anything that people would not do in order not to have to think."

I've seen enough code where you could later reconstruct how it was written: You have a problem. You type it as a question into Google or Stack Overflow. You copy and paste the very first search result into your code, start the compiler, and fix trivial problems like mismatched variable names. Wash, rinse, repeat, for every little step of the problem. We have seen that here, too, several times.
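
A toy sketch of that pattern in Perl (all of it made up for illustration): the sort line stands in for the snippet pasted verbatim from the first search hit, and the only "integration work" was renaming the answer's @data to match the surrounding code.

    use strict;
    use warnings;

    # existing code: read all input lines
    my @records = <STDIN>;

    # pasted from the first search hit; the answer used @data,
    # renamed to @records until the interpreter stopped complaining
    my @sorted = sort { lc($a) cmp lc($b) } @records;

    print @sorted;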

With "AI" tools, you can delegate fixing the variable names to the "AI", so given a sufficiently long "discussion", you may end with running code, copied and pasted by an "AI" for you.

Mark Dominus has a blog post showing exactly that: "Claude and I write a utility program". He has some more blog posts about AI experiments, from trivia to math problems.

And he has a very nice summary, "Talking dog":

These systems are like a talking dog. It's amazing that anyone could train a dog to talk, and even more amazing that it can talk so well. But you mustn't believe anything it says about chiropractics, because it's just a dog and it doesn't know anything about medicine, or anatomy, or anything else.

I think that's a pretty good summary. We have wasted, and still are wasting, a lot of resources to create a pack of talking dogs simulated by computers.

Alexander

--
Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)

Re^2: AI in the workplace
by LanX (Saint) on May 31, 2025 at 17:04 UTC
    > "There is hardly anything that people would not do in order not to have to think."

    I really like that quote, but couldn't find the source. (Just a similar one from Jung about avoiding one's soul.)

    But I found this study interesting in this context:

    "To Avoid Thinking Hard, We Will Endure Anything—Even Pain"

    Regarding your analysis, it's sound if you only expect LLMs to continue the same way, that is, being trained on human data stolen from the Internet.

    But do you remember this Go-playing program which trained itself and beat all human champions? (See AlphaGo Zero)

    I could imagine future helper AIs interacting with software designers: the designer provides a kind of test suite defining the expected outcome, and a self-trained machine provides solutions.
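
    A minimal sketch of what such a contract could look like, in Perl with Test::More (the module MySort and its fast_sort function are made up for illustration); the machine would iterate on MySort until the whole suite passes:

        use strict;
        use warnings;
        use Test::More tests => 3;

        # hypothetical module that the machine is asked to produce
        use MySort qw(fast_sort);

        is_deeply( [ fast_sort( 3, 1, 2 ) ], [ 1, 2, 3 ], 'sorts numbers' );
        is_deeply( [ fast_sort() ],          [],          'handles empty input' );
        is_deeply( [ fast_sort( 7 ) ],       [ 7 ],       'single element unchanged' );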

    But this software designer will need a lot of skills, more than the usual user of ChatGPT.

    Because, as always, this code has to stay maintainable. (In a way.)

    In contrast, vibe-coded programs from amateurs are likely to be throwaway code.

    In the end, all of these are economic questions:

    • How expensive will it be to keep the code maintained?
    • (Update: Or can we just keep and modify the prompts to always create new programs instead of maintaining them?)
    • Or even more basic: Can we afford the risk of relying on computer systems which are no longer maintainable by humans?

    Cheers Rolf
    (addicted to the Perl Programming Language :)
    see Wikisyntax for the Monastery

      > "There is hardly anything that people would not do in order not to have to think."

      I really like that quote, but couldn't find the source. (Just a similar one from Jung and avoiding ones soul.)

      The text was / is German; it was something close to "Es gibt nichts, was der Mensch nicht tun würde, um nicht denken zu müssen." ("There is nothing that man would not do in order not to have to think.") (And I don't know the source.)

      > Regarding your analysis, it's sound if you only expect LLMs to continue the same way, that is, being trained on human data stolen from the Internet.

      Because that's what gains the most attention in the media and with the public, and that is what is commonly called "AI", at least in Germany.

      > But do you remember this Go-playing program which trained itself and beat all human champions? (See AlphaGo Zero)

      Faintly.

      > I could imagine future helper AIs interacting with software designers: the designer provides a kind of test suite defining the expected outcome, and a self-trained machine provides solutions.

      > But this software designer will need a lot of skills, more than the usual user of ChatGPT.

      > Because, as always, this code has to stay maintainable.

      I recently watched a public talk at DESY in Hamburg, where one of the presenters talked about training neural networks to enhance medical diagnostics. DESY plans a new X-ray facility (PETRA IV), and he proposed to analyze medical samples both using conventional medical imaging (CT, PET, MRI, ...) and using the new X-ray facility. Of course, you cannot use the high-energy X-ray facility on living tissue, it would be lethal. But the image quality is way better than what you can get from conventional medical imagers. The idea is to train a neural network on all images of those samples, and then use that neural network to help analyze the results from the imaging machines in hospitals. Effectively, the neural network would become an expert at interpreting imaging results, with a lot of experience gained from the high-energy, high-resolution PETRA IV images.

      Compared to PETRA IV, those medical imaging machines are dirt cheap (sorry!), and a software update and perhaps a more powerful PC connected to the imaging machine would be sufficient to improve the diagnostics.

      Of course, this is digging down into the noise floor of the medical imagers. A neural network may find patterns that a human can't see, but it can't work miracles.

      This is, like the Go problem, a very limited problem, compared to the generic "fake a 24th-century computer" problem.

      And, to continue comparing with dogs, that would NOT be a talking dog. It would be a guide dog, or a detection dog, or some other working dog. They can do impressive work, in their field. And we trust them, in their field. You would not expect a drug detection dog to lead a blind human, or vice versa.

      > In contrast, vibe-coded programs from amateurs are likely to be throwaway code.

      I don't think so. Yes, people may start with throwaway code. But again, people are lazy: "Hey, $AI can generate helloworld.c and trivialsort.c. Let's make it write a solution for $complexproblem." I've seen exactly that laziness in code cobbled together from ten or twenty first results from Google and Stack Overflow.

      > In the end, all of these are economic questions:
      > • How expensive will it be to keep the code maintained?
      > • (Update: Or can we just keep and modify the prompts to always create new programs instead of maintaining them?)
      > • Or even more basic: Can we afford the risk of relying on computer systems which are no longer maintainable by humans?

      Let me ask an evil question: how much of the code that should be maintained is actually being maintained? There is a ton of unmaintained and probably unmaintainable code hidden below wrappers and abstraction layers. "AI" will only make that worse.

      One of my current projects at work suffers from exactly this problem. We outsourced an isolated part of the problem, and got back code that is in dire need of reworking or rewriting. It is far from what we specified, both formally and in the requirements. We lack both the time and the money to fix that, and so we currently add layers of toilet paper over that poop to hide the smell. (To be fair, our written spec lacked a lot of details due to a lack of time, but what we got back did not even match that spec. It is almost a "not even wrong" problem.)

      We are not the first to do so, and "AI" won't improve the situation. Probably, we will in the future feed "AI"s with junk code and ask them to fix the current problem. Then start the compiler, perhaps run a few minimal tests, and release. Guess what will happen when someone gets the task to write test cases and has access to an "AI".

      Clients are rarely willing to pay for code quality. The medical and aerospace industries have come to the conclusion that a minimum of risk management, project management, and software design is required, but people had to suffer or die for those industries to learn that lesson, over and over again. I think the automotive industry is at almost the same level. And still, people desperately look for ways to bypass risk management, project management, and software design to pay less and get results faster.

      Alexander

      --
      Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
        I agree on many points.°

        The fact that software production will become cheaper doesn't mean we will lose our jobs. We will just expand our tool set.

        And yes, producing test suites will also get cheaper. Contrary to you, I think this will scale well.

        Regarding bosses not understanding the point of software quality ... well... my last client still thinks it was a good idea to let me go, because the products I left behind didn't fail in the last two years.🤷🏻‍♀️😁

        Edit

        Hence, from an evolutionary perspective it's bad to invest in SW quality... ;)

        Cheers Rolf
        (addicted to the Perl Programming Language :)
        see Wikisyntax for the Monastery

        Update
        °) which doesn't mean you addressed all my points. :)