I sit on a Program Advisory Committee for a local college as an industry representative, and at next week's meeting we're going to be asked about 'AI in the workforce'. They'll be asking about three specific areas: a) how organizations are integrating AI tools, b) whether AI can be used as a supplement or a replacement, and c) what implications AI has for productivity and skill sets.

What do you think college graduates should know about AI? How much do you think they should rely on AI tools? Is vibe coding a real thing?

Alex / talexb / Toronto

For a long time, I had a link in my .sig going to Groklaw. I heard that as of December 2024, this link is dead. Still, thanks to PJ for all your work, we owe you so much. RIP Groklaw -- 2003 to 2013.

Re: AI in the workplace
by hippo (Archbishop) on May 31, 2025 at 10:29 UTC
    What do you think college graduates should know about AI?

    That the LLM crawlers are destroying the web faster than all the walled gardens put together. That the people behind LLM training have no morals whatsoever and will steal all of your IP in the blink of an eye. That the A is correct but the I is not.

    How much do you think they should rely on AI tools?

    Not at all - nobody should rely on anything so prone to producing such utter garbage. Sure, use it if you like, but you'll spend more time proofing its output than you will save. And even then you'll miss something, and that will cost you big time. If you're lucky, nobody will die.

    There you go. That should bring them down to earth. :-)


    🦛

Re: AI in the workplace
by afoken (Chancellor) on May 31, 2025 at 10:30 UTC
    What do you think college graduates should know about AI?

    Currently, "AI" tools look like the space ship computers from SciFi tv series, novels, and films (minus the blinking lights) - at first sight. But unfortunately, that's exactly how they are tuned / trained - to produce results that look convincing AT FIRST SIGHT. If you start scratching at the surface, you will find a lot of bullsh*t and plain nonsense.

    And unlike in the SciFi series, where it takes a bold starship captain to talk a computer into suicide, our real-life "AI"s can be forced to produce nonsense by any teenager.

    How much do you think they should rely on "AI" tools?

    Pandora's box is open; we won't get rid of "AI". "AI" is a bubble, and I really hope it bursts real soon. It is a gigantic waste of resources for only very little gain.

    It was hard enough to get people to learn that all software has errors, sometimes severe errors. Now people happily blame "the computer" or "the software" for anything that goes wrong whenever a computer is nearby. Next, people will have to learn that "AI" is just software - and, even more, badly trained software, tuned to look for a minute or two like a 24th-century computer from the movies.

    Based on that, ANYBODY should be able to judge for themselves how much they want to rely on "AI" tools.

    Is vibe coding a real thing?

    Of course it is. People are lazy. I remember a small sign on the wall at my university. It roughly said: "There is hardly anything that people would not do in order not to have to think."

    I've seen enough code where you could later reconstruct how it was written: You have a problem. You type it as a question into Google or Stack Overflow. You copy and paste the very first search result into your code, start the compiler, and fix trivial problems like mismatched variable names. Wash, rinse, repeat, for every little step of the problem. We have seen that here, too, several times.

    With "AI" tools, you can delegate fixing the variable names to the "AI", so given a sufficiently long "discussion", you may end with running code, copied and pasted by an "AI" for you.

    Mark Dominus has a blog post showing exactly that: Claude and I write a utility program. He has some more blog posts about AI experiments, from trivia to math problems.

    And he has a very nice summary, Talking dog:

    These systems are like a talking dog. It's amazing that anyone could train a dog to talk, and even more amazing that it can talk so well. But you mustn't believe anything it says about chiropractics, because it's just a dog and it doesn't know anything about medicine, or anatomy, or anything else.

    I think that's a pretty good summary. We have wasted, and still are wasting, a lot of resources to create a pack of talking dogs simulated by computers.

    Alexander

    --
    Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
      > "There is hardly anything that people would not do in order not to have to think."

      I really like that quote, but couldn't find the source. (Just a similar one from Jung about avoiding one's soul.)

      But I found this study interesting in this context:

      To Avoid Thinking Hard, We Will Endure Anything—Even Pain (a study)

      Regarding your analysis: it's sound if you only expect LLMs to continue the same way, that is, being trained on human data stolen from the Internet.

      But do you remember the Go-playing program that trained itself and beat all human champions? (See AlphaGo Zero.)

      I could imagine future helper AIs interacting with software designers: the designer provides a kind of test suite defining the expected outcome, and a self-trained machine provides solutions.

      But such a software designer will need a lot of skills, more than the usual user of ChatGPT.

      Because, as always, this code has to stay maintainable (in a way).

      In contrast, vibe-coded programs from amateurs are likely to be throw-away code.

      In the end, all of these are economic questions:

      • How expensive will it be to keep the code maintained?
      • (Update: Or can we just keep and modify the prompts, and always create new programs instead of maintaining them?)
      • Or, even more basic: can we afford/risk relying on computer systems that are no longer maintainable by humans?

      Cheers Rolf
      (addicted to the Perl Programming Language :)
      see Wikisyntax for the Monastery

        > "There is hardly anything that people would not do in order not to have to think."

        I really like that quote, but couldn't find the source. (Just a similar one from Jung about avoiding one's soul.)

        The text was/is German; it was something close to "Es gibt nichts, was der Mensch nicht tun würde, um nicht denken zu müssen." ("There is nothing that man would not do in order not to have to think.") And I don't know the source.

        Regarding your analysis: it's sound if you only expect LLMs to continue the same way, that is, being trained on human data stolen from the Internet.

        Because that's what gains the most attention in the media and with the public, and that is what is commonly called "AI", at least in Germany.

        But do you remember the Go-playing program that trained itself and beat all human champions? (See AlphaGo Zero.)

        Faintly.

        I could imagine future helper AIs interacting with software designers: the designer provides a kind of test suite defining the expected outcome, and a self-trained machine provides solutions.

        But such a software designer will need a lot of skills, more than the usual user of ChatGPT.

        Because, as always, this code has to stay maintainable.

        I recently watched a public talk at DESY in Hamburg, where one of the presenters talked about training neural networks to enhance medical diagnostics. DESY is planning a new x-ray facility (PETRA IV), and he proposed analyzing medical samples both with conventional medical imaging (CT, PET, MRI, ...) and with the new x-ray facility. Of course, you cannot use the high-energy x-ray facility on living tissue; it would be lethal. But the image quality is far better than what you can get from conventional medical imagers. The idea is to train a neural network on all images of those samples, and then use that network to help analyze the results from the imaging machines in hospitals. Effectively, the neural network would become an expert at interpreting imaging results, with a lot of experience gained from the high-energy, high-resolution PETRA IV images.

        Compared to PETRA IV, those medical imaging machines are dirt cheap (sorry!), and a software update and perhaps a more powerful PC connected to the imaging machine would be sufficient to improve the diagnostics.

        Of course, this is digging down into the noise floor of the medical imagers. A neural network may find patterns that a human can't see, but it can't work miracles.

        This is, like the Go problem, a very limited problem, compared to the generic "fake a 24th century computer" problem.

        And, to continue comparing with dogs, that would NOT be a talking dog. It would be a guide dog, or a detection dog, or some other working dog. They can do impressive work, in their field. And we trust them, in their field. You would not expect a drug detection dog to lead a blind human, or vice versa.

        In contrast, vibe-coded programs from amateurs are likely to be throw-away code.

        I don't think so. Yes, people may start with throw-away code. But again, people are lazy: "Hey, $AI can generate helloworld.c and trivialsort.c. Let's make it write a solution for $complexproblem." I've seen exactly that laziness in code cobbled together from the first ten or twenty results from Google and Stack Overflow.

        In the end, all of these are economic questions:
        • How expensive will it be to keep the code maintained?
        • (Update: Or can we just keep and modify the prompts, and always create new programs instead of maintaining them?)
        • Or, even more basic: can we afford/risk relying on computer systems that are no longer maintainable by humans?

        Let me ask an evil question: how much of the code that should be maintained actually is maintained? There are tons of unmaintained and probably unmaintainable code hidden below wrappers and abstraction layers. "AI" will only make that worse.

        One of my current projects at work suffers from exactly this problem. We outsourced an isolated part of the problem and got back code that is in dire need of reworking or rewriting. It is far away from what we specified, both formally and in requirements. We lack both the time and the money to fix that, and so we currently add layers of toilet paper over that poop to hide the smell. (To be fair, our written spec lacked a lot of details due to a lack of time, but what we got back did not even match that spec. It is almost a "not even wrong" problem.)

        We are not the first to do so, and "AI" won't improve the situation. Probably, we will in future feed "AI"s with junk code and ask them to fix the current problem, then start the compiler, perhaps run a few minimal tests, and release. Guess what will happen when someone gets the task of writing test cases and has access to an "AI".

        Clients are rarely willing to pay for code quality. The medical and aerospace industries have come to the conclusion that a minimum of risk management, project management, and software design is required, but people had to suffer or die for the industry to learn that lesson, over and over again. I think the automotive industry is at almost the same level. And still, people desperately look for ways to bypass risk management, project management, and software design to pay less and get results faster.

        Alexander

        --
        Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
Re: AI in the workplace
by duelafn (Parson) on Jun 01, 2025 at 12:49 UTC

    Given the context, it seems like "AI is evil/dangerous" isn't going to fly well as a message to bring to this meeting.

    We are a small industrial machine manufacturer. We do not use AI as a company, but individuals can and do use AI on occasion. For example, I have a prompt I use that turns an SQL table definition into a Rust struct with accessors and documentation in the format that I like. I pull this out if I need to convert a large table or several tables at a time. I've also found AI search to be useful in cases where a standard search engine will fail due to an abundance of related but non-relevant results. As a bad example (made up because I can't recall my latest such search), attempting to resolve a networking issue on, say, Android can be difficult due to the high number of results that explain how to disable airplane mode. An AI query can understand an included phrase such as "I checked and airplane mode is not enabled" and skip past such mundane debugging suggestions.
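
    To make that concrete, here is a toy sketch of the kind of transformation such a prompt performs - illustrative only, sketched in Perl rather than the Rust output described above, with a made-up table and a deliberately naive parse:

        use strict;
        use warnings;

        # Hypothetical table definition; real ones are larger.
        my $sql = <<~'SQL';
            CREATE TABLE part (
                id          INTEGER PRIMARY KEY,
                name        TEXT    NOT NULL,
                weight_kg   REAL
            );
            SQL

        my ($table) = $sql =~ /CREATE TABLE (\w+)/;
        my @columns = $sql =~ /^\s*(\w+)\s+(?:INTEGER|TEXT|REAL)/gm;

        # Emit a package with one read/write accessor per column.
        print "package \u$table;\n\n";
        for my $col (@columns) {
            print "sub $col {\n",
                  "    my \$self = shift;\n",
                  "    \$self->{$col} = shift if \@_;\n",
                  "    return \$self->{$col};\n",
                  "}\n\n";
        }
        print "1;\n";

    The point is not the code itself but the tedium it removes: the AI version also writes the documentation and matches house style, which is where the real time savings are.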

    For a college, therefore, my recommendations are:

    • Students should be familiar with the AI opportunities available, including their significant limitations and dangers.
    • Emphasize that there is a time and a place for AI (and that is definitely not everywhere or all the time).
    • AI should never be used as a substitute for expertise.
    • AI tools should only be used as a convenience in contexts where their output can be verified or corrected – first-round bug triage is a good example.
    • AI can assist, not replace.
    • At this time especially, AI will not provide large boosts in productivity – both because AI currently sucks and because the added load of determining its capabilities and limitations offsets most of the benefit it might provide, while rapid development means those analyses must be continually re-evaluated.

    Good Day,
        Dean

Re: AI in the workplace
by choroba (Cardinal) on Jun 01, 2025 at 10:19 UTC
    Just a tangential comment.

    I run a web application that distributes work to people and collects it back. There is a "Download a random file" button which not only downloads the file but also assigns the file to the user in a bookkeeping system. A few days ago, one of the users complained that they clicked on the button several times quickly, got assigned several files in the system, but only one file was downloaded.

    Instead of asking here, I tried asking an AI model we run at work for testing purposes. It suggested several ways to fix the "double-submit" problem, and I was able to fix the application quickly.
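
    For the curious, the server-side half of the usual fix is to make the assignment idempotent: if the same user asks again within a few seconds, hand back the file they were just assigned instead of assigning another. A minimal sketch of that idea (not my actual code; the schema and the SQLite-flavoured SQL are made up for illustration):

        use strict;
        use warnings;
        use feature 'signatures';
        no warnings 'experimental::signatures';

        # Hypothetical schema: files(name, taken), assignments(user_id, file, assigned_at).
        # $dbh is a connected DBI handle.
        sub assign_random_file ($dbh, $user_id) {
            # Re-use a very recent assignment instead of creating another one.
            my ($file) = $dbh->selectrow_array(q{
                SELECT file FROM assignments
                WHERE user_id = ?
                  AND assigned_at > datetime('now', '-10 seconds')
            }, undef, $user_id);
            return $file if defined $file;

            # Otherwise pick an unassigned file and record the assignment.
            # (In real code, wrap the next three statements in a transaction.)
            ($file) = $dbh->selectrow_array(q{
                SELECT name FROM files WHERE taken = 0 ORDER BY random() LIMIT 1
            });
            $dbh->do(q{UPDATE files SET taken = 1 WHERE name = ?}, undef, $file);
            $dbh->do(q{INSERT INTO assignments (user_id, file, assigned_at)
                       VALUES (?, ?, datetime('now'))}, undef, $user_id, $file);
            return $file;
        }

    The client-side half is the classic one: disable the button as soon as it is clicked, and re-enable it once the download has started.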

    The downside? No interaction with people, no XP shared with anyone answering my question. It took me years to learn English to the level where I could dare to ask a question online, and it still makes me step out of my comfort zone to overcome my shyness. But I love that I can do that. I feel ashamed that I talked to the AI instead.

    map{substr$_->[0],$_->[1]||0,1}[\*||{},3],[[]],[ref qr-1,-,-1],[{}],[sub{}^*ARGV,3]
      > It took me years to learn English up to the level I could dare to ask a question online

      And I suppose you were able to ask the AI in Czech too.

      Edit

      BTW, and probably also a tangent.

      The whole situation reminds me of the time when Google search became popular and many "developers" stopped reading manuals.

      Cheers Rolf
      (addicted to the Perl Programming Language :)
      see Wikisyntax for the Monastery

        For some reason, I didn’t this time. But it’s interesting how different the answers tend to be when the question is or is not in English.

        For example, when I played with the AI, I remembered a short story by a Soviet author (or authors; I think it was the Strugatsky brothers) that I read in the 80s. It was about someone who built a computer that was able to answer any question. The protagonist had to find a question the computer couldn’t answer in order to proceed, and when the computer was explaining the rules, it said: "You can’t ask questions with false presuppositions like ‘Is it really you, Ivan Ivanovich?’ or ‘Why do ghosts have their hair cut short?’" (In the end, the question that bricked the computer was "Can you find a question to which you can’t find an answer?") So I tried asking the AI about the ghosts’ haircut. In English, the answer was rather long, explaining that a ghost’s head is usually covered by a sheet so we don’t see its hair, blah blah. In Czech, the answer was quite short: "Because they’re afraid of long cuts." (Sounds like a punchline, but it’s not funny.)

        map{substr$_->[0],$_->[1]||0,1}[\*||{},3],[[]],[ref qr-1,-,-1],[{}],[sub{}^*ARGV,3]
Re: AI in the workplace
by afoken (Chancellor) on Jul 26, 2025 at 07:23 UTC

    Vibe coding: Two Major AI Coding Tools Wiped Out User Data After Making Cascading Mistakes (slashdot)

    Best described by the word "Schadenfreude". Delegating the work to a talking dog, giving the talking dog full access and instructing it not to use that privilege, not using a version control system, not even having a backup. Taking each and every shortcut possible deserves to be punished exactly like this.

    Alexander

    --
    Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)

      Credit where credit is due -- AI is pretty awesome at analysis. I'm a lot less impressed with how it works as an automated developer.

      Giving an AI full access to a repo -- madness. A much better approach would be to tell it to submit a PR, and have that PR reviewed by a human. The feedback on that PR would help the AI going forward. And I know that some people are using AI to get work done -- props for that. But if it's work that 100% has to be reviewed and checked, I'm not sure how much further ahead you are.

      Alex / talexb / Toronto

      As of June 2025, Groklaw is back! This site was a really valuable resource in the now ancient fight between SCO and Linux. As it turned out, SCO was all hat and no cattle. Thanks to PJ for all her work, we owe her so much. RIP -- 2003 to 2013.

Re: AI in the workplace
by cavac (Prior) on Jun 04, 2025 at 08:15 UTC

    Using AI can also have legal consequences, depending on jurisdiction and local law.

    • AI may straight up copy someone else's code when asked to solve a specific problem. This opens up your organization to lawsuits.
    • Code generated by AI will probably not enjoy copyright protection, similar to the case where a monkey took a photo with a camera and the photo had no copyright protection associated with it. So a competitor might be able to legally copy those parts of your product.
    • Under certain circumstances, you are required to publicly state if (and what) part of your product was created by AI, especially if AI is included in the final product (EU AI act, publishing Steam games, ...). This might restrict both the circumstances under which the product can be used and the pricing of the product.
    • Using an externally hosted AI may also violate internal policies, license agreements, and/or contracts with other companies (giving protected code/data to a third party). It may also violate privacy protection laws (EU GDPR, ...).

    AI is a pretty new field (in the sense that it exists outside labs and university courses). A lot of it has yet to be tested in courts of law around the world.

    PerlMonks XP is useless? Not anymore: XPD - Do more with your PerlMonks XP
    Also check out my sister's artwork and my weekly webcomics

      Yes! When I spoke in my meeting two days ago, my brain was unable to come up with the word 'provenance' and instead had to substitute 'source' (a word that has multiple meanings, including for software itself). But my complaint was that AI agents are scouring the web for information that may include source code from a variety of sour.. um, locations, and if that content is replayed by the AI, and if you use it, you have no idea whether it's original or just copied from somewhere. And that's a problem.

      I recommended that AI be used for analysis only, and not for content creation, for a number of reasons. I'm not sure that was the answer that they were looking for, but I really don't think AI will replace software developers. I'll change my tune when an AI can create a well-structured, commented piece of code that works correctly, and can be useful right away.

      Alex / talexb / Toronto

      As of June 2025, Groklaw is back! This site was a really valuable resource in the now ancient fight between SCO and Linux. As it turned out, SCO was all hat and no cattle. Thanks to PJ for all her work, we owe her so much. RIP -- 2003 to 2013.

        I'll change my tune when an AI can create a well-structured, commented piece of code that works correctly, and can be useful right away.

        ... and isn't violating any licences.


        🦛

Re: AI in the workplace
by NERDVANA (Priest) on Jun 02, 2025 at 21:02 UTC
    It's easy to find examples where AI fails, or fails to meet expectations. It's only going to get better, with as much money as is being invested in it. The degradation of training data is only a problem for training as it's done now - eventually the models will be learning similarly to humans and building their own training data from experience. But most of these opinions are speculation.

    I've been making a list of AI successes, though I don't usually post them here. Here are the two most recent ones, both of which I gave to Claude.AI, and both of which were successful:

    I'd like a javascript component which plays a list of URLs (audio files) from a web server. This could be almost as simple as opening the URL and activating the browser-based player, but I also want to chain to the next song as each song completes. This is for internal use, so there don't need to be any permissions or fancy byte-range loading, unless those features are trivial to add. In other words, I'd be happy if I simply load up a bunch of browser-native players for each song, then trigger them to play one after another. But a fancy system of loading byte ranges into the player until a song file is exhausted and then loading ranges from the next file is also a workable solution. I currently have files in .flac and .mp3, but I can transcode to whatever is most convenient and compatible for the player. This player will go onto a standalone page, probably just jQuery with a simple mobile-friendly play/pause button. If you're pretty sure about the best approach, then just go ahead and write it, but otherwise please explain my options and pros/cons.

    which generated this code, a 100% success. I edited it to list my songs, named it index.html, and put it into a docker container alongside my song files - a complete music player off my home server in a mere 1-2 hours of effort.

    Another similar project that was 99% successful was:

    I'd like a simple Perl Mojolicious::Lite-based webapp that serves a page displaying the uptime and running status from "docker inspect ark", and an action that can run "docker stop ark" or "docker start ark" and corresponding buttons on the page. The page should poll the status every 30 seconds, but at 1 second intervals after requesting a change (start/stop) until the change completes.

    (followup)

    to pair with this, can you write a small program in C called ark-control-docker which takes exactly one parameter "inspect", "stop" or "start", and execs "docker","inspect","ark", and so on? This way I can give it set-gid with group docker and not give generic docker permissions to the webapp. Or, if you have a better idea for privilege isolation, let me know.

    (followup)

    One more change to the web script - as it waits for the start event, it should read the log file looking for a line like [2025.05.20-20.20.13:889][  0]Server: "(name)" has successfully started! The log file is currently located at [REDACTED_PATH]

    (followup)

    one more catch, you need to make sure the timestamp on the log file is newer than the timestamp of when the server started, or else it reads the old log file before the process has begun writing the new one

    (if you're curious, this was so a friend could start/stop a server for the video game ARK while VPN'd into my network, but without needing to give them a login to the server or permission to run docker, which is equivalent to root)

    The one mistake it made in the code was to declare a Mojo route using ":action" as the parameter name, which is a reserved parameter name in Mojo. I renamed it to ":verb" and all the rest of the code worked. You can also see that it didn't think to check the date on the log file at first... but then, neither did I! If it had predicted that, it would probably be a more capable programmer than me.
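
    For anyone curious, here's roughly the shape of the corrected route - a minimal sketch reconstructed from memory, not the actual generated code:

        #!/usr/bin/env perl
        use Mojolicious::Lite -signatures;

        my %allowed = map { $_ => 1 } qw(inspect start stop);

        # "action" is a reserved stash value in Mojolicious,
        # so the placeholder is named ":verb" instead.
        post '/docker/:verb' => sub ($c) {
            my $verb = $c->param('verb');
            return $c->render(text => 'forbidden', status => 403)
                unless $allowed{$verb};
            # Delegate to the set-gid wrapper described above, so the
            # webapp itself needs no docker privileges.
            my $output = qx{ark-control-docker \Q$verb\E};
            $c->render(json => { verb => $verb, output => $output });
        };

        app->start;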

    Those two were smashing successes - I wrote almost no code at all and got something useful out of them that I wouldn't have had time to write otherwise.

    This one was maybe 85% successful:

    I'd like a script in Perl that performs a binary search of ZFS snapshots. It should accept a dataset name, and list out available snapshots for that dataset, then prompt for the min and max timestamp, unmount the original dataset, then repeatedly create writable datasets from a snapshot (according to binary search) at the original mount point, then prompt me for whether that is a good or bad snapshot. When it determines the final good snapshot, it should give me the option to roll back the dataset to that snapshot, and remount the dataset normally. Along the way, it should discard the temporary writable datasets it created. My dataset snapshots look like:
    $ zfs list -t snapshot [REDACTED_VOLUME_NAME]

    It messed up on this one by not knowing that ZFS clones auto-mount at a path matching their own name. It's no worse than I would have done, though, because I'm only moderately familiar with ZFS, and part of why I asked it this question was to see what sort of commands it would generate for this workflow. After running into that one problem, I was able to fix the commands by hand to specify the mount point at clone time, and got it working. The user interface was rather nice, and I didn't have to write a single line of it.
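
    The fix, in case anyone hits the same thing: a clone auto-mounts at a path derived from its own name unless you set the mountpoint property at clone time. A stripped-down sketch of the corrected bisect loop (hypothetical dataset names; the real script has more ceremony around unmounting, rollback, and cleanup):

        use strict;
        use warnings;

        my $dataset = 'tank/data';      # hypothetical
        my $mnt     = '/mnt/data';      # where the original dataset was mounted
        chomp(my @snapshots = qx{zfs list -H -o name -t snapshot $dataset});

        # Binary search for the last good snapshot; assumes the original
        # dataset is already unmounted and snapshots are listed oldest-first.
        my ($lo, $hi) = (0, $#snapshots);
        while ($lo < $hi) {
            my $mid   = int(($lo + $hi + 1) / 2);
            my $clone = 'tank/bisect_tmp';
            # The crucial part: set the mountpoint explicitly at clone time.
            system('zfs', 'clone', '-o', "mountpoint=$mnt",
                   $snapshots[$mid], $clone) == 0 or die "zfs clone failed\n";
            print "Is $snapshots[$mid] good? [y/n] ";
            chomp(my $answer = <STDIN>);
            system('zfs', 'destroy', $clone) == 0 or die "zfs destroy failed\n";
            if ($answer =~ /^y/i) { $lo = $mid }     # good: search later snapshots
            else                  { $hi = $mid - 1 } # bad: search earlier ones
        }
        print "Last good snapshot: $snapshots[$lo]\n";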

    I'll conclude with a very recent advancement: ChatGPT introduced Codex, a system that links up with your GitHub account and will spend actual minutes reading and understanding your code, and running the unit tests in something like a docker container that you set up for it. I was able to get all my Perl dependencies installed, and ChatGPT wrote a significant portion of my unit tests for Crypt::SecretBuffer. This effort wasn't nearly as big a win as I was hoping for; I started Crypt::SecretBuffer trying to see if I could get AI to write the whole thing, complete with Win32 compatibility (and I didn't even know exactly how that would work). That ended up being my downfall - I tried asking it to do things that weren't possible on Win32, and then it wrote a bunch of code that could never work. But again, it's not hard to find examples of what fails. The interesting part is that things got notably better once I introduced my AGENTS.md. Once AI starts working more like a developer who understands the codebase and knows what the goals are, and spends actual time thinking about the problem and running the tests, it's going to be a million times better than blurting out a pile of code as an off-the-cuff response to a prompt, which is most of what people have experienced with AI so far.

    This is a coming revolution, like when the Internet first appeared. It's going to change everything. It's better to follow closely on the leading edge so you don't get buried under the wave.

    Update

    If there's one conclusion I'd draw from this on the current state of AI, it's that AI does way better when you choose an intelligent implementation and ask it to write the bothersome details than when you ask it to design the implementation for you. But AI can also explain all the details of the technology you're about to use. So you can sit around asking it to compare technologies and tell you about design limitations, then make a decision about what to write, then ask it to write it for you. So a smart, technical-minded human is still *currently* required to get good results. But there's no telling whether AI will eventually be capable of that central decision role!

Re: AI in the workplace
by marto (Cardinal) on May 31, 2025 at 19:07 UTC

    Without soliciting answers from the masses, what are your own responses to these questions?

      I didn't want to influence the discussion with my own thoughts, and I thank the monks for sharing theirs.

      My observations are that AI is pretty good at analysis, but really not that good at writing code. And by writing code, I mean

      • A good description of how it's going to solve the problem;
      • A well structured piece of code that actually solves the problem; and
      • Comments in the code that explain each step.
      If you get back a bunch of code that's a mess, a mish-mash of things that were found on the net, without any comments, that's not worth much.

      And I think 'vibe coding' is a joke. I'd like to see a demo of an AI being told to write some code and actually producing something that a) works properly and b) is readable, useful code.

      Alex / talexb / Toronto

      For a long time, I had a link in my .sig going to Groklaw. I heard that as of December 2024, this link is dead. Still, thanks to PJ for all your work, we owe you so much. RIP Groklaw -- 2003 to 2013.

Re: AI in the workplace
by bliako (Abbot) on Jul 29, 2025 at 17:11 UTC

    I have the impression that LLM AI is good at summarising large chunks of text data, like FAQs and minutes of meetings. What about learning the laws of a country and being a judge? A truly blind one.

    *This* AI is not democratic in the sense that it requires huge resources to be trained and used, and very few have those. This is a huge advantage for some and a disadvantage for the rest. What happens when it becomes a paid service? Hundreds of self-taught programmers and web developers read the (cheap) books and manuals in order to roll out their own programs/ideas. This will no longer be possible with big-corporation AI. Remember Gutenberg? Or how the Altair came to be, and the first homebrew computer enthusiasts? Affordability was key, and also transparency and sharing of knowledge. Not with the big-corps' AI. It is a rabbit hole leading to a snake pit. Unless of course it becomes democratic. But it will not be. The way I see it being marketed is that big brother has the electrons, the transistors, the data, and the models, and will allow you to use them for the benefit of all of us. Yeah, right!

    It used to be that in the (so-called) free market, if an idea sucks, it sinks and the startup goes bankrupt. But with *this* AI it seems we have the same thing as with banks: too big to fail. M$, Groogle Android et al. are all trying to spoon-feed us their AI and incorporate their bots into our lives, desperately, in order to at least get some money (or sellable data) out of it, though it looks to be free.

    OTOH *this* AI seems to me to be very potent. If one ignores some pitfalls. Too bad it is in the hands of fools, maniacs and genocidals. "Confusion will be my epitaph".

    10min Update: I would say this as a conclusion to the board:

    Spend effort on trying to make AI democratic, cheaper, less demanding on resources (and the planet), and open-source, rather than pushing demand for the big-corps' AI. Learning was never about showing an end result/product; it used to be about the journey. That journey cannot be made with big-corporation AI. To support the above I would say: stop feeding them fish and teach them to catch one.

    bw, bliako

Re: AI in the workplace
by harangzsolt33 (Deacon) on Jul 27, 2025 at 00:50 UTC
    I recently asked Meta AI about computer-programming-related topics, and I was surprised by the sheer confidence it expressed while giving several wrong answers. Then when I confronted it, it kind of said yes, that is indeed true, and finished off with something like "I'm glad we figured this out," as if we had mutually helped each other come to the right answer, when in fact I knew the right answer and IT didn't. But anyway, AI does have good ideas occasionally, and it can tell you things you didn't know before. But always be more knowledgeable than the AI, because the moment you wade into territory that you know nothing about, it can lead you astray. AI is like children at a foreign school. You ask them how to say "I like you" and the kids will say "you're a pig" in their foreign language. And if you don't double-check the answer, you'll be misled and fooled. The only difference is that the kids will be giggling, while AI spews nonsense with 100% confidence and conviction. lol

      "...while AI spews nonsense with 100% confidence and conviction."

      Which, to be fair, is incredibly common in humans.

        Well, if you know nothing about a subject, you can't talk about it. AI knows a little bit about everything. For example, I know nothing about how airplane engines work, so I won't talk about it. Fuel goes in, burns, spins the propeller, pushes air out. End of story. That's all I know. And I'm not going to embellish the story, purposely misleading someone by adding a myriad of details that I have no idea about. Yes, people can do that. And if I were to do that, it would be considered either evil or mischief. But as far as AI is concerned, I don't think it is purposely designed to be mischievous or misleading. This is probably a glitch or error in the software, not purposeful design. And AI has no conscience either, so it feels no remorse or guilt. It may say things like "sorry" or "you are right," but even if it does, it's just a piece of man-made software, and at this point it is still a poor imitation of real human intelligence.
      LLMs are basically search engines with a chat interface. Instead of links, we get an averaged answer derived from often-questionable input.

      This is not logical intelligence, and people's expectations are totally overblown, because they judge the book by its cover.

      In the past, people had to learn the hard way that Google and Wikipedia often produce questionable output; they'll have to learn it again.

      But this doesn't mean we stopped using Google and WP; they found their niche in our workflows.

      And AI in general (not LLMs) might still find a robust niche in software development, but new tools require new "workflows".

      Anyway, a little personal advice: one thing you could learn from the polished way LLMs talk is not to end every post with an irritating "LOL".

      I bet your acceptance rate here would increase dramatically. :)

      Cheers Rolf
      (addicted to the Perl Programming Language :)
      see Wikisyntax for the Monastery