in reply to Re^2: Incorporating ChatGPT into PerlMonks
in thread Incorporating ChatGPT into PerlMonks

You do have a point here, but because of the way LLMs work, they tend to produce answers that sound highly plausible while containing hallucinations, and that's where I see Brandolini's law kicking in, possibly requiring more than a single order of magnitude. The Wikipedia article linked above quotes:
In fast-changing fields, like information technology, refutations lag nonsense production to a greater degree than in fields with less rapid change.
Try digging for gold in locked_user sundialsvc4's posts: there are gems, but they are buried in misleading crap that can look like diamonds to a newbie's eyes. ChatGPT's training data certainly contains valuable insights, but also enough contaminated crap that it is irresponsible to leave the task of sorting it all out "as an exercise for the reader".

Replies are listed 'Best First'.
Re^4: Incorporating ChatGPT into PerlMonks
by LanX (Saint) on Oct 15, 2024 at 08:10 UTC
    I recently asked DDG/ChatGPT how much bigger Mexico is than Germany.

    It gave me the correct areas in million square kilometers (1.9 vs 0.35) and then went on to claim 14 times!!!

    It has gotten better in the meantime (5.5 times), but probably because of my training, not because the AI was capable of questioning its own statement.
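
    (For the record, the correct ratio is trivial to check with a few lines of Perl; the areas below are approximate, in million square kilometres:)

        #!/usr/bin/perl
        use strict;
        use warnings;

        # approximate areas in million square kilometres
        my $mexico  = 1.96;
        my $germany = 0.357;

        # the ratio the AI should have reported
        printf "Mexico is about %.1f times the size of Germany\n",
            $mexico / $germany;    # prints 5.5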

    This is just a tiny glimpse of the problem with code that is generated the same way as such chat answers ...

    That said, AIs can already facilitate code generation, but only when supervised by an expert.

    Now, PerlMonks' intention is not to be a code-writing service, but to help people understand how to code Perl. We want them to become experts.

    Encouraging them to use generated code without understanding it is the direct opposite of that intention.

    Cheers Rolf
    (addicted to the Perl Programming Language :)
    see Wikisyntax for the Monastery