in reply to Re^4: AI in the workplace
in thread AI in the workplace

This node falls below the community's minimum standard of quality and will not be displayed.

Replies are listed 'Best First'.
Re^6: AI in the workplace
by marto (Cardinal) on Jul 28, 2025 at 05:26 UTC

    It's not "just what I say", it's the culmination of the votes cast within the last 12 months by people using the site. The point being: you now claim it's a good idea, while your own previous posts are critical of AI providing answers to such questions. Consistently inconsistent, and demonstrably exhibiting the facets you're critical of.

      (a longer meditation)

      Many search engines nowadays come up with an AI-generated preface summing up the results before showing the hits.

      Of course we are not a search engine nor a code-writing service, but a community tool helping others become better programmers.

      I initially said it's a horrible idea, because the way harangzsolt is proposing it would lead to too many problems (apart from implementation problems).

      But let's try:

      So I took the liberty of feeding a current question into Duck.AI using ChatGPT 4o:

      > Why is "any" slow in this case?

      Answer:

      It makes for interesting reading - as always with AI - but requires deeper analysis to spot disguised nonsense. (Disclaimer: I didn't do one.)

      Definitely nothing an amateur could handle. But an expert can draw inspiration from this.

      For instance, I was intrigued by the idea of $1 being slow because it can't optimize numification (point 2).

      So I asked for clarification, guessing this was about dual-values.

      > why is numification of read-only slower

      Answer:

      Again interesting, but I'm not convinced. The claim (point 2) that "read-only variables do not benefit from caching" can't be reproduced, because $1 is indeed also a dual-value.

      Is another caching mechanism meant? Nope, ChatGPT confirms that dual-values are meant:

      MY CONCLUSION (so far):

      LLM output can inspire good ideas in our context but requires an expert to deal with it. The wordy answers are often full of hidden pitfalls, and contrary to a human being, the LLM doesn't even try to cross-check what it (hear)says.

      So

      • Is there a meaningful way to integrate an LLM into our question/answer process?

        I don't know. But using normal questions as prompts is obviously a horrible idea.

      • But can we stop amateurs posting AI-generated texts here in order to pose as experts?

        Hardly. This is most likely already happening, at least with AnoMonk posts.

      • Will we be forced to increasingly deal with AI here?

        Very likely, at least by requiring us to raise the quality standards of answers or to demand POCs.

      Cheers Rolf
      (addicted to the Perl Programming Language :)
      see Wikisyntax for the Monastery

      IN HINDSIGHT

      This should probably better be a meditation in its own thread.

      Disclaimer

      This test is far from methodical. I ran this conversation with Duck.AI after the original question had already received many answers. Hence the replies might already have been training data influencing the LLM.

        The performance drop in "ugly" compared to "ugly_cr" can be attributed to the fact that `$1` and `$2` are read-only and are re-evaluated each time they are used in a comparison. This means that every time you check `$1` or `$2`, Perl has to numify them again, which adds overhead. In contrast, "ugly_cr" assigns these values to lexicals, which are faster to access.

        Perl does have to numify them again, but not for the reason given.

        no warnings qw( void );
        use Devel::Peek qw( Dump );

        "a" =~ /(.)/s or die;

        0+$1;        # Fetch and numify
        Dump($1);

        my $x = $1;  # Fetch
        Dump($1);

        Output (abridged):

        ...
        FLAGS = (GMG,SMG,POK,pIOK,pNOK,pPOK)
        ...
        FLAGS = (GMG,SMG,POK,pPOK)
        ...

        As you can see, $1 gets numified. But every time you read from it, it gets repopulated since it's a magic variable. This wipes the previous values.

        In the context of analyzing the AI's answer, it's worth noting that I missed the repeated numification in my answer. I stopped too soon.
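The effect of that repeated magic fetch can also be shown numerically; here is a minimal sketch (the variable names and the toy input are mine, not from the benchmark in the original question):

```perl
use strict;
use warnings;

"42" =~ /(\d+)/ or die;

# Every read of $1 invokes get-magic, which repopulates the scalar
# from the regexp engine and discards any cached numeric value.
my $total = 0;
$total += $1 for 1 .. 3;    # three magic fetches, three numifications

# Copying into a lexical pays the magic fetch once; the lexical is a
# plain scalar, so its numeric value is cached after the first use.
my $v = $1;
my $total2 = 0;
$total2 += $v for 1 .. 3;   # one fetch, numified once

print "$total $total2\n";   # 126 126
```

The lexical copy is exactly what the _cr variants in the discussed benchmark do: pay the magic fetch once, then work with a plain scalar.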

        The "any_cr" method is slower than "any" because of the additional overhead of assigning values to lexicals before performing the checks.

        That can't be true, since ugly_cr is way faster than ugly. The actual culprit is the overhead added by capturing.

        In the context of analyzing the AI's answer, it's worth noting the response is self-contradicting. According to the AI, assigning to the lexicals makes the cr version faster by only doing numification once, but it makes the cr version slower because of the addition of an assignment.
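The capture overhead itself is easy to measure in isolation; a rough sketch using the core Benchmark module (the sub names and the toy string are mine, not the subs from the original question):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $str = ("x" x 100) . "7";

# Same match, with and without a capture group. The capturing version
# must maintain the capture buffers ($1, @-, @+) on every match.
cmpthese(100_000, {
    capture    => sub { my $hit = $str =~ /(\d)/ },
    no_capture => sub { my $hit = $str =~ /\d/   },
});
```

On a typical perl the no_capture variant comes out measurably ahead; the exact ratio varies by version and platform, which is why timings like this support, rather than prove, the explanation above.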

Re^6: AI in the workplace
by LanX (Saint) on Jul 28, 2025 at 13:38 UTC
    > That's not the second worst node of the year. That's just what you say.

    "Worst Nodes of The Year" has a well defined meaning in the monastery.

    Worst Nodes of The Year

        #  Node                                  Author         Rep
        1  new perl distribution                 pault          -21
        2  Incorporating ChatGPT into PerlMonks  harangzsolt33   -17  <----
        3  Re^4: set proto string                vincentaxhe    -14

    Insisting on semantics by introducing personal metrics for good and bad won't help you here.

    Now I even played along and tried a POC (or rather a nonPOC) of your "idea" here°.

    You are more than welcome to demonstrate a positive outcome by creating automated prompts for OPs stemming from real world discussions in the monastery.

    IOW "ideas" are not enough as long as you can't demonstrate how this is supposed to work out.

    As long as you just repeatedly throw your creativity + LOLs at us, many people won't take you seriously and will reward you with down-votes.

    Cheers Rolf
    (addicted to the Perl Programming Language :)
    see Wikisyntax for the Monastery

    °) Re^7: AI in the workplace (... in the Monastery)

      ""Worst Nodes of The Year" has a well defined meaning in the monastery."

      They are aware. See also the posts complaining about downvoting of "good ideas" & "automated downvotes" etc.

      Consider, then, the assertion that it's still a "good idea" to provide such slop as an automated response when someone posts in SoPW, for someone else to come and clean up while likely wasting the user's time.