in reply to AI in the workplace

Just a tangential comment.

I run a web application that distributes work to people and collects it back. There is a "Download a random file" button which not only downloads the file but also assigns the file to the user in a bookkeeping system. A few days ago, one of the users complained that they clicked on the button several times quickly, got assigned several files in the system, but only one file was downloaded.

Instead of asking here, I tried asking an AI model we run at work for testing purposes. It told me several ways to fix the "double-submit" problem, and I was able to fix the application quickly.
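For the curious, here is a minimal sketch of one common client-side fix (the names, like `onceUntilDone`, are mine, not from the AI's answer): wrap the click handler so that repeated clicks are ignored while the first request is still in flight. A robust fix would also make the server-side assignment idempotent, since nothing stops a user from opening two tabs.

```javascript
// Wrap an async handler so calls are dropped while one is still running.
function onceUntilDone(handler) {
  let busy = false;
  return async function (...args) {
    if (busy) return;              // drop clicks while a request is in flight
    busy = true;
    try {
      return await handler.apply(this, args);
    } finally {
      busy = false;                // re-arm once the request completes
    }
  };
}

// Simulated handler: counts how many times the "assignment" would run.
let assignments = 0;
const download = onceUntilDone(async () => {
  assignments += 1;
  await new Promise((resolve) => setTimeout(resolve, 50)); // fake request
});

// Three rapid "clicks": only the first one reaches the handler.
Promise.all([download(), download(), download()]).then(() => {
  console.log(assignments); // 1
});
```

In a real page you would attach the wrapped handler to the button (and typically also disable the button visually), but the guard above is the essence of it.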

The downside? No interaction with people, no XP shared with anyone answering my question. It took me years to learn English up to the level where I could dare to ask a question online, and it still makes me step out of my comfort zone to overcome my shyness. But I love that I can do that. I feel ashamed that I talked to the AI instead.

map{substr$_->[0],$_->[1]||0,1}[\*||{},3],[[]],[ref qr-1,-,-1],[{}],[sub{}^*ARGV,3]

Re^2: AI in the workplace
by LanX (Saint) on Jun 01, 2025 at 10:27 UTC
    > It took me years to learn English up to the level I could dare to ask a question online

    And I suppose you were able to ask the AI in Czech too.

    Edit

    BTW and probably also tangent.

    The whole situation reminds me of when Google search became popular and many "developers" stopped reading manuals.

    Cheers Rolf
    (addicted to the Perl Programming Language :)
    see Wikisyntax for the Monastery

      For some reason, I didn’t this time. But it’s interesting how different the answers tend to be depending on whether or not the question is in English.

      For example, when I played with the AI, I remembered a short story by a Soviet author (or authors; I think it was by the Strugatsky brothers) that I read in the 80s. It was about someone who built a computer that was able to ask or answer any question. The protagonist had to find a question the computer couldn’t answer in order to proceed, and when the computer was explaining the rules, it said: "You can’t ask questions with false presuppositions like ‘Is it really you, Ivan Ivanovich?’ or ‘Why do ghosts have their hair cut short?’" (In the end, the question that bricked the computer was "Can you find a question to which you can’t find an answer?"). So I tried asking the AI about ghosts’ haircuts. In English, the answer was rather long, explaining that a ghost’s head is usually covered in a sheet so we don’t see its hair, blah blah. In Czech, the answer was quite short: "Because they’re afraid of long cuts." (Sounds like a punchline, but it’s not funny.)

      map{substr$_->[0],$_->[1]||0,1}[\*||{},3],[[]],[ref qr-1,-,-1],[{}],[sub{}^*ARGV,3]
        On another note, I usually activate subtitles on YouTube videos, because I'm often unsure about all those English accents.

        It's often funny to see some of the results, like "fartland"° for "fatherland", or "David Davis the breakfast coordinator in brothels" for "Brexit coordinator in Brussels".

        And these are the basis for subtitles in other languages, which reduces the chances for human error correction (what?) to step in.

        It also reassures me that I'm not the problem when I struggle to understand English speakers. ;)

        Cheers Rolf
        (addicted to the Perl Programming Language :)
        see Wikisyntax for the Monastery

        °) actually "Furzland", because it defaulted to German subtitles

        > "Can you find a question to which you can’t find an answer?"

        I was expecting this; self-referential problems are at the core of many proofs, like Gödel's incompleteness theorems or Turing's halting problem.

        Your author just wrapped a story around it. :)

        Which is also a reason why you can't, in general, prove that algorithms are correct.

        To get back to the topic, the problem with AI/LLMs at the moment is that they are based on statistical pattern matching, not logic.

        Simplified: they might produce solutions based on assumptions like "all odd numbers are prime", because they saw the pattern hold up to 7.

        (I hope there is a God protecting us once a vital system encounters 9. ;)

        This is of course simplified, because LLMs are normally trained on human input that makes clear that 9 isn't prime.

        But the lack of more human input - the Internet is finite - is the sharpest needle pointing at the current bubble.

        Still, AI is very good in areas which don't require logic, like English orthography, and helping me find the distinction between "proofs" and "proves" ;)

        Cheers Rolf
        (addicted to the Perl Programming Language :)
        see Wikisyntax for the Monastery