in reply to What's your view on AI coding assistants?

My answer: it depends. I have had mixed results with them. In some cases, they have told me that certain libraries do things that I know, from my own personal experience, they do not; when I pointed this out, they politely apologized, offered me another solution that I knew would not work, and then politely apologized again and referred me back to the original "solution" they had provided. In other cases, I have asked them to integrate a solution into my existing code, and they have broken it so badly that I've had to restore from my most recent commit to fix it.

But in other cases, they have been very helpful: pointing me in the right direction toward a tool or library I didn't know existed, combing through large quantities of data more quickly than I could write a regexp for, or even, in some cases, templating out a whole project in a language I didn't know very well to give me a good starting point. I suppose their utility is probably inversely proportional to my own skill level in a given area, or my base competency in solving a particular problem. At worst, they can cost me an hour or two going down a garden path; at best, they can save me literal weeks of work. So it depends.


Re^2: What's your view on AI coding assistants?
by LanX (Saint) on Oct 31, 2025 at 10:31 UTC
    > I suppose their utility is probably inversely proportional to my own skill level in a given area, or my base competency in solving a particular problem.

    Yes, that's basically it.

    When we solve a problem, we intuitively try out strategies from solving other, similar problems.

    If you don't have much prior exposure to a specialized field, LLMs can give you a start by finding or mixing known solutions they were trained on, but only if those solutions already exist.

    They are basically sophisticated search engines with a better language interface.

    Here is an anecdote where OpenAI embarrassed themselves.

    It highlights that LLMs are not good at reasoning (thinking) but at remembering (reproducing).

    So as long as you ask for things that require reproducing learned knowledge, they are beneficial. But when asking them to solve new problems, not so much.

    from https://techcrunch.com/2025/10/19/openais-embarrassing-math/ (emphasis added)

      OpenAI’s ‘embarrassing’ math

      “Hoisted by their own GPTards.”

      That’s how Meta’s chief AI scientist Yann LeCun described the blowback after OpenAI researchers did a victory lap over GPT-5’s supposed math breakthroughs.

      Google DeepMind CEO Demis Hassabis added, “This is embarrassing.”

      The Decoder reports that in a since-deleted tweet, OpenAI VP Kevin Weil declared that “GPT-5 found solutions to 10 (!) previously unsolved Erdős problems and made progress on 11 others.” (“Erdős problems” are famous conjectures posed by mathematician Paul Erdős.)

      However, mathematician Thomas Bloom, who maintains the Erdos Problems website, said Weil’s post was “a dramatic misrepresentation” — while these problems were indeed listed as “open” on Bloom’s website, he said that only means “I personally am unaware of a paper which solves it.”

      In other words, it’s not accurate to claim GPT-5 was able to solve previously unsolved problems. Instead, Bloom wrote, “GPT-5 found references, which solved these problems, that I personally was unaware of.”

      Sebastien Bubeck, an OpenAI researcher who’d also been touting GPT-5’s accomplishments, then acknowledged that “only solutions in the literature were found,” but he suggested this remains a real accomplishment: “I know how hard it is to search the literature.”

    Cheers Rolf
    (addicted to the Perl Programming Language :)
    see Wikisyntax for the Monastery

      “Hoisted by their own GPTards.”

      It makes me sad that they got the expression "Hoist with his own petard" wrong, as so many do. In Elizabethan English, "hoist" was the past participle of "hoise".

      And to excuse it as being translation into Modern English doesn't fly, because the entire expression makes no sense in Modern English. It can only be understood by its original context, in Hamlet.

        yes:

        It can only be understood by its original context, in Hamlet.

        but:

        Darmok and Jalad at Tanagra

        This is something that actors (and directors) of Shakespeare have to deal with in every rehearsal: words meant different things to Shakespeare's audiences than they do to ours, and part of the actor's job is to understand those words (as best as possible) in the context in which Shakespeare meant them, and then to make them meaningful for the living audience hearing them today. So that meaning is constantly created and re-created through shifting contexts and usages.

        "Hoist," in the transitive sense of "to lift," has been with us for a long time, and is enough a part of the modern English vernacular that most people won't have a hard time understanding it, especially thanks to the ubiquity of the metaphor from Hamlet. No translation needed... except perhaps insofar as "hoist," as Hamlet means it, is itself a metaphor for being blown to bits.

        "Petard," on the other hand, was becoming obsolete in English by the time Shakespeare used it -- he used obsolete diction somewhat regularly, though we have to speculate about his motives -- and this is a word most people will understand only through the metaphor from Hamlet, usually, correctly, in terms of "foiled by his own plan." That being said, I don't think it matters that the audience doesn't know that a petard is analogous to a bomb in the modern sense, as long as the actor does and can say it meaningfully. And just so, I think LeCun understood the meaning well enough to create a somewhat poetical flourish that makes ChatGPT analogous to Hamlet's engineer's bomb.

        Which is my long-winded way of saying I agree with you in the strictest technical sense of language as a specification, but not in the sense of language as a practice with everyday uses and customs. But if language as practice did not have power and precedent over language as specification, none of us would have any idea what "Darmok and Jalad at Tanagra" means.

      All in all, artificial and human intelligence teamed up to push the envelope of stupidity even further. The good news for this team is that Einstein conjectured, I guess after ample experience, that the stupidity horizon is infinite. So there is plenty of ground to cover before this bubble bursts.

      I also see how minuscule these "academic" types, for example the "mathematician Thomas Bloom", seem today. The maintainer of *THE* Erdos Problems website was unaware of 10 solutions and 11 more breakthroughs in the one and only field his site specialises in and is advertised for. Bloom gets away with just "I was unaware".

        The website already lists 1105 problems, and its creator doesn't claim that his hobby project is "*THE* Erdos Problems website"°

        If you had a grasp of how many publications appear every day, you'd know how hard it is to read even just the relevant ones.

        And Erdős had such a prolific output and impact on the math community that the Erdős number was created in his honor.

        °) Which, BTW, can't be very old. From the FAQ:

        > This website was made by Thomas Bloom, a mathematician who likes to think about the problems Erdős posed. Technical assistance with setting up the code for the website was provided by ChatGPT ¹ and the logo was made by Midjourney. Since this website was launched, many people have helped with spotting typos and pointing to updated references, or suggesting new problems. These people are credited individually under each relevant problem.

        ¹) LOL

        Cheers Rolf
        (addicted to the Perl Programming Language :)
        see Wikisyntax for the Monastery