PerlMonks

Re: Did ChatGPT do a good job?

by haukex (Archbishop)
on Mar 29, 2023 at 20:25 UTC ( [id://11151327] )


in reply to Did ChatGPT do a good job?

Any other problems I didn't catch?

I'd enjoy proofreading that about as much as I've enjoyed proofreading the ramblings of some of our more notorious posters. Because it's almost guaranteed to have something wrong with it (Update: I was right), it's a waste of time; better to throw it out and start fresh from reputable sources.

ChatGPT is a bullshit generator.

Large Language Models (LLMs) are trained to produce plausible text, not true statements. ChatGPT is shockingly good at sounding convincing on any conceivable topic. But OpenAI is clear that there is no source of truth during training.

So do you really want to ask someone who doesn't actually know how to program, but can write code that looks plausible, to write some code for you, and then dig through it, debug it, and fix all of the bugs?

I'll wait until they train an AI that will take your test suite and write code against it that actually works. That AI is not ChatGPT.

And even when that AI does come out: My experience is that most programmers like writing code more than they like writing tests, and writing tests takes longer anyway, so I wonder if that will catch on at all. (And don't think about having an AI write the test suite: same problem as above.)
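To make the idea concrete, here is a minimal sketch of what "a test suite for the AI to code against" could look like, using Test::More. The function my_sum() and the cases are invented purely for illustration; the point is that the suite, not the prose prompt, defines what "actually works" means.

```perl
# Hypothetical example: a tiny Test::More suite that any generated
# implementation of my_sum() would have to satisfy.
use strict;
use warnings;
use Test::More tests => 3;

# Implementation under test (this is what the AI would be asked to produce).
sub my_sum { my $total = 0; $total += $_ for @_; return $total }

is( my_sum(),        0, 'empty list sums to zero' );
is( my_sum(1, 2, 3), 6, 'small positive integers' );
is( my_sum(-5, 5),   0, 'negatives cancel' );
```

An implementation only counts as correct when the whole suite passes; anything the suite doesn't check remains, as noted above, unverified.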

Show me an AI that cares about correctness as much as I do and I'll be interested.

</rant> Related: Re: Is ChatGPT worth $42 a month

Replies are listed 'Best First'.
Re^2: Did ChatGPT do a good job?
by LanX (Saint) on Mar 30, 2023 at 01:00 UTC
    > I'll wait until they train an AI that will take your test suite and write code against it that actually works. That AI is not ChatGPT.

    This nails it.

    > And even when that AI does come out: ...

    ChatGPT was trained with human text and human trainers.

    There was this Go-playing AI³ which was trained by playing against itself. I can imagine this for programming, because unlike human speech, programs are testable to a certain degree: they need to compile, run without throwing errors, etc.

    I think that within the next 5, at most 10, years we will see IDEs with AI attached to help with programming.

    I'm not sure how yet, but this will come.

    As an analogy, do you remember the time before Google? Could you have imagined back then people just searching and copy-pasting code instead of doing it the hard way? It's because searching° for code examples is cheap enough to change the "economics" of programming.

    What I mean is, our way of coding changed fundamentally.

    Next generation AI will add another layer to this.

    > and writing tests takes longer anyway, so I wonder if that will catch on at all.

    It depends on how fast and cheaply this "code" is generated. (I've read that ChatGPT consumes a lot of energy right now.)

    But imagine an AI which writes thousands of lines of code in the blink of an eye, with the "tests" generated interactively by successive input from human experts saying "that's wrong", "this must be zero", etc., and the next iteration of the application generated on the fly.

    This will change the economics of coding fundamentally.

    Alas, not everywhere; I wouldn't want to fly in an airplane running software that was tested by trial and error...²

    Bottom line: we don't know what the game will look like in the future, because "AI" - or whatever we call it - will change the rules in ways we can't predict yet.

    But we agree, ChatGPT isn't that AI.

    Cheers Rolf
    (addicted to the 𐍀𐌴𐍂𐌻 Programming Language :)
    Wikisyntax for the Monastery

    °) And ironically, one of Perl's biggest problems is that people keep finding and copying old Perl 4 code into their applications.

    ²) But if we are honest with ourselves, each flight accident is another iteration of trial and error, because regulations are often adjusted afterwards. We already accept that flying is sometimes deadly.

    ³) AlphaGo_Zero

Re^2: Did ChatGPT do a good job?
by Bod (Parson) on Mar 29, 2023 at 23:22 UTC
    Because it's almost guaranteed to have something wrong with it, it's a waste of time

    Just because AI and ChatGPT are not perfect does not make them a waste of time. PerlMonks is not perfect but we all use it because it is valuable. Perl itself is not perfect but everyone here either likes it or is forced to use it for $work.

    I am using AI a lot right now. Not just ChatGPT but other tools. They are making my work easier, quicker, and sometimes better. I rarely get AI to write anything from scratch, and when I do I make sure it is tweaked accordingly. But I am using it to generate ideas for marketing materials, overviews of materials such as training courses, planning schedules, and those kinds of things.

    But more than that, I am using it to take long articles and summarise them into the key points. I have to read quite a bit of material and getting it summarised means that I can consume more content in less time.

    To give you a Perl example of ChatGPT being very helpful...
    I have recently written a module for Stripe webhooks and needed to document it. I always forget the POD syntax, as I don't use it every day, and have to look it up. So I gave ChatGPT the Perl module and asked it to write the documentation using POD.

    Did it get it perfect? No way...
    But I don't think even the best programmers here would have either, because some of the things that needed documenting were firmly in my head and far from obvious from the source code.

    I still had to write quite a bit of documentation but ChatGPT got me to a good first draft. It filled in all the syntax I needed so I could copy and paste and fill in the rest. It saved me a few hours of work and got me started on something that I had been procrastinating on.
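    For readers who, like Bod, forget the POD syntax: the kind of skeleton involved is roughly the following. The module name and headings here are invented for illustration (this is not Bod's actual module), but the =head1/=head2/=cut directives and the standard section names are the part that is easy to forget and tedious to retype.

```perl
package My::Stripe::Webhook;   # hypothetical module name

=head1 NAME

My::Stripe::Webhook - handle incoming Stripe webhook events

=head1 SYNOPSIS

    use My::Stripe::Webhook;
    my $hook = My::Stripe::Webhook->new( secret => $signing_secret );

=head1 DESCRIPTION

(Prose description goes here.)

=head1 METHODS

=head2 new

(Constructor arguments and behaviour go here.)

=head1 AUTHOR

(Author and licence details go here.)

=cut

1;
```

    Generating this scaffolding is exactly the mechanical part that a tool can do, leaving the human to fill in the knowledge that is "firmly in my head".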

      Just because AI and ChatGPT is not perfect does not make it a waste of time.

      That's taking what I wrote very much out of context. I did not call them a waste of time, I called proofreading code written by ChatGPT (and our local trolls) a waste of time. Both the article and my other node that I linked to name some useful uses of AI and ChatGPT - did you read either? Or did you use ChatGPT to summarize them, which might explain the misunderstanding? ;-P

      Edit: Added last sentence ;-)

        > written by ChatGPT (and our local trolls)

        FWIW: I have suspected many times that some of our local trolls are actually AIs using us as a training ground.

        I don't think that sounds too weird anymore.

        Cheers Rolf
        (addicted to the 𐍀𐌴𐍂𐌻 Programming Language :)
        Wikisyntax for the Monastery

        That's taking what I wrote very much out of context

        Yes - I see that...

        I was suffering from that very human problem of reading and replying when it was late; I was tired and thinking mostly of getting away from the screen and into bed. Maybe the lack of fatigue is one area where AI will excel, despite its lack of creativity and ingenuity.

      If haukex is thinking what I'm thinking, a different way to phrase it is that it is often more effort to find bugs in code that looks pretty good than to write it from scratch. This is the flip-side of the "Not-Invented-Here Syndrome". Sometimes I re-invent things just because I don't want to deal with someone else's hidden assumptions and design limitations that I won't realize until I'm way too committed to building on top of someone else's system. And that's for programmers who know what they're doing.

      I looked at this code and didn't catch either of the bugs even after 3 reads, but I wouldn't have made either mistake if I wrote it myself. If I used this code, it would be a time bomb waiting to go off, and debugging it would use up more time than it would take to write that boilerplate by hand.

      Meanwhile I've found ChatGPT to be an amazing search engine. I can ask an abstract question that would be hard to google, and it will pop out example code or paths or config files, and then I use the keywords in those examples to go look up the actual documentation. It's also generally faster and more to the point than Google because I don't have to dig through spam results, run into a bunch of articles targeted at the wrong experience level that spend 50 pages explaining things I already know, or get distracted reading some flame war on a mailing list from 5 years ago.

        Meanwhile I've found ChatGPT to be an amazing search engine. I can ask an abstract question that would be hard to google

        Agreed - that's one of the best uses I have found for ChatGPT, Bard and other AI services...

      G'day Bod,

      I use module-starter. I use the Module::Starter::PBP plugin; I like the templating features and have written my own custom ones; I'm less interested in the "Perl Best Practices" features and have removed a fair bit of that.

      With this, I can create skeleton modules including POD, a number of standard test files, and other associated files. Just like you, I need to add the program code and POD details. My results are consistent every time, which you won't get with ChatGPT. Furthermore, it does a lot more work than ChatGPT and, I'm reasonably certain, with less effort and in a shorter time. Consider the following which only took about a minute.

      ken@titan ~/tmp/pm_11151331_module_starter $ module-starter --module=Nod::To::Bod
      Added to MANIFEST: Changes
      Added to MANIFEST: lib/Nod/To/Bod.pm
      ... multiple similar lines for other files created ...
      Created starter directories and files

      ken@titan ~/tmp/pm_11151331_module_starter $ cd Nod-To-Bod/

      ken@titan ~/tmp/pm_11151331_module_starter/Nod-To-Bod $ perldoc lib/Nod/To/Bod.pm
      ... displays a couple of screenfuls with TODOs where details need to be added ...

      ken@titan ~/tmp/pm_11151331_module_starter/Nod-To-Bod $ perl Makefile.PL; make; make test
      Checking if your kit is complete...
      Looks good
      Generating a Unix-style Makefile
      Writing Makefile for Nod::To::Bod
      Writing MYMETA.yml and MYMETA.json
      cp lib/Nod/To/Bod.pm blib/lib/Nod/To/Bod.pm
      Manifying 1 pod document
      PERL_DL_NONLAZY=1 "/home/ken/perl5/perlbrew/perls/perl-5.36.0/bin/perl.exe" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
      t/00-load.t ............. 1/1 # Testing Nod::To::Bod 0.001
      t/00-load.t ............. ok
      t/99-00_pod.t ........... ok
      t/99-01_pod_coverage.t .. ok
      t/99-02_manifest.t ...... ok
      All tests successful.
      Files=4, Tests=5, 1 wallclock secs ( 0.01 usr 0.03 sys + 0.29 cusr 0.50 csys = 0.84 CPU)
      Result: PASS
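      For context, the consistency Ken mentions comes from a static config file rather than a prompt. A Module::Starter config (typically ~/.module-starter/config) looks roughly like this; the values below are illustrative placeholders, not Ken's actual settings:

```
author: Your Name
email: you@example.com
builder: ExtUtils::MakeMaker
plugins: Module::Starter::PBP
template_dir: /home/you/.module-starter/PBP
```

      Because the templates under template_dir are plain files you edit once, every generated module comes out the same way every time.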

      And, of course, I have the added benefit that none of this code ever hallucinates. :-)

      — Ken
