cavac has asked for the wisdom of the Perl Monks concerning the following question:
I played around with ChatGPT today and got it to generate me a Perl script that forks children and restarts any that failed. Here's my initial prompt:
Write a Perl program that forks 5 child processes. The main process
should handle SIGCHILD and restart any child processes that have exited.
The child processes have a function called "doWork" that have a
"Your code here" comment.
There was a bit of back-and-forth to fix some bugs and to get ChatGPT to "use English". On the plus side, ChatGPT used "use strict; use warnings;" right from the start AND used readable indenting as well (*). Here's what it came up with:
Edit: Copied an old version at first, this is now the latest version
use strict;
use warnings;
use English qw( -no_match_vars );
use POSIX qw( :sys_wait_h );
# Fork 5 child processes
my $num_processes = 5;
for (my $i = 0; $i < $num_processes; $i++) {
    my $pid = fork();
    if ($pid == 0) {
        # Child process
        doWork();
        exit;
    } elsif (!defined $pid) {
        die "Failed to fork: $OS_ERROR";
    }
}

# Handle SIGCHILD and restart any child processes that have exited
$SIG{CHLD} = sub {
    while ((my $pid = waitpid(-1, WNOHANG())) > 0) {
        my $exit_code = $CHILD_ERROR >> 8;
        if ($exit_code == 0) {
            print "Child process $pid exited successfully\n";
        } else {
            print "Child process $pid exited with code $exit_code, restarting\n";
            my $new_pid = fork();
            if ($new_pid == 0) {
                # Child process
                doWork();
                exit;
            } elsif (!defined $new_pid) {
                die "Failed to fork: $OS_ERROR";
            }
        }
    }
};

# Main process waits for child processes to exit
while (1) {
    sleep(1);
}

# Function for child processes to do work
sub doWork {
    # Your code here
    print "Child process $$ doing some work\n";
    sleep(10);
}
It's not perfect. The program doesn't exit when all children have finished, and I would probably have to prompt it to add that functionality. But overall, I'd say it looks like a reasonable base framework.
What do you think? Any other problems I didn't catch?
(*) Any chance we could use whatever OpenAI used to train ChatGPT to train our newbies here on how to format code?
Re: Did ChatGPT do a good job?
by haj (Priest) on Mar 29, 2023 at 21:38 UTC
What do you think? Any other problems i didn't catch?
Two things:
- The detection of the child process is bogus. In Perl, ($pid == 0) evaluates to true if $pid is undefined, so if forking fails, the parent will call doWork before exiting normally; the elsif (!defined $pid) branch will never be evaluated.
- $SIG{CHLD} is defined only after the round of forks. If a child dies before the sub is defined, then it won't be restarted. A sketch addressing both points follows below.
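For illustration, one way to fix both (an untested sketch, reusing the names from the original post):

use strict;
use warnings;
use English qw( -no_match_vars );
use POSIX qw( :sys_wait_h );

my $num_processes = 5;

sub spawn_child {
    my $pid = fork();
    die "Failed to fork: $OS_ERROR" unless defined $pid;   # check definedness first
    if ($pid == 0) {                                        # now 0 really means "child"
        doWork();
        exit;
    }
    return $pid;
}

# Install the handler *before* the first fork so no early exit is missed
$SIG{CHLD} = sub {
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        my $exit_code = $CHILD_ERROR >> 8;
        if ($exit_code == 0) {
            print "Child process $pid exited successfully\n";
        } else {
            print "Child process $pid exited with code $exit_code, restarting\n";
            spawn_child();
        }
    }
};

spawn_child() for 1 .. $num_processes;

sleep 1 while 1;    # main process idles, as in the original

sub doWork {
    # Your code here
    print "Child process $$ doing some work\n";
    sleep 10;
}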
Re: Did ChatGPT do a good job?
by haukex (Archbishop) on Mar 29, 2023 at 20:25 UTC
Any other problems i didn't catch?
I'd enjoy proofreading that about as much as I've enjoyed proofreading the ramblings of some of our more notorious posters. Because it's almost guaranteed to have something wrong with it (Update: I was right), it's a waste of time; better to throw it out and start fresh from reputable sources.
ChatGPT is a bullshit generator.
Large Language Models (LLMs) are trained to produce plausible text, not true statements. ChatGPT is shockingly good at sounding convincing on any conceivable topic. But OpenAI is clear that there is no source of truth during training.
So do you really want to ask someone who doesn't actually know how to program, but can write code that looks plausible, to write some code for you, and then dig through it, debug it, and fix all of the bugs?
I'll wait until they train an AI that will take your test suite and write code against it that actually works. That AI is not ChatGPT.
And even when that AI does come out: My experience is that most programmers like writing code more than they like writing tests, and writing tests takes longer anyway, so I wonder if that will catch on at all. (And don't think about having an AI write the test suite: same problem as above.)
Show me an AI that cares about correctness as much as I do and I'll be interested.
</rant> Related: Re: Is ChatGPT worth $42 a month
> I'll wait until they train an AI that will take your test suite and write code against it that actually works. That AI is not ChatGPT.
This nails it.
> And even when that AI does come out: ...
ChatGPT was trained on human text and by human trainers.
There was this Go-playing AI³ which was trained by playing against itself. I can imagine this for programming because, unlike human speech, programs are testable to a certain degree: they need to compile, not throw errors, etc.
I think in the next 5, at most 10, years we will see IDEs with attached AI to help with programming.
I'm not sure how yet, but this will come.
As an analogy, do you remember the time before Google? Could you have imagined back then that people would just search for and copy-paste code instead of doing it the hard way? It happened because searching° for code examples is cheap enough to change the "economics" of programming.
What I mean is, our way of coding changed fundamentally.
Next generation AI will add another layer to this.
> and writing tests takes longer anyway, so I wonder if that will catch on at all.
It depends on how fast and cheap this "code" is generated. (I've read that ChatGPT swallows a lot of energy right now.)
But imagine an AI which writes thousands of lines of code in the blink of an eye, where the "tests" are generated interactively by successive input from human experts saying "that's wrong", "this must be zero", etc., and the next iteration of the application is generated on the fly.
This will change the economics of coding fundamentally.
Alas, not everywhere; I wouldn't want to fly in an airplane running software which was tested by trial and error...²
Bottom line: we don't know what the game will look like in the future, because "AI" - or whatever we call it - will change the rules in ways we can't predict yet.
But we agree, ChatGPT isn't that AI.
°) And ironically it's one of Perl's biggest problems that people keep finding and copying old Perl4 code into their applications.
²) But if we are honest with ourselves, each flight accident is another iteration of trial and error, because regulations are often adjusted afterwards. We already accept that flying is sometimes deadly.
³) AlphaGo_Zero
Because it's almost guaranteed to have something wrong with it, it's a waste of time
Just because AI and ChatGPT are not perfect does not make them a waste of time. PerlMonks is not perfect but we all use it because it is valuable. Perl itself is not perfect but everyone here either likes it or is forced to use it for $work.
I am using AI a lot right now. Not just ChatGPT but other tools. They are making my work easier, quicker and sometimes better. I rarely get AI to write anything from scratch and when I do, I make sure it is tweaked accordingly. But I am using it to generate ideas for marketing materials, overviews of materials such as training courses, planning schedules and those kinds of things.
But more than that, I am using it to take long articles and summarise them into the key points. I have to read quite a bit of material and getting it summarised means that I can consume more content in less time.
To give you a Perl example of ChatGPT being very helpful...
I have recently written a module for Stripe webhooks and needed to document it. I always forget the POD syntax as I don't use it every day and I have to look it up. So I gave ChatGPT the Perl module and asked it to write the documentation using POD.
Did it get it perfect? No way...
But I don't think even the best programmers here would have either, because some of the things that needed documenting were firmly in my head and far from obvious from the source code.
I still had to write quite a bit of documentation but ChatGPT got me to a good first draft. It filled in all the syntax I needed so I could copy and paste and fill in the rest. It saved me a few hours of work and got me started on something that I had been procrastinating on.
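(For anyone who, like me, forgets it: the skeleton I'm talking about is essentially the standard POD boilerplate below; the module and method names here are made up, not my actual Stripe module.)

=head1 NAME

Local::Stripe::Webhook - handle incoming Stripe webhook events

=head1 SYNOPSIS

    use Local::Stripe::Webhook;
    my $hook = Local::Stripe::Webhook->new(secret => $signing_secret);

=head1 METHODS

=head2 new

Creates a handler object. Takes the webhook signing secret.

=cut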
Just because AI and ChatGPT is not perfect does not make it a waste of time.
That's taking what I wrote very much out of context. I did not call them a waste of time, I called proofreading code written by ChatGPT (and our local trolls) a waste of time. Both the article and my other node that I linked to name some useful uses of AI and ChatGPT - did you read either? Or did you use ChatGPT to summarize them, which might explain the misunderstanding? ;-P
Edit: Added last sentence ;-)
If haukex is thinking what I'm thinking, a different way to phrase it is that it is often more effort to find bugs in code that looks pretty good than to write it from scratch. This is the flip-side of the "Not-Invented-Here Syndrome". Sometimes I re-invent things just because I don't want to deal with someone else's hidden assumptions and design limitations that I won't realize until I'm way too committed to building on top of someone else's system. And, that's for programmers who know what they're doing. I looked at this code and didn't catch either of the bugs even after 3 reads, but I wouldn't have made either mistake if I wrote it myself. If I used this code, it would be a time bomb waiting to go off and debugging it would use up more time than it would take to write that boilerplate by hand.
Meanwhile I've found ChatGPT to be an amazing search engine. I can ask an abstract question that would be hard to google and it will pop out example code or paths or config files, and then I use the keywords in those examples to go look up the actual documentation. It's also generally faster and more to the point than google because I don't have to dig through spam results and run into a bunch of articles targeted at the wrong experience level that spend 50 pages explaining things I already know, or get distracted reading some flame war on a mailing list from 5 years ago.
G'day Bod,
I use module-starter with the Module::Starter::PBP plugin: I like the templating features and have written my own custom templates; I'm less interested in the "Perl Best Practices" features and have removed a fair bit of that.
With this, I can create skeleton modules including POD, a number of standard test files, and other associated files.
Just like you, I need to add the program code and POD details.
My results are consistent every time, which you won't get with ChatGPT.
Furthermore, it does a lot more work than ChatGPT and, I'm reasonably certain, with less effort and in a shorter time.
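(For anyone wondering how that's wired up: module-starter reads its settings from ~/.module-starter/config; mine looks roughly like this, with the name, email and path obviously just placeholders.)

author: A. U. Thor
email: author@example.com
builder: ExtUtils::MakeMaker
plugins: Module::Starter::PBP
template_dir: /home/ken/.module-starter/PBP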
Consider the following which only took about a minute.
ken@titan ~/tmp/pm_11151331_module_starter
$ module-starter --module=Nod::To::Bod
Added to MANIFEST: Changes
Added to MANIFEST: lib/Nod/To/Bod.pm
... multiple similar lines for other files created ...
Created starter directories and files
ken@titan ~/tmp/pm_11151331_module_starter
$ cd Nod-To-Bod/
ken@titan ~/tmp/pm_11151331_module_starter/Nod-To-Bod
$ perldoc lib/Nod/To/Bod.pm
... displays a couple of screenfuls with TODOs where details need to be added ...
ken@titan ~/tmp/pm_11151331_module_starter/Nod-To-Bod
$ perl Makefile.PL; make; make test
Checking if your kit is complete...
Looks good
Generating a Unix-style Makefile
Writing Makefile for Nod::To::Bod
Writing MYMETA.yml and MYMETA.json
cp lib/Nod/To/Bod.pm blib/lib/Nod/To/Bod.pm
Manifying 1 pod document
PERL_DL_NONLAZY=1 "/home/ken/perl5/perlbrew/perls/perl-5.36.0/bin/perl.exe" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00-load.t ............. 1/1 # Testing Nod::To::Bod 0.001
t/00-load.t ............. ok
t/99-00_pod.t ........... ok
t/99-01_pod_coverage.t .. ok
t/99-02_manifest.t ...... ok
All tests successful.
Files=4, Tests=5, 1 wallclock secs ( 0.01 usr 0.03 sys + 0.29 cusr 0.50 csys = 0.84 CPU)
Result: PASS
And, of course, I have the added benefit that none of this code ever hallucinates. :-)
Re: Did ChatGPT do a good job?
by hippo (Bishop) on Mar 30, 2023 at 08:41 UTC
for (my $i = 0; $i < $num_processes; $i++) {
A C-style for loop I can forgive, but doing all that with a control variable which then isn't even used inside the loop is not going to cut it.
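Something along these lines (untested, keeping the names from the original post) would do the same job without the dummy variable:

for (1 .. $num_processes) {
    my $pid = fork();
    die "Failed to fork: $OS_ERROR" unless defined $pid;
    if ($pid == 0) {    # child
        doWork();
        exit;
    }
}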
Given that it's trained on human code, your "a control variable which then isn't even used" probably needs an "anymore" added. I don't know about others, but that's a sin I committed many, many, many times: you try all kinds of stuff to get code working the way you want it right now (*), including adding all sorts of counters to support debug output. When the stuff finally works, you remove the prints and then close the file as fast as possible, leaving all the unused and dead code alone, because you don't want to break it again by accident.
(*) Friday evening. All coworkers have already gone home for the weekend. You are desperate to finish the current task any way possible so you can leave, too. There's always time to do a cleanup of the code next week, right?
I don't think it learned to mimic "sins".
It's just averaging a collage of different real-life sources it found.
From my understanding, it'll fail at tasks with no prior boilerplate available on the web.
And ChatGPT shouldn't be capable of porting a piece of foreign code to Perl.
(But this could be a task machine learning might achieve, because it's testable and trainable)
Re: Did ChatGPT do a good job?
by Bod (Priest) on Mar 29, 2023 at 23:11 UTC
Any chance we could use whatever OpenAI used to train ChatGPT to train our newbies here on how to format code
I predict that will be a big use of AI in the not-too-distant future.
As I see it, AI is most powerful at taking something that already exists and summarising it, documenting it, altering it, etc. Much more than it is good at writing things from scratch.
Re: Did ChatGPT do a good job?
by Bod (Priest) on Mar 29, 2023 at 23:31 UTC
Following this trial by cavac, the next post I looked at was Print %HoH values
So, I thought I would try out ChatGPT to debug the code in that question. I'll leave you to decide how well it did, but I feel it would, at the very least, help me with starting the debugging process. It failed to suggest adding use strict.
------------------------------------------
It looks like there is a syntax error in the Perl code that might be causing the issue with the output not being displayed properly. Specifically, the "+" symbol at the end of the line in the inner for loop seems to be an error.
Here is the corrected code:
for my $dia (sort keys %HoH) {
    print "<br>$dia: <br>";
    for my $horarios (sort keys %{$HoH{$dia}}) {
        print "$horarios=>" . $HoH{$dia}->{$horarios} . "<br>";
    }
}
print "VALUE:<br>";
print ${"2023-01-01"}{"00:00"};
In the last line, the code is attempting to print the value of the key "00:00" in the hash associated with the key "2023-01-01". However, it is not clear from the code snippet whether this key exists in the hash %{"2023-01-01"}. If this key does not exist, the output will be undefined.
> Specifically, the "+" symbol at the end of the line in the inner for loop seems to be an error.
You have to click download before copying the code. Did you?
It seems like you copied the Monastery's wrap-around symbol "+" and fed it to the beast.
> key exists in the hash %{"2023-01-01"}
For me it's a good hint at what's wrong, though.
Alas, ChatGPT was trained on human text, not Perl code; that's what makes this whole discussion bogus.
It doesn't grasp that dia and horarios are Spanish for what "2023-01-01" and "00:00" stand for, and it thinks %{"2023-01-01"} is a legit hash.
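What the original code presumably wanted (my guess, reusing the names from that thread) is simply a lookup inside %HoH rather than a symbolic reference:

print $HoH{"2023-01-01"}{"00:00"}, "<br>";    # the date and time are keys of %HoH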