Both reading the log and updating the database are apt to be I/O bound, so threads or a forked process make sense (particularly if the log and the db are on different spindles). I'm not too familiar with Perl threads, so let's fork the db updater with a pipe from the parent to ship data over.
That leaves out your desire to accumulate a large instruction, possible autoflushing of the pipe, @SIG{'CHLD','PIPE'} handling and/or wait, and much error handling. Take it as a skeleton.

    sub update_db    { # ... }
    sub mung_logline { # ... }

    pipe my($rd, $wr);
    my $cpid;
    {
        $cpid = fork;
        die $! if not defined $cpid;
        unless ($cpid) {            # child: read munged lines, update the db
            close $wr;
            my $dbh = DBI->connect(,,,);
            while (<$rd>) {
                update_db( $dbh, $_);
            }
            close $rd;
            exit 0;
        }
    }
    close $rd;                      # parent: read the log, feed the pipe
    {
        open my $fh, '<', '/path/to/log.file' or die $!;
        while (<$fh>) {
            $_ = mung_logline($_);
            print $wr $_;
        }
        close $fh;
    }
    close $wr;
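For what the skeleton leaves out, one possible shape of the flushing, %SIG handling, and wait is sketched below. It reuses the $wr and $cpid names from the skeleton; the error messages are made up.

    # Autoflush the pipe so the child sees each line promptly.
    select((select($wr), $| = 1)[0]);
    # or, with IO::Handle loaded:  $wr->autoflush(1);

    # Die cleanly if the child exits and a write hits a broken pipe.
    $SIG{PIPE} = sub { die "db updater went away: $!" };

    # After the final close $wr, reap the child and check its status.
    waitpid $cpid, 0;
    die "db updater exited with status ", $? >> 8, "\n" if $?;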
It may be best to produce a $sth = $dbh->prepare('whatever ?') with placeholders right after the $dbh is obtained. Then you can pass the already-prepared $sth to update_db instead of $dbh.
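For illustration, that might look like the following in the child. The table, column names, and tab-separated line format are assumptions, not anything from the original:

    my $dbh = DBI->connect($dsn, $user, $pass, { RaiseError => 1 });
    # Prepare once, right after connecting; execute per line.
    my $sth = $dbh->prepare(
        'INSERT INTO hits (host, path, ts) VALUES (?, ?, ?)'
    );

    while (<$rd>) {
        chomp;
        update_db($sth, $_);
    }

    sub update_db {
        my ($sth, $line) = @_;
        $sth->execute(split /\t/, $line);   # assumes tab-separated fields
    }

Preparing once amortizes the statement-parse cost over every log line, which is the point of moving prepare out of update_db.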
After Compline,
Zaxo
In reply to Re: faster with threads?
by Zaxo
in thread faster with threads?
by js1