Are you sure that your numbers are correct and that the Perl parser really is the bottleneck? I created the following, very simple script that may reproduce your situation in a pure-Perl environment, and it takes 10 seconds (total) to complete, spending about 4 seconds parsing the generated file:
#!/usr/bin/perl -w
use strict;
use File::Temp qw(:mktemp);

my $REs = 9000;
my ($filename) = mktemp('tmpfileXXXXX');
my $lines = 0;
my $template = q{
sub is_%s {
  my $re = qr(^%s(\\1)$);
  return shift =~ $re;
};
};

print "Generating $REs regular expressions in $filename\n";
open FH, ">", $filename
  or die "Couldn't create $filename: $!";
for my $i (1..$REs) {
  my $name = "re_${i}_";
  my $code = sprintf $template, $name, $name;
  $lines += () = ($code =~ /\n/msg);
  printf FH $code
    or die "Couldn't write template to $filename: $!";
};
close FH;

my $start = time();
system($^X, "-w", $filename) == 0
  or die "Couldn't spawn created file $filename: $!/$?";
my $stop = time();

my $duration = $stop - $start;
print "It took me $duration seconds to parse $REs regular expressions ($lines lines)\n";
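For reference, here is what one pass through the loop writes to the file (the expansion of the template for $i == 1):

sub is_re_1_ {
  my $re = qr(^re_1_(\1)$);
  return shift =~ $re;
};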
Now, my regular expressions are quite simple and the rest is even simpler, but my machine is a P-II 400 with 256 MB RAM, so if anything it should run even slower than your program ...
Maybe you can optimize or simplify the generated Perl code? Maybe the data structures are inefficient? Maybe file I/O or network I/O is the bottleneck?
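If you want to see where the time actually goes, the core Benchmark module can separate reading the file from compiling it. A minimal sketch, assuming your generated file is called generated_res.pl (a hypothetical name, substitute your own):

#!/usr/bin/perl -w
use strict;
use Benchmark qw(timethese);

my $filename = 'generated_res.pl';   # hypothetical; whatever your generator writes

# Slurp the file once so 'compile only' below measures no disk I/O
open my $fh, '<', $filename or die "Can't read $filename: $!";
my $code = do { local $/; <$fh> };
close $fh;

timethese(10, {
    'file I/O only' => sub {
        open my $in, '<', $filename or die "Can't read $filename: $!";
        my $text = do { local $/; <$in> };
        close $in;
    },
    'compile only' => sub {
        local $^W = 0;               # silence "Subroutine redefined" warnings
        eval $code; die $@ if $@;    # recompiles all the subs each iteration
    },
});

If 'compile only' dwarfs 'file I/O only', the parser really is your problem; otherwise look at the I/O side first.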
perl -MHTTP::Daemon -MHTTP::Response -MLWP::Simple -e ' ; # The
$d = new HTTP::Daemon and fork and getprint $d->url and exit;#spider
($c = $d->accept())->get_request(); $c->send_response( new #in the
HTTP::Response(200,$_,$_,qq(Just another Perl hacker\n))); ' # web
In reply to Re: Saving parsed perl for faster loading? by Corion
in thread Saving parsed perl for faster loading? by wimpie