Thanks for all your comments. I am sorry I was too terse in my description. Allow me to explain a bit more.
Firstly, my production code is CPU bound; the example above is not. My single-threaded production code keeps one core of my 4-core box at 100%. I can run the program four times and get the CPU up to about 70-80% across all 4 cores, but if one program has a fault I need to kill the other three and manually tidy up before I can restart. I thought a single program running four threads might be a little faster and, more importantly, much cleaner to manage when things go wrong.
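For what it's worth, the layout I have in mind is one process with a small pool of worker threads pulling file names from a shared queue. Here is a minimal sketch (not my production code; process_file below is just a stand-in for the real per-file work, and the undef markers are how I signal end-of-work to each thread):

#!/usr/bin/perl
# Sketch only: four worker threads draining a shared queue of file names.
use strict;
use warnings;
use threads;
use Thread::Queue;

my $queue = Thread::Queue->new();
$queue->enqueue( glob "SyslogR_*.txt" );   # the test files created below
$queue->enqueue( (undef) x 4 );            # one end-of-work marker per worker

my @workers = map {
    threads->create( sub {
        while ( defined( my $file = $queue->dequeue() ) ) {
            process_file($file);           # stand-in for the real CPU-bound work
        }
    } );
} 1 .. 4;

$_->join() for @workers;                   # wait for all workers to finish

sub process_file {
    my ($file) = @_;
    open my $IN, '<', $file or die "Cannot read $file : $!";
    my $lines = 0;
    $lines++ while <$IN>;
    close $IN;
    print "$file: $lines lines\n";
}

The attraction is that if one worker dies everything is still in one process, so tidying up is a single kill and rerun rather than hunting down three sibling processes.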
As I said, the example is only intended to show the problem I have run into. I expected the output of the example program to show the processing times for the individual files going up to, say, 3 or 4 seconds, with the overall execution time of the three *_process calls staying about the same. Instead, thread_process takes about 20 times longer to read the same 24 MB while the CPU remains idle.
If you want to try this at home, the following code creates suitably named and sized test files.
#!/usr/bin/perl
# - Creating some sample 4MB files for testing threads.
use strict;
use warnings;

my @files = qw(
    SyslogR_20110615_134243.txt
    SyslogR_20110615_162146.txt
    SyslogR_20110620_090237.txt
    SyslogR_20110620_090308.txt
    SyslogR_20110620_092240.txt
    SyslogR_20110620_092328.txt
);

for my $file (@files) {
    open my $OUT, ">", $file or die "Cannot write to $file : $!";
    my $line = "123456789 " x 25;
    for ( my $i = 0; $i < 4e6 / 250; $i++ ) {
        print $OUT $i, " ", $line, "\n";
    }
    close $OUT;
}