Update: Increased chunk size to 400.
Below is a parallel version, with chunking enabled, of the solution provided by monk Laurent_R. I ran it against an input file containing 500k records.
Serial: 2.574 seconds. Parallel: 0.895 seconds, which includes the time to fork and reap the child workers under a Unix environment. The resulting output file contains 500k lines.
The test machine is a 2.6 GHz Haswell Core i7 with RAM at 1600 MHz.
Optionally, the script can receive the input_file and output_file as arguments.
   use strict;
   use warnings;

   use MCE::Loop;
   use MCE::Candy;

   my $input_file  = shift || 'input.txt';
   my $output_file = shift || 'output.txt';

   open my $ofh, ">", $output_file
      or die "cannot open '$output_file' for writing: $!\n";

   MCE::Loop::init {
      use_slurpio => 1, chunk_size => 400, max_workers => 4,
      gather => MCE::Candy::out_iter_fh($ofh),
      RS => "\nINTERPOLATED HYDROGRAPH",
   };

   ## Each worker receives many records, determined by chunk_size.
   ## Output order is preserved via MCE::Candy::out_iter_fh.

   mce_loop_f {
      my ( $mce, $chunk_ref, $chunk_id ) = @_;

      open my $ifh, "<", $chunk_ref;   # in-memory filehandle over the chunk
      my $output = "";

      while ( my $line = <$ifh> ) {
         chomp $line;   # remove newline character from end of line

         if ( $line =~ /INTERPOLATED HYDROGRAPH AT (\w+)$/ ) {
            $output .= $1;

            $line = <$ifh> for 1..6;             # skip 5 lines, read the 6th
            my $val2 = (split / /, $line)[1];    # get the second column
            $output .= " $val2";

            $line = <$ifh> for 1..2;             # skip 1 line, read the 2nd
            chomp $line;
            my $val3 = (split / /, $line)[-1];   # get the last column
            $output .= " $val3\r\n";
         }
      }

      close $ifh;
      MCE->gather( $chunk_id, $output );

   } $input_file;

   close $ofh;
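The per-record parsing logic can be exercised serially without MCE, using an in-memory filehandle over a single sample record. The record below is hypothetical data I made up to mimic the expected layout (header line, the second-column value 6 lines down, the last-column value 2 lines after that); the station name and numbers are assumptions, not from the real input format.

```perl
use strict;
use warnings;

# Hypothetical sample record (layout assumed from the parsing code above)
my $record = <<'END';
INTERPOLATED HYDROGRAPH AT STA01
skip1
skip2
skip3
skip4
skip5
DAY 42.5 extra
skip6
END1 END2 99.9
END

# Open an in-memory filehandle on the string, same as the worker does
# on each chunk reference.
open my $ifh, '<', \$record or die "cannot open in-memory handle: $!";

my $output = '';
while ( my $line = <$ifh> ) {
   chomp $line;
   if ( $line =~ /INTERPOLATED HYDROGRAPH AT (\w+)$/ ) {
      $output .= $1;
      $line = <$ifh> for 1..6;             # skip 5 lines, read the 6th
      my $val2 = (split / /, $line)[1];    # second column
      $output .= " $val2";
      $line = <$ifh> for 1..2;             # skip 1 line, read the 2nd
      chomp $line;
      my $val3 = (split / /, $line)[-1];   # last column
      $output .= " $val3\n";
   }
}
close $ifh;

print $output;   # STA01 42.5 99.9
```

Running the full script is simply "perl script.pl input.txt output.txt"; with no arguments it falls back to input.txt and output.txt.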
Kind regards, Mario.
In reply to Re: Perl solution for current batch file to extract specific column text
by marioroy
in thread Perl solution for current batch file to extract specific column text
by oryan