Update: Workers remove the input file after running.
Update: Changed from FS to RS option.
Update: The OP mentioned having a big file containing sequence records. I added the FS option (later changed to RS, per the update above) to chunk the input file by records rather than by lines. This works quite well: a chunk_size value of 100 means 100 records, not 100 lines.
There are many possibilities with various modules on CPAN. Below, I describe one way using MCE. I can follow up with another post showing a version that unlinks the tmp files orderly while running, useful when processing thousands of chunks.
MCE::Signal provides a $tmp_dir location. MCE itself is a chunking engine, so each chunk comes with a chunk_id value. The sprintf is used mainly to get ordered output when running cat *.out.
#!/usr/bin/env perl

use strict;
use warnings;

use MCE::Signal qw($tmp_dir);
use MCE::Flow;

my $proteinFile = shift;

mce_flow_f {
   RS => "\n>", chunk_size => 100, max_workers => 20,
   use_slurpio => 1
},
sub {
   my ($mce, $slurp_ref, $chunk_id) = @_;

   # pad with zeros -- 4 digits; e.g. 0001, 0002, ...
   $chunk_id = sprintf "%04d", $chunk_id;

   # create input file for java
   open my $out_fh, ">", "$tmp_dir/$chunk_id.in"
      or die "open error: $!";
   print $out_fh $$slurp_ref;
   close $out_fh;

   # launch java
   system(
      "java -Xmx300m java_code/ALPHAtest -a tables/A.TRAINED" .
      " -e tables/E.TRAINED -c tables/conf.tat" .
      " -f $tmp_dir/$chunk_id.in > $tmp_dir/$chunk_id.out"
   );

   # unlink input file after running
   unlink "$tmp_dir/$chunk_id.in";

}, $proteinFile;

# the tmp_dir is removed automatically when the script terminates
system("cd $tmp_dir; cat *.out");
In reply to Re: Run a script in parallel mode
by marioroy
in thread Run a script in parallel mode
by Anonymous Monk