Depending on your operating system (and potentially your network setup), opening and closing a file are relatively expensive operations. As you seem to have all the data in memory already, it might be faster to group the data by customer first and then print each customer's rows in one go:
my %files;
foreach my $CDR (@RECList) {
    my ($filename, $row) = split /,/, $CDR, 2;  # limit to 2 fields so commas within the row survive
    $files{ $filename } ||= [];                 # start with an empty array
    push @{ $files{ $filename } }, $row;        # append the row to that customer's array
};

# Now print out the data
for my $filename (sort keys %files) {
    open my $csv_fh, '>>', "/ClientRrecord/$filename.csv"
        or die "couldn't open [$filename.csv]\n" . $!;
    print { $csv_fh } map { "$_\n" } @{ $files{ $filename } };  # print all lines (with newlines)
    close $csv_fh;
};
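Untested, but for illustration, here is a minimal, self-contained sketch of the same idea with made-up sample records and /tmp as the output directory (both are assumptions, not your actual data or paths):

use strict;
use warnings;

# hypothetical sample records in "filename,rest-of-row" form
my @RECList = (
    'customer_a,2019-01-01,00:03:12,0.05',
    'customer_b,2019-01-01,00:10:45,0.17',
    'customer_a,2019-01-02,00:00:58,0.02',
);

my %files;
for my $CDR (@RECList) {
    my ($filename, $row) = split /,/, $CDR, 2;  # keep commas within the row
    push @{ $files{$filename} }, $row;          # autovivifies the array for new customers
}

for my $filename (sort keys %files) {
    open my $csv_fh, '>>', "/tmp/$filename.csv"
        or die "couldn't open [$filename.csv]: $!";
    print {$csv_fh} map { "$_\n" } @{ $files{$filename} };
    close $csv_fh or warn "couldn't close [$filename.csv]: $!";
}

The point of the design is that each output file gets opened and closed exactly once per run, no matter how many rows it receives: after this run, /tmp/customer_a.csv holds two rows and /tmp/customer_b.csv one.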
In reply to Re: Delay when write to large number of file
by Corion
in thread best way to fast write to large number of files
by Hosen1989