in reply to best way to fast write to large number of files
Depending on your operating system (and potentially your network setup), opening and closing a file are relatively expensive operations. As you seem to have all the data in memory already, it might be faster to group the data by customer first and then print it out in one go for each customer:
my %files;
foreach my $CDR (@RECList) {
    my ($filename, $row) = split /,/, $CDR, 2;   # split off the customer name; keep the rest of the record intact
    $files{ $filename } ||= [];                  # start with an empty array
    push @{ $files{ $filename } }, $row;         # append the row to that customer's array
};

# Now print out the data, one file per customer
for my $filename (sort keys %files) {
    open my $csv_fh, '>>', "/ClientRrecord/$filename.csv"
        or die "couldn't open [$filename.csv]\n" . $!;
    print { $csv_fh } map { "$_\n" } @{ $files{ $filename } };   # print out all lines (with newlines)
    close $csv_fh;
};
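This way each customer file is opened and closed once per run, instead of once per record. As a minimal, self-contained sketch of how it behaves, here is a small driver; the sample records and the temporary output directory are assumptions for illustration only, not from the original post, which reads from @RECList and writes under /ClientRrecord:

#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# Hypothetical sample CDRs, assumed to be "customer_id,rest of record"
my @RECList = (
    'CUST001,2014-06-23 10:00:00,45,0.12',
    'CUST002,2014-06-23 10:01:00,30,0.08',
    'CUST001,2014-06-23 10:05:00,120,0.30',
);

my $outdir = tempdir( CLEANUP => 0 );   # stand-in for /ClientRrecord

# Group rows by customer, then write each customer's file in one go
my %files;
for my $CDR (@RECList) {
    my ($filename, $row) = split /,/, $CDR, 2;
    push @{ $files{$filename} }, $row;
}

for my $filename (sort keys %files) {
    open my $csv_fh, '>>', "$outdir/$filename.csv"
        or die "couldn't open [$filename.csv]: $!";
    print {$csv_fh} map { "$_\n" } @{ $files{$filename} };
    close $csv_fh;
}

print "wrote ", scalar(keys %files), " files under $outdir\n";

With the three sample records above, this produces CUST001.csv (two lines) and CUST002.csv (one line), with only one open/close per customer.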