Is there any reason why you can't just use push()? After all, is that not the whole point of tying a file to an array?
use Tie::Array::CSV;
my $filename = 'tied.csv';
tie my @file, 'Tie::Array::CSV', $filename;
push(@file,[4,5,6]);
untie @file;
re "I suppose I could read each....": Alternately, see perldoc -f sysread, seek, write,, inter alia.
re "some system to automatically manage all those copies of copies that keep piling up every day," that's a solved problem Google or Super Search this site for threads dealing with log management.
Either you load your file back into an array and push your data onto that array, or you open the file in append mode and add your data at the end. (Of course, there are many other possibilities, as TIMTOWTDI...) The first solution seems more reliable, though, in terms of keeping the format consistent.
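Something like the following, for instance. This is only a rough sketch of the first approach (read everything, push, write the whole file back), assuming Text::CSV_XS; the file name and the new row are just examples.
use Text::CSV_XS;

my $csv  = Text::CSV_XS->new({ binary => 1, eol => "\n" });
my $file = 'data.csv';    # example file name

# Load the existing rows into an array of array references.
open my $in, '<', $file or die "Can't read $file: $!";
my @rows = @{ $csv->getline_all($in) };
close $in;

# Push the new data onto the array...
push @rows, [ 4, 5, 6 ];

# ...then write the whole file back out.
open my $out, '>', $file or die "Can't write $file: $!";
$csv->print($out, $_) for @rows;
close $out;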
Consider using Berkeley DB, SQLite, or some other database instead of CSV files. Your two dozen CSV files might be replaced with two dozen corresponding tables—or possibly just one table, if the CSV files share a common definition.
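For instance, here is a minimal sketch of the SQLite route using DBI; it assumes DBD::SQLite is installed, and the database, table, and column names are only illustrative.
use DBI;

# Connect to (or create) a local SQLite database file.
my $dbh = DBI->connect('dbi:SQLite:dbname=results.db', '', '',
    { RaiseError => 1, AutoCommit => 1 });

# One table roughly corresponding to one of the CSV files.
$dbh->do('CREATE TABLE IF NOT EXISTS results (a INTEGER, b INTEGER, c INTEGER)');

# Appending records is just an INSERT; no copying or renaming of files.
my $sth = $dbh->prepare('INSERT INTO results (a, b, c) VALUES (?, ?, ?)');
$sth->execute(4, 5, 6);

$dbh->disconnect;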
Jim
Thanks for the quick replies.
Re. Perldoc's sysread - I see I can read a certain number of bytes from the end of the file, which definitely helps as the files grow bigger, but don't I need to read a certain number of LINES to put the .csv into an array? That would be neat. Do I need to open another question for this?
Re. backup management - I certainly can do that, but from my POV it makes things slightly more complicated than I want: today's "new" file is tomorrow's "old" one, and it has to be found in the old location under the old filename. Simply appending a few lines seems easier. That said, I already plan to control which scripts are executed with a relatively simple BASH program, so I can add a line at the end, once everything else is done, to copy the "new" files into the place of the "old" ones. That's one solution.
Re. push(@file,[4,5,6]); - this sounds exactly like what I was looking for. Alternatively, I see I can open files in append mode but then I'm not sure how I could add multiple lines.
For brevity, in append mode, I'd have to have something like this
$csv->print($fh, $row);
going in a loop before I close $fh?
push() looks a bit more appropriate; will go and test it.
Re. databases - I plan to work with these files on two different computers alternately, so sometimes I have to add only one or two lines and sometimes I need to catch up and add twenty or even two hundred. It's easier to have them in files I can just copy over with a USB stick if the need arises. With the Perl scripts on the same stick I can do my thing on practically any computer, and I upload the results to be seen online anyway.
Re. Perldoc's sysread - I see I can read a certain number of bytes from the end of the file, which definitely helps as the files grow bigger, but don't I need to read a certain number of LINES to put the .csv into an array? That would be neat. Do I need to open another question for this?
I don't understand why sysread was even mentioned in the context of your simple problem of appending lines to existing files. Low-level I/O doesn't lend anything useful to you in the trivial case of your small text files of CSV data.
Alternatively, I see I can open files in append mode but then I'm not sure how I could add multiple lines. … For brevity, in append mode, I'd have to have something like this $csv->print($fh, $row); going in a loop before I close $fh?
Yep, that's the usual way, I think.
use autodie qw( open close );
use Text::CSV_XS;

my $csv = Text::CSV_XS->new({ binary => 1, eol => "\n" });
my @records = get_records();

# Open in append mode and print one CSV row per record.
open my $fh, '>>', $file;
for my $record (@records) {
    $csv->print($fh, $record);
}
close $fh;
But based on your explanation of the requirement to create a new CSV file each time you add records to it and to keep the old file around as part of a file backup management strategy, I don't think you really want to append lines to an existing CSV file in place, do you? You simply want to create a new file each time you have new records to add to it. Your file naming convention might use ISO 8601 timestamps for the "old" files. First, rename the file to its new "old" name (e.g., one with the current timestamp in it), then open the just-renamed file for reading and open the main file name for writing. This seems like the simplest strategy to me and it's the one I regularly use.
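For instance, here is a minimal sketch of that strategy, assuming Text::CSV_XS; the file name, the timestamp format, and get_new_records() are only placeholders.
use POSIX qw( strftime );
use Text::CSV_XS;

my $file = 'results.csv';
my $old  = sprintf '%s.%s', $file, strftime('%Y%m%dT%H%M%S', localtime);

# 1. Rename the current file to its timestamped "old" name.
rename $file, $old or die "Can't rename $file to $old: $!";

my $csv = Text::CSV_XS->new({ binary => 1, eol => "\n" });

# 2. Copy the old rows into a fresh file under the main name...
open my $in,  '<', $old  or die "Can't read $old: $!";
open my $out, '>', $file or die "Can't write $file: $!";
while (my $row = $csv->getline($in)) {
    $csv->print($out, $row);
}
close $in;

# 3. ...then add the new records (get_new_records() is hypothetical).
$csv->print($out, $_) for get_new_records();
close $out;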
Re. databases - I plan to work with these files on two different computers alternately, so sometimes I have to add only one or two lines and sometimes I need to catch up and add twenty or even two hundred. It's easier to have them in files I can just copy over with a USB stick if the need arises. With the Perl scripts on the same stick I can do my thing on practically any computer, and I upload the results to be seen online anyway.
As an employee of a well-known international advisory services company, I'm obliged to recommend to you that you store your Big Data in the cloud. ☺
Jim