There's always a better way. ;-)
I'd do something like this (untested):
my $sth1 = $dbh->prepare("INSERT INTO dev.blocks (names) VALUES(?)");
my $sth3 = $dbh->prepare("INSERT INTO dev.vials (block_id, lt, well_position, gene, barcode) VALUES(?,?,?,?,?)");

for my $file (@files) {
    open my $fh, '<', "/home/mydir/data/test_files/$file"
        or die "can't open $file : $!\n";

    while (my $line = <$fh>) {
        chomp $line;
        my ($well, $gene, $lt, $barcode, $name) = split(/\t/, $line);

        eval { $sth1->execute($name); };
        if ($@) { die "database error 1: ", $dbh->errstr; }

        my $id = $dbh->{mysql_insertid};

        eval { $sth3->execute($id, $lt, $well, $gene, $barcode); };
        if ($@) { die "database error 2: ", $dbh->errstr; }
    }
    close($fh);
}
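This assumes $dbh was created with RaiseError turned on, so that a failed execute actually throws and the eval blocks have something to catch. A minimal connection sketch (the DSN, user name, and password below are placeholders, not anything from the original post):

    use DBI;

    # Placeholder connection details - adjust to match your own setup.
    my $dbh = DBI->connect(
        "DBI:mysql:database=dev;host=localhost",
        "username", "password",
        { RaiseError => 1, AutoCommit => 1 },
    ) or die "can't connect to database: ", DBI->errstr;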
Notes:
After the line with the split statement, you should probably add something like this for each field you expect to read in:
    unless ($well) {
        die "missing value for well in file $file, line $.\n";
    }
Note that the special variable $. contains the current line number of the file being read. You could also change the 'die' into a 'print' (to write errors into a log file, maybe), followed by a 'next' (to skip to the next line).
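For example, a rough sketch of that log-and-skip variant (the $log file handle is assumed to have been opened elsewhere; it is not part of the original code):

    unless ($well) {
        # Record the problem instead of dying, then skip this line.
        print $log "missing value for well in file $file, line $.\n";
        next;
    }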
The code you posted has a lot of problems. For example, you never put anything into the @files array before looping over it, so that loop has nothing to work on (see the sketch below for one way to fill it). You also can't use a variable called $file both to hold the file name and as the file handle, and the file handle you close should be the same one you opened.
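As a rough sketch of how @files could be filled before the loop (assuming the files to load all end in .txt and sit in the same directory the loop already uses - adjust the pattern to match your real file names):

    use File::Basename qw(basename);

    # Keep just the file names, since the loop builds the full path itself.
    my @files = map { basename($_) } glob("/home/mydir/data/test_files/*.txt");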