Angharad has asked for the wisdom of the Perl Monks concerning the following question:
Quite simple really, but what I don't understand is why the code won't print out the @code array unless I run the script twice. The file the subroutine reads is created on the first run, but I think it isn't written out quickly enough to be processed by the subroutine in the same run. Hence the subroutine only prints the array on the second run (because by then the file has obviously been completely written).

```perl
#!/usr/local/bin/perl -w
# General perl libraries
use strict;
use English;
use DBI;

# output file
my $outputfile = $ARGV[0];
open(FILE, ">>$outputfile") || die "Error: Can't open $outputfile for writing: $!\n";

my $dbh = DBI->connect("dbi:Pg:database;host=dbhost", "user", "userpwd",
                       {AutoCommit => 1});
my $sth = $dbh->prepare("Tried and tested SQL query that simply returns a list of id numbers");
$sth->execute();

# print values
while (my $ref = $sth->fetchrow_hashref()) {
    print FILE "$ref->{'id'}\n";
}
$sth->finish();

# disconnect from database
$dbh->disconnect();

SendtoPrepare($outputfile);

###################################################################
sub SendtoPrepare {
    my ($file) = @_;
    print "$file\n";
    open(INFILE, "$file") || die "Error: Can't open $file for reading: $!\n";
    my @code = <INFILE>;
    print "@code\n";
}
```
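For reference, the usual cause of this symptom is not the file being created too slowly but the `FILE` handle never being closed: Perl buffers output, so the data may still be sitting in the buffer when `SendtoPrepare` opens the file for reading. A minimal sketch of the fix, with the DBI fetch loop replaced by a stand-in loop and a hypothetical file name (`ids.txt`), since the replies are not quoted here:

```perl
#!/usr/local/bin/perl
use strict;
use warnings;

# Hypothetical file name for illustration (the original takes it from @ARGV)
my $outputfile = 'ids.txt';
unlink $outputfile;    # start fresh so the append below is predictable

# Stand-in for the DBI fetch loop: write a few fake id numbers
open(my $out, '>>', $outputfile)
    or die "Error: Can't open $outputfile for writing: $!\n";
print $out "$_\n" for (101, 102, 103);

# The key step: close the handle so buffered output is flushed to disk
# before anything tries to read the file back.
close($out) or die "Error: Can't close $outputfile: $!\n";

SendtoPrepare($outputfile);

sub SendtoPrepare {
    my ($file) = @_;
    open(my $in, '<', $file)
        or die "Error: Can't open $file for reading: $!\n";
    my @code = <$in>;
    close($in);
    print @code;
}
```

With the `close` in place the script prints all three ids on the first run; without it, whether the read sees anything depends on when the stdio buffer happens to flush.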
Replies are listed 'Best First'.
Re: slowing down perl script to ensure file is written before program tries to process it in subroutine
by Skeeve (Parson) on Sep 01, 2006 at 14:21 UTC
by aquarium (Curate) on Sep 01, 2006 at 14:25 UTC | |
Re: slowing down perl script to ensure file is written before program tries to process it in subroutine
by Angharad (Pilgrim) on Sep 01, 2006 at 14:28 UTC