I think what is meant is that the standard approach is to have your own "hand-coded" module of subroutines for your system which, in particular, should have two subroutines, say getF and putF, that handle reading a file into a hash and writing it back out, for example:
sub getF
{
# warning: untested "dreamware" ... I'll test later and
# (if necessary) repost a correction
# General routine to read any file into a hash
# (key-indexed) whose targets are anonymous hashes
# (indexed by your column names)
my ( $hashAdr, # reference of hash to write back to - this one
$fileHandle, # best not to hard-code these around the place
# but to have one subroutine per system
# that allocates unique ones, which would
# therefore get called in the main program
$fileName,
$fieldDelim, # best use only one per PROJECT
# and if in doubt use "$;"
$colNamArrAdr, # address of array of the column names -
# best to use all upper or all lower case
# if possible.
$indexPosn # can be e.g. "3" . $; . "4" to say
# that the index should be composed of columns
# 4 and 5 (surpri-ise!).
) = @_;
%$hashAdr = (); # initialise the hash
while( <$fileHandle> )
{
chomp; # strip the trailing newline (chomp is safer than chop here)
my @cols = split( $fieldDelim, $_ );
my $index = join( "", @cols[ split( $;, $indexPosn ) ] );
# above splits $indexPosn on $; and joins the chosen columns,
# so it handles 1 or more index columns
foreach ( @$colNamArrAdr )
{
$$hashAdr{ $index }{ $_ } = shift( @cols );
}
}
return;
}
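# Here is an untested illustration (not from the original post) of
# how getF might be called from a main program; the file name,
# column names and tab delimiter below are made up for the example:
my %records;
my @colNames = ( "CUSTNO", "NAME", "BALANCE" );
open( my $custFile, "<", "customers.dat" ) or die "can't open customers.dat: $!";
getF( \%records, $custFile, "customers.dat", "\t", \@colNames, 0 );
close( $custFile );
# now e.g. $records{ "12345" }{ "NAME" } holds that row's NAME column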
# the corresponding put routine is obvious from this, but
# I'll just mention the important loop:
foreach my $key ( sort( keys( %$hashAdr ) ) )
{
my @output = ();
foreach ( @$colNamArrAdr )
{
push( @output, $$hashAdr{ $key }{ $_ } );
}
print $fileHandle join( $fieldDelim, @output ), "\n";
}
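For completeness, here is an untested sketch (in the same "dreamware" spirit as above) of what the full putF might look like, built around that loop; the parameter list simply mirrors getF, minus the index position, which is not needed on output:
sub putF
{
# General routine to write the hash back out to a file, one line
# per key, with the columns in the order given by the column-name
# array (the inverse of getF above)
my ( $hashAdr, # reference of the hash to write out
$fileHandle, # filehandle already opened for writing
$fileName, # unused here, kept for symmetry with getF
$fieldDelim, # same delimiter that getF used
$colNamArrAdr # address of array of the column names
) = @_;
foreach my $key ( sort( keys( %$hashAdr ) ) )
{
my @output = ();
foreach ( @$colNamArrAdr )
{
push( @output, $$hashAdr{ $key }{ $_ } );
}
print $fileHandle join( $fieldDelim, @output ), "\n";
}
return;
}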