in reply to Re: How can I deal with those big table data?
in thread How can I deal with those big table data?


Thank you for your help! I am a beginner at Perl, sorry for all my silly mistakes! You are right: I need to read in two files. The first part
works fine, but the second part does not. I have now fixed it a little and tested it; it gives me only one letter of
the data, but my data should be the whole sequence, and my hash does not print out anything either. Please help!


#!/usr/bin/perl -w
use strict;
my ($file_list, $file_data)=@ARGV;
my %MYHASH;
#create hash
sub do_hash {
    my $filename=shift;
    open(FH, $filename) or die "Can't open $filename: $!\n";
    while(<FH>){
        my ($Name, $Data)=split,1;
        while (my ($key, $value)=each (%MYHASH)){
            print $Name,"=>",$Data," ";
        }
    }
    close FH;
}
do_hash('file_data');
exit;

Re: Re: Re: How can I deal with those big table data?
by graff (Chancellor) on May 19, 2002 at 22:40 UTC
    Okay, nice to hear things are improving. Now, regarding this line:
    my ($Name, $Data)=split,1;
    You need to look at "perldoc -f split", because the above is probably not doing what you intend. You also need to make sure that your reading logic matches the structure of the input file.

    I think what you're intending there is something like:

    my ($Name, $Data) = split( /\s+/, $_, 2 );
    Note the LIMIT of 2: a LIMIT of 1 would return the whole line as a single field. With 2, even though the line contains lots of space-separated tokens, the line is split only at the first run of whitespace: the part before it becomes $Name, and everything after it becomes $Data.
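    Putting that advice together, here is a minimal sketch of a corrected script. The two-column "Name Data" file layout and the variable names are assumptions carried over from the original post; adjust the split pattern if your file is tab-delimited or otherwise structured.

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;

    my %MYHASH;

    # Read whitespace-separated "Name Data" lines into %MYHASH.
    sub do_hash {
        my ($filename) = @_;
        open( my $fh, '<', $filename ) or die "Can't open $filename: $!\n";
        while ( my $line = <$fh> ) {
            chomp $line;
            next unless $line =~ /\S/;    # skip blank lines
            # LIMIT of 2: split only at the first run of whitespace,
            # so $Data keeps the entire remaining sequence intact.
            my ( $Name, $Data ) = split /\s+/, $line, 2;
            $MYHASH{$Name} = $Data;
        }
        close $fh;
    }

    my ( $file_list, $file_data ) = @ARGV;

    # Pass the filename variable from @ARGV, not the literal string 'file_data'.
    do_hash($file_data) if defined $file_data;

    # Dump the hash only after it has been filled.
    while ( my ( $key, $value ) = each %MYHASH ) {
        print "$key => $value\n";
    }
    ```

    Two other problems in the original post are worth noting: the call do_hash('file_data') passed a literal string rather than the $file_data variable, and the each loop iterated over a hash that nothing had ever stored into, which is why it printed nothing.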