in reply to Possible faster way to do this?
Hello Anonymous Monk.
Your code is not Perl; it's calls to the cut and sort programs, probably inside a shell script.
But will a hash be able to handle such a large file? Probably not: unless you have very few unique values, you are going to end up with several GB of data in your hash. But do you actually need all the values? If you're just trying to determine the type of each column, you only need to keep a few values per column.
In Perl, that might look something like this (untested, since it's barely more than pseudo-code):

    use strict;
    use warnings;
    use Scalar::Util 'looks_like_number';

    my @result;
    open my $file, "<", "my_big_file" or die "Can't open my_big_file: $!";
    while (<$file>) {
        chomp;
        my @data = split "\t", $_;
        for my $col (0 .. $#data) {
            my $len = length $data[$col];
            if (!looks_like_number($data[$col])) {
                $result[$col]{is_a_string} = 1;
            }
            # Track the longest value seen so far in this column
            $result[$col]{max_len} = $len
                if !defined $result[$col]{max_len} or $len > $result[$col]{max_len};
            next if $result[$col]{is_a_string}; # Don't check number range if this is a string
            # Check number range here ...
            # Other stuff ...
        }
    }

You might notice the use of looks_like_number from Scalar::Util to help you with detecting numbers. Of course, that's only if you actually want to use Perl, rather than whatever language your script is currently being executed as.
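As a quick standalone sketch of how looks_like_number behaves (the sample values below are just illustrative), it accepts the usual integer, decimal, and scientific-notation forms, and rejects anything with trailing garbage:

```perl
use strict;
use warnings;
use Scalar::Util 'looks_like_number';

# Classify a few sample strings the way the column-scanning loop would
for my $v ('42', '-3.14', '1e6', 'abc', '12abc') {
    printf "%-7s => %s\n", "'$v'", looks_like_number($v) ? 'number' : 'string';
}
```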