in reply to Calculating the average on a timeslice of data
It's too bad you don't have it in CSV format - you could just use DBD::CSV to get your average ;-)

    SELECT SUM(VALUE)/COUNT(*) FROM mytable WHERE DATE = ?
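Just to make the joke concrete, here's a minimal, untested sketch of what that would look like with DBI and DBD::CSV, assuming a comma-separated file named mytable.csv in the current directory with a header line "id,date,value" (the file name and column names are my assumptions, not from the original post; AVG is used instead of SUM/COUNT(*) but gives the same answer):

    use strict;
    use warnings;
    use DBI;

    # Assumed: mytable.csv with header "id,date,value" - made up for illustration.
    my $dbh = DBI->connect('dbi:CSV:', undef, undef, {
        f_dir      => '.',
        f_ext      => '.csv/r',
        RaiseError => 1,
    });

    my ($avg) = $dbh->selectrow_array(
        'SELECT AVG(value) FROM mytable WHERE date = ?',
        undef, '0109',
    );
    print "average for 0109: $avg\n";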
If you're feeling really chipper, you could put this data into a real database, and, again, the average will be trivial. While I'm mostly kidding with this one, it really depends on what else you're doing with those 40,000 lines - it may really be cheaper to put it in a database (even SQLite) and use SQL to get your information than to do it yourself. But that's only true if you have more than one query to make against it.
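If you did go that route, a rough sketch with DBD::SQLite might look like the following - table name, file name, and column names are invented for illustration; the point is that you pay the load cost once and every query after that is cheap:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=slices.db', '', '',
                           { RaiseError => 1, AutoCommit => 1 });

    # One-time load; "readings" and "data.txt" are made up for this sketch.
    $dbh->do('CREATE TABLE IF NOT EXISTS readings (id TEXT, date TEXT, value REAL)');
    my $ins = $dbh->prepare('INSERT INTO readings (id, date, value) VALUES (?, ?, ?)');

    open my $fh, '<', 'data.txt' or die "Can't open data.txt: $!";
    $dbh->begin_work;                 # one transaction keeps the bulk insert fast
    while (<$fh>) {
        my ($id, $date, $value) = split ' ';
        $ins->execute($id, $date, $value);
    }
    $dbh->commit;
    close $fh;

    # Any number of queries afterwards are cheap (add an index on date if needed).
    my ($avg) = $dbh->selectrow_array(
        'SELECT AVG(value) FROM readings WHERE date = ?', undef, '0109');
    print "average for 0109: $avg\n";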
However, if you want to approach it directly, I'm not sure why it isn't feasible to use if ($date eq '0109'). It seems perfectly feasible to me.
As for your idea to load everything into hashes, that's fine, too. The huge disadvantage is the amount of RAM you'll use. You'll spend a bunch of time populating the hash, too. If you're only querying one thing out of it, that's all wasted time and space. If you're making multiple queries in the same process, then you can see a speed benefit from not having to re-read the file every time. It can be faster than a database, but it will likely use more RAM, and you'll have to re-parse the file every time you run your perl script, whereas a database would have indexes that speed things up across multiple processes. So, again, it all depends on your usage.

    my ($total, $count);
    while (<$fh>) {
        # omitting any error checking here - you shouldn't omit it, though.
        my ($id, $date, $value) = split ' ';
        if ($date eq $desired_date) {
            $total += $value;
            ++$count;
        }
    }
    my $avg = $total / $count;
Most likely, the above code that scans through the file with the if is more than sufficient.
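That said, if you do end up making several of these queries in one run, here is a rough sketch of the hash variant mentioned above (variable names are mine, not from the original): one pass over the file, then any date's average is a cheap lookup.

    my %slice;                        # date => [ running total, count ]
    while (<$fh>) {
        my ($id, $date, $value) = split ' ';
        $slice{$date}[0] += $value;
        $slice{$date}[1]++;
    }

    for my $date ('0109', '0110') {   # whichever dates you're interested in
        next unless $slice{$date} && $slice{$date}[1];
        printf "%s: %.2f\n", $date, $slice{$date}[0] / $slice{$date}[1];
    }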