I'm so close I can taste it!
The Data Restrictions section had the answer: the NULL values were what was killing me. I'm going to look into DBD::AnyData.
I hate to keep asking questions, but I'm a Perl Novice. These files that I am working with could contain any number of NULL values. What would be the best way of handling these?
--SCRATCH THAT--
I think I've got it. Now I just have one last question: how do I get it to output in CSV format?
| [reply] |
How'd you handle the NULLs? I was about to say your options include converting the NULLs to 0s in your pre-processing step (maybe something like s/,(?=,)/,0/g, though that misses the last column), or running a series of "UPDATE results SET colname = 0 WHERE colname = ''" statements. Both of those assume you want the values treated as 0s and not as NULLs, which will greatly affect the AVG value.
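The substitution idea above can be sketched out; the lookahead version misses an empty last (or first) field, so those need their own cases. A minimal sketch, assuming simple CSV with no quoted fields containing commas:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Replace empty CSV fields with 0 before loading the file.
# (Assumes simple CSV: no quoted fields containing commas.)
sub zero_fill {
    my ($line) = @_;
    chomp $line;
    $line =~ s/,(?=,)/,0/g;   # empty field between two commas
    $line =~ s/^,/0,/;        # empty leading field
    $line =~ s/,$/,0/;        # empty trailing field
    return $line;
}

print zero_fill("1,,3,\n"), "\n";   # prints 1,0,3,0
```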
As for output, perhaps something like this (it depends on what you need in terms of columns; note the change in the $dbh method used):
my @avgs = $dbh->selectrow_array($avgSQL);
print join(",", @cols) . "\n";
print join(",", @avgs) . "\n";
| [reply] [d/l] [select] |
Excellent! That's awesome. Thank you! I think I even understand what we did! :-)
The approach I was trying to take was based on an example I found on the Data::Dumper page. I added:
$Data::Dumper::Pair = ",";
print Dumper($avgsHashRef);
That got me output in two columns, which was good, but it was wrapped in $VAR1 = { ... } and the data was indented. I was looking for a way to remove that and get just the data. If I later decide to go with two-column output, is there a way to strip that output down?
Thank you again VERY MUCH!
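(For the record: the $VAR1 wrapper and the indentation can be switched off with Data::Dumper's Terse and Indent settings, though for plain two-column CSV a loop over the hash is simpler still. A sketch with made-up data standing in for $avgsHashRef:)

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

# Hypothetical stand-in for $avgsHashRef from the post above.
my $avgsHashRef = { col_a => 1.5, col_b => 2.25 };

{
    # Terse drops the "$VAR1 = " wrapper; Indent = 0 drops the
    # newlines and indentation, leaving just the data structure.
    local $Data::Dumper::Terse  = 1;
    local $Data::Dumper::Indent = 0;
    local $Data::Dumper::Pair   = ",";
    print Dumper($avgsHashRef), "\n";
}

# For plain two-column CSV, skipping Dumper entirely is simpler:
for my $col (sort keys %$avgsHashRef) {
    print join(",", $col, $avgsHashRef->{$col}), "\n";
}
```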
| [reply] [d/l] |