So I wrote a script that takes a large data set (CSV) and filters out records based on another large data set (also CSV). The two files match on the email field, and my script accounts for its position in each file. The final data set is written to a third file. Caveat: I'm a newbie.
#!/usr/bin/perl -w
use strict;
use Text::CSV;
use Tie::File;

#load all args to vars
my ($userCSV, $supCSV, $ufield, $sfield, $output) = @ARGV;

#open each file into an array for editing
tie my @userList, 'Tie::File', $userCSV or die;
tie my @supList,  'Tie::File', $supCSV  or die;
tie my @output,   'Tie::File', $output  or die;

#load up CSV parsers into vars
my $uCSV = Text::CSV->new();
my $sCSV = Text::CSV->new();

#each line iterated into the line var
foreach my $line (@userList) {
    #parse the line as CSV
    if ($uCSV->parse($line)) {
        #open a new var and load csv values into it
        my @userCols = $uCSV->fields();
        #run a check against the suppression list
        if (check($userCols[$ufield])) {
            #write to output
            push @output, $line;
        }
    }
}

#check if email exists in suppression list
sub check {
    #grab the var passed in
    my ($eCheck) = $_[0];
    #loop through suppression list and get each line as a var
    foreach my $line (@supList) {
        #parse the line as CSV
        if ($sCSV->parse($line)) {
            #load the csv values into an array
            my @supCols = $sCSV->fields();
            if ($eCheck eq $supCols[$sfield]) {
                print "Busted\n";
                return 0;
            }
        }
    }
    print "Looped\n";
    return 1;
}
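For context, I invoke it like this (the file names and field indices here are just examples; the two numbers are the zero-based positions of the email column in each file):

    perl filter.pl users.csv suppression.csv 3 0 filtered.csv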
So here's my problem: when I run it against very small files for both inputs, the output comes out correctly, but when I run it in production against the very large files, it churns away for a while and the output file ends up empty. I'm not good at error checking; -w and strict are all I know. Is there a more efficient way to achieve this? Is there an error in my code that made my test case a fluke success? Any and all guidance is welcome. Thanks.
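Edit: from some searching I gather that re-scanning the whole suppression file for every user record is the slow part, and that loading the suppression emails into a hash up front would make each check a single lookup instead of a full pass. Below is a rough, untested sketch of what I think that would look like. It takes the same five arguments as my script above; the lc calls are my own assumption that email matching should be case-insensitive, which may not be right. Is this the right direction?

#!/usr/bin/perl
use strict;
use warnings;
use Text::CSV;

my ($userCSV, $supCSV, $ufield, $sfield, $output) = @ARGV;
my $csv = Text::CSV->new();

#pass 1: read the suppression file once, keeping each email in a hash
my %suppressed;
open my $sup, '<', $supCSV or die "Can't open $supCSV: $!";
while (my $row = $csv->getline($sup)) {
    $suppressed{ lc $row->[$sfield] } = 1;
}
close $sup;

#pass 2: stream the user file, writing only lines whose email is not suppressed
open my $in,  '<', $userCSV or die "Can't open $userCSV: $!";
open my $out, '>', $output  or die "Can't open $output: $!";
while (my $line = <$in>) {
    chomp $line;
    next unless $csv->parse($line);
    my @cols = $csv->fields();
    print {$out} "$line\n" unless $suppressed{ lc $cols[$ufield] };
}
close $in;
close $out;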