in reply to Hash Search is VERY slow
G'day rtjensen,
Welcome to the Monastery.
"I have a script that loads a CSV file of around 800k lines, they're firewall logs, I'm trying to pull out the IP address and the URL they're hitting."
I use very large CSV files at $work. In my case, they hold biological data; however, that's completely immaterial. One file I use for volume testing is over 2GB. I expect that's comparable to your logfiles.
I would take a different approach from the one you show and use Text::CSV (if you also have Text::CSV_XS installed, it will run a lot faster; there's a quick way to check which backend you're getting, shown at the end). What follows is example code and data to show the technique; adapt it for your specific needs.
The data:
$ cat pm_11137097_csv_parse.csv
A,B,IP0,C,D,URL0,E,F
A,B,IP1,C,D,URL1,E,F
A,B,IP2,C,D,URL2,E,F
A,B,IP3,C,D,URL3,E,F
A,B,IP9,C,D,URL4,E,F
A,B,IP2,C,D,URL5,E,F
A,B,IP1,C,D,URL6,E,F
A,B,IP0,C,D,URL7,E,F
A,B,IP1,C,D,URL8,E,F
A,B,IP0,C,D,URL9,E,F
The code:
#!/usr/bin/env perl

use strict;
use warnings;
use autodie;

use Data::Dump;
use Text::CSV;

# Column indices of the fields we care about.
use constant {
    IP  => 2,
    URL => 5,
};

my $csv_file = 'pm_11137097_csv_parse.csv';

my %urls_for_ip;
my $csv = Text::CSV::->new();

{
    open my $fh, '<', $csv_file;
    while (my $row = $csv->getline($fh)) {
        # Accumulate every URL seen for each IP.
        push @{$urls_for_ip{$row->[IP]}}, $row->[URL];
    }
}

dd \%urls_for_ip;
The output:
{
  IP0 => ["URL0", "URL7", "URL9"],
  IP1 => ["URL1", "URL6", "URL8"],
  IP2 => ["URL2", "URL5"],
  IP3 => ["URL3"],
  IP9 => ["URL4"],
}
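Once %urls_for_ip is built, finding the URLs for any single IP is a straight hash lookup, which stays fast no matter how many rows you've loaded — no scanning through 800k lines. A minimal sketch, using a key from the sample data above:

# O(1) hash lookup into the structure built by the code above.
my $urls = $urls_for_ip{'IP1'};
if ($urls) {
    print "IP1 => $_\n" for @$urls;
}
else {
    print "IP1 not seen in the logs\n";
}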
See also: autodie and Data::Dump.
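As an aside, if you want to confirm that the faster XS backend is actually being picked up, I believe Text::CSV can report it directly. This is just a quick, untested check; recent versions of Text::CSV should support it:

use Text::CSV;

# Reports the module doing the real work:
# Text::CSV_XS if it's installed, Text::CSV_PP otherwise.
print 'Backend in use: ', Text::CSV::->backend(), "\n";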
— Ken