Code updated and tested:
#!/usr/bin/perl
use warnings;
use strict;
$|++;

use JSON;

my %pairs;    # partial records, keyed by query id

while (<>) {
    my $l = $_;
    chomp $l;
    my @vals = split /;/, $l;
    if ($vals[0] =~ /Query/) {
        # store this field for the query id
        $pairs{$vals[1]}{$vals[2]} = $vals[3];
    }
    elsif ($vals[0] =~ /Answer/) {
        # the Answer line completes the record: store, emit as JSON, discard
        $pairs{$vals[1]}{$vals[2]} = $vals[3];
        my $json = encode_json $pairs{$vals[1]};
        print $json . "\n";
        delete $pairs{$vals[1]};
    }
}
[root@hadron ~]# ./t-1207429.pl t-1207429.txt
{"ip":"1.2.3.4","host":"www.example.com"}
{"ip":"2.3.4.5","host":"www.cnn.com"}
{"ip":"3.4.5.6","host":"www.google.com"}
The real question is: when running this against a 100 GB file with more than 500,000 hash entries, will delete actually reduce the size of the hash or not?
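One rough way to check is to measure the hash's footprint before and after the deletes with Devel::Size (a minimal sketch, assuming that module is installed; the entry count and contents below are made up for illustration). In general, delete frees the entry's memory for reuse within the Perl process, but that memory is usually not returned to the operating system.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Devel::Size qw(total_size);

    # Build a hash roughly the shape of %pairs, delete every entry,
    # and compare the sizes Devel::Size reports.
    my %pairs;
    $pairs{$_} = { ip => '1.2.3.4', host => 'www.example.com' } for 1 .. 500_000;
    print "before deletes: ", total_size(\%pairs), " bytes\n";

    delete $pairs{$_} for 1 .. 500_000;
    print "after deletes:  ", total_size(\%pairs), " bytes\n";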
Or is there a leaner way to do this?
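One possible leaner variant (a sketch, keeping the same semicolon-separated input format as above): the Query and Answer branches store the pair identically, so they can collapse to a single assignment, and since delete on a hash element returns the removed value, emitting and freeing the completed record becomes one step.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use JSON;
    $|++;

    my %pairs;    # partial records, keyed by id ($vals[1] in the original)
    while (my $line = <>) {
        chomp $line;
        my ($type, $id, $key, $value) = split /;/, $line;
        $pairs{$id}{$key} = $value;       # same store for Query and Answer
        if ($type =~ /Answer/) {
            # delete returns the removed hashref, so this emits and frees in one go
            print encode_json(delete $pairs{$id}), "\n";
        }
    }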
In reply to Re: Memory utilization and hashes by bfdi533
in thread Memory utilization and hashes by bfdi533