dbg has asked for the wisdom of the Perl Monks concerning the following question:
The example file in question is approximately 1.7 million records long (about 140 MB), with each row having about 7-8 fields. I'm torn between trying to handle this all in Perl's data structures somehow, or importing it into a database and doing something with it there. I'm also unsure of the best way to go about actually counting the values (this is where perhaps a DB query might be useful, and fast?). Some examples of what we'd look for include "Top 10 source IPs" or "Number of blocked sessions by date". Each record would include a date/time, IP info typical to a firewall log, attack severity, etc.
Any input here would be most appreciated. The data structure isn't complex, but processing it efficiently and quickly may take some creative thinking. I also have to consider that a database may get rather large, since this is just one file of about 30 I would need to process each month, and hopefully keep around for a little while for historical reasons. Thanks!
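A minimal sketch of the pure-Perl approach to one of the counts mentioned above ("Top 10 source IPs"): stream the file line by line and tally into a hash, so only the distinct keys ever live in memory rather than all 1.7 million records. The file name, delimiter, and field position of the source IP are assumptions, since the actual log format isn't given; adjust the split and index to match the real layout.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Stream the log once and tally source IPs in a hash.
    # Field layout is assumed -- adjust split/index for the real format.
    my %count;
    open my $fh, '<', 'firewall.log' or die "Cannot open log: $!";
    while ( my $line = <$fh> ) {
        chomp $line;
        my @fields = split /\s+/, $line;    # assumed whitespace-delimited
        my $src_ip = $fields[2];            # assumed position of source IP
        next unless defined $src_ip;
        $count{$src_ip}++;
    }
    close $fh;

    # Report the ten most frequent source IPs.
    my @top = ( sort { $count{$b} <=> $count{$a} } keys %count )[ 0 .. 9 ];
    printf "%-15s %d\n", $_, $count{$_} for grep { defined } @top;

The same pattern works for "blocked sessions by date" by keying the hash on the date field (and filtering on the action field). If the monthly archive of ~30 files needs ad hoc historical queries, importing into a database and letting GROUP BY do the counting is the other reasonable route; the hash approach shines when each question can be answered in a single pass.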
Replies are listed 'Best First'.
Re: Creative sorting and totalling of large flatfiles (aka pivot tables)
by kvale (Monsignor) on Dec 02, 2004 at 00:58 UTC
Re: Creative sorting and totalling of large flatfiles (aka pivot tables)
by NetWallah (Canon) on Dec 02, 2004 at 05:31 UTC
Re: Creative sorting and totalling of large flatfiles (aka pivot tables)
by pearlie (Sexton) on Dec 02, 2004 at 05:42 UTC
Re: Creative sorting and totalling of large flatfiles (aka pivot tables)
by slife (Scribe) on Dec 02, 2004 at 10:30 UTC