Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:
This is a piece of my code that is taking way too much time. The contents of @data can be as large as 60 MB.
It needs to iterate through one array (@data) and, for each record, iterate through another array (@m_info) looking for a match:

```perl
foreach my $record (@data) {
    foreach my $data (@m_info) {
        if ( $record->[1] =~ /$data->[0]/ ) {
            $record->[1] = $data->[3];
            $record->[1] = join( "", $record->[1], "_", $data->[1] );
        }
    }
}
```
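One cheap win, keeping the nested-loop structure from the post: precompile each pattern in @m_info once with `qr//` rather than letting `/$data->[0]/` rebuild the regex on every row of @data. This is a sketch only; the sample rows and the column meanings (pattern in column 0, suffix in column 1, replacement in column 3) are assumptions inferred from the posted code.

```perl
use strict;
use warnings;

# Made-up sample data following the column layout implied by the question:
# @data rows:   [ id, text-to-match ]
# @m_info rows: [ pattern, suffix, (unused), replacement ]
my @data   = ( [ 'id1', 'foo_record' ], [ 'id2', 'bar_record' ] );
my @m_info = ( [ 'foo', 'X', undef, 'NEWFOO' ], [ 'bar', 'Y', undef, 'NEWBAR' ] );

# Compile every pattern exactly once, up front.
my @compiled = map { [ qr/$_->[0]/, $_ ] } @m_info;

for my $record (@data) {
    for my $pair (@compiled) {
        my ( $re, $m ) = @$pair;
        if ( $record->[1] =~ $re ) {
            # Same effect as the two assignments in the original code:
            # $record->[1] becomes "<replacement>_<suffix>".
            $record->[1] = join '', $m->[3], '_', $m->[1];
        }
    }
}
```

With the sample data above, 'foo_record' becomes 'NEWFOO_X' and 'bar_record' becomes 'NEWBAR_Y'; the asymptotic cost is still O(@data * @m_info), but the per-iteration regex compilation is gone.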
I have thought about using a hash keyed by the record info in the @data array, but @m_info is an array of arrays.
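The hash idea can work even though @m_info is an array of arrays, provided the patterns in column 0 are literal strings that equal the whole matched field. That is an assumption the post does not confirm (the `=~ /.../ ` match suggests they may be partial regexes), but under it, a hash keyed by the pattern turns the inner loop into a single O(1) lookup:

```perl
use strict;
use warnings;

# Sketch only: assumes each @m_info row's column 0 is a literal key that
# equals the entire $record->[1] field, not a partial regex match.
# Sample rows are made up to match the column layout in the question.
my @data   = ( [ 'id1', 'foo' ], [ 'id2', 'baz' ] );
my @m_info = ( [ 'foo', 'X', undef, 'NEWFOO' ] );

# Build the lookup once: pattern => [ replacement, suffix ].
my %lookup = map { $_->[0] => [ $_->[3], $_->[1] ] } @m_info;

for my $record (@data) {
    if ( my $m = $lookup{ $record->[1] } ) {
        $record->[1] = join '', $m->[0], '_', $m->[1];
    }
}
# 'foo' becomes 'NEWFOO_X'; 'baz' has no entry and is left untouched.
```

This replaces the O(@data * @m_info) double loop with O(@data + @m_info) total work, which matters at 60 MB of input; if the patterns really are regexes, the hash approach does not apply as-is.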
Your thoughts are greatly appreciated!
Replies are listed 'Best First'.
Re: More efficient Data Structure needed
by BrowserUk (Patriarch) on Dec 18, 2002 at 03:05 UTC
    by Aristotle (Chancellor) on Dec 18, 2002 at 12:16 UTC

Re: More efficient Data Structure needed
by pg (Canon) on Dec 18, 2002 at 02:11 UTC

Re: More efficient Data Structure needed
by dws (Chancellor) on Dec 18, 2002 at 05:50 UTC

Re: More efficient Data Structure needed
by djantzen (Priest) on Dec 18, 2002 at 02:15 UTC
    by BrowserUk (Patriarch) on Dec 18, 2002 at 02:43 UTC
    by djantzen (Priest) on Dec 18, 2002 at 02:58 UTC
    by iburrell (Chaplain) on Dec 18, 2002 at 17:10 UTC

Re: More efficient Data Structure needed
by Monky Python (Scribe) on Dec 18, 2002 at 07:43 UTC