in reply to multidimensional hash emulation vs hash of hashes
My own personal feeling on this: I tend to look at hashes the same way I looked at structs in C programming. They're a handy-dandy way of grouping related information together so you can act on that group as a single unit. This is not to be confused with C++ classes in any way.
Since hashes are put together on the fly, Perl has no way of enforcing that all of the elements are present when the hash is created. Let's look at an example of what I'm talking about:

my $company = {
    employees => {
        fred => {
            fullname   => "Freddy Freeloader",
            position   => "panhandler",
            management => "no",
        },
        clem => {
            realname    => "Clem Kaddiddlehopper",
            job         => "questionable",
            has_reports => "no",
        },
    },
};

In the first employee record we see the fields fullname, position, and management; in the second record we see realname, job, and has_reports. OK, so which is real and which is Memorex?
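To see why that inconsistency matters, here is a minimal sketch (reusing the $company structure above) showing that Perl silently hands back undef when you ask one record for a field that only the other record has; nothing warns you at lookup time:

```perl
use strict;
use warnings;

# Same structure as above; note the inconsistent field names.
my $company = {
    employees => {
        fred => {
            fullname   => "Freddy Freeloader",
            position   => "panhandler",
            management => "no",
        },
        clem => {
            realname    => "Clem Kaddiddlehopper",
            job         => "questionable",
            has_reports => "no",
        },
    },
};

# Looking up a key that was never set is not an error --
# Perl just returns undef, so the typo hides until later.
for my $name ( sort keys %{ $company->{employees} } ) {
    my $rec = $company->{employees}{$name};
    print "$name: ",
        ( exists $rec->{fullname} ? $rec->{fullname} : "(no fullname field)" ),
        "\n";
}
```

Only the use of strict and warnings catches misspelled variable names; misspelled or mismatched hash keys are entirely on you.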
In spite of that, if I am going to group data together without resorting to creating a Perl module, I'll use a HoH structure; just be sure you keep track of what you are putting in there.
You asked which is faster. I think that's a non sequitur in this case. If you are using field names such as client_goldman_phone and friends, you are burdening yourself as a programmer with coming up with a way to search all of the names, append subfields, and all sorts of other cruft that gets in the way of good programming. Think maintainability. When I write Perl code my assumption is that someday someone else may have to maintain it. If you do crufty things, you leave behind crufty code for the folks inheriting it to deal with, and your name will be dragged through programming mud.
Coding keys %{ $company->{employees} } is easier to write and much easier to read than some sort of fugazy my @keys = grep { /^employee_/ } keys %company; in my humble opinion.
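To make that contrast concrete, here is a sketch (the client names and phone numbers are made up for illustration) of the key-parsing bookkeeping the flat naming scheme forces on you, next to the plain keys call a HoH gives you for free:

```perl
use strict;
use warnings;

# Flat, "emulated" multidimensional keys (hypothetical field names):
my %flat = (
    client_goldman_phone => "555-0100",
    client_goldman_fax   => "555-0101",
    client_acme_phone    => "555-0200",
);

# To list the clients you must parse the key names back apart
# and de-duplicate the results yourself:
my %seen;
my @flat_clients =
    grep { !$seen{$_}++ }
    map  { /^client_([^_]+)_/ ? $1 : () }
    keys %flat;

# The HoH version keeps the grouping in the structure itself:
my %clients = (
    goldman => { phone => "555-0100", fax => "555-0101" },
    acme    => { phone => "555-0200" },
);
my @hoh_clients = sort keys %clients;   # just keys -- no parsing
```

The speed difference between the two is noise; the difference in how much code the next maintainer has to decipher is not.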
HTH