in reply to Re: Reading file into a numbered hash
in thread Reading file into a numbered hash

I thought I mentioned I was looking for AWTDI. On a personal note, I prefer hashes over arrays here. I find it more intuitive to say $hash{10} to get the tenth line than $array[9]. I also like being able to 'delete $hash{10}' and actually have my hash end up with one fewer key. And, while I would have to benchmark this (and since I was just looking for AWTDI, I don't care enough to), depending on the size of the file the hash lookups may even be faster.

PS: Before someone says that you can delete an array element.. yes, you can. But all it does is put that element back to an uninitialized state; it doesn't really remove the element. If you have an array with 10 elements and you delete one (other than the last), you still have 10 elements.
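For what it's worth, the difference is easy to demonstrate. A minimal sketch (the sample lines here are made up, not from any real file):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Build a line-numbered hash, keyed from 1, as discussed in the thread.
my @lines = ("first\n", "second\n", "third\n");
my %hash;
my $n = 0;
$hash{++$n} = $_ for @lines;

# delete() on a hash really removes the key...
delete $hash{2};
print scalar(keys %hash), "\n";   # prints 2 -- only keys 1 and 3 remain

# ...but delete() on a non-final array element just leaves a hole.
my @copy = @lines;
delete $copy[1];
print scalar(@copy), "\n";        # prints 3 -- still three slots
print defined $copy[1] ? "defined\n" : "undef\n";   # prints undef
```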

Cheers,
KM

Replies are listed 'Best First'.
Re: Re: Re: Reading file into a numbered hash
by Fastolfe (Vicar) on Nov 23, 2000 at 01:50 UTC
    I only offered that because if you're dealing with a largish amount of data, it's significantly more efficient to deal with a sequential data structure (such as an array) than a "random-access" one with a sequential key.
      I would disagree, since hash lookups aren't exactly slow. And speed isn't always what you want in a script; if I always wanted the fastest way, I would likely use C much of the time instead of Perl. Sometimes you want what makes the most sense to "you", and what "you" find intuitive. Here is an example, sort of the same concept but a little different...

      I had to work on a project that someone else worked on. All his DBI fetches were returned into arrays, maybe for the same reasons you are talking about. So, much of the code looked like:

      do something with $array[8]
      Now, do something with $array[9]

      Well, that means nothing to me. How do I know that $array[8] is the 'FNAME' field and $array[9] is the 'LNAME' field? I don't! Also, what happens when the table changes? BAM! Things can get seriously out of whack if someone adds a 'MNAME' column after the 'FNAME' column.

      So, I started off by changing all his fetchrow_array's to fetchrow_hashref's. Now, I had a more intuitive (and scalable) way to do things like:

      do something with $hashref->{FNAME}
      do something else with $hashref->{LNAME}
      Don't forget $hashref->{MNAME}

      Did I sacrifice some speed by using a hash(ref)? Maybe. Is the script now easier to read and maintain? Definitely! So, which is more efficient? I say it is more efficient to have readable and maintainable code. This saves work-time, which is much more measurable than fractions of a second differences that may be gained/lost with using one data structure vs. another.
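      To make the contrast concrete, here is a minimal sketch of the two row shapes DBI hands back (the column names and row data below are invented, and the row is faked in plain Perl rather than fetched from a real database):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# fetchrow_array returns a positional list; fetchrow_hashref returns
# a name-keyed hashref. (Both faked here -- no database involved.)
my @row     = ('Ken', 'Miller');
my $hashref = { FNAME => 'Ken', LNAME => 'Miller' };

# Positional: the reader must memorize what lives at each index,
# and a new column in the middle silently shifts everything.
my $by_index = "Hello, $row[0] $row[1]";

# Named: self-documenting, and it survives column reordering.
my $by_name  = "Hello, $hashref->{FNAME} $hashref->{LNAME}";

print "$by_index\n";
print "$by_name\n";
```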

      So, I disagree that it is more efficient to use an array, based on my experiences. But, regardless, this was an exercise to find AWTDI ;-)

      Cheers,
      KM

        Now I understand KM's point in the more general example, but in the specific case that started this thread, I'd like to suggest that Fastolfe's simple array can give you the same readability as a hash, since the keys you need are always natural numbers. The zero-based array is handled by tossing in an unshift(@array,''); after the read, after which $array[5] really refers to line 5 of the original file. (I'd also note that despite its good looks, using '05' as a hash key isn't going to help future maintainability.)
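        A quick sketch of the unshift trick (the "file" here is just a handful of made-up lines):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stand-in for the lines read from a file.
my @array = ("alpha\n", "beta\n", "gamma\n", "delta\n", "epsilon\n");

# Shift everything up one slot so indices match line numbers.
unshift @array, '';

print $array[5];   # prints "epsilon" -- line 5 of the original file
```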

        Another benefit is simpler printing:

        print @array;
        # gives the same result as this (note the numeric sort --
        # a plain string sort would put line 10 before line 2):
        print $hash{$_} for sort { $a <=> $b } keys %hash;

        My conclusion (for now ;-): arrays are easier when your keys are always going to be sequential integers.
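        One caveat worth spelling out (sample data invented): once the file has more than nine lines, a plain string sort on the hash keys scrambles the order, so a numeric sort is needed before the hash output matches the array's:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Eleven fake "lines", keyed from 1.
my %hash = map { $_ => "line $_\n" } 1 .. 11;

my @string_order  = sort keys %hash;                 # '10', '11' sort before '2'
my @numeric_order = sort { $a <=> $b } keys %hash;   # 1, 2, ... 11

print "string:  @string_order\n";
print "numeric: @numeric_order\n";
```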