in reply to Converting a Flat-File to a Hash

I cannot help feeling there is a better way of doing things, which will make better use of the computer's resources.

Any improvement on the OP would be a matter of robustness (along the lines suggested by GrandFather), of "standardizing" on a solution that is already available (i.e. using a CPAN module), or merely of style or perceived maintainability (e.g. using fewer lines of Perl code, and/or adding commentary/POD to describe the expected input file format, etc.).

All of those are possible, but none of them would have any impact that I could imagine on "making better use of the computer's resources". I'm not really sure what you mean by that, but if you mean "make the process more efficient", I don't think any change to the OP code would have a noticeable impact -- what you've posted is close enough to being as efficient as possible.

If there is some other aspect to "use of the computer's resources" that you're thinking of, that might make an interesting discussion.

BTW, I think, given the sample file contents, your use of $data{$array[1]} = $array[2] would be wrong; the indexes should be 0 and 1 instead. Or better yet, something like this:

open INPUT, "<", $InputFile;
my %data = map { chomp; split( /:/, $_, 2 ) } <INPUT>;
close INPUT;

That's really minimalist -- maybe a bit too much so for some, but it's really a matter of taste and how much you can trust your data files to be as expected.

(Updated the split so that it returns at most two elements per line from the input file; this is still vulnerable to serious trouble if the file contains any sort of line that lacks a colon.)
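If that vulnerability matters for your data, one way to guard against it is to skip any line that lacks a colon before splitting. Here's a sketch of that idea (the sample lines are my own invention, just to show the key:value format being assumed; an in-memory filehandle stands in for the real input file):

```perl
use strict;
use warnings;

# Hypothetical sample input: one good line, one colon-less line,
# and one line with an extra colon in the value.
my $sample = "alpha:1\nno colon here\nbeta:2:30\n";

open my $in, '<', \$sample or die "Can't open input: $!";
my %data;
while ( my $line = <$in> ) {
    chomp $line;
    next unless index( $line, ':' ) >= 0;    # ignore colon-less lines
    my ( $key, $value ) = split /:/, $line, 2;
    $data{$key} = $value;
}
close $in;
```

With the 2-element limit on split, the "beta" line keeps its whole remainder ("2:30") as the value, and the colon-less line is silently dropped rather than corrupting the hash. Whether silently dropping is the right policy (versus warning or dying) depends on how much you trust the file.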