in reply to Generating Hashes from arrays of arbitrary size

This returns the same thing:

use strict;
use warnings;
use Data::Dumper ();

my %shash;
my $line;
while (defined($line = <DATA>)) {
    # Get rid of end of line.
    chomp($line);

    # Build hash from line.
    my $p = undef;
    $p = { $_ => $p } foreach (reverse(split(/--/, $line)));

    # Merge hashes.
    my $key;
    my $base = \%shash;
    for (;;) {
        ($key, $p) = each(%$p);
        last unless ($base->{$key});
        $base = $base->{$key};
    }
    $base->{$key} = $p;
}

print(Data::Dumper::Dumper(\%shash));

__DATA__
Item1--Item2--Item3
ItemX--Item2--ItemA
Item1--ItemV--Item3--Item4
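
For the three sample records, the dumped structure comes out equivalent to this (condensed by hand; Data::Dumper's real output spreads each key over its own line, and hash key order may vary):

$VAR1 = {
          'Item1' => {
                       'Item2' => { 'Item3' => undef },
                       'ItemV' => { 'Item3' => { 'Item4' => undef } }
                     },
          'ItemX' => { 'Item2' => { 'ItemA' => undef } }
        };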

After the fix to my code, it ended up being bigger than your code. Ah well. Do note the use of chomp, though. It's much better than those regexps of yours.

Re^2: Generating Hashes from arrays of arbitrary size
by graff (Chancellor) on Oct 01, 2004 at 02:44 UTC
    Do note the use of chomp. It's much better than those regexps of yours.

    Well, no. The OP did mention that the input data comes from a variety of OS's (though he didn't say which OS(s) his script is supposed to run on). chomp removes whatever matches "$/" at the end of a record, and the default value of "$/" is OS-dependent. That means that when the script runs on any sort of unix, chomp strips only the "\n" and leaves the "\r" untouched whenever the input happens to come directly from a CRLF source.
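
    A minimal sketch of that failure mode (the sample record is made up):

    use strict;
    use warnings;

    # On any unix, $/ defaults to "\n", so chomp removes only the newline.
    my $rec = "Item1--Item2--Item3\r\n";   # record from a CRLF (DOS/Windows) source
    chomp($rec);                           # strips the "\n" ...
    print "CR still present\n" if $rec =~ /\r\z/;   # ... but the "\r" survives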

    I would just recommend simplifying the OP's regex:

    s/[\r\n]*$//;
    Also, I'd recommend a while loop instead of  foreach my $line ( <DATA> ), because the foreach loop causes the entire file to be slurped into a list before the first iteration begins. For small files, that's not a problem, but why invite this sort of trouble if the files happen to get really big?
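
    For contrast, a minimal sketch of the line-at-a-time idiom (the echoing body is only illustrative):

    use strict;
    use warnings;

    # Reads one record per iteration; memory use stays flat no matter how
    # large the input is. By contrast, foreach my $line (<DATA>) expands
    # the whole handle into a list before the first pass through the body.
    while (defined(my $line = <DATA>)) {
        print $line;
    }

    __DATA__
    Item1--Item2--Item3
    ItemX--Item2--ItemA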
      Oops, you're right. A script running on multiple OSes != data generated on multiple OSes.