Wilderness has asked for the wisdom of the Perl Monks concerning the following question:
If I just `do` this file, Perl will keep only one value for `alpha->beta`, since a hash can have only one entry per unique key. But I want to be able to parse this data structure and flag that the file contains duplicate keys. These structures might span multiple files. Is there a way to eval one block at a time from the file, in this example evaluating just the first level (`alpha->beta->gamma->theta`), storing it in a local hash, and then eventually evaluating `alpha->beta->gamma->zeta` and flagging it? I know I could use arrays instead and iterate over them to find copies, but I want to keep the intuitive structure intact while still being able to flag any duplicates. Any other thoughts, or suggestions for creating the files differently, are welcome.

```perl
{
    alpha => {
        beta => {
            gamma => theta,
            delta => lambda,
        },
        beta => {
            gamma => zeta,
        },
    },
},
```
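One possible approach, sketched below under the assumption that the file keeps one `key => value` pair per line as in the sample: instead of eval'ing the file (at which point Perl's hash semantics have already discarded the duplicate), scan the text itself, keeping a stack of the keys opened so far and a `%seen` counter on the full key path. This is a minimal illustration, not a general Perl parser; keys and structure names here are hypothetical stand-ins for the real files.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @path;   # stack of keys from the outermost level down
my %seen;   # "full/key/path" => number of times encountered

while (my $line = <DATA>) {
    if ($line =~ /^\s*(\w+)\s*=>\s*\{/) {      # key opening a nested hash
        my $full = join '/', @path, $1;
        warn "duplicate key: $full\n" if $seen{$full}++;
        push @path, $1;
    }
    elsif ($line =~ /^\s*(\w+)\s*=>/) {        # key with a scalar value
        my $full = join '/', @path, $1;
        warn "duplicate key: $full\n" if $seen{$full}++;
    }
    elsif ($line =~ /^\s*\}/) {                # closing brace pops a level
        pop @path if @path;
    }
}

__DATA__
{
    alpha => {
        beta => {
            gamma => theta,
            delta => lambda,
        },
        beta => {
            gamma => zeta,
        },
    },
}
```

Run against the sample, this warns that `alpha/beta` (and consequently `alpha/beta/gamma`) appears twice. Because it works on the text rather than the evaluated structure, the same loop can be fed several files in turn while keeping one shared `%seen`, which covers the multi-file case.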