#!/usr/bin/perl
use strict;
use warnings;
use feature qw/say/;
use Data::Dumper;

$Data::Dumper::Terse = 1;        # don't prefix the output with '$VAR1 = '

my $hash = {
    'testing' => {
        'link'      => "http://www.espn.com",
        'bandwidth' => "100",
        'r'         => "2",
    },
};

say Dumper($hash);
You'll want to invoke this as perl script.pl >data.txt or so, BTW. Here's what the resulting data file will look like:
{
  'testing' => {
                 'link' => 'http://www.espn.com',
                 'bandwidth' => '100',
                 'r' => '2'
               }
}
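One thing to keep in mind is that hash key order in the dump isn't guaranteed, so successive runs may write the keys in a different order. If that matters to you (say, because you want to diff the data files), you can tell Data::Dumper to sort keys by adding, next to the $Data::Dumper::Terse line above:

$Data::Dumper::Sortkeys = 1;     # emit hash keys in sorted order
$Data::Dumper::Indent   = 1;     # optional: compact, fixed per-level indentation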
This in turn can be fed to another script, perhaps something along the following lines:
#!/usr/bin/perl
use strict;
use warnings;
use feature qw(say);
use Data::Undump;

undef $/;                        # slurp mode: read the whole input at once
my $data   = <>;
my $undump = undump($data);

die "'testing' not found!" unless $undump->{'testing'};
say $undump->{'testing'}->{'link'}      // (die "'link' not found!");
say $undump->{'testing'}->{'bandwidth'} // (die "'bandwidth' not found!");
This should be invoked as perl script.pl <data.txt; it'll read in the data in one big chunk, undump it, verify that the testing key exists (and raise an error otherwise), and print the link and bandwidth values associated with it, again raising an error if they don't exist (by abusing the // defined-or operator, which I'm not convinced is considered good style).
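If abusing // like that bothers you, a slightly more explicit variant of the last three lines might look something like this (just a sketch, checking for the presence of the keys rather than for defined values):

my $testing = $undump->{'testing'}
    or die "'testing' not found!";
exists $testing->{'link'}      or die "'link' not found!";
exists $testing->{'bandwidth'} or die "'bandwidth' not found!";
say $testing->{'link'};
say $testing->{'bandwidth'};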
Playing around with this, I've found that Data::Undump can be fairly fussy about exactly what the data you feed it should look like (which is perhaps what the author meant when they alluded to it being an "early release"), so depending on how much control you have over the processes creating these data files, it may not be suitable for you. Certainly it does not seem like a very robust module (in the Postelian sense), and the cleanest solution might be to write a proper parser using Parse::RecDescent or so. Whether that's worth the effort is perhaps another question.
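If you do stick with Data::Undump, it's probably worth at least checking that it managed to parse anything before poking around inside the result. Something like the following, placed right after the undump() call, should do (assuming, as seems to be the case, that a failed parse yields something other than a hash reference):

ref $undump eq 'HASH'
    or die "input didn't undump to a hash reference -- is the data file intact?";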