JediWombat has asked for the wisdom of the Perl Monks concerning the following question:
/usr/bin/bzcat $ldif | perl -e "$/ = \"\n\n\"; while (<>) { if (/uid=$match/) { print $_ ; last; } ; }"

This uses the $/ input record separator, then uses while (<>) to read a block at a time. I'd like to do this in pure Perl, but I can't find a way. I'm using the IO::Uncompress::Bunzip2 module, which gives me an IO::File object, but the only useful ways I've found to interact with it are getline() and getlines(), neither of which lets me iterate through the file a block at a time the way the shell one-liner does. Like I said, I'm pretty new to Perl, so I'm sure there's a better way. Can someone offer some assistance, please? Here's the code I've got, which works but is very slow:
my $z = new IO::Uncompress::Bunzip2 $file;
$mbnum = $ARGV[0];
while ($line = $z->getline()) {
    if ($line =~ /^dn: uid=$mbnum,/) {
        $found = "true";
        print $line;
        for (my $i = 0; $i < 100; $i++) {
            $matchLine = $z->getline();
            print "$matchLine";
            if ($matchLine =~ /^$/) { last; }
        }
    }
    if ($found eq "true") { last; }
}
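For what it's worth, IO::Uncompress::Bunzip2's getline() documents full support for $/, so the same blank-line-delimited reads the shell one-liner does with $/ = "\n\n" should work directly on the compressed stream. A minimal sketch of that idea (the function name find_uid_block and the LDIF layout are my own illustration, not from the original post):

```perl
use strict;
use warnings;
use IO::Uncompress::Bunzip2 qw($Bunzip2Error);

# Scan a bzip2-compressed LDIF for the entry whose dn starts with the
# given uid; returns the whole blank-line-delimited block, or undef.
sub find_uid_block {
    my ($file, $uid) = @_;
    my $z = IO::Uncompress::Bunzip2->new($file)
        or die "bunzip2 failed on '$file': $Bunzip2Error\n";
    local $/ = "\n\n";    # getline() now returns one block per call
    while (defined(my $block = $z->getline())) {
        return $block if $block =~ /^dn: uid=\Q$uid\E,/m;
    }
    return;               # not found
}
```

Called as, say, print find_uid_block($ARGV[0], $ARGV[1]), this avoids the line-at-a-time inner loop entirely; localising $/ keeps the separator change from leaking into any other reads the program does.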
Replies are listed 'Best First'.
Re: Searching large files a block at a time
by roboticus (Chancellor) on Aug 02, 2017 at 01:42 UTC
by JediWombat (Novice) on Aug 02, 2017 at 02:04 UTC
by Anonymous Monk on Aug 02, 2017 at 03:56 UTC

Re: Searching large files a block at a time
by kcott (Archbishop) on Aug 02, 2017 at 05:21 UTC
by JediWombat (Novice) on Aug 02, 2017 at 05:53 UTC
by kcott (Archbishop) on Aug 02, 2017 at 06:52 UTC
by JediWombat (Novice) on Aug 03, 2017 at 23:56 UTC
by marioroy (Prior) on Aug 04, 2017 at 04:41 UTC

Re: Searching large files a block at a time
by marioroy (Prior) on Aug 02, 2017 at 06:41 UTC