in reply to Re^3: Perl binary file reading
in thread Perl binary file reading

Hi. Sadly, "read" doesn't work for me... but this does :)
my @r = <$fh>; my $data = join '', @r;
Thanks anyway, guys. You're great. Regards, Kepler

Re^5: Perl binary file reading
by afoken (Chancellor) on May 03, 2016 at 07:07 UTC

      Hi Alexander

      That code is really awesome :)

      Thank you very much.

      "Read" gives me for some reason a problem with some particular byte - I didn't had the time to isolate it. It works - I've read correctly some more extra records for that matter. Still, I can't get the whole file. Maybe is from the fact I'm running Perl in my Windows 7, and might have some disconfiguration in the system... Either way my "malformed" solution worked - but yours is even better, simplier and... actually nice and clean. Thanks.

      Regards, Kepler

        "Read" gives me for some reason a problem with some particular byte - I didn't had the time to isolate it.

        Are you sure that binmode is enabled?

        Both ...

        open my $fh, '<:raw', $filename
            or die "Could not open $filename: $!";
        my $data = do { local $/; <$fh> };    # slurp the whole file as bytes

        ... and ...

        open my $fh, '<', $filename
            or die "Could not open $filename: $!";
        binmode $fh;                          # switch the handle to binary mode
        my $data = do { local $/; <$fh> };    # slurp the whole file

        ... should do the trick. The first one requires a perl with support for I/O layers (PerlIO, introduced with perl 5.8.0), the second one should also work with older perls. And this one is for really ancient perls:

        local *FH;
        open FH, "<$filename"
            or die "Could not open $filename: $!";
        binmode FH;
        my $data = do { local $/; <FH> };     # slurp via a bareword filehandle
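        If you want to make sure the slurp really got everything, a quick sanity check (a minimal sketch, reusing $filename and $data from the snippets above) is to compare the number of bytes read with the file size on disk:

        printf "read %d bytes, file on disk is %d bytes\n",
            length($data), -s $filename;

        If the two numbers differ, the handle is probably not in binary mode: in text mode on Windows, CRLF translation and a stray Ctrl-Z byte (which is treated as an end-of-file marker) can both shorten what you get.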

        See also Using ":raw" layer in open() vs. calling binmode() and PerlIO.

        Alexander

        --
        Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
Re^5: Perl binary file reading
by pryrt (Abbot) on May 03, 2016 at 02:14 UTC

    You didn't post your updated code, or say how it "doesn't work" for you.

    Did you read the documentation for read? Did you understand that you have to tell it how many bytes to read on each call to read()? As a result, instead of slurping the whole file into an array using @r = <$fh>, you will either do one bulk read (which assumes you know the file length up front; that's possible, but if you really want to go down that route, finding the file size is left as an exercise for the reader), or do an individual read for each 36-byte record (which would be my recommendation), or do reads for each sub-element of the record (which would need a more complicated loop structure).

    If you go down the road of a read() for each group of 36 bytes, I'll give you some more hints: by looking at the return values documented for read(), you should be able to come up with a loop construct that stops once you've reached the end of the file and processes each individual record very similarly to how you processed each group of 36 bytes from the joined $data scalar.
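    As a bare-bones sketch of what such a loop can look like (the actual record parsing is left out on purpose, and the file name is only a placeholder):

    use strict;
    use warnings;

    my $filename = 'records.bin';    # placeholder name, not your real file
    open my $fh, '<:raw', $filename or die "Could not open $filename: $!";

    my $reclen = 36;
    while (1) {
        my $got = read $fh, my $record, $reclen;
        die "read failed: $!" unless defined $got;   # undef means an I/O error
        last if $got == 0;                           # 0 means end of file
        warn "short record: only $got bytes\n" if $got < $reclen;
        # parse $record here with substr()/unpack(), just like each
        # 36-byte slice of the joined $data scalar
    }
    close $fh;

    The key point is to check read()'s return value every time: undef signals an error, 0 signals end of file, and anything smaller than 36 signals a truncated final record.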

    For example, I took your original code and converted it over to the array/join approach you tried; then I printed a row of equals signs, reset to the beginning of the file handle, and re-read the file using a read() loop, parsing it very similarly (but adjusted for a $record that is exactly 36 bytes every time, so without offsets in the substr). Given the input file

    This is the Name____________00099999This is another name ______11199999This is the third name......22299999
    ...(with a Windows CRLF line ending taking up two bytes between "name" and "______" to ensure 36-byte record lengths)

    I got the output

    This is the Name____________ - 49:49:49
    This is another name ______ - 50:50:50
    This is the third name...... - 51:51:51
    ==============================
    This is the Name____________ - 49:49:49
    This is another name ______ - 50:50:50
    This is the third name...... - 51:51:51

    If you still cannot get the loop to work with read() after reading these hints and the documentation, feel free to post your updated code, and we can point out where you've gone wrong.