IlyaM is right about read and pack/unpack. I do tons of binary data work in Perl (I actually have a denied proposal with ORA on the subject, because it was deemed too rarely used :( ), and here are a few things I can add:
Try to use the code on the same machine the file was created on (or at least the same kind of machine) to avoid:
endian problems (IRIX to Linux, among others)
int size errors (64-bit ints)
things I have not thought of
things none of us can imagine yet (technology changes)
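If you can't stay on the same kind of machine, the usual defense is to use the fixed-size, fixed-endian pack codes ("N"/"V", "n"/"v") instead of the native-sized ones ("L", "S"), so the template means the same thing on IRIX and Linux alike. A minimal sketch, with a made-up record layout (a 32-bit ID plus a 16-byte name):

```perl
use strict;
use warnings;

# "N" is always an unsigned 32-bit big-endian integer, regardless of the
# local machine's endianness or int size; "L" is whatever the C compiler
# says an unsigned long is. "A16" is a space-padded 16-byte ASCII field.
my $template = 'N A16';
my $rec_len  = 20;          # 4 + 16 bytes

my $record = pack($template, 42, 'example');
die "bad record length" unless length($record) == $rec_len;

my ($id, $name) = unpack($template, $record);
# $id is 42 and $name is 'example' on any platform
```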
Before processing any file, divide its size (-s) by the record size and make sure it comes out even. (This avoids processing 100 GB of invalid data.)
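That check is a couple of lines. A sketch, where the filename and record length are hypothetical (the block writes a small dummy file so it runs on its own):

```perl
use strict;
use warnings;

my $rec_len = 20;
my $file    = 'data.bin';   # hypothetical filename

# Write two dummy records so the example is self-contained.
open my $fh, '>', $file or die "can't write $file: $!";
binmode $fh;
print {$fh} "\0" x ($rec_len * 2);
close $fh;

# The actual sanity check: bail out before reading a single record
# if the file can't possibly be a whole number of records.
my $size = -s $file;
die "can't stat $file" unless defined $size;
die "$file ($size bytes) is not a whole number of ${rec_len}-byte records\n"
    if $size % $rec_len;

my $n_records = $size / $rec_len;
unlink $file;
```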
Make a module of it; that way the pack/unpack templates are in one place the next time. They suck to re-compute on a cocktail napkin, on a conference call, from a pub, with a 300-byte record length, working from an ancient language struct def :/ (not that this ever happens).
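The module doesn't need to be fancy, just the one authoritative copy of the template and its length. A sketch, where the package name and the field layout are made up for illustration:

```perl
package MyApp::Record;
use strict;
use warnings;

# The one place the record layout lives. 'N n A10' = 32-bit big-endian
# id, 16-bit big-endian flags, 10-byte space-padded name: 16 bytes total.
our $TEMPLATE = 'N n A10';
our $LENGTH   = 16;

sub pack_record {
    my ($rec) = @_;
    return pack($TEMPLATE, @{$rec}{qw(id flags name)});
}

sub unpack_record {
    my ($buf) = @_;
    my ($id, $flags, $name) = unpack($TEMPLATE, $buf);
    return { id => $id, flags => $flags, name => $name };
}

1;
```

Then every reader and writer does `MyApp::Record::unpack_record($buf)` instead of carrying its own copy of the template, and when the struct changes you edit one file.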