I'm processing a binary file which could contain millions of records. The record format is a fixed-length header followed by one or more bodies.
Currently unpack is used to decode the binary data. From an NYTProf benchmark, most of the time is spent in the read and unpack calls (which is expected). Is there an optimized version of pack/unpack, analogous to a compiled regex, where the pack/unpack template is compiled once and reused (and so potentially faster)?
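Perl has no user-visible "compiled template" object for pack/unpack, but two things usually help: reading many records per read call, and decoding a whole batch with a single unpack using a grouped repeat count, instead of one unpack per record. Below is a minimal sketch under an assumed, hypothetical layout (not the poster's actual format): a 4-byte big-endian record count in the header, then fixed 12-byte records of three 32-bit big-endian integers.

```perl
use strict;
use warnings;

# Sketch only: assumes a hypothetical format with a 4-byte big-endian
# record count as the header, followed by fixed-size 12-byte records,
# each holding three 32-bit big-endian unsigned integers.
sub parse_records {
    my ($fh) = @_;
    read($fh, my $header, 4) == 4 or die "short header";
    my $count    = unpack 'N', $header;
    my $rec_size = 12;
    my $batch    = 4096;    # records per read() call
    my @records;
    while ($count > 0) {
        my $n   = $count < $batch ? $count : $batch;
        my $len = $n * $rec_size;
        read($fh, my $buf, $len) == $len or die "short read";
        # One unpack call decodes the whole batch via a group repeat
        # count, avoiding per-record unpack overhead.
        my @fields = unpack "(N3)$n", $buf;
        push @records, [ splice @fields, 0, 3 ] while @fields;
        $count -= $n;
    }
    return \@records;
}
```

The same idea works with any fixed record template: wrap it in parentheses and append the batch count, e.g. `unpack "(A4 n N)$n", $buf`.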
In reply to Optimizing binary file parser (pack/unpack) by pwagyi