in reply to Unpacking small chucks of data quickly

Think of “memory” as being “a disk file,” not semiconductor chips on a circuit board, because in a modern computer system that is what it really is: virtual memory.

So, if you have "100 megabytes of anything at all," you do not want to build an in-memory hash of it: once that hash outgrows physical RAM, it simply becomes a very slow, randomly accessed disk file.

It appears to me that what you want to have, as the output of your program, is a structure which lists, for each "ID", all of the "VALs" for that ID. Therefore, let me suggest an alternate approach. (Or rather, second what has already been suggested.)

Sort the file, using an on-disk sort like the sort command, first by ID and then by VAL. Once you have done that, you know that all of the records having any given ID value will be adjacent to one another.

So now, you read the sorted file sequentially, and you remember what the “previous” ID was, to see whether it is the same as or different from “this” one. If different, then the end of one group has been reached and a new one has begun. If the same, you have another VAL to add to the current group. Finally, when the end of the file has been reached, you are, by definition, at the end of the final group.
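
Here is a minimal sketch of that control-break loop, assuming the sorted records arrive on STDIN as tab-separated "ID<TAB>VAL" lines (the field layout, and printing each finished group on one line, are assumptions made purely for illustration):

    use strict;
    use warnings;

    my $prev_id;
    my @vals;

    while ( my $line = <STDIN> ) {
        chomp $line;
        my ( $id, $val ) = split /\t/, $line, 2;

        if ( defined $prev_id && $id ne $prev_id ) {
            # The ID just changed, so the previous group is complete.
            print "$prev_id: @vals\n";
            @vals = ();
        }
        push @vals, $val;
        $prev_id = $id;
    }

    # End of file: emit the final group, if there was one.
    print "$prev_id: @vals\n" if defined $prev_id;

The only state carried from line to line is the previous ID and the values collected so far, so memory use stays constant no matter how large the file grows.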

Yes, this is literally what folks were doing with punched cards and magnetic tapes all those years ago, some of it even before computers were invented.

A hundred megs? Oh, I'd be very surprised if it took even five seconds, after the sort is through. And the sort won't take long either.

Re^2: Unpacking small chucks of data quickly
by spectre9 (Beadle) on Nov 21, 2007 at 15:59 UTC
    If you are working within a Windows environment, go ahead and install MinGW or Cygwin so you can gain access to sort and some of the other Unix tools.

    sort is close to ideal: the algorithm has been refined over tens of years, and most implementations handle special cases such as merging multiple files that are already sorted.
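
    For example, here is one way that sort might be driven from Perl; the file names and the tab-separated two-field layout are assumptions for illustration, while the switches used (-t, -k, -o, -m) are standard in GNU and POSIX sort:

        use strict;
        use warnings;

        # Hypothetical file names, purely for illustration.
        my $in  = 'records.txt';
        my $out = 'records.sorted';

        # Sort by ID (field 1), then by VAL (field 2), on tab-separated lines.
        # sort(1) spills to temporary files as needed, so it copes with inputs
        # far larger than physical memory.
        system( 'sort', '-t', "\t", '-k1,1', '-k2,2', '-o', $out, $in ) == 0
            or die "sort failed: $?";

        # If several inputs are each already sorted, the -m switch merges
        # them instead of re-sorting them from scratch.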

    Although I veer a bit off the post's original course, others have already delved into the area of optimization, so I'd like to drop some references for fellow geeks on uses of Perl in the scientific community, as well as the Hierarchical Data Format (HDF):

    For large data sets, HDF offers many of the advantages sought by this needy monk... HDF is designed to provide access to ordered and hierarchical sets of data on a large scale, is optimized for performance, and is compatible with large-scale parallel or distributed data systems.

    HDF is used to store most of NASA's satellite imaging data, but it finds many other uses, such as optimizing high-speed HTML templating systems (as found in ClearSilver and the associated Data::ClearSilver::HDF).

    There is also a CPAN module, PDL::IO::HDF5, that reads and writes HDF5 files.

    If performance is really a concern, then choosing an appropriate storage mechanism for the on-disk data is where one should focus. Perl makes it easy to measure that performance with its built-in benchmarking and profiling features, and Perl itself can perform surprisingly well even in high-throughput applications when the code is optimized based on data gathered through profiling. You only have to look at BioPerl to find plenty of examples.
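
    As a small illustration of the benchmarking side, here is a toy comparison using the core Benchmark module; the record line and the two alternatives being timed are made up purely for the example:

        use strict;
        use warnings;
        use Benchmark qw(cmpthese);

        # A made-up record line; the format is an assumption for illustration.
        my $line = join "\t", 'ID123', map { "val$_" } 1 .. 10;

        # Run each alternative for about 3 CPU seconds and print a comparison.
        cmpthese( -3, {
            split_all   => sub { my @fields = split /\t/, $line },
            split_limit => sub { my ( $id, $rest ) = split /\t/, $line, 2 },
        } );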

    And for a real diversion, the book Perl for Exploring DNA, which came out in July, looks fascinating. It probably has a whole slew of ideas for regexes and advanced pattern matching.

    spectre#9 -- "Strictly speaking, there are no enlightened people, there is only enlightened activity." -- Shunryu Suzuki