in reply to Re: Unpacking small chucks of data quickly
in thread Unpacking small chucks of data quickly
sort is close to ideal: the algorithm has been refined over decades, and most implementations handle special cases, such as merging multiple files that are already sorted.
Although I'm veering a bit off the post's original course, others have already delved into optimization, so I'd like to drop a few references for fellow geeks on uses of Perl in the scientific community, as well as on the Hierarchical Data Format (HDF):
For large data sets, HDF offers many of the advantages sought by this needy monk... HDF is designed for access to ordered, hierarchical data sets on a large scale, optimized for performance and compatible with large-scale parallelization and distributed data systems.
HDF is used to store most of NASA's satellite imaging data, but it finds many other uses, such as high-speed HTML templating systems (as found in ClearSilver and the associated Data::ClearSilver::HDF).
There is also a CPAN module, PDL::IO::HDF5, that reads and writes HDF5 files.
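As a rough sketch of what that looks like (untested here; the file name, dataset name, and piddle contents are invented, and you should check the module's current docs for the exact method signatures), writing and reading a piddle through PDL::IO::HDF5 goes roughly like this:

    use strict;
    use warnings;
    use PDL;
    use PDL::IO::HDF5;

    # Open (or create) an HDF5 file.
    my $hdf = PDL::IO::HDF5->new('example.hdf5');

    # Store a piddle as a dataset named 'measurements' (name is arbitrary).
    my $data = sequence(1000);               # piddle holding 0 .. 999
    my $ds   = $hdf->dataset('measurements');
    $ds->set($data);

    # Later: read the dataset back into a piddle.
    my $back = $hdf->dataset('measurements')->get();
    print $back->slice('0:9'), "\n";          # first ten elements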
If performance is really a concern, then choosing an appropriate storage mechanism for the on-disk data is where to focus. Perl makes it easy to measure that performance using its benchmarking and profiling facilities, and Perl itself can perform surprisingly well even in high-throughput applications when the code is tuned based on profiling data. You only have to look at BioPerl to find plenty of examples.
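For instance, a quick comparison of two ways of pulling fields out of a fixed-width record with the core Benchmark module might look like this (the record layout is made up purely for illustration):

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    # Made-up fixed-width record: 4-byte id, 8-byte double, 16-byte label.
    my $record = pack 'N d A16', 42, 3.14, 'example label';

    cmpthese( -3, {
        unpack_tmpl => sub {
            my ( $id, $value, $label ) = unpack 'N d A16', $record;
        },
        substr_each => sub {
            my $id    = unpack 'N', substr $record, 0, 4;
            my $value = unpack 'd', substr $record, 4, 8;
            my $label = substr $record, 12, 16;
        },
    });

Numbers from a run like this, plus a profiler pass over the real code, tell you far more than guessing.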
And for a real diversion, the book Perl for Exploring DNA, which came out in July, looks fascinating. It probably has a whole slew of ideas for regexes and advanced pattern matching.
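As a toy illustration of that kind of pattern matching (the sequence here is invented; TATAAT is just a classic promoter-like motif), Perl's regex engine makes such scans very compact:

    use strict;
    use warnings;

    my $dna   = 'ATGCGTATAATGGCTATAATCGT';   # invented sequence
    my $motif = qr/TATAAT/;

    # Report every offset where the motif occurs.
    while ( $dna =~ /$motif/g ) {
        printf "match at offset %d\n", $-[0];
    }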