A BerkeleyDB tied hash over a temp file might be a good choice. Your hash key would be your sort criteria, and the value would ideally be the regexp-replaced fields, unless you really need the original data for something else.
Once you're done reading and storing the SQL results, it's a simple matter to read the hash back in sorted order, applying the printf formatting and piping to less at that point.
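A minimal sketch of that approach, using DB_File's BTree flavor (which keeps keys in sorted order, so no explicit sort pass is needed); the temp filename, key layout, and field widths here are made up for illustration:

```perl
use strict;
use warnings;
use DB_File;
use Fcntl;

# A BTree-backed tied hash stores keys in sorted order, so simply
# iterating the hash later gives you the sorted result set.
my %results;
my $tmpfile = "/tmp/query.$$.db";    # hypothetical temp file name
tie %results, 'DB_File', $tmpfile, O_RDWR | O_CREAT, 0666, $DB_BTREE
    or die "Cannot tie $tmpfile: $!";

# While reading the SQL results: key on the sort criteria, store the
# already regexp-processed fields as the value.
$results{'smith|2011-09-01'} = "Smith\tWidget\t42";
$results{'adams|2011-08-15'} = "Adams\tGadget\t7";

# Later: keys come back in sorted order, so formatting and piping
# to less is a single pass over the tied hash.
open my $less, '|-', 'less' or die "Cannot pipe to less: $!";
for my $key ( keys %results ) {
    printf $less "%s\n", join ' | ', split /\t/, $results{$key};
}
close $less;

untie %results;
unlink $tmpfile;
```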
You might also consider saving aside the BerkeleyDB temp files as an expiring cache, if you get a lot of common queries that don't necessarily need absolutely up-to-date data, and/or users can specify when they do need the latest data. You're doing a lot of I/O here, so it's not going to be fast. Some users might be willing to trade data currency for response time.
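An expiring cache along those lines could be as simple as keying the temp file off the query text and checking its age before reuse; the cache path, TTL, and function names below are all invented for the sketch:

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Hypothetical: derive a stable cache filename from the query text,
# so identical queries map to the same BerkeleyDB temp file.
sub cache_file_for {
    my ($query) = @_;
    return '/tmp/qcache.' . md5_hex($query) . '.db';
}

# Reuse the cached file if it exists and is newer than the TTL,
# unless the user explicitly asked for fresh data.
sub cache_is_fresh {
    my ( $file, $ttl_minutes, $force_fresh ) = @_;
    return 0 if $force_fresh;
    return 0 unless -e $file;
    return ( -M $file ) * 24 * 60 < $ttl_minutes;    # -M is age in days
}
```

In use, you would call `cache_is_fresh(cache_file_for($query), 10, $user_wants_fresh)` before re-running the query, and fall through to the database only when it returns false.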
Is there any specific reason you're not making the SQL query directly from Perl? Assuming it's possible, that would likely reduce the complexity a great deal and increase the reliability. Avoiding data problems introduced by parsing raw text output from another program is always good.
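Querying directly with DBI would look roughly like this. To keep the sketch self-contained it uses an in-memory SQLite database with a made-up `orders` table; in practice you would substitute your own DSN, credentials, and query:

```perl
use strict;
use warnings;
use DBI;

# Illustration only: an in-memory SQLite database stands in for
# whatever database you're actually querying -- swap in your DSN.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1, AutoCommit => 1 } );

$dbh->do('CREATE TABLE orders (name TEXT, item TEXT, qty INTEGER)');
$dbh->do(q{INSERT INTO orders VALUES ('Smith', 'Widget', 42)});
$dbh->do(q{INSERT INTO orders VALUES ('Adams', 'Gadget', 7)});

my $sth = $dbh->prepare('SELECT name, item, qty FROM orders ORDER BY name');
$sth->execute;

# Rows arrive as Perl values -- no text output from another program
# to parse, and the ORDER BY lets the database do the sorting too.
while ( my ( $name, $item, $qty ) = $sth->fetchrow_array ) {
    printf "%-10s %-10s %6d\n", $name, $item, $qty;
}
$dbh->disconnect;
```

Note that pushing the `ORDER BY` into the query can eliminate the tied-hash sorting step entirely when the database is able to do the sort for you.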
--Dave
In reply to Re^3: Print to Less Screen
by armstd
in thread Print to Less Screen
by bigbot