Here are my thoughts.
Save all the information needed to generate the report, and take an md5 of that information to use as a filename. Store a copy of the dataset in this file, possibly with DBD::SQLite.
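Something like this is what I have in mind (a rough sketch only; the cache path, the table layout, and build_report_dataset() are made-up placeholders for your real schema and query):

    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);
    use DBI;

    sub build_report_dataset {    # stand-in for the real, expensive query
        my ($params) = @_;
        return [ { col1 => 'foo', col2 => 'bar' } ];
    }

    my %params = ( start => '2003-01-01', end => '2003-06-30', region => 'west' );

    # Canonicalize the parameters so identical requests hash to the same key.
    my $key  = md5_hex( join '|', map { "$_=$params{$_}" } sort keys %params );
    my $file = "/var/cache/reports/$key.db";

    unless ( -e $file ) {
        my $dbh = DBI->connect( "dbi:SQLite:dbname=$file", '', '',
            { RaiseError => 1, AutoCommit => 0 } );
        $dbh->do('CREATE TABLE report (row_num INTEGER PRIMARY KEY, col1 TEXT, col2 TEXT)');

        my $sth = $dbh->prepare('INSERT INTO report (col1, col2) VALUES (?, ?)');
        for my $row ( @{ build_report_dataset( \%params ) } ) {
            $sth->execute( @{$row}{qw(col1 col2)} );
        }
        $dbh->commit;
        $dbh->disconnect;
    }

Any request with the same filter options hashes to the same filename, so the expensive query runs once per distinct set of options.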
Now, you probably do not want to run the report queries from your main Apache process. It would probably be a good idea to have a separate server process fetch the actual data, so a huge, expensive Apache process doesn't sit spinning its wheels while the query runs just to return some piddly amount of computationally intensive data.
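The hand-off could be as dumb as a spool directory that a worker daemon watches (again, just a sketch; the paths and request_report() are invented for illustration, not part of mod_perl or Mason):

    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);
    use Storable qw(nstore);

    sub request_report {
        my (%params) = @_;
        my $key = md5_hex( join '|', map { "$_=$params{$_}" } sort keys %params );

        my $cache = "/var/cache/reports/$key.db";
        return $cache if -e $cache;    # already built: serve it directly

        # Not built yet: queue the parameters for the worker daemon and let
        # the caller show a "your report is being generated" page instead.
        my $job = "/var/spool/reports/$key.job";
        nstore( \%params, $job ) unless -e $job;
        return undef;
    }

The worker just loops over the spool directory, builds the SQLite cache file for each job, and deletes the job file when it's done.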
This would also be handy for paging through datasets, because once the dataset is created you can page within it without re-running the query, as long as no filter options change.
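Once the cache file exists, a page fetch is just a cheap LIMIT/OFFSET against it, something along these lines (fetch_page() and the column names are made up to match the sketch above):

    use strict;
    use warnings;
    use DBI;

    sub fetch_page {
        my ( $file, $page, $per_page ) = @_;
        my $dbh  = DBI->connect( "dbi:SQLite:dbname=$file", '', '', { RaiseError => 1 } );
        my $rows = $dbh->selectall_arrayref(
            'SELECT col1, col2 FROM report ORDER BY row_num LIMIT ? OFFSET ?',
            { Slice => {} },
            $per_page, ( $page - 1 ) * $per_page,
        );
        $dbh->disconnect;
        return $rows;
    }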
Update: OK, so yeah, I cheated - I know the problem ;) I still think this method, mixed in with some interstitial goodness, will take care of it though.