Ah, I feel your pain: I recently had to paginate results from SQL Server, and wasn't allowed to modify the SQL in any way (stupid over-defensive stored proc devs...). Here's the basic idea of the solution I came up with.
use YAML ();    # for YAML::LoadFile / YAML::DumpFile

# a unique SessionID for the query already available
# in package global $session
my $cache_file = $session . '_cache.yaml';

my $result_set;
if ( -f $cache_file && -r $cache_file && -s $cache_file ) {
    # we already ran this query (session), and we have a cache!
    $result_set = YAML::LoadFile($cache_file);
}
else {
    # first hit for this session: run the query once, start at row 0
    $result_set->{data}  = run_dbQuery();
    $result_set->{start} = 0;
}

display_results( $result_set->{data}, $result_set->{start}, 20 );
$result_set->{start} += 20;
YAML::DumpFile( $cache_file, $result_set );

#---- display_results : shows a slice of the result set ----
sub display_results {
    my ( $data, $start, $length ) = @_;
    die "Data Set is not an ARRAY ref" unless ref $data eq 'ARRAY';

    # clamp the endpoint so we never slice past the end of the array
    my $endpoint = ( $start + $length > @$data ? @$data - 1 : $start + $length - 1 );

    show_toBrowser( @{$data}[ $start .. $endpoint ] );
}
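The only piece not shown is run_dbQuery(); all it has to do is hand back an array ref of rows. A minimal sketch of what that might look like with DBI, assuming a package-global $dbh and a made-up stored proc name (yours will obviously differ):

sub run_dbQuery {
    # assumes $dbh is an open DBI handle (e.g. via DBD::ODBC);
    # 'usp_GetResults' is a placeholder for whatever proc you call
    my $rows = $dbh->selectall_arrayref(
        'EXEC usp_GetResults',
        { Slice => {} },    # each row comes back as a hash ref
    );
    return $rows;
}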
Essentially, I cache the full result set in a YAML file. When I need the next page, I restore the array from the YAML file and take an array slice to get the subset I want. This might be a bad idea for huge data sets, but it should get you thinking in the right direction (assuming you can't get SQL Server to do the pagination for you, which will likely be much faster).
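If you *are* allowed to touch the SQL, something along these lines pushes the paging to the server instead. This is only a sketch, assuming SQL Server 2012+ OFFSET/FETCH syntax and placeholder table/column names:

# server-side paging sketch; 'results' and 'id' are made-up names
my $sth = $dbh->prepare(q{
    SELECT *
    FROM   results
    ORDER  BY id
    OFFSET ? ROWS FETCH NEXT ? ROWS ONLY
});
$sth->execute( $start, $page_size );
my $page = $sth->fetchall_arrayref( {} );    # one page of rows as hash refs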
I deal with cache expiration and removal in session restoration and destruction calls elsewhere.
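The cleanup itself is nothing fancy; roughly, the session-destruction hook does something like the following (end_session() here is a hypothetical name for whatever your session teardown calls):

sub end_session {
    my ($session_id) = @_;

    # drop the cached result set along with the rest of the session state
    my $cache_file = $session_id . '_cache.yaml';
    unlink $cache_file if -e $cache_file;
}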