First, you can probably offload a lot of the aggregation you are doing onto the database, which will likely do it faster. But that's neither here nor there.
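As a sketch of what "offloading to the database" means here, assuming a hypothetical table `readings("timestamp", value)` (your names will differ), you can let the server group and aggregate instead of pulling raw rows into Perl:

```sql
-- Aggregate on the server rather than summing rows client-side.
-- Table and column names are made up for illustration.
SELECT date_trunc('hour', "timestamp") AS hour,
       count(*)                        AS n,
       avg(value)                      AS avg_value,
       max(value)                      AS max_value
FROM   readings
WHERE  "timestamp" >= '2008-01-01'
AND    "timestamp" <  '2008-02-01'
GROUP  BY 1
ORDER  BY 1;
```

Only the per-hour summary rows cross the wire, instead of every raw row.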
If you are having trouble getting to the right data ranges fast enough, and your queries are almost always for small sub-ranges of 'timestamp' (which, I assume, is indexed), look into range partitioning. There is an explanation of how it works here: http://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html
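In the 8.1-era scheme those docs describe, partitioning is done with table inheritance plus CHECK constraints; a minimal sketch, again with made-up names, looks something like:

```sql
-- Parent table; queries go against this one.
CREATE TABLE readings (
    "timestamp" timestamp NOT NULL,
    value       numeric
);

-- One child table per month, constrained to its range.
CREATE TABLE readings_2008_01 (
    CHECK ("timestamp" >= '2008-01-01' AND "timestamp" < '2008-02-01')
) INHERITS (readings);

CREATE INDEX readings_2008_01_ts ON readings_2008_01 ("timestamp");

-- Let the planner skip children whose CHECK constraint
-- cannot match the query's WHERE clause:
SET constraint_exclusion = on;
```

With constraint exclusion on, a query for a small timestamp range only touches the partitions that can actually contain matching rows.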
Also, if you are versed in C, and PostgreSQL doesn't do the kind of aggregations you need out of the box, you should know that it does support user-defined aggregation functions: http://developer.postgresql.org/pgdocs/postgres/xaggr.html
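You don't even need C for many cases; a custom aggregate can be assembled from plain SQL functions. Here is a toy geometric-mean aggregate (entirely illustrative, and using the modern parenthesized `CREATE AGGREGATE` syntax, which differs slightly in 8.1) just to show the moving parts: a state-transition function, a final function, and the `CREATE AGGREGATE` glue:

```sql
-- State is a two-element array: [sum of ln(x), count].
CREATE FUNCTION geomean_step(state numeric[], x numeric)
RETURNS numeric[] AS $$
    SELECT ARRAY[state[1] + ln(x), state[2] + 1];
$$ LANGUAGE SQL IMMUTABLE;

CREATE FUNCTION geomean_final(state numeric[])
RETURNS numeric AS $$
    SELECT CASE WHEN state[2] = 0 THEN NULL
                ELSE exp(state[1] / state[2]) END;
$$ LANGUAGE SQL IMMUTABLE;

CREATE AGGREGATE geomean(numeric) (
    SFUNC     = geomean_step,
    STYPE     = numeric[],
    FINALFUNC = geomean_final,
    INITCOND  = '{0,0}'
);

-- Usage: SELECT geomean(value) FROM readings;
```

A C implementation follows the same shape, just with the transition and final functions compiled into a loadable module.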
Good luck
In reply to Re: PostgreSQL cursors with Perl. by dvryaboy
in thread PostgreSQL cursors with Perl. by atemerev