I'm working on a database application with Perl and PostgreSQL, using DBI and Mojolicious. The main DB table has grown very large and I'm trying to come up with a way to reduce the data. At first I tried a pure SQL approach, but the results were disappointingly slow. Now I'm considering a Perl approach. If you have the patience, please read on.
Each row in the table is a log entry. There will be many rows that are identical except for the timestamp. As the data ages, I want to cull those rows: group records that have the same combination of class, ip_address, and hostname, and keep only the row with the highest timestamp for each day within each group.
Table "public.agent_log"
class | text | hostname | text | ip_address | text | promise_handle | text | promiser | text | promisee | text | policy_server | text | rowId | integer | \ not null default nextval('"agent_log_rowId_seq"'::regclass) timestamp | timestamp with time zone | promise_outcome | text | Indexes: "primary_key" PRIMARY KEY, btree ("rowId") "client_by_timestamp" btree ("timestamp", class)
Example data
2014-01-22T13:44:00 any   192.168.0.1 moon.example.com ...
2014-01-22T14:44:00 any   192.168.0.1 moon.example.com ... KEEP
2014-01-22T14:44:00 any   192.168.0.2 mars.example.com ... KEEP
2014-01-22T13:44:00 any   192.168.0.2 mars.example.com ...
2014-01-23T13:44:00 any   192.168.0.1 moon.example.com ...
2014-01-23T14:44:00 any   192.168.0.1 moon.example.com ... KEEP
2014-01-23T14:44:00 any   192.168.0.2 mars.example.com ... KEEP
2014-01-23T13:44:00 any   192.168.0.2 mars.example.com ...
2014-01-22T13:44:00 cpu_1 192.168.0.1 moon.example.com ...
2014-01-22T14:44:00 cpu_1 192.168.0.1 moon.example.com ... KEEP
2014-01-22T14:44:00 cpu_1 192.168.0.2 mars.example.com ... KEEP
2014-01-22T13:44:00 cpu_1 192.168.0.2 mars.example.com ...
2014-01-23T13:44:00 cpu_1 192.168.0.1 moon.example.com ...
2014-01-23T14:44:00 cpu_1 192.168.0.1 moon.example.com ... KEEP
2014-01-23T14:44:00 cpu_1 192.168.0.2 mars.example.com ... KEEP
2014-01-23T13:44:00 cpu_1 192.168.0.2 mars.example.com ...
I tried aggregating the data with this:
SELECT class, max(timestamp) AS timestamp, hostname, ip_address,
       promise_handle, promiser, promisee, policy_server, promise_outcome
FROM agent_log
WHERE timestamp < now() - interval '7 days'
GROUP BY class, DATE_TRUNC( 'day', timestamp ), hostname, ip_address,
         promise_handle, promiser, promisee, policy_server, promise_outcome
I wrapped that in a query that didn't quite work, but was essentially DELETE FROM agent_log WHERE ... NOT IN (the keep set above). It was painfully slow.

Now I'm wondering if I can do this with Perl instead. Using a cursor I could fetch the rows, group them, and delete the older ones within each group. Is it likely that using Perl in any way could be faster than raw SQL? What would you do?
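To make the Perl idea a little more concrete, here is roughly what I have in mind. It's only an untested sketch: the connection details are placeholders, it assumes DBD::Pg and the column names from the schema above, and it deletes one row at a time.

use strict;
use warnings;
use DBI;

# Placeholder connection details; the real app gets these from config.
my $dbh = DBI->connect( 'dbi:Pg:dbname=agentdb', 'user', 'password',
    { AutoCommit => 0, RaiseError => 1 } );

# Walk the old rows so that each (class, ip_address, hostname, day) group
# arrives together, newest timestamp first.  Note that DBD::Pg pulls the
# whole result set into memory on execute, so a real run would likely need
# a server-side cursor or batching.
my $sth = $dbh->prepare( q{
    SELECT "rowId", class, ip_address, hostname,
           date_trunc( 'day', timestamp ) AS day
    FROM agent_log
    WHERE timestamp < now() - interval '7 days'
    ORDER BY class, ip_address, hostname, date_trunc( 'day', timestamp ),
             timestamp DESC
} );
$sth->execute;

my $del = $dbh->prepare( q{ DELETE FROM agent_log WHERE "rowId" = ? } );

my $last_group = q{};
while ( my $row = $sth->fetchrow_hashref ) {

    # Build the group key, guarding against NULL columns.
    my $group = join "\0", map { $_ // q{} }
        @{$row}{ qw( class ip_address hostname day ) };

    if ( $group ne $last_group ) {
        # First row of a new group is the newest for that day: keep it.
        $last_group = $group;
        next;
    }

    # Any later row in the same group is an older duplicate: delete it.
    $del->execute( $row->{rowId} );
}

$dbh->commit;
$dbh->disconnect;

The ORDER BY is what lets the loop keep the first row it sees in each group and delete the rest.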
Neil Watson
watson-wilson.ca