in reply to [OT] The Long List is Long resurrected

I finally had a few minutes to think about the LLIL problem and looked into PostgreSQL for a solution. For timing reference, I use the data generation and Perl reference solution from eyepopslikeamosquito's post Rosetta Code: Long List is Long. With the 3 files, the reference script takes nearly 90 seconds on my puny work laptop:

$ time perl llil.pl big1.txt big2.txt big3.txt > big.tmp
llil start
get_properties :  6 secs
sort + output  : 78 secs
total          : 84 secs

real    1m28,236s
user    1m27,327s
sys     0m0,816s

LLIL is treated as a one-time problem, so my PostgreSQL solution is certainly not ideal. But if you think of it as a problem of "these are logs and they grow over time", this is certainly something to consider.

The working directory needs to be readable and writable by the PostgreSQL server process. Then we can bulk import/export the data with the COPY command. Here is the SQL script:

CREATE TABLE llil_raw (
    name  text,
    count bigint
);
-- INDEX makes it *way* slower
--CREATE INDEX llil_raw_idx ON llil_raw(name);

COPY llil_raw (name, count) FROM '/home/cavac/src/long_list_is_long/big1.txt' (FORMAT TEXT);
COPY llil_raw (name, count) FROM '/home/cavac/src/long_list_is_long/big2.txt' (FORMAT TEXT);
COPY llil_raw (name, count) FROM '/home/cavac/src/long_list_is_long/big3.txt' (FORMAT TEXT);

CREATE TABLE llil_result (
    name  text,
    count bigint
);

INSERT INTO llil_result (name, count)
    SELECT name, sum(count) AS total FROM llil_raw
    GROUP BY name ORDER BY total;

COPY llil_result (name, count) TO '/home/cavac/src/long_list_is_long/result.txt' (FORMAT TEXT);

DROP TABLE llil_result;
DROP TABLE llil_raw;

And the result:

$ time psql -U Earthrise_Server -d Test_DB -f llil.sql
CREATE TABLE
COPY 3515200
COPY 3515200
COPY 3515200
CREATE TABLE
INSERT 0 10367359
COPY 10367359
DROP TABLE
DROP TABLE

real    0m19,675s
user    0m0,022s
sys     0m0,009s

Yes, it's a lot slower than the optimized one-time solutions. But in practice, I imagine the data is produced over time, so a database could keep the running aggregate in llil_result up to date on every insert, without having to parse a gazillion files every time (see the sketch below). And it would be way more flexible as soon as you need filtering, or want to match/mash the data with other data.
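A minimal sketch of that incremental idea, assuming llil_result gets a unique constraint on name (which my bulk-load script above deliberately avoids, so this trades load speed for upsert support):

-- Hypothetical incremental variant: llil_result keeps the running totals.
-- ON CONFLICT requires a unique constraint, hence the PRIMARY KEY here.
CREATE TABLE llil_result (
    name  text PRIMARY KEY,
    count bigint NOT NULL
);

-- Fold each incoming (name, count) pair into the aggregate as it arrives;
-- the VALUES row is a made-up example standing in for new log data.
INSERT INTO llil_result (name, count)
VALUES ('tango', 2)
ON CONFLICT (name)
DO UPDATE SET count = llil_result.count + EXCLUDED.count;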

I also didn't do any optimization or parallelization (inherited subtables with exclusive primary key ranges and/or other partitioning tricks).
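Had I done so, one option in PostgreSQL 11 and later would be declarative hash partitioning instead of the old inherited-subtables approach. A sketch I have not benchmarked:

-- Hypothetical hash-partitioned raw table: COPY routes rows to the right
-- partition automatically, and parallel workers can scan the partitions
-- independently during the GROUP BY.
CREATE TABLE llil_raw (
    name  text,
    count bigint
) PARTITION BY HASH (name);

CREATE TABLE llil_raw_p0 PARTITION OF llil_raw FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE llil_raw_p1 PARTITION OF llil_raw FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE llil_raw_p2 PARTITION OF llil_raw FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE llil_raw_p3 PARTITION OF llil_raw FOR VALUES WITH (MODULUS 4, REMAINDER 3);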

What I DID learn in this experiment is that COPY operations can be blazingly fast for larger datasets, especially compared to INSERTs. (Most of the time spent here was the INSERT INTO ... SELECT generation of the result table.)
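To illustrate the difference, the per-row alternative to a single COPY would be millions of statements like these (hypothetical rows, just for illustration), each paying its own parse and round-trip overhead:

-- Slow: one statement per input line.
INSERT INTO llil_raw (name, count) VALUES ('tango', 2);
INSERT INTO llil_raw (name, count) VALUES ('victor', 1);
-- ...millions more...

-- Fast: one statement streams the whole file through the bulk-load path.
COPY llil_raw (name, count) FROM '/home/cavac/src/long_list_is_long/big1.txt' (FORMAT TEXT);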


Re^2: [OT] The Long List is Long resurrected
by marioroy (Prior) on Aug 09, 2024 at 21:36 UTC
    I finally had a few minutes to think about the LLIL problem and looked into PostgreSQL for a solution.

    Thank you for the PostgreSQL solution. I changed the ORDER BY clause to match the OP's requirement.

    INSERT INTO llil_result (name, count)
        SELECT name, sum(count) AS total FROM llil_raw
        GROUP BY name ORDER BY total DESC, name;

    Results from Linux installed on a USB 3.0 drive.

    $ /usr/bin/postgres -D /var/lib/pgsql/data -h localhost

    $ time psql -d test_db -f llil.sql
    CREATE TABLE
    COPY 3515200
    COPY 3515200
    COPY 3515200
    CREATE TABLE
    INSERT 0 10367603
    COPY 10367603
    DROP TABLE
    DROP TABLE

    real    0m52.508s
    user    0m0.001s
    sys     0m0.001s

    Results with the DB residing in memory (/tmp/data location).

    $ /usr/bin/postgres -D /tmp/data -h localhost

    $ time psql -d test_db -f llil.sql
    CREATE TABLE
    COPY 3515200
    COPY 3515200
    COPY 3515200
    CREATE TABLE
    INSERT 0 10367603
    COPY 10367603
    DROP TABLE
    DROP TABLE

    real    0m49.396s
    user    0m0.001s
    sys     0m0.002s

    See also the SQLite solution. The results there were captured on Clear Linux; here, on Ubuntu Linux 24.04. The Sort::Packed module is used for sorting the output.

    $ perl -I ~/perl5/lib/perl5/ llilsql.pl --threads=1 big{1,2,3}.txt | cksum
    Tie::Hash::DBD SQLite database - start
    fixed string length=12, threads=1, maps=32
    get properties   : 41.352 secs
    pack properties  :  5.897 secs
    sort packed data :  1.164 secs
    write stdout     :  3.602 secs
    total time       : 52.017 secs
       count lines   : 10545600
       count unique  : 10367603
    2956888413 93308427

    $ perl -I ~/perl5/lib/perl5/ llilsql.pl --threads=8 big{1,2,3}.txt | cksum
    Tie::Hash::DBD SQLite database - start
    fixed string length=12, threads=8, maps=32
    get properties   :  5.572 secs
    pack properties  :  0.856 secs
    sort packed data :  1.173 secs
    write stdout     :  0.719 secs
    total time       :  8.323 secs
       count lines   : 10545600
       count unique  : 10367603
    2956888413 93308427

    $ perl -I ~/perl5/lib/perl5/ llilsql.pl --threads=16 --maps=64 big{1,2,3}.txt | cksum
    Tie::Hash::DBD SQLite database - start
    fixed string length=12, threads=16, maps=64
    get properties   :  3.140 secs
    pack properties  :  0.533 secs
    sort packed data :  1.175 secs
    write stdout     :  0.395 secs
    total time       :  5.247 secs
       count lines   : 10545600
       count unique  : 10367603
    2956888413 93308427