PerlMonks  

Re^4: Perl Contempt in My Workplace

by cavac (Parson)
on May 20, 2021 at 14:39 UTC ( [id://11132777] )


in reply to Re^3: Perl Contempt in My Workplace
in thread Perl Contempt in My Workplace

For data heavy applications (especially using databases), you will always have to do a lot of heavy lifting yourself.

As soon as your data goes above a certain threshold of size and/or complexity, a "standard" data handling module (in any programming language) will probably not cut the mustard. Sooner or later you'll end up writing your own SQL statements, throwing in some caching, and adapting the whole thing to your exact requirements anyway.
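As a sketch of that "do it yourself" layer (everything here is hypothetical and illustrative, not from PageCamel: the table name, columns, and the cache policy are made up): a hand-rolled helper that assembles the SQL and bind values itself, plus a naive cache keyed on the finished statement.

```perl
use strict;
use warnings;

# Hypothetical sketch of a hand-rolled query layer: build the SQL and
# bind values yourself instead of letting a generic module guess.
# Table and column names are made up for illustration; a real version
# must whitelist column names to avoid SQL injection.
sub build_query {
    my ($table, $filters, $limit, $offset) = @_;
    my (@where, @binds);
    for my $col (sort keys %$filters) {
        push @where, "$col = ?";
        push @binds, $filters->{$col};
    }
    my $sql = "SELECT * FROM $table";
    $sql .= ' WHERE ' . join(' AND ', @where) if @where;
    $sql .= ' ORDER BY id LIMIT ? OFFSET ?';
    return ($sql, [ @binds, $limit, $offset ]);
}

# Naive in-memory cache keyed on statement plus binds; assumes a DBI
# handle in $dbh when actually called (not exercised in this sketch).
my %cache;
sub cached_rows {
    my ($dbh, $sql, $binds) = @_;
    my $key = join "\0", $sql, @$binds;
    return $cache{$key} //= $dbh->selectall_arrayref($sql, {}, @$binds);
}

my ($sql, $binds) = build_query('orders', { status => 'open' }, 50, 100);
print "$sql\n";
# SELECT * FROM orders WHERE status = ? ORDER BY id LIMIT ? OFFSET ?
```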

For DataTables this is especially true. First of all, the JS part has tons of options and tons of plugins. So, a completely generic backend would have to replicate everything, potentially making it a big mess of spaghetti code that moves at the speed of a glacier.

And secondly, if you use paging or scrolling in DataTables, especially in combination with filters and JOINs over multiple tables, this will just not work with some SQL statement thrown together by a generic module. It'll bog down the server and bore the user to death. Scrolling in particular can generate multiple requests per second, so you'd better optimize the heck out of your backend.
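For reference, DataTables' server-side processing mode sends `draw`, `start`, `length` and `search[value]` with each request and expects a JSON envelope back. A minimal sketch of that envelope (the field names follow the DataTables protocol; the row data here is a hard-coded stand-in for a real paged, filtered query):

```perl
use strict;
use warnings;
use JSON::PP;

# Build the JSON envelope DataTables expects in serverSide mode.
# $rows would come from a paged, filtered DB query in a real backend;
# here it is a hard-coded stand-in.
sub datatables_response {
    my ($params, $total, $filtered, $rows) = @_;
    return encode_json({
        draw            => int($params->{draw} // 0), # echoed back so the client pairs request/response
        recordsTotal    => $total,     # row count before filtering
        recordsFiltered => $filtered,  # row count after filtering
        data            => $rows,      # only the current page of rows
    });
}

my $json = datatables_response(
    { draw => 3, start => 100, length => 50 },
    10_000_000,   # total rows in the table
    1_234,        # rows matching the current filter
    [ [ 'col1', 'col2' ] ],
);
print "$json\n";
```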

Edit: I do have some more or less generic modules for DataTable-based listing of any old PostgreSQL table, but they're tightly integrated into my PageCamel framework, so they probably won't be of any use to you. But just in case you're interested in seeing how I did it, it's in the ListAndEdit webserver module.

perl -e 'use Crypt::Digest::SHA256 qw[sha256_hex]; print substr(sha256_hex("the Answer To Life, The Universe And Everything"), 6, 2), "\n";'

Replies are listed 'Best First'.
Re^5: Perl Contempt in My Workplace
by marto (Cardinal) on May 21, 2021 at 09:45 UTC

    "For DataTables this is especially true. First of all, the JS part has tons of options and tons of plugins. So, a completely generic backend would have to replicate everything, potentially making it a big mess of spagetti code that moves at the speed of a glacier."

    I don't believe this is true; you could write code to produce these requirements based on options provided via a Perl constructor if required.

    "And secondly, if you use paging or scrolling in DataTables, especially in combination with filters and JOINs over multiple tables, this will just not work with some SQL statement thrown together by a generic module."

    I often find this to be the case in various products; of course your mileage may vary. The user in question is inflexible in terms of this issue, however, and efforts to convince them otherwise seem like a waste of time and effort.

    Update: slight rewording.

      With all due respect - why inflexible?

      I've read your post Re^6: Perl Contempt in My Workplace carefully - where exactly was I being inflexible?
      In that reply you stated that you only return JSONs, but JSONs are not possible for 10_000_000 records.

        JSONs are not possible for 10_000_000 records

        JSON is a data format. It has no limitations on the size of the data.
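        To illustrate (a minimal sketch using core JSON::PP): a JSON array can be emitted one record at a time, so per-record memory use stays constant no matter how many records there are. Ten records stand in for 10_000_000 here; a real server would print each chunk straight to the client instead of concatenating.

```perl
use strict;
use warnings;
use JSON::PP qw(decode_json);

# Emit a JSON array one element at a time; only a single record is
# ever encoded in memory at once. In a real backend each chunk would
# be printed to the socket rather than appended to a string.
my $enc = JSON::PP->new->canonical;
my $out = '[';
for my $i (1 .. 10) {           # imagine 10_000_000 here
    $out .= ',' unless $i == 1;
    $out .= $enc->encode({ id => $i });
}
$out .= ']';

my $records = decode_json($out);
print scalar(@$records), " records, last id ", $records->[-1]{id}, "\n";
# 10 records, last id 10
```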


        🦛


        JSONs are not possible for 10_000_000 records

        As hippo says, JSON is just a data format.

        Let's give it a try, testing with some generated md5 values:

        testdb=# select count(*) from vkon_json;
          count
        ----------
         10000000
        (1 row)

        testdb=# select * from vkon_json limit 3;
                             js                     | id
        --------------------------------------------+----
         {"f1": "c4ca4238a0b923820dcc509a6f75849b"} |  1
         {"f1": "c81e728d9d4c2f636f067f89cc14862c"} |  2
         {"f1": "eccbc87e4b5ce2fe28308fd9f2a7baf3"} |  3
        (3 rows)

        -- retrieve:
        testdb=# select * from vkon_json where js @> '{"f1": "d1ca3aaf52b41acd68ebb3bf69079bd1"}';
                             js                     |    id
        --------------------------------------------+----------
         {"f1": "d1ca3aaf52b41acd68ebb3bf69079bd1"} | 10000000
        (1 row)

        Time: 0.679 ms

        Less than a millisecond. What do you think? Possible?

        (I could have put everything into a 1-column, 1-row table, but that just seems too dumb. Possible, though, and it would perform just as fast.)
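        Worth noting: a `@>` containment lookup on jsonb is only sub-millisecond over 10_000_000 rows when an index backs it, and PostgreSQL's GIN index on the jsonb column (the default jsonb_ops opclass supports `@>`) is the standard choice. A sketch of the setup statements - the index name is made up, and the table is the one from the example above:

```perl
use strict;
use warnings;

# Statements a GIN-backed containment lookup needs; the index name
# is hypothetical. PostgreSQL's default jsonb_ops GIN opclass
# supports the @> containment operator used in the query above.
my @setup = (
    q{CREATE INDEX vkon_json_js_gin ON vkon_json USING gin (js)},
    q{ANALYZE vkon_json},
);
print "$_;\n" for @setup;
```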


        Inflexible in that you make claims that have no basis in reality, besides your own stance e.g. Re^9: Perl Contempt in My Workplace, and keep asserting the same flawed responses, despite previous corrections, as demonstrated above.

Re^5: Perl Contempt in My Workplace
by vkon (Curate) on May 28, 2021 at 08:00 UTC
    Thank you for the links!

    Actually, my request was rather generic (IMO):
    I have one large table (10_000_000+ records) and just want it displayed - nothing more.
    Excel-like search/filtering on multiple columns is all I needed.

    I thought the request was rather generic and was hoping to find a ready-made solution to the problem.
