in reply to Re: fetch row or fetchall
in thread fetch row or fetchall

If I put in a MySQL LIMIT, or restrict by Oracle ROWNUM, then I have to do a second query to get the real, unlimited total. I reckon doing a second count query is inherently less efficient than fetching all the results from one query.

i.e. this is so I can say "results 1 to 100 of 10,000 rows".

From what gmax says there, it looks like I am best off (assuming enough memory) losing the count query and any limiting SQL in the initial query, and just going with a fetchall_arrayref on the initial query. Thanks.
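For comparison, the two-query approach being weighed here might look something like this with DBI. This is only a sketch: the connection details, the `global_urls_http` table (borrowed from tachyon's example below), and the column names are all hypothetical.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical connection details - adjust for your own database
my $dbh = DBI->connect( 'dbi:mysql:mydb', 'user', 'pass',
    { RaiseError => 1 } );

my ( $page, $per_page ) = ( 1, 100 );
my $offset = ( $page - 1 ) * $per_page;

# Query 1: fetch only the page of rows we will actually display
my $rows = $dbh->selectall_arrayref(
    'SELECT id, url FROM global_urls_http ORDER BY id LIMIT ? OFFSET ?',
    undef, $per_page, $offset,
);

# Query 2: the cheap count(*) for the "of N" part of the pager
my ($total) = $dbh->selectrow_array(
    'SELECT COUNT(*) FROM global_urls_http'
);

printf "results %d to %d of %d rows\n",
    $offset + 1, $offset + @$rows, $total;
```

The total fetched here is 100 rows plus one number, regardless of how big the table is.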

Replies are listed 'Best First'.
Re^3: fetch row or fetchall
by tachyon (Chancellor) on Nov 10, 2004 at 00:08 UTC

    count(*) is an optimised query on MySQL (and probably most other DBs)

    mysql> describe select count(*) from global_urls_http;
    +------------------------------+
    | Comment                      |
    +------------------------------+
    | Select tables optimized away |
    +------------------------------+
    1 row in set (0.00 sec)

    mysql> select count(*) from global_urls_http;
    +----------+
    | count(*) |
    +----------+
    |  9908618 |
    +----------+
    1 row in set (0.00 sec)

    I will guarantee you that pulling 10 million odd rows just to get the count above will take longer than 0.00 sec :-)

    Pulling back 10,000 rows just to get the count and save an extra query has some potentially very undesirable side effects.

    Assuming 512-byte records, the base data is 5 MB - even with a disk transfer speed of 50 MB/sec that is a minimum of 1/10th of a second (probably more like half a second in the real world) just to pull the data off the disk. Given that most DBs can execute hundreds of queries per second, two queries are likely to be significantly faster, as the expense of pulling 100x as much data as you really want is quite real.

    Anyway, by the time you get that data into a Perl array it is probably 10 MB or more. Now this may not seem like a problem until you get your head around the fact that Perl essentially never releases memory back to the OS. It does free memory, but typically keeps that memory for its own reuse. So why does that matter? Well, if you have 10-20 long-running parallel processes (mod_perl for example), the net result is an apparent memory leak over time. As each child makes a 'mega' query, it grabs enough memory for the results. The net result is that each child grows to the size of the largest query it has ever made.
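    For long-running children the usual way to keep the footprint bounded is to fetch a row at a time instead of slurping the whole result set. A minimal sketch (connection details and the per-row handler are hypothetical):

    ```perl
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:mysql:mydb', 'user', 'pass',
        { RaiseError => 1 } );

    my $sth = $dbh->prepare('SELECT id, url FROM global_urls_http');
    $sth->execute;

    # fetchrow_arrayref reuses a single buffer per row, so the Perl
    # process never holds more than one row's worth of result data
    while ( my $row = $sth->fetchrow_arrayref ) {
        process_row(@$row);    # hypothetical per-row handler
    }
    ```

    One caveat: DBD::mysql buffers the whole result set client-side by default (mysql_store_result), so to actually stream rows from the server you would also set the mysql_use_result attribute on the handle.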

    cheers

    tachyon

      Very interesting, tachyon - so as always it really depends on the details of the data volumes to be fetched.

      In general, though, I take from this that it is a good idea to fetch just what you need, to avoid memory/performance problems. Thanks again.

Re^3: fetch row or fetchall
by jZed (Prior) on Nov 09, 2004 at 23:54 UTC
    No. Doing a second count query is not "inherently more inefficient" than fetching all the results from one query. Suppose you have 1 million rows but only want 100. Which do you think will be more efficient - a) fetching 100 rows and then fetching the number 1,000,000 (not a million rows, just the single number) in a second query, for a total fetch of 101 rows, OR b) fetching all 1,000,000 rows?