in reply to Re^3: perl mysql - INSERT INTO, 157 columns
in thread perl mysql - INSERT INTO, 157 columns
Usually not: the actual data transfer starts only at the first fetch call, so no table content is fetched at all.
Some DBDs do not even need the execute: they have the table/field info readily available right after the prepare.
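To make that concrete, here is a minimal DBI sketch of the idea (the DSN, credentials, and the table name `wide_table` are placeholders, not from the thread): prepare a `SELECT *`, read the column names from `$sth->{NAME_lc}`, and build the many-placeholder INSERT from them without ever fetching a row. The `WHERE 0 = 1` clause is an extra precaution so that even drivers that retrieve rows eagerly at execute time return nothing.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Placeholder connection details - adjust to your own database.
my $dbh = DBI->connect ("dbi:mysql:database=test", "user", "pass",
    { RaiseError => 1, AutoCommit => 1 });

# Prepare a SELECT * just to learn the column names.
# The WHERE clause keeps even eager drivers from returning any rows.
my $sth = $dbh->prepare ("select * from wide_table where 0 = 1");
$sth->execute;                       # some DBDs fill NAME at prepare already
my @columns = @{$sth->{NAME_lc}};    # e.g. the 157 column names
$sth->finish;

# Build the INSERT with one placeholder per column.
my $ins = sprintf "insert into wide_table (%s) values (%s)",
    join (", ", @columns),
    join (", ", ("?") x @columns);
my $ins_sth = $dbh->prepare ($ins);

# $ins_sth->execute (@values_in_column_order);
```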
Re^5: perl mysql - INSERT INTO, 157 columns
by erix (Prior) on May 02, 2014 at 19:25 UTC
"Usually not" -- well, maybe so, but in the case of Postgres you really should add a limiting clause. This is basically your example code running against 9.4devel, with and without a limiting where-clause:
(foo has 10M one-column rows; just a `create table foo as select n from generate_series(1, 10000000) as f(n);`.)
(What the hell -- let me just dump the test here too, even if it's a bit clunky; disks are cheap and patient:)
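The test erix dumped is not reproduced in this extract, but a hedged sketch of the kind of comparison being described could look like the following (the DSN is an assumption; `foo` is the 10M-row table from the text). The point is that DBD::Pg by default retrieves the whole result set during execute, so the unlimited query is expensive even though no fetch is ever issued, while the query with a limiting where-clause returns almost immediately.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use Time::HiRes qw( time );

# Assumed DSN; foo is the one-column, 10M-row table from the text.
my $dbh = DBI->connect ("dbi:Pg:dbname=test", undef, undef,
    { RaiseError => 1, AutoCommit => 1 });

for my $sql ("select * from foo",                 # no limit: full result at execute
             "select * from foo where n = -1") {  # limiting clause: no rows shipped
    my $t0  = time;
    my $sth = $dbh->prepare ($sql);
    $sth->execute;                    # DBD::Pg pulls the rows here, not at fetch
    my @columns = @{$sth->{NAME}};    # the column info is all we actually wanted
    $sth->finish;
    printf "%-35s -> %d column(s), %.3f s\n", $sql, scalar @columns, time - $t0;
}
```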
by Tux (Canon) on May 03, 2014 at 12:52 UTC
You convinced me, but I was not completely wrong: Unify, for example, does what I said, which is most likely why I assumed it held for all databases. Also note that accessing a remote database (see Oracle) diminishes the difference:
(benchmark code and timings elided in this extract: "Read more... (3 kB)" on the original node)
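The code behind that collapsed link is not reproduced here. As a hedged illustration of the Unify remark above, a small probe like this (driver and credentials taken from the standard DBI environment variables, `wide_table` a placeholder) shows whether a given DBD exposes the column names right after prepare or only after execute:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# DSN/credentials from the standard DBI environment variables;
# "wide_table" is a placeholder name, not from the thread.
my $dbh = DBI->connect ($ENV{DBI_DSN}, $ENV{DBI_USER}, $ENV{DBI_PASS},
    { RaiseError => 1, AutoCommit => 1 });

my $sth = $dbh->prepare ("select * from wide_table");

# DBI only guarantees NAME after execute; some drivers (Unify, per the
# post above) already know the column names right after prepare.
if ($sth->{NAME} && @{$sth->{NAME}}) {
    print "column names available after prepare\n";
}
else {
    $sth->execute;
    print "column names available only after execute\n";
}
print join (", ", @{$sth->{NAME}}), "\n";
$sth->finish;
```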
Enjoy, Have FUN! H.Merijn