Given what I'm seeing, the tables don't have to be that large, in terms of record count, to have a large impact on memory usage.
To give you an example, I have one table with 2,048,812 records. A decent amount, but not other-worldly.
I don't have a great grasp of how Linux uses or reports memory, so I'll show you what top reports when I select from that table.
This is with a fresh perl instance:
  PID USER      PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
14069 mseabroo  17  0  9336 5540 1816 S  0.0  2.1  0:05.68 iperl

And after selecting all records from the table with a simple "prepare", "execute":

  PID USER      PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
14069 mseabroo  16  0  250m 159m  844 S  0.0 61.1  0:11.22 iperl
This is immediately after calling execute().
It should be noted that I'm selecting all columns here, but collectively they don't amount to much: a couple of fixed-width chars, a few integers, a few doubles, and a datetime.
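For context, the select is nothing fancier than the following sketch; the connection details and table name are placeholders, not my actual schema:

    use strict;
    use warnings;
    use DBI;

    # Placeholder connection details and table name, not the real schema.
    my $dbh = DBI->connect(
        'dbi:mysql:database=mydb;host=localhost',
        'user', 'password',
        { RaiseError => 1 },
    );

    # A plain prepare/execute: by default DBD::mysql buffers the whole
    # result set on the client as soon as execute() returns, which is
    # where the memory jump shows up.
    my $sth = $dbh->prepare('SELECT * FROM big_table');
    $sth->execute;

    while ( my $row = $sth->fetchrow_hashref ) {
        # process $row here
    }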
In any case, it appears MySQL prefers to put the processing burden on the client, which is why it hands the entire result set over to my script. That behavior is the default, but it is optional; I just don't know yet whether turning it off will be feasible.
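If I'm reading the DBD::mysql docs right, the mysql_use_result attribute is what turns that buffering off; something like the sketch below (same placeholder names as above) would stream rows from the server instead of storing the whole result set client-side. The trade-off, as I understand it, is that the connection stays tied up until the result set is fully fetched.

    # Sketch only: ask DBD::mysql to use mysql_use_result() instead of
    # mysql_store_result(), so rows are pulled from the server one at a
    # time during the fetch loop rather than buffered up front.
    my $sth = $dbh->prepare(
        'SELECT * FROM big_table',
        { mysql_use_result => 1 },
    );
    $sth->execute;

    while ( my $row = $sth->fetchrow_arrayref ) {
        # process $row here
    }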