in reply to Re^4: how apply large memory with perl?
in thread how apply large memory with perl?

"I didn't realize that those doublings were done in new memory rather than appending to what was already allocated"

If the process already has sufficient memory for the doubling, and the memory immediately above the existing allocation is free, then the C-style array of pointers that forms the backbone of a Perl array may be realloc()able in place, so the old and new allocations never need to coexist and the pointers never need to be copied. But that's a pretty big if.
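And if you know the final size in advance, you can sidestep the doublings (and any copying) altogether by pre-extending the array so the backbone is allocated just once. A minimal sketch (the ten-million element count is purely for illustration):

    use strict;
    use warnings;

    my @a;
    $#a = 9_999_999;    # pre-extend: the 10-million-pointer backbone is
                        # allocated in one shot, so no doubling or copying
                        # occurs as the elements are filled in
    $a[ $_ ] = $_ * $_ for 0 .. $#a;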

That said, by far the biggest saving comes from avoiding building big lists on the stack in the first place. For example, compare iterating an array via an expression that pushes a complete copy of its contents onto the stack with one that walks the array in place, as sketched below.
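A minimal sketch of that comparison (the particular constructs are my choice for illustration):

    use strict;
    use warnings;

    my @big = ( 1 .. 10_000_000 );
    my $sum = 0;

    # reverse() in list context builds a complete 10-million element copy
    # on the stack before the loop body ever runs; peak memory holds @big
    # plus that copy:
    $sum += $_ for reverse @big;

    # foreach over a plain array is special-cased: the loop variable
    # aliases each element in turn and no copy of the list is built:
    $sum += $_ for @big;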

When you routinely work with very large volumes of data and CPU-bound processes, rather than the (typically CGI-based) IO-bound processes where 1 MB is often considered "big data", you can count yourself lucky that the preponderance of programmers and pundits who fall into the latter camp have not yet exercised much influence on the nature of Perl.

I revel in Perl's TIMTOWTDI, which allows me to tailor my usage to the needs of my applications, rather than being forced into the straitjacket of a theoretical "best way" as defined by someone(s) working in unrelated fields with entirely different criteria.

If I wanted the world of "only one good way", I'd use Python.


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use every day'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

The start of some sanity?