in reply to Re^7: Highly efficient variable-sharing among processes
in thread Highly efficient variable-sharing among processes
Don't take this to heart, but

"I think what is being discussed"

is a strange way to introduce your to-date-unstated thoughts on the possible solution to the OP's problem. Nobody had yet discussed it.
"If the forked process doesn't write the data, there is no copy made."
Be aware: you don't have to "write" to a Perl hash in order to trigger COW memory copying pages wholesale.
E.g. say the hash was loaded with data from a file, string keys and numeric values, and the program doing the lookups looks up a string, retrieves a number, and then performs math with it.
The very act of performing a mathematical operation with the value scalar, stored as a string value, will cause Perl to convert that value from an SvPV to an SvIV. And that process will cause at least 2, and possibly 3, pages of memory to be modified.
The value hasn't been "written", but the scalar representation of that value has, and that causes copy-on-write to allocate and copy entire pages of RAM.
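For anyone who wants to watch that in-place upgrade happen, here is a minimal sketch using the core Devel::Peek module (the hash contents are invented purely for illustration):

```perl
use strict;
use warnings;
use Devel::Peek;

# A value read from a file arrives as a string, so the scalar is an SvPV.
my %lookup = ( widget => "42" );

Dump $lookup{widget};    # string-only scalar: POK set, PV = "42"

# Merely using the value in arithmetic makes Perl cache the numeric form
# inside the same scalar, even though nothing is assigned to the hash.
my $doubled = $lookup{widget} * 2;

Dump $lookup{widget};    # IOK is now set as well, with IV = 42 cached
```

Compare the two dumps: the second shows an IOK flag and a cached IV that weren't there before, so the SV has been modified in place and the page it lives on is dirty.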
"It is indeed possible to get a 198 GB memory machine from Dell."
200GB is not a lot of RAM; multi-terabyte memory machines are available if you have the money, e.g. a 32 TB machine.
But more to the point: using a commodity box, memory-mapped I/O, and a PCIe NVMe SSD to hold the data, 256 GB of amazingly quick, file-based data can be made available, to as many processes as can see the SSD, for a couple of hundred quid.
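As a rough sketch of that arrangement (File::Map from CPAN, the file name values.bin, the fixed 4-byte record layout, and the record number are all assumptions of mine, not details from the post):

```perl
use strict;
use warnings;
use File::Map 'map_file';    # CPAN module; mmap()s the file behind a scalar

# Map the data file read-only. Every process that maps the same file shares
# the kernel's page cache, so there is one copy of the data in RAM no matter
# how many processes do lookups.
map_file my $data, 'values.bin', '<';

# Hypothetical layout: fixed-width records, each one 32-bit unsigned integer
# in network (big-endian) order, addressed by record number.
sub lookup_value {
    my ($record) = @_;
    return unpack 'N', substr $data, $record * 4, 4;
}

printf "record %d holds %u\n", 1_000, lookup_value(1_000);
```

Because the mapping is read-only, lookups never dirty the mapped pages, so the copy-on-write concern above doesn't arise and the kernel keeps a single shared copy of the data for every process that maps the file.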