in reply to Re^5: Highly efficient variable-sharing among processes
in thread Highly efficient variable-sharing among processes

I think what is being discussed is copy-on-write.

Why do you think that? And who do you think is discussing that?

The OP mentions only "*other* processes to be able to do lookups". No mention of anybody writing to anything; by him or anyone else.


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority". I knew I was on the right track :)
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^7: Highly efficient variable-sharing among processes
by Marshall (Canon) on Aug 29, 2016 at 23:35 UTC
    I am saying that if the other processes don't write to this huge hash, no copy of the data is made at all; all of those processes can access the same data segment. In Unix, there is not much overhead in this. Sorry if my post wasn't clear: since no modification is being made, little overhead will occur, and a huge copy of this humongous hash will not happen in order to start a new process.
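A minimal sketch of the fork-based sharing being described (the hash and key names here are made up for illustration). On Unix, the child sees the parent's hash without any explicit copy; the kernel shares the parent's pages until one side writes to them:

```perl
use strict;
use warnings;

# Stand-in for the huge hash built before forking.
my %big = map { "key$_" => $_ } 1 .. 5;

my $pid = fork() // die "fork failed: $!";
if ( $pid == 0 ) {
    # Child: the hash is visible here with no copy having been made up
    # front -- the kernel marked the parent's pages copy-on-write.
    print "child sees key3 = $big{key3}\n";
    exit 0;
}
waitpid $pid, 0;
```

Whether those shared pages *stay* shared once the child starts using the data is a separate question, as discussed below.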
      I think what is being discussed

      Don't take this to heart, but

      1. I think what is being discussed

        Is a strange way to introduce your to-date-unstated thoughts on a possible solution to the OP's problem.

        Nobody had yet discussed it.

      2. If the forked process doesn't write the data, there is no copy made.

        Be aware, you don't have to "write" to a Perl hash in order to trigger COW copying of memory pages wholesale.

        E.g. say the hash was loaded with data from a file: string keys and numeric values. The program doing the lookups looks up a string, retrieves a number, and then performs math with it.

        The very act of performing a mathematical operation with the value scalar, stored as a string, will cause Perl to upgrade that value from an SvPV to an SvPVIV; and that upgrade will cause at least 2, and possibly 3, pages of memory to be modified.

        The value hasn't been "written", but the scalar representation of that value has; and that causes copy-on-write to allocate and copy entire pages of RAM.

      3. It is indeed possible to get a 198 GB memory machine from Dell.

        200GB is not a lot of RAM; multi-terabyte memory machines are available if you have the money. E.g. a 32 TB machine.

        But more to the point: using a commodity box, memory-mapped IO, and a PCIe NVMe SSD to hold the data, 256GB of amazingly quick file-based data can be made available, to as many processes as can see the SSD, for a couple of hundred quid.
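A sketch of the memory-mapped-IO approach, using the core `:mmap` PerlIO layer (available where the platform supports mmap(2); the file contents and format here are invented for the example). Because the mapping is backed by the kernel's page cache, every reader process shares a single physical copy of the file; the CPAN module File::Map goes further and exposes the mapping directly as a scalar, avoiding even the read-buffer copy:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Build a tiny stand-in for the big SSD-resident data file:
# one "key:value" record per line.
my ( $fh, $path ) = tempfile( UNLINK => 1 );
print {$fh} "alpha:1\nbeta:2\ngamma:3\n";
close $fh or die "close failed: $!";

# The :mmap layer backs the filehandle's buffer with mmap(2) instead of
# read(2), so readers share the kernel page cache rather than each
# pulling a private copy through the IO buffers.
open my $map, '<:mmap', $path or die "can't mmap $path: $!";
my %lookup;
while ( my $line = <$map> ) {
    chomp $line;
    my ( $k, $v ) = split /:/, $line;
    $lookup{$k} = $v;
}
close $map;

print "beta => $lookup{beta}\n";
```

In a real deployment each lookup process would search the mapping directly rather than building a private hash, since the private hash is exactly the per-process copy being avoided.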
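The in-place upgrade described in point 2 above can be observed with the core B module (the hash contents here are invented; the SV class names are what current perls report for a plain string scalar before and after numeric use):

```perl
use strict;
use warnings;
use B ();

# Value stored as a string, as if read from a file.
my %h = ( answer => "42" );

# Inspect the scalar's internal type before and after numeric use.
my $before = ref B::svref_2object( \$h{answer} );   # plain string SV

my $sum = $h{answer} + 8;   # numeric use of the stored string value

my $after = ref B::svref_2object( \$h{answer} );    # upgraded in place

print "$before -> $after\n";
```

The scalar inside the hash changes representation even though nothing was assigned to it, which is the write that dirties the COW-shared page.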


        I apologize for a poor post that wasn't clear.

        Shortly thereafter a cable tech made a poor decision which took my internet and cable TV offline for a couple of days, hence a tardy apology. I have no issue with your post.

        No matter what we do in terms of S/W and H/W redundancy, there is often a way for a human to screw it up! In this case, a human moved my home's connection from port 123 to port 789 in a wiring cabinet .5 km from my home. This of course has nothing at all to do with the OP's question, except to say that humans (me included) make mistakes.