in reply to Re^2: PV limits on 64-bit perls?
in thread PV limits on 64-bit perls?

Ok, I missed your point about the length of variable names. I didn't get that from the original question (I just skimmed past PV).

This appears to have a LOT to do with memory-mapped virtual address space. If you create a structure that cannot fit into memory at once, the performance penalty can be severe, depending upon how localized your accesses to that structure are.

Re^4: PV limits on 64-bit perls?
by BrowserUk (Patriarch) on Sep 23, 2009 at 03:29 UTC

    Read this. Particularly the first para in the section entitled: "What Do Memory-Mapped Files Have to Offer?"


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      Figure 5 is pretty much it! You have got it!

      The OS, whether WinXP or Unix, will "map" the file/structure (like @array) into user memory space. It looks to the user like a single thing, although the OS/file system may bring parts of the file in and out of physical memory to make it look that way. There is one extra copy that is not shown in this diagram (you don't write directly into my I/O buffer, you write into something that I copy into my system I/O buffer when you ask me to do a write()).

      The main points are: 1) a big memory structure will get paged out to disk if the OS figures "this is a good idea" or can't fit it into physical memory; 2) jumping around widely in, or even going sequentially through, this memory structure will result in disk I/O.

      At the "end of the day", if you create a structure that won't fit easily into your assigned (allocated) memory space, there is going to be a performance penalty to make it look like it "does fit". The OS will make it look like you have the physical memory even though you do not, but there will be a cost.

        There is one extra copy that is not shown in this diagram (you don't write directly into my I/O buffer, you write into something that I copy into my system I/O buffer when you ask me to do a write()).

        Not on Windows--I can't speak to *nix. There is no "I/O buffer" involved. And no "copying".

        The point is: the file is already on disk! And it doesn't get read into physical pages of RAM until you attempt to access it. And it doesn't get written (anywhere) unless you write to it. All that has happened up to the point where you attempt to read or write the mapped address space is that a few virtual-to-physical mapping tables have been set up.


Re^4: PV limits on 64-bit perls?
by FloydATC (Deacon) on Sep 23, 2009 at 11:20 UTC
    In 1989, one might have said the same thing about keeping an 8 Mbyte data structure in memory. Twenty years from now, 8 Gbytes may be the average size of a hologram on your camera.

    Ideally, scalars should just hold whatever you throw at them, no questions asked :-)

    -- Time flies when you don't know what you're doing
      In the early 80's I worked on systems where 8MB+ was possible with the 8086, so sure, an 8MB data structure was possible with a microprocessor a lot sooner than 1989! Of course, the 8086 only had a 1MB address space, so you needed external memory-mapping hardware and other complications like ECC memory, because the RAM chips of the day weren't nearly as robust as they are now. We organized the memory 32 bits wide, since it took fewer RAM chips that way (counting data + ECC bits), at the cost of more memory-controller logic.

      As near as I can tell, the software appetite for memory is insatiable. The semiconductor RAM guys are crunching out more and more memory in smaller and smaller packages, but the hard disk guys are doing the same thing! For the foreseeable future, there will always be a tiered memory system based upon cost, with the cheaper stuff being bigger and slower.

      Scalars will never be able to hold "whatever you throw at them" because, whatever the new limit is, somebody will figure out not only how to use it but also a reason to exceed it!