Ok, missed your point about the length of variable names. I didn't get that from the original question (just went past PV). This appears to have a LOT to do with memory-mapped virtual address space. If you create a structure that cannot fit into memory at once, the performance penalty can be severe, depending upon how localized the accesses to that structure are.
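As a rough illustration of why a Perl structure can blow past physical memory sooner than the raw data size suggests, here is a small sketch using the CPAN module Devel::Size (my choice for the example, not something mentioned in this thread; the sizes are made up):

#!/usr/bin/perl
use strict;
use warnings;
use Devel::Size qw(total_size);   # CPAN module, not in core Perl

# Build a million short strings and compare the raw character count
# with what Perl actually allocates: every element is a full scalar
# (an SV with a PV inside), each carrying its own bookkeeping overhead.
my @array = map { "x" x 10 } 1 .. 1_000_000;   # 10 MB of raw characters

printf "raw character data : %d bytes\n", 10 * 1_000_000;
printf "total_size(\@array): %d bytes\n", total_size(\@array);

On a typical 64-bit perl the second number comes out several times larger than the first, which is how "only a few MB of data" can still become a structure that doesn't fit comfortably in RAM.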
figure 5 is pretty much it! You have got it!
The OS, whether WinXP or Unix, will "map" the file/structure (like @array) into user memory space. It looks to the user like a single thing, although the OS/file system may bring parts of the file in and out of physical memory to make it look that way. There is one extra copy that is not shown in this diagram: you don't write directly into the system I/O buffer, you write into your own buffer, which gets copied into the system I/O buffer when you ask for a write().
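For what it's worth, here is a minimal sketch of letting perl and the OS do that mapping for a plain file, assuming your perl was built with the PerlIO :mmap layer (the file name is just a placeholder):

#!/usr/bin/perl
use strict;
use warnings;

# Read a file through the PerlIO :mmap layer: the file is mapped into
# the process's address space and the OS pages pieces of it in and out
# of physical memory on demand, rather than copying every read through
# a separate user-space buffer.
my $file = 'big_data.txt';    # placeholder name

open my $fh, '<:mmap', $file or die "Cannot open $file via :mmap: $!";
while ( my $line = <$fh> ) {
    # touching each line faults the relevant pages in as needed
}
close $fh;

Writes are a different story, as described above: your data still gets copied into the system I/O buffer when you ask for the write().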
The main points are: 1) a big memory structure will get mapped out to disk if the OS figures "this is a good idea" or can't fit it into physical memory; 2) jumping around widely, or even going sequentially through this memory structure, will result in disk I/O.
At the "end of the day" if you create a structure that won't fit easily into your assigned(allocated) memory space, there is going to be a performance penalty to make it look like this "does fit". The OS will make it look like you have the physical memory even though you do not, but there will be a cost.
In the early 80's I worked on systems where 8MB+ was possible with the 8086, so an 8MB data structure was possible with a microprocessor a lot sooner than 1989! Of course the 8086 processor only had a 1MB address space, so you had to have external memory-mapping hardware and other complications like ECC memory, because the RAM chips of the day weren't nearly as robust as they are now. We organized the memory as 32 bits wide, since it took fewer RAM chips to do it that way considering (data + ECC bits), at the cost of more memory controller logic.
As near as I can tell, the software appetite for memory is insatiable. The semiconductor RAM guys are cranking out more and more memory in smaller and smaller packages, but the hard disk guys are doing the same thing! For the foreseeable future, there will always be a tiered memory system based upon cost, with the cheaper stuff being bigger and slower.
Scalars will never be able to hold "whatever you can throw at them", because whatever the new limit is, somebody will figure out not only how to use it, but also a reason to exceed it!