Re: Out-Of-Date Optimizations? New Idioms? RAM vs. CPU
by Abigail-II (Bishop) on Jul 21, 2003 at 08:04 UTC
If you recalculate something instead of keeping it in memory, it won't stay in the cache either. The cache is a memory cache: whatever is in it is also in main memory.
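To make the trade-off concrete, here is a minimal Perl sketch (a hypothetical example, not something from this thread) contrasting a routine that keeps its results in memory with one that recomputes them every time; only the cached results can ever end up in main memory, and therefore in the CPU cache:

    use strict;
    use warnings;

    my %cache;    # results kept in main memory (and so may also land in the CPU cache)

    sub fib_cached {
        my $n = shift;
        return $cache{$n} if exists $cache{$n};
        return $cache{$n} = $n < 2 ? $n : fib_cached($n - 1) + fib_cached($n - 2);
    }

    sub fib_recompute {    # keeps nothing around; spends CPU cycles instead
        my $n = shift;
        return $n < 2 ? $n : fib_recompute($n - 1) + fib_recompute($n - 2);
    }

    print fib_cached(30), " ", fib_recompute(30), "\n";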
CPUs have become faster, but main memory has also become bigger. Nowadays, computers tend not to swap; if your server swaps on a regular basis, you might want to do some tuning.
Memory I/O is faster than disk I/O, and the speed ratio between memory I/O and disk I/O is larger than the ratio between cache and memory I/O.
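As a rough, order-of-magnitude illustration (ballpark figures, not measurements from any particular machine): if a cache hit costs about 1 ns, a main-memory access about 100 ns, and a disk access about 10 ms, then memory is roughly 100 times slower than cache, while disk is roughly 100,000 times slower than memory. Avoiding a trip to disk buys far more than avoiding a cache miss.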
Maybe not much of a data point, but of the servers with resource problems I've seen, more of them benefited from getting more memory than from more or faster CPUs. Most computers have more than enough CPU cycles - but usually they can use more main memory.
Abigail
Re: Re: Re: Out-Of-Date Optimizations? New Idioms? RAM vs. CPU
by tilly (Archbishop) on Jul 21, 2003 at 16:02 UTC
A better way to improve usage of cache without going through a lot of careful tuning is to keep actively accessed data together, and avoid touching lots of memory randomly.
My understanding (from my view somewhere in the bleachers) is that Parrot's garbage collection will provide both benefits.
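To illustrate the "keep actively accessed data together" point with a hypothetical Perl sketch (nothing to do with Parrot's internals): packing related records back-to-back into one string keeps them contiguous, whereas storing each record in its own small hashref scatters them around the heap.

    use strict;
    use warnings;

    # Scattered layout: every record is a separate little hashref on the heap.
    my @scattered = map { { id => $_, value => $_ * 2 } } 0 .. 9_999;

    # Contiguous layout: the same records packed back-to-back into one string.
    my $packed = '';
    $packed .= pack 'NN', $_, $_ * 2 for 0 .. 9_999;

    # A sequential walk over the packed buffer touches memory in order.
    my $total = 0;
    for my $i (0 .. 9_999) {
        my (undef, $value) = unpack 'NN', substr($packed, $i * 8, 8);
        $total += $value;
    }

    # Same sum from the scattered layout, for comparison.
    my $total_scattered = 0;
    $total_scattered += $_->{value} for @scattered;

    print "$total $total_scattered\n";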
Incidentally, correcting a point you made in your original post: the importance of Parrot having lots of registers is not to make efficient use of the cache; it is to avoid spending half of the time on stack operations (an estimate quoted from my memory of elian's statement about what the JVM and .NET do). In a register-poor environment like x86, you come out even. In a register-rich environment you win big. (Yes, I know that x86 has lots of registers - but most are not visible to the programmer, and the CPU doesn't always figure out how to use them well on the fly.)
Before someone pipes up and says that we should focus on x86: Parrot is hoping to survive well into the time when 32-bit computing is replaced by 64-bit for mass consumers. Both Intel and AMD have come out with 64-bit chips that make far more registers available to the programmer than x86 does. That strongly suggests that the future of consumer computing will have lots of registers available. (Not a guarantee, though; the way I read the tea leaves is that Intel is hoping that addressing hacks like PAE will allow 32-bit computing to continue to dominate consumer desktops through the end of the decade. AMD wants us to switch earlier. I will be very interested to see which way game developers jump when their games start needing more than 2GB of RAM.)

And a good way to ruin cache hits is to use a garbage-collecting language.

In reply to Aristotle: The theory (and I haven't profiled this myself, just passing on received wisdom) is that when the GC goes off to clear out old memory, it has to read it into the cache to do so. If the memory were released as soon as it was finished with, the page could just be discarded as necessary. Of course, the effect on the processor cache is just one factor, and it may be that good GC systems can make up for it in other ways, but I don't like them anyway. I much prefer deterministic release of resources. I first heard of this theory from comments by Linus Torvalds, if you'll excuse the name dropping, and it seems to make sense to me. Of course, it may be that the pages visited by the GC are pages that are going to be needed real soon. A good reminder that the first rule of optimisation is: don't, and the second is: do some profiling first.
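For what it's worth, a small Perl sketch of the deterministic release being preferred here (a hypothetical example; Perl's reference counting frees an object as soon as the last reference to it goes away, rather than waiting for a later GC sweep):

    use strict;
    use warnings;

    package Resource;
    sub new     { my ($class, $name) = @_; return bless { name => $name }, $class }
    sub DESTROY { my $self = shift; print "released $self->{name}\n" }

    package main;
    {
        my $r = Resource->new('buffer');
        print "using $r->{name}\n";
    }    # last reference dropped here, so "released buffer" prints immediately
    print "after the block\n";
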
Any backup for your claim? All arguments I’ve heard so far indicate the opposite – which I’d be inclined to believe, unless you’re allocating and releasing memory in a tight loop. (But that would cause thrashing regardless of a garbage collector anyway…) So what would support a claim to the opposite?
Makeshifts last the longest.