Well OK, the frequency has stagnated, but the architecture continues to improve, continuously bringing more raw computing power to the table, with eight-core Intel i7 processors about to hit the market and six-core AMD processors already available.
Obviously it's best to optimise for both CPU and memory usage, but the point was: if you had to choose between them, which is more important?
Don't forget, the more memory you use, the more data has to be shoved through the FSB (do they still call it that?), and the more likely it is that the program will have to call upon data that is not in its L1, L2, and L3 caches.
Processors these days have several megabytes (up to about 12 MB) of onboard cache; if your program and all its data can fit entirely within that amount, it will perform significantly better than if the processor is forced to keep shuttling chunks of data to and from main memory.
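To put a rough number on that, here's a little standalone C sketch (mine, not anything from this thread; the 64 MiB buffer and 4 KiB stride are just illustrative choices) that reads the same amount of data twice: once sequentially, where the caches and prefetcher help, and once with a page-sized stride that makes nearly every access a cache miss:

    /* Sketch only: compare cache-friendly vs. cache-hostile traversal.
     * Compile with: cc -O2 stride.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (64 * 1024 * 1024)   /* 64 MiB: far larger than any on-die cache */

    int main(void)
    {
        char *buf = malloc(N);
        if (!buf) return 1;

        /* Touch every byte once so the pages are actually mapped. */
        for (size_t i = 0; i < N; i++) buf[i] = 1;

        /* Sequential pass: the caches and hardware prefetcher do their job. */
        volatile long sum = 0;
        clock_t t0 = clock();
        for (size_t i = 0; i < N; i++) sum += buf[i];
        clock_t t1 = clock();

        /* Strided pass: same number of accesses, but each one jumps 4 KiB,
         * so almost every read has to come from main memory. */
        clock_t t2 = clock();
        for (size_t s = 0; s < 4096; s++)
            for (size_t i = s; i < N; i += 4096) sum += buf[i];
        clock_t t3 = clock();

        printf("sequential: %.2fs  strided: %.2fs\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t3 - t2) / CLOCKS_PER_SEC);
        free(buf);
        return 0;
    }

On most machines the strided pass comes out several times slower, even though both loops perform exactly the same number of additions over exactly the same bytes.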
When your server is dealing with dozens of simultaneous requests, how tightly your code uses memory is going to have a very significant impact on the throughput of responses.
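As one concrete (if contrived) illustration of "tight", consider how a per-request record is laid out in C. The struct below is hypothetical, but the padding arithmetic is real: reordering the same fields from largest to smallest removes compiler-inserted padding, so more requests fit per cache line and per megabyte.

    /* Sketch only: field ordering vs. struct padding on a typical 64-bit ABI. */
    #include <stdio.h>
    #include <stdint.h>

    struct loose {            /* fields in an arbitrary order */
        uint8_t  flags;       /* 1 byte, then 7 bytes of padding */
        uint64_t session_id;  /* 8 bytes */
        uint16_t port;        /* 2 bytes, then 6 bytes of padding */
        uint64_t timestamp;   /* 8 bytes */
    };                        /* typically 32 bytes */

    struct tight {            /* same fields, largest first */
        uint64_t session_id;
        uint64_t timestamp;
        uint16_t port;
        uint8_t  flags;       /* only 5 bytes of tail padding remain */
    };                        /* typically 24 bytes */

    int main(void)
    {
        printf("loose: %zu bytes, tight: %zu bytes\n",
               sizeof(struct loose), sizeof(struct tight));
        return 0;
    }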
Back in the days of the 486, the FSB ran at between 16 and 50 MHz, with a multiplier of 2x, meaning the processor operated at twice the frequency of the FSB.
These days the FSB operates at around 1333 MHz (faster on some of the very latest boards), with the processor running at around 3-4 GHz, a multiplier of around 3x. However, that FSB bandwidth is also shared between multiple cores, 6 or even 8 on the latest chips, meaning that you have a total of around 24 core clock cycles (8 cores at a 3x multiplier each) competing for every single cycle of FSB bandwidth available.
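If you want to sanity-check that arithmetic, here's the back-of-envelope version (the clock figures are just the assumptions above, not measurements from any real board):

    /* Sketch only: core cycles competing for each FSB cycle. */
    #include <stdio.h>

    int main(void)
    {
        double fsb_ghz  = 1.333;  /* assumed FSB clock */
        double core_ghz = 4.0;    /* assumed per-core clock, ~3x the FSB */
        int    cores    = 8;

        /* Total core cycles elapsing, across all cores, per FSB cycle. */
        double ratio = cores * core_ghz / fsb_ghz;
        printf("~%.0f core cycles per FSB cycle\n", ratio);  /* ~24 */
        return 0;
    }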
In such a situation, the way to get maximum throughput is to make sure, as far as possible, that your program and the data it needs fit entirely in the L1 cache within the individual processor core, with only external data, like database lookups, being transported over the FSB.
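One standard way to act on that advice, when you have to make several passes over a large buffer, is cache blocking: run all the passes over one cache-sized chunk before moving on, so the chunk stays resident instead of streaming through the FSB once per pass. A minimal sketch, with CHUNK chosen as an assumption about a 32 KiB L1 data cache and the two passes standing in for real per-request work:

    /* Sketch only: naive full sweeps vs. L1-sized blocked processing. */
    #include <stdio.h>
    #include <stddef.h>
    #include <string.h>

    #define CHUNK (16 * 1024)   /* comfortable fraction of a 32 KiB L1d */

    /* Hypothetical per-pass transforms, standing in for real work. */
    static void pass_one(unsigned char *p, size_t n) { for (size_t i = 0; i < n; i++) p[i] += 1; }
    static void pass_two(unsigned char *p, size_t n) { for (size_t i = 0; i < n; i++) p[i] *= 2; }

    /* Naive: two full sweeps, each dragging the whole buffer in from memory. */
    void process_naive(unsigned char *buf, size_t len)
    {
        pass_one(buf, len);
        pass_two(buf, len);
    }

    /* Blocked: both passes run over one cache-sized chunk before moving on. */
    void process_blocked(unsigned char *buf, size_t len)
    {
        for (size_t off = 0; off < len; off += CHUNK) {
            size_t n = (len - off < CHUNK) ? len - off : CHUNK;
            pass_one(buf + off, n);
            pass_two(buf + off, n);
        }
    }

    int main(void)
    {
        static unsigned char buf[100 * 1024];   /* several chunks' worth */
        memset(buf, 3, sizeof buf);
        process_blocked(buf, sizeof buf);
        printf("first byte after both passes: %u\n", buf[0]);  /* (3+1)*2 = 8 */
        return 0;
    }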
So I ask again: is it more important to optimise for memory usage or processor usage? I assure you... the question has hidden depth.