And where did you see me suggest that "algorithmic analysis is mostly a waste of time"?
Maybe if I add my emphasis on my own words ...
Most real-world code is dominated by interactions with external events: user inputs, shared devices and databases, chaotic networks, and the omnipresent contention for CPU and other resources. Whilst we all benefit from highly tuned sorts and tree-traversal algorithms when we need them, with our tasks in reality spending 50%, 70% or even 90% of their time task-swapped or waiting on IO, the benefits derived from that tuning are usually much smaller than those ascribed to it by intensive studies performed under idealised conditions.
... you'll see that all I said was that results produced under carefully controlled research conditions are very hard to realise in practice.
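To put a rough number on that point, Amdahl's law caps the overall gain when only the compute fraction of a task benefits from an algorithmic improvement. A minimal sketch (the wait fractions are the ones quoted above; the 10x algorithmic speedup is an assumed illustrative figure):

```python
# Illustrative only: Amdahl's law applied to a task that spends
# most of its wall-clock time task-swapped or blocked on IO.

def max_speedup(wait_fraction, algo_speedup):
    """Overall speedup when only the compute part (1 - wait_fraction)
    is accelerated by a factor of algo_speedup."""
    compute = 1.0 - wait_fraction
    return 1.0 / (wait_fraction + compute / algo_speedup)

for wait in (0.5, 0.7, 0.9):
    # Even a 10x faster algorithm barely moves the wall-clock needle.
    print(f"{wait:.0%} waiting: 10x algorithm -> "
          f"{max_speedup(wait, 10):.2f}x overall")
```

At 90% waiting, a tenfold algorithmic improvement yields barely a 10% overall gain, which is the gap between lab results and deployed reality.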
A particular bugbear in this regard is the effort I've seen expended on researching algorithms that attempt to optimise for cache coherency. In theory, and in research done on (usually highly spec'd) test setups dedicated to running the algorithms under test, it is possible to achieve dramatic efficiencies by tailoring algorithms to avoid cache misses.
However, try running that same tuned algorithm on a desktop that's also running a browser, a few editor sessions, an mp3 player and a network stack; or on a server that's also hosting a webserver, a DB server and an ftp daemon; or even on the same test setup multi-tasking two copies of the same program against different datasets, and all that fine tuning to maximise cache coherency gets thrown away every 20 or 30 milliseconds.
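A back-of-envelope calculation shows why those 20-30 ms timeslices matter. The cache size, line size and miss penalty below are assumed, typical illustrative figures, not measurements from any particular machine:

```python
# Rough sketch: after a competing task evicts the cache, how much of
# the next scheduler quantum is spent just refilling it?

CACHE_BYTES = 1 * 1024 * 1024   # assumed 1 MiB cache
LINE_BYTES = 64                  # typical cache-line size
MISS_PENALTY_NS = 100            # assumed ~100 ns main-memory access
QUANTUM_MS = 20                  # the 20 ms timeslice from the text

lines = CACHE_BYTES // LINE_BYTES
refill_ms = lines * MISS_PENALTY_NS / 1e6  # ns -> ms

print(f"Refilling {lines} cache lines costs ~{refill_ms:.1f} ms, "
      f"i.e. {refill_ms / QUANTUM_MS:.0%} of a {QUANTUM_MS} ms quantum")
```

Under these assumptions, several percent of every quantum goes to cold misses before the carefully arranged access pattern can pay off again, and the warm-cache state the algorithm was tuned for never survives a context switch.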
In reply to Re^7: Mathematics eq CompSci by BrowserUk
in thread Mathematics eq CompSci by kiat