Re^5: Mathematics eq CompSci

by BrowserUk (Patriarch)
on Jun 22, 2005 at 19:05 UTC


in reply to Re^4: Mathematics eq CompSci
in thread Mathematics eq CompSci

And all of those applications, without exception, require that data be input from disk, tape, keyboard, mouse or network devices, and that results be output to disk, tape or screen devices.

And for many (most?) of those applications, the only way to process the vast volumes of data involved is to spread the load across multiple processors. In order for that to happen, those processors need to talk to each other.

Heck, even on a single-processor machine, the CPU has to talk to the RAM; and to the processor, RAM is just another external device driven by IO lines. It may be concealed by virtualised memory, but there are real memory chips and real interrupts underlying that abstraction. Every program uses IO in some form.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
The "good enough" maybe good enough for the now, and perfection maybe unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.

Replies are listed 'Best First'.
Re^6: Mathematics eq CompSci
by Anonymous Monk on Jun 22, 2005 at 20:06 UTC
    Most real-world code is dominated by interactions with external events: user inputs, shared devices and databases, chaotic networks and the omnipresent contention for cpu and other resources. Whilst we all benefit from highly tuned sorts and tree-traversal algorithms when we need them, the benefits derived from their tuning, in the reality of our tasks spending 50%, 70% or even 90% of their time task-swapped or waiting on IO, are usually much less than those ascribed to them via intensive studies performed under idealised conditions.
    And exactly how again does this support the notion that algorithmic analysis is mostly a waste of time? Maybe you're making the conjecture that P==EXP, since all problems are dominated by I/O? Please do the world a favor and share with us how you solve the traveling salesman problem in linear time (just for us idealized theorists, please assume that the I/O takes a vanishingly small amount of time).

      And where did you see me suggest that "algorithmic analysis is mostly a waste of time"?

      Maybe if I add my emphasis to my own words ...

      Most real-world code is dominated by interactions with external events: user inputs, shared devices and databases, chaotic networks and the omnipresent contention for cpu and other resources. Whilst we all benefit from highly tuned sorts and tree-traversal algorithms when we need them, the benefits derived from their tuning, in the reality of our tasks spending 50%, 70% or even 90% of their time task-swapped or waiting on IO, are usually much less than those ascribed to them via intensive studies performed under idealised conditions.

      ... you'll see that all I said was that results produced under carefully controlled research conditions are very hard to realise in practice.
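      To make that concrete, here is a minimal Perl sketch (my illustration, not from the thread): it compares wall-clock time against CPU time for an IO-heavy loop; the gap between the two is time the task spent blocked on IO or task-swapped rather than computing. The input path is hypothetical; point it at any large file.

          #!/usr/bin/perl
          use strict;
          use warnings;
          use Time::HiRes qw( time );               # hi-res wall clock

          my $file = '/var/log/big.log';            # hypothetical large input

          my $wall0 = time;
          my @cpu0  = ( times )[ 0, 1 ];            # user + system CPU seconds

          open my $fh, '<', $file or die "open '$file': $!";
          my $lines = 0;
          $lines++ while <$fh>;                     # IO-dominated work
          close $fh;

          my $wall = time - $wall0;
          my @cpu1 = ( times )[ 0, 1 ];
          my $cpu  = ( $cpu1[0] - $cpu0[0] ) + ( $cpu1[1] - $cpu0[1] );

          printf "%d lines; wall %.3fs; cpu %.3fs; ~%.0f%% off-cpu\n",
              $lines, $wall, $cpu,
              $wall > 0 ? 100 * ( $wall - $cpu ) / $wall : 0;

      On a shared or busy box, the off-cpu percentage for a loop like this can easily land in the 50-90% range quoted above.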

      A particular bugbear in this regard is the amount of effort I've seen expended on researching algorithms that attempt to optimise for cache coherency. In theory, and in research done on (usually highly spec'ed) test setups dedicated to running the tested algorithms, it is possible to achieve dramatic efficiencies by tailoring algorithms to avoid cache misses.

      However, try running that same tuned algorithm on a desktop that's also running a browser, a few editor sessions, an mp3 player and a network stack; or on a server with a webserver, a DB server and an ftp daemon; or even on the same test setup multi-tasking two copies of the same program on different datasets, and all that fine tuning to maximise cache coherency gets thrown away every 20 or 30 milliseconds.
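      The cache effect itself is easy to demonstrate, even from Perl, provided the data is contiguous in memory. Below is a minimal sketch (my illustration, assuming a 64-byte cache line): both loops perform the same number of reads against one buffer, but the second strides a full cache line per access, so nearly every read misses cache. On a quiet machine the strided pass runs measurably slower (interpreter overhead mutes the gap compared with C); on a loaded, task-swapping machine the difference gets noisy, which is the point above.

          #!/usr/bin/perl
          use strict;
          use warnings;
          use Time::HiRes qw( time );

          my $size   = 32 * 1024 * 1024;        # 32MB: larger than any cache
          my $buf    = "\0" x $size;            # one contiguous string
          my $stride = 64;                      # assumed cache-line size
          my $n      = int( $size / $stride );  # same access count both times

          my ( $t, $x );

          $t = time; $x = 0;
          $x += vec( $buf, $_, 8 ) for 0 .. $n - 1;            # adjacent bytes
          printf "sequential: %.3fs\n", time - $t;

          $t = time; $x = 0;
          $x += vec( $buf, $_ * $stride, 8 ) for 0 .. $n - 1;  # one cache line per read
          printf "strided:    %.3fs\n", time - $t;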


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
      "Science is about questioning the status quo. Questioning authority".
      The "good enough" maybe good enough for the now, and perfection maybe unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.
        ...and all that fine tuning to maximise cache coherency gets thrown away every 20 or 30 milliseconds.
        <slaps knee> That's a good one! 'cause in 10ms you only execute 40E6 or so instructions on middle of the road COTS hardware (more if your processor supports some sort SIMD instructions). But I see we must be talking past each other, so cheerio.
        And where did you see me suggest that "algorithmic analysis is mostly a waste of time"?
        Oh, I don't know, how about where you state...
        I think that the mathematical symbolism used in the description, exploration and characterisation of algorithms from a formal CS perspective is frequently unhelpful.
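        For reference (my arithmetic, not part of the thread), the instruction count quoted above is straightforward clock arithmetic: a circa-2005 middle-of-the-road CPU retiring on the order of 4E9 instructions/s executes 4E9 * 10E-3 = 40E6 instructions in 10ms, so a 20-30ms timeslice represents roughly 1E8 instructions of work between context switches.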
