PerlMonks  

Re^4: Mathematics eq CompSci

by Anonymous Monk
on Jun 22, 2005 at 18:40 UTC [id://469124]


in reply to Re^3: Mathematics eq CompSci
in thread Mathematics eq CompSci

Hmmm. I was thinking of people who use computers to actually, you know, compute things. People like mathematicians, scientists, engineers, etc., working on problems like circuit analysis, 3D electromagnetic field solvers, place and route algorithms, fluid dynamics, cryptography, signal processing, image/voice recognition, theorem provers, natural language processing, program analysis, structural analysis, expert systems (chess playing, credit risk analysis, ...), vehicle routing, drug chemistry, particle physics simulations, seismic modeling, weather prediction...

Replies are listed 'Best First'.
Re^5: Mathematics eq CompSci
by BrowserUk (Patriarch) on Jun 22, 2005 at 19:05 UTC

    And all of those applications, without exception, require data be input from disk or tape or keyboard or mouse or network devices, and results be output to disk, tape or screen devices.

    And for many (most?) of those applications, the only way to process the vast volumes of data involved is to spread the load across multiple processors. In order for that to happen, those processors need to talk to each other.
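    A minimal sketch of that point (Python rather than Perl, and all names and numbers are mine, not from the thread): a computation split across worker processes, where each partial result has to cross a process boundary to reach the coordinator -- the workers really do have to "talk" to each other.

    ```python
    # Spread a sum-of-squares over 4 worker processes. Each worker computes a
    # chunk; the partial results travel back over inter-process communication
    # before being combined -- even "pure computation" involves I/O here.
    from multiprocessing import Pool

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        chunks = [(i * 250_000, (i + 1) * 250_000) for i in range(4)]
        with Pool(processes=4) as pool:
            total = sum(pool.map(partial_sum, chunks))  # results cross process boundaries
        assert total == sum(i * i for i in range(1_000_000))
        print(total)
    ```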

    Heck, even on a single processor machine, the CPU has to talk to the RAM, and to the processor, RAM is just another external device driven by I/O lines. It may be concealed by virtualised memory, but there is real memory (chip devices) and real interrupts underlying that abstraction. Every program uses I/O in some form.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
    "Science is about questioning the status quo. Questioning authority".
    The "good enough" may be good enough for the now, and perfection may be unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.
      Most real-world code is dominated by interactions with external events. User inputs, shared devices and databases, chaotic networks and the omnipresent contention for CPU and other resources. Whilst we all benefit from highly tuned sorts and tree-traversal algorithms when we need them, the benefits derived from their tuning, in the reality of our tasks spending 50%, 70% or even 90% of their time task-swapped or waiting on IO, are usually much less than those ascribed to them via intensive studies performed under idealised conditions.
      And exactly how again does this support the notion that algorithmic analysis is mostly a waste of time? Maybe you're making the conjecture that P==EXP, since all problems are dominated by I/O? Please do the world a favor and share with us how you solve the traveling salesman problem in linear time (Just for us idealized theorists, please assume that the I/O takes a vanishingly small amount of time).
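      To make the theorists' point concrete, here is a toy sketch (Python, everything in it is my own illustration, not from the thread): brute-force TSP examines every permutation of the remaining cities, so its running time grows factorially with the number of cities -- no amount of fast I/O rescues that.

      ```python
      # Brute-force traveling salesman: shortest closed tour through all cities.
      # The work is dominated by the (n-1)! permutations examined, not by I/O.
      from itertools import permutations
      from math import dist, factorial

      def tsp_bruteforce(cities):
          """Return the length of the shortest tour starting/ending at cities[0]."""
          start, rest = cities[0], cities[1:]
          best = float("inf")
          for order in permutations(rest):
              tour = (start,) + order + (start,)
              best = min(best, sum(dist(a, b) for a, b in zip(tour, tour[1:])))
          return best

      square = [(0, 0), (0, 1), (1, 1), (1, 0)]
      assert abs(tsp_bruteforce(square) - 4.0) < 1e-9  # perimeter of the unit square
      # Adding one city multiplies the permutation count: 9! = 362880, 10! = 3628800.
      print(factorial(9), factorial(10))
      ```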

        And where did you see me suggest that "algorithmic analysis is mostly a waste of time"?

        Maybe if I add my emphasis on my own words ...

        Most real-world code is dominated by interactions with external events. User inputs, shared devices and databases, chaotic networks and the omnipresent contention for CPU and other resources. Whilst we all benefit from highly tuned sorts and tree-traversal algorithms when we need them, the benefits derived from their tuning, in the reality of our tasks spending 50%, 70% or even 90% of their time task-swapped or waiting on IO, are usually much less than those ascribed to them via intensive studies performed under idealised conditions.

        ... you'll see that all I said was, that results produced under carefully controlled research conditions are very hard to realise in practice.

        A particular bugbear in this regard is the effort I've seen expended on researching algorithms that attempt to optimise for cache coherency. In theory, and in research done on (usually highly spec'ed) test setups dedicated to running the tested algorithms, it is possible to achieve dramatic efficiencies by tailoring algorithms to avoid cache misses.

        However, try running that same tuned algorithm on a desktop that's also running a browser and a few editor sessions or an mp3 player and a network stack; or a server with a webserver or a DB server and an ftp daemon; or even the same test setup multi-tasking two copies of the same program running on different datasets, and all that fine tuning to maximise cache coherency gets thrown away every 20 or 30 milliseconds.
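        The shape of the argument can be sketched like this (Python, purely illustrative -- a compiled language would show the cache effect far more cleanly, and the actual timings depend entirely on the machine and whatever else it is running, which is exactly the point): two traversals of the same array that compute the same answer, one cache-friendly and sequential, one jumping by a large stride.

        ```python
        # Sequential vs. large-stride traversal of the same data. Both produce
        # the identical sum; only the memory-access pattern differs, and the
        # measured gap between them varies with hardware, cache sizes and
        # competing workloads -- it is not a stable property of the algorithm.
        import time

        N = 1 << 20          # ~1M elements (size chosen arbitrarily)
        STRIDE = 4096        # arbitrary large stride to defeat cache lines
        data = list(range(N))

        def sequential_sum(xs):
            return sum(xs)

        def strided_sum(xs, step):
            total = 0
            for offset in range(step):          # visit every index exactly once,
                for i in range(offset, len(xs), step):  # but in stride-sized hops
                    total += xs[i]
            return total

        t0 = time.perf_counter(); s1 = sequential_sum(data)
        t1 = time.perf_counter(); s2 = strided_sum(data, STRIDE)
        t2 = time.perf_counter()
        assert s1 == s2  # same answer either way
        print(f"sequential {t1 - t0:.4f}s, strided {t2 - t1:.4f}s (varies by machine/load)")
        ```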


