in reply to Re: about Coolness, Impatience and Complexity
in thread about Coolness, Impatience and Complexity

Now, have you ever seen a Turing Machine? Not only are they impossible to build; even if they did exist, they would be quite impractical for doing real computation.
Bosh. The only reason we can't make a perfect Turing machine is that we can't actually give it an infinite length of tape. We have a well developed theory that tells us what computations can and can't be done by a Turing machine with a finite length of tape. In many cases, it's easier to talk about one with an infinite length, so we do that instead, but at least we know what we're getting ourselves into.

Sorry, but I don't agree with Dijkstra. It's true that astronomy is not about telescopes. Astronomy is about stars and nebulae and planets and things, and telescopes are just the tools that let us study them. But computer science is about computing, and things that compute are computers, by definition. So computer science is all about computers - not about PCs and Macs and Sparc workstations in particular, but about computers in general.

And what is computing? Pushing information around. The flow of information is governed by information theory, which tells us that information is very nearly a physical thing. We usually deal with information in the form of bits, but it doesn't have to be stored that way. A given quantity of information is going to take up a finite minimum amount of time and space, no matter what you do with it. Getting as close as possible to the minimum is an engineering problem that we'll be working on forever. But it's important to keep in mind that we're dealing with stuff that's not infinitely compressible. Information is solid.
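To make that lower bound concrete, here is a small Perl sketch (mine, not part of the original argument) that computes the Shannon entropy of a string - the minimum number of bits per character that any code assigning a fixed codeword to each character can achieve, however clever the compressor.

    use strict;
    use warnings;

    # Shannon entropy of a string: a lower bound, in bits per character,
    # on any encoding that assigns a fixed codeword to each character.
    sub entropy {
        my ($text) = @_;
        my %count;
        $count{$_}++ for split //, $text;
        my $total = length $text;

        my $bits = 0;
        for my $c (values %count) {
            my $p = $c / $total;
            $bits -= $p * log($p) / log(2);    # -p * log2(p)
        }
        return $bits;
    }

    my $msg = "Information is solid.";
    printf "%.3f bits/char minimum, about %.0f bits total\n",
        entropy($msg), entropy($msg) * length $msg;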

Of course, in a typical implementation numbers are limited to 32, 64, 128 or some other number of bits, which is a constant disappearing in the big-Oh anyway.
In many cases, you can make the number of bits disappear into the big O, because the numbers don't need to be all that large. However, the reason this conversation started is that I was talking about implementing arbitrary precision arithmetic, which is not one of those cases. If I'm planning on using a RAID array to hold the bits of a number, I'd better not count on the number of bits disappearing into a constant factor. Maybe I want to multiply two 10-terabyte numbers together for some reason. For you to claim that you have a geometrical computing model that ignores the limits of information theory and can do it with the same amount of effort as 2*2 is, quite frankly, silly.
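To put rough numbers on that, here is a small Perl sketch (my illustration, using the standard Math::BigInt and Time::HiRes modules); the exact timings will vary by machine, but the point is the trend: more digits, more work.

    use strict;
    use warnings;
    use Math::BigInt;
    use Time::HiRes qw(gettimeofday tv_interval);

    # Multiply two n-digit numbers and report how long it took. The cost
    # grows with the number of digits, so once the operands are allowed
    # to grow, the bit count does not vanish into a constant factor.
    sub time_multiply {
        my ($digits) = @_;
        my $x  = Math::BigInt->new('9' x $digits);
        my $y  = Math::BigInt->new('8' x $digits);
        my $t0 = [gettimeofday];
        $x->copy->bmul($y);
        return tv_interval($t0);
    }

    printf "%6d digits: %.4f seconds\n", $_, time_multiply($_)
        for 100, 1_000, 10_000;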

I don't think there's as much distinction as you think between "computer science" and "computing science". Before, I said, "why talk about computers that can't possibly be built?" Now, I say, "why talk about computations that can't possibly be performed?"


Re: about Coolness, Impatience and Complexity
by Abigail (Deacon) on Jul 14, 2001 at 03:54 UTC
    We have a well developed theory that tells us what computations can and can't be done by a Turing machine with a finite length of tape.
    Please stick to the subject. The subject isn't computability, a domain Turing Machines frequent, but the run-time complexity of algorithms, which you claimed was measured by how fast they can be done on Turing Machines - but then you later claimed that any device that couldn't be built was irrelevant. But yes, we could build a machine with a finite tape. Now, how many of them do you have lying around?

    But computer science is about computing, and things that compute are computers, by definition.
    Not everything that is about computing needs to be done by a "thing"! Are you dismissing the majority of physics as well because the models it uses don't exist in the real world? It's Computer SCIENCE, not electronics.
    And what is computing? Pushing information around. The flow of information is governed by information theory, which tells us that information is very nearly a physical thing. We usually deal with information in the form of bits, but it doesn't have to be stored that way. A given quantity of information is going to take up a finite minimum amount of time and space, no matter what you do with it. Getting as close as possible to the minimum is an engineering problem that we'll be working on forever. But it's important to keep in mind that we're dealing with stuff that's not infinitely compressible. Information is solid.
    So what? Pushing bits around is indeed an engineering problem. Just like getting measurements of stars, nebulae and planets is an engineering problem, and has nothing to do with astronomy. Computer Science isn't about how to move bits around - Computer Science is about where to move "the bits" to. And note that I'm using quotes here, because Computer Science doesn't necessarily deal with physical bits. What your basic units are depends entirely on your computational model.

    I don't think there's as much distinction as you think between "computer science" and "computing science".
    I'm sorry, but did I say there was any difference? I don't think so. All I said was that some people think it's a better description of the same thing.

    Before, I said, "why talk about computers that can't possibly be built?" Now, I say, "why talk about computations that can't possibly be performed?"
    Sir, you have never been a scientist, you are not a scientist, and you never will be a scientist. You just don't get it. If people had always had that attitude, we would still be living in the Stone Age.

    -- Abigail

      I have only a few things left to say, and then you may have the last word.
      • My mention of Turing machines was ill-considered, as I have already admitted
      • Turing machines are interesting because:
        • They are powerful enough to be widely applicable
        • They are simple and easy to think about
        • They are not very much different from a machine that could actually be constructed
      • Computability and complexity of algorithms are inextricably connected
      • Any reasonably competent programmer can code a finite-tape Turing machine in a matter of minutes (a quick sketch follows this list)
      • Anticipating your probable response to the above, imagine that I hand you a black box which I claim contains a Turing machine. Without opening the box, what kind of "Turing test" would you put it through to determine whether it contains a "real" Turing machine or a simulation of one using a general-purpose computer?
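      To back up the point about coding one in minutes, here is a minimal Perl sketch of a finite-tape machine; the two-state "flip every bit" program it runs is just an invented example to exercise it.

      use strict;
      use warnings;

      # A finite-tape Turing machine simulator. The transition table below
      # is a made-up two-state program that flips every bit and then halts.
      my %rules = (
          # "state,symbol" => [ symbol to write, head movement, next state ]
          'flip,0' => ['1', +1, 'flip'],
          'flip,1' => ['0', +1, 'flip'],
          'flip,_' => ['_',  0, 'halt'],
      );

      my @tape = split //, '1011001_';        # '_' is the blank symbol
      my ($state, $pos) = ('flip', 0);

      while ($state ne 'halt') {
          my $rule = $rules{"$state,$tape[$pos]"}
              or die "no rule for state '$state', symbol '$tape[$pos]'\n";
          ($tape[$pos], my $move, $state) = @$rule;
          $pos += $move;
          die "ran off the (finite) tape\n" if $pos < 0 or $pos > $#tape;
      }

      print join('', @tape), "\n";            # prints 0100110_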
I *definitely* agree with Dijkstra
by tilly (Archbishop) on Jul 14, 2001 at 05:40 UTC
    What is CS about?

    I agree with Abigail: CS is not about computers. CS is about how we work together to organize ideas and concepts to be able to specify the operation of something incredibly complex. It is not about the incidental details of what is actually going to happen. It is about the principles and details that humans need to understand to get it to happen.

    And so from the point of view of the programmer it is fundamentally irrelevant whether the underlying computation is being carried out by a Turing machine, a RISC processor, an x86 chip, or a Lisp machine. For a human to get things to happen it is essential that the human stop thinking at a low level and start creating useful abstractions. And understanding and manipulating those abstractions means forgetting about the operations that are being performed and thinking about the process of humans trying to comprehend and direct the process.

    You don't believe me? Well what is Dijkstra best known for? Go To Considered Harmful. Read it. It is a famous paper. When the ACM decided to put some famous papers that had appeared in their magazines online, it was the second one they chose. It started a famous debate, and even people who have not read and understood the paper know that goto is something you are supposed to avoid.

    You haven't read it yet? Please do. Or else what I am about to say will make no sense.

    OK, you just read one of the most influential papers ever written in CS. And not just influential in the abstract. It completely revolutionized the practice of programming. If there is an essence to the study of CS, that paper of all papers should reflect it.

    What did it have to do with computers?

      Indeed, the functionality of the device doing the underlying operations is the work of the mechanic, or in our case, the EE. How often have abstractions in CS led to practical discoveries? The Turing Machine was a theoretical construct, but that construct is actually what led to the development of the computers that we use today. Its simple language led to the earliest machine and assembly languages.

      And the higher level languages were results of abstraction from assembly language. Object-oriented programming is a perfect example of the practical fruits of thinking about the way that tasks are performed on an abstract level (avoiding the obvious pun on abstraction).
      I see that I have once again completely failed to communicate. I agree with you. Really. Computer science is not about bioses and endianness and all the other technical minutiae of computers. Abstraction is good.

      When we create an abstraction, though, we don't always know whether we are making something useful or (to quote the Cryptonomicon) wanking ourselves. Just because you abstracted something doesn't bring it into existence. I think that there are fundamental limits to what a computer of any type can do, which are set by information theory and physics. Those limits define the compass of computer science. If we sit around and postulate new kinds of computers without addressing the limits (maybe by circumventing them in some way), I think we're dangerously close to wanking.

      Also, I think not using goto is more an issue of engineering discipline than computer science.

      That's all. I will now do my best to shut up and not add any more incoherent rantings to this thread.

        If CS has abstraction at its heart, at what point in your abstractions has the underlying concept of the machine been abstracted away into irrelevance? That is what Dijkstra is talking about. The computer is necessary in practice for supporting the base of the abstractions, but what you use a computer to see and think about has nothing fundamentally to do with computers.

        Now, further to that, there is truth in saying that discussion of calculations we cannot perform is not particularly useful. However, beware of saying that thinking about such calculations is useless. The fact is that often, by thinking about calculations we cannot perform and then trying to understand them, we can find ways to speed up key parts and come up with practical replacements. Without thinking about the implausible calculation problems, we would be unlikely ever to arrive at a practical solution.

        An interesting example of this that I once saw was an old book on signal processing someone was throwing out. I don't remember what the book was, but I looked at it briefly. It looked like a fairly well organized book, but virtually everything in it was only a historical curiosity. You see, the book dated from the late 50's. Virtually every algorithm in it existed as a workaround for the fact that while everyone knew that the Fourier Transform was the ideal way to solve the problems at hand, it was not practically computable. Just a few years later an old result of Gauss's would be rediscovered, and the FFT made the Fourier Transform usable in real-world signal processing on real-world computers. And all of the theoretical work on Fourier Transforms was suddenly applied, and all of the applied work on signal processing was out of date.

        However, there is a far more practical example. In 1940, G. H. Hardy wrote in A Mathematician's Apology that

        I have never done anything 'useful.' No discovery of mine has made, or is likely to make, directly or indirectly, for good or ill, the least difference to the amenity of the world. . . Judged by all practical standards, the value of my mathematical life is nil . . .
        What was his reasoning for this? Quite simply that his life was spent in pure number theory. He was working with numbers so big that nobody could do anything useful with them. In fact, we look at the kinds of things that he was talking about and we see no prospect of practically doing the things he thought about. The computations which mechanical verification of his proofs would require are, and will likely forever remain, well out of the reach of computers. Things like trying out every possible factorization of a hundred-digit number to verify that it is prime are not going to happen. Of what use, then, would studying the statistical distribution of large primes be?

        Yet with RSA public-key encryption and the existence of algorithms like Rabin Miller, we can definitively say that Hardy was wrong. No matter how useless he thought his work was - and he had good reason for calling it useless - it was not. (ObRandomNote: While Hardy did a lot of good mathematical work in his own right, he is most remembered for reading and responding to a letter by a poor Indian clerk named Ramanujan.)
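        For the curious, here is a bare-bones Perl sketch of the Rabin-Miller test using the standard Math::BigInt module - a simplified illustration (it assumes a large odd n and picks small random bases), not production crypto - showing that the primality of a big number can be checked without attempting a single factorization.

        use strict;
        use warnings;
        use Math::BigInt;

        # Rabin-Miller probabilistic primality test (simplified sketch).
        # Assumes $n is odd and much larger than the random bases below.
        sub is_probable_prime {
            my ($n, $rounds) = @_;
            $n = Math::BigInt->new($n);
            my $n_minus_1 = $n->copy->bdec;

            # Write n - 1 as d * 2^s with d odd.
            my $d = $n_minus_1->copy;
            my $s = 0;
            while ($d->is_even) {
                $d->brsft(1);
                $s++;
            }

            ROUND: for (1 .. $rounds) {
                my $a = Math::BigInt->new(2 + int rand 1_000_000);
                my $x = $a->bmodpow($d, $n);
                next ROUND if $x == 1 or $x == $n_minus_1;
                for (1 .. $s - 1) {
                    $x->bmodpow(2, $n);
                    next ROUND if $x == $n_minus_1;
                }
                return 0;    # found a witness: definitely composite
            }
            return 1;        # probably prime (error probability < 4**-$rounds)
        }

        # 2**89 - 1 is a Mersenne prime; no trial factorization needed to see it.
        my $p = Math::BigInt->new(2)->bpow(89)->bdec;
        print is_probable_prime($p, 20) ? "probably prime\n" : "composite\n";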

        So be very, very careful about what you label as useless, no matter how good your reasons are for labelling it so.