
What is CS about?

I agree with Abigail: CS is not about computers. CS is about how we work together to organize ideas and concepts so that we can specify the operation of something incredibly complex. It is not about the incidental details of what is actually going to happen. It is about the principles and details that humans need to understand to get it to happen.

And so, from the programmer's point of view, it is fundamentally irrelevant whether the underlying computation is carried out by a Turing machine, a RISC processor, an x86 chip, or a Lisp machine. For a human to get things to happen, it is essential to stop thinking at a low level and start creating useful abstractions. And understanding and manipulating those abstractions means forgetting about the operations being performed and thinking instead about how humans comprehend and direct the process.

You don't believe me? Well, what is Dijkstra best known for? Go To Statement Considered Harmful. Read it. It is a famous paper: when the ACM decided to put some famous papers from its magazines online, it was the second one they chose. It started a famous debate, and even people who have not read and understood the paper know that goto is something you are supposed to avoid.
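
If you want a tiny, contrived illustration of the contrast (my own sketch, not an example from the paper), Perl itself has a goto LABEL, though you should almost never use it:

    use strict;
    use warnings;

    my @items = qw(a b c);
    my $i = 0;

    # The style Dijkstra argued against: legal Perl, but the reader
    # has to simulate the jumps to discover that this is just a loop.
    LOOP: goto DONE if $i >= @items;
    print "got $items[$i]\n";
    $i++;
    goto LOOP;
    DONE: print "done (goto version)\n";

    # The structured equivalent: the abstraction (a loop over a list)
    # is visible in the very shape of the code.
    print "got $_\n" for @items;
    print "done (structured version)\n";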

You haven't read it yet? Please do. Or else what I am about to say will make no sense.

OK, you just read one of the most influential papers ever written in CS. And not just influential in the abstract. It completely revolutionized the practice of programming. If there is an essence to the study of CS, that paper of all papers should reflect it.

What did it have to do with computers?

Re: I *definitely* agree with Dijkstra
by HyperZonk (Friar) on Jul 14, 2001 at 05:53 UTC
    Indeed, the functionality of the device doing the underlying operations is the work of the mechanic or, in our case, the EE. How often have abstractions in CS led to practical discoveries? The Turing Machine was a theoretical construct, but that construct is actually what led to the development of the computers we use today. Its simple language led to the earliest machine and assembly languages.
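
    To make that concrete, here is a toy sketch in Perl (mine, not any historical machine) of just how little a Turing-style machine needs: a tape, a head, and a table of (state, symbol) rules. This one walks right, flipping bits until it hits a blank:

        use strict;
        use warnings;

        # Transition table: (state, symbol) -> [symbol to write, head move, next state].
        my %rules = (
            'flip,0' => ['1', +1, 'flip'],
            'flip,1' => ['0', +1, 'flip'],
            'flip,_' => ['_',  0, 'halt'],
        );

        my @tape  = (split(//, '10110'), '_');   # '_' is the blank symbol
        my $pos   = 0;
        my $state = 'flip';

        while ($state ne 'halt') {
            my ($write, $move, $next) = @{ $rules{"$state,$tape[$pos]"} };
            $tape[$pos] = $write;
            $pos   += $move;
            $state  = $next;
        }
        print join('', @tape), "\n";   # prints 01001_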

    And the higher-level languages were the result of abstracting from assembly language. Object-oriented programming is a perfect example of the practical fruits of thinking about the way that tasks are performed on an abstract level (avoiding the obvious pun on abstraction).
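
    For instance (a deliberately trivial sketch, with a made-up Counter class), the caller below thinks in terms of the abstraction and never touches the hash-and-bless machinery underneath:

        use strict;
        use warnings;

        package Counter;
        sub new   { my ($class) = @_; return bless { count => 0 }, $class }
        sub bump  { my ($self)  = @_; $self->{count}++; return $self }
        sub count { my ($self)  = @_; return $self->{count} }

        package main;
        my $hits = Counter->new;
        $hits->bump for 1 .. 3;
        print $hits->count, "\n";   # prints 3
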
Re: I *definitely* agree with Dijkstra
by no_slogan (Deacon) on Jul 14, 2001 at 06:24 UTC
    I see that I have once again completely failed to communicate. I agree with you. Really. Computer science is not about BIOSes and endianness and all the other technical minutiae of computers. Abstraction is good.

    When we create an abstraction, though, we don't always know whether we are making something useful or (to quote Cryptonomicon) wanking ourselves. Abstracting something doesn't bring it into existence. I think there are fundamental limits to what a computer of any type can do, limits set by information theory and physics, and those limits define the compass of computer science. If we sit around and postulate new kinds of computers without addressing those limits (maybe by circumventing them in some way), I think we're dangerously close to wanking.

    Also, I think not using goto is more an issue of engineering discipline than computer science.

    That's all. I will now do my best to shut up and not add any more incoherent rantings to this thread.

      If CS has abstraction at its heart, at what point in your abstractions has the underlying concept of the machine been abstracted away into irrelevance? That is what Dijkstra is talking about. In practice the computer is necessary to support the base of the abstractions, but what you use a computer to see and think about has nothing fundamentally to do with computers.

      Further to that, there is truth in saying that discussion of calculations we cannot perform is not particularly useful. But beware of saying that thinking about such calculations is useless. Often, by thinking about calculations we cannot perform and then trying to understand them, we find ways to speed up key parts and come up with practical replacements. Without thinking about implausible calculations, we would be unlikely ever to arrive at practical solutions.

      An interesting example of this that I once saw was an old book on signal processing someone was throwing out. I don't remember what the book was, but I looked at it briefly. It seemed fairly well organized, yet virtually everything in it was only a historical curiosity. You see, the book dated from the late 1950s. Virtually every algorithm in it existed as a workaround for the fact that, while everyone knew the Fourier Transform was the ideal way to solve the problems at hand, it was not practically computable. Just a few years later an old result of Gauss's would be rediscovered, and the FFT made the Fourier Transform usable for real-world signal processing on real-world computers. Suddenly all of the theoretical work on Fourier Transforms was applied, and all of the applied work on signal processing was out of date.
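
      As a sketch of the gap the FFT closed (my own minimal Perl, using Math::Complex; the recursion is the radix-2 Cooley-Tukey scheme, for lengths that are a power of two):

          use strict;
          use warnings;
          use Math::Complex;

          my $pi = atan2(0, -1);

          # Naive discrete Fourier transform: about n^2 complex multiplications.
          # This cost is the wall the 1950s workarounds were built against.
          sub dft {
              my @x = @_;
              my $n = @x;
              my @X;
              for my $k (0 .. $n - 1) {
                  my $sum = cplx(0, 0);
                  $sum += $x[$_] * exp(cplx(0, -2 * $pi * $k * $_ / $n))
                      for 0 .. $n - 1;
                  push @X, $sum;
              }
              return @X;
          }

          # Radix-2 Cooley-Tukey FFT: about n*log(n) operations for the
          # same answer -- suddenly cheap enough for real-world signals.
          sub fft {
              my @x = @_;
              my $n = @x;
              return @x if $n == 1;
              my @even = fft(@x[grep { $_ % 2 == 0 } 0 .. $n - 1]);
              my @odd  = fft(@x[grep { $_ % 2 == 1 } 0 .. $n - 1]);
              my @X;
              for my $k (0 .. $n / 2 - 1) {
                  my $t = exp(cplx(0, -2 * $pi * $k / $n)) * $odd[$k];
                  $X[$k]          = $even[$k] + $t;
                  $X[$k + $n / 2] = $even[$k] - $t;
              }
              return @X;
          }

          print "$_\n" for fft(1, 1, 1, 1, 0, 0, 0, 0);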

      However, there is a far more practical example. In 1940 G. H. Hardy wrote in A Mathematician's Apology that

      I have never done anything 'useful.' No discovery of mine has made, or is likely to make, directly or indirectly, for good or ill, the least difference to the amenity of the world. . . Judged by all practical standards, the value of my mathematical life is nil . . .
      What was his reasoning for this? Quite simply that his life had been spent on pure number theory. He was working with numbers so big that nobody could do anything useful with them. In fact, when we look at the kinds of things he was talking about, we see no prospect of practically doing them. The computations that mechanical verification of his proofs would require are, and will likely forever remain, well out of the reach of computers. Things like trying every possible factor of a hundred-digit number to verify that it is prime are not going to happen. Of what use, then, would studying the statistical distribution of large primes be?
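
      To put numbers on that: trial division needs roughly 10^(d/2) divisions for a d-digit number, so a hundred-digit candidate means on the order of 10^50 steps. Here is a sketch of that correct but hopeless approach (plain Perl, function name mine):

          use strict;
          use warnings;

          # Trial division: provably correct, and utterly hopeless at
          # Hardy's scale -- fine for 6 digits, unthinkable for 100.
          sub is_prime_by_trial {
              my ($n) = @_;
              return 0 if $n < 2;
              for (my $d = 2; $d * $d <= $n; $d++) {
                  return 0 if $n % $d == 0;
              }
              return 1;
          }

          print is_prime_by_trial(104729) ? "prime\n" : "composite\n";   # 104729 is the 10000th prime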

      Yet with RSA public-key encryption and the existence of algorithms like Rabin-Miller, we can definitively say that Hardy was wrong. No matter how useless he thought his work was - and he had good reason for calling it useless - it was not. (ObRandomNote: while Hardy did a lot of good mathematical work in his own right, he is best remembered for reading and responding to a letter from a poor Indian clerk named Ramanujan.)
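
      To see how we got around Hardy's barrier, here is a sketch of one round of Rabin-Miller in Perl, using Math::BigInt (the witness list is illustrative; this is a probabilistic test, not a proof):

          use strict;
          use warnings;
          use Math::BigInt;

          # One Rabin-Miller round for odd $n > 3 and witness $a.
          # Returns true if $n survives this witness ("probably prime").
          sub mr_round {
              my ($n, $a) = @_;
              my $d = $n->copy->bsub(1);       # write n - 1 as d * 2^s
              my $s = 0;
              while ($d->is_even) { $d->brsft(1); $s++ }
              my $x = Math::BigInt->new($a)->bmodpow($d, $n);
              return 1 if $x->is_one or $x == $n - 1;
              for (2 .. $s) {
                  $x->bmodpow(2, $n);          # square mod n
                  return 1 if $x == $n - 1;
              }
              return 0;                        # $a proves $n composite
          }

          my $n = Math::BigInt->new('170141183460469231731687303715884105727');   # 2^127 - 1
          my $ok = 1;
          $ok &&= mr_round($n, $_) for 2, 3, 5, 7, 11, 13;
          print $ok ? "probably prime\n" : "composite\n";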

      So be very, very careful about what you label as useless, no matter how good your reasons are for labelling it so.