in reply to Re: about Coolness, Impatience and Complexity
in thread about Coolness, Impatience and Complexity

Note that all big-Oh run-time complexities are measured in numbers of elementary operations.
If you really want to get down to brass tacks, a computer scientist is going to define "elementary operations" in terms of a Turing machine (or something equivalent). Since a Turing machine has a finite number of states and symbols available, it can only push a finite number of bits around at once. Multiplying two n-bit numbers together is an O(n**2) operation using the algorithm they taught you in grade school; with FFT-based techniques it can be done in close to O(n*log(n)).
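For concreteness, here's a minimal sketch of the grade-school algorithm (illustrative Perl, not from the original post): multiplying two numbers held as strings of decimal digits takes one single-digit multiply-and-add per pair of digits, which is where the O(n**2) count of elementary operations comes from.

    # Grade-school multiplication of two non-negative decimal strings.
    # The nested loops do one digit-by-digit multiply-and-add per pair of
    # digits, i.e. O(n**2) elementary operations for n-digit inputs.
    sub school_multiply {
        my ($x, $y) = @_;
        my @x = reverse split //, $x;          # least significant digit first
        my @y = reverse split //, $y;
        my @prod = (0) x (@x + @y);

        for my $i (0 .. $#x) {
            my $carry = 0;
            for my $j (0 .. $#y) {
                my $t = $prod[$i + $j] + $x[$i] * $y[$j] + $carry;
                $prod[$i + $j] = $t % 10;
                $carry         = int($t / 10);
            }
            $prod[$i + @y] += $carry;
        }

        my $result = join '', reverse @prod;
        $result =~ s/^0+(?=\d)//;              # strip leading zeros
        return $result;
    }

    print school_multiply('12345', '6789'), "\n";   # prints 83810205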

Re: about Coolness, Impatience and Complexity
by Abigail (Deacon) on Jul 12, 2001 at 22:57 UTC
    Having been a computer scientist, I strongly disagree with the statement that a Turing machine should be involved. Not at all! We count elementary operations in a certain computational model. That could be a Turing machine, but it could also be a pointer machine, a Von Neumann architecture, or something more abstract, such as arithmetic or geometry. Complexity analysis for calculations on a modern computer certainly doesn't assume a Turing machine as the underlying model. You wouldn't even be able to do binary search in O (log N) time on a Turing machine: indexing into an array cannot be done in constant time there - it's going to be at least linear in the amount of tape used.
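    To make the indexing assumption concrete (an illustrative sketch, not part of the original post): the familiar O (log N) bound for binary search counts each comparison and each array access as one elementary operation, which is fine in a RAM or pointer model but not on a Turing machine, where reaching the middle element means walking the tape.

        # Classic binary search: O(log N) *assuming* $array->[$mid] is a
        # constant-time operation, as it is in a RAM/pointer model. On a
        # Turing machine the head has to walk to that cell, so a single
        # access is no longer O(1) and the bound breaks down.
        sub binary_search {
            my ($array, $target) = @_;
            my ($lo, $hi) = (0, $#$array);
            while ($lo <= $hi) {
                my $mid = int(($lo + $hi) / 2);
                return $mid if $array->[$mid] == $target;   # constant-time index
                if   ($array->[$mid] < $target) { $lo = $mid + 1 }
                else                            { $hi = $mid - 1 }
            }
            return -1;   # not found
        }

        my @sorted = (2, 3, 5, 7, 11, 13, 17);
        print binary_search(\@sorted, 11), "\n";   # prints 4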

    -- Abigail

      Good point. Turing machines can do any polynomial-time computation in polynomial time, but they might take a higher-order polynomial amount of time than some other type of machine would. I was trying to compose a snappy reply, and got sloppy.

      Still, when we talk about "polynomial time", what we really mean is "polynomial in the length of the problem, using a reasonable encoding scheme", not "polynomial in the number of entities in the input set", though we sometimes ignore the distinction. If the point and the line in your geometrical computation are specified with 512-bit numbers, that's a larger problem than if they were specified with only 16-bit numbers. You can't reduce that - it's information theory.
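      As a concrete illustration (my sketch, not part of the original exchange), "length of the problem" means the total number of bits in a reasonable encoding, not the count of entities: three 512-bit coordinates are a much longer input than three 16-bit ones, even though both are "three numbers".

          use Math::BigInt;

          # Problem size = total bits needed to write the input down,
          # not the number of entities in the input set.
          sub input_bits {
              my @numbers = @_;
              my $bits = 0;
              for my $n (@numbers) {
                  my $bin = Math::BigInt->new($n)->as_bin();   # e.g. "0b1010"
                  $bin =~ s/^0b//;
                  $bits += length $bin;
              }
              return $bits;
          }

          my @small = map { Math::BigInt->new(2)->bpow(15)->bstr  } 1 .. 3;  # 16-bit values
          my @big   = map { Math::BigInt->new(2)->bpow(511)->bstr } 1 .. 3;  # 512-bit values
          print input_bits(@small), " vs ", input_bits(@big), "\n";          # 48 vs 1536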

      Any physical machine you build is going to have a finite number of bits of processing power available to it. We can't make computers out of infinitesimally thin Euclidean line segments. A quantum computer might be able to consider a virtually unlimited number of possibilities at the same time, but it has a limited number of qubits, and they have to collapse down into one answer before it can be printed on the screen.

      If we start talking about machines that are impossible to build, are we still doing computer science? I don't say this to be facetious, but just to ask what computer science really is.

        Still, when we talk about "polynomial time", what we really mean is "polynomial in the length of the problem, using a reasonable encoding scheme", not "polynomial in the number of entities in the input set", though we sometimes ignore the distinction. If the point and the line in your geometrical computation are specified with 512-bit numbers, that's a larger problem than if they were specified with only 16-bit numbers. You can't reduce that - it's information theory.
        No, what we mean is "polynomial in the size of the input". That could be bits, if your computational model asks for that. If your computational model is geometry, then your input might consist of points and lines, not bits. Bits don't play a part in geometry. They might play a part in an implementation of the model, but not in the computational model itself. Yes, this requires a level of abstraction, but that's why we call it "science". ;-)
        Any physical machine you build is going to have a finite number of bits of processing power available to it. We can't make computers out of infinitesimally thin Euclidean line segments. A quantum computer might be able to consider a virtually unlimited number of possibilities at the same time, but it has a limited number of qubits, and they have to collapse down into one answer before it can be printed on the screen.
        All true, but not really relevant. If we have an O (f (N)) geometric algorithm (I graduated in the closely related fields of data structures and computational geometry, hence bringing up geometry over and over again) and we want to implement it on a real machine, we might have to multiply or divide by a factor of log N (or something else) somewhere to get a running time expressed in the number of bits of the input. [1] But we'll have the same conversion for any other algorithm. Hence, the relationship between running time (and space) complexities in one computational model stays the same in the other, assuming you use the same implementation. Of course, in a typical implementation numbers are limited to 32, 64, 128 or some other number of bits, which is a constant that disappears in the big-Oh anyway.
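        (An illustrative example of that conversion, not from the original post: a comparison-based sort does O (N log N) comparisons in a model where comparing two numbers is one elementary operation. If each number is b bits long, a bit-level machine pays O (b) per comparison, for O (b * N log N) in total; but a rival algorithm doing O (N**2) comparisons picks up the same O (b) factor, so the two algorithms compare to each other exactly as they did in the abstract model.)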
        If we start talking about machines that are impossible to build, are we still doing computer science? I don't say this to be facetious, but just to ask what computer science really is.
        Well, it was you who brought up Turing Machines. ;-) Now, have you ever seen a Turing Machine? Not only are they impossible to build; even if they did exist, they would be quite impractical to do real computation on. But much of the theory of Computer Science is founded on Turing Machines. Take away Turing Machines, and Computer Science would no longer be the Computer Science we all know.

        Do not forget the words of one of the most famous Computer Scientists of all time, Edsger Dijkstra: computer science is as much about computers as astronomy is about telescopes. "Computer Science" is a big misnomer; the French word is much better: Informatique (I hope I spelled that right), which is described by the French academy of Science as the science of the processing, possibly by automata, of information, where information is knowledge and data. I'm sure it sounds a lot better in French. ;-) There are a few European departments that use the term "Computing Science" when translating their name into English. I prefer Computing Science over Computer Science as well.

        [1] I'm using a lot of handwaving here. An algorithm on N numbers, each consisting of M bits (and hence having total input size N*M), doesn't need to have the same running time as when it's run on an input of M numbers, each consisting of N bits (which also has size N*M).

        -- Abigail