This is one of the big myths about database optimization. Indexes are not always good, and full table scans are not always bad. I just experienced this at work: we made a 300%+ gain on our queries by forcing Oracle to do a full table scan over a 12-million-row table.
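For what it's worth, in Oracle that kind of forcing is usually done with an optimizer hint. A minimal sketch, with made-up table and column names:

    -- The FULL hint asks the optimizer to scan the table instead of using an index
    SELECT /*+ FULL(o) */ o.id, o.status
    FROM   big_table o
    WHERE  o.created_at >= DATE '2005-01-01';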
Ultimately, you have to know what your query is doing before you start optimizing it. With Postgres, start by using EXPLAIN to find out whether you're using indexes or full table scans (e.g. EXPLAIN SELECT * FROM foo WHERE baz = 'baz'). Once you understand what the query is doing, it may help to create a different, more selective index.
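A rough sketch of that workflow in Postgres (foo and baz come from the example above; the index name is made up):

    -- See whether the planner chooses an index scan or a sequential scan
    EXPLAIN SELECT * FROM foo WHERE baz = 'baz';

    -- If it's doing a sequential scan and baz is selective, try an index
    CREATE INDEX foo_baz_idx ON foo (baz);

    -- Refresh the planner's statistics, then check the plan again
    ANALYZE foo;
    EXPLAIN SELECT * FROM foo WHERE baz = 'baz';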
12 seconds to scan a 180k-row table sounds awfully slow, but it depends on your hardware and I/O subsystem.
--
Ben
"Naked I came from my mother's womb, and naked I will depart."
I'm fully aware of that, which is why I said, "Depending on what your query does...". In fact, I'm not only aware of it as a theoretical point: within the last month I sped a query up from around an hour to 3 minutes by providing a hint that, among other things, convinced the optimizer to do a full table scan on a large table.
However, normally, and particularly with straightforward queries, indexes are good. The simplest case of all would be that he's querying a large table for something like the handful of rows in a particular status. If it really is only a handful of rows being returned, then adding an index would certainly be a huge win.
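For that case, an index on the status column, or (in Postgres) a partial index covering only the status you search for, lets the planner skip the full scan. A sketch with hypothetical table and column names:

    -- Plain index on the status column
    CREATE INDEX orders_status_idx ON orders (status);

    -- Postgres-only: a partial index covering just the rows usually queried for
    CREATE INDEX orders_pending_idx ON orders (status) WHERE status = 'pending';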