http://qs1969.pair.com?node_id=165622


in reply to Re: Database searches with persistent results via CGI
in thread Database searches with persistent results via CGI

1.5M records is not *that* much data. You should attack the real weakness of the problem, which is the search itself.

You probably need to create an index on the search columns, though. Yes, indexes can help with LIKE searches too if the database is sophisticated enough; PostgreSQL is an example of such an RDBMS. You may also need to tune other DB parameters to give the backend more resources to work with. If the DB is swapping out to virtual memory a lot, that makes for a HUGE performance hit. With an RDBMS the goal is to have the table(s) cached in RAM, not read from disk. Perhaps a few more sticks of memory, some configuration changes, and a couple of well-placed indexes are all you need.
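A minimal sketch of such an index (the table and column names here are made up for illustration; substitute whatever your search actually filters on). Note that a plain btree index typically only helps LIKE when the pattern is anchored at the front, e.g. 'Smith%':

    -- hypothetical table "people", searched on "lastname"
    CREATE INDEX people_lastname_idx ON people (lastname);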

You may want to look into the "OFFSET" and "LIMIT" clauses in your SQL statement. The backend still has to run the whole query, but you only get back the rows you need. If you are fetching the entire table in Perl just to output the last 25 rows, you are wasting a LOT of CPU; let the DB do that for you, because it is optimized for it. A sketch of such a query follows.
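For example, assuming the same hypothetical "people" table and 25 rows per page, the third page of results could be fetched like this (the ORDER BY matters, otherwise the pages are not guaranteed to come back in a stable order):

    SELECT lastname, firstname
    FROM people
    WHERE lastname LIKE 'Smith%'
    ORDER BY lastname
    LIMIT 25 OFFSET 50;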

If by chance you are on PostgreSQL . . . Have you done a "VACUUM ANALYZE"? It lets the query planner gather statistics on the data in your tables, which it uses to choose better query plans.
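Running it is a one-liner from psql; you can do the whole database or just the table in question (again, "people" is only an example name):

    VACUUM ANALYZE;          -- whole database
    VACUUM ANALYZE people;   -- or just one table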