Okay, here is how it works:
If you issue a completely new query, Oracle has to do what is called a hard parse: it parses the SQL string, checks permissions on the accessed objects, consults statistics about table size and data distribution as well as the existing indexes, and generates an execution plan. This is expensive; for simple queries it can take more time than the execution itself. In a web application (or any other OLTP scenario) you do not want this to happen more often than necessary.
The resulting execution plan is cached in Oracle's shared pool. If you issue the same query again, it only incurs a soft parse: Oracle finds the already existing execution plan and reuses it. This is what happens in the OP's program, and it is much better than a hard parse.
The first priority is to turn hard parses into soft parses. For that, the SQL statement text must be identical every time, so you have to use bind variables instead of interpolating literal values into the string.
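To see why bind variables matter here, compare the statement texts the two approaches produce. This is a minimal sketch that needs no database; the table and column names are just placeholders borrowed from the query discussed in this thread:

```perl
use strict;
use warnings;

# Interpolating literal values: every id yields a different SQL text,
# so Oracle treats each one as a brand-new statement and hard-parses it.
my @hard = map { "SELECT * FROM securities WHERE security_id = $_" }
           (101, 102, 103);

# Bind variable: the SQL text is identical every time, so after the first
# (hard) parse Oracle finds the cached plan again -- a soft parse at most.
my $soft = 'SELECT * FROM securities WHERE security_id = ?';

my %distinct;
$distinct{$_}++ for @hard;
printf "%d distinct statement texts with literals, 1 with a placeholder\n",
    scalar keys %distinct;
```

Three literal values produce three distinct statement texts (three hard parses); the placeholder version is one statement no matter how many values you run it with.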
Now you can take it a little further by preparing the statement only once and executing it many times. In this case you incur only one parse (hopefully a soft parse), and subsequent executions run without any parsing at all (no parse). That is obviously the best case, and prepare_cached can help you get there.
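In DBI, the prepare-once/execute-many pattern looks something like the sketch below. It requires a live Oracle database, and the connection string, table name, and column values are hypothetical (the columns come from the query discussed in this thread):

```perl
use strict;
use warnings;
use DBI;

# Hypothetical connection details for illustration only.
my $dbh = DBI->connect('dbi:Oracle:orcl', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 0 });

# Prepare once: at most one (hopefully soft) parse.
my $sth = $dbh->prepare(
    'SELECT * FROM securities WHERE security_id = ? AND is_new = ?');

# Execute many times with different bind values: no further parsing.
my @ids = (101, 102, 103);    # placeholder data
for my $id (@ids) {
    $sth->execute($id, 'Y');
    while (my $row = $sth->fetchrow_hashref) {
        # process $row ...
    }
}

$dbh->disconnect;
```

The key point is that the prepare happens outside the loop; only execute (with fresh bind values) runs per iteration.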
The Oracle optimizer should not take long for an uncached query that uses only one table, especially if the table has been analyzed. There are only three plausible plans for the query given: a full table scan, using an index on security_id, or using an index on is_new. (Okay, technically you might have more than two indexes that meet those criteria, so there could be a couple more options, but the planning cost should still be negligible.)
You should only need to worry about the time to generate the execution plan when you're dealing with complex table joins, subqueries, or the like. The query given should plan very quickly, even if you don't have an index on security_id and haven't analyzed the table. (The actual execution won't necessarily be quick, but the planning should be.)
You do, however, need to make sure, as Thilosophy said, that the statements are identical. Unless this has changed since 8i (when I took Oracle's SQL tuning class), that means case-sensitive, identical whitespace, etc. Which they are, as far as I can tell. That would mean the database isn't caching your plan, which, given that the cache uses LRU, could be a sign of a bigger problem with the database.
I would suggest having the DBA check V$SQL and V$SQL_PLAN to see what's currently being cached. Unfortunately, all of my Oracle books are at work, and this isn't the sort of thing I have memorized. I know there are ways to check how full the SGA and shared pool are; STATSPACK might give you some information, and of course there are the error logs.
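For example, a session with SELECT privilege on the V$ views (usually a DBA account) could look for the statement from Perl along these lines. This is a sketch only: it assumes an existing DBI connection in $dbh, and the LIKE pattern is just an illustration:

```perl
# V$SQL tracks each cached statement's parse and execution counts, so you
# can see whether your query text is being reused or re-parsed.
my $plans = $dbh->selectall_arrayref(q{
    SELECT sql_text, parse_calls, executions, loads
      FROM v$sql
     WHERE sql_text LIKE ?
}, { Slice => {} }, '%security_id%');

for my $p (@$plans) {
    # Oracle returns column names in uppercase by default.
    printf "%d parses / %d executions: %s\n",
        $p->{PARSE_CALLS}, $p->{EXECUTIONS}, $p->{SQL_TEXT};
}
```

If executions climbs while parse_calls stays flat, the plan is being reused; if every execution shows up as a new parse (or a new row), something is pushing your statement out of the cache.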
Building on what Thilosophy said, I'd also suggest the following code adjustment. (Note: I've changed the Oracle bind placeholders to the ones that DBI uses. I'm not sure whether DBD::Oracle handles the :1 syntax or not, but I know ? works.)
The oracle optimizer should not take long for an uncached query that's only using one table.
It is true that a hard parse for a complex SQL statement takes more time than for a simple SQL statement.
However, doing a hard parse always takes more time than not doing one, and in some cases the parsing overhead is the dominant factor in the total execution time. All I wanted to say, therefore, is that hard parsing has to be avoided, and that is exactly what bind variables are for.
Especially if the tables have been analyzed
Actually, I should think that building an execution plan is faster if the tables have not been analyzed. In the absence of statistics (gathered during an analyze), Oracle has to skip all its clever calculations and defaults to some hard-coded heuristics.
Of course, the quality of the resulting execution plan will suffer, so spending some extra time on gathering table statistics and using them is a good thing. (And you profit most from that extra cost when the execution plan is reused over and over, which is what bind variables make possible.)
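Gathering statistics is a one-off (or periodic) administrative step. From Perl it could be kicked off roughly like this; a sketch only, assuming a live connection in $dbh and a hypothetical SECURITIES table:

```perl
# DBMS_STATS is Oracle's supported package for gathering optimizer
# statistics (ANALYZE TABLE ... COMPUTE STATISTICS is the older form).
$dbh->do(q{
    BEGIN
        DBMS_STATS.GATHER_TABLE_STATS(
            ownname => USER,
            tabname => 'SECURITIES');
    END;
});
```

In practice this is usually scheduled by the DBA rather than run from application code, but it shows what "analyzing the table" amounts to.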
So, to sum up: not using bind variables is absolutely killing your Oracle performance, and if you run queries in a loop, you should also give prepare_cached some serious consideration.
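prepare_cached is handy when the prepare can't easily be hoisted out of a loop, e.g. when the query lives inside a subroutine that gets called repeatedly. A sketch, assuming a live Oracle connection and the hypothetical table/column names from this thread:

```perl
use strict;
use warnings;
use DBI;

# Hypothetical connection details for illustration only.
my $dbh = DBI->connect('dbi:Oracle:orcl', 'user', 'password',
                       { RaiseError => 1 });

sub lookup_security {
    my ($dbh, $id) = @_;
    # prepare_cached returns the same statement handle for identical SQL,
    # so repeated calls skip the DBI-side prepare, and Oracle sees at most
    # one parse for this statement text.
    my $sth = $dbh->prepare_cached(
        'SELECT * FROM securities WHERE security_id = ? AND is_new = ?');
    $sth->execute($id, 'Y');
    my $row = $sth->fetchrow_hashref;
    $sth->finish;    # release the handle back to DBI's cache
    return $row;
}
```

Note the usual prepare_cached caveat: call finish (or fetch to exhaustion) before the handle is reused, since the cached handle is shared across calls.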
It does, but I think it only keeps a certain number cached. If a particular query is run enough, it will be cached, but this is very dependent on use of the database. If there is a lot of activity, you need to run your query many times before it will be cached in the DB.
The advice above allows you to reuse the prepared statement on the Perl side without counting on Oracle to do it.