in reply to Re^3: Benchmarking A DB-Intensive Script
in thread Benchmarking A DB-Intensive Script
With the random strategy, the odds of repeating a selection climb from 0% at the start to about 4% by the end. So you're going to need to pull roughly 102 million pairs by the time you're done getting your 100 million unique ones (slightly more than that, but well under 104 million). That's 102 million hash lookups, each following fast operations.
The alternate strategy involves doing 2.45 billion hash lookups before you even start. That preprocessing step alone costs far more than the roughly 2 million redos you're hoping to save with a better algorithm.
Random-then-redo is just fine for his needs. It would only be worth revisiting that plan if he were going to be selecting a large fraction of his search space.
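For anyone who wants to see the overhead for themselves, here's a minimal sketch of the redo strategy at a scaled-down size (the sizes and function name are mine, not from the thread; the real script is Perl, but the arithmetic is the same). Drawing 4% of a pair space and redrawing on collision costs only about 2% extra draws, which is exactly why the 102-million figure sits so close to 100 million:

```python
import random

def sample_unique_pairs(n, k, rng=None):
    """Draw k unique pairs from an n x n space, redrawing on collision.

    One hash lookup per draw, as in the random-then-redo strategy.
    """
    rng = rng or random.Random(42)  # seeded for repeatability
    seen = set()
    draws = 0
    while len(seen) < k:
        pair = (rng.randrange(n), rng.randrange(n))
        draws += 1
        seen.add(pair)  # a no-op on a repeat; we simply draw again
    return seen, draws

# k is 4% of the n*n space, mirroring the thread's 100M-of-2.45B ratio
# scaled down: 40_000 unique pairs out of 1_000_000 possible.
pairs, draws = sample_unique_pairs(1000, 40_000)
print(len(pairs), draws)
```

The expected number of draws here is about 40,800 (a ~2% overhead), and no preprocessing pass over the full million-pair space is ever needed.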
Re^5: Benchmarking A DB-Intensive Script
by blogical (Pilgrim) on Mar 15, 2006 at 05:04 UTC