in reply to Algorithm Efficiency vs. Specialized Hardware?
| dataset size | Grep | Max | Ternary |
|---|---|---|---|
| 100 | 319704.72/s | 167783.44/s | 160956.63/s |
| 1000 | 324017.59/s | 158933.12/s | 157394.83/s |
| 10000 | 328377.50/s | 162308.63/s | 159661.13/s |
| 100000 | 332964.44/s | 160964.22/s | 157891.99/s |
What I expect you're seeing is that, for datasets of the size we are considering, it is much quicker to sort the array and index its last element (perl's sort runs in compiled C) than it is to iterate through the array even once in an interpreted language.
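The rates in the table look like output from Perl's Benchmark module. I don't have the original subs, but a minimal sketch of that kind of comparison might look like the following; the sub names, data, and timing budget are my own guesses, not the code that produced the numbers above:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# rough illustration only: compare a C-speed builtin against
# a pure-Perl loop at each dataset size from the table above
for my $size ( 100, 1_000, 10_000, 100_000 ) {
    my @data = map { int rand 1_000_000 } 1 .. $size;
    print "dataset size: $size\n";
    cmpthese( -1, {
        # perl's sort runs in C, so sorting the whole array and
        # indexing the last element can still win at these sizes
        sort_last => sub { my $max = ( sort { $a <=> $b } @data )[-1] },
        # a single pass, but every comparison is interpreted Perl
        loop_ternary => sub {
            my $max = $data[0];
            $max = $_ > $max ? $_ : $max for @data;
        },
    } );
}
```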
You see similar results with sorting algorithms. Quicksort is well known to be the fastest general-purpose sorting algorithm for large sets, with O(n log n) average running time. However, the overhead quicksort incurs is overkill for small datasets: for arrays with very few elements, bubble sort is often faster, even though bubble sort is O(n^2).
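This is why practical implementations often use a hybrid: quicksort for large partitions, a simple quadratic sort once partitions get small. Here is a sketch of the idea; the cutoff of 8 elements is an arbitrary guess for illustration, and real libraries tune that value (and usually use insertion sort rather than bubble sort for the small case):

```perl
use strict;
use warnings;

# bubble sort: O(n^2) comparisons, but almost no per-call overhead
sub bubble_sort {
    my @a = @_;
    for my $i ( 0 .. $#a - 1 ) {
        for my $j ( 0 .. $#a - $i - 1 ) {
            # swap adjacent elements that are out of order
            @a[ $j, $j + 1 ] = @a[ $j + 1, $j ] if $a[$j] > $a[ $j + 1 ];
        }
    }
    return @a;
}

# quicksort that hands small partitions to bubble_sort
sub quick_sort {
    my @a = @_;
    return bubble_sort(@a) if @a <= 8;    # cutoff is a guess
    my $pivot   = shift @a;
    my @less    = grep { $_ <  $pivot } @a;
    my @greater = grep { $_ >= $pivot } @a;
    return ( quick_sort(@less), $pivot, quick_sort(@greater) );
}

print join( ' ', quick_sort( map { int rand 100 } 1 .. 20 ) ), "\n";
```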