1. I have to index the big table that is already indexed (the value itself is an index), and that is an extra number of steps.
2. I have to import my data (the big table), and that is an extra number of steps.
3. I have to index more than one column to speed up the search, and that is a redundant number of steps.
So the point is: why do all that if I can do it just by hashing (importing) the smaller data and then sweeping once through the big data (file)?
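To make the comparison concrete, here is a minimal Perl sketch of the hash-and-sweep approach I mean (the file names small.txt and big.txt and the tab-separated record format are just assumptions for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # build the hash from the small data set once: O(m) work
    my %wanted;
    open my $small, '<', 'small.txt' or die "small.txt: $!";
    while (my $key = <$small>) {
        chomp $key;
        $wanted{$key} = 1;
    }
    close $small;

    # one sweep through the big, already formatted file: O(n) work
    open my $big, '<', 'big.txt' or die "big.txt: $!";
    while (my $line = <$big>) {
        chomp $line;
        my ($key) = split /\t/, $line, 2;          # first column is the key
        print "$line\n" if exists $wanted{$key};   # one hash lookup per record
    }
    close $big;

That is O(m) to build the hash plus O(n) for the sweep, assuming each lookup is constant time, which is exactly the assumption my question below is about.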
Plus, what I notice is that when the db engine is importing stuff, it is reformatting it in some manner, and that has to be expensive (an extra number of steps) compared to sweeping through an already formatted file (the format itself is not the issue, switching between formats is).
So the question was: when calculating the theoretical complexity of the hashed data, can I neglect the fact that, for every lookup, it has to go through all the hash keys to get me the value (is it legitimate to do that)? Because that is what I'm actually doing when evaluating the complexity of my db query, isn't it?
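For what it's worth, as far as I understand a lookup does not actually walk all the keys: the key is hashed straight to a bucket, and only that bucket's (short) chain is compared against. That is why the per-lookup cost is usually modeled as O(1) on average. A toy chained hash table to illustrate the idea (the bucket count and hash function here are made up for the example, not what perl actually uses internally):

    use strict;
    use warnings;

    my $NBUCKETS = 8;
    my @buckets  = map { [] } 1 .. $NBUCKETS;

    sub bucket_of {                 # trivial string hash, illustration only
        my ($key) = @_;
        my $h = 0;
        $h = ($h * 31 + ord $_) % $NBUCKETS for split //, $key;
        return $h;
    }

    sub insert {
        my ($key, $value) = @_;
        push @{ $buckets[ bucket_of($key) ] }, [ $key, $value ];
    }

    sub lookup {                    # scans ONE bucket's chain, not all keys
        my ($key) = @_;
        for my $pair (@{ $buckets[ bucket_of($key) ] }) {
            return $pair->[1] if $pair->[0] eq $key;
        }
        return undef;
    }

    insert('foo', 42);
    print lookup('foo'), "\n";      # prints 42

Real implementations keep the chains short by growing the bucket array as the table fills up, which is what keeps the average lookup constant.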
Thank you for your fast reply!