I'm still unsure about binary searches. How are they faster? They search the same area and do it the same number of times, so it's not like there's a higher probability that the requested db entry would be at the top or bottom of the list or whatever. Also, as far as fixed-length entries go, that's a bit difficult, seeing as an entry contains passwords, user info, and a special section that holds hackers' IP addresses (can't really truncate that now, can we?). I can't use a real database system since I'm going for the text-file approach. I know line-by-line still loads all the stuff into memory, but oh well.

So how might I make the database find the right part of the db without re-indexing it every time I add or delete something? It's not all one big database, because I can easily split it into files of 200 entries and scale directories for each set of db files. That way we can have db files "db_info.1-200" and "db_info.201-400"; the code takes the userid (they all start at 100000), subtracts 99999 or something (I forget), and gets a number (100013 would end up as 13), then opens the database file marked "1-200" since 13 is less than 200. Opening a huge database file into memory is bad; opening a small one is good.
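(For illustration only, here is a minimal sketch of that bucketing scheme, assuming the offset subtracted is 100000 so that 100013 maps to 13, each file holds 200 entries, and the helper name is made up.)

use strict;
use warnings;

# Hypothetical helper for the bucketing scheme described above: userids
# start at 100000, each db file holds 200 entries, and files are named
# like "db_info.1-200", "db_info.201-400", ...
sub db_file_for {
    my ($userid) = @_;
    my $n    = $userid - 100000;               # 100013 -> 13
    my $low  = int(($n - 1) / 200) * 200 + 1;  # 13 -> 1, 213 -> 201
    my $high = $low + 199;                     # 13 -> 200, 213 -> 400
    return "db_info.$low-$high";
}

print db_file_for(100013), "\n";   # db_info.1-200
print db_file_for(100213), "\n";   # db_info.201-400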
You said that (the seeking and so on) comes from experience, so may I have a bit more of a hint on how to do it? I don't really know how to seek through records, since I usually go line by line or load the entire file into an array. Any code is helpful.
Sorry for the delay in replying.
The trick to the index is that it must be sorted (ASCII sort is suggested). So if I have 10 numbers, I would make an index that looks like this:
001 005
020 006
030 001
031 007
050 009
060 002
070 003
080 010
090 004
100 008
Pretty boring. Notice that it is fixed width. In this example the first column is the sorted key, and the second column is the row number where the data actually lives in the main file.
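As a rough sketch of how an index like that might be generated (the output file name, and the idea that the keys and their row numbers are already known, are assumptions for the example):

use strict;
use warnings;

# Build a fixed-width, key-sorted index. Each line is "KEY ROW\n",
# i.e. 3 + 1 + 3 + 1 = 8 bytes per line.
my %row_for_key = (
    '001' => '005', '020' => '006', '030' => '001', '031' => '007',
    '050' => '009', '060' => '002', '070' => '003', '080' => '010',
    '090' => '004', '100' => '008',
);

open(my $out, '>', 'index.dat') or die $!;
for my $key (sort keys %row_for_key) {          # default sort = ASCII sort
    printf {$out} "%3s %3s\n", $key, $row_for_key{$key};
}
close $out;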
Below is the basic code I would use to search that index file. To look up the key 020, it does the following:
- Seek to and read in 060
- Not found; max is set to the beginning of 060
- Seek to and read in 030
- Not found; max is set to the beginning of 030
- Seek to and read in 020
- Found; spit out the result
A binary search will find any of the data using at most 4 seeks (2 ** 4 == 16, and 16 > 10). If this were a data file of a million records, it would take at most 20 seeks (2 ** 20 == 1048576, and 1048576 > 1000000). I wrote the code below from scratch and haven't tested it, but it should be sound.
use strict;
use warnings;
use Fcntl qw(SEEK_SET);

my $file        = 'index.dat';          # the sorted, fixed-width index file
my $search      = sprintf("%03d", 20);  # pad the search key to the index width
my $ind_length  = 3;                    # width of the sorted key column
my $line_length = 8;                    # total bytes per index line ("KEY ROW\n")

my $size = -s $file;
die "Corrupt file" unless $size % $line_length == 0;

open(IND, '<', $file) or die $!;        # get the file ready
binmode(IND);

my $min = 0;       # lower bound (byte offset, inclusive)
my $max = $size;   # upper bound (byte offset, exclusive)
my $result;        # the real row number, once found
my $match;

while ($min < $max) {
    # Find the middle record, rounded down to the beginning of a line.
    my $pos = $line_length * int( ($min + $max) / (2 * $line_length) );

    sysseek(IND, $pos, SEEK_SET);        # jump to the middle record
    sysread(IND, $match, $ind_length);   # read just the key column

    if ($match eq $search) {                              # found it!
        sysseek(IND, $pos + $ind_length + 1, SEEK_SET);   # skip the key and the space
        sysread(IND, $result, $ind_length);               # read the real row number
        last;
    }
    if ($match gt $search) {
        $max = $pos;                     # answer is below this record
    } else {
        $min = $pos + $line_length;      # answer is above this record
    }
}
close IND;

if (defined $result) {
    print "Found it ($result)\n";
}
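Once $result holds the row number, and assuming the main data file also uses fixed-width records (the data file name and record width below are made up for illustration), you could jump straight to the record instead of reading line by line:

use strict;
use warnings;
use Fcntl qw(SEEK_SET);

my $data_file  = 'db_info.1-200';   # assumed name, per the scheme above
my $rec_length = 128;               # made-up fixed record width for the main file
my $result     = '006';             # row number found by the index search above

open(DATA_FH, '<', $data_file) or die $!;
binmode(DATA_FH);
sysseek(DATA_FH, ($result - 1) * $rec_length, SEEK_SET);  # rows are numbered from 1
sysread(DATA_FH, my $record, $rec_length);                # pull in just that record
close DATA_FH;
print "Record: $record\n";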
my @a=qw(random brilliant braindead); print $a[rand(@a)];