For real speed and scalability, you'll need an RDBMS such as MySQL or PostgreSQL; use DBI to access it, and when you're feeling up to it, mod_perl and Apache::DBI to manage persistent connections to the DBMS. If you've been avoiding this kind of setup, learn how to use it. You'll thank yourself later.
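For reference, the DBI pattern is only a few lines. A minimal sketch, assuming a MySQL database with a hypothetical users table; the DSN, table name, and credentials are placeholders:

use strict;
use warnings;
use DBI;

# Connect to a hypothetical MySQL database; DSN and credentials
# here are placeholders for illustration only.
my $dbh = DBI->connect('DBI:mysql:database=site;host=localhost',
                       'user', 'password',
                       { RaiseError => 1, AutoCommit => 1 });

# A prepared statement handles quoting for you and can be reused.
my $sth = $dbh->prepare('SELECT password, info FROM users WHERE username = ?');
$sth->execute('someuser');

if (my ($password, $info) = $sth->fetchrow_array) {
    print "found: $info\n";
}
$dbh->disconnect;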
That said, right now you can probably use a DBM interface, which along with tie allows you to manipulate a file as if it were a hash; the keys could be user names and the values could be split-able strings that contain the user information. You'll have to pay close attention to things like file locking, however (which you have to do already anyway, and which is one *very* good reason to use a DBMS, IMO).
Check out DB_File, or do a search on CPAN for "DBM" and see what pops up.
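Here's roughly what the tie approach looks like with DB_File; a minimal sketch, where the file name and record layout are assumptions:

use strict;
use warnings;
use DB_File;
use Fcntl;

# Tie a hash to a DBM file: keys are usernames, values are
# split-able strings. File name and fields are made up.
my %users;
tie %users, 'DB_File', 'users.db', O_RDWR|O_CREAT, 0666, $DB_HASH
    or die "Cannot tie users.db: $!";

$users{someuser} = join ':', 'passhash', 'someuser@example.com', '10.0.0.1';

my ($pass, $email, $ip) = split /:/, $users{someuser};
print "$email\n";

untie %users;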
HTH
perl -e 'print "How sweet does a rose smell? "; chomp $n = <STDIN>; $rose = "smells sweet to degree $n"; *other_name = *rose; print "$other_name\n"'
1. You can speed it all up with BerkeleyDB: http://www.sleepycat.com. Low overhead, and very speedy thanks to its BTree storage. Within the database, put the fields in fixed widths: multiple index lookups are slow, naturally, compared to computing offsets from a set of fixed numbers. Your mileage may vary depending on OS, memory, CPU speed, CPU caches, disk caches, disk speed, etc., so test and find the best solution for your situation.
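Something like this, perhaps; a minimal sketch of the BerkeleyDB tie interface with fixed-width fields, where the file name and field widths are assumptions:

use strict;
use warnings;
use BerkeleyDB;

# Tie a hash to a Berkeley DB BTree file; keys are kept sorted,
# which is what makes lookups fast. Names and widths are made up.
my %db;
tie %db, 'BerkeleyDB::Btree',
    -Filename => 'users.bdb',
    -Flags    => DB_CREATE
    or die "Cannot open users.bdb: $BerkeleyDB::Error";

# Fixed-width fields: one sprintf in, one unpack out, no extra
# index lookups.
$db{100013} = sprintf "%-16s%-32s%-15s", 'someuser', 'passhash', '10.0.0.1';

my ($user, $pass, $ip) = unpack 'A16 A32 A15', $db{100013};
print "$user\n";

untie %db;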
BTW, making a database with links to other databases just to retrieve a password seems *very* complicated for such an operation. Why not put it all in one file, or why not use an RDBMS (MS SQL, Postgres, Oracle, MySQL, etc.)?
2. If performance is an issue, just write to a copy, and lock only that copy when writing. Every 5 minutes or so, let a cron (or at) job refresh the original database file, and lock that file during the refresh. Or use an RDBMS, see above. Oh yeah, BerkeleyDB has a transaction system as well.
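A minimal sketch of the write side of that scheme using flock; the file name and record format are illustrative:

use strict;
use warnings;
use Fcntl qw(:flock);

# Append a record to the working copy, holding an exclusive lock
# only for the duration of the write.
open my $fh, '>>', 'users.copy.db' or die "open: $!";
flock($fh, LOCK_EX) or die "flock: $!";
print {$fh} "someuser:passhash:10.0.0.1\n";
close $fh;   # closing the handle releases the lock

# A cron job would then periodically lock the master file and copy
# users.copy.db over it, e.g. with File::Copy's copy().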
Hope this helps, Jeroen
"We are not alone"(FZ) | [reply] |
Actually, if disk space is not an issue and you want to write a new DBI interface, you can make a blazingly fast database (read-only suggested) on your local drive.
The only way to make it fast on your local machine is to use fixed-width rows and columns. Doing this allows you to seek across the rows and sysread the data out. Once you have done this, create fixed-width row indexes that map a column to the appropriate row in the real database. You can do a binary search on the index and get the right row out. If programmed correctly, you can retrieve the correct row with fewer than 20 file operations even on a database of over one million rows. This comes from experience: it can be done, and the results are great (if done properly).
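To make the seek-and-sysread part concrete, here's a minimal sketch; the file name, row length, and field widths are assumptions:

use strict;
use warnings;
use Fcntl qw(SEEK_SET);

# Pull row $n straight out of a fixed-width data file with one seek
# and one read.
my $row_length = 64;   # every record padded to exactly 64 bytes
my $n          = 13;   # the row we want (0-based)

open my $fh, '<', 'users.dat' or die "open: $!";
sysseek($fh, $n * $row_length, SEEK_SET) or die "sysseek: $!";
sysread($fh, my $record, $row_length) == $row_length or die "short read";
close $fh;

my ($user, $pass, $ip) = unpack 'A16 A32 A16', $record;  # fixed fields
print "$user\n";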
That said, you should use an RDBMS. They already have the interface written, plus the ability to create relations, do type checking, snapshot the data, handle locking, and allow for easy update and removal of data. The hand-written solution I mentioned above is really only appropriate if you aren't updating the data more than once or twice a day.
my @a=qw(random brilliant braindead); print $a[rand(@a)];
I'm still unsure about binary searches. How are they faster? They search the same area and do it the same number of times, so it's not as if there's a higher probability that the requested db entry would be at the top or bottom of the list. Also, fixed-length entries are a bit difficult, seeing as the file contains passwords, user info, and a special section that holds hackers' IP addresses (can't really truncate that, now can we?). I cannot use a real database system since I am going for a text-file type. I know line-by-line still loads all the stuff into memory, but oh well.

So how might I make the database find the right part of the db without re-indexing the db every time I add or delete something? It's not all one big database, because I can easily split it into files of 200 entries and scale directories for each set of db files. This way we can have db files "db_info.1-200" and "db_info.201-400"; it takes the userid (they all start at 100000) and takes away 99999 or something, I forget. Then I have the number (100013 would be 13 in the end) and open the database file that covers "1-200", since 13 is less than 200. Opening a huge database file into memory is bad; opening a small one is good.
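In code, that bucket lookup would look something like this; the base of 100000 and the file-name pattern are taken from the description above, and it is untested:

# Map a userid to its bucket file.
my $userid = 100013;
my $n      = $userid - 100000;               # 13
my $lo     = int(($n - 1) / 200) * 200 + 1;  # 1
my $file   = sprintf "db_info.%d-%d", $lo, $lo + 199;  # "db_info.1-200"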
You said that comes from experience (the seeking and so on), so may I have a bit more of a hint on how to do it? I don't really know how to seek through records that well, since I usually go line by line or load the entire file into an array. Any code is helpful.
Sorry for the delay in the post.
The trick to the index is that it must be a sorted index (ASCII-sorted is suggested). So if I have 10 numbers, I would make an index that looks like this:
001 005
020 006
030 001
031 007
050 009
060 002
070 003
080 010
090 004
100 008
Pretty boring. Notice that it is fixed-width. In this example, the first column is the sorted key and the second column is the real row number where the data lives in the main file.
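Generating such an index is straightforward; a minimal sketch, assuming the keys are already padded to the index width:

use strict;
use warnings;

# Write a sorted, fixed-width index from a hypothetical key-to-row map.
my %row_of = ('001' => 5, '020' => 6, '030' => 1);   # key => data row

open my $out, '>', 'index.dat' or die "open: $!";
for my $key (sort keys %row_of) {
    printf {$out} "%s %03d\n", $key, $row_of{$key};  # exactly 8 bytes per line
}
close $out;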
The following is the basic code I would use to search the given index file. Searching for 020, it would do the following:
- Seek to and read in 060
- Not found; max set to the beginning of 060
- Seek to and read in 030
- Not found; max set to the beginning of 030
- Seek to and read in 020
- Found; spit out the result.
A binary search will find any of the data using only 4 seeks ((2 ** 4 == 16) and 16 > 10). If this was a data file of a million records, it would take only 20 seeks ((2 ** 20 == 1048576) and 1048576 > 1000000). I wrote this code from scratch below and haven't tested it, but it should be sound.
use strict;
use warnings;

my $file        = 'index.dat';         # the sorted index file
my $search      = sprintf("%03d", 20); # pad the search key to the index width
my $ind_length  = 3;                   # width of the sorted key column
my $line_length = 8;                   # full line: key, space, row number, newline

my $size = -s $file;
die "Corrupt file" unless $size % $line_length == 0;
open(my $ind, '<', $file) or die $!;   # get the file ready

my $min = 0;                       # first candidate row
my $max = $size / $line_length;    # one past the last candidate row
my $result;

while ($min < $max) {
    my $mid = int(($min + $max) / 2);   # middle candidate row
    my $pos = $mid * $line_length;      # its byte offset in the file
    sysseek($ind, $pos, 0) or die $!;   # sysseek pairs with sysread
    sysread($ind, my $match, $ind_length);  # read just the key column

    # found it!
    if ($match eq $search) {
        sysseek($ind, $pos + $ind_length + 1, 0);  # skip key and separator
        sysread($ind, $result, $ind_length);       # read the real row number
        last;
    }
    if ($match gt $search) {
        $max = $mid;        # key must be in the lower half
    } else {
        $min = $mid + 1;    # key must be in the upper half
    }
}
close($ind);

if (defined $result) {
    print "Found it ($result)\n";
}
my @a=qw(random brilliant braindead); print $a[rand(@a)];