Re^2: Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)( A DB won't help)
by tilly (Archbishop) on Jul 24, 2004 at 16:32 UTC
My expectation is that most databases would use a well-known data structure (such as a BTree) to store this kind of data, which avoids a million directory entries and also allows for variable-length data. I admit that an RDBMS might do this wrong, but I'd expect most of them to get it right on the first try. Certainly BerkeleyDB will.
As for the "file with big holes" approach, only some filesystems implement that. Furthermore, depending on how Perl was compiled and what OS you're on, you may have a fixed 2 GB limit on file sizes. With real data, that is a barrier you're probably not going to hit; with your approach, the file's size will always be the worst case. (And if your assumption about the size of a record is violated, you'll be in trouble - you've recreated the problem of the second situation that you complained about in point 1.)
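For concreteness, here is a minimal sketch of the "big file with holes" idea under discussion (the slot layout, sizes, and file name are my own assumptions, not anything specified in the thread): every logical file gets a fixed worst-case slot in one big file, and adding a value is a seek plus a 4-byte write.
use strict;
use warnings;
use Fcntl qw(O_RDWR O_CREAT);

# Each of the million logical "files" owns a fixed, worst-case slot in
# one big (possibly sparse) file.  The first 4 bytes of a slot hold the
# count of integers stored so far; the rest hold the data.
my $MAX_INTS  = 1024;                 # assumed worst case per logical file
my $SLOT_SIZE = 4 + 4 * $MAX_INTS;

sysopen my $fh, 'bigfile.dat', O_RDWR | O_CREAT, 0644   # example path
    or die "open: $!";

sub append_value {
    my ($file_num, $value) = @_;
    my $base = $file_num * $SLOT_SIZE;

    # Current count for this slot (a read past EOF or inside a hole gives 0).
    sysseek $fh, $base, 0 or die "seek: $!";
    my $buf   = '';
    my $got   = sysread $fh, $buf, 4;
    my $count = ( $got && $got == 4 ) ? unpack 'N', $buf : 0;
    die "slot $file_num is full" if $count >= $MAX_INTS;

    # Write the new value into its position, then update the count.
    sysseek  $fh, $base + 4 + 4 * $count, 0 or die "seek: $!";
    syswrite $fh, pack( 'N', $value )       or die "write: $!";
    sysseek  $fh, $base, 0                  or die "seek: $!";
    syswrite $fh, pack( 'N', $count + 1 )   or die "write: $!";
}

append_value( 12345, 42 );   # add the value 42 to logical file 12345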
I'd also be curious to see the relative performance with real data between, say, BerkeleyDB and "big file with holes". I could see it coming out either way. However, I'd prefer BerkeleyDB because I'm more confident that it will work on any platform, because it is more flexible (you aren't limited to numerical offsets), and because it doesn't have the record-size limitation.
A 2GB filesize limit is definitely a problem with the big file approach. Two possible ways to avoid this if you still want to go this way:
- the obvious: split the big file up into n files. This would also make the "growing" operation less expensive
- if some subfiles aren't growing very much at all, you could actually decrease the size allocated to them at the same time you do the grow operation.
Actually, if you wanted to get really spiffy, you could have it automatically split the big file in half when it hits some threshold...then split any sub-big files as they hit the threshold, etc...
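As a rough sketch of how that split might look (the 4 KB slot size, 2 GB ceiling, and file naming are assumptions of mine, not anything from the thread), the bookkeeping is just an integer division to pick a subfile and a modulus to find the offset inside it:
use strict;
use warnings;

# Assumed sizing: a 4 KB worst-case slot per logical file and a 2 GB
# per-file ceiling give roughly 500,000 slots per physical subfile, so
# a million logical files need at least two subfiles.
my $SLOT_SIZE      = 4 * 1024;
my $SLOTS_PER_FILE = int( (2**31 - 1) / $SLOT_SIZE );

# Map a logical file number to (subfile path, byte offset inside it).
sub locate_slot {
    my ($file_num) = @_;
    my $subfile = int( $file_num / $SLOTS_PER_FILE );
    my $offset  = ( $file_num % $SLOTS_PER_FILE ) * $SLOT_SIZE;
    return ( "bigfile.$subfile", $offset );
}

my ($path, $offset) = locate_slot(750_000);   # lands in "bigfile.1"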
BerkeleyDB is definitely sounding easier...but I still think this would be a lot of fun to write! (Might be a good Meditation topic...there are times when you might want to just DIY because it would be fun and/or a good learning experience.)
Brad
use strict;
use warnings;
use Fcntl;            # O_RDWR, O_CREAT
use DB_File;

my $file = 'data.db'; # example path for the BTree file
my $db
    = tie( my %data, 'DB_File', $file, O_RDWR|O_CREAT, 0640, $DB_BTREE )
    or die "Cannot tie $file: $!";
# Now use %data directly, albeit with tie overhead.
# Or use the OO interface (put/get) on $db.
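A hypothetical way to use that tie for the data in this thread would be to key on the old file number and keep each file's integers as a single packed string:
my ($file_num, $new_value) = (42, 12345);    # example key and value

$data{$file_num} = '' unless exists $data{$file_num};
$data{$file_num} .= pack 'N', $new_value;    # append one 4-byte integer
my @values = unpack 'N*', $data{$file_num};  # unpack the whole list back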
Re^2: Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)( A DB won't help)
by demerphq (Chancellor) on Jul 25, 2004 at 23:01 UTC
I'm confused; why wouldn't you just use a single table, with file_num, item_num and num_val as the data? Presuming that we can use four bytes per field, we have 12 bytes per record. Thus 1 million records is ~12 MB; assuming 100 records per file, we are looking at roughly 1.2 GB, no?
My point here is that unless I'm missing something (which I suspect I am), neither of the ways you describe is how I would solve this problem with an RDBMS engine. BLOBs are a bad idea, as they almost always allocate a full page (one cluster, IIRC) regardless of how big the BLOB is. And using millions of tables just seems bizarre, as the overhead of managing the tables will be ridiculous. I suspect, but don't know for sure, that Sybase would be very unhappy with a DB with a million tables in it, but I know for sure that it is quite happy to have tables with 120 million records in them.
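For what it's worth, here is a sketch of the single-table layout described above, using SQLite via DBI purely for illustration (the thread talks about Sybase; the table name and connection details are hypothetical, the column names are the ones from this post):
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( 'dbi:SQLite:dbname=items.db', '', '',
                        { RaiseError => 1 } );

# One row per stored number: which old "file" it came from, its
# position within that file, and the value itself.
$dbh->do(<<'SQL');
CREATE TABLE items (
    file_num INTEGER NOT NULL,
    item_num INTEGER NOT NULL,
    num_val  INTEGER NOT NULL,
    PRIMARY KEY (file_num, item_num)
)
SQL

# "Appending" a value to logical file 42 becomes a plain INSERT.
$dbh->do( 'INSERT INTO items (file_num, item_num, num_val) VALUES (?, ?, ?)',
          undef, 42, 0, 12345 );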
---
demerphq
First they ignore you, then they laugh at you, then they fight you, then you win.
-- Gandhi
As described by the OP, there are 1,000,000(+) binary files, each containing a variable number of 4-byte integers; the files are often less than 1 KB and usually less than 4 KB. Assuming an average of 2 KB (512 integers) per file, that gives 2,048 * 1,000,000 = ~1.9 GB. The aim was to save the 'wasted disc space' due to cluster-size round-up.
Any DB scheme that uses a single table and two 4-byte integer indices per number will require a minimum of 12 * 512 * 1,000,000 bytes = ~5.7 GB.
The extra space is required because the two indices, fileno & itemno (position), are implicit in the original scheme but must be explicit in the 'one table / one number per tuple' scheme.
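Redoing those back-of-the-envelope figures in a few lines (the 512-integer average is the assumption stated above):
use strict;
use warnings;

my $files         = 1_000_000;
my $ints_per_file = 512;                            # assumed average

my $packed_files = 2_048 * $files;                  # ~2 KB per packed file
my $single_table = 12 * $ints_per_file * $files;    # value + two 4-byte indices

printf "packed files: %.1f GB\n", $packed_files / 2**30;   # ~1.9 GB
printf "single table: %.1f GB\n", $single_table / 2**30;   # ~5.7 GB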
The other alternative I posed was to store each file (1..1024 4-byte integers) from the filesystem scheme as a LONGBLOB, thereby packing one file per tuple in the single table. Often BLOBs are stored as fixed-length records, each occupying the maximum record size allowed regardless of the length actually stored.
Even when they are stored as LONGVARBINARY (4-byte length + length bytes), they are not kept in the main table file but in a separate file, with a 4-byte placeholder/pointer into that ancillary file. That's at least 12 bytes per file (fileno, pointer, length) * 1,000,000 extra bytes that need to be stored on disc somewhere. Any savings made by avoiding cluster round-up through packing the variable-length records into a single file are mostly lost here and in the main table file.
In addition, as the OP pointed out, this scheme requires that each 'file' record be queried, appended to, and then re-written for each number added. That is a costly process relative to appending to the end of a named file.
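To make the cost difference concrete, here is a sketch of the two update paths (the 'files' table, its columns, and the database handle are hypothetical):
use strict;
use warnings;
use DBI;

# Filesystem scheme: adding one value is a 4-byte append to a named file.
sub append_to_file {
    my ($path, $value) = @_;
    open my $fh, '>>', $path or die "open $path: $!";
    binmode $fh;
    print {$fh} pack 'N', $value;
    close $fh or die "close $path: $!";
}

# One-BLOB-per-tuple scheme: the whole blob must be fetched, extended
# in memory, and written back for every value added.
sub append_to_blob {
    my ($dbh, $file_num, $value) = @_;
    my ($blob) = $dbh->selectrow_array(
        'SELECT data FROM files WHERE file_num = ?', undef, $file_num );
    $blob = '' unless defined $blob;
    $blob .= pack 'N', $value;
    $dbh->do( 'UPDATE files SET data = ? WHERE file_num = ?',
              undef, $blob, $file_num );
}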
It's often forgotten that data stored in a database ultimately ends up in the filesystem (in most cases). Of course, in a corporate environment, that disc space may belong to someone else's budget and is therefore not a concern :) But if the aim is to save disc space (which may or may not be a legitimate concern--we don't know the OP's situation. Embedded systems?), then a DB won't help.
Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail
"Memory, processor, disk in that order on the hardware side. Algorithm, algoritm, algorithm on the code side." - tachyon
Sybase could handle a million tables, but, as you say, the overhead (in syscolumns and sysobjects) would be tremendous.
BLOBS would be a bad idea from the space management perspective, and would probably be a bit slow as well due to being stored on a different page chain.
If you are using Sybase 12.5 or later and you know that the binary data will be less than a set amount (say 4k or so), then you could use a 4k or 8k page size on the server and a VARBINARY(4000) column (for example) to store the binary data. This would be quite fast, as the data is stored on the main page for the row, and it wouldn't waste any space.
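A rough sketch of that layout, assuming an ASE 12.5+ server built with a 4 KB (or larger) page size (server name, credentials, and table name here are placeholders):
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( 'dbi:Sybase:server=MYSERVER;database=filedb',
                        'user', 'password', { RaiseError => 1 } );

# One row per old "file"; the packed integers live in-row in the
# varbinary column, so there is no separate BLOB/text page chain.
$dbh->do(<<'SQL');
CREATE TABLE file_data (
    file_num int             NOT NULL,
    data     varbinary(4000) NULL,
    PRIMARY KEY (file_num)
)
SQL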
Michael