I am sorry if I misunderstood what you wrote. It is not my place, or my intention, to distort what you would most sincerely recommend to a fellow monk! Just because I don't agree with you doesn't make me better or more correct; I just disagree with your views on flatfiles, that is all... What you did say, and what I disagree with, is:
> 10,000 items to store. Well, why not having 10 files to store them?
> The first file for the first 1000 items, and so forth. Speedwise, I
> am telling you, you will end up with something a LOT faster than any
> other big DB package or wrapper like all the DBI stuff. Because those
> packages are, in fact, also using some big files I guess...
My general DB and flatfile experience tells me that once a file exceeds about 2,500 records of roughly 400 characters each, with more than one query per second, you are better off with a real DBMS. Yes, ZZamboni's easiest way out is probably going through flatfiles, but even in that case I would try doing something DBIsh (see the sketch below).

And while we are on that topic, splitting a large file into several smaller ones will not help at all (it will actually make matters worse) unless you have some sort of clever indexing system. Splitting the data across different files will not increase lookup speed, and you will pay a penalty for having to open each one of those files to do a full search! Again, I would split the files only, and only!, if you have a good indexing mechanism and can't afford (money- or machine-wise) a DBMS. Most DBMSes already have clever indexing systems, so you don't have to reinvent the wheel.
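Just to illustrate what I mean by "something DBIsh", here is a minimal sketch using DBI with DBD::SQLite -- the database file name, table, and column names are made up for the example. The point is that the DBMS keeps the index for you, so a lookup is one prepared query instead of grepping through ten flatfiles:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Hypothetical database file and table, just for illustration.
    my $dbh = DBI->connect( "dbi:SQLite:dbname=items.db", "", "",
                            { RaiseError => 1, AutoCommit => 1 } );

    # Let the DBMS handle the indexing instead of rolling your own.
    $dbh->do("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, data TEXT)");

    # One indexed lookup, no matter whether there are 1,000 or 10,000 rows.
    my $sth = $dbh->prepare("SELECT data FROM items WHERE id = ?");
    $sth->execute(4242);
    my ($data) = $sth->fetchrow_array;
    print defined $data ? "$data\n" : "not found\n";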
On a side note, caching won't make the performance merely acceptable, it will make it go through the roof!! There's no way to compare disk access with RAM access.
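A minimal sketch of what I mean, assuming some (hypothetical) expensive_lookup() that actually hits the disk -- once a record has been fetched, repeated queries are served straight from a hash in RAM:

    my %cache;

    sub cached_lookup {
        my ($id) = @_;
        # Only hit the disk the first time; after that it's all RAM.
        $cache{$id} = expensive_lookup($id) unless exists $cache{$id};
        return $cache{$id};
    }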
#!/home/bbq/bin/perl
# Trust no1!