Use a database. I'm serious. This requirement is impossible to implement using files. You can use lock-files and the like, but that means you're just queueing up the changes to be done serially instead of in parallel. (Not to mention that it's REALLY easy to screw up lockfiles and have everything go to hell in a handbasket mighty quick.)
Lots of people seem hell-bent on using full-blown RDBMS's these days, even for the simplest things. File locking isn't black magic; see this article about semaphore files. Yes, there could be a queue, but if the running time of each script is short, it probably isn't a problem.
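A bare-bones sketch of the semaphore-file approach (untested; the lock-file path is made up, and the actual work on the data file is left as a comment):

    use strict;
    use warnings;
    use Fcntl qw(:flock);

    # One shared semaphore file; every script that touches the data opens it first.
    open my $sem, '>', '/tmp/mydata.lock' or die "Can't open semaphore file: $!";

    # Block here until we hold an exclusive lock, then do the read-modify-write.
    flock $sem, LOCK_EX or die "Can't lock: $!";

    # ... work on the real data file here ...

    # Release the lock so the next queued script can run.
    flock $sem, LOCK_UN;
    close $sem;

The scripts do queue up behind the lock, but as said above, that's usually fine when each one only holds it for a fraction of a second.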
If certain information is always on a specific line, use Tie::File. With this module you can access a normal file like an array ($line[0] is line 1, $line[1] is line 2, etc.). I think it also does locking, but you'll have to experiment with that.
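Something along these lines (untested; the file name data.txt is invented, and the locking question is one to check against the Tie::File docs):

    use strict;
    use warnings;
    use Tie::File;

    # Tie the file to @line; nothing is slurped into memory up front.
    tie my @line, 'Tie::File', 'data.txt'
        or die "Can't tie data.txt: $!";

    print "Line 3 is: $line[2]\n";    # $line[0] is line 1, and so on

    untie @line;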
And if the problem is small, and a database is called for, use DBD::SQLite (the complete database engine is included in the module's source) and DBI. This lightweight database uses a single file to store everything, so you don't have to do a system-wide, full-blown MySQL or PostgreSQL installation.
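Roughly like this on the DBI side (untested; the database file and table names are made up):

    use strict;
    use warnings;
    use DBI;

    # The whole database lives in this one file; SQLite creates it on first connect.
    my $dbh = DBI->connect('dbi:SQLite:dbname=inventory.db', '', '',
                           { RaiseError => 1, AutoCommit => 1 });

    $dbh->do('CREATE TABLE IF NOT EXISTS product (id INTEGER PRIMARY KEY, name TEXT)');

    # Concurrent writers are serialized by SQLite itself -- no hand-rolled lock files.
    $dbh->do('INSERT INTO product (name) VALUES (?)', undef, 'widget');

    my $rows = $dbh->selectall_arrayref('SELECT id, name FROM product');
    print "$_->[0]: $_->[1]\n" for @$rows;

    $dbh->disconnect;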
Arjen
Update: Tie::File is a good idea even if the information isn't always on the same line. If you change a line (array element), the file automatically gets rewritten. So instead of reading the file in, searching, replacing, and writing it back out, you just change an array element and the rest happens automagically.
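For example, a search-and-replace sketch (untested; config.txt and the port= lines are invented):

    use strict;
    use warnings;
    use Tie::File;

    tie my @line, 'Tie::File', 'config.txt'
        or die "Can't tie config.txt: $!";

    # Changing the aliased element rewrites that line in the file for us.
    for (@line) {
        s/^port=\d+/port=8080/;
    }

    untie @line;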
All excellent suggestions, especially DBD::SQLite.
I'd like to make the opposite point - that people don't use RDBMS-like technology nearly enough. There are whole classes of problems that have already been solved, and their solution is the RDBMS. I may be missing something, but I don't want to implement file-locking just to get line-level locking. That's like saying "No one can read existing products from the PRODUCT_DEFINITION table while I'm adding new products to it." That's not a reasonable restriction. But that's what file-level locking requires.
RDBMS's may have a larger footprint than a simple file, but that footprint brings with it a lot of benefits. If (and this is a big if) the OP's needs will never go beyond what was requested, then MySQL or DBD::SQLite would be overkill. But, there's something really seductive about having normalization and fast searches at your fingertips. The very fact of that availability drives the mind to enter new spheres and see problems in new lights - very similar to learning about functional programming in Perl. All of a sudden, a lot of problemspaces have solutions that map better, allowing for new features and better capabilities.
That may sound really pie-in-the-sky, but I see it happen on a daily basis everywhere I go. "Oh, you mean I could do XYZ that way?? Wow!" is a statement I hear a lot, solely because I showed them a feature in Excel or provided a 10-line script in Perl.
I may be unique, but for me, realizing the data-organization capabilities of an RDBMS was probably the most influential "Aha!" moment of my programming life. It's such a rich solutionspace that 5 years later, I'm still barely ankle-deep into the possibilities.
It may be overkill for this particular problem, but that may be an acceptable tradeoff for the capabilities it provides to solve problems the OP never even classified as problems because they seemed intractable.
------
We are the carpenters and bricklayers of the Information Age.
The idea is a little like C++ templates, except not quite so brain-meltingly complicated. -- TheDamian, Exegesis 6
Please remember that I'm crufty and crochety. All opinions are purely mine and all code is untested, unless otherwise specified.