in reply to Re^4: Trying to optimize reading/writing of large text files.
in thread Trying to optimize reading/writing of large text files.
Re^6: Trying to optimize reading/writing of large text files.
by Marshall (Canon) on Jan 23, 2012 at 06:48 UTC
If performance matters, this is always a good idea! For what you want to do, a "read lock" on LOG means essentially nothing; you need an exclusive lock. There is no need to get any kind of lock on the temp file; it should be a unique file anyway, and if it is unique and for your own access, nobody else is going to touch it. You haven't explained much (actually nothing) about what LOG does in terms of IPC, other than that this file is used for Inter Process Communication.

There is a difference between "guaranteed to work all of the time" and "very high probability of working". My question about interference between flock() and rename() is still open. If the file is closed, the lock is released; you cannot hold a lock unless the file is open. You cannot rename x=>y unless y doesn't exist. If your process relies upon a "write" lock on y, this won't work (all of the time), because you have to delete y before renaming x=>y. If your OS allows x to replace an existing file y, then I'd like to see a Perl example. rename, like all file operations, can fail; check the return status.
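[Editor's note: as a minimal sketch of the Perl example asked for above, the following relies on POSIX semantics, where rename(2) is documented to replace an existing target atomically; behaviour on other platforms is exactly the open question raised here. The filenames are made up for illustration.]

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Throw-away example names, not the OP's files.
    my ($x, $y) = ('x.tmp', 'y.dat');

    open my $fh, '>', $x or die "open $x: $!";
    print {$fh} "new contents\n";
    close $fh or die "close $x: $!";

    open $fh, '>', $y or die "open $y: $!";
    print {$fh} "old contents\n";
    close $fh or die "close $y: $!";

    # y already exists, yet on a POSIX system this succeeds and y is
    # replaced atomically (see rename(2)).  rename, like all file
    # operations, can fail, so always check the return status.
    rename $x, $y or die "rename $x => $y failed: $!";

    open $fh, '<', $y or die "open $y: $!";
    print scalar <$fh>;    # prints "new contents"
    close $fh;

Worth noting: a flock() lock lives with the open filehandle (and the file it refers to), not with the name, so whether a lock on the old y still means anything after the rename is a separate question from whether the rename itself succeeds.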
by nikkimouse (Initiate) on Jan 25, 2012 at 05:38 UTC
Finally, I've done the benchmarks.
The benchmark script code is:
By the way, I've completed several heavy stress tests by launching tens of script instances at once, and there were no problems with file integrity. Every script waited until the previous instance finished working with the file. Thanks for your tip about using a flag file! Also, I realized that version #2 is more robust against hardware crashes or loss of power. Even if the HDD shuts down during a write operation, there are always two copies of the file, DAT and TMP, and the data can always be recovered from one of them.
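[Editor's note: a minimal sketch of the flag-file locking plus temp-file-and-rename pattern described above; the filenames and the pass-through rewrite loop are illustrative assumptions, since the actual script is not shown in this excerpt.]

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Fcntl qw(:flock);

    # Illustrative names; the thread does not show the real paths.
    my $dat  = 'records.dat';
    my $tmp  = "records.dat.$$.tmp";   # unique per process, so no lock needed
    my $lock = 'records.lock';         # flag file used only for locking

    # Every instance serialises on the flag file; later instances block in
    # flock() until the current one closes the handle and releases the lock.
    open my $flag, '>>', $lock or die "open $lock: $!";
    flock $flag, LOCK_EX       or die "flock $lock: $!";

    # Rewrite via the temp file.  If power is lost during the write, DAT is
    # still intact; if it is lost after the rename, the new DAT is complete.
    # Either way one good copy of the data exists at all times.
    open my $in,  '<', $dat or die "open $dat: $!";
    open my $out, '>', $tmp or die "open $tmp: $!";
    while (my $line = <$in>) {
        # ... transform $line here ...
        print {$out} $line;
    }
    close $in;
    close $out or die "close $tmp: $!";

    rename $tmp, $dat or die "rename $tmp => $dat failed: $!";

    close $flag;    # releases the lock for the next waiting instance

One common hardening step against power loss is to fsync the temp handle (for example via IO::Handle's sync method) before the rename, so the new data has actually reached the disk before the old copy is replaced.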
by Marshall (Canon) on Jan 25, 2012 at 07:04 UTC
Benchmark is a core module (meaning that it is included in all Perl installations without you having to install it yourself), which can simplify further benchmarking code. I haven't looked in detail at your new code, but Version #1 made an in-memory copy of the file; it is not surprising, and was expected, that not doing so saves memory. With 100 MB files, a 20% performance gain is also plausible (copying data around can be expensive). Data recovery is something that we didn't talk about, but if it is even remotely possible that something "can go wrong", it will eventually go wrong if you do it enough times!
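[Editor's note: a minimal sketch of driving the comparison with the core Benchmark module's cmpthese; version_1 and version_2 here are placeholder workloads standing in for the OP's actual routines, which are not shown in this excerpt.]

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    # Placeholder workloads so the sketch runs stand-alone; substitute the
    # real "in-memory copy" and "temp file + rename" routines here.
    sub version_1 {
        my @lines = map { "record $_\n" } 1 .. 100_000;  # full in-memory copy
        my $blob  = join '', @lines;
    }
    sub version_2 {
        my $count = 0;
        $count++ for 1 .. 100_000;                       # stand-in for streaming line by line
    }

    # Run each variant for about 3 CPU seconds and print a comparison table.
    cmpthese(-3, {
        'version 1 (in memory)' => \&version_1,
        'version 2 (temp file)' => \&version_2,
    });

Using a negative count tells Benchmark to time each variant for that many CPU seconds instead of a fixed number of iterations, which tends to give steadier numbers for I/O-heavy code.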