
Are you sure you want to use fixed-length records for forum posts? Do you have a small maximum length for the body of a post? If not, every record will be the size of the largest possible record, and if that size isn't small, you'll do a lot of I/O to read each message from disk, pull it through the cache, and get it where Perl can handle it. If you really think that will improve your speed (and I don't, even without benchmarking it), you need some sort of unique ID per message which you can use as an offset into the file: multiply the ID by the record size, then use sysseek and sysread on a filehandle to extract just the record you want.
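A minimal sketch of that approach, assuming a hypothetical fixed record size of 4096 bytes and NUL-padded records (pair sysseek with sysread/syswrite; the buffered seek doesn't mix safely with the sys* calls):

    use strict;
    use warnings;
    use Fcntl qw(SEEK_SET);

    use constant REC_SIZE => 4096;   # assumed maximum record size

    # Fetch the record with the given zero-based ID.
    sub read_record {
        my ($fh, $id) = @_;
        sysseek $fh, $id * REC_SIZE, SEEK_SET or die "sysseek: $!";
        defined( sysread $fh, my $record, REC_SIZE ) or die "sysread: $!";
        $record =~ s/\0+\z//;        # strip the NUL padding
        return $record;
    }

    # Store a record, padded with NULs out to REC_SIZE bytes.
    sub write_record {
        my ($fh, $id, $data) = @_;
        die "record exceeds REC_SIZE\n" if length($data) > REC_SIZE;
        sysseek $fh, $id * REC_SIZE, SEEK_SET or die "sysseek: $!";
        syswrite $fh, pack( 'a' . REC_SIZE, $data ) or die "syswrite: $!";
    }

Note how much ceremony that is just to read one post, and every post still costs a full REC_SIZE read.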

If that doesn't seem fun to you (and it's not), you could use a unique record separator between records and let Tie::File take care of reading and writing.
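Something like this, assuming a separator string (here a made-up `%%END%%` line) that no post body can contain:

    use strict;
    use warnings;
    use Tie::File;

    # Tie the posts file to an array; each element is one record.
    tie my @posts, 'Tie::File', 'posts.dat', recsep => "\n%%END%%\n"
        or die "Cannot tie posts.dat: $!";

    my $message = $posts[42];   # Tie::File reads only what it must to find record 42
    $posts[42]  = $message;     # assigning to an element rewrites that record
    untie @posts;

It's far less code, but finding record n still means scanning past the n records before it, so access is O(n).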

If this were my project, I would use DBD::SQLite, avoid all of the tedious mucking about with O(n) access, and let indexed columns get that down to O(log n) or better, all without having to process the data myself. Please consider that instead.
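For comparison, here's roughly what that looks like with a hypothetical posts table (the schema and index names are illustrative, not from the original post):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:SQLite:dbname=forum.db', '', '',
        { RaiseError => 1, AutoCommit => 1 } );

    # SQLite stores indexes as B-trees, so indexed lookups are O(log n).
    $dbh->do(<<'SQL');
    CREATE TABLE IF NOT EXISTS posts (
        id     INTEGER PRIMARY KEY,
        author TEXT NOT NULL,
        body   TEXT NOT NULL
    )
    SQL
    $dbh->do('CREATE INDEX IF NOT EXISTS posts_author ON posts (author)');

    # Fetch one post by ID; SQLite does the seeking for you.
    my $sth = $dbh->prepare('SELECT body FROM posts WHERE id = ?');
    $sth->execute(42);
    my ($body) = $sth->fetchrow_array;

No padding, no record separators, no offset arithmetic, and you get variable-length bodies and ad hoc queries for free.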