Just to throw out a possible option; I don't know if it's right for your particular needs, but I hope it helps. In the past I had a somewhat similar situation: I was doing parallel processing with forks and wanted to share data structures across the children that were too large to fit in my available RAM.
After experimenting with various IPC libraries on CPAN, I settled on Kyoto Cabinet. To my knowledge it's the fastest DBM out there; it's written in C++ and fully supports a multithreaded environment. From the docs:
Functions of API are reentrant and available in multi-thread environment. Different database objects can be operated in parallel entirely. For simultaneous operations against the same database object, rwlock (reader-writer lock) is used for exclusion control. That is, while a writing thread is operating an object, other reading threads and writing threads are blocked. However, while a reading thread is operating an object, reading threads are not blocked. Locking granularity depends on data structures. The hash database uses record locking. The B+ tree database uses page locking.
In order to improve performance and concurrency, Kyoto Cabinet uses such atomic operations built in popular CPUs as atomic-increment and CAS (compare-and-swap). Lock primitives provided by the native environment such as the POSIX thread package are alternated by own primitives using CAS.
Download, build, and install the core library from source, then the Perl API bindings. There are quite a few options for how it gets built, so check out ./configure --help first.
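As a rough sketch of what using it from Perl looks like once the bindings are installed (the file name casket.kch and the key/value pair are just examples, not anything from the thread), a minimal set/get round trip against the hash database might be:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use KyotoCabinet;   # the Perl API bindings built from kyotocabinet-perl

# Open (or create) a file hash database; the .kch suffix selects the
# hash database type in Kyoto Cabinet's polymorphic API.
my $db = KyotoCabinet::DB->new;
$db->open('casket.kch', $db->OWRITER | $db->OCREATE)
    or die 'open error: ' . $db->error;

# Store and fetch a record; per the docs quoted above, a writing
# thread holds the record's rwlock exclusively while readers share it.
$db->set('foo', 'hello') or die 'set error: ' . $db->error;
my $value = $db->get('foo');
print "$value\n";   # prints "hello"

$db->close or die 'close error: ' . $db->error;
```

Since the data lives in a file rather than in each child's address space, forked workers can each open the same database and coordinate through it instead of duplicating the structure in RAM.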
Hope it is fast enough for your needs; it's not as fast as RAM, but it solves a lot of other problems.
In reply to Re: Sharing large data structures between threads
by hermida
in thread Sharing large data structures between threads
by Anonymous Monk