Re: Shared Memory Cache or Database?

by Marshall (Canon)
on Dec 07, 2020 at 18:23 UTC


in reply to Shared Memory Cache or Database?

I don't know what the performance characteristics of something like this would be, but SQLite does have the ability to run with a totally in-memory DB. One limitation of SQLite is that writing requires acquiring an exclusive lock on the whole DB, while multiple read operations can proceed in parallel. It sounds like you have a single writer and maybe 8 or so readers, so on the surface the "slow writes" issue doesn't sound like a show-stopper.
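For reference, a minimal sketch of opening an in-memory SQLite DB from Perl via DBI/DBD::SQLite; the "cache" table and its columns are just placeholders for whatever you would store:

    use strict;
    use warnings;
    use DBI;

    # ":memory:" tells SQLite to keep the entire DB in RAM;
    # it vanishes when the handle is disconnected.
    my $dbh = DBI->connect(
        "dbi:SQLite:dbname=:memory:", "", "",
        { RaiseError => 1, AutoCommit => 1 },
    );

    # Placeholder schema for illustration only.
    $dbh->do("CREATE TABLE cache (key TEXT PRIMARY KEY, value TEXT)");

One caveat I believe applies: a :memory: DB is normally private to the connection that created it, so if your readers are separate processes they won't see it. A DB file (perhaps on an SSD, more on that below) does not have that problem.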

I haven't personally used a completely in-memory SQLite DB, but one SQLite feature that I have experimented with is varying SQLite's memory footprint dynamically. I was very surprised to learn that this was possible, and playing with it confirmed that it actually works. In my test DB, for a complicated operation like creating multiple indices on a table of 1M+ records, I ran the memory up to say 500MB, which made a huge improvement in execution time. Afterwards, I dropped the memory usage back down. Perl itself cannot reduce its memory footprint (to my knowledge), but SQLite as used by Perl can do so.
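If you want to try the same experiment, the knob I used was the cache_size pragma; if I recall the docs correctly, a negative value means size in KiB rather than pages. The table and index names below are made up:

    # Grow SQLite's page cache to roughly 500MB for the expensive work...
    $dbh->do("PRAGMA cache_size = -500000");    # negative => KiB, not pages

    # ...do the heavy lifting (names here are hypothetical)...
    $dbh->do("CREATE INDEX idx_name ON big_table(name)");
    $dbh->do("CREATE INDEX idx_date ON big_table(date)");

    # ...then give the memory back.
    $dbh->do("PRAGMA cache_size = -2000");      # back to the ~2MB default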

I suppose that if you have an SSD disk and tell SQLite to use say a 1GB memory cache, the performance will be impressive and approach that of a complete in-memory DB (except for writes). Whether the performance is good enough for your application, I do not know.

Before rushing off to implement some complicated shared DB structure on your own, I would at least implement a prototype with SQLite. For the write operations, you will have to be cognizant of the performance implications of a "write" and use explicit start and end transaction statements. The number of transactions per second will be a limiting performance factor, so if you need to write to 2 tables, make sure that happens in a single transaction. The number of rows affected by an operation is not that important, but the number of transactions is.
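A sketch of what I mean, using DBI's begin_work/commit; the two tables and their columns are invented for illustration:

    my ($id, $item) = (42, "widget");    # stand-in data

    # Both inserts ride in one transaction: one journal/fsync cycle
    # instead of two.
    $dbh->begin_work;
    eval {
        $dbh->do("INSERT INTO orders    (id, item) VALUES (?, ?)",
                 undef, $id, $item);
        $dbh->do("INSERT INTO audit_log (id, note) VALUES (?, ?)",
                 undef, $id, "new order");
        $dbh->commit;
    };
    if ($@) {
        $dbh->rollback;    # either both rows land or neither does
        die "transaction failed: $@";
    }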

In summary, I would recommend that you consider making a prototype to play with, even if you plan to "throw the prototype away". There are DB mechanisms that allow you to be notified when a table changes; that may be completely adequate for your IPC needs.
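With SQLite specifically, DBD::SQLite exposes the update hook, which calls back on every INSERT/UPDATE/DELETE. The caveat, as I understand it, is that the hook only sees changes made through that same connection, so whether it fits depends on your process layout:

    # Fires for each row changed through *this* connection; it does not
    # observe writes made by other processes or connections.
    $dbh->sqlite_update_hook(sub {
        my ($action, $database, $table, $rowid) = @_;
        print "change in $database.$table (rowid $rowid)\n";
    });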

Update:
I did a bit more investigation into SQLite performance and found an informative Stack Overflow thread with C benchmarks: improve-insert-per-second-performance-of-sqlite.

This thread talks mostly about increasing row-insert speeds, but much of the discussion is also applicable to increasing transactions per second. A transaction is a very expensive thing because the DB is trying to maintain the ACID properties (Atomicity, Consistency, Isolation, Durability). A transaction requires multiple writes to the disk plus journaling, so that an incomplete transaction can be rolled back on recovery. While writing this post, my cat jumped on top of my tower and held the power button down, causing a panic shutdown of Windows. Something like that would cause an incomplete transaction! SQLite can recover from something like that. SQLite will also run faster if you "tone down" some of its ability to recover from catastrophic errors, and the OP wrote: "however if data is lost it's not an issue". The article talks about a number of those parameters. Also, as I mentioned before, an SSD will have a noticeable impact on writes because no disk rotational delays are involved.
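Concretely, the knobs that thread discusses for trading durability for speed are pragmas like these; given that lost data is acceptable here, something along these lines is a plausible starting point:

    # Don't fsync after every transaction, and keep the rollback journal
    # in RAM. A power cut (or a cat on the power button) can now cost you
    # the DB, but the OP said lost data is not an issue.
    $dbh->do("PRAGMA synchronous  = OFF");
    $dbh->do("PRAGMA journal_mode = MEMORY");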

I presume that you are not trying to run this on some wimpy cheap laptop and that buying an SSD dedicated to this project is no big deal (let the O/S keep its own drive). You will have to do some benchmarking and experimentation on your system to see exactly what is or is not possible. The indexing of the database also matters quite a bit; there are lots of variables. However, it appears to me that an SQLite prototype is completely feasible (the app is in the range of a few dozen transactions per second). You would learn a lot from that exercise. Also consider that having an actual DB file, vs a totally in-memory one, may have some advantages for debugging and monitoring/tweaking the application.
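A trivial way to start that benchmarking is just timing batched inserts with Time::HiRes; the row count and the "cache" table from the earlier sketch are placeholders:

    use Time::HiRes qw(time);

    my $rows = 10_000;    # placeholder workload
    my $sth  = $dbh->prepare("INSERT INTO cache (key, value) VALUES (?, ?)");

    my $t0 = time;
    $dbh->begin_work;
    $sth->execute("key$_", "value$_") for 1 .. $rows;
    $dbh->commit;
    printf "%d rows in %.3fs (%.0f rows/s)\n",
           $rows, time - $t0, $rows / (time - $t0);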
