in reply to Re: array overhead
in thread array overhead

I do not have a process whose memory requirements grow without bound. It is looking at stock market data within a given window of time: data newer than that window hasn't been read yet, and data older than that window is removed automatically. TIEing the arrays is not an option, because the whole point of the cache is to avoid disk access. A DB solution is *already* in play - that's where the data initially comes from; the cache exists to avoid repeatedly loading the same data from the database.
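The sliding-window eviction described above can be sketched roughly as follows (a Python sketch; the class name and timestamps are illustrative, not from the actual code):

```python
from collections import deque

class WindowCache:
    """Keeps (timestamp, value) pairs no older than `window` seconds."""
    def __init__(self, window):
        self.window = window
        self.data = deque()          # oldest entries sit at the left end

    def add(self, ts, value):
        self.data.append((ts, value))
        self._evict(ts)

    def _evict(self, now):
        # Drop entries that have aged out of the window.
        while self.data and self.data[0][0] < now - self.window:
            self.data.popleft()

cache = WindowCache(window=60)
cache.add(0, "tick A")
cache.add(50, "tick B")
cache.add(100, "tick C")    # "tick A" (age 100s > 60s) is evicted here
```

Eviction happens as a side effect of each insert, so the cache's footprint is bounded by the window size times the data rate, not by total runtime.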

Replies are listed 'Best First'.
Re^3: array overhead
by Marshall (Canon) on Jan 14, 2011 at 16:21 UTC
    You have a high performance system. You are blowing data in from the stock market and sucking data out with some analysis program.

    In between those two processes is a buffer. I would certainly consider making that buffer a fixed size. How big it needs to be is determined by the burst rates of input and output.
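    As a back-of-the-envelope illustration of sizing by burst rates (the numbers here are made up for the example):

```python
# Hypothetical rates: the producer bursts at 5,000 msgs/s for 2 s,
# while the consumer drains a steady 3,000 msgs/s.
producer_burst_rate = 5_000   # msgs/s during a burst
consumer_rate       = 3_000   # msgs/s sustained
burst_duration      = 2       # seconds

# The buffer must absorb the excess (in - out) for the length of the burst.
required_capacity = (producer_burst_rate - consumer_rate) * burst_duration
print(required_capacity)      # 4000 messages, before any safety margin
```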

    You are completely lost if the consuming process cannot, on average and within a short time window, consume data faster than the producing process generates it - the system simply cannot work in the high-performance mode you envision; it will "get behind" and you will have buffer-overrun problems.

    If the buffer panics and asks for even more system resources as it gets too full, the system typically augers into the ground very quickly and crashes. How to lose data as gracefully as possible is one of the design considerations. Sorry that I can't help you more. High-performance system design is tough - real-time system design is even tougher.
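    One graceful-loss policy is to cap the buffer and silently discard the oldest items when it fills, rather than growing without bound. A Python sketch using `collections.deque` (the capacity of 3 is illustrative; Perl offers the same idea via a fixed-size array with `shift` on overflow):

```python
from collections import deque

CAPACITY = 3                       # illustrative fixed size
buffer = deque(maxlen=CAPACITY)    # appending past maxlen drops the oldest item

for tick in ["t1", "t2", "t3", "t4", "t5"]:
    buffer.append(tick)            # t1 and t2 are quietly discarded

print(list(buffer))                # ['t3', 't4', 't5']
```

    Dropping the oldest data first suits a time-windowed cache, since the oldest entries are the ones that would age out soonest anyway.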

Re^3: array overhead
by DStaal (Chaplain) on Jan 06, 2011 at 21:12 UTC

    It's hard to give any specific suggestions without a better idea of your code, but I did want to point out one option that hasn't been mentioned: SQLite offers in-memory databases. It might be worthwhile as a cache, in some cases.
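    For reference, an in-memory SQLite database is selected with the special filename `:memory:` - shown here via Python's `sqlite3` module; in Perl the DBI connection string `dbi:SQLite:dbname=:memory:` does the same. The table and data are illustrative:

```python
import sqlite3

# ":memory:" creates a database that lives only in RAM - no disk access.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quotes (ts INTEGER, symbol TEXT, price REAL)")
conn.execute("INSERT INTO quotes VALUES (1, 'XYZ', 10.5)")
row = conn.execute("SELECT price FROM quotes WHERE symbol = 'XYZ'").fetchone()
print(row[0])   # 10.5
conn.close()
```

    The database disappears when the connection closes, so it behaves like a cache with SQL query support rather than a persistent store.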