in reply to Remaining Memory on a system

Check perldoc perlvar for $^M and ask your sysadmin to configure more swap, but the better course of action is to figure out what about your approach is causing you to chew up all of your RAM to begin with.
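
For reference, a minimal sketch of what using $^M looks like. Note that this only does anything if your perl was built with -DPERL_EMERGENCY_SBRK and uses Perl's own malloc; on a stock build the assignment is effectively a no-op. The billion-element list is just a contrived way to exhaust memory:

    # Reserve a 64K emergency pool up front (requires -DPERL_EMERGENCY_SBRK).
    $^M = 'a' x (1 << 16);

    # With the pool in place, running out of memory becomes a trappable die,
    # so an eval gets a chance to clean up instead of perl aborting outright.
    eval { my @huge = (0) x 1_000_000_000 };
    warn "Ran out of memory, cleaning up: $@" if $@ =~ /Out of memory/;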

Replies are listed 'Best First'.
Re: Remaining Memory on a system
by Abigail-II (Bishop) on Aug 29, 2003 at 15:39 UTC
    Check perldoc perlvar for $^M

    Yes, and? $^M is only of use after you have run out of memory, a situation the OP wants to avoid. Furthermore, using $^M only means you run out of memory sooner, since Perl allocates the emergency pool up front the moment $^M is assigned to.

    ask your sysadmin to configure more swap

    This was already dismissed by the OP, who said that 2 GB is enough.

    Abigail

RDBMS vs. RAM (was: Remaining Memory on a system)
by blue_cowdawg (Monsignor) on Aug 29, 2003 at 15:40 UTC

        If you've got to keep intermediary results, consider writing them out to a DBM file or to an RDBMS rather than keeping everything in RAM.

    In case anyone hasn't guessed... I'm very interested in this thread because I am running into similar problems with some code that I'm writing. I'm not running out of memory, but I am noticing that the Perl structures I am storing things in are getting very large and unwieldy, and the RSS, when tracked, is getting HUGE. Gee... I wonder just how many Perl Monks and non-PM Perl coders are getting sucked into virus remediation right now.

    All that aside: I have thought about using DBMs and/or an RDBMS to store intermediate results for my scripts as well. The problem with that approach is that you tend to lose performance (especially using DBI) due to the overhead of communicating with the database, and the scripts I'm writing are too slow already.
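    That said, for anyone who wants to try it, spilling a hash to disk is only a couple of lines. A minimal sketch using DB_File (any DBM module would do; the filename here is made up):

        use strict;
        use warnings;
        use Fcntl;
        use DB_File;

        # Tie the hash to a disk file: it behaves like a normal hash,
        # but the data lives on disk instead of in RAM.
        tie my %results, 'DB_File', 'intermediate.db',
            O_RDWR | O_CREAT, 0644, $DB_HASH
                or die "Cannot tie intermediate.db: $!";

        $results{$_}++ for qw(foo bar foo);   # accumulate as usual

        untie %results;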

    Of course, as the OP pointed out indirectly, the problem with keeping everything in memory is that you can conceivably run out of it when your data structures get REALLY huge due to the amount of data you are processing.

        Break your input data set into smaller chunks if possible, process the smaller chunks, then use those results to produce a final aggregate result.
    This approach works OK too, if your data "cooperates" and lines itself up nicely for you the way you want it. However, when you are processing logs, you may have related events that you want to track that are many, many lines apart in the log files.
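    One pattern that sometimes helps there is to correlate on the fly and flush as you go: keep only the in-flight items in a hash and delete each one the moment it completes, so memory tracks the number of open items rather than the size of the log. A sketch against a made-up log format:

        use strict;
        use warnings;

        my %open;   # id => partial aggregate for items still in flight

        while (my $line = <>) {
            # Hypothetical format: "<id> START", "<id> EVENT ...", "<id> END"
            my ($id, $what) = $line =~ /^(\S+)\s+(START|EVENT|END)/ or next;

            if    ($what eq 'START') { $open{$id} = { events => 0 } }
            elsif ($what eq 'EVENT') { $open{$id}{events}++ if $open{$id} }
            elsif ($what eq 'END' and my $agg = delete $open{$id}) {
                # Emit the aggregate and free its memory immediately.
                print "$id: $agg->{events} events\n";
            }
        }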

    I know I'm not offering a solution here, but I can feel smellysocks's pain: I'm trying to solve some of the same problems and haven't found any good answers either.


    Peter L. Berghold -- Unix Professional
    Peter at Berghold dot Net
    Chat Stuff: AIM:  redcowdawg
    Yahoo: blue_cowdawg
    Cowdawg Philosophy:  Seize the Cow! Bite the Day!
    Clever Quip:  Nobody expects the Perl Inquisition!
    Non-Perl Passion:   Dog trainer, dog agility exhibitor, brewer of fine Belgian style ales. Happiness is a warm, tired, contented dog curled up at your side and a good Belgian ale in your chalice.