in reply to Allocating more memory to perl process...

Perl's memory allocation system is very different from Java's. There is no limit on the amount of memory you can allocate from within perl beyond whatever the operating system or the underlying memory allocator imposes.

Generally you can count on getting at least 1GB, usually 2GB, and on a big machine running a perl built without perl's own malloc (which has a 2GB limitation) you should be able to use all the memory your system has available.

There are some bits of perl that require contiguous memory, so you may find yourself facing out-of-memory situations before you actually run out of memory, but these are reasonably uncommon, and usually show up with exceedingly large arrays, or hashes with an extraordinary number of keys in them. (Which does happen from time to time. Or $time[2*time()], if you'd rather :)

Are you having memory issues? If so, details of OS version, perl version, and some symptoms might help figure out what's wrong.

Re: Re: Allocating more memory to perl process...
by Bamafan (Initiate) on Nov 13, 2002 at 00:59 UTC
    Thanks Elian

    As I said above, memory usage is definitely the issue here. My wimpy 250 MB of memory is not cutting it here and the process crashes out on me. So my options are to get more memory or to reimplement the algorithm in an iterative fashion. Actually, there are more options than those, but I'd rather not go there. ;)

    Bamafan
      Well, there's always more swap, but that's an option of limited utility. Perl data structures are reasonably large (I'll put in the obligatory plug for Devel::Size here) and it's pretty easy to chew up a lot of memory quickly.
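      For example, here's a quick sketch of what Devel::Size can tell you about a nested structure (Devel::Size is a CPAN module, not core, so you may need to install it; the hash here is just an illustrative stand-in for your real data):

```perl
use strict;
use warnings;
use Devel::Size qw(size total_size);   # CPAN module, not in the core distribution

# An illustrative structure: a hash of 1000 array refs, each holding ten zeroes.
my %h = map { $_ => [ (0) x 10 ] } 1 .. 1000;

# size() reports only the top-level hash itself; total_size() follows the
# references and reports the memory consumed by the whole structure.
printf "shallow size: %d bytes\n", size(\%h);
printf "total size:   %d bytes\n", total_size(\%h);
```

      The gap between those two numbers is usually an eye-opener; perl's per-SV overhead means a structure like this takes far more memory than the raw data would suggest.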

      An iterative algorithm is probably more in order if you're blowing memory. Or, if this is a program that works on the same set of data over multiple runs, you might want to consider something more persistent and less memory-hungry, such as a database, so you can connect and process only the data you need.
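      The simplest version of that idea is just streaming: process one record at a time instead of holding them all in memory at once. A minimal sketch, with the DATA section standing in for a large input file:

```perl
use strict;
use warnings;

# Stream records one at a time; memory stays flat no matter how big the
# input is, because only the current line is ever held in $line.
my $total = 0;
while (my $line = <DATA>) {
    chomp $line;
    $total += length $line;   # stand-in for whatever per-record work you do
}
print "total chars: $total\n";

__DATA__
alpha
beta
gamma
```

      Contrast that with slurping the whole file into an array first (my @lines = <$fh>), which needs memory proportional to the file size before you've processed a single line.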

      (Or, if this is math-heavy, consider something like PDL, which can represent a lot of numeric data densely.)

      Are you sure you've implemented the recursive algorithm right in the first place? The fact that you say you're not moving that much data around -- yet you have recursion more than 100 levels deep -- makes me suspicious that you might inadvertently have created the recursive equivalent of an infinite loop. 250MB is actually quite a lot of memory for most text-ish material.

      One way to get a sense of what's going on in your recursion is to print the values of arguments each time the sub is entered. You may discover that your base case isn't returning properly, or that you're not decomposing the problem the way you thought you were, and thus not making progress with each recursive call.
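      A minimal sketch of that tracing technique, using a hypothetical recursive sub (factorial stands in for your real recursion) and a crude depth guard:

```perl
use strict;
use warnings;

my $depth = 0;

# Instrument the sub's entry so you can watch the argument shrink toward
# the base case. If it doesn't shrink on each call, you've found your
# runaway recursion.
sub process {
    my ($n) = @_;
    $depth++;
    warn "  " x $depth, "process($n) at depth $depth\n";
    die "runaway recursion?\n" if $depth > 100;   # crude safety net
    my $result = $n <= 1 ? 1 : $n * process($n - 1);
    $depth--;
    return $result;
}

print process(5), "\n";   # prints 120
```

      The warn output goes to STDERR, so it won't mix with your program's normal output, and the depth guard turns a slow death-by-memory into an immediate, diagnosable die.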

      Apologies if this is obvious stuff that you've tried already, but I'd be wary of jumping to the conclusion that this is a perl or system issue before you're confident that the algorithm itself is correct.

              $perlmonks{seattlejohn} = 'John Clyman';