Re: Allocating more memory to perl process...
by FamousLongAgo (Friar) on Nov 12, 2002 at 23:30 UTC
As far as I know, Perl on Linux, BSD and OS X systems will happily allocate memory until the process gets terminated by the operating system after nearing 100% usage. I have 1.5 GB of RAM and have had perl scripts go up to 90% memory usage without requiring any special command-line switch.
Could you provide more information about your operating system, and why you aren't getting the memory you need? Part of Perl programming is supposed to be never worrying about memory until you run out, but these issues are heavily OS-dependent.
I'm implementing a recursive subroutine to find all the permutations of elements in a list of lists. You can look here to find out more about that subroutine.
Unfortunately, with the way the subroutine is implemented, the interpreter complains when a certain recursion depth is reached. Inside the debugger, it exits when the 100th level of recursion is hit ("100 levels deep in subroutine calls!" is the error message).
It's not like the elements I'm using are particularly memory intensive. A conservative estimate would be that 2,000 characters are being pushed around at most at any one time. So perhaps it's actually a limitation on the size of the call stack?
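For concreteness, here is a cut-down sketch of that shape of recursion (an assumed structure, not the actual subroutine from the linked node): pick one element from the first list, then recurse on the remaining lists, so the depth grows with the number of sublists.

    # Minimal sketch (assumed structure, not the original code):
    # pick one element from the first list, recurse on the rest.
    sub combine {
        my ($picked, @lists) = @_;
        if (!@lists) {                 # base case: one full selection built
            print "@$picked\n";
            return;
        }
        my ($first, @rest) = @lists;
        combine([@$picked, $_], @rest) for @$first;   # one level per sublist
    }

    combine([], [1, 2], ['a', 'b'], ['x', 'y']);   # prints 8 combinations

Under this decomposition, hitting 100 levels would mean on the order of 100 sublists.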
I'm running under Linux (one of the most recent ones, can't recall the exact version number) with 250 MB of RAM and generally 190 MB free at any moment.
According to the perl manpage, "Recursion is of unlimited depth."
Perhaps your problem is with the debugger? Perl will generate a "Deep recursion" warning when you get 100 levels deep if you have warnings turned on, but it shouldn't die on you. What happens when you run the script without the debugger?
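A quick way to see the warning in isolation, and the standard lexical escape hatch (the sub here is just a stand-in):

    use warnings;

    sub countdown {
        my ($n) = @_;
        return if $n <= 0;
        countdown($n - 1);
    }

    countdown(150);    # warns: Deep recursion on subroutine "main::countdown"

    {
        no warnings 'recursion';    # lexically silence the depth-100 warning
        countdown(150);             # same call, no complaint
    }

The debugger stops separately because of its own depth setting, $DB::deep (default 100); I believe you can raise it from your ~/.perldb file, e.g. $DB::deep = 500;.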
I agree that the problem isn't likely to be memory usage; you can run the program while watching the output from top sorted by memory to confirm that.
Could you post some sample data so we can try to reproduce your error?
Usually when you bomb out of a deep recursion, you have reached your stack size limit. If you are on a Unixy system, use ulimit -s or ulimit -a to see your stack size limit. If you are lucky, you can increase it yourself; if not, you'll have to tweak the kernel-imposed limit.
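You can also peek at (and sometimes raise) the limit from inside Perl with the CPAN module BSD::Resource (an assumption that you have it installed; it isn't core):

    use BSD::Resource qw(getrlimit setrlimit RLIMIT_STACK);

    my ($soft, $hard) = getrlimit(RLIMIT_STACK);
    print "stack limit: soft=$soft hard=$hard (bytes; -1 means unlimited)\n";

    # Raising the soft limit up to the hard limit needs no special privileges.
    setrlimit(RLIMIT_STACK, $hard, $hard)
        or warn "could not raise stack limit\n";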
CU Robartes-
Re: Allocating more memory to perl process...
by Elian (Parson) on Nov 13, 2002 at 00:41 UTC
Perl's memory allocation system is very different from Java's: there is no limit placed on the amount of memory you can allocate from within perl beyond whatever the operating system or memory allocator imposes.
Generally you can count on being able to get to at least 1G, usually 2G, and on a big machine with a perl that doesn't use the perl malloc (which has a 2G limitation) you should be able to use all the memory your system has available to it.
There are some bits of perl that require contiguous memory, so you may find yourself facing out-of-memory situations before you actually run out of memory, but these are reasonably uncommon, and usually show up with exceedingly large arrays, or hashes with an extraordinary number of keys in them. (Which does happen from time to time. Or $time[2*time()], if you'd rather :)
Are you having memory issues? If so, details of OS version, perl version, and some symptoms might help figure out what's wrong.
Thanks Elian
As I said above, memory usage is definitely the issue here. My wimpy 250 MB of memory is not cutting it here and the process crashes out on me. So my options are to get more memory or to reimplement the algorithm in an iterative fashion. Actually, there are more options than those, but I'd rather not go there. ;)
Bamafan
Well, there's always more swap, but that's an option of limited utility. Perl data structures are reasonably large (I'll put in the obligatory plug for Devel::Size here) and it's pretty easy to chew up a lot of memory quickly.
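For instance (assuming Devel::Size from CPAN is installed):

    use Devel::Size qw(size total_size);

    my @lol = ([1 .. 1000], ['a' .. 'z'], [map { $_ ** 2 } 1 .. 500]);

    print "container alone:  ", size(\@lol), " bytes\n";
    print "with its contents: ", total_size(\@lol), " bytes\n";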
An iterative algorithm is probably more in order if you're blowing memory. Or, if this is a program that works on the same set of data over multiple runs, you might want to consider something more persistent and less memory-hungry, such as a database that you can connect to, processing only the data you need (see the sketch below).
(Or, if this is math-heavy, consider something like PDL, which can represent a lot of numeric data densely.)
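To make the iterative suggestion concrete, here's an "odometer"-style sketch over a list of lists (the names are illustrative, not from the thread). It keeps one counter per sublist instead of one stack frame per level of recursion:

    # Odometer-style iteration over the cartesian product of a list of lists:
    # one index per sublist, incremented like digits on an odometer.
    sub product_iter {
        my @lists = @_;
        my @idx   = (0) x @lists;
        while (1) {
            # Handle the current combination immediately instead of
            # collecting them all, which keeps memory use flat.
            print join(' ', map { $lists[$_][ $idx[$_] ] } 0 .. $#lists), "\n";
            my $i = $#lists;
            while ($i >= 0 && ++$idx[$i] > $#{ $lists[$i] }) {
                $idx[$i--] = 0;     # this digit rolled over; carry leftward
            }
            return if $i < 0;       # the leftmost digit rolled over: done
        }
    }

    product_iter([1, 2], ['a', 'b'], ['x', 'y']);   # prints 8 combinations

Handling each combination as it's produced is the point: the full result set never needs to exist in memory at once.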
Are you sure you've implemented the recursive algorithm right in the first place? The fact that you say you're not moving that much data around -- yet you have recursion more than 100 levels deep -- makes me suspicious that you might inadvertently have created the recursive equivalent of an infinite loop. 250 MB is actually quite a lot of memory for most text-ish material.
One way to get a sense of what's going on in your recursion is to print the values of the arguments each time the sub is entered. You may discover that your base case isn't returning properly, or that you're not decomposing the problem the way you thought you were, and thus not making progress with each recursive call.
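Something along these lines (walk is just a stand-in name); using local on a package variable makes the depth counter unwind with the calls:

    use warnings;
    our $depth = 0;

    sub walk {
        my @args = @_;
        local $depth = $depth + 1;    # restored when this call returns
        warn '  ' x $depth, "walk(@args) at depth $depth\n";
        return if !@args;             # base case -- check it's ever reached
        walk(@args[1 .. $#args]);     # recurse on a strictly smaller problem
    }

    walk(qw(a b c));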
Apologies if this is obvious stuff that you've tried already, but I'd be wary of jumping to the conclusion that this is a perl or system issue before you're confident that the algorithm itself is correct.
$perlmonks{seattlejohn} = 'John Clyman';