in reply to memory not freed after perl exits. Solaris.
People,
Firstly, THANK-YOU for all your comments.
I half expected a "not another guy blaming memory not freeing" type response.
There were a number of replies and it would be impractical to reply to each, so at the risk of breaking the threading I have gathered all my responses here.
Generally I've repeated a summary of each suggestion, followed by my response.
Now if I can sort out the formatting....
Post the code:
There are about 1000 lines and (almost by definition) I'm not sure which sections to post.
Posting the lot might be asking a lot of your collective patience.
Rerun on a local disk.
I can get the code onto a local disk and run it.
I only have 5 GB locally (on /tmp) and about 0.5 TB of log files. I'll try to get a fragment of the log files onto /tmp and rerun.
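For that rerun I'll probably grab a chunk of one log with something like this (the paths are made up, and the size is just chosen to fit comfortably in /tmp):

    # copy the first ~100 MB of one log onto /tmp for a test run (paths are made up)
    dd if=/data/logs/big.log of=/tmp/fragment.log bs=1024k count=100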
I'm not forking.
Only 2 zombies afterwards; I'm not sure how many there were beforehand. I'll rerun and see how many there are after another run, but note that this count is after several runs.
The script ends normally, not with ^C. It prints a message right near the end and closes the output file correctly (I think!). I often run it in the background with ^Z followed by "bg", but not always. There is no evidence of it in the ps -aux output.
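In case the -aux form is part of the problem (Solaris's /usr/bin/ps is SysV-style; -aux is the BSD form in /usr/ucb/ps), perhaps the SysV form is a better check; the bracketed character just keeps grep from matching its own process:

    ps -ef | grep '[9]99-workHard'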
Free memory isn't decreasing, just fragmenting.
I don't know. I'm using vmstat as I described. Is there a better way to assess free memory? It seems to agree with the header in "top".
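For reference, it is roughly this sort of thing, reading the "free" column under the "memory" heading (reported in KB on Solaris):

    vmstat 5 3     # three samples at 5-second intervals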
Am I using "shared name memory"?
Not as far as I know; I'm letting perl handle everything in that regard. I just keep adding elements to my hash; sometimes I store it and retrieve a new one, and in between I undef the hash name (I don't know if that is necessary or helps). I also open a lot of files and read from them; I only write to one or two files. I'm pretty sure I'm closing all the files that I open (which I wasn't doing a while ago).
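To give a clearer picture without posting all 1000 lines, the hash handling is roughly like this (file names invented, and Storable is only a stand-in for however the real script saves the hash):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Storable qw(store retrieve);    # stand-in only, not necessarily what the real script uses

    my %data;
    open my $log_fh, '<', '/tmp/fragment.log' or die "open: $!";   # made-up path
    while (my $line = <$log_fh>) {
        my ($key, $value) = split ' ', $line;
        $data{$key} = $value;           # just keep adding elements to the hash
    }
    close $log_fh;                      # pretty sure everything opened gets closed now

    store \%data, '/tmp/saved.hash';    # "store it"
    undef %data;                        # undef the hash name in between
    my $next_ref = retrieve('/tmp/saved.hash');    # "retrieve a new one"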
Am I storing data on /tmp?
I'm not storing data on /tmp, but I do send the STDOUT of the programme to a log file on /tmp.
I don't know about the nature of the /tmp filesystem:
    $ df -k .
    Filesystem            kbytes    used    avail  capacity  Mounted on
    swap                 7077352 1652448  5424904       24%  /tmp
Does that mean that it is of type "swap"? Sorry for my ignorance.
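If it helps, I think either of these should show the actual filesystem type of /tmp (I'd expect tmpfs, which Solaris backs with swap), assuming I've remembered the commands correctly:

    df -n /tmp                      # Solaris df: print the mount point and FSType only
    awk '$2 == "/tmp"' /etc/mnttab  # second field is the mount point, third the fstype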
Run perl -E"$x='x' x 2**31" to flush memory from the cache.
If I put it in a file and run it, I can use 2**29; if I try 2**30 I get:
    panic: string extend at ./999-workHard.pl line 2.
2**29 takes about 4 seconds.
If I run it from the command line:
    /usr/bin/perl -e"$x='x' x 2**29"
    syntax error at -e line 1, near "="
    Execution of -e aborted due to compilation errors.
I don't know how to fix either the 2**29 limit or the compilation error when running from the command line.
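My guess (and it is only a guess) is that the shell is expanding $x inside the double quotes before perl ever sees it, so perl is handed ='x' x 2**29, which would explain the syntax error near "=". Single quotes should stop the shell touching it:

    /usr/bin/perl -e '$x = "x" x 2**29'    # single quotes keep the shell from expanding $x

As for the panic at 2**30, I wonder whether this perl is a 32-bit build that simply can't grow a single scalar that large; the following should say (4 means 32-bit pointers, 8 means 64-bit):

    /usr/bin/perl -V:ptrsize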
I have no background jobs running.
ps -au username
1 defunct process. No indication of process size.
The name of the programme I'm running isn't in the list.
No ramdisks on this machine.
So I wrote the OP yesterday, and when I log on today the "free" in top and vmstat has increased (from ~500 MB to 2.6 GB), so maybe it is just the file cache being released over time.
Can someone help me with the "$x='x' x 2**31" thing? This seems like the most promising lead, or is it a red herring?
On a completely separate note, how do I do the formatting sensibly?
Thanks again to everybody.