Re^3: System call doesn't work when there is a large amount of data in a hash
by Nicolasd (Acolyte) on Apr 29, 2020 at 10:53 UTC
I know I could have written it better, it's a bit of a mess, but it works great, and that's the most important thing.
And I really need that hash, because I need to access that data all the time; a database would be too slow.
Which file is using the : character?
Could it be that the system call duplicates everything that is in virtual memory to start the child process?
If that is the case, I guess I just can't do system calls. Any idea if there is another way?
Without seeing your code, it will be very hard to suggest how to make it do what you want.
You have ruled out all of the obvious things that would make this easier, because you say that you really need the hash.
Ideally, you show us some minimal code that reproduces the problem so that we can run it ourselves. For example, the following could be a start:
#!perl
use strict;
use warnings;

my $memory_eaten = 8 * 1024 * 1024 * 1024;   # 8 GB, adjust to fit
my %memory_eater = (
    foo => scalar( " " x $memory_eaten ),
);

my $cmd = "foo bar";
system($cmd) == 0
    or die "Couldn't launch '$cmd': $!/$?";
Updated: Actually make the hash eat memory by creating a long string
Thanks for this suggestion, it helped a lot!
I tried this script and it worked fine on my laptop: I put 12 GB of the 16 GB available into the hash and the system call still works.
I did get varying results on CentOS 7 (450 GB of RAM). I also monitored it with top to see if there was a memory increase.
20 GB, 50 GB, 100 GB, 150 GB and 200 GB all worked fine, and I didn't see any memory increase either.
But with 230 GB (more than half of the available memory) I ran out of memory ("Cannot allocate memory"), so do I need the same amount of free memory as there is in the hash? And only on CentOS then?
I also made the system call loop 10 times, and the bigger the hash, the slower the system call starts.
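For anyone who wants to repeat this kind of measurement, here is a minimal sketch of such a test, reusing the string-in-a-hash trick from the example above. The /bin/true command and the command-line GiB argument are placeholders, not the original poster's script:
#!/usr/bin/env perl
use strict;
use warnings;
use Time::HiRes qw(time);

# Fill a hash with roughly $gib GiB of string data, then time ten
# system() calls to see how startup cost grows with process size.
my $gib = shift // 20;                        # hash size in GiB (placeholder)
my %memory_eater = (
    blob => scalar( ' ' x ( $gib * 1024**3 ) ),
);

for my $i ( 1 .. 10 ) {
    my $t0 = time();
    system('/bin/true') == 0
        or die "system() failed: $!/$?";
    printf "call %2d took %.3f s\n", $i, time() - $t0;
}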
Hi,
Thanks for the reply. The big hash is the most essential part, so I really can't change that.
I didn't add any code because I tried many different system calls at different locations in the large script.
A system call always worked if I put it before the hash is loaded; if I do it afterwards, it never starts.
The code you sent, does that hash take the amount of memory set in your variable $memory_eaten?
If so, that would be great to test, I will do it now.
"Which file is using the : character?"
Download the repo as a zip file and try to extract it under Windows; it'll report a bunch of problems caused by 'invalid' characters in filenames.
Hi, thanks for reporting it, but I just tried it and I don't get any warnings. Maybe because I use different zip software.
"Could it be that the system call duplicates everything that is in virtual memory to start the child process?"
In theory, fork (which is used to implement system) does exactly that. Modern kernels with virtual memory set up copy-on-write (COW) mappings instead of actually copying the entire address space, but this still (usually) requires duplicating the page tables, which for 256 GiB of data with 4 KiB pages themselves fill 512 MiB or so. Could you be bumping up against a resource limit? (Look up ulimit for more information.)
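To put a number on that estimate, here is a back-of-the-envelope sketch, assuming 4 KiB pages and 8-byte page-table entries (typical x86-64 values, not measured on the poster's machine):
#!/usr/bin/env perl
use strict;
use warnings;

# Estimate the page-table overhead a fork() has to duplicate even
# under copy-on-write, for a parent holding 256 GiB of data.
my $data_bytes = 256 * 1024**3;    # 256 GiB resident in the parent
my $page_size  = 4 * 1024;         # 4 KiB pages
my $pte_size   = 8;                # bytes per page-table entry

my $pages     = $data_bytes / $page_size;
my $pte_bytes = $pages * $pte_size;
printf "%d pages -> roughly %.0f MiB of page-table entries to copy\n",
    $pages, $pte_bytes / 1024**2;
Whether such a fork succeeds at all can also depend on the kernel's memory overcommit policy (vm.overcommit_memory on Linux), which is one plausible explanation for the "Cannot allocate memory" failures seen once the hash exceeds half of RAM.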