in reply to need help debugging perl script killed by SIGKILL

In general, when a new thread starts, it gets a private copy of each existing variable (see threads::shared). In OSes that utilise copy-on-write (e.g. Linux) the effect of this may not be immediate, although Perl does try to make private copies. It's good practice, before starting a thread (or 35 of them), to destroy all unwanted data from memory.
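Something along these lines (just a sketch; the data and the helper subs are made up for illustration):

    use threads;

    my %big_lookup = build_lookup();               # hypothetical large structure
    my @work_items = extract_work(\%big_lookup);   # keep only what the workers need

    # unload the large structure *before* spawning the threads,
    # so each thread's private copy stays small
    undef %big_lookup;

    my @thr = map { threads->create(\&worker, $_) } @work_items;
    $_->join() for @thr;

    sub worker {
        my ($item) = @_;
        # ... do the actual work on $item ...
    }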

You mention you are using Thread::Queue. Its documentation at https://perldoc.perl.org/Thread::Queue#DESCRIPTION states that:

Ordinary scalars are added to queues as they are. If not already thread-shared, the other complex data types will be cloned (recursively, if needed, and including any blessings and read-only settings) into thread-shared structures before being placed onto a queue.

As I understand it, your data going into the queue must be declared :shared; otherwise it will be duplicated and then made :shared 35 times.

bw, bliako

Re^2: need help debugging perl script killed by SIGKILL
by expo1967 (Sexton) on Mar 02, 2021 at 16:34 UTC

    Thanks for the reply. I modified my $queue->enqueue() operation to use a shared variable, but the same problem still occurs.

    When you stated "your data going into the queue must be declared :shared", how do I do that? I have been searching Google and have not found anything on how to accomplish this for enqueue operations.

      The link I posted peripherally shows how to enqueue a blessed hash (object), and threads::shared has an example of how to create a shared hash which contains other shared items. For example:

      use threads;
      use threads::shared;
      use Thread::Queue;

      my %hash;   share(%hash);
      my $scalar; share($scalar);
      my @arr;    share(@arr);
      # or
      # my (%hash, $scalar, @arr) : shared;

      $scalar = "abc";
      $hash{'one'}   = $scalar;
      $hash{'two'}   = 'xyz';
      $arr[0] = 1;
      $arr[1] = 2;
      $hash{'three'} = \@arr;

      my $q = Thread::Queue->new();    # A new empty queue
      $q->enqueue(\%hash);
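      On the worker side the queued (now shared) hash comes back out as an ordinary reference. A quick sketch, continuing from the snippet above:

          my $thr = threads->create(sub {
              while (defined(my $item = $q->dequeue())) {
                  print "one   = $item->{'one'}\n";         # "abc"
                  print "three = @{ $item->{'three'} }\n";  # "1 2"
              }
          });
          $q->end();     # no more items coming; dequeue() then returns undef once empty
          $thr->join();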

      The "pitfall" I had in mind is this:

      my $hugedata = <BIGSLURP>;
      my (%hash) : shared;
      %hash = process($hugedata);  # perhaps filtering or rearranging it into a hash
      # $hugedata = undef;         # <<< if done with it, then unload it, otherwise ...
      threads->create(...);        # ... hugedata is duplicated, %hash is not.

      Memory is not the only reason the kernel can kill your process; perhaps "too many" threads will have the same effect. So you should also find the exact messages in /var/log/messages and in the output of dmesg, as Fletch suggested. Additionally, you should measure memory usage exactly (as opposed to just observing a SIGKILL that may or may not be caused by memory). If you are on some sort of *nix, that's easy.
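      For example, on Linux you can ask the kernel about your own process via /proc (a quick sketch, Linux-specific; call it at interesting points, e.g. before and after spawning the threads):

          # print current and peak memory sizes of this process (Linux /proc only)
          sub report_mem {
              open my $fh, '<', "/proc/$$/status" or return;
              while (<$fh>) {
                  print if /^(VmSize|VmPeak|VmRSS|VmHWM):/;
              }
              close $fh;
          }
          report_mem();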

      bw, bliako