Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:
The rationale here is sharing a very large index where each packed integer refers to a specific item. Thanks to a simple bijective hashing function, ~3,000,000,000 items can be mapped to integers in the range (0..4000000000), so that each worker will easily know at which offset it should look into the shared memory to retrieve the corresponding packed piece of information for each item, e.g.:

    my $packed;
    my $n      = 1000000 - 1;
    my $offset = $n * 4;
    my $success = shmread($id, $packed, $offset, 4);

About ~1,000,000,000 fields are empty (because the item corresponding to that position is missing) and contain just 4 nul bytes (i.e. pack("N",0)).
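For concreteness, a worker-side lookup under this layout might look roughly like the sketch below; lookup_item() and hash_item() are hypothetical names standing in for whatever the real code uses, not code from the post:

    # Minimal worker-side sketch, assuming the layout described above:
    # 4 bytes per slot, big-endian 32-bit values ("N"), 0 meaning "missing".
    # hash_item() is a hypothetical stand-in for the bijective hashing function.
    sub lookup_item {
        my ($shm_id, $item) = @_;
        my $n      = hash_item($item);       # maps the item to 0..4000000000
        my $offset = $n * 4;                 # each slot occupies 4 packed bytes
        my $packed;
        shmread($shm_id, $packed, $offset, 4) or die "shmread: $!";
        my $value = unpack("N", $packed);
        return $value == 0 ? undef : $value; # 4 nul bytes => item not present
    }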
However, when in the parent process I try just to create the shared memory object, by iteratively reading 1 million bytes from the index file and copying them into the shared memory, like this:

    use warnings;
    use strict;
    use IPC::SysV qw(IPC_PRIVATE IPC_RMID S_IRUSR S_IWUSR);

    open(my $idx, "<", "$ARGV[0].idx") || die "cannot open data file\n $!";
    my $idx_size = (split(' ', `wc -c $ARGV[0].idx`))[0];
    my $idx_id = shmget(IPC_PRIVATE, $idx_size, S_IRUSR | S_IWUSR) || die "shmget: $!";
    my $offset = 0;
    foreach my $i (0 .. $idx_size/1000000) {
        my $n = "";
        read($idx, $n, 1000000);
        shmwrite($idx_id, $n, $offset, 1000000) || die "shmwrite: $!";
        $offset += 1000000;
    }
    shmctl($idx_id, IPC_RMID, 0) || die "shmctl: $!";
    close $idx;
    exit;

I always get the error "shmwrite: Bad address". This always happens when writing the 2^31st byte, so it looks like the shared memory segment Perl can handle is limited to 2 GB. However, running ipcs -m shows that Perl actually reserved a shared memory segment much larger than that (in fact, the expected 4 GB):

    $ ipcs -m

    ------ Shared Memory Segments --------
    key        shmid      owner      perms      bytes        nattch     status
    0x00000000 8454205    valerio    600        4294967296   0

I am running perl v5.34 on Ubuntu 22.04 with 32 GB of RAM. Perl here should be a 64-bit process:

    $ perl -V:archname
    archname='x86_64-linux-gnu-thread-multi';
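A stripped-down test along these lines (not from the original post, and assuming the failure depends only on the write offset rather than on the index data) makes the boundary easy to check:

    # Sketch: write one chunk just below and one chunk at the 2 GB boundary
    # of an oversized segment, to see whether the offset alone triggers EFAULT.
    use strict;
    use warnings;
    use IPC::SysV qw(IPC_PRIVATE IPC_RMID S_IRUSR S_IWUSR);

    my $size = 3 * 1024 * 1024 * 1024;   # 3 GB segment, larger than 2**31 bytes
    my $id   = shmget(IPC_PRIVATE, $size, S_IRUSR | S_IWUSR);
    defined $id or die "shmget: $!";

    my $chunk = "\0" x 1_000_000;

    # Chunk ending exactly at the 2 GB boundary: expected to succeed.
    shmwrite($id, $chunk, 2**31 - 1_000_000, 1_000_000)
        or warn "write below 2 GB failed: $!";

    # Chunk starting at the 2 GB boundary: this is where "Bad address" shows up.
    shmwrite($id, $chunk, 2**31, 1_000_000)
        or warn "write at 2 GB failed: $!";

    shmctl($id, IPC_RMID, 0) or die "shmctl: $!";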
Is this a builtin limit of perl shared memory, or is there anything that I am missing?
Thanks for your wisdom,
Valerio
Replies are listed 'Best First'.
Re: Does perl have a builtin limit to the size of shared memory segments I can write to?
by NERDVANA (Priest) on Jan 07, 2025 at 20:56 UTC
by Anonymous Monk on Jan 08, 2025 at 09:57 UTC
by Corion (Patriarch) on Jan 08, 2025 at 10:13 UTC