in reply to IPC::Shareable sometimes leaks memory segments

Just a bit of an update here. I had a lot of misconceptions and misunderstandings about the entire shared memory situation.

I made significant modifications to a copy of IPC::Shareable so it tracks resources the way I need to use them, and I'm waiting for co-auth on the distribution so I can incorporate my updates.

However, I learned I needed other significant changes to go even further, so I swapped out the complete back end for a new one (a modified version of IPC::ShareLite, which is written in C instead of pure Perl), and changed the serializer as well (Storable to Sereal; the latter is slightly faster and can handle situations Storable couldn't). This means my further changes won't be backwards-compatible, so I'll have to create a new distribution for this further work.
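For context on what the serializer actually does here: the tied hash gets flattened to a byte string before it's written to the shared memory segment, and inflated again on every read. A rough sketch of that round trip with both serializers, using their public functions (illustration only, not the actual internals of either back end):

use warnings;
use strict;

use Storable qw(freeze thaw);
use Sereal::Encoder qw(encode_sereal);
use Sereal::Decoder qw(decode_sereal);

my $data = { a => 1, c => [qw(1 2 3)], d => { z => 26, y => 25 } };

# Storable round trip: structure -> byte string -> structure
my $frozen   = freeze($data);
my $restored = thaw($frozen);

# Sereal round trip: same idea, different (and typically faster) codec
my $blob      = encode_sereal($data);
my $restored2 = decode_sereal($blob);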

I can now safely and reliably do what I set out to do: keep the data structure available persistently (i.e. run one script in a window and let it exit, then start another script in another window and have it pick the data back up as if it were read from disk), as well as have multiple independent scripts use the data simultaneously. We maintain registries of all segments and semaphores in use, and remove them as required.
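To illustrate the usage pattern I'm talking about, here's a minimal sketch using the same glue string and options hashref style as the bench script below (the new distribution's interface may end up differing slightly):

# writer.pl - run in one window, then let it exit
use warnings;
use strict;
use IPC::Shareable;

# create the segment and leave it behind on exit (destroy => 0)
tie my %data, 'IPC::Shareable', 'able', { create => 1, destroy => 0 };
$data{status} = 'written by the first script';

# reader.pl - run later, in another window
use warnings;
use strict;
use IPC::Shareable;

# attach to the same glue string; the data is still there
tie my %data, 'IPC::Shareable', 'able';
print "$data{status}\n";

# once everything is truly finished, remove the segments/semaphores
# (this cleanup is exactly what the registry tracking is meant to make reliable)
tied(%data)->clean_up_all;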

In order to facilitate all of this, I also had to remove some code and significantly modify a lot more, which definitely breaks backwards compatibility, so in the end, although I'll be updating IPC::Shareable, I'll have to release a new distribution as well.

Here are some benchmarks comparing the original version with the new version as it sits currently. For the purposes of this project, I don't need lightning speed, but I definitely need it faster than it was before (it'll be used for tracking physical hardware changes). I still have a fair amount of work to do which will make it even faster, but so far, a 243% increase is decent already. I'm only doing 30k iterations at this time because I've got a semaphore conflict I still have to fix, but the results are consistent over dozens of runs.

# cmpthese (30k iterations)

               Rate  shareable sharedhash
shareable     396/s         --       -71%
sharedhash   1356/s       243%         --

# timethese (30k iterations)

Benchmark: timing 30000 iterations of shareable, shared_hash...
  shareable: 75 wallclock secs (41.40 usr + 32.59 sys = 73.99 CPU) @ 405.46/s (n=30000)
shared_hash: 22 wallclock secs (17.10 usr +  4.57 sys = 21.67 CPU) @ 1384.40/s (n=30000)

Here's my simple current bench test file. As I said earlier in this thread, once the new software is done and released, I plan on writing a detailed blog post about what I've learned while going down this path.

use warnings;
use strict;

use Benchmark qw(:all);

use IPC::SharedHash;
use IPC::Shareable;

if (@ARGV < 1){
    print "\n Need test count argument...\n\n";
    exit;
}

my $timethis  = 0;
my $timethese = 0;
my $cmpthese  = 1;

if ($timethis) {
    timethis($ARGV[0], \&shareable);
    timethis($ARGV[0], \&sharedhash);
}

if ($timethese) {
    timethese($ARGV[0], {
        'shareable'   => \&shareable,
        'shared_hash' => \&sharedhash,
    });
}

if ($cmpthese) {
    cmpthese($ARGV[0], {
        'shareable'  => \&shareable,
        'sharedhash' => \&sharedhash,
    });
}

sub default {
    return {
        a => 1,
        b => 2,
        c => [qw(1 2 3)],
        d => {z => 26, y => 25},
    };
}

sub shareable {
    my $base_data = default();

    tie my %hash, 'IPC::Shareable', 'able', { create => 1, destroy => 1 };

    %hash = %$base_data;
    $hash{struct} = {a => [qw(b c d)]};

    tied(%hash)->clean_up_all;
}

sub sharedhash {
    my $base_data = default();

    tie my %hash, 'IPC::SharedHash', 'hash', { create => 1, destroy => 1 };

    %hash = %$base_data;
    $hash{struct} = {a => [qw(b c d)]};

    tied(%hash)->clean_up_all;
}
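I run it with the iteration count as the only argument, e.g. perl bench.pl 30000 (bench.pl simply being whatever name the file is saved under).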