STork2000 has asked for the wisdom of the Perl Monks concerning the following question:

Hi, monks. We have a big problem. We want to use BerkeleyDB to store data (as an analogue of shared memory).
We wrote an interface for this, the MyMemory.pm module:
package MyMemory;
use strict;
use Data::Dumper;
use MIME::Base64;
use BerkeleyDB;
use vars qw(@ISA @EXPORT @EXPORT_OK $VERSION);
require Exporter;
$VERSION = 0.1;
@ISA     = qw(Exporter);
@EXPORT  = qw(get_memory set_memory clear_all_memory SerializeR UnSerializeR);

my $filename = "/home/test/mem.db";
my %h;
tie %h, "BerkeleyDB::Hash",
    -Filename => $filename,
    -Flags    => DB_CREATE
  or die "Cannot open file $filename: $! $BerkeleyDB::Error\n";

sub get_memory {
    my $Key = shift;
    return UnSerializeR($h{$Key});
}

sub set_memory {
    my $Key   = shift;
    my %Value = @_;
    $h{$Key} = SerializeR(\%Value);
}

sub clear_all_memory {
    %h = {};
}

sub SerializeR {
    my $SValue = shift;
    my $SSerialized;
    if (ref $SValue eq "HASH") {
        $SSerialized = "HASH";
        while (my ($key, $val) = each %$SValue) {
            $SSerialized .= encode_base64($key) . ':' . SerializeR($val) . ';';
        }
    }
    elsif (ref $SValue eq "ARRAY") {
        $SSerialized = "ARRAY";
        foreach my $var (@$SValue) {
            $SSerialized .= SerializeR($var) . ';';
        }
    }
    else {
        $SSerialized = $SValue;
    }
    return encode_base64($SSerialized);
}

sub UnSerializeR {
    my $SSerialized = decode_base64(shift);
    my $SData;
    if ($SSerialized =~ m/^HASH/) {
        my %HData;
        my @ATMP = split /;/, substr($SSerialized, 4);
        foreach (@ATMP) {
            split /:/;
            $HData{decode_base64($_[0])} = UnSerializeR($_[1]);
        }
        $SData = \%HData;
    }
    elsif ($SSerialized =~ m/^ARRAY/) {
        my @AData;
        my @ATMP = split /;/, substr($SSerialized, 5);
        foreach my $var (@ATMP) {
            push(@AData, UnSerializeR($var));
        }
        $SData = \@AData;
    }
    else {
        $SData = $SSerialized;
    }
    return $SData;
}

1;

In the browser we get:
Software error:
BerkeleyDB Aborting: Database is already closed at /usr/local/lib/perl5/site_perl/5.8.6/mach/BerkeleyDB.pm line 1204.

We have 25-30 httpd processes (mod_perl). I think that when we call get_memory from the first process there is no error, but when we call get_memory from a second process we get the error above.

If we put the "tie %h ..." inside the get_memory and set_memory subs, we don't get the error, but then we have too many connections to BerkeleyDB.
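A minimal sketch of what we mean by that, using the same $filename and UnSerializeR as in MyMemory.pm above:

sub get_memory {
    my $Key = shift;
    tie my %h, "BerkeleyDB::Hash",
        -Filename => $filename,
        -Flags    => DB_CREATE
      or die "Cannot open file $filename: $BerkeleyDB::Error\n";
    my $Value = UnSerializeR($h{$Key});
    untie %h;    # each call opens and closes its own handle
    return $Value;
}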

Please help...

P.S."Sorry for bad English. I work on this."

Re: mod_perl & BerkeleyDB. Need help.
by perrin (Chancellor) on Feb 16, 2006 at 18:43 UTC

    You have to use locking to make this work. There is no way to have too many connections to BerkeleyDB, but without locking you will lose data. There is a code example linked from this page.
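    A minimal sketch of one way to set that up (not the linked example itself, and the environment directory /home/test/bdb below is made up): open a shared BerkeleyDB::Env with CDB locking and tie the hash through it, in each child process rather than at server startup:

    use BerkeleyDB;

    # DB_INIT_CDB gives simple multiple-reader/single-writer locking,
    # DB_INIT_MPOOL gives the shared memory pool that goes with it.
    my $env = BerkeleyDB::Env->new(
        -Home  => "/home/test/bdb",    # hypothetical; must exist and be writable
        -Flags => DB_CREATE | DB_INIT_MPOOL | DB_INIT_CDB,
    ) or die "Cannot open environment: $BerkeleyDB::Error\n";

    tie my %h, "BerkeleyDB::Hash",
        -Filename => "mem.db",
        -Env      => $env,
        -Flags    => DB_CREATE
      or die "Cannot open database: $BerkeleyDB::Error\n";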

    Also, don't write your own serializer. Use the Storable module. It's much faster and more reliable.
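    Roughly, keeping the get_memory/set_memory names from MyMemory.pm (a sketch, not a drop-in replacement):

    use Storable qw(freeze thaw);

    sub set_memory {
        my ($Key, %Value) = @_;
        $h{$Key} = freeze(\%Value);    # any nested structure becomes one byte string
    }

    sub get_memory {
        my $Key = shift;
        return defined $h{$Key} ? thaw($h{$Key}) : undef;    # hashref back, undef if absent
    }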

      I don't think so... Correct me if I'm wrong. We have mod_perl and 25 httpd processes. In MyMemory.pm we call "tie %h ..." once, so we have one connection to BerkeleyDB (IMHO). If we put "tie %h ..." in the get and set subs, don't we get many connections? My ICQ UIN is 41960352; knock knock, please. We really need help and consultation.
        I don't do chat. If you want to ask more questions, ask them here. I'm not sure why you think 25 "connections" to BerkeleyDB is a bad thing. There are no sockets and there is no server -- BerkeleyDB is a library. The only limitation should be running out of filehandles on your operating system.
Re: mod_perl & BerkeleyDB. Need help.
by Arunbear (Prior) on Feb 16, 2006 at 20:26 UTC
      Please correct me... but I don't understand why everyone sends me to read about locking. I'm not a Perl guru or a PerlMonk... but I want to learn this; that's why I'm here.

      We have a shared-memory problem. We tried many modules, but none of them satisfied us ))) (sorry for my bad English). We tried Cache::FastMmap, but we had some problems with it...

      I don't know how to describe my problems properly. But I can try to explain them over ICQ or any other chat, if you want.

        It doesn't help to just say you had problems. You need to say what the problems were.

        Since these modules appear to be difficult for you, let me recommend one that is very easy: MLDBM::Sync. It's not as fast, but it is simpler.
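        A minimal sketch of what using it looks like (the file path and keys below are just examples): MLDBM::Sync wraps every access in its own lock, and Storable handles the serialization for you.

        use Fcntl qw(:DEFAULT);
        use MLDBM qw(DB_File Storable);    # underlying DBM and serializer
        use MLDBM::Sync;                   # adds locking around each access

        tie my %cache, 'MLDBM::Sync', '/home/test/mldbm.db', O_CREAT|O_RDWR, 0640
          or die "Cannot tie: $!";

        $cache{some_key} = { name => 'value', list => [1, 2, 3] };    # stored atomically
        my $data = $cache{some_key};                                  # plain hashref back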