Lowry76 has asked for the wisdom of the Perl Monks concerning the following question:
    use DB_File;
    use Data::Dumper;

    # Takes the output directory name $annotationDirName
    # and a hash-ref of all attributes of a sequence and
    # adds it to database '$annotationDirName/annotation.dat'.
    # The given attribute hash is stored as a hash-dump.
    sub _printAnnotation {
        my ($annotationDirName, $h) = @_;

        # create new Berkeley DB
        my %database;
        tie %database, 'DB_File', "$annotationDirName/annotation.dat"
            or die "Can't initialize database: $!\n";

        my $dump = Data::Dumper->new([$h], [qw($annotation)])->Purity(1)->Dump();
        $database{$h->{'id'}} = $dump;

        untie %database;
    }
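Since the dump is written with Purity(1) under the variable name $annotation, each stored record is evidently meant to be eval'ed back into a hash-ref later. For context, a minimal read-back sketch is shown below; the name _readAnnotation and its error handling are assumptions for illustration, not part of the original code:

    use strict;
    use warnings;
    use DB_File;

    # Hypothetical counterpart to _printAnnotation: ties the same file,
    # fetches the dumped hash for a given sequence id, and eval's it back
    # into a hash-ref (Purity(1) makes the dump round-trip cleanly).
    sub _readAnnotation {
        my ($annotationDirName, $id) = @_;

        my %database;
        tie %database, 'DB_File', "$annotationDirName/annotation.dat"
            or die "Can't open database: $!\n";

        my $dump = $database{$id};
        untie %database;

        return undef unless defined $dump;

        my $annotation;   # name matches the [qw($annotation)] used when dumping
        eval $dump;       # restores the dumped structure into $annotation
        die "Failed to eval dump for '$id': $@" if $@;
        return $annotation;
    }

    # usage (illustrative): my $h = _readAnnotation($annotationDirName, $seqId);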
Replies are listed 'Best First'.
Re: DB_File/BerkeleyDB with large datafiles
by Tux (Canon) on Sep 20, 2010 at 11:26 UTC
by sundialsvc4 (Abbot) on Sep 20, 2010 at 17:23 UTC
Re: DB_File/BerkeleyDB with large datafiles
by BrowserUk (Patriarch) on Sep 20, 2010 at 12:58 UTC
by Lowry76 (Novice) on Sep 20, 2010 at 17:26 UTC
by BrowserUk (Patriarch) on Sep 20, 2010 at 17:46 UTC
by Anonymous Monk on Sep 21, 2010 at 15:57 UTC
Re: DB_File/BerkeleyDB with large datafiles
by graff (Chancellor) on Sep 21, 2010 at 00:51 UTC
by Anonymous Monk on Sep 21, 2010 at 16:08 UTC