
Log::Log4perl::Appender::DBI and SQLite

by clueless newbie (Curate)
on Dec 29, 2022 at 12:51 UTC ( [id://11149187] : perlquestion )

clueless newbie has asked for the wisdom of the Perl Monks concerning the following question:

The Village Fool mucks around with Log::Log4perl's Log::Log4perl::Appender::DBI.

I've been trying to come to grips with Log::Log4perl::Appender::DBI. Not wanting to deal with MySQL, Postgres, etc., I settled on SQLite. Benchmarking a test that logs 10,000 messages yielded:

(D:/Yada.SQLite - database on a "spinner"):
    the code took: 789 wallclock secs ( 6.38 usr + 17.17 sys = 23.55 CPU)
(C:/Yada.SQLite - database on an SSD):
    the code took: 101 wallclock secs ( 3.70 usr + 16.78 sys = 20.49 CPU)
(RAM drive - database on a 10 gig RAM drive):
    the code took:  14 wallclock secs ( 2.89 usr +  9.47 sys = 12.36 CPU)
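For context, those timing lines are exactly what the Benchmark module prints. A minimal sketch of the harness, assuming a logger already initialized with a DBI appender and the same 10,000-message count (not necessarily the exact test used above):

    use Benchmark qw(timediff timestr);
    use Log::Log4perl;

    # Assumes Log::Log4perl->init(...) has already configured a DBI appender.
    my $logger = Log::Log4perl->get_logger();

    my $t0 = Benchmark->new;                               # start timestamp
    $logger->info("message number $_") for 1 .. 10_000;    # the workload
    my $t1 = Benchmark->new;                               # stop timestamp
    print 'the code took: ', timestr(timediff($t1, $t0)), "\n";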

But what about SQLite's ':memory:' database? Okay! We just want to know 'did the guy get the girl?', '...did the detective catch the murderer?' Timings for a ':memory:' database with the noted persistent database attached:

(D:/Yada.SQLite):
    the code took: 2 wallclock secs ( 1.41 usr +  0.00 sys =  1.41 CPU)
(C:/Yada.SQLite):
    the code took: 1 wallclock secs ( 1.31 usr +  0.00 sys =  1.31 CPU)
(R:/Yada.SQLite - RAM drive):
    the code took: 2 wallclock secs ( 1.36 usr +  0.00 sys =  1.36 CPU)
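The underlying trick, stripped down to bare DBI (file, table, and column names here are illustrative only):

    use strict;
    use warnings;
    use DBI;

    # All logging I/O happens against a fresh in-memory database ...
    my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', q{}, q{},
                           { RaiseError => 1 });

    # ... with the persistent on-disk database attached alongside it.
    $dbh->do(q{ATTACH DATABASE 'C:/Yada.SQLite' AS master});
    $dbh->do(q{CREATE TABLE log (date TEXT, loglevel TEXT, message TEXT)});

    # ... many fast in-memory INSERTs happen here ...

    # One bulk copy to disk at the end instead of one write per message.
    $dbh->do(q{INSERT INTO master.log SELECT * FROM main.log});
    $dbh->do(q{DETACH DATABASE master});
    $dbh->disconnect;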

Back story:

I subclassed Log::Log4perl::Appender::DBI. In _init an empty ':memory:' database is created and the persistent database (PD) is attached. Then the schema of the PD is replicated in our volatile ':memory:' database (VD). The appender proceeds with logging. Upon DESTROY the contents of the tables of the PD are updated with the contents of the corresponding tables of the VD.

package Log::Log4perl::Appender::DBI_eeks v0.00.01;
# Purpose  : Just to see how subclassing Log::Log4perl::Appender::DBI is done
# Keywords : SQLite in-memory,
# NB       :
# Notes    : Note use of supplemental DBI::db::<methods: _setup,_teardown,_verify_integrity>
#          : Note the above superseded by inlining the code for _setup, _teardown
## no critic (EmptyQuotes,ProhibitParensWithBuiltins,ProhibitPostfixControls,ProhibitUnlessBlocks,PostfixControls,RequireCarping)
## no critic (RequireDotMatchAnything,RequireExtendedFormatting,RequireLineBoundaryMatching,RequirePodSections)

use 5.032001;
use warnings;

use parent 'Log::Log4perl::Appender::DBI';

use Benchmark;
use Carp;
use Data::Alias;
use Data::Dumper;
use DBI;
use English qw(-no_match_vars);
use Readonly;
Readonly my $FALSE => !!0;
Readonly my $THREE => 3;
Readonly my $TRUE  => !!1;
use Syntax::Keyword::Try qw( try :experimental(typed) );
use Time::HiRes;

my ($MASTER);
my ($error,$sql,@tablename_a);

sub _init {
    ;my $subname=(caller 0)[$THREE];
    ;warn "$subname called.";
    #;warn Data::Dumper->new([\@_],[qw(*_)])->Deepcopy(1)->Indent(1)->Maxdepth(3)->Sortkeys(1)->Dump(),q{ };
    my $self=shift;
    my %params=@_;

    if ($params{dbh}) {
        $self->{dbh}=$params{dbh};
    }
    elsif ($params{datasource} !~ m{\Qdbi:SQLite:dbname=\E(?<dbfile>.*)$}i) {
        # Not OUR speciality
        $self->SUPER::_init(@_);
    }
    else {
        # OUR speciality
        my $t0=Benchmark->new;
        #;warn Data::Dumper->new([\%LAST_PAREN_MATCH],[qw(*+)])->Deepcopy(1)->Indent(1)->Maxdepth(3)->Sortkeys(1)->Dump(),q{ };
        $MASTER=$LAST_PAREN_MATCH{dbfile};
        substr($params{datasource},$LAST_MATCH_START[1])=q{:memory:};
        $self->{connect}=sub {
            DBI->connect(@params{qw(datasource username password)},
                {PrintError => 0, $params{attrs} ? %{$params{attrs}} : ()})
                or croak "Log4perl: $DBI::errstr";
        };
        $self->{dbh}=$self->{connect}->();
        ##########################################################################
        #$self->{dbh}->_setup();
        try {
            # Replicating the schema of $MASTER
            # Attach master database
            $self->{dbh}->do($sql=<<"_ATTACH_");
ATTACH DATABASE '$MASTER' as master;
_ATTACH_
            # replicate tables
            (my $sth=$self->{dbh}->prepare($sql=<<"_SELECT_"))->execute();
SELECT * FROM master.sqlite_schema; -- WHERE type = 'table';
_SELECT_
            my $field_aref=$sth->{NAME};
            my %_h;
            while (my $value_aref=$sth->fetchrow_arrayref()) {
                @_h{@{$field_aref}}=@{$value_aref};
                $self->{dbh}->do($sql=$_h{sql});
                push @tablename_a,$_h{tbl_name}
                    if ($_h{type} =~ m{table$});
            };
            $sth->finish();
        }
        catch ($error) {
            confess $error;
        };
        ##########################################################################
        $self->{_RAM}=1;
        warn "$subname took: ",timestr(timediff(Benchmark->new,$t0)),"\n";
    };
    return;
}; # sub _init: ok.

sub DESTROY {
    ;my $subname=(caller 0)[$THREE];
    ;warn "$subname called.";
    my $self=shift;

    no warnings ('experimental::lexical_subs');
    my sub _verify_integrity {
        ;my $subname=(caller 0)[$THREE];
        use warnings;
        ;warn "$subname called.";
        Data::Alias::alias my $dbh=$self->{dbh};
        (my $sth=$dbh->prepare('PRAGMA integrity_check'))->execute();
        my $row;
        while (my $_aref=$sth->fetchrow_arrayref()) {
            #=# TRACE [ IntegrityOK=>\$_aref ];
            return $TRUE
                if (++$row == 1 && $_aref->[0] eq 'ok');
        };
        warn 'Database integrity could NOT be validated --- backup will not be updated!';
        return $FALSE;
    }; # sub _verify_integrity:

    unless ($self->{_RAM} && $self->{dbh}) {
        # Not our speciality
        $self->SUPER::DESTROY(@_);
    }
    else {
        # Our speciality
        my $t0=Benchmark->new;
        ##########################################################################
        #$self->{dbh}->_teardown();
        try {
            _verify_integrity();
            # Attach
            $sql=<<"_ATTACH_";
-- NB: since \$MASTER might have a [.] \$MASTER must be quoted!
-- ATTACH DATABASE '$MASTER' AS master;
BEGIN TRANSACTION;
_ATTACH_
            # insert then delete
            $sql.=<<"_INSERT_"
INSERT INTO master.$_ SELECT * FROM main.$_;
_INSERT_
                for (@tablename_a);
            $sql.=<<"_DELETE_"
DELETE FROM main.$_;
_DELETE_
                for (@tablename_a);
            $sql.=<<"_DETACH_";
END TRANSACTION;
DETACH DATABASE master;
_DETACH_
            $self->{dbh}->{sqlite_allow_multiple_statements}=1;
            $self->{dbh}->do($sql);
            $self->{dbh}->{sqlite_allow_multiple_statements}=0;
        }
        catch ($error) {
            confess $error;
        };
        ##########################################################################
        $self->{dbh}->disconnect;
        delete $self->{_RAM};
        warn "$subname took: ",timestr(timediff(Benchmark->new,$t0)),"\n";
    };
    return;
}; # sub DESTROY: ok.

unless (caller) {
    ...;
}
else {
    0**0;
};

=scratch

=cut

__DATA__
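For completeness, here is a hypothetical configuration that would exercise the subclass. The table name, column layout, and SQL are assumptions for illustration; a matching log table must already exist in C:/Yada.SQLite. Log::Log4perl require()s whatever appender class the config names, so the subclass is picked up automatically:

    use Log::Log4perl;

    Log::Log4perl->init(\<<~'_CONF_');
    log4perl.logger                             = INFO, DBAppndr
    log4perl.appender.DBAppndr                  = Log::Log4perl::Appender::DBI_eeks
    log4perl.appender.DBAppndr.datasource       = dbi:SQLite:dbname=C:/Yada.SQLite
    log4perl.appender.DBAppndr.sql              = insert into log (date,loglevel,message) values (?,?,?)
    log4perl.appender.DBAppndr.params.1         = %d
    log4perl.appender.DBAppndr.params.2         = %p
    log4perl.appender.DBAppndr.usePreparedStmt  = 1
    log4perl.appender.DBAppndr.layout           = Log::Log4perl::Layout::NoopLayout
    log4perl.appender.DBAppndr.warp_message     = 0
    _CONF_

    Log::Log4perl->get_logger->info('did the guy get the girl?');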

Comments/improvements welcomed.

Replies are listed 'Best First'.
Re: Log::Log4perl::Appender::DBI and SQLite
by Corion (Patriarch) on Dec 29, 2022 at 12:56 UTC

    Speeding up logging by caching is all great until you need your log to find/trace a mysterious application crash - and the most recent 100 entries in the log were lost in the crash.

    If you can find a way to isolate your logging from the application far enough that a crash doesn't impact it, like writing your own logging daemon or using syslog or whatever, I would look at using that approach.
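    On a platform where syslog is available, Log4perl can hand messages off to the syslog daemon through the stock Log::Dispatch::Syslog appender, so a crash of the application doesn't take the log writer down with it. A sketch (the ident and facility values are placeholders):

        log4perl.logger                    = INFO, Syslog
        log4perl.appender.Syslog           = Log::Dispatch::Syslog
        log4perl.appender.Syslog.ident     = myapp
        log4perl.appender.Syslog.facility  = local0
        log4perl.appender.Syslog.layout    = Log::Log4perl::Layout::SimpleLayout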

    If you don't really deal with application crashes, since the logs are more for application auditing and the application itself is usually stable, then caching things before writing them to disk might be a viable approach.

    Personally, I would look at the pragmata that SQLite itself offers, for example PRAGMA synchronous = OFF, which gives up some durability for improved performance.
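    For instance, assuming direct access to the appender's $dbh handle as in the code above:

        # Trade durability for speed: skip the fsync after each transaction.
        $dbh->do('PRAGMA synchronous = OFF');
        # Often paired with write-ahead logging to reduce writer/reader contention.
        $dbh->do('PRAGMA journal_mode = WAL');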