Hello All...
I have a simple script that calculates the average values from a number of RRD files. I would now like to discard all data returned by RRDs::fetch that does not fall within working hours.
For those unfamiliar with RRDs.pm, $data is an AoA ref.

use strict;
use warnings;
use DateTime;
use DateTime::Event::Recurrence;
use DateTime::SpanSet;
use RRDs;

#
# Set up paths, config data etc...
#

foreach my $file ( @array_of_RRD_filenames ) {

    my ( $metric, $interval, $begin, $end ) = ( 'AVERAGE', '1800', '-800h', 'now' );

    my $start_span = DateTime::Event::Recurrence->weekly( days => [ 1 .. 5 ], hours => 8 );
    my $end_span   = DateTime::Event::Recurrence->weekly( days => [ 1 .. 5 ], hours => 18 );

    my $span_set = DateTime::SpanSet->from_sets( start_set => $start_span,
                                                 end_set   => $end_span );

    my @opts;
    push @opts, $file;
    push @opts, $metric;
    push @opts, "-r $interval";
    push @opts, "-s $begin";
    push @opts, "-e $end";

    my ( $start, $step, $names, $data ) = RRDs::fetch @opts;
    my ( $in_total, $out_total, $i ) = ( 0, 0, 0 );

    foreach my $value ( @{ $data } ) {
        $start += $step;

        my ( $minute, $hour, $day, $month, $year ) = ( localtime( $start ) )[ 1, 2, 3, 4, 5 ];
        $month += 1;     # localtime months are 0-based
        $year  += 1900;  # ...and its years are offset from 1900

        my $dt = DateTime->new( year   => $year,
                                month  => $month,
                                day    => $day,
                                hour   => $hour,
                                minute => $minute );

        next unless $span_set->contains( $dt );

        $i++;
        $in_total  += $value->[ 0 ];
        $out_total += $value->[ 1 ];
    }

    # Now go off and work out averages / percentages etc...
    # Build a HoH and exit the for loop.
}

# Output some purty HTML.
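The "work out averages" comment hides nothing exciting; inside the loop it's roughly this (a sketch -- %results, declared before the foreach loop, and its keys are placeholders for my real HoH):

# Per-file summary, keyed on filename (sketch; field names are made up)
my $in_avg  = $i ? $in_total  / $i : 0;
my $out_avg = $i ? $out_total / $i : 0;

$results{ $file } = {
    samples => $i,
    in_avg  => $in_avg,
    out_avg => $out_avg,
};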
The problem is, this is incredibly slow! I've qualitatively isolated the major slowdown to the "next unless $span_set->contains( $dt )" line -- that one line slows things down by a factor of 5.
dprofpp just segfaults when I try to profile the script.
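In case anyone wants to reproduce the timing without a working profiler, the core Benchmark module is enough to separate the cost of building the DateTime from the cost of the contains() call. A rough sketch (it assumes $span_set has been built exactly as above; the date is arbitrary):

use Benchmark qw( timethese );

timethese( 1_000, {
    # just object construction
    construct => sub {
        DateTime->new( year => 2009, month => 3, day => 25, hour => 10, minute => 30 );
    },
    # construction plus the span-set membership test
    construct_and_contains => sub {
        my $dt = DateTime->new( year => 2009, month => 3, day => 25, hour => 10, minute => 30 );
        $span_set->contains( $dt );
    },
} );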
Anyone have any suggestions as to how to optimise this? Is there another way rather than using DateTime?
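One idea I've wondered about but not tried yet: since the rule is simply Mon-Fri, 08:00-18:00, the test could perhaps come straight off localtime's weekday and hour fields, skipping DateTime inside the loop altogether. A sketch (assuming the 18:00 boundary is meant to be exclusive):

# Inside the fetch loop, instead of building a DateTime per sample:
my ( $hour, $wday ) = ( localtime $start )[ 2, 6 ];

next unless $wday >= 1 && $wday <= 5    # Monday .. Friday
         && $hour >= 8  && $hour < 18;  # 08:00 .. 17:59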
Cheers
SM