Nightthrall has asked for the wisdom of the Perl Monks concerning the following question:

I am working on a script which will tail a log file (using File::Tail) and then write data for certain events to a database. It needs to be generic enough that it can be used on multiple log files, which may have different formats and different datapoints to be written. To handle all of this, I am reading in a configuration from a separate file. The configuration will have lines that specify what we are looking for and how to find that data in the specific log files we're looking at.

In the configuration file, you will have something like:

type        event               datapoints
some_type   some_grep_string    event_start:line[0] || event_end:substr(line[3], 0, index(line[3], ' '))

I am reading the log file line-by-line into a variable $line, then using split to populate an array @line. So for the first datapoint (event_start:line[0]), the variable name is event_start and its value is the first element of the @line array.

The second case is where it gets tricky. I need to be able to read in the string 'substr(line[3], 0, index(line[3], ' '))' and recognize that there are Perl commands embedded in it. How do I read in a string that says "substr(blah, foo, bar)" and tell Perl that it needs to find the value blah and take the substring starting at foo with length bar? Especially when there may be additional commands (such as the index) embedded within it?
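Assuming the config expressions are written in real Perl syntax ($line[3] rather than line[3]), a string like this can be handed to Perl's string eval, which can see lexical variables in scope at the point of the eval. A minimal sketch with made-up data:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Made-up fields, as if @line had been split from a log line.
my @line = ('start', 'INFO', 'job42', 'stopped cleanly');

# The expression as it might appear in the config file, assuming
# it uses Perl syntax ($line[3] rather than line[3]).
my $expr = q{substr($line[3], 0, index($line[3], ' '))};

my $value = eval $expr;    # string eval sees @line in scope
die "bad expression: $@" if $@;
print "$value\n";          # prints "stopped"
```

The nested index() call needs no special handling: eval compiles and runs the whole expression, however deeply the calls are embedded.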


Re: Converting strings read from a file into perl commands
by wind (Priest) on Mar 23, 2011 at 21:07 UTC
    Sounds like you just need eval. I suggest that you turn your rules into subroutines that can be cached.
    #!/usr/bin/perl -w
    use strict;
    use warnings;

    # Build list of translation rules
    my @rules;
    while (<DATA>) {
        chomp;
        my ($pattern, $code) = split "\t";
        push @rules, [$pattern, eval "sub {$code}"];
        warn $@ if $@;
    }

    # Fake Data
    my @lines = (
        "this is foo\n",
        "this is bar\n",
        "this is baz and foo\n",
    );

    # Process the lines using compiled rules.
    for my $line (@lines) {
        for my $rule (@rules) {
            if ($line =~ /$rule->[0]/) {
                $rule->[1]->($line);
            }
        }
        print "$line";
    }

    __DATA__
    foo	$_[0] =~ s/foo/FOO/;
    bar	$_[0] =~ s/bar/BAR/;
    baz	$_[0] =~ s/baz/BAZ/;
    This way you can use whatever Perl translations you want. It is also up to you what you pass to the subroutines and whether they need to return anything.
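To connect this back to the original config format, the same compile-and-cache idea could build one sub per datapoint expression per rule. The ' :: ' field separator and the sample log line below are made up for this sketch:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical config line: type :: grep string :: datapoint expressions.
my @rules;
while (my $conf = <DATA>) {
    chomp $conf;
    my ($type, $pattern, $points) = split / :: /, $conf, 3;
    my %getter;
    for my $pair (split /\s*\|\|\s*/, $points) {
        my ($name, $expr) = split /:/, $pair, 2;
        # Compile once and cache; the sub receives the split line as @line.
        $getter{$name} = eval "sub { my \@line = \@_; $expr }";
        warn "compile failed for $name: $@" if $@;
    }
    push @rules, { type => $type, pattern => qr/$pattern/, getter => \%getter };
}

# A fake log line; a real script would read these via File::Tail.
my $raw  = "job42 START 2011-03-23";
my @line = split ' ', $raw;
for my $rule (@rules) {
    next unless $raw =~ $rule->{pattern};
    for my $name (sort keys %{ $rule->{getter} }) {
        printf "%s %s=%s\n", $rule->{type}, $name, $rule->{getter}{$name}->(@line);
    }
}

__DATA__
job_event :: START :: event_start:$line[2] || job_id:$line[0]
```

Each expression is compiled exactly once, when the config is read, rather than re-evaluated for every log line.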
Re: Converting strings read from a file into perl commands
by sundialsvc4 (Abbot) on Mar 23, 2011 at 22:59 UTC

    For what it may be worth, this kind of practice makes me very nervous. Not so much (in this case...) that someone might take nasty advantage of such things, but rather that it might be The Devil Himself to debug and to maintain on an ongoing basis.

    I would consider tackling these differences through the use of Perl classes... a base class that handles the task for log files in general, and descendent classes which provide the details of particular file variants. Each time you need to extend the application to handle a new twist, you can do it (robustly and safely) by creating a new descendent module.

    Now your configuration file would basically just have to say which class needs to be required into existence.

    My admonition is: “an arrangement such as that could be made to be rugged, whereas one based on tricky evals might be a constant source of headaches.” Even if, at first blush, the latter seemed easier to write.
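For a sense of what such an arrangement might look like, here is a hypothetical sketch: a LogParser base class that does the generic work, and one descendent class per log-file variant. The class and method names are made up; the configuration file would only need to name the class:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical base class: handles log files in general.
package LogParser;
sub new { my ($class) = @_; return bless {}, $class }
sub parse {
    my ($self, $raw) = @_;
    my @line = split ' ', $raw;
    return $self->datapoints(\@line);    # details supplied by the descendent
}

# Hypothetical descendent class: one per log-file variant.
package LogParser::JobLog;
our @ISA = ('LogParser');
sub datapoints {
    my ($self, $line) = @_;
    return { event_start => $line->[0], job_id => $line->[2] };
}

package main;
# The configuration file would name the class to require; hard-coded here.
my $class  = 'LogParser::JobLog';
my $parser = $class->new;
my $data   = $parser->parse("2011-03-23 START job42");
print "$data->{event_start} $data->{job_id}\n";    # prints "2011-03-23 job42"
```

A new log format then means writing one new descendent class with its own datapoints method, leaving the base class and the existing variants untouched.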

Re: Converting strings read from a file into perl commands
by Nightthrall (Initiate) on Mar 24, 2011 at 13:59 UTC

    Wind - thanks for the idea, and I will see what I can do when I play around with it today.

    Sun - I am dreading the maintenance on this as well. I've never worked with Perl classes before, so I will have to research to see if I can implement them in a future revision. I am not sure what a solution using them would look like.