throop has asked for the wisdom of the Perl Monks concerning the following question:

Can you help me find the right module? I need to track which versions of my input data were used in an analysis.

I've got a 4-stage analysis. At each step it reads a bunch of data files, including outputs from previous steps. Is there a good module for generating version/log files, so at the end I can generate a report showing what versions were used in the entire chain?

I envision that, at each step, I generate a human- and machine-readable logfile showing the full pathname and timestamp of each input. The next step of the chain reads its own inputs, which include the outputs AND the logfile from the previous step, plus other data. It issues its own reports and logfile; that logfile recaps the earlier logfile, plus the pathnames and timestamps of the additional data files.

I can imagine rolling my own for this, but, you know, laziness and all that. I've searched both CPAN and PerlMonks, but if it's out there, I'm getting overwhelmed by all the results about Perl module versions, which isn't what I'm looking for.
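
If I do end up rolling my own, I picture each step doing something like this (just a sketch of the idea, not code I'm actually running; the filenames are made up):

    use strict;
    use warnings;
    use POSIX qw(strftime);

    # Write one "path <TAB> mtime" line per input, and copy forward the
    # entries from any earlier step's logfile, so the final log recaps
    # the whole chain.
    sub write_step_log {
        my ($logfile, @inputs) = @_;
        open my $out, '>', $logfile or die "Can't write $logfile: $!";
        for my $path (@inputs) {
            if ($path =~ /\.log\z/) {
                # Earlier step's logfile: recap its entries verbatim.
                open my $in, '<', $path or die "Can't read $path: $!";
                print {$out} $_ while <$in>;
                close $in;
            }
            my $mtime = (stat $path)[9];
            printf {$out} "%s\t%s\n",
                $path, strftime('%Y-%m-%d %H:%M:%S', localtime $mtime);
        }
        close $out or die "Can't close $logfile: $!";
    }

    # Step 2's log recaps step 1's log plus step 2's own inputs.
    write_step_log('step2.log', 'step1.log', 'step1_output.dat', 'extra.csv');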


Re: Tracking data versions throughout an analysis chain
by pemungkah (Priest) on Jun 19, 2012 at 23:25 UTC
    Can you tell us a little bit more about the process? From what I'm reading here, it sounds like you'll have different versions of files of the same name?

    If that's so, the first thing that comes to mind is putting all the data files in a directory managed by a version control system, and then using Jenkins to run your process. Jenkins can checksum the output from a run (it calls these checksums "fingerprints"), allowing you to upload an output file to Jenkins and have it tell you exactly which run created it; you can then refer to that run's log to see what happened and which versions of the files were used as input.

    This way, all you need is a process that takes your files and produces the output you're looking for; everything else is managed by your version control system and Jenkins. The workflow goes like this:

    1. New file is ready.
    2. Check this into the version control system directory that Jenkins is monitoring.
    3. Jenkins is triggered by the checkin.
    4. Jenkins checks out the current contents of the VCS directory.
    5. Jenkins runs the process and creates the output file. If the run fails, Jenkins has a log of the run.
    6. Jenkins notifies you the run has completed.
    7. You can use a Jenkins plugin to extract data from the log, or fetch the log with WWW::Mechanize and analyze it yourself (see the sketch after this list).
    8. You distribute the output file to your customer with whatever extracted data you like.
    Should you need to match an output back to a run, you simply go to Jenkins and upload the output file; Jenkins then tells you which run it came from.
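
    For step 7, here's a minimal sketch of fetching a run's console log yourself (the host, job name, build number, and the /consoleText URL are assumptions about a stock Jenkins install; adjust for your server):

        use strict;
        use warnings;
        use WWW::Mechanize;

        # Hypothetical Jenkins build-log URL; /consoleText serves the
        # plain-text console log on stock installs, but verify on yours.
        my $log_url = 'http://jenkins.example.com/job/analysis-chain/42/consoleText';

        my $mech = WWW::Mechanize->new();   # dies on HTTP errors by default
        $mech->get($log_url);

        # Pull out whatever version lines your process printed during the run.
        for my $line ( split /\n/, $mech->content ) {
            print "$line\n" if $line =~ /^INPUT:/;
        }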
      Thanks! That sounds like what I was looking for.