in reply to MedlineParser: to parse and load MEDLINE into a RDBMS

The "parsemedline.pl" code that you cited (at the biotext.berkeley.edu web site) could have been a lot shorter (with no loss of intelligibility or maintainability), if the code authors had made more thoughtful use of perl data structures (HoH, HoA, HoHoH, and so on), instead of declaring vast numbers of simple arrays with long names. (Personally, I think shorter code is easier to maintain; and declaring arrays to keep track of the names of hash keys is a lot easier than keeping lots of differently-named arrays.)

As for run-time efficiency compared to the java implementation, I don't know how the java version handles RDBMS insertions, but the perl version cited here is obviously working at a serious disadvantage: its approach to data import does a couple of things that I would normally call bad ideas.

One other nit-pick: the commentary in the code is quite good as documentation, but it would be better as POD (and this would be so easy -- there's no good reason not to do so).
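
The conversion is nearly mechanical -- the existing comment prose would drop straight into something like this (section names are just the conventional ones):

    =head1 NAME

    parsemedline.pl - parse MEDLINE XML files and load them into an RDBMS

    =head1 DESCRIPTION

    The prose currently sitting in '#' comment blocks goes here, more or
    less verbatim, and becomes readable with "perldoc parsemedline.pl".

    =cut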

I counted over 3700 lines (excluding blanks and comments) in "parsemedline.pl"; I don't know whether I'd try to boil it down (not sure I want to pull in 40+GB of data from a field I know nothing about), but as a rough estimate, I'd guess this could be done, using appropriate data structures and loops, with well under 1000 lines. Hard to say what sort of speed differences would result, but if there is ever any "evolution" in the XML data format, a shorter version of the code would be a lot easier to fix, I think.

Re^2: MedlineParser: to parse and load MEDLINE into a RDBMS
by haricothoriz (Novice) on Jun 25, 2005 at 12:58 UTC

    To address your concern about efficient data import: both the java and perl programs have options to generate flat-file representations of the tables for native table loaders.
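
    (For anyone who hasn't seen that approach: the idea, roughly, is to write one delimited line per row and hand the file to the database's native bulk loader. Table and file names below are made up.)

        use strict;
        use warnings;

        # write one tab-delimited line per row ...
        my @rows = ( [ 101, 'First example title' ],
                     [ 102, 'Second example title' ] );

        open my $fh, '>', 'citations.tsv' or die "citations.tsv: $!";
        print {$fh} join("\t", @$_), "\n" for @rows;
        close $fh or die $!;

        # ... then load it in one pass, e.g.
        # PostgreSQL:  COPY citations FROM '/path/to/citations.tsv';
        # MySQL:       LOAD DATA INFILE 'citations.tsv' INTO TABLE citations;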

    Some other points about the BioText parsemedline.pl program:

    • as with the java program, it is unnecessarily database-specific (although only in a very minor way compared with the java code);
    • it was written without the strict or warnings pragmas, and as a result there are actual (minor) bugs in it due to misspellings of the lengthy variable names (see the sketch after this list);
    • Medline keeps changing its DTDs, so capturing all the data in the 2005 release of Medline requires some tedious changes to the SQL table definitions and code.
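
    On the second point: the pragmas catch that whole class of bug at compile time. A minimal illustration (the variable name is invented, but in the spirit of the original):

        use strict;
        use warnings;

        my @medline_author_last_names;

        # One dropped character: without "use strict" this silently pushes
        # onto a different, empty global array; with it, compilation dies:
        #   Global symbol "@medline_author_last_name" requires explicit
        #   package name ...
        push @medline_author_last_name, 'Smith';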

    Question: I know this is not necessarily a good idea for performance reasons, both in loading and querying, but ... is it possible to automatically translate DTD descriptions into SQL DDL and corresponding code to parse the XML and load the data? (Ignoring the complication of data types).

      I know this is not necessarily a good idea for performance reasons, both in loading and querying, but ... is it possible to automatically translate DTD descriptions into SQL DDL and corresponding code to parse the XML and load the data?

      How do you know (or what makes you think) that parsing a DTD is "not ... a good idea for performance reasons ..."? I doubt that using this sort of facility would have any noticeable impact on run-time performance, and it could certainly be a major boost to programmer performance (and would be a good way to reduce code that is too bulky and ad-hoc).

      There appear to be at least a couple of modules on CPAN for converting DTDs into perl-internal objects or data structures: XML::DTDParser and XML::Smart::DTD. (I haven't used either of them myself, but a brief look at the docs makes me think the first one might be more suitable; I expect there are others.)
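
      Judging just from the docs, usage would look something like this (the DTD file name is hypothetical):

          use strict;
          use warnings;
          use XML::DTDParser qw(ParseDTDFile);
          use Data::Dumper;

          # returns a hash keyed by element name, describing each element's
          # attributes and allowed children
          my $dtd = ParseDTDFile('nlmmedline.dtd');
          print Dumper( $dtd->{MedlineCitation} );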

      As for converting a perl object or data structure into SQL DDL (or going directly from DTD to DDL), I haven't searched CPAN for that (maybe you could try it), but it seems like a less cut-and-dried task; there might be different ways of specifying a table, or designing a set of relational tables, from a given DTD, depending on what the SQL users want to do with the data.
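
      Just to make the idea concrete, one naive mapping would be: every element that has attributes becomes a table, and every attribute becomes a TEXT column. (I'm going by XML::DTDParser's documented output structure here, unverified, so treat the details as a sketch.)

          use strict;
          use warnings;
          use XML::DTDParser qw(ParseDTDFile);

          # naive DTD-to-DDL mapping -- real code would also handle child
          # elements, keys, and data types
          my $dtd = ParseDTDFile('nlmmedline.dtd');
          for my $element (sort keys %$dtd) {
              my $attrs = $dtd->{$element}{attributes} or next;
              my @cols  = map { lc($_) . ' TEXT' } sort keys %$attrs;
              printf "CREATE TABLE %s (%s);\n", lc $element, join(', ', @cols);
          }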

      (The same could be said for deriving different perl data structures from a DTD, but since people have already posted solutions for this on the CPAN, it might be worth trying what they've come up with.)

        I know this is not necessarily a good idea for performance reasons, both in loading and querying, but ... is it possible to automatically translate DTD descriptions into SQL DDL and corresponding code to parse the XML and load the data?
        How do you know (or what makes you think) that parsing a DTD is "not ... a good idea for performance reasons ..."? I doubt that using this sort of facility would have any noticeable impact on run-time performance, and it could certainly be a major boost to programmer performance (and would be a good way to reduce code that is too bulky and ad-hoc).
        You're right; I don't know for sure, and I agree about the savings in programmer time (which is frankly what I was interested in). The Medline database has 15,000,000 big fat records, and I was guessing that a hand-tweaked, denormalized table layout would be better optimized for querying and maybe for data loading. But I don't know yet. Thanks for pointing out the CPAN modules. I was just wondering if I had overlooked a module that does exactly what I am interested in.