So do I.
...But, while my hypothesized automaton can produce a uniform tag framework, my guess is that at least 1 in 5000 (times n fields) of the variable data items will vary the line count. "Otherwise the question would not make much sense," because if all 5000 files are identical, there's not much point in reading more than one of them.
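That guess is cheap to test before committing to a single fixed-shape parser: tally the line counts across the whole batch and see how many distinct values turn up. A minimal Python sketch (the directory name and file pattern are my own placeholders, not anything OP stated):

    # Tally line counts across the batch of scraped files.
    # "scraped_pages/*.html" is a placeholder path, not OP's layout.
    from collections import Counter
    from pathlib import Path

    counts = Counter(
        len(path.read_text(encoding="utf-8", errors="replace").splitlines())
        for path in Path("scraped_pages").glob("*.html")
    )

    for line_count, n_files in sorted(counts.items()):
        print(f"{n_files} file(s) have {line_count} lines")

If that prints more than one row, the "same line count everywhere" assumption is already dead.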
'Oh, no,' you say. 'The (normalized) data coming out of a DB should be quite consistent.'
Well, I think OP is putting data (from an unknown origin, received via html pages) INTO a DB. And look at the data: a multi-line fragment of an html table, where some <td> items include multiple adjacent spaces (as a general rule, html will render ONLY one of those, ignoring the rest) and such things as line 6 (a long-form address -- in a style that could run as few as a dozen characters or so... or many tens of characters).
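Those runs of spaces are also a reminder that what the browser shows and what the source contains are two different strings. If the goal is to compare or store the rendered text, collapsing whitespace first keeps a <td> full of padding from looking like distinct data. A rough Python sketch, with an invented sample string:

    import re

    # Invented sample: a <td> body with padded spaces and a wrapped line.
    raw_td = "123   Main    Street,\n      Apt  4"

    # Collapse runs of whitespace roughly the way a browser renders them.
    rendered = re.sub(r"\s+", " ", raw_td).strip()
    print(rendered)   # 123 Main Street, Apt 4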
And if the page is indeed script-generated, someone should fire the programmer, the proofreader, and/or their supervisors: some of the boilerplate -- i.e., renderable text that one might expect to be invariant in its spelling -- is not; viz: "aresss:" in line 6 and "adresse_two:" in line 7.
Of course, such an error may not change the line count one bit, but human data entry tends to be fallible, and raw data tends not to be normalized.
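So if the plan is to key on those labels when loading the DB, it is probably safer to match them loosely than to hard-code the exact spellings. A throwaway Python sketch, where the tolerant pattern is purely my own guess at what counts as "some spelling of address":

    import re

    # Loose heuristic: treat any label starting with 'a' and containing
    # the letters 'r', 'e', 's' in sequence as a misspelling of "address".
    # The sample lines are invented, apart from OP's two label typos.
    address_label = re.compile(r"^\s*a\w*res", re.IGNORECASE)

    for line in ["aresss: 10 Main St", "adresse_two: Flat B", "phone: 555-0100"]:
        label = line.split(":", 1)[0]
        print(bool(address_label.match(label)), line)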