in reply to Dirtiest Data
Throughout it all, the most important thing for me has been to have a good set of diagnostic tools. The one tool I tend to use most often, as a first resort in the widest range of tasks, simply prints out a byte-value histogram, either as a 256-line list or as a nice 8-column x 32-row table, with an optional summary that counts up character categories like "printable ascii", "non-printable ascii", "8-bit", "iso-printable 8-bit", "digits", "whitespace", etc.
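A minimal sketch of such a byte-histogram tool in Perl (not the actual script; the table layout and category breakdown here are just one plausible arrangement):

```perl
#!/usr/bin/perl
# Byte-value histogram: tally all 256 byte values in a file, print an
# 8-column x 32-row table, then a rough category summary.
use strict;
use warnings;

my $file = shift or die "usage: $0 file\n";
open my $fh, '<:raw', $file or die "$file: $!\n";

my @count = (0) x 256;
while (read $fh, my $buf, 65536) {
    $count[$_]++ for unpack 'C*', $buf;
}
close $fh;

# 8 columns x 32 rows: byte value = row + 32 * column.
for my $row (0 .. 31) {
    printf '%4d:%8d  ', $row + 32 * $_, $count[$row + 32 * $_] for 0 .. 7;
    print "\n";
}

# One possible category breakdown (digits/whitespace overlap printable ascii).
my %cat;
for my $b (0 .. 255) {
    my $n = $count[$b] or next;
    if ($b >= 0x30 && $b <= 0x39)                 { $cat{'digits'}     += $n }
    if ($b == 0x20 || ($b >= 0x09 && $b <= 0x0d)) { $cat{'whitespace'} += $n }
    if    ($b >= 0x20 && $b <= 0x7e) { $cat{'printable ascii'}     += $n }
    elsif ($b < 0x80)                { $cat{'non-printable ascii'} += $n }
    else                             { $cat{'8-bit'}               += $n }
}
printf "%-22s %d\n", $_, $cat{$_} for sort keys %cat;
```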
With practice, you can figure out quite a lot about any sort of data just by viewing the distribution of byte values this way. If you have a specific expectation of what the data is supposed to be (ulaw audio? pcm? ascii text? text in some given language and encoding?), the byte histogram can tell you right away whether there's anything "out of band" (e.g. text shouldn't contain \x7f or null bytes, among other things), and whether any particular byte values show up with unexpectedly high or low frequency ("hmmm, too many \x0d bytes in this audio file..." or "this xml file has different counts for '<' and '>'...").
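The '<' vs. '>' check, for instance, amounts to nothing more than comparing two of those counts; a throwaway one-off (hypothetical -- the byte-histogram tool gives you the same numbers for free) might look like:

```perl
#!/usr/bin/perl
# Quick '<' vs '>' sanity check on an allegedly-XML file.
use strict;
use warnings;

local $/;                        # slurp mode
my $data = <>;                   # read the file named on the command line
my $lt = () = $data =~ /</g;     # count '<' occurrences
my $gt = () = $data =~ />/g;     # count '>' occurrences
printf "'<' x %d, '>' x %d%s\n", $lt, $gt,
    $lt == $gt ? '' : '  <-- mismatch, worth a look';
```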
After that, additional diagnostic tools tend to get more "specialized" (ad hoc). But among these, the next most generic one produces a code-point histogram for wide-character text data in any of several distinct multi-byte encodings, and also reports any encoding errors that it finds -- good for knowing when a utf8 file alleged to be Russian happens to contain Arabic characters, and so on...
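A bare-bones sketch of that second tool, again in Perl and again only an illustration: it tallies raw code points and counts decode failures, and leaves out the bucketing of code points into Unicode blocks that would flag "Arabic in an allegedly Russian file" at a glance.

```perl
#!/usr/bin/perl
# Minimal code-point histogram: decode a file in a given encoding, tally
# each code point, and count the spots where decoding fails.
use strict;
use warnings;
use Encode qw(decode FB_QUIET);

my ($enc, $file) = @ARGV == 2 ? @ARGV : ('UTF-8', $ARGV[0]);
defined $file or die "usage: $0 [encoding] file\n";
open my $fh, '<:raw', $file or die "$file: $!\n";
my $octets = do { local $/; <$fh> };
close $fh;

my (%count, $errors);
while (length $octets) {
    # FB_QUIET decodes as much as it can, leaving undecodable bytes behind
    # in $octets so we can count the error and step past it.
    my $chars = decode($enc, $octets, FB_QUIET);
    $count{ sprintf 'U+%04X', ord $_ }++ for split //, $chars;
    if (length $octets) {
        $errors++;
        substr($octets, 0, 1, '');   # skip one bad byte and keep going
    }
}

printf "%s %8d\n", $_, $count{$_} for sort keys %count;
printf "decode errors: %d\n", $errors // 0;
```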