in reply to My first package - need help getting started
I have a couple of questions and, depending on your answers, a suggestion for how to simplify the problem.
First, you said that the only thing guaranteed to be unique was the key, but you were talking about uniqueness among all the records. In your example, the field names are all unique within the record. Is that the case for every record? If so, it seems to me that a record can be conveniently represented as a hash.
Second, it sounds to me from your description, though you don't expressly say so, that you generally only need to look at one record at a time.
If I'm understanding right here, then creating an object per se may be an unnecessary complication. It sounds to me like all you need is two functions: one that takes an open filehandle (as a glob maybe), reads off the next record, and returns a reference to a hash, and one that takes a reference to a hash and returns a string. Depending on what you need to do, another routine or several might be in order for testing records (e.g., a routine that takes a hashref and a string and returns the number of Alias fields in the hash whose values match the string).
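To make that concrete, here is roughly what I have in mind for the non-reading half. I'm guessing at an on-disk format of "Field: value" lines, and the Alias field names are stand-ins too, since I haven't seen your actual files:

    use strict;
    use warnings;

    # Turn a record hashref back into a string, one "Field: value" line per field.
    sub record_to_string {
        my ($record) = @_;
        return join("\n", map { "$_: $record->{$_}" } sort keys %$record) . "\n";
    }

    # Example of a test routine: count the Alias fields whose value matches a string.
    sub count_matching_aliases {
        my ($record, $wanted) = @_;
        return scalar grep { /^Alias/ && $record->{$_} eq $wanted } keys %$record;
    }

Nothing there cares where the record came from; it's just a hash.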
I know it's heresy to some to suggest not using OO where it's possible to use OO, but it just seems unnecessary here, to me.
The only thing that makes me think I might be wrong, and that OO might in fact be a Good Idea, is that you didn't show what delimits records in the files you're reading. If there's no delimiter, then you have to read until you hit the key for the next record, and then save that line for when you read that record. It is of course possible to do this without real OO, but it's awkward, since it involves a persistent variable (the one-line buffer) that needs to be associated with the specific file in question. If you never have more than one of these files open at the same time you could get by with a magic global ($main::MY_DB_PARSING_PERSISTENT_LINE_BUFFER or whatnot), but that's a kludge, and it will break if you ever need to work through more than one of these files at the same time. You can get around that too, by using the filehandle as a key into a magic global hash, but at that point we're doing something arguably almost as complex as OO, so I'm not sure it really saves anything.
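Just to show what I mean by the filehandle-keyed hash, here's a rough sketch. The /^KEY:/ test for spotting the start of the next record is a placeholder for however your key field is actually marked:

    my %pending;   # one saved line per filehandle (stringified handle as the key)

    # Read the next record from $fh; return a hashref, or undef at end of file.
    sub read_record {
        my ($fh) = @_;
        my %record;
        my $line = exists $pending{$fh} ? delete $pending{$fh} : <$fh>;
        while (defined $line) {
            chomp $line;
            if ($line =~ /^KEY:/ && %record) {
                $pending{$fh} = $line;    # this line starts the *next* record; save it
                last;
            }
            my ($field, $value) = split /:\s*/, $line, 2;
            $record{$field} = $value;
            $line = <$fh>;
        }
        return %record ? \%record : undef;
    }

It works, but you can see why I call it a kludge.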
But it is an option to consider. If your records are delimited by some magic marker in the files (e.g., a blank line), then this problem goes away, and you can just have a couple of routines, as I said.
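If it is a blank line, Perl's paragraph mode does the buffering for you, and the reader shrinks to something like this (again assuming "Field: value" lines):

    # Read one blank-line-delimited record; return a hashref, or undef at end of file.
    sub read_record {
        my ($fh) = @_;
        local $/ = "";                       # paragraph mode: read up to the next blank line
        defined(my $chunk = <$fh>) or return;
        my %record;
        for my $line (split /\n/, $chunk) {
            my ($field, $value) = split /:\s*/, $line, 2;
            $record{$field} = $value;
        }
        return \%record;
    }

No per-file state at all, which is why I'd much rather the records turn out to be delimited.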
for(unpack("C*",'GGGG?GGGG?O__\?WccW?{GCw?Wcc{?Wcc~?Wcc{?~cc'.'W?')){$j=$_-63;++$a;for$p(0..7){$h[$p][$a]=$j%2;$j/=2}}for$p(0..7){for$a(1..45){$_=($h[$p-1][$a])?'#':' ';print}print$/}