in reply to Balancing Complexities of Data Structures, Code and Readability

I know you posted this to ask for different ways to do it, so my comment isn't really fair. But...

A principle from Extreme Programming is "You ain't gonna need it". What you have now is probably fine. You're sorting by users and then by systems per user, which I guess is what you meant by 'multiple e-mail accounts'. If you're calling the function more than once per run, you could probably move @SYS and %POvars outside the function, since they only need to be set once, but that's about it.
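If it helps, here's a minimal sketch of what I mean, with placeholder values and a made-up sub name (process_accounts); the real contents of @SYS and %POvars are whatever you already have:

    use strict;
    use warnings;

    # File-scoped lexicals: built once when the script starts,
    # visible to the sub below, never rebuilt per call.
    my @SYS    = qw( GRP JAG CIS MST GCG );
    my %POvars = ( GRP => [ qw( ppo userid email gwpo gwdomain ) ] );  # ...and so on

    sub process_accounts {
        my (%accounts) = @_;
        for my $sys (@SYS) {
            # ... work with $POvars{$sys} and the accounts here ...
        }
    }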

Basically, having %accounts set up the way you need it is what really matters. Perl lets you process it this way and that whenever you need to, without much fuss.

Only if it looks like a lot of different scripts (or a few really complex ones) are going to be kept around and need to be maintained, or you find yourself typing the same code over and over to access it, would you need to go further. Then work out a simple, useful interface (e.g., @accts = $accts->user($ssn); hmmm, is that simpler?) and build an encapsulating object, as sketched below.
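To make that concrete, here's a rough sketch of the kind of wrapper I mean. The SSN-keyed layout and the sample data are just guesses to make it run; the real structure is whatever %accounts already is:

    package Accounts;

    use strict;
    use warnings;

    sub new {
        my ($class, %accounts) = @_;
        return bless { accounts => \%accounts }, $class;
    }

    sub user {
        my ($self, $ssn) = @_;
        my $list = $self->{accounts}{$ssn} || [];
        return @$list;    # all account records for one user
    }

    package main;

    # Placeholder data standing in for the real %accounts.
    my %accounts = (
        '123-45-6789' => [ { system => 'GRP', email => 'jdoe@example.com' } ],
    );

    my $accts = Accounts->new(%accounts);
    my @accts = $accts->user('123-45-6789');

The point isn't this particular class; it's that once the accessors exist, your scripts stop caring how %accounts is laid out inside.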

update: I didn't emphasize the central advantage of waiting: by the time you actually need it, you'll know much more specifically what you want to do.

  p

Re^2: Balancing Complexities of Data Structures, Code and Readability
by tadman (Prior) on Aug 13, 2001 at 12:43 UTC
    I agree with most of the above comments, and in particular, I would also stress how important it is to have "constant" data outside of your function.

    Eliminating duplication is also a priority, though not at the expense of needless complexity. In this case, @SYS is really just keys %POvars, so there is no need for that extra definition.

    Further along those lines, I would restructure your definition something like:
    my @POvars_common = qw[ ppo userid email ];
    my %POvars = (
        'GRP' => [ @POvars_common, 'gwpo', 'gwdomain' ],
        'JAG' => [ @POvars_common, 'jagexcept' ],
        'CIS' => [ @POvars_common ],
        'MST' => [ @POvars_common ],
        'GCG' => [ @POvars_common ],
    );
    Although this doesn't seem like a big deal, removal of duplication can help with:
    • Accidental typing errors, such as one of your entries having 'emall' instead of 'email', which are hard to spot amid many similar lines. Running Perl with the '-w' option and use strict will catch errors in your variable names, but not in your string constants, unless those happen to generate errors as well.
    • Errors when changing the structure globally, which in this case would require modifying every single entry. This eventually happens to every program, so plan for it in advance.
    • Synchronization errors between needlessly linked data structures, such as @SYS and %POvars, where one has an entry which the other does not.
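    On that last point, a small sketch: if @SYS is still wanted elsewhere in the script (an assumption on my part), derive it from %POvars instead of maintaining it by hand, and the two can never drift apart:

        my @SYS = sort keys %POvars;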
Re: Re: Balancing Complexities of Data Structures, Code and Readability
by John M. Dlugosz (Monsignor) on Aug 13, 2001 at 09:05 UTC
    That's not what I find.

    I tell co-workers that "programs get more complex over time" and encourage them to plan for that.