I'm not sure I follow you on this one. I have one basic need: to convert a hash to a query string. Are you saying that I should have two functions to build one query string? That doesn't strike me as an optimal programming solution. Currently, the hash looks something like this:
my %hash = ( name  => 'Ovid',
             color => [ 'red', 'blue' ],
             COBOL => 'sucks' );
That's fed to the routine that creates the query string and everybody's happy. What you're suggesting seems to imply that I should break the hash in two and feed them in separately, or send them to different functions. That seems less efficient. Is there a benefit to approaching it that way, or did I misunderstand your response?
Cheers,
Ovid
Update: Hmm... it occurs to me that I could have made *all* values into array refs. The code would be smaller and easier to follow. The while {} loop becomes this:
while ( my ($key, $value) = each %data ) {
    $key = uri_escape( $key, $cgi_chars );
    foreach ( @$value ) {
        my $array_value = uri_escape( $_, $cgi_chars );
        $query_string .= "$key=$array_value&";
    }
}
Saved about six lines of code and made it cleaner, to boot. Damn. merlyn strikes again!
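For the curious, here is a minimal, self-contained sketch of what the whole routine might look like under the all-array-refs approach. The sub name hash_to_query and the $cgi_chars escape set are my own assumptions, not Ovid's actual code; the normalization line lets callers still pass plain scalars.

```perl
use strict;
use warnings;
use URI::Escape qw(uri_escape);

# Hypothetical wrapper around the loop above. Values that aren't
# already array refs get wrapped in one, so the foreach handles both.
sub hash_to_query {
    my %data = @_;
    my $cgi_chars = '^A-Za-z0-9\-_.~';    # assumed set of chars to leave unescaped
    my $query_string = '';
    while ( my ($key, $value) = each %data ) {
        $value = [ $value ] unless ref $value eq 'ARRAY';
        $key = uri_escape( $key, $cgi_chars );
        foreach ( @$value ) {
            my $array_value = uri_escape( $_, $cgi_chars );
            $query_string .= "$key=$array_value&";
        }
    }
    chop $query_string;    # drop the trailing '&'
    return $query_string;
}
```

Multi-valued keys come out as repeated key=value pairs, which is the usual CGI convention.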
So here's the interesting question: is it coincidence that I was able to improve this subroutine and eliminate the ref, or is seeing a ref in code generally indicative of a poor algorithm that bears further investigation? Here's another question: is that last sentence pompous enough for you?
Join the Perlmonks Setiathome Group or just click on the link and check out our stats.
$struct = {
    key => {
        sub_key  => 'value',
        sub_key2 => 'value2',
    },
    key2 => 'other_value',
};
Or maybe it was:
@items = (
    'item1',
    [ 'nested1', 'nested2', 'nested3' ],
    [ 'nested4',
      [ 'nested5', 'nested5a' ] ],
    'item6',
    'item7',
    [ 'nested8', 'nested9' ],
);
I don't fully remember the rationale for building a data structure like that, but I think it had something to do with decision-making: when you came across a reference, only one of its elements would be used, with control passing to the next item once it was done (something like that). I just remember that it required us to use 'ref' when processing it, and perhaps a bit of recursion. I'm perfectly willing to accept that this is bad practice; I've learned a lot of Perl since then, and it's likely I'd come up with an alternative way to accomplish what we were doing... *shrug*.
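Processing a structure like that with 'ref' and a bit of recursion might have looked roughly like this sketch. The walk name and the flatten-everything behavior are my guesses; the original dispatch rules (use only one element of a reference, then move on) are fuzzy, so this just shows the ref-then-recurse pattern itself:

```perl
use strict;
use warnings;

# Hypothetical walker: flattens nested array refs depth-first,
# using ref() to decide whether to recurse or keep the scalar.
sub walk {
    my @flat;
    for my $item (@_) {
        if ( ref $item eq 'ARRAY' ) {
            push @flat, walk(@$item);    # recurse into the nested list
        }
        else {
            push @flat, $item;           # plain scalar: keep as-is
        }
    }
    return @flat;
}

my @items = (
    'item1',
    [ 'nested1', 'nested2' ],
    'item3',
);
my @all = walk(@items);    # ('item1', 'nested1', 'nested2', 'item3')
```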
How can I say strongly enough that this subtle point strikes directly at my gut feeling about clean design? If you are trying to overload a design or make it magic, then it is trying to do too much and needs to be rethought!

It may take some attention and focus to see what Randal is talking about here, but the point came up several times in the Re (tilly) 1: Supersplit thread. Basically, making things magic makes them more complex to implement and harder to learn. Remember that the most complex bugs to track down are at your interfaces, and think twice before making those interfaces needlessly difficult to figure out.

Sometimes it is worth breaking this rule, but (like most rules) not without specific good reason.
My way of implementing merlyn's suggestion of having multiple functions with different interfaces is to have one function that constructs your query string from one of those interfaces (e.g. arrays), and then have another that accepts a hash ref, repackages it in terms of array refs, and calls the first.

So you have exactly one real implementation (should you want to change the construction of the query string, it is easy to do so), yet you have two functions: one that takes array refs and can't be called with a hash ref, and another that needs a hash ref. All without doing any test within the function or making the implementation of the critical code more complex.
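A sketch of that layering, assuming made-up names (query_from_pairs and query_from_hashref are not from the thread): the array-based function holds the one real implementation, and the hash-ref function just repackages and delegates.

```perl
use strict;
use warnings;
use URI::Escape qw(uri_escape);

# The one real implementation: takes a list of [ key => values... ]
# array refs and builds the query string.
sub query_from_pairs {
    my @pairs = @_;
    my @parts;
    for my $pair (@pairs) {
        my ( $key, @values ) = @$pair;
        push @parts, uri_escape($key) . '=' . uri_escape($_) for @values;
    }
    return join '&', @parts;
}

# The second interface: accepts a hash ref, repackages it in terms
# of array refs, and calls the first. No ref() test in the core code.
sub query_from_hashref {
    my $hash = shift;
    my @pairs = map {
        my $v = $hash->{$_};
        [ $_, ref $v eq 'ARRAY' ? @$v : $v ];
    } sort keys %$hash;    # sorted here only for predictable output
    return query_from_pairs(@pairs);
}
```

Changing how the query string is built now touches only query_from_pairs; the hash-ref wrapper never needs to know.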