in reply to "deep" unit testing
A couple of thoughts...
I would also call into question the statement that adding sub calls would slow things down; on its own, that smells like premature optimization and/or cargo culting. How often does this code get called? Has the performance difference of inlining the code actually been measured? And even if it is slower, is the difference significant? I.e. if a script runs 1.5 minutes instead of 1 minute, does that really matter when the script only gets run once a day?
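If the cost of the extra sub calls hasn't actually been measured, the core Benchmark module makes that easy. A minimal sketch, with a made-up summing routine standing in for the real logic:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my @data = (1 .. 20);

# Hypothetical stand-in for the extracted piece of logic.
sub summed {
    my $total = 0;
    $total += $_ for @{ $_[0] };
    return $total;
}

cmpthese(-2, {
    # Same work done inline, without the sub call overhead.
    inline => sub {
        my $total = 0;
        $total += $_ for @data;
        $total;
    },
    # Same work behind a named sub, paying for the call each time.
    sub_call => sub { summed(\@data) },
});
```

The output shows the relative rates side by side, which puts an actual number on the "sub calls are slow" claim.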
Although using a constant to optionally add in code is probably a somewhat decent solution, some might argue that by adding test code that is optionally compiled in, one is actually altering the code being run, i.e. you've got different code running during testing than during a normal run.
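For reference, the constant approach usually looks something like the sketch below (MY_TEST_MODE is a made-up environment variable). Because the constant is known at compile time, perl folds the guarded block away entirely when it is false, which is exactly why the code that runs under test differs from the code that runs in production:

```perl
use strict;
use warnings;

# Hypothetical switch; any compile-time-constant value works here.
use constant TESTING => $ENV{MY_TEST_MODE} ? 1 : 0;

sub process {
    my ($input) = @_;
    my $result = $input * 2;    # placeholder for the real work

    if (TESTING) {
        # This block is optimized away at compile time when TESTING
        # is false, so a normal run never even contains it.
        warn "process() got $input, returning $result\n";
    }

    return $result;
}
```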
Although I'm having trouble finding the reference at the moment, I know there was at least one thread here not too long ago that mentioned at least one good CPAN module that made testing even complex data structures possible (it allowed defining a kind of "schema" for Perl data structures, IIRC). Asking the other way around, what is the concrete argument against treating the sub as a single unit and testing the entire output data structure? What do these output structures contain that would make automated testing so difficult? (For example, if it's timestamped data that's hard to nail down, could you add mocks so that the time is always the same during the tests?)
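Test::Deep is one module along those lines (whether or not it's the one from the thread I'm thinking of): it lets you describe the expected shape of a structure with matchers like bag(), re(), and ignore(), so awkward parts such as run-to-run timings can be waved through, or pinned with something like Test::MockTime. A rough sketch against a made-up build_report():

```perl
use strict;
use warnings;
use Test::More;
use Test::Deep;    # deep-structure comparisons with "schema"-like matchers

# Hypothetical function under test; the real one would be the big sub.
sub build_report {
    return {
        count   => 3,
        items   => [qw(foo bar baz)],
        elapsed => 0.0021,            # varies from run to run
    };
}

cmp_deeply(
    build_report(),
    {
        count   => 3,
        items   => bag(qw(bar baz foo)),   # order-insensitive match
        elapsed => ignore(),               # value doesn't matter, key must exist
    },
    'report structure matches the expected shape',
);

done_testing();
```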