Waste? I don't see waste.
The thing I found most vexing was that mocking up objects which contained arbitrary binary data was brain-bending and time-consuming. Let's say I want to write a deserialization method. We'll follow TDD and write a failing test first.
    package Foo;
    sub get_data { $_[0]->{data} }

    package main;
    use strict;
    use warnings;
    use Test::More tests => 1;
    use Bar;

    my $blackbox = bless {
        # three length-prefixed records, then a 16-byte random sentinel
        data => "\x03foo\x03bar\x03baz" . "asdfasdfasdfasdf",
    }, 'Foo';

    my $object       = Bar->new;
    my $deserialized = [ $object->deserialize($blackbox) ];

    is_deeply( $deserialized, [ qw(foo bar baz) ],
        "Deserializer decodes correctly" );
FYI, in order to write that, I had to go look up BER compressed integers and see how the byte-level algorithm worked. Let's hope I got it right.
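In case it saves someone else the lookup: a BER compressed integer is a base-128 encoding in which every byte except the last has its high bit set, and Perl's pack/unpack "w" template implements it directly. A quick sanity check (my own scratch code, not part of the test above):

    use strict;
    use warnings;

    printf "%vd\n", pack( "w", 3 );    # "3"      -- a single byte, 0x03
    printf "%vd\n", pack( "w", 300 );  # "130.44" -- 0x82 0x2C, since
                                       # 300 == 2 * 128 + 44

That's where the "\x03" length prefixes in the mock data above come from.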
Here's the actual code, now that I am allowed to type it.
    use Carp;

    sub deserialize {
        my ( $self, $blackbox ) = @_;
        # Capture everything up to the 16-byte random sentinel, or to
        # the end of the data if the sentinel never appears.
        $blackbox->get_data =~ s/
            (.*?)                                  # the serialized records
            (?: $self->{record_separator} | $ )    # sentinel, or end of data
        //xsm
            or confess("no match");
        # Each record is a BER compressed length followed by that many bytes.
        return unpack( "(w/a)*", $1 );
    }
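To round out the picture, here's a hypothetical serialize() counterpart -- not code from the real module, just a sketch showing how pack's "w/a*" template mirrors the unpack above, assuming $self->{record_separator} holds the random sentinel:

    sub serialize {
        my ( $self, @records ) = @_;
        # Prefix each record with its length as a BER compressed integer...
        my $data = join '', map { pack( "w/a*", $_ ) } @records;
        # ...then append the sentinel that deserialize() scans for.
        return $data . $self->{record_separator};
    }

Feeding it qw(foo bar baz) yields exactly the "\x03foo\x03bar\x03baz..." blob hand-coded into the test.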
Now, even if I risk displeasing the gods of TDD and cheat by typing the code I'm actually going to use before writing my test, it's still a pain to generate this intermediate data. And if I decide to experiment with another algorithm, I wind up throwing away that hard-won mock data, since it rarely transmutes cleanly. Ironically, tests like these are tightly coupled to the code they test, which makes them brittle and hard to adapt or reuse.
You're rewriting the library from scratch, period.
Credit where it's due: Plucene was originally written over a year ago, as a port of Lucene 1.3. The problem is this:
    # time to index 1000 documents:
    Plucene 1.25          276 secs
    KinoSearch 0.021       88 secs
    KinoSearch 0.03_02     35 secs
    Java Lucene            13 secs
I'm now working on a port of the current version of Lucene (essentially 1.9, not yet officially released), leveraging what I learned by reinventing the wheel with KinoSearch.
The same problems of dealing with arbitrary binary data arise, though since this is a port and not an alpha, I won't have to continually rewrite tests as I would have had to (if I'd followed TDD) when I was writing KinoSearch. Perhaps you can suggest an alternative technique for creating the mock objects? You can't algorithmically generate this data; even if you could live with large copy-and-paste ops, too many dependencies are involved to pull it off.
As a bonus, I'll probably end up with several new distros to add to CPAN that aren't useful solely for inverted indexing.
That's where Sort::External came from.
Best,