Dear Monks,
I got hit by an unexpected problem while getting a project done. This project was well specified, coded and tested.
Uhm - well - did I say tested? Kind of. It was tested by the coders who wrote it. And because the project is a web application covering a nontrivial business process cycle, everyone basically tested his own piece. Being responsible for some overall aspects of the project, I tried to test the whole cycle, but I either failed (which led to very frustrating "but it's working here" discussions with the developers) or - when I succeeded - realized that I simply cannot cover all aspects of the process, due to my limited domain knowledge and the fact that I am a single person, while the interaction of many people is exactly what needs to be tested.
Because of this heavy dependence on human interaction, neither a semi-automatic testing tool nor playing the different roles myself (with several alter egos using the system) is feasible.
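(Granted, the mechanical hand-offs between the roles could at least be smoke-tested by scripting several sessions in parallel - it's the human judgment in between that can't be scripted. Here's a minimal sketch of that idea using WWW::Mechanize; every URL, form field and credential in it is a hypothetical placeholder, not our platform's real interface:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use WWW::Mechanize;

    # Two independent sessions, one per role. All URLs, form fields
    # and credentials below are hypothetical placeholders.
    my $client     = WWW::Mechanize->new( autocheck => 1 );
    my $translator = WWW::Mechanize->new( autocheck => 1 );
    my $base       = 'https://portal.example.com';

    # Actor 1: a client logs in and submits a translation job.
    $client->get("$base/login");
    $client->submit_form( fields => { user => 'test_client', pass => 'secret' } );
    $client->get("$base/jobs/new");
    $client->submit_form( fields => {
        source_lang => 'de',
        target_lang => 'en',
        text        => 'Ein kurzer Testsatz.',
    } );

    # Actor 2: a translator logs in and should now see that job.
    $translator->get("$base/login");
    $translator->submit_form( fields => { user => 'test_translator', pass => 'secret' } );
    $translator->get("$base/jobs/open");

    print $translator->content =~ /Testsatz/
        ? "handoff OK: job is visible to the translator\n"
        : "handoff BROKEN: job never reached the translator\n";

But even a handful of scripted actors like these only proves that the plumbing works; it says nothing about whether a real translator finds the workflow usable - which is exactly why I need human testers.)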
So it all boils down to trying to find beta testers. Ha! Been there, tried that - and failed miserably. The programmers who implemented the whole thing have the great gift of "circumtesting" around the possible pitfalls, and some external testers (translators - which is exactly one half of the target users) had a stab at it, but so far I have received exactly zero feedback about problems. And as I definitely assume that there ARE problems, maybe the zero feedback is itself the result of some utter failure.
But then again, maybe not. As I mentioned, translators are only one half of the target group. The other half is simply "users" or "clients" who want a translation job done. Maybe the beta-test translators are sitting there waiting for something to happen, and nothing happens because no one wants to act as the "user"?
OK - to be more concrete - what exactly am I talking about? There is this NLP Portal our company has been operating for quite some time now. A new piece of functionality is the Translation Platform, which should provide a B2B/B2C framework for people who want some text translated the quick and cheap way. By letting users choose Machine Translation, Human Translation - or something in between - AND by utilizing several technologies to ensure the quality of human translation without the need for a second proofreader, this framework, when presented at various talks, was perceived as a "pretty cool way to do it".
Maybe it will be "pretty cool" after its burn-in. Without that, it seems quite hand-warm to me right now. So I suppose you have already figured it out: I have the problem of finding the right people to test the beast, which is nothing I was trained for, nor do I have any experience doing it... If I could solve this problem with coding - OK. But that doesn't apply here. What are your strategies for burning in / beta-testing multi-user environments? How do you get people (who should have no prior knowledge of your system) to participate? Are there best practices? Incentives?
Thanks for helping me out of the pit I've fallen into. I have this big baby before me and am not even able to evaluate whether it is "pretty cool with some flaws", "not cool yet" or "born dead".
Bye
PetaMem All Perl: MT, NLP, NLU
Replies are listed 'Best First'.
Re: A coders work is only half of the game
by spurperl (Priest) on May 13, 2005 at 10:17 UTC

Re: A coders work is only half of the game
by tbone1 (Monsignor) on May 13, 2005 at 12:49 UTC
    by JanneVee (Friar) on May 13, 2005 at 13:07 UTC

Re: A coders work is only half of the game
by dragonchild (Archbishop) on May 13, 2005 at 12:57 UTC
    by kscaldef (Pilgrim) on May 13, 2005 at 17:41 UTC
    by BUU (Prior) on May 13, 2005 at 23:38 UTC
    by dragonchild (Archbishop) on May 14, 2005 at 00:49 UTC
    by kscaldef (Pilgrim) on May 14, 2005 at 06:27 UTC

Re: A coders work is only half of the game
by mattr (Curate) on May 14, 2005 at 17:21 UTC

Re: A coders work is only half of the game
by Avox (Sexton) on May 16, 2005 at 22:21 UTC