in reply to Re: A Fit on NIH
in thread A Fit on NIH

I think we need to do something about this. Michael Schwern's CPANTS project looked promising, but then he abandoned it. I'd like to see peer review of CPAN modules and a database of reviews.

I thought that was the purpose of the "discuss your module before submitting it" part of the CPAN module procedure.

Putting peer reviews (or, for that matter, *any* reviews) of modules in a central place is a good idea, for sure. But who will write those reviews? And should CPAN shoulder the burden of organizing the whole process around it, and take the responsibility for (always possible) errors?

Let's face it - reviewing module code is so difficult and time-consuming that most people prefer to write something new instead (I don't mean reviews like the ones we have on this site under Module Reviews - I mean real code reviews). This is a Bad Thing, agreed. But where is the incentive for people to put their energy into code reviews, when the majority of the community does not value this service?

Christian Lemburg
Brainbench MVP for Perl
http://www.brainbench.com

Re: A Fit on NIH
by Dominus (Parson) on Jan 10, 2001 at 21:42 UTC
    Says clemburg:
    I thought that was the purpose of the "discuss your module before submitting it" part of the CPAN module procedure.
    Perhaps it is. In that case, I submit that that part of the procedure is failing to serve its purpose, and we need to find a more effective solution.

    And should CPAN shoulder the burden of organizing the whole process around it, and take the responsibility for (always possible) errors?
    Here is one way it might work:

    • There is a standard format for quality assessment reports (a rough sketch of one possible format follows this list).
    • Individuals upload reports to their CPAN directories.
    • The CPAN librarian compiles a nightly index of all the reports.
    • CPAN.pm has a feature that will download, summarize, or display all the reports about a particular module.
    • The person downloading the module gets to decide which reports to believe or to heed.
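
    For instance, the "standard format" need not be anything elaborate. Here is a rough sketch, purely for illustration (the field names, values, and reviewer ID are made up, not an existing convention): a report as a short header block, one file per review in the reviewer's own CPAN directory, plus the trivial parser such a format would allow.

        #!/usr/bin/perl
        use strict;
        use warnings;

        # An illustrative quality assessment report: a short header block,
        # one file per review, uploaded to the reviewer's CPAN directory.
        my @report = (
            'Module:     Text::Template',
            'Version:    1.23',
            'Reviewer:   SOMEREVIEWER',
            'Date:       2001-01-10',
            'Verdict:    recommended',
            'Summary:    Clean interface, good docs; tests could be broader.',
        );

        # Because the format is standard, the librarian's nightly indexer
        # and the CPAN.pm display code could share one trivial parser:
        my %review;
        for my $line (@report) {
            $review{lc $1} = $2 if $line =~ /^(\w+):\s*(.*)$/;
        }

        printf "%s %s -- %s by %s\n",
            @review{qw(module version verdict reviewer)};

    The point of standardizing the format is that the indexer, the client, and any summarizing tool can all reuse the same few lines of parsing code.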

    As the reporting system becomes more robust, other developments would be possible. For example, CPAN.pm might be configurable with a list of trusted people, and would display a warning before installing any module that wasn't recommended by a trusted person.
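
    A rough sketch of that check, again purely hypothetical (CPAN.pm has no such option today, and the reviewer names and data below are invented):

        #!/usr/bin/perl
        use strict;
        use warnings;

        # The user lists the reviewers they trust; the client warns before
        # installing anything none of those reviewers has recommended.
        my @trusted = qw(REVIEWERA REVIEWERB);

        # What a summary pulled from the nightly report index might look
        # like: module name => { reviewer => verdict }.
        my %reviews = (
            'Text::Template' => { REVIEWERA => 'recommended' },
            'Some::Module'   => { STRANGER  => 'recommended' },
        );

        for my $module (sort keys %reviews) {
            my $vouched = grep { ($reviews{$module}{$_} || '') eq 'recommended' }
                          @trusted;
            if ($vouched) {
                print "$module: recommended by a trusted reviewer\n";
            }
            else {
                print "$module: WARNING - no trusted recommendation\n";
            }
        }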

    There are a lot of other ways that it could work; this is only one of many possibilities.

    But where is the incentive for people to put their energy into code reviews, when the majority of the community does not value this service?
    I think that if the issue were more visible, people would be more interested in addressing it. Once the review system was in place, it would snowball. People would start saying things like "I chose Text::Template because it had been reviewed and HTML::Template had not." Module authors would seek out reviewers and would start to exchange review services with one another.