
Formal code review specifications and reporting format

by Nuke (Scribe)
on May 16, 2002 at 03:26 UTC ( [id://166911] : perlmeditation )

  I'm a little nervous about writing this, as it's my first node, and I wasn't sure if it should go in Seekers or Q&A. I hope I chose wisely.

  I guess it all started when grep posted something in the chatterbox about becoming a saint. From his node I found How You (Yes You!) Can Get Involved, which got me thinking. I've wanted to contribute in some way, but haven't known how, and this seemed just the ticket.

 As a jumping-off point, I decided to review code I'm already somewhat familiar with, so I've picked YaBB. Maybe it needs it, maybe it doesn't, but it's a start, and it's not so huge a project that I wouldn't know where to begin.

  I do code reviews at work occasionally, and we have templates that we use which simplify the process by showing what should be checked, and provide a concise way to report back.

  Our template is for VB/Pick BASIC. It won't really work for this.

  So I set out to look for something like this on perlmonks. Alas... It's been talked about here, here, here, and no doubt other nodes.

  I found a lot of great points, and some lists of what to look for, but it seemed that nobody had ever pulled it all together and made any hard decisions on what to look for, and what format to submit the information in.

  This is my desire. I'd like to see the above. Something all of us could use as a standard for reviewing code. A Perlmonks standard, if you will.

  My feeling is that there needs to be perhaps three levels of code review:

  1.  A simple review, which just touches on basic items such as use strict and warnings.
  2.  A second level which goes a little more in-depth, covering things like use of CGI and other modules where appropriate as well as basic security.
  3.  Finally, a full review, which would include everything mentioned above, but to much more rigid standards.
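To make the simplest level concrete, here is a rough sketch of what an automated first pass might look like. The helper name and the exact checks are invented for illustration; a real template would cover more items:

```perl
use strict;
use warnings;

# Hypothetical level-1 checker: scan a script's source text for
# the two most basic items -- strict and warnings (or the -w switch).
sub level1_check {
    my ($source) = @_;
    return {
        strict   => $source =~ /^\s*use\s+strict\b/m ? 'ok' : 'MISSING',
        warnings => ( $source =~ /^\s*use\s+warnings\b/m
                   || $source =~ /^#!.*\s-w\b/ ) ? 'ok' : 'MISSING',
    };
}

# A made-up script under review: has -w on the shebang, no strict.
my $sample = <<'END';
#!/usr/bin/perl -w
print "hello\n";
END

my $r = level1_check($sample);
print "use strict:   $r->{strict}\n";   # MISSING
print "use warnings: $r->{warnings}\n"; # ok (via -w)
```

A real review would of course go on to read the code, not just grep it, but even this much gives the reviewer a consistent first line on the report.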

  We all have differing amounts of time that we can devote to projects like this. When we do decide to help out in this manner, it would be great to have a template to use as a starting point, modify it as appropriate, and go from there.

  Am I missing something that's already out there? Or should my first mission to give back to the community be to push for, and assist in, the making of the above? Is there someone who's already got something like this worked up that could be shared with the rest of us?

  Hoping for a yes to the above.



Replies are listed 'Best First'.
Re: Formal code review specifications and reporting format
by vladb (Vicar) on May 16, 2002 at 03:42 UTC
    Excellent post (especially for a starter), Nuke! :)

    To start, let me welcome you to the Monastery. It is my sincere pleasure to see you have found the courage to set out on the glorious passage of sainthood!

    I think your post is a perfect fit for the Perl Monks Discussion forum since you are really trying to discuss a given subject here rather than ask a plain Perl coding question.

    You have picked a nice set of code review levels, but I feel it still lacks a few pieces. For example, at my work we also check for ways in which the code under review could be 'refactored'. This is very much in line with a fabulous discussion you can find here and also here. Of course, we also give top priority to script security, especially if it's a web script. For server-side scripts, which are not directly accessible through a web server, there are other things to look at.

    First, if a script runs as a daemon, we normally look closely at hot spots that could cause memory leaks or grind the entire system to a virtual halt by taking over the CPU (the script should run at an appropriate 'nice' level...)
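The 'nice' point above is the kind of thing a reviewer can check in one line at the top of the daemon. A minimal sketch (the level of 10 is an arbitrary example, not a recommendation):

```perl
use strict;
use warnings;
use POSIX qw(nice);

# At daemon startup, step our own scheduling priority down so a
# runaway loop degrades the system gracefully instead of
# monopolizing the CPU. nice() adds its argument to the current
# nice value and returns the new value.
my $level = nice(10);
defined $level or die "nice failed: $!";
print "daemon now running at nice level $level\n";
```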

    Secondly, in our code reviews we consider how a given script handles files. For example, if file locking is required, is it done properly in the code?
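For reference, the locking pattern a reviewer would look for is roughly this (the temp file here is only for demonstration; real code would lock its actual data file):

```perl
use strict;
use warnings;
use Fcntl qw(:flock);
use File::Temp qw(tempfile);

my ($tmp_fh, $file) = tempfile();
close $tmp_fh;

# Take an exclusive lock before writing; closing the handle
# releases the lock.
open my $out, '>>', $file or die "open $file: $!";
flock($out, LOCK_EX)      or die "flock: $!";
print {$out} "one record, written under an exclusive lock\n";
close $out;

# Readers take a shared lock so they never see a half-written record.
open my $in, '<', $file or die "open $file: $!";
flock($in, LOCK_SH)     or die "flock: $!";
my $line = <$in>;
print $line;
close $in;
```

The things to flag in review are locks taken after the write has started, and code that unlinks or renames a file while another process may still hold a lock on it.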

    UPDATE: I don't have any more votes left for today, but I'll make sure to give your post a ++ tomorrow! ;-)

    UPDATE 1: Oops, was this thread moved to Perl Monks Discussions from its initial Seekers of Perl Wisdom spot, or do I suffer a late-night migraine? ;)

    UPDATE 2: Ahh, just found it out from andreycheck that I'm apparently a perfectly normal human being; a monk, but still in good spiritual health. Thanks guys for moving the post as I too felt it would be the right thing to do :p

    $"=q;grep;;$,=q"grep";for(`find . -name ".saves*~"`){s;$/;;;/(.*-(\d+) +-.*)$/;$_=["ps -e -o pid | "," $2 | "," -v "," "]`@$_`?{print" ++ $1"}:{print"- $1"}&&`rm $1`;print"\n";}
Re: Formal code review specifications and reporting format
by pdcawley (Hermit) on May 16, 2002 at 11:45 UTC
    I'm faintly wary of 'rigid' criteria for code reviews, especially having spent some time dealing with different 'legacy' code bases, which all seem to go to hell in their own different, yet depressingly similar ways.

    My general approach when asked to do a review is to follow a checklist along these lines:

    1. If you can, find out the level of experience of the person whose code you're reviewing. Ask them what they think is good or bad about the code. Make notes.
    2. Do a quick read through the code, looking for stuff that jumps out as being particularly bad or particularly good. Make a few notes.
    3. Compare your notes about what you liked/disliked with what the programmer liked/disliked.
    4. Quickly outline good and bad things about the code. Try and find something to praise.
    5. Arrange the problems in order of importance. If you have things which you think are bad and the programmer thinks are good, or vice versa, treat them as the most important part.
    6. Start with the most important problem, explain what the problem is and suggest fixes, bearing in mind the experience of the person you're reviewing for. Repeat until time runs out.
    Try, if you possibly can, to do the last three steps face to face rather than writing 'em down. Best of all is when you have access to a whiteboard, CRC cards and an editor window or two, but make do with what you've got.

    If you can't do that, write the stuff down. Start with the good things, give an overview of issues to address. Address them, suggesting solutions (or requesting further discussion for the hard bits), and finish by reiterating the good bits and inviting further discussion.

    Yes, these suggestions are touchy feely, but I'm a touchy feely kind of guy. Checklists are all well and good, but try and go beyond the 'ticking boxes' Kwalitee Assurance approach.

    Try and treat code reviews (either giving or receiving them) as a learning opportunity. Be aware of your reviewee's feelings and don't trample on them unduly. But don't ignore problems either, that's doing nobody any favours.

Re: Formal code review specifications and reporting format
by tadman (Prior) on May 16, 2002 at 03:49 UTC
    This post got me thinking about on-line code review with people you may not necessarily be directly involved with. For distributed programming, there are tools like Bugzilla and Bonsai which can build a good working environment, but what about tools directly related to facilitating refactoring and feedback?

    You could use a "bulletin-board" type tool to do this, maybe with some modifications. Under an XP-type environment, you are going to want to review changes and refactorings to make sure they are better and more robust.

    Are there any tools which can help support code review that are readily available?
Re: Formal code review specifications and reporting format
by ferrency (Deacon) on May 16, 2002 at 15:43 UTC
    Reading Nuke's original post, and the responses to it so far, I realized that there are different reasons for doing a code review, and the nature of the review depends on your reasons for doing the review. Some reasons I can think of off the top of my head:

    • to find and fix bugs, security holes, and the like
    • to find and fix design flaws and/or plan for refactoring
    • to facilitate knowledge transfer to new programmers on a project
    • as part of an employee performance review
    • as a part of a job interview

    Most likely, no single code review will be done for all of these reasons. Depending on the reasons for the review, the "important parts" of a review will be different.

    Of course, these observations are made in a world larger than just PerlMonks.


Re: Formal code review specifications and reporting format
by ehdonhon (Curate) on May 16, 2002 at 16:29 UTC

    Hello Nuke

    I'm of the personal opinion that the reason we have come this far without developing a really good standard for code review is that there is no such thing.

    As Ferrency points out, there are many reasons that one could possibly want to do a code review. Finding one template that covers all of them would be very hard. For example, a code review done for the purpose of a job performance review would probably not be very useful if you were looking for bugs. Likewise, I don't think I would want my performance review to be dependent on the number of bugs in my code. :) One solution might be to have an all-encompassing template to cover all bases. But the problem with that is it would be so long and/or complicated that nobody would ever want to use it!

    pdcawley also touches on an issue that was recently brought up by stephen in Failure To Refactor: not all programmers are working at the same level. Without the same experiences, we don't look at code the same way.

    I would suggest that a more fruitful endeavor might be to work on developing a standard template to be given to the reviewer that specifies what is to be reviewed, and how to review it.

        Those are all good and very valid points. What I'm proposing here is that we create a generic template that could be used both as a checklist and a submission format when the review is complete. As I mentioned, the template would be a starting point, and could/should be modified as needed. Those who are new to code reviews would have a great checklist to get them started, and those who are experienced would have a common format to start with that could be modified to suit the code being reviewed.

        A common submission format would also allow the creation of parsing routines to do all manner of interesting things with the contents of the review.

        As long as folks understand that this is just a starting point, and they keep to the format of the review, I think we could end up with a very flexible and robust system for accomplishing this. I may disagree with you on what needs to be reviewed, but it's moot if we can both review code in whatever way we see fit.

        If we can get agreement on this, perhaps we could move on to the next stage and talk about the implementation of the above. (Do we create scripts to help us do this, or just a text template? Maybe a script that spits out a text template based on initial input from the reviewer? :) )
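The "script that spits out a text template" idea could be as simple as the sketch below. The levels match the three proposed above; the section names are invented placeholders, not an agreed-upon standard:

```perl
use strict;
use warnings;

# Hypothetical checklist items per review level (illustrative only).
my %sections = (
    1 => [ 'strict/warnings', 'obvious bugs' ],
    2 => [ 'strict/warnings', 'obvious bugs',
           'module use (CGI etc.)', 'basic security' ],
    3 => [ 'strict/warnings', 'obvious bugs',
           'module use (CGI etc.)', 'security (taint, input validation)',
           'refactoring candidates', 'style' ],
);

# Build a fill-in-the-blanks review template for the chosen level.
sub make_template {
    my ($level) = @_;
    my $items = $sections{$level} or die "unknown level $level";
    my $out = "Code review (level $level)\n";
    $out .= sprintf "  [ ] %s\n", $_ for @$items;
    return $out;
}

print make_template(2);
```

Because the output is a fixed format, the parsing routines mentioned above could read completed reviews back in just as easily.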