Best Practices for Peer Code Review

I was prompted by a survey on review practices to dig up some more literature on effective peer code review strategies. Many of the papers are long and detailed, but 11 Best Practices for Peer Code Review is a quick read packed with actionable information.

I highly recommend implementing a code review process if you do not already have one. Code reviews have helped me improve significantly over the years, both by reducing errors and by exposing me to new techniques and design styles.

Note that this paper is somewhat biased, since SmartBear produces a code review tool, but the practices are still worth reviewing and incorporating. Just gloss over the sales pitches in the sidebars!

Read 11 Best Practices for Peer Code Review here

Many of these points are also detailed in Social Aspects of Peer Review, featured as a December reading recommendation.

My Highlights & Notes

Strategy 1: Review fewer than 200-400 lines of code at a time

  1. Beyond 400 lines, the ability to find defects diminishes
  2. Reviews in the 200-400 LOC range yield 70-90% of the defects actually present
  3. As the amount of code under review grows past that range, the number of defects found drops considerably relative to the number of lines reviewed (a simple pre-review size check is sketched below)
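
To make this guideline easy to act on, here is a small sketch (my own, not from the paper) that counts the lines changed on a branch and warns when a change outgrows the 400-line guideline. It assumes you run it inside a git repository and that your base branch is named main:

```python
# A minimal sketch of a pre-review size check. The 400-line threshold comes
# from the paper; the script itself and the "main" base branch are my own
# assumptions about your setup.
import subprocess

MAX_REVIEW_LOC = 400  # upper bound suggested by the study

def changed_lines(base: str = "main") -> int:
    """Sum added and deleted lines between HEAD and the given base branch."""
    out = subprocess.run(
        ["git", "diff", "--numstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, deleted, _path = line.split("\t", maxsplit=2)
        if added != "-":  # binary files report "-" instead of line counts
            total += int(added) + int(deleted)
    return total

if __name__ == "__main__":
    loc = changed_lines()
    if loc > MAX_REVIEW_LOC:
        print(f"Change is {loc} lines; consider splitting it to stay under {MAX_REVIEW_LOC}.")
    else:
        print(f"Change is {loc} lines; fine to send for review.")
```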

Strategy 2: Inspection rate of less than 300-500 lines of code per hour

You must take your time on code reviews. Your results will improve. If you are reviewing code too quickly, you’re not reviewing well. For perspective, at 300-500 LOC per hour, a 400-line change warrants roughly 50-80 minutes of review, which fits neatly within the time limit in the next strategy.

Strategy 3: Limit your reviews to 60-90 minutes tops.

Always spend at least 5 minutes reviewing code, even if the change is just one line. Even a single line can have a massive impact on the system’s behavior.

Strategy 4: Annotate code before the review begins

The idea of “author preparation” is that authors should annotate their source code before the review begins. We invented the term to describe a certain behavior pattern we measured during the study, exhibited by about 15% of the reviews. Annotations guide the reviewer through the changes, showing which files to look at first and defending the reason and methods behind each code modification. These notes are not comments in the code, but rather comments given to other reviewers.

Our theory was that because the author has to re-think and explain the changes during the annotation process, the author will himself uncover many of the defects before the review even begins, thus making the review itself more efficient. As such, the review process should yield a lower defect density, since fewer bugs remain.

Sure enough, reviews with author preparation have barely any defects compared to reviews without author preparation.

We also considered a pessimistic theory to explain the lower bug findings. What if, when the author makes a comment, the reviewer becomes biased or complacent, and just doesn’t find as many bugs? We took a random sample of 300 reviews to investigate, and the evidence definitively showed that the reviewers were indeed carefully reviewing the code – there were just fewer bugs.

Strategy 5: Establish quantifiable goals and capture metrics so you can improve your process.

You don’t know if you’re doing a good job if you don’t have a metric to compare against. Decide those metrics in advance.
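
To make “decide those metrics in advance” concrete: the standard review metrics are inspection rate (LOC per hour), defect rate (defects per hour), and defect density (defects per thousand LOC). Here is a minimal sketch of the arithmetic, assuming you log each review’s size, duration, and defect count; the ReviewRecord format is made up for illustration:

```python
# A minimal sketch of the three standard review metrics, assuming you
# capture each review's size, duration, and defect count somewhere.
# The record format here is illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    loc: int        # lines of code inspected
    minutes: float  # time spent reviewing
    defects: int    # defects found

def inspection_rate(r: ReviewRecord) -> float:
    """Lines of code inspected per hour (target: under 300-500)."""
    return r.loc / (r.minutes / 60)

def defect_rate(r: ReviewRecord) -> float:
    """Defects found per hour of review."""
    return r.defects / (r.minutes / 60)

def defect_density(r: ReviewRecord) -> float:
    """Defects found per 1000 lines of code."""
    return r.defects / r.loc * 1000

review = ReviewRecord(loc=350, minutes=60, defects=4)
print(f"{inspection_rate(review):.0f} LOC/hour")      # 350 LOC/hour
print(f"{defect_rate(review):.1f} defects/hour")      # 4.0 defects/hour
print(f"{defect_density(review):.1f} defects/kLOC")   # 11.4 defects/kLOC
```

Tracked over time, numbers like these let you see whether a process change (say, requiring author annotations) actually moved anything.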

Strategy 6: Checklists substantially improve results for both authors and reviewers

Not surprising! This advice is often repeated in many fields.

Checklists are a highly recommended way to find the things you forget to do, and are useful for both authors and reviewers. Omissions are the hardest defects to find – after all, it’s hard to review something that’s not there. A checklist is the single best way to combat the problem, as it reminds the reviewer or author to take the time to look for something that might be missing. A checklist will remind authors and reviewers to confirm that all errors are handled, that function arguments are tested for invalid values, and that unit tests have been created.

This example is something I will be looking to incorporate into my own process:

Another useful concept is the personal checklist. Each person typically makes the same 15-20 mistakes. If you notice what your typical errors are, you can develop your own personal checklist (PSP, SEI, and CMMI recommend this practice too). Reviewers will do the work of determining your common mistakes. All you have to do is keep a short checklist of the common flaws in your work, particularly the things you forget to do.
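
A personal checklist needs no tooling at all, but if you like keeping it next to your code, something this small works. This is my own sketch; the items are illustrative examples of common recurring mistakes, not a list taken from the paper:

```python
# A minimal sketch of a personal pre-review checklist kept as data.
# The items below are illustrative examples, not the paper's list.
PERSONAL_CHECKLIST = [
    "All error paths handled (no swallowed exceptions)",
    "Function arguments validated against invalid values",
    "Unit tests added or updated for the change",
    "No leftover debug logging or commented-out code",
    "User-facing strings and docs updated",
]

def run_checklist(items: list[str]) -> None:
    """Prompt through each item before declaring a change review-ready."""
    for item in items:
        answer = input(f"[ ] {item} -- done? (y/n) ")
        if answer.strip().lower() != "y":
            print(f"Not ready: '{item}' is still open.")
            return
    print("Checklist clear; ready for review.")

if __name__ == "__main__":
    run_checklist(PERSONAL_CHECKLIST)
```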

Strategy 7: Verify that defects are actually fixed

Re-review before submitting the changes!


Note: The next three items mentioned in the paper relate to the social pitfalls of peer review. You can find more information on the social aspects of peer review here.

Strategy 8: Managers must foster a good code review culture in which finding defects is viewed positively.

Code review can do more for true team building than almost any other technique we’ve seen – but only if managers promote it as a means for learning, growing, and communication. It’s easy to see defects as a bad thing – after all, they are mistakes in the code – but fostering a negative attitude towards defects found can sour a whole team, not to mention sabotage the bug-finding process.

Managers must promote the viewpoint that defects are positive. After all, each one is an opportunity to improve the code, and the goal of the code review process is to make the code as good as possible. Every defect found and fixed in peer review is a defect a customer never saw, another problem QA didn’t have to spend time tracking down.

Teams should maintain the attitude that finding defects means the author and reviewer have successfully worked as a team to jointly improve the product. It’s not a case of “the author made a defect and the review found it.” It’s more like a very efficient form of pair-programming.

Strategy 9: Beware the “big brother” effect

Metrics should be used to measure the efficiency of the process or the effect of a process change. Remember that the most difficult code is often handled by your most experienced developers; that code is in turn more prone to error – and more heavily reviewed, which means more defects found in it. So a large defect count is often more attributable to the complexity and risk of a piece of code than to the author’s abilities.

If metrics do help a manager uncover an issue, singling someone out is likely to cause more problems than it solves. We recommend that managers instead deal with any issues by addressing the group as a whole. It’s best not to call a special meeting for this purpose, or developers may feel uneasy because it looks like there’s a problem. Instead, they should just roll it into a weekly status meeting or other normal procedure.

Managers must continue to foster the idea that finding defects is good, not evil, and that defect density is not correlated with developer ability. Make it clear to the team that defect counts, particularly the number of defects introduced by any one team member, will never be held against anyone or used in performance evaluations.

Strategy 10: The Ego Effect: Do at least some code review, even if you don’t have time to review it all.

Imagine yourself sitting in front of a compiler, tasked with fixing a small bug. But you know that as soon as you say “I’m finished,” your peers – or worse, your boss – will be critically examining your work. Won’t this change your development style? As you work, and certainly before you declare code-complete, you’ll be a little more conscientious. You’ll be a better developer immediately because you want the general timbre of the “behind your back” conversations to be, “His stuff is pretty tight. He’s a good developer;” not “He makes a lot of silly mistakes. When he says he’s done, he’s not.”

The “Ego Effect” drives developers to write better code because they know that others will be looking at their code and their metrics. And no one wants to be known as the guy who makes all those junior-level mistakes. The Ego Effect drives developers to review their own work carefully before passing it on to others.

Reviewing 20-33% of the code will probably give you maximal Ego Effect benefit with minimal time expenditure, and reviewing 20% of your code is certainly better than none!
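
One way to act on this, sketched below with my own assumptions (the paper doesn’t prescribe a mechanism), is to spot-check a random sample of the changed files. Random selection also preserves the Ego Effect: any change might be the one that gets reviewed.

```python
# A minimal sketch of spot-checking a random subset of changed files so
# roughly a quarter of changes still get reviewed. My own illustration,
# not a mechanism from the paper; assumes a git repo with a "main" branch.
import random
import subprocess

REVIEW_FRACTION = 0.25  # within the 20-33% band suggested above

def changed_files(base: str = "main") -> list[str]:
    """List files changed between HEAD and the given base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if line]

def sample_for_review(files: list[str]) -> list[str]:
    """Pick a random subset, always at least one file."""
    k = max(1, round(len(files) * REVIEW_FRACTION))
    return random.sample(files, k)

if __name__ == "__main__":
    files = changed_files()
    if files:
        print("Spot-check these files:")
        for path in sample_for_review(files):
            print(f"  {path}")
```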

Strategy 11: Lightweight-style code reviews are efficient, practical, and effective at finding bugs.

Heavyweight, meeting-based inspections involving multiple developers are not worth the engineering time they take up. Use a lightweight tool and process, save the time, and know that it is still finding the bugs.
