Blackmoor Vituperative

Wednesday, 2013-11-20

Peer Review Lessons from Open Source

Filed under: Programming — bblackmoor @ 09:43

I recently found an article in the Nov/Dec 2012 issue of IEEE Software that sounded interesting, “Contemporary Peer Review in Action: Lessons from Open Source Development”. (Rigby, P., Cleary, B., Painchaud, F., Storey, M., & German, D. (2012). Contemporary Peer Review in Action: Lessons from Open Source Development. IEEE Software, 29(6), 56-61.)

The authors examined approximately 100,000 peer reviews from open source projects, including the Apache httpd server, Subversion, Linux, FreeBSD, KDE, and Gnome. They compared these with the more formal software inspection and quality control methods traditionally used in complex, proprietary (non-open source) projects.

The open source reviews are minimal, and reviewers self-select the sections they will review. As a result, people review the code they are most competent to review (or at least most interested in reviewing). The formal code inspections used on proprietary projects are cumbersome, and reviewers are assigned their sections, so they are often unfamiliar with the code they are reviewing. The open source peer reviews are completed more efficiently and are more likely to catch non-obvious errors, but they lack traceability.

As a result of their research and analysis, the authors present five lessons drawn from open source projects that can benefit proprietary projects:

  1. Asynchronous reviews: Asynchronous reviews support team discussions of defect solutions and find the same number of defects as co-located meetings in less time. They also enable developers and passive listeners to learn from the discussion.
  2. Frequent reviews: The earlier a defect is found, the better. OSS developers conduct all-but-continuous, asynchronous reviews that function as a form of asynchronous pair programming.
  3. Incremental reviews: Reviews should be of changes that are small, independent, and complete.
  4. Invested, experienced reviewers: Invested experts and codevelopers should conduct reviews because they already understand the context in which a change is being made.
  5. Empower expert reviewers: Let expert developers self-select changes they’re interested in and competent to review. Assign reviews that nobody selects. (A toy sketch of this policy follows the list.)
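
To make lesson 5 concrete, here is a toy sketch of a self-select-with-fallback assignment policy. Everything in it (the change IDs, the reviewer names, the round-robin fallback) is my own invented illustration, not anything the paper prescribes.

```python
# Hypothetical illustration of lesson 5: expert reviewers self-select
# the changes they care about, and only the leftovers are assigned.
from itertools import cycle

def assign_reviews(changes, volunteered, reviewers):
    """Honor self-selections; round-robin assign anything nobody claimed."""
    assignments = dict(volunteered)       # change -> self-selected reviewer
    fallback = cycle(reviewers)
    for change in changes:
        if change not in assignments:     # nobody volunteered for this one
            assignments[change] = next(fallback)
    return assignments

changes = ["patch-101", "patch-102", "patch-103"]
volunteered = {"patch-101": "alice"}      # alice picked the code she knows
print(assign_reviews(changes, volunteered, ["bob", "carol"]))
# {'patch-101': 'alice', 'patch-102': 'bob', 'patch-103': 'carol'}
```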

The authors go on to make three specific recommendations:

  1. Light-weight review tools: Tools can increase traceability for managers and help integrate reviews with the existing development environment.
  2. Nonintrusive metrics: Mine the information trail left by asynchronous reviews to extract light-weight metrics that don’t disrupt developer workflow. (See the sketch after this list.)
  3. Implementing a review process: Large, formal organizations might benefit from more frequent reviews and more overlap in developers’ work to produce invested reviewers. However, this style of review will likely be more amenable to agile organizations that are looking for a way to run large, distributed software projects.
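
The “nonintrusive metrics” recommendation is the one that lends itself most directly to code. Below is a minimal sketch, assuming a git repository whose commits carry “Reviewed-by:” trailers (the convention the Linux kernel uses); the paper doesn’t prescribe any particular tooling, so treat this purely as one possible way to mine the review trail without disrupting anyone’s workflow.

```python
# Mine review activity from the history git already keeps, rather than
# asking developers to log anything extra. Assumes commits use
# "Reviewed-by:" trailers; repo_path is a placeholder.
import subprocess
from collections import Counter

def reviewed_by_counts(repo_path="."):
    """Count commits signed off by each reviewer via Reviewed-by trailers."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log",
         "--format=%(trailers:key=Reviewed-by,valueonly)"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Commits without the trailer print blank lines; skip them.
    return Counter(line.strip() for line in log.splitlines() if line.strip())

if __name__ == "__main__":
    for reviewer, count in reviewed_by_counts().most_common(10):
        print(f"{count:5d}  {reviewer}")
```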

To be honest, I don’t have enough experience to have an informed opinion on these recommendations as they pertain to complex, proprietary projects. Virtually all of the projects I have worked on have been distributed, open-source projects, and nearly all of those had less peer review than I think they should have. That being said, the authors’ recommendations and the “lessons” on which they’ve based them seem reasonable to me, and do not contradict my own experience.