From Academia to Industry: Coverity

The February 2010 issue of Communications of the ACM has an article by the team behind the Coverity static analysis tool, describing how they went from research project to commercial tool. It is quite interesting, and I recognize many of the effects that real customers have on a tool from my own experience at IAR and Virtutech (now part of Wind River).

Basically, the article describes the changes in mindset that a development team goes through as it progresses from research, to enthusiastic early adopters, to normal users. The large population of normal users wants useful results, not necessarily a deep understanding of how the tool works. In my experience, this leads to “ah, why didn’t we see that” moments as big (and in hindsight obvious) flaws in the product and its user interface are revealed.

It also leads to “argh, why do we have users” moments when users want something that simply is not feasible, sensible, or even sane. There is a nice quote from the article that really made me laugh:

Users really want the same result from run to run. Even if they changed their code base. Even if they upgraded the tool. Their model of error messages? Compiler warnings. Classic determinism states: the same input + same function = same result. What users want: different input (modified code base) + different function (tool version) = same result. As a result, we find upgrades to be a constant headache. Analysis changes can easily cause the set of defects found to shift…

There are so many similar requests made by users of technically advanced tools… and finding ways to get around them is part of the real fun of technical marketing. Reconciling impossible requirements with what is technically feasible, in a way that still meets customer expectations, is a very nice challenge.

In this case, the fundamental issue is that the Coverity tool is an approximation, not an exact tool. Small fluctuations in the input or in the tool itself can change the set of issues flagged in a body of code. If a tool tried to be exact, its runtime complexity would explode. Even going some distance in that direction, as the Polyspace Verifier (now part of MathWorks) does, drastically increases the runtime. For more on static analysis in practice, I recommend an MSc thesis from Linköping.
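To make the approximation point concrete, here is a small, purely illustrative C sketch (my example, not from the article): whether the possible NULL dereference in use() gets reported depends on how far the analysis follows the miss path out of lookup(). A shallow, intraprocedural pass may stay silent; a deeper interprocedural pass flags it, so a tool upgrade that changes analysis depth shifts the reported defect set even when the code has not changed.

    /* Illustrative only: a defect whose detection depends on analysis depth. */
    #include <stddef.h>
    #include <string.h>

    struct entry { const char *name; int value; };

    static struct entry *lookup(struct entry *table, size_t n, const char *key)
    {
        for (size_t i = 0; i < n; i++)
            if (strcmp(table[i].name, key) == 0)
                return &table[i];
        return NULL;  /* the "miss" path an interprocedural analysis must track */
    }

    int use(struct entry *table, size_t n)
    {
        struct entry *e = lookup(table, n, "missing-key");
        return e->value;  /* possible NULL dereference if the key is absent */
    }

An exact tool would have to decide, for every call site, whether that NULL return is actually reachable, which is where the runtime blowup comes from.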

The article is a good read for anyone interested either in static checking as a technology or in how to commercialize good research.
