Static program analysis is a key component of many software development tools, including compilers, development environments, and verification tools. Practical applications of static analysis have grown in recent years to include tools by companies such as Coverity, Fortify, GrammaTech, IBM, and others. Analyses are often expected to be sound in that their result models all possible executions of the program under analysis. Soundness implies the analysis computes an over-approximation in order to stay tractable; the analysis result will also model behaviors that do not actually occur in any program execution. The precision of an analysis is the degree to which it avoids such spurious results. Users expect analyses to be sound as a matter of course, and desire analyses to be as precise as possible, while being able to scale to large programs.
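The trade-off between soundness and precision can be made concrete with a toy abstract interpreter. The sketch below (entirely hypothetical, not from the article) shows a sign analysis: it soundly covers every possible concrete result, but at the cost of spurious behaviors when the signs of the operands conflict.

```python
# Hypothetical sketch: a sign analysis that soundly over-approximates.
# Abstract values: "pos", "neg", "zero", and "top" (sign unknown).

def abs_add(a, b):
    """Abstract addition over signs; falls back to 'top' when unsure."""
    if a == "zero":
        return b
    if b == "zero":
        return a
    if a == b:          # pos + pos is pos; neg + neg is neg
        return a
    return "top"        # pos + neg could be anything: over-approximate

# Concretely, x = 1 and y = -1 gives x + y == 0, but the analysis only
# knows x is "pos" and y is "neg", so it must answer "top": sound
# (zero is covered) but imprecise (positive and negative results are
# spuriously included as well).
print(abs_add("pos", "neg"))  # → top
print(abs_add("pos", "pos"))  # → pos
```

The "top" answer is exactly the over-approximation the article describes: it models executions (a positive or negative sum) that never actually occur.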
Soundness would seem essential for any kind of static program analysis. Soundness is also widely emphasized in the academic literature. Yet, in practice, soundness is commonly eschewed: we are not aware of a single realistic whole-program analysis tool (for example, tools widely used for bug detection, refactoring assistance, programming automation, and so forth) that does not purposely make unsound choices. Similarly, virtually all published whole-program analyses are unsound and omit conservative handling of common language features when applied to real programming languages.
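A typical deliberate unsound choice is ignoring dynamic language features such as reflection or eval. The toy reachability analysis below (a hypothetical illustration, not any real tool's algorithm) simply drops "dynamic" call edges, much as many real whole-program tools skip reflective calls.

```python
# Hypothetical sketch of a deliberately unsound choice: a toy
# "reachable functions" analysis over an explicit call map that
# drops dynamic-dispatch/eval edges, as many real tools drop
# reflection, rather than conservatively assuming any target.

def reachable(call_map, entry):
    """Transitively collect callees, skipping '<dynamic>' edges."""
    seen, work = set(), [entry]
    while work:
        fn = work.pop()
        if fn in seen:
            continue
        seen.add(fn)
        for callee in call_map.get(fn, []):
            if callee == "<dynamic>":   # unsound: silently ignored
                continue
            work.append(callee)
    return seen

call_map = {"main": ["parse", "<dynamic>"], "parse": ["log"]}
print(sorted(reachable(call_map, "main")))  # → ['log', 'main', 'parse']
# Whatever the dynamic call actually invokes at run time is missed,
# so the result no longer models all possible executions.
```

The sound alternative, assuming the dynamic edge could target any function, would be dramatically less precise, which is precisely why tools make the unsound choice.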
Hear hear! I frequently make points like this to counter unreasonable claims by some tool vendors that their offerings are invariably sound. Most end users are prepared to believe such claims with little skepticism because they don't know to ask "under what assumptions?"
Disclaimer: I lead the team at GrammaTech that develops CodeSonar, a tool that is definitely not sound, nor soundy for that matter.
The use of the term "soundiness" resonated with me for several reasons. First of all, Colbert's use of "truthiness" came to mind. He, of course, was talking about opinion ruling over fact.
But this current usage bears directly on the issue of truthfulness in many ways, and on these two in particular: 1) if a computational system is not sound, how do we know whether it is operationally reliable? (Set aside, for the moment, the multitude of other issues being raised, but be cognizant that many system developers think they have no problem in this regard because of the numerical nature of their domain processing.) 2) How do we increase awareness of lurking problems in a climate where hot-dogging via code, encouraged by the companies involved, throws out systems continually and seemingly on the fly? And how do we try to alter the willy-nilly patterns that have been so prevalent over the past decade or so?
In short, "almost true" is our norm everywhere. Just as "soundiness" is something to engineer, so too is "truth" itself. Hence my focus: truth engineering.