
Related Work

 

Early work on metrics for object-oriented software includes an OOPSLA paper by Chidamber and Kemerer[5] and a book by Lorenz[13]. Testing metrics for the object-oriented paradigm have emerged only recently, and research on testability metrics is meager and limited to the procedural paradigm. Several code models and tools have been proposed to determine the testability of procedural code, but they cannot be applied directly to object-oriented programs: they recognize neither the associations among the methods of a class nor the state of the objects created from the class.

Among the measures of interest in the procedural paradigm is the Domain/Range Ratio (DRR) metric proposed by Voas[24]. This simple measure estimates the testability of a procedure from the information flowing into and out of it. Although DRR establishes an upper bound on the testability of the code, it does not help locate the particular piece of code where an internal data state collapse occurs. Voas has also developed the PIE technique[23] to help identify locations that are likely to hide faults; it uses a set of semantic and syntactic mutants to identify locations with a high probability of internal data state collapse[23].
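To make the idea concrete, the following sketch (ours, not Voas's) computes DRR as the ratio of the cardinality of a procedure's input domain to the cardinality of its output range; the class name and example cardinalities are illustrative. A large ratio means many distinct inputs map to few outputs, which tends to hide internal data state errors and suggests lower testability.

    // A minimal sketch of the Domain/Range Ratio idea; the class name and
    // example cardinalities are illustrative, not taken from Voas[24].
    public final class DomainRangeRatio {

        // DRR = |domain| / |range|.  Larger values suggest the procedure
        // collapses many inputs onto few outputs, lowering testability.
        public static double drr(double domainCardinality, double rangeCardinality) {
            if (rangeCardinality <= 0) {
                throw new IllegalArgumentException("range cardinality must be positive");
            }
            return domainCardinality / rangeCardinality;
        }

        public static void main(String[] args) {
            // A predicate over 32-bit integers: 2^32 possible inputs, 2 outputs.
            System.out.println("predicate DRR = " + drr(Math.pow(2, 32), 2));
            // An identity-like copy: every input maps to a distinct output.
            System.out.println("copy DRR      = " + drr(Math.pow(2, 32), Math.pow(2, 32)));
        }
    }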

A major goal of our research has been to estimate the effort required to test a class at a very early stage of the development life cycle. Because testing ultimately exercises the actual code, computing the metric early limits the testing-related attributes that are available for the computation, so our metric calculates an upper bound on the testability of a class. We also feel that the PIE model, although quite illustrative, is computationally intensive and impractical to apply using only specifications; its reliance on syntactic and semantic mutation testing also influenced our decision.

We have designed the metric to use the design information available in the class specifications. The attributes available from a specification include the data attributes of the class and the method names together with their signatures. The objects returned from each method are also considered.
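For concreteness, the sketch below shows one hypothetical way to represent this specification-level information; the type names are ours, and the metric itself is defined in the next section.

    import java.util.List;

    // Hypothetical containers for the information available from a class
    // specification before any code exists: data attributes, method names
    // with their signatures, and the type of object each method returns.
    record MethodSpec(String name, List<String> parameterTypes, String returnType) { }

    record ClassSpec(String className,
                     List<String> attributeTypes,
                     List<MethodSpec> methods) {

        // Raw counts of the kind the metric can draw on at this stage.
        int attributeCount() { return attributeTypes.size(); }
        int methodCount()    { return methods.size(); }
    }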

The metric focuses on the information content of the class specification rather than on the actual interaction of the methods and their messages. Although one might argue that this limits the accuracy of the forecast, in our opinion the metric clearly identifies the areas of a class that deserve additional attention. This information can be used to estimate and schedule resource allocation during the testing phase of the life cycle.

We have validated the metric against the formal list of desiderata proposed by Weyuker[25]. Although the list has its drawbacks, the discussion provides one form of theoretical validation of the metric. Additional validation work has been carried out and will be discussed in Section 5.

We have also developed a testing architecture that improves the testability of software by increasing the visibility of the internal attributes of objects[19]. This organizing principle reduces the testing effort by providing a test class for each production component and by using language mechanisms to overcome the information hiding built into the design.
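The fragment below is a minimal illustration of that principle, not the architecture from [19]: a production class is paired with a dedicated test class, and a language mechanism (here, Java package-level access) gives the tester visibility into an internal attribute that would otherwise be hidden. The class names are hypothetical.

    // A production class whose internal state is made visible to its
    // paired test class through package-level access (class names are
    // hypothetical; [19] describes the actual architecture).
    class BoundedCounter {
        private final int limit;
        int current;                      // package-private: visible to the test class

        BoundedCounter(int limit) { this.limit = limit; }

        void increment() {
            if (current < limit) {
                current++;
            }
        }
    }

    // One test class per production component, inspecting internal state directly.
    class BoundedCounterTest {
        boolean incrementStopsAtLimit() {
            BoundedCounter c = new BoundedCounter(2);
            c.increment();
            c.increment();
            c.increment();                // should have no effect past the limit
            return c.current == 2;        // direct access to the internal attribute
        }

        public static void main(String[] args) {
            System.out.println("incrementStopsAtLimit: "
                    + new BoundedCounterTest().incrementStopsAtLimit());
        }
    }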





