Abstract

Software metrics are essential for component certification and for the development of high-quality software in general. Accordingly, research in the area of software metrics has been active, and a wide range of metrics has been proposed. However, the variety of metrics proposed for measuring the same quality attribute suggests that there may be inconsistencies among the measurements computed using these metrics. In this paper, we report a case study considering class cohesion as the quality attribute of concern. We present the results of our empirical investigation, which confirm that prominent object-oriented class cohesion metrics provide inconsistent measurements. We also present an analysis of the uncertainty that should be attributed to these class cohesion measurements because of their mutual inconsistencies. Considering such uncertainty, we provide a framework for associating a probability distribution of error with the measurements computed by each metric, thus enabling an assessment of how reliable each metric's measurements are when used to rank a set of classes by their cohesion. The error probability distribution is useful in practice, where it is seldom feasible to apply a large set of metrics and a single metric is typically used instead.
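To illustrate the kind of inconsistency the abstract refers to, the following hypothetical sketch (not taken from the paper) computes two well-known class cohesion metrics, LCOM1 (the number of method pairs sharing no attribute; lower means more cohesive) and TCC (the fraction of method pairs sharing an attribute; higher means more cohesive), on two contrived classes that the metrics rank in opposite order. Classes are modeled simply as lists of the attribute sets each method accesses; the classes and attribute names are illustrative assumptions.

```python
from itertools import combinations

def lcom1(methods):
    """LCOM1: count of method pairs that share no attribute (lower = more cohesive)."""
    return sum(1 for m, n in combinations(methods, 2) if not (m & n))

def tcc(methods):
    """TCC: fraction of method pairs that share an attribute (higher = more cohesive)."""
    pairs = list(combinations(methods, 2))
    return sum(1 for m, n in pairs if m & n) / len(pairs)

# Hypothetical classes: each method is the set of attributes it accesses.
class_a = [{"a"}, {"b"}, {"a"}]                     # 3 methods, 3 pairs
class_b = [{"x"}, {"x"}, {"x"}, {"x", "y"}, {"y"}]  # 5 methods, 10 pairs

print(lcom1(class_a), lcom1(class_b))  # 2 3   -> LCOM1 ranks class_a as more cohesive
print(tcc(class_a), tcc(class_b))      # ~0.33 0.7 -> TCC ranks class_b as more cohesive
```

Because LCOM1 counts disconnected pairs in absolute terms while TCC normalizes by the number of pairs, the two metrics can disagree on which of two classes is more cohesive, which is exactly the kind of inter-metric inconsistency the study investigates.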

Full Text