The Rating of Individuals in Organizations: An Alternate Approach

Document Type

Article

Publication Date

8-1974

Digital Object Identifier (DOI)

https://doi.org/10.1016/0030-5073(74)90040-3

Abstract

Raters at different organizational levels probably observe significantly different facets of a ratee's job performance in most organizations. If so, their ratings ought to reflect these differences. In assessing “validity” of performance ratings, high agreement between such raters may be an unduly severe and perhaps even an erroneous requirement. Instead of demanding convergent validity across organizational levels on performance dimensions common to the different levels, it was suggested that raters be subgrouped by organizational level, with each rater group providing performance evaluations on only those dimensions that members of its level are in a good position to rate. In a test of this idea, scaled-expectation behavior rating scales were developed separately by secretaries and by university instructors for the job of secretary. Each rater group was asked to introduce performance dimensions for only those criterion areas in which they felt members of their own organizational level would be readily able to observe secretary behavior. These additional instructions seemed to be effective. The four job behavior dimensions (Job Knowledge, Organization, Cooperation with Co-Workers, and Responsibility) developed by secretary raters showed only modest conceptual similarity to the three job behavior dimensions (Judgment, Technical Competence, and Conscientiousness) developed by instructors. Moreover, when members of each organizational level evaluated ratees using both their own level's dimensions and those developed by the other level, within-level interrater agreement for instructor raters was greater on their own dimensions than on the secretaries' dimensions (p < .005), and for secretary raters interrater agreement was likewise greater on their own dimensions (p < .10). The conceptual inappropriateness of requiring across-level interrater agreement on all performance dimensions, together with the results of this study, suggests that the multitrait-multirater analysis may not provide a realistic assessment of the quality of ratings in many organizational settings. Instead, a “hybrid” multitrait-multirater analysis, in which raters make evaluations on only those dimensions their level's members are in a good position to rate, was offered as a more reasonable method for judging the “goodness” of ratings in organizations. In the hybrid analysis, within-level interrater agreement is taken as the index of convergent validity. Besides the improved conceptual fit the hybrid matrix provides for analyzing the performance ratings of individuals in organizations, it was suggested that the probability of obtaining convergent and discriminant validity is higher for this method than for the standard multitrait-multirater analysis.
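
To make the hybrid analysis concrete, here is a minimal sketch of a within-level interrater agreement index, computed as the mean pairwise Pearson correlation across raters on a single dimension. The abstract does not specify which agreement statistic the paper used, so the correlation-based index, the `within_level_agreement` helper, and all data below are hypothetical illustrations, not taken from the paper.

```python
# Sketch (not from the paper): within-level interrater agreement as the
# mean pairwise Pearson correlation across raters on one dimension.
import numpy as np
from itertools import combinations

def within_level_agreement(ratings):
    """ratings: (n_raters, n_ratees) array for one performance dimension.
    Returns the mean Pearson r over all pairs of raters at the same level."""
    rs = [np.corrcoef(ratings[i], ratings[j])[0, 1]
          for i, j in combinations(range(len(ratings)), 2)]
    return float(np.mean(rs))

# Hypothetical data: 4 instructor raters scoring 6 secretaries, once on an
# "own-level" dimension (behavior they can observe well, so low rating noise)
# and once on an "other-level" dimension (behavior they observe poorly).
rng = np.random.default_rng(0)
true_perf = rng.normal(size=6)
own_dim   = true_perf + rng.normal(scale=0.3, size=(4, 6))
other_dim = true_perf + rng.normal(scale=1.5, size=(4, 6))

print(f"agreement on own-level dimension:   {within_level_agreement(own_dim):.2f}")
print(f"agreement on other-level dimension: {within_level_agreement(other_dim):.2f}")
```

Under these assumptions the own-level dimension yields higher within-level agreement, mirroring the paper's finding that raters agree more when rating dimensions their level is well positioned to observe; in the hybrid matrix this within-level agreement serves as the convergent-validity entry.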

Was this content written or created while at USF?

No

Citation / Publisher Attribution

Organizational Behavior and Human Performance, v. 12, issue 1, p. 105-124
