Graduation Year

2012

Document Type

Dissertation

Degree

Ph.D.

Degree Granting Department

Educational Measurement and Research

Major Professor

Jeffrey Kromrey

Keywords

Job Analysis, Survey Validation Methodology, Survey Validation Study, Task Analysis, Task Rating Scales

Abstract

The first step in developing or updating a licensure or certification examination is to conduct a job or task analysis. Following completion of the job analysis, a survey validation study is performed to validate the results of the job analysis and to obtain task ratings so that an examination blueprint may be created. Psychometricians and job analysts have spent years arguing over the choice of scales that should be used to evaluate job tasks, as well as how those scales should be combined to create an examination blueprint. The purpose of this study was to determine the relationship between individual and composite rating scales; to examine how that relationship varied across industries, sample sizes, task presentation order, and number of tasks rated; and to evaluate whether examination blueprint weightings would differ based on the choice of scales or composites of scales used. Findings from this study should guide psychometricians and job analysts in their choice of rating scales, their choice of composites of rating scales, and how they create examination blueprints based upon individual and/or composite rating scales.

A secondary data analysis was performed to help answer some of these questions. As part of the secondary data analysis, data from 20 survey validation studies performed during a five-year period were analyzed. Correlations were computed between 29 pairings of individual and composite rating scales to determine whether there were redundancies in task ratings. Meta-analytic techniques were used to evaluate the relationship between each pairing of rating scales and to determine whether that relationship was impacted by factors such as industry, sample size, task presentation order, and number of tasks rated. Lastly, sample examination blueprints were created from several individual and composite rating scales to determine whether the rating scales used to create the examination blueprints would ultimately impact the weighting of the examination blueprint.
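The two analytic steps described above — correlating a pair of rating scales within a study, then pooling those correlations across studies with a meta-analytic technique — can be sketched as follows. This is a minimal illustration, not the dissertation's actual procedure: the function names, the use of Fisher's z transform with n−3 inverse-variance weights, and the example numbers are all assumptions for demonstration.

```python
import numpy as np

def scale_correlation(scale_a, scale_b):
    """Pearson correlation between mean task ratings on two scales
    (e.g., Importance vs. Criticality) within a single study."""
    return float(np.corrcoef(scale_a, scale_b)[0, 1])

def pooled_correlation(rs, ns):
    """Pool per-study correlations across studies via Fisher's z
    transform, weighting each study by n - 3 (the inverse variance
    of z), then back-transform the weighted mean to r."""
    zs = np.arctanh(np.asarray(rs, dtype=float))   # Fisher z
    w = np.asarray(ns, dtype=float) - 3.0
    z_bar = np.sum(w * zs) / np.sum(w)
    return float(np.tanh(z_bar))                   # back to r

# Illustrative (fabricated) mean task ratings for one study:
importance = np.array([4.2, 3.8, 2.5, 4.9, 3.1])
criticality = np.array([4.0, 3.9, 2.2, 4.7, 3.3])
r_study = scale_correlation(importance, criticality)

# Illustrative (fabricated) per-study correlations and sample sizes:
r_pooled = pooled_correlation([0.95, 0.90, 0.92], [120, 80, 200])
```

A high pooled correlation between a pair of scales would indicate the kind of rating redundancy the study set out to detect.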

The results of this study suggest that there is a high degree of redundancy between certain pairs of scales (e.g., the Importance and Criticality rating scales are highly related) and a somewhat lower degree of redundancy between other rating scales, but that the same relationship between rating scales is observed across many variables, including the industry for which the job analysis was performed. The results also suggest that the choice of rating scales used to create examination blueprints does not have a large effect on the finalized examination blueprint. This finding is especially true if a composite rating scale is used to create the weighting on the examination blueprint.
