Bibliographic Record - Detail View
Authors | Coleman, Donald G.; Areglado, Ron; Adams, R. C. |
---|---|
Title | Establishing Construct Validity and Reliability for the NAESP Professional Development Inventory: Simplifying Assessment Center Techniques. |
Source | (1998), (22 pages) |
PDF full text |
Language | English |
Document type | print; online; monograph |
Keywords | Assessment Centers (Personnel); Construct Validity; Elementary Education; Evaluation Methods; Factor Analysis; Instructional Leadership; Job Skills; Principals; Professional Development; Reliability; Simulation |
Abstract | This report documents the construct validity and reliability of the Professional Development Inventory (PDI), an assessment center sponsored by the National Association of Elementary School Principals (NAESP). The assessment center provides a means of assessing participants in situations simulating those confronted by principals on the job. The NAESP has developed a new version of the assessment center, which focuses on 13 skills for principals. The simulated activity has been shortened from 2 days to 1, and the time to assess each candidate has been reduced from 20 hours to 8. The 104 descriptors were factor-analyzed in several ways, using data collected from assessment centers, to verify that field-based data produced no unusual factor patterns or difficulties. The study was constrained because only 113 data sets were available, whereas more than 312 would be required for a complete analysis. However, the number of data sets was sufficient to divide the data into three different sets based on the different simulations the participants completed. Thirteen individual factor analyses across the 3 simulations (39 analyses in all) found unidimensional factors, with 1 exception. The six Management and seven Leadership constructs retained their construct integrity regardless of how the factor analyses were structured, but some of the items forming more than one construct tended to load more heavily on a single factor because multiple skills were assessed within a given simulation. Data suggest that assessors tended to score all skills alike if the skills were assessed within the same simulation; constructs retained their individuality if they were assessed independently. The results of this study verify, with field data, that the instrument is constructually sound. High reliability was found both for individual skills and when the skills were grouped into Management and Leadership skill configurations. Four appendixes describe the "vision" construct, give construct definitions, list some simulated activities associated with the inventory, and contain the skill-simulation matrix. (Contains 13 references.) (SLD) |
Indexed by | ERIC (Education Resources Information Center), Washington, DC |