Exploring the Limits in Personnel Selection and Classification
About this book

Beginning in the early 1980s and continuing through the middle 1990s, the U.S. Army Research Institute for the Behavioral and Social Sciences (ARI) sponsored a comprehensive research and development program to evaluate and enhance the Army's personnel selection and classification procedures. This was a set of interrelated efforts, collectively known as Project A. Project A had a number of basic and applied research objectives pertaining to selection and classification decision making. It focused on the entire selection and classification system for Army enlisted personnel and addressed research questions that can be generalized to other personnel systems. It involved the development and evaluation of a comprehensive array of predictor and criterion measures using samples of tens of thousands of individuals in a broad range of jobs. The research included a longitudinal sample from which data were collected at organizational entry, following training, after 1-2 years on the job, and after 3-4 years on the job. This book provides a concise and readable description of the entire Project A research program. The editors share the problems, strategies, experiences, findings, lessons learned, and some of the excitement that resulted from conducting the type of project that comes along once in a lifetime for an industrial/organizational psychologist. This book is of interest to industrial/organizational psychologists, including experienced researchers, consultants, graduate students, and anyone interested in personnel selection and classification research.

Edited by John P. Campbell and Deirdre J. Knapp.


V

Selection Validation, Differential Prediction, Validity Generalization, and Classification Efficiency

13

The Prediction of Multiple Components of Entry-Level Performance

Scott H. Oppler, Rodney A. McCloy, Norman G. Peterson, Teresa L. Russell, and John P. Campbell
This chapter reports results of validation analyses based on the first-tour longitudinal validation (LVI) sample described in Chapter 9. The questions addressed include the following: What are the most valid predictors of performance in the first term of enlistment? Do scores from the Experimental Battery produce incremental validity over that provided by the ASVAB? What is the pattern of predictor validity estimates across the major components of performance? How similar are validity estimates obtained using a predictive versus concurrent validation design? When all the predictor information is used in an optimal fashion to maximize predictive accuracy, what are the upper limits for the validity estimates? This chapter summarizes the results of analyses intended to answer these questions and others related to the prediction of entry-level performance.
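One of the questions above, whether the Experimental Battery adds incremental validity over the ASVAB, can be sketched as a ΔR² comparison between nested regression models. The example below uses synthetic data and invented composite names, not Project A scores; it only illustrates the form of the comparison, not the chapter's actual analyses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
asvab = rng.normal(size=(n, 2))        # two illustrative ASVAB composites
exp_battery = rng.normal(size=(n, 1))  # one illustrative Experimental Battery composite

# Simulated performance criterion with a true (small) Experimental Battery effect.
perf = asvab @ np.array([0.4, 0.3]) + 0.25 * exp_battery[:, 0] + rng.normal(size=n)

def r_squared(X, y):
    """R-squared from an OLS fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_asvab = r_squared(asvab, perf)
r2_full = r_squared(np.column_stack([asvab, exp_battery]), perf)
increment = r2_full - r2_asvab  # incremental validity as a delta-R-squared
```

In the chapter's terms, a positive `increment` corresponds to the Experimental Battery producing incremental validity over the ASVAB alone.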

THE “BASIC” VALIDATION

This chapter will first describe the validation analyses conducted within each predictor domain. We call these the basic analyses. The final section will focus on maximizing predictive accuracy using all information in one equation.

Procedures

Sample

The results reported in this chapter were based on two different sample editing strategies. The first mirrored the strategy used in evaluating the Project A predictors against first-tour performance in the concurrent validation phase of Project A (McHenry, Hough, Toquam, Hanson, & Ashworth, 1990). To be included in those analyses, soldiers in the CVI sample were required to have complete data for all of the Project A Trial Battery predictor composites, as well as for the ASVAB and each of the CVI first-tour performance factors. Corresponding to this strategy, a validation sample composed solely of individuals having complete data for all the LV Experimental Battery predictors, the ASVAB, and the LVI first-tour performance factors was created for longitudinal validation analyses. This sample is referred to as the “listwise deletion” sample.
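The listwise deletion rule just described can be sketched in a few lines. The column names below are hypothetical stand-ins for the predictor composites, the ASVAB, and the performance factors, not the study's actual variables.

```python
import pandas as pd

# Toy roster: four soldiers, two Experimental Battery composites, an ASVAB
# score, and one first-tour performance factor (all names illustrative).
records = pd.DataFrame({
    "asvab_afqt":   [58.0, 61.0, None, 47.0],
    "able_comp":    [0.3, None, 0.8, -0.1],
    "avoice_comp":  [1.1, 0.4, 0.2, None],
    "perf_factor1": [3.2, 2.9, 3.5, 3.1],
})

# Listwise deletion: a case enters the validation sample only with complete
# data on every predictor and every criterion score.
listwise_sample = records.dropna()
```

Here only the first soldier survives, since each of the others is missing at least one score somewhere in the battery.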
Table 13.1 shows the number of soldiers across the Batch A MOS who were able to meet the listwise deletion requirements. LVI first-tour performance measures were administered to 6,815 soldiers. Following final editing of the data, a total of 6,458 soldiers had complete data for all of the first-tour performance factors. The validation sample was further reduced because of missing predictor data from the ASVAB and the LV Experimental Battery.
TABLE 13.1
Missing Criterion and Predictor Data for Soldiers Administered LVI First-Tour Performance Measures
Number of soldiers:
  • in the LVI sample: 6,815
  • with complete LVI criterion data: 6,458
  • and with ASVAB scores: 6,319
  • and administered the LV Experimental Battery (either paper-and-pencil or computer tests): 4,528
  • and with no missing predictor data: 3,163
Of the 6,319 soldiers who had complete criterion data and whose ASVAB scores were accessible, 4,528 were administered at least a portion of the Experimental Battery (either the paper-and-pencil tests, the computer tests, or both). Of these, the total number of soldiers with complete predictor data was 3,163.
The number of soldiers with complete predictor and criterion data in each MOS is reported in Table 13.2 for both the CVI and LVI data sets. With the exception of the 73 soldiers in MOS 19E, the soldiers in the right-hand column of the table form the LVI listwise deletion validation sample. MOS 19E was excluded from the longitudinal validation analyses for three reasons. First, the sample size for this MOS was considerably smaller than that of the other Batch A MOS (the MOS with the next smallest sample had 172 soldiers). Second, at the time of the analyses the MOS was being phased out of operation. Third, the elimination of 19E created greater correspondence between the CVI and LVI samples with respect to the composition of MOS (e.g., the ratio of combat to noncombat MOS).
In the alternative sample editing strategy, a separate validation sample was identified for each set of predictors in the Experimental Battery (see below). More specifically, to be included in the validation sample for a given predictor set, soldiers were required to have complete data for each of the
TABLE 13.2
Soldiers in CVI and LVI Data Sets With Complete Predictor and Criterion Data by MOS
MOS                                     CVI     LVI (Listwise Deletion Sample)
11B Infantryman                         491     235
13B Cannon Crewmember                   464     553
19E M60 Armor Crewmember (a)            394      73
19K M1 Armor Crewmember                  --     446
31C Single Channel Radio Operator       289     172
63B Light-Wheel Vehicle Mechanic        478     406
71L Administrative Specialist           427     252
88M Motor Transport Operator            507     221
91A Medical Specialist                  392     535
95B Military Police                     597     270
Total                                 4,039   3,163
(a) MOS 19E not included in validity analyses.
first-tour performance factors, the ASVAB, and the predictor composites in that predictor set only. For example, a soldier who had data for the complete set of ABLE composites (as well as complete ASVAB and criterion data), but was missing data from the AVOICE composites, would have been included in the “setwise deletion” sample for estimating the validity of the former test, but not the latter.
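The setwise rule can be sketched by building one sample per predictor set, each requiring complete data only for that set plus the ASVAB and criterion scores. As above, the variable names are hypothetical, not the study's actual composites.

```python
import pandas as pd

# Toy roster (illustrative names): ASVAB, two Experimental Battery composite
# sets, and one first-tour performance factor.
records = pd.DataFrame({
    "asvab_afqt":   [58.0, 61.0, 47.0, 52.0],
    "able_comp":    [0.3, None, 0.8, -0.1],
    "avoice_comp":  [1.1, 0.4, None, 0.6],
    "perf_factor1": [3.2, 2.9, 3.5, None],
})

# ASVAB and criterion data are required in every setwise sample.
required_always = ["asvab_afqt", "perf_factor1"]

# One validation sample per predictor set: drop a case only if it is missing
# the ASVAB, the criterion, or a composite from *that* set.
setwise_samples = {
    pred_set: records.dropna(subset=required_always + [pred_set])
    for pred_set in ["able_comp", "avoice_comp"]
}
```

Note that the soldier missing only the AVOICE composite (row 2) still counts toward the ABLE validation sample, which is exactly the property that distinguishes setwise from listwise deletion.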
There were two reasons for creating these setwise deletion samples. The first reason was to maximize the sample sizes used in estimating the validity of the Experimental Battery predictors. The number of soldiers in each MOS meeting the setwise deletion requirements for each predictor set is reported in Table 13.3. As can be seen, the setwise sample sizes are considerably larger than those associated with the listwise strategy.
The second reason for using the setwise strategy stemmed from the desire to create validation samples that might be more representative of the examinees for whom test scores would be available under operational testing conditions. Under the listwise deletion strategy, soldiers were deleted from the validation sample for missing data from any of the tests included in the Experimental Battery. In many instances, these missing data could be attributed to scores for a given test being set to missing because the examinee failed to pass the random response index for that test, but not for any of the other tests. The advantage of the setwise deletion strategy is that none of the examinees removed from the validation sample for a given test were excluded solely for failing the random response index on a different test in the Experimental Battery.
TABLE 13.3
Soldiers in LVI Setwise Deletion Samples for Validation of Spatial, Computer, JOB, ABLE, and AVOICE Experimental Battery Composites by MOS
(table rendered as an image in the source; sample sizes not reproduced here)
As a final note, there is no reason to expect systematic differences between the results obtained with the listwise and setwise deletion samples. However, given the larger sample sizes of the setwise deletion samples, and their possibly greater similarity to the future examinee population, the validity estimates based on these samples may be more accurate than those based on the listwise deletion sample.

Predictors

The predictor scores used in these analyses were derived from the operationally administered ASVAB and the paper-and-pencil and computerized tests administered in the Project A Experimental Battery. For the ASVAB, three types of scores were examined. These scores, listed in Table 13.4, include the nine ASVAB subtest scores (of wh...

Table of contents

  1. Cover
  2. Half Title
  3. Full Title
  4. Copyright
  5. Contents
  6. List of Figures
  7. List of Tables
  8. Preface
  9. Foreword
  10. About the Editors and Contributors
  11. I INTRODUCTION AND MAJOR ISSUES
  12. II SPECIFICATION AND MEASUREMENT OF INDIVIDUAL DIFFERENCES FOR PREDICTING PERFORMANCE
  13. III SPECIFICATION AND MEASUREMENT OF INDIVIDUAL DIFFERENCES IN JOB PERFORMANCE
  14. IV DEVELOPING THE DATABASE AND MODELING PREDICTOR AND CRITERION SCORES
  15. V SELECTION VALIDATION, DIFFERENTIAL PREDICTION, VALIDITY GENERALIZATION, AND CLASSIFICATION EFFICIENCY
  16. VI APPLICATION OF FINDINGS: THE ORGANIZATIONAL CONTEXT OF IMPLEMENTATION
  17. VII EPILOGUE
  18. References
  19. Author Index
  20. Subject Index