Revisiting selection systems - Sackett et al. (2023)

13 important questions on Revisiting selection systems - Sackett et al. (2023)

What did Sackett et al. (2023) report about updated validity estimates compared with those previously found by Schmidt & Hunter, in Sackett et al. (2023) - revisiting selection systems?

Sackett et al. reported that the updated validity estimates are lower than those previously found by Schmidt & Hunter (1998)

What are the most predictive assessments, in Sackett et al. (2023) - revisiting selection systems?

Structured interviews (.42)
Job knowledge tests (.40)
Empirically keyed biodata (.38)
Work samples and assessment centers (.33)
Cognitive ability tests (.31)

Broadly speaking, what can be assumed about job-specific predictors, in Sackett et al. (2023) - revisiting selection systems?

Job-specific predictors generally outperform broad general predictors such as personality and cognitive ability alone

Which factor did the authors use in evaluating bias in subgroup mean differences, in Sackett et al. (2023) - revisiting selection systems?

The authors evaluated bias using subgroup mean differences (White-Black d-values)

Which assessment is most biased, in Sackett et al. (2023) - revisiting selection systems?

Cognitive ability tests show the largest adverse impact (d = .79)
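For orientation, a d-value is the standardized difference between two subgroup means. Here is a minimal sketch of the computation; the group means, SDs, and sample sizes are illustrative only, not figures from the paper:

```python
# Minimal sketch: a standardized subgroup difference (Cohen's d),
# the metric used for White-Black mean differences.
# All input values below are illustrative, not from Sackett et al.

def cohens_d(mean_a: float, mean_b: float, sd_a: float, sd_b: float,
             n_a: int, n_b: int) -> float:
    """Standardized mean difference using a pooled standard deviation."""
    pooled_sd = (((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2)
                 / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# A d of about .79 means the subgroup means sit roughly 0.8 pooled
# standard deviations apart.
print(round(cohens_d(104.0, 96.1, 10.0, 10.0, 500, 500), 2))  # -> 0.79
```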

Which assessments all have smaller subgroup differences than cognitive ability tests, in Sackett et al. (2023) - revisiting selection systems?

Structured interviews, integrity tests, and biodata all have smaller subgroup differences (d = .10 to .30)

What do structured interviews, integrity tests, and biodata have in common when it comes to subgroup differences, in Sackett et al. (2023) - revisiting selection systems?

They provide a better balance between validity and diversity

True or false: composite systems excluding ability tests can maintain strong validity while minimizing bias, in Sackett et al. (2023) - revisiting selection systems?

True
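To see why this can hold, here is a minimal sketch using the standard unit-weighted composite validity formula. The three validities loosely echo the estimates listed above; the mean predictor intercorrelation of .25 is an assumption for illustration, not a figure from the paper:

```python
# Minimal sketch: validity of a unit-weighted composite of k predictors,
# computed from their validities and mean intercorrelation.
# Input values are illustrative, not taken from Sackett et al.
import math

def composite_validity(validities: list[float], mean_intercorr: float) -> float:
    """Validity of a unit-weighted sum of k predictors."""
    k = len(validities)
    return sum(validities) / math.sqrt(k + k * (k - 1) * mean_intercorr)

# E.g., structured interview (.42), integrity test (.31), biodata (.38),
# assumed to intercorrelate ~.25 on average:
print(round(composite_validity([0.42, 0.31, 0.38], 0.25), 2))  # -> 0.52
```

In this illustrative case the composite's validity (~.52) exceeds that of any single predictor, while relying only on predictors with small subgroup differences.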

Variations in validity across studies arise from several factors. Which factors are they, in Sackett et al. (2023) - revisiting selection systems?

- Subtype differences (e.g., structured vs. unstructured interviews)
- Quality of implementation, such as rater training in assessment centers
- Differences in performance criteria (task vs. contextual performance)
- Study design effects (predictive vs. concurrent validity studies)
- Job-specific effects - some traits matter more in certain contexts

What do Sackett et al. suggest about how we should treat validity, in Sackett et al. (2023) - revisiting selection systems?

Validity should be treated as a distribution rather than a fixed value, because validity varies across studies for the reasons listed above
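One standard way to express validity as a distribution is a meta-analytic credibility interval around the mean true validity. A minimal sketch, assuming an illustrative mean and SD of true validities (not figures reported in the paper):

```python
# Minimal sketch: an 80% credibility interval for true validity, a
# standard psychometric meta-analysis tool. mean_rho and sd_rho below
# are illustrative assumptions, not values from Sackett et al.

def credibility_interval(mean_rho: float, sd_rho: float,
                         z: float = 1.28) -> tuple[float, float]:
    """80% credibility interval: mean +/- 1.28 * SD of true validities."""
    return (mean_rho - z * sd_rho, mean_rho + z * sd_rho)

lo, hi = credibility_interval(0.42, 0.10)  # e.g., structured interviews
print(f"80% CrI: {lo:.2f} to {hi:.2f}")    # -> 0.29 to 0.55
```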

What things are corrected for in operational validity, in Sackett et al. (2023) - revisiting selection systems?

Corrected for measurement error and range restriction

What is the suggested order of corrections applied to operational validity, in Sackett et al. (2023) - revisiting selection systems?

First correct for criterion unreliability (using interrater reliability for performance ratings), then correct for range restriction
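A minimal sketch of that two-step sequence, assuming direct range restriction on the predictor (the classic Thorndike Case II formula); all input values are illustrative only:

```python
# Minimal sketch of the two-step correction order described above:
# (1) disattenuate for criterion unreliability (interrater reliability
# of performance ratings), then (2) correct for direct range
# restriction (Thorndike Case II). Inputs are illustrative assumptions.
import math

def correct_validity(r_obs: float, r_yy: float, u: float) -> float:
    """
    r_obs : observed predictor-criterion correlation
    r_yy  : criterion (interrater) reliability
    u     : restricted SD / unrestricted SD of the predictor (u <= 1)
    """
    # Step 1: correct for criterion unreliability.
    r1 = r_obs / math.sqrt(r_yy)
    # Step 2: correct for range restriction (Case II).
    U = 1.0 / u
    return (U * r1) / math.sqrt(1 + (U**2 - 1) * r1**2)

# E.g., observed r = .25, interrater reliability = .60, u = .90:
print(round(correct_validity(0.25, 0.60, 0.90), 2))  # -> 0.35
```

As the next question notes, when r_yy or u is poorly estimated, applying a damped correction or none at all is safer than overcorrecting.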

True or false: Sackett et al. caution against overcorrection when data are uncertain and advocate a conservative estimation approach to ensure credible, generalizable results, in Sackett et al. (2023) - revisiting selection systems?

True
