Validity and Reliability of Scores Obtained on Multiple-Choice Questions: Why Functioning Distractors Matter

Syed Haris Ali, Patrick A. Carr, Kenneth G. Ruit


Purpose: Plausible distractors are important for accurate measurement of knowledge via multiple-choice questions (MCQs). This study demonstrates the impact of higher distractor functioning on the validity and reliability of scores obtained on MCQs.

Methods: Free-response (FR) and MCQ versions of a neurohistology practice exam were given to four cohorts of Year 1 medical students. Consistently non-functioning multiple-choice distractors (<5% selection frequency) were replaced with distractors developed from incorrect responses on the FR version of the items, and the revised MCQ version was administered to the subsequent two cohorts. Validity was assessed by comparing an index of expected MCQ difficulty with an index of observed MCQ difficulty; reliability was assessed via Cronbach's alpha coefficient before and after replacement of the consistently non-functioning distractors.

Results: Pre-intervention, the effect size (Cohen's d) of the difference between the mean expected and observed MCQ difficulty indices was 0.4–0.59. Post-intervention, this difference decreased to 0.15, accompanied by an increase in the Cronbach's alpha coefficient of scores obtained on the MCQ version of the exam.

Conclusion: Multiple-choice distractors developed from incorrect responses on the free-response version of the items enhance the validity and reliability of scores obtained on MCQs.
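The two statistics named in the abstract are standard and can be computed directly. The sketch below is illustrative, not the authors' analysis code: Cohen's d here uses the pooled-standard-deviation form for two independent samples, and Cronbach's alpha is computed from an examinees-by-items score matrix; the function names and example data are assumptions.

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled_sd

def cronbach_alpha(scores):
    """Cronbach's alpha for an examinees-by-items matrix of item scores (e.g. 0/1)."""
    scores = np.asarray(scores, float)
    k = scores.shape[1]                          # number of items
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)
```

For example, two perfectly consistent 0/1 items yield an alpha of 1.0, and two samples whose means differ by exactly one pooled standard deviation yield |d| = 1.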


Keywords: assessment; psychometrics; validity; reliability




This work is licensed under a Creative Commons Attribution 4.0 International License.

ISSN 1527-9316