
eLearn Magazine Special Issue: Beyond Multiple Choice

Special Issue: Advancing Beyond Multiple Choice eAssessment

By Brian Moon / September 2021

TYPE: OPINION

Authors for this special issue presented their work, perspectives, and recommendations that look to move learning assessment at all levels "beyond multiple-choice."

A century ago, the U.S. Army faced a monumental and pressing challenge: how to ascertain the intelligence of millions of recruits as it ramped up for World War I. The Army had options: written and oral examinations were already in widespread use. But the urgency of the moment and the sheer scale of the need precluded these time-consuming techniques. Into this predicament stepped the president of the American Psychological Association, Dr. Robert Yerkes, to propose a method with the potential to meet the need: multiple-choice questions.

This natural experiment produced two outcomes that have shaped the course of assessment ever since. The first was that rapid methods of assessment could be implemented at scale. The Army tested 1.7 million recruits, a feat that would likely have been unattainable without the innovation. Efficiency was firmly etched as a necessary component of assessment mechanisms and programs, and has only become more entrenched as learning ecosystems have expanded to meet population growth and ever-diversifying bodies of learners.

The second outcome should have given the movement pause. The Army, it seems, hadn't learned much about its recruits. To put it less charitably: "The Army found no value in the results." [1] And yet, the extraordinary achievements in efficiency and scale took hold as the key storyline. Organizations seeking such benefits seized on the approach, and for the next century advancement focused primarily on methods and technologies geared toward introducing more efficiencies at broader scales. The advent of computing technologies only expanded the possibilities. And even as critics of the multiple-choice method (and its variants) offered research rebuttals and alternative techniques, the continuing need for efficiency carried the day. Techniques that promised deeper insights into learners and their progress have struggled to gain traction with institution leaders charged with running tight ships and remaining accountable.

The Beyond Multiple Choice (BMC) conference and exhibition was founded in 2018 to begin resetting the imbalance between insight and efficiency. The field of learning assessment has expanded significantly since Yerkes' day, particularly with the explosion of computer-based assessment options. While many conferences discuss and showcase advances in assessment technologies, few have sought to address the imbalance, instead accepting an "if you build it for efficiency, insight will come" paradigm. Old techniques get digitally wrapped for broader delivery and compliance. But the necessary conversation about how other equally valid (if not more valid) approaches for assessing learning might achieve comparable levels of efficiency has been placed on the back burner.

BMC offers an opportunity to refocus on that balance. Those developing innovations in assessment methods have been challenged to meet the expectations for efficiency at all stages of assessment: authoring, disseminating to takers, analyzing results, and demonstrating insights in ways that encourage interrogation at the individual and learning-group level. Each stage offers opportunities to the innovators and also raises questions about how the status quo in e-assessment is working, or not. BMC has become the primary platform for discussing these important issues in the context of the evolving policy and technology landscape.

This special issue is the first of many that provide highlights from our BMC conference and meeting series. Our first conference, held in person in 2018, attracted about 90 educators, innovators, sponsors, and researchers; by 2020, our virtual conference had attracted more than 1,600 stakeholders from over 800 organizations across 40 countries. Interest in the topic of assessment has never been higher. Our goal is to set the tone for the next century of assessment, striking a more equitable balance between efficiency and insight. Future special issues will continue the discussion as we publish the latest advancements in e-assessment.

References

[1] Ramírez, A. The dark history of the multiple-choice test. Edutopia, May 20, 2013.

Author

Brian M. Moon is the chief technology officer for Perigean Technologies, president of Sero! Learning Assessments, and cofounder of BMC, along with Jeff Ross and Martyn Roads. His research interests include expert decision-making in naturalistic environments, and the assessment of mental models, particularly using concept-mapping techniques.

©2021 ACM  $15.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2021 ACM, Inc.



Comments

  • Sun, 23 Jan 2022
    Post by Igor Schagaev

    I feel we live on different planets: from 2001 to 2017, a multiple-choice answer approach (not MCQ) was designed, developed, and applied at several universities in the UK. The editor of eLearn at the time asked me to write a small, simple paper explaining what is wrong with e-learning; see it here:

    https://elearnmag.acm.org/featured.cfm?aid=2611524

    A book about e-learning and the multiple-choice answer approach (rigorously justified, by the way) was written, and was even voluntarily translated by Russian academics for their own use: https://www.academia.edu/43791395/E_LEARNING_HOW_IT_SHOULD_BE

    But it looks like we all just write and do not read... and the wheels of the bus go round and round in exactly the same way...