
Going Beyond Multiple Choice
Advances in eAssessment (Special Series)

By Brian Moon / August 2022

TYPE: OPINION

For as long as human learning has been institutionalized, educators have sought means to assess where their students were in their learning journey. Concept mapping was developed in the early 1970s as a method to help reveal the cognitive structures of students and how they change—that is, to assess learning. By the late 1990s, concept mapping had evolved from students using sticky notes and string on butcher paper to a software-mediated method that enabled creating, sharing, and linking to other digital resources [1]. As the technique expanded globally, so too did the number and variety of tasks and rubrics educators could use to assess students' current understanding and the progress of their learning. Mounting evidence was demonstrating the validity and reliability of the technique, even compared to other assessment formats, including multiple choice questions.

Indeed, by 2010, the National Assessment of Educational Progress (NAEP) called for the use of concept mapping as an Interactive Computer Task in its Science Framework for 2011 (a call that has continued through the 2019 Framework): "Concept-mapping tasks may be considered a complex item type because of the cognitive demands placed on students. Concept maps can be used as a reliable and valid assessment of students' ability to make connections among science principles. Thus, concept-mapping tasks tap a science ability that is difficult to measure by other means."

I began using concept mapping in the early 2000s to elicit the knowledge of experts in fields as diverse as nuclear engineering and pet food production. I also began communing with concept mapping practitioners who were implementing the technique in their classrooms and were eager to see its expansion. Seeing a gap in the available software capabilities to conduct learning assessments using concept mapping techniques, particularly those described by the NAEP and at the scale it required, my team and I set about developing the next generation of concept mapping software tools. We found sponsors in the U.S. Department of Defense who were seeking innovations in the learning assessment space, specifically in assessment techniques, integration with other learning technologies, and efficiency in assessment authoring. By 2018, early versions of our software, Sero!, had racked up international recognitions for innovation and been featured in showcase competitions at some of the largest learning assessment conferences. We were clearly on to something, as industry leaders saw genuine innovation in our direction. "I've never seen anything like this" was the refrain we heard most often.

And yet …

Attending these major conferences and exhibit halls, I began to see what else was being offered to fill an increasingly widening chasm. On one side were educators and researchers bemoaning the state of available solutions in the assessment space. This camp was calling for solutions that offered deeper insight into learner thinking while remaining scalable and efficient enough to meet the practicalities of expanding learner populations. On the other side were vendors who had all but solved issues of scalability and efficiency but were doing so at the expense of insight. The standard offerings were large banks of items that implemented a century-old format: multiple choice questions. "Innovations" involving drag-and-drop and hotspot techniques were being offered and gaining interest, but their capacity for deep insight that guides meaningful learning was scarcely greater.

Having seen the state of the possible in my own network of researchers and innovators, I realized that closing the chasm would require new connections to be formed. Both sides would need to see it from the other's perspective. Educators and researchers would need to see what true innovation looks like and how it was progressing. Vendors, in turn, needed to hear directly from the source where the pain points were so that they could shape their creativity toward implementable solutions. Most importantly, the community needed a unifying theme to rally around, one that clearly demarcated an intention to advance time-worn approaches that had perhaps outlived their actual utility.

In 2018, together with colleagues from the UK, Jeff Ross and Martyn Roads, I established the Beyond Multiple Choice Conference and Exhibition. Our goal was to showcase innovation in next generation assessment technologies, or e-assessment, in the context of exploring opportunities for and challenges facing implementation. Our first event in Washington, DC attracted nearly 100 professionals. By our third annual event—conducted virtually during the pandemic—we attracted more than 1,600 registrants from around the world. Clearly, our goals and forum had struck a chord with educators, researchers, vendors, and policy drafters.

We also established a relationship with eLearn Magazine and published select articles from the event. We are pleased to continue this relationship through the launch of this new series: "Beyond Multiple Choice: Advances in eAssessment." Over the coming months, we will publish select articles from leading researchers and developers in the field of e-assessment. Upcoming themes will explore the context of innovation in assessment, which I hope will build even more bridges between educators and innovators.

References

[1] Novak, J. D. and Cañas, A. J. The Theory Underlying Concept Maps and How to Construct and Use Them. Technical Report IHMC CmapTools 2006-01 Rev 01-2008. Florida Institute for Human and Machine Cognition, 2008.

About the Author

Brian Moon is the chief technology officer for Perigean Technologies, president of Sero! Learning Assessments, and cofounder of BMC, along with Jeff Ross and Martyn Roads. His research interests include expert decision-making in naturalistic environments, and the assessment of mental models, particularly using concept-mapping techniques.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Copyright © ACM 2022. 1535-394X/2022/08-3551870 $15.00 https://doi.org/10.1145/3551870



ADDITIONAL READING

Advances in eAssessment (Special Series)
This series of articles covers advancements in eAssessment. The series features educators, developers, and researchers from around the world who are innovating how learning is assessed while meeting the challenges of efficiency, scalability, usability, and accessibility.
  1. Going Beyond Multiple Choice
  2. Centering All Students in Their Assessment
  3. Harnessing the Power of Natural Language Processing to Mass Produce Test Items
  4. Getting Authoring Right—How to Innovate for Meaningful Improvement
  5. Closing the Assessment Excellence Gap—Why Digital Assessments Should go Beyond Recall and be More Inclusive