
Beyond Multiple Choice Conference in review

Special Issue: Advancing Beyond Multiple Choice eAssessment

By Kristine Hadeed / September 2021


This review analyzes ideas and themes that emerged from Beyond Multiple Choice 2020 (BMC2020), the third iteration of an annual conference exploring the future of assessment. Taking place online due to restrictions surrounding COVID-19, BMC2020 attracted speakers and delegates from across the globe, representing various facets of the assessment industry. Altogether, the presentations compiled a bird's-eye snapshot of challenges and opportunities for advancing assessment to better meet 21st century training and education needs.

My involvement as an organizer for Beyond Multiple Choice came on the heels of a two-year service teaching contract. I entered my first year of teaching at a low-income, predominantly Black elementary school enthusiastic about building relationships with students, supporting their growth, and cultivating 21st century skills in my classroom. Unfortunately, one thing seemed to stand in the way of that, more than anything else: the overwhelming pressure of preparing for state assessments.

I learned how much pressure exists, especially in under-resourced schools and districts, for students to perform well on standardized tests. After students returned from winter break, teachers switched from our standard curriculum to state test prep books. Focus shifted from culture-building and immersive learning experiences to drills and strategies for scoring higher on predominantly multiple-choice exams. Students felt stressed by teachers, and teachers felt constrained in turn by our principal, whose worth as a leader was judged in significant part by how children performed during a single end-of-year event.

Kate McConnell of the Association of American Colleges and Universities aptly called this dynamic "the tail wagging the dog." Despite educators' training and instincts for guiding learner development, state assessments set the implicit parameters for pedagogy in the classroom.

The global pandemic has forced assessment stakeholders to reexamine assessment practices. Diverse professionals in all levels and facets of education are asking how we move forward to address increasingly visible failings of assessments. This dynamic context made the Beyond Multiple Choice (BMC) conference in 2020 exceedingly relevant.

One thing is clear: we will not find solutions to looming assessment problems in an answer key. BMC2020 convinced me that the way forward is not by determining universal verdicts of what methods are "right" or "wrong," but rather by asking questions that challenge us to think more creatively and holistically. I explore this takeaway further in the following event analysis.

Setting the Tone

Kristen DiCerbo, Khan Academy's newly appointed chief learning officer, kicked off the conference with a keynote titled, "A Future of Assessment Without Testing."

Rather than dictate a clear solution, DiCerbo posed a set of questions to frame her vision:

  • What if we had all the evidence we need in our digital learning environments to infer what students know and can do without them having to take a traditional test?
  • What if a test wasn't a one-time event?
  • What if we had assessment without tests?

DiCerbo shared insights gleaned over her career studying digital learning environments, touching on themes that other presenters echoed throughout the conference. Among these were the role assessment plays in system-wide racial and class disparities and the need for more cumulative, personalized, and adaptive assessment activities that better simulate real-world demands for hard and soft skills.

Next-Gen STEM Assessment

Presenters consistently referenced two points of concern and optimism. On the one hand, the pandemic has revealed the extent to which education institutions are woefully ill-prepared to equip students with the skills they need to thrive in the 21st century. On the other hand, the pandemic has also created the pressure necessary to get those institutions up to speed.

Presenters Jordan Morrow of Qlik and Emily Brown of Sheridan College's Pilon School of Business each expressed that assessments should diagnose individual learning gaps and needs. They recommended personalized, self-paced learning paths to master math and data literacy skills that have application in real-world environments.

In this session's panel discussion, Brown emphasized a goal her team has had to make "education accessible to every learner, when they're able to learn it, in a way they're able to learn it."

How do we do that, and how do we measure our effectiveness in pursuing that noble vision? That is a question for the rest of us.

Gaming, VR, and AI Assessment Technology

According to a preliminary survey of registrants, this was one of the conference's most anticipated sessions. Presentations showcased uses of advanced technologies to create more engaging, relevant, and multidimensional learning models.

However, the panel discussion revealed that one of these technologies' greatest strengths also poses one of their most significant challenges: they collect massive amounts of data, yet frameworks and tools to analyze and represent that data effectively are lacking.

Che Osborne, chief security officer of Immerse.io, compared it to "drinking from a fire hose." His presentation raised the question: With so much data collected in a multidimensional space, how do we make sense of it in a way that is valid, reliable, and useful?

Multiple speakers expressed a need for interdisciplinary collaboration between developers, data scientists, designers, and education specialists. "We've got to speak the language of a lot of different groups now if we're going to break down these walls," said Eliot Winer, director of Iowa State University's Virtual Reality Applications Center.

Remote Proctoring and Test Security

Proctoring—or "invigilation," as assessors outside the U.S. call it—has long been a staple of high-stakes summative tests. COVID-19 has presented new challenges for doing the job of proctoring well while respecting individual privacy.

David Foster, CEO of Caveon Test Security, noted in his presentation that proctoring has always been "a mediocre solution for test security—even with assistive technology." He shared research showing that, even under normal circumstances, human proctors are notoriously bad at detecting and deterring cheating.

To mitigate seemingly inevitable blind spots and increasingly sophisticated cheating techniques, Foster says, "We need to change how proctoring is done," by increasing deterrence measures and changing how we design tests, focusing on cheating prevention rather than detection.

The question of how to create "balanced assessment systems," which reflect not just what learners know but also how they apply their knowledge, reemerged in later sessions.

Protecting Learner Data

Continuing themes from the previous session on proctoring, the topic of protecting learner data has become much more prominent during the pandemic as learners share more personal data than ever before. Upholding legal privacy standards and defending against hacking threats are forefront concerns in this area as new troves of data, from contact information to video surveillance to AI biometrics, are collected from children and adults through online education platforms.

"Privacy and security in education are not as mature as it needs to be," to mitigate the threats, says Linnette Attai, founder of the global compliance consulting firm Playwell, Inc. "It's really just starting to get there."

Technology writer Natalie Smolenski touted blockchain, the distributed-ledger technology underlying cryptocurrencies, as a means for reforming centralized data silos, which are notoriously vulnerable to breaches and leaks. With blockchain technology, data ownership shifts from the platform to the user. When data are decentralized in this way, Smolenski contended, it becomes much more difficult for hackers to hit a payday.

Blockchain certainly seems like one promising avenue to explore for those seeking ways to enhance online data privacy and user safety.
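To make the tamper-evidence property Smolenski describes concrete, here is a minimal hash-chain sketch in Python. The record fields and helper names are hypothetical illustrations, and a real credentialing blockchain would add consensus and key-based ownership on top of this linking idea:

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Link records so each block commits to everything before it."""
    chain, prev = [], "0" * 64  # genesis value
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    """Recompute every hash; tampering with any record breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain([{"learner": "A", "credential": "Data Literacy L1"},
                     {"learner": "B", "credential": "Data Literacy L2"}])
print(verify(chain))   # True: chain is intact
chain[0]["record"]["credential"] = "forged"
print(verify(chain))   # False: the forgery invalidates every later hash
```

Because each block's hash covers the previous block's hash, altering an early credential silently breaks every subsequent link, which is what makes breaches and forgeries detectable without trusting a central silo.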

Assessing Through Performance Evidence

Another highly requested topic, this session brought together professionals in performance-based K-12, higher education, and vocational assessment to discuss the approach's benefits, best practices, and challenges for continued growth.

"What [our candidates] wanted is a more performance [-based], authentic assessment of their skills, rather than multiple-choice questions," said Liberty Munson, head psychometrician at Microsoft. She shared insights into key objectives and takeaways in developing browser-based labs for Microsoft's Office suite certification exams in her presentation.

"Students need to be able to apply what they've been learning," added Lacrecia Terrance, a longtime K-12 educator whose consultancy, The EduProject, trains teachers on how to implement project-based learning (PBL) in the classroom.

"We're dealing with real-world problems—things that actually impact students themselves—and inviting that into the content," says Terrance. She said this also gives students a chance to see their cultures, identities, and values reflected in the curriculum.

Shane Sutherland, the founder of PebblePad, expounded on how online portfolios can support performance evidence learning and assessment: "Portfolios surface the process of learning, so it's visible to multiple stakeholders [who] can give you feedback over time," he says. "In a way, it doesn't matter if a project is a failure, because you learned all the way through," which he contrasted to tests that often deliver a binary result of pass or fail.

Eric Shepherd of Talent Transformation Guild noted that the high-stakes nature of summative exams helps fuel anxiety around assessment. Terrance and Sutherland both highlighted ways that performance evidence assessment can quell such anxieties by reframing failure as a natural part of the learning process and by embedding formative assessment into the culture of learning through feedback and student reflection.

While performance-based learning and assessment have been growing in popularity amongst students and educators for the reasons above, Terrance noted how aligning PBL with state accountability tests poses a significant hurdle for expanding its use in the classroom.

Engaging with Government

Scott Marion, executive director of the Center for Assessment, segued seamlessly from the previous session into a presentation called, "How Policies Need to Change for Us to Move Beyond Multiple Choice." He made a pedagogical case for the benefits of performance assessments. He argued that political realignment of state and federal assessment priorities to recenter student growth is necessary to reduce the stakes of standardized testing, give districts more freedom to personalize learning in the classroom, and ultimately advance assessment "beyond multiple-choice."

Denis Saunders, the founder of e-assessment solution provider Calibrand, disagreed somewhat with Marion's position. He proposed, alternatively, that large-scale innovation will hinge not on political changes but on one or more education "disrupters" whose outcomes are so much better than the status quo that governments will have to adapt to a new paradigm. One might compare this idea to how ridesharing services like Uber disrupted the taxi economy and catalyzed policy changes in the transportation industry—not the other way around.

Moderator John Kerr noted COVID-19 was a disrupter of sorts, pressuring political and educational institutions to reexamine their assessment policies and practices in response to a force outside their control.

Marion acknowledged in the panel discussion that we do need both the pressure from disrupters and new models supported by governments to effectuate lasting structural reforms in education—the two go hand in hand.

Convergence of Formative and Summative Assessment

Several aspiring "disrupters" presented during this session, sharing tools and technologies that can be used in assessment of, and for, learning.

"We're really after a paradigm shift in the format, the delivery, the general holistic nature of an assessment […] I think that's where the convergence between formative and summative starts to happen," said Brian Moon. In addition to his role as a BMC conference organizer, he is president of Sero!, a concept map-based assessment platform used to evaluate learners' understanding of the relationships between concepts.

"Instant feedback is a really big deal," when it comes to formative assessment, expressed John Winkley. He works closely with administrators and teachers on assessment solutions as a director of AlphaPlus consultancy. Winkley noted that adaptive assessment technology can do a good job at spotlighting learners' weaknesses—which they might otherwise be tempted to avoid—and encouraging them to try again until they achieve mastery.

Sharing her perspective as an assessment technology vendor, Sofie Emmertsen, head of Inspera Denmark, spoke of the role that students play in developing innovative technology. She described a bottom-up chain of influence where learners shape digital assessment formats by providing feedback on the look and function of technology as end-users, which developers then translate into efficient platforms that institutions can implement. These institution-level models can further inform policymaking to ensure system-wide consistency and accountability.

Joe Wilson, head of digital skills at City of Glasgow College, said the pandemic reoriented his institution back to the learner: "We've questioned what we're assessing, and why."

Institution-Level Assessment Models

The final session's presentations and panel discussion reemphasized themes that surfaced from the beginning of the conference and throughout, particularly those surrounding equity in the education ecosystem.

"I think the measurement community has done very little to address issues of equity and inclusion," said Stephen Sireci, psychometrics professor at the University of Massachusetts Amherst. "The only thing surprising about the backlash against testing is that it didn't start sooner."

"We're supporting teacher judgment," said David Leng, professional advisor to the Scottish government's Learning Directorate. He described assessment as simply a tool, "Not the be-all to end-all," and decried dependency on tests to dictate students' life opportunities. "We really need to diminish that," he said, emphasizing the primacy of the student-teacher relationship. "If we get that right, we have a more equitable society."

Conclusion

Even as one of the event organizers, I was impressed at how well BMC presentations connected dots and reiterated core concepts without becoming redundant. Sessions knitted together perspectives from all corners of the assessment world to create a cohesive panorama of where the industry stands and what forces are shaping its trajectory.

While technological innovations were a major focus and draw, my biggest takeaways from the various presenters had to do with the core of why and how we use assessment generally. These insights suggested some guiding questions for personal reflection:

  • How can we reorient assessment as a tool to support instruction and not a remedy in and of itself?
  • How do we ensure that formative and summative assessments complement rather than compete with each other?
  • How can we align all our assessments so they measure success in real-life terms that impart applicable skills and opportunities to close the achievement gap?

Discourse shared at BMC revealed a collective desire to recalibrate toward this vision of assessment, empowered by new technology that can make individualized learning and assessment feasible on a large scale.

As the world faces increasing uncertainty due to climate change and political volatility, disrupters like COVID-19 seem likely to become more common. However, as we have seen, such dire circumstances do create political pressure for institutions to adapt in long-overdue ways. Are values-driven changemakers like those drawn to BMC prepared to seize the opportunity?

Time will tell.

Author

Kristine Hadeed is an education reform advocate, content creator, and digital marketing strategist. In her work, she supports various organizations, programs, and initiatives, including Beyond Multiple Choice and Sero! Learning Assessments. She is a Teach for America alumna and earned her Bachelor of Science in broadcast journalism from Virginia Commonwealth University.

©2021 ACM  $15.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2021 ACM, Inc.


