
Centering All Students in Their Assessment
Advances in eAssessment (Special Series)

By Mark Johnson, Uma Venkateswaran, Fiona Hinds, Steve Ferrara, Megan Bairstow / September 2022

TYPE: OPINION

Many teachers are familiar with students asking, “Why do I have to learn this?” or “When am I ever going to use this?” The truth is, educators likely have asked these questions themselves. Education delivery models should be designed to serve the student, and likewise, the authors of this paper believe strongly that assessment, too, should serve the needs of the students and provide multiple ways for students to demonstrate what they know. We must remain student focused and commit to picturing students at the center, as the primary stakeholder of their learning. Safir and Dugan [1] describe this as a shift from a pedagogy of compliance to a pedagogy of voice. They provide many characteristics of the latter, including teachers and students building knowledge together; active learning through inquiry, dialogue, projects, simulations, etc.; and viewing students as culturally grounded critical learners.

Since effective assessment can be an integral part of learning, we view students as a central element of their assessment. In order to maintain a student focus that encompasses all students, we must utilize our understanding of diversity and inclusion and confront all inequitable uses of assessment. Indeed, “educational assessment is a powerful tool that can help advance racial equity,” and racial justice for Black, Latinx, Indigenous, and other marginalized populations is an important lens for defining ways forward [2]. This paper explores a few ways in which assessment can better serve the needs of all students, points to research that supports this work, and considers the implications of pursuing (or not pursuing) such approaches.

Performance Assessment

Looking well beyond multiple choice and common student-constructed-response items, performance assessment offers abundant opportunities to enhance equity for all students. The guiding principle for this discussion is that students learn and construct knowledge in classroom social contexts and that assessment should, likewise, be designed and conducted from a social constructionist point of view, not simply a positivist point of view [3]. Performance assessment enables us to take steps toward student-centered assessment, as we describe below.

  • All students should see themselves in their assessments. Currently, many assessments include reading passages and topics, visual and other displays (e.g., graphs), and scenarios for test items that illustrate mainstream culture with little representation of people of color. The same is true for English learners and students with disabilities. It is a simple matter to expand our sampling of stimuli, topics, and so forth to represent people of color proportionally and to include works that represent different cultural beliefs, perspectives, and practices in our assessments. When students do not see themselves in their assessments, they are likely to be less engaged and less motivated to give their best effort or do their best work. 
  • Students should be able to choose how they are assessed, whenever feasible. Choice of assessment activities can be problematic in assessments where strict comparability is required (e.g., for state accountability testing; see for example Campbell and Donahue [4]). Strict comparability requires that students must respond to assessment activities in standardized ways (e.g., everyone must write an essay). In formative assessment situations where performance assessments are utilized, standardization requirements can be relaxed [5], and opportunities to enhance equity can be designed into performance assessments. For example, culminating activities in performance-based assessments can offer students options to demonstrate their knowledge and skills in the ways they know best, such as writing, formal presentations, demonstrations, or other modes of expression.
  • We can use current and emerging technology capabilities to enhance equity in assessment. Technology capabilities include web searches, delivering technology-enhanced items with animated or dynamic stimuli and response features (e.g., drag-and-drop responding), automated scoring of student responses to assessment activities, and explorable, manipulable, immersive environments (e.g., EcoMuve immersive learning environments). Performance assessment approaches like performance tasks and project-based assessments are particularly good opportunities to capitalize on technology capabilities. And the flexibility and choice that are delivered through technological capabilities provide opportunities to enhance equity and place students at the center of their assessments. 
  • Some technologies, perhaps especially automated scoring of student responses in assessments, can contain unintended and unsuspected biases. AI-generated scores, just like human-generated scores on performance tasks, are only as equitable and valid as the student responses used to train automated scoring algorithms. If responses from students of color, English learners, and students with disabilities, who may provide quality responses using strategies unlike those of their mainstream-culture classmates, are not included in the training, biased scoring and scores can be the outcome [6]. A particularly vivid example comes from a report on the training of self-driving cars. Samuel [7] reported a study indicating the detection accuracy of dark-skinned pedestrians by self-driving cars was five percent lower than for lighter-skinned pedestrians. Why? Because dark-skinned pedestrians were not adequately represented in training the visual-detection algorithm. He describes this study as an example of a “problem in the development of automated systems: algorithmic bias.” 
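
To make the bias concern concrete, the sketch below shows one simple way a program could audit an automated scorer by comparing machine scores with human scores separately for each student subgroup. This is a minimal illustration under assumed data: the function name, the example scores, and the group labels are hypothetical, and quadratic weighted kappa is just one common agreement statistic, not a method prescribed by this article.

```python
# Illustrative sketch: audit an automated scorer for subgroup bias by
# comparing machine scores with human scores within each subgroup.
# All names and data below are hypothetical.
from collections import defaultdict
from sklearn.metrics import cohen_kappa_score

def audit_by_group(human_scores, machine_scores, groups):
    """Return human-machine agreement (quadratic weighted kappa) per subgroup.

    Large gaps between subgroups suggest some students' response strategies
    were not adequately represented when the scoring model was trained.
    """
    by_group = defaultdict(lambda: ([], []))
    for h, m, g in zip(human_scores, machine_scores, groups):
        by_group[g][0].append(h)
        by_group[g][1].append(m)
    return {g: cohen_kappa_score(h, m, weights="quadratic")
            for g, (h, m) in by_group.items()}

# Toy data: agreement is high for group "A" and poor for group "B",
# flagging a possible problem with the training sample or the model.
human   = [3, 4, 2, 4, 1, 3, 2, 4]
machine = [3, 4, 2, 4, 3, 1, 4, 2]
group   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(audit_by_group(human, machine, group))
```

In practice, a large agreement gap across groups would prompt a review of the training responses and the scoring model before any scores are reported.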

Assessing Students with Significant Cognitive Disabilities

Assessment of students with significant cognitive disabilities requires considerable focus on students as individuals, as student needs and familiar classroom practices can vary greatly. Common technologies as well as low- or no-tech options can foster better individualized assessment approaches to meet student needs. Supports can include flexible timing, 1:1 administration, object exchange, augmentative and alternative communication (AAC) devices, a variety of assistive technology, and other adjustments to administrative procedures. These should mirror what students use in their everyday instruction.

These technologies can be designed to promote higher academic outcomes for students with the most significant cognitive disabilities. The Multi-State Alternate Assessment (MSAA) program is a consortium of U.S. states and territories that seeks to allow flexibility in presentation and response mode that represents how students interact with materials in daily instruction. This allows them the opportunity to best show what they know, as independently as possible. Embedded supports and accommodations are available for students who take the computer-based, paper-based, or hybrid presentation of the test. One of the more familiar technological tools is the audio player, which reads assessment material aloud to the student. Likewise, due to one-to-one administration, a trained test administrator, familiar to the student, can also read the test aloud. Additionally, the assessment platform itself is compatible with a range of assistive technology that a student may use in daily instruction. This encourages continuity between how they receive instruction and how they may engage with an assessment, as well as more flexibility in how they demonstrate understanding of a concept. An extensive administration window allows flexibility in the length of time a student may need to complete the assessment. Where one student may answer several items during a testing session, another student may be able to answer only a single item. Additional computer-based supports include individualized visual options such as color contrast, magnification, and answer masking. Students who are blind or have low vision can be provided with tactile graphics [8]. Sireci, in explaining his coined term “understandardization,” points out that “incorporating understanding of examinee heterogeneity into the formal standardization process does not give us something less than standardization; it gives us a better understanding of the testing conditions that are best for each examinee” [5].
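As a concrete illustration of mirroring classroom supports in test delivery, the sketch below represents a student's accommodation profile and translates it into delivery settings. It is only an illustrative sketch: the class, field names, and settings dictionary are assumptions for this example, not the MSAA platform's actual configuration.

```python
# Illustrative sketch only: carry a student's everyday classroom supports into
# test-delivery settings so the assessment mirrors daily instruction.
# Field names and setting keys are assumptions, not any vendor's data model.
from dataclasses import dataclass, field

@dataclass
class AccommodationProfile:
    student_id: str
    read_aloud: bool = False          # audio player or familiar human reader
    color_contrast: str = "default"   # e.g., "black-on-white", "yellow-on-blue"
    magnification: float = 1.0        # 1.0 = no magnification
    answer_masking: bool = False      # hide answer options until revealed
    tactile_graphics: bool = False    # for students who are blind or low vision
    assistive_tech: list = field(default_factory=list)  # e.g., ["AAC device"]

def delivery_settings(profile: AccommodationProfile) -> dict:
    """Translate a classroom accommodation profile into delivery options."""
    return {
        "tts_enabled": profile.read_aloud,
        "contrast_theme": profile.color_contrast,
        "zoom": profile.magnification,
        "answer_masking": profile.answer_masking,
        "print_tactile_forms": profile.tactile_graphics,
        "external_devices": profile.assistive_tech,
    }

profile = AccommodationProfile("S-001", read_aloud=True, magnification=1.5,
                               assistive_tech=["switch interface"])
print(delivery_settings(profile))
```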

The classroom setting can often be the best source of guidance for assessment accommodations. The technologies that students use in their daily classroom instruction should be those that the students utilize on their assessments. After all, good assessment practices place assessment as a part of the instructional process, not separate from it. Students’ performance on assessment should be indicative of their content knowledge and should not assess their ability to adapt to unfamiliar tools.

Formative assessment, especially formative assessment that provides rich feedback to the student, is increasingly understood as a powerful use of assessment. To date, however, comparatively little attention has been given to formative assessment tools for students with significant cognitive disabilities. Technology can play an important role in formative assessment for students with significant cognitive disabilities, as can performance assessment. For example, project-based assessment and performance tasks can be scaffolded in ways that meet students’ needs, allowing them to access and build upon securely held knowledge. Additionally, removing the constraints of hard deadlines and time limits enables students to maximize their learning and focus on what to learn rather than simply rushing for the sake of completion. This flexibility can be assisted by technology that supports greater student responsibility and agency, such as a platform that provides information about learning targets and self-monitoring features in a student-facing dashboard. Feldman observes that the practice of flexible deadlines communicates that learning has a flexible timeline and pace, and that high-quality work is more valuable than stopping work while it is incomplete [9]. These types of assessment approaches, since they are classroom-based, can and should embrace assistive technologies that are available and familiar to students, while simultaneously adopting classroom practices that support greater student opportunity for success.
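To make the idea of a student-facing dashboard concrete, the sketch below shows one way learning targets and self-monitoring could be represented. It is a minimal illustration: the class names, status labels, and structure are assumptions for this example, not a description of any particular platform.

```python
# Minimal sketch (illustrative only) of a student-facing dashboard that lists
# learning targets and lets the student record self-assessed progress.
from dataclasses import dataclass, field

STATUSES = ("not started", "working on it", "got it")

@dataclass
class LearningTarget:
    description: str
    status: str = "not started"   # self-reported by the student
    evidence: list = field(default_factory=list)  # notes or links to work samples

@dataclass
class StudentDashboard:
    student_name: str
    targets: list = field(default_factory=list)

    def self_monitor(self, index: int, status: str, evidence: str = "") -> None:
        """Let the student update their own progress on a learning target."""
        if status not in STATUSES:
            raise ValueError(f"status must be one of {STATUSES}")
        target = self.targets[index]
        target.status = status
        if evidence:
            target.evidence.append(evidence)

    def summary(self) -> str:
        lines = [f"Dashboard for {self.student_name}:"]
        for t in self.targets:
            lines.append(f"  [{t.status}] {t.description}")
        return "\n".join(lines)

dashboard = StudentDashboard("Student A", targets=[
    LearningTarget("Explain the water cycle in my own words"),
    LearningTarget("Build a model that shows evaporation"),
])
dashboard.self_monitor(0, "working on it", evidence="draft explanation, Oct 3")
print(dashboard.summary())
```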

Research-Backed Approaches

An evolving body of research provides increasing support for centering students in their assessment. Elements of success include increased collaboration, more opportunities for student voice and choice, and better use of technology, all of which can increase student motivation and understanding of the benefits of assessment.

In the U.S., the idea of student-centered learning dates to the Progressive era, when reformers like John Dewey called for the education of the “whole child,” and has been rediscovered periodically.  The idea has remained elusive, however, due to the lack of a unified definition of student-centered learning and the difficulties in implementing it, given the wide-ranging needs and learning styles of children.  Most advocates agree on one aspect: current practices need to shift from being an “adult-centered,” top-down approach to one that emphasizes “learning that is personalized to students’ unique needs and aspirations and designed with their ideas and voices at the table” [10].

Experts have identified seven principles for student-centered learning: positive relationship, whole child needs, positive identity, student ownership and agency, real-world relevance, competency progression, and anytime anywhere [10]. Others have attempted to address the gap in practices by identifying key elements in student-centered learning approaches such as project-based learning.

One study acknowledges the challenges in creating assessments for performance-based learning and notes the following as hallmarks of a good student-centered assessment [11]:

  • Enables student agency, including choice of projects and problems.
  • Ensures individual learning is assessed in addition to group learning.
  • Provides clear parameters of project grading and rubrics.
  • Includes an anchor: overall project concept.
  • Delineates clearly the driving question: the main question that focuses students’ research.
  • Outlines the artifacts: all the materials that need to be gathered throughout the course of the project.
  • Provides detailed rubrics clearly spelling out criteria that are shared with students at the beginning of the project.

There is extensive documentation in the literature regarding the success of schools that embraced a student-centered approach and outperformed most traditional schools. To note a few outcomes [11]:

  • Higher graduation rates, student achievement, college preparatory course completion, and college persistence.
  • Significant gains in ELA and math, with students who started below national averages ending above them.
  • Increased motivation, creativity, metacognition, and collaboration. 

Research also indicates that performance assessments are uniquely suitable to meet the needs of students with cognitive disabilities, due to their flexibility, adaptability, and inclusiveness.  These assessments can be designed with learner variability in mind, incorporate accommodations to the level of proficiency a student needs to demonstrate as explicitly outlined in their Individualized Education Program (IEP), structure the assignment by scaffolding the tasks to guide the learner, and offer students opportunities to use multiple languages by including some native language portion of the product [12].  A recent study has identified four critical areas that need to be studied to enhance assessments for students with cognitive disabilities: (1) understanding the unique cognitive pathways to learning, (2) researching ways in which interactive-dynamic assessments can be utilized in charting different learning progressions, (3) improving the validity of assessments for students previously in the margins, and (4) documenting the potential consequences of technology-enabled assessments for students with disabilities. The study notes that technology-enabled assessments are more inclusive and measure achievement with improved validity for all students, especially students with disabilities [13].

Student-Centered Assessment in Action

Taking a critical look at schools that have fully aligned themselves to student-centered assessment allows us to investigate the practical and stakeholder implications of this work. Student-centered assessment provides students with a voice and choice in meeting expectations based on individual interests and pacing; this includes options for how they will demonstrate that they are meeting standards, what they have learned, and what they know. The learning process becomes an opportunity for students to have agency in how they will demonstrate their learning and meet standards and curriculum expectations with guidance from teachers and support from the community. Engagement with the community provides context and feedback for real-world application and further connects students’ work to what matters beyond the classroom. The role of the student in the learning process, according to a sociocultural learning perspective, is based on an understanding that learning is dependent on motivation, engagement, and a sense of efficacy [14]. In addition, engagement with learning also relies on relational aspects of a student’s identity and is not exclusive to cognitive resources [15].

One example of student-centered assessment in action is FlexTech High Schools, a charter school system in Michigan. FlexTech offers students a flexible schedule, with a focus on project-based learning, student agency, relationships, 21st-century skills, and community. The educational framework is designed to provide students with options for meeting curriculum expectations. Students are required to identify individual learning goals in consultation with teachers and/or advisors to meet the expectations of the Michigan Merit Curriculum and graduation requirements. Pathways to meeting these expectations include a variety of experiences related to project-based learning and assessment, community experiences, self-paced online classes, and classroom instruction and assessment. “Students integrate knowledge and skills from a variety of disciplines, consult with community experts and present solutions to authentic audiences” [16].

The educational model has an embedded advisory program that is significant to the educational engagement framework. The learning process begins with an exploration of student interests, skills, passions, and purpose to support and guide meaningful learning related to what students are excited about. Students are at the center of topic selection and of determining the methods and timing of assessment, which keeps activities and learning options meaningful.

FlexTech’s educational framework and delivery model provide us with a number of applications to study, including curricular applications aligned to demonstrations of knowledge; student-teacher dynamics; teacher as facilitator; student agency for identification of topics and mode of assessment; personalized learning; advisory programs to keep students on pace; and how to best serve students with an Individualized Education Program (IEP).

Students are expected to apply learning in order to do something bigger. For example, in the culinary arts program students create meal plans based on health conditions, and students present new innovations and business plans to the community in a “Shark Tank” format, bringing new products and services to life. Assessment of learning in such projects culminates in a showcase to the community and families as a demonstration of learning.

Student-centered assessment involves all students and ensures that historically marginalized students have an opportunity to be directly engaged in decision-making about the learning process, specifically as it relates to meaningful topics that are connected to experiences that are not always included in the curriculum. It provides choice in identifying how to express and demonstrate knowledge. Students of color and marginalized students are often underrepresented in curriculum materials and standardized assessment content. Assessments that are intentionally designed to validate, engage, and empower students provide a culturally sustaining framework [2]. When all students are validated and empowered to center themselves in the learning process, including how they are assessed, they have an opportunity to see themselves in the process. The performance-based framework provides opportunities for students to identify interests and contexts that are culturally relevant.

References

[1] Safir, S. and Dugan, J. Street Data: A next generation model for equity, pedagogy, and school transformation. Corwin, Thousand Oaks, CA, 2021.

[2] Lyons, S., Johnson, M., and Hinds, B. F. A call to action: Confronting inequity in assessment. Lyons Assessment Consulting. July 2021.

[3] Dixon-Román, E. J. and Gergen, K. J. Epistemology and measurement: Paradigms and Practices. 2013.

[4] Campbell, J. R. and Donahue, P. L. Students selecting stories: The effects of choice in reading assessment. National Center for Education Statistics, Washington, DC. 1997.

[5] Sireci, S. G. Standardization and UNDERSTANDardization in educational assessment. Educational Measurement: Issues and Practice 39, 3 (2020).

[6] Ferrara, S. and Qunbar, S. Validity arguments for AI-based automated scores: Essay scoring as an illustration. Journal of Educational Measurement (2022). https://doi.org/10.1111/jedm.12333

[7] Samuel, S. A new study finds a potential risk with self-driving cars: Failure to detect dark-skinned pedestrians. Vox. March 6, 2019.   

[8] Multi-State Alternate Assessment (MSAA). Test Administration Manual. 2021.

[9] Feldman, J. Grading for Equity: What it is, why it matters, and how it can transform schools and classrooms. Corwin, Thousand Oaks, CA, 2019. 

[10] Kaput, K. Evidence for Student Centered Learning. Education Evolving, St. Paul, MN, 2018. 

[11] Friedlander, D., Burns, D., Lewis-Charp, H., Cook-Harvey, C. M., Zheng, X., and Darling-Hammond, L. Student-Centered schools: Closing the opportunity gap. Stanford Center for Opportunity Policy in Education. Stanford University. June 2014.

[12] California Performance Assessment Collaborative. Best practices for designing accessible performance assessments. 2019.

[13] Bechard, S. et al. Measuring cognition of students with disabilities using technology-enabled assessments: Recommendations for a national research agenda. Journal of Technology, Learning, and Assessment 10, 4 (2010).

[14] National Academies of Sciences, Engineering, and Medicine. How People Learn II: Learners, contexts, and cultures. National Academies Press, Washington, DC. 2018.

[15] Shepard, L. A. Classroom assessment to support teaching and learning. The ANNALS of the American Academy of Political and Social Science 683, 1 (2019).

[16] FlexTech High School. Academics. 2021.

Additional Reading

Clark, B. Project based learning: assessing and measuring student participation. Research and Evaluation in Literacy and Technology 39 (2017). University of Nebraska, Lincoln.

Darling-Hammond, L. Developing and measuring higher order skills: Models for state performance assessment systems. Learning Policy Institute and Council of Chief State School Officers. Palo Alto, CA. 2017.

Nair, P. Blueprint for Tomorrow: Redesigning Schools for Student-Centered Learning. Harvard Education Press, Cambridge, 2014.

Shultz, G., Ellis, S., and Reif, G. A qualitative study of student-centered learning practices in New England high schools. Nellie Mae Education Foundation and the University of Massachusetts Donahue Institute. Boston. 2016.

Wolfe, R., Steinberg, A., and Hoffman, N. Anytime, Anywhere: Student-Centered Learning for Schools and Teachers. Harvard University Press, Cambridge, 2013.

About the Authors

Mark Johnson joined Cognia in 2011 and serves as the Measurement Services Senior Advisor, Content.  He taught mathematics at the middle and high school levels for 17 years. Early in his career, Mr. Johnson recognized the importance of good assessment in education and became a leader on various assessment projects. In 2003, he joined the Office of Student Assessment at the Massachusetts Department of Education, where he served in a variety of roles, including as Director of Test Development, prior to joining Cognia. He is passionate about providing all students with ways to equitably demonstrate their achievement.

 Uma Venkateswaran is a part-time educational professional at Cognia with deep expertise in assessment, innovation, and business development.  She has a strong knowledge of test design, business strategy, production, publishing, and manufacturing resulting from work experience at Cognia, Educational Testing Service, College Board, Educational Records Bureau, and John Wiley & Sons.  She has a doctorate in US History from Case Western Reserve University and was awarded the Six Sigma Green Belt for innovative research at ETS.

Fiona Hinds, Ed.D. is a Senior Advisor for Equity and Transformation at Cognia. Dr. Hinds is interested in advancing equitable assessment designs that include engagement with diverse stakeholders to ensure that inclusive cultures and identities are represented. She is committed to racial and gender equity and serves as the Chief Strategy Officer for the nonprofit organization Women in Measurement, Inc. 

Megan Bairstow, M.Ed., is the manager of the Content Development – Accessibility group for Cognia. She has spent her entire career in the field of special education, first teaching in public and private schools, and over the past 10 years, she has devoted her attention to the field of alternate assessment and accessibility. Megan puts her expertise in special education and technical assessment knowledge to use overseeing and leading the test development process and protocol for various projects across Cognia.  

Steve Ferrara is a Senior Advisor at Cognia. He was State Assessment Director in Maryland and a Head Start and high school special education teacher in Massachusetts. Steve designs summative and formative assessments for K-12 educational achievement, special education, and English language proficiency programs. He conducts research on standard setting, examinee response processes to test items, and test validation. He edited a professional measurement journal and serves on editorial advisory boards. He has produced more than 200 published conference papers and presentations. Steve was co-recipient of professional awards in 2014 and 2006; he co-holds US patent 10,796,230.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Copyright © ACM 2022. 1535-394X/2022/09-3514256 $15.00 https://doi.org/10.1145/3514256


