
Online testing: making it count

By Nikki Shepherd Eatchel / May 2007


The proliferation of Internet-based courses at corporations, educational institutions, and government agencies has produced a significant increase in online testing and assessment. If courses are delivered via the Web, it is a safe bet that course examinations and competency assessments are delivered online as well.

Online or computer-based testing (CBT) poses many of the same challenges as traditional paper-based testing (PBT), including those relating to security, psychometric editing, and legal defensibility. New issues arise with CBT, including an increased risk of candidate cheating and item overexposure. To address these risks, organizations should follow best practices for online test development and psychometric editing.

The increased risk of candidate cheating can be mitigated by a number of measures, including an expanded online test item bank and standardized test-item development. Developing a sizeable online item bank enables the routine refresh of online test content and minimizes the chances of candidates sharing information. Following the lead of large test administrators, organizations of all sizes are instituting scheduled item-refresh policies and processes that ensure candidates do not see the same online items or test design, generally decreasing the likelihood of information sharing.
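To make the idea concrete, the sketch below assembles a randomized form from an item bank, so that two candidates are unlikely to see the same items in the same order. It is a minimal illustration in Python; the item-bank layout and the names assemble_form and blueprint are assumptions made for the example, not features of any particular testing platform.

```python
import random

def assemble_form(item_bank, blueprint, seed=None):
    """Draw a randomized test form from an item bank.

    item_bank: maps a content area to a list of item IDs.
    blueprint: maps a content area to the number of items to draw from it.
    Each call samples a fresh set of items and shuffles their order, so
    candidates are unlikely to see the same items in the same sequence.
    """
    rng = random.Random(seed)
    form = []
    for area, count in blueprint.items():
        form.extend(rng.sample(item_bank[area], count))
    rng.shuffle(form)  # vary item order across candidates as well
    return form


# Example: a small two-area bank; a real bank would be far larger
bank = {
    "security": [f"SEC-{i:03d}" for i in range(1, 41)],
    "psychometrics": [f"PSY-{i:03d}" for i in range(1, 41)],
}
print(assemble_form(bank, {"security": 5, "psychometrics": 5}))
```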

Standardized test-item development processes for CBT can also mitigate the risks of candidate cheating and item overexposure, as they ensure that the same question can be asked in a number of ways. Organizations that rely on multiple item writers must establish standards and train writers on them to ensure proper variation in test-item style, format, and difficulty. A style guide with templates and online item development standards can go a long way toward improving item consistency, format, and variety. In addition, online content development training can ensure that item developers have the tools they need to produce credible, legally defensible items and item templates that can be used to create different variations on the same question.
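As a hedged illustration of how such templates might work, the sketch below varies the surface details of a single question stem while the scoring rule stays fixed. The TEMPLATE structure and the generate_variant function are hypothetical names invented for this example; a real item-banking system would store templates and answer keys in its own format.

```python
import random

# Hypothetical item template: the stem and scoring rule stay fixed while
# surface details (here, the numbers) vary, so writers can generate many
# equivalent versions of the same underlying question.
TEMPLATE = {
    "stem": ("A test form contains {total} items and {flagged} of them are "
             "flagged for review. What percentage of the form is flagged?"),
    "key": lambda total, flagged: round(100 * flagged / total, 1),
}

def generate_variant(rng):
    total = rng.choice([40, 50, 60, 80])
    flagged = rng.randint(2, 10)
    return {
        "stem": TEMPLATE["stem"].format(total=total, flagged=flagged),
        "key": TEMPLATE["key"](total, flagged),
    }

rng = random.Random(2007)
for _ in range(3):
    print(generate_variant(rng))
```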

Any organization developing or administering computer-based testing should be conscious of the psychometric editing process, which includes evaluating item difficulty levels and takes factors such as grammar, sensitivity, and style into account. Psychometric editing also covers the review of item form and function: whether answer options are parallel, whether the stem provides sufficient information to answer the question, and whether answer options are of comparable length. Ultimately, proper psychometric editing mitigates both cheating and item overexposure, as it ensures item variety, objectivity, and standardization.
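For readers unfamiliar with the statistics behind difficulty evaluation, the sketch below computes two classical item statistics from scored responses: difficulty (the proportion of candidates answering an item correctly) and discrimination (the point-biserial correlation between the item score and the total score). It is illustrative only; the data layout and the name item_statistics are assumptions, and operational programs typically rely on dedicated psychometric software.

```python
def item_statistics(responses):
    """Classical item statistics from scored responses.

    responses: one list of 0/1 item scores per candidate.
    Returns (difficulty, discrimination) per item: difficulty is the
    proportion answering correctly; discrimination is the point-biserial
    correlation between the item score and the candidate's total score.
    """
    n_candidates = len(responses)
    n_items = len(responses[0])
    totals = [sum(r) for r in responses]
    mean_total = sum(totals) / n_candidates
    sd_total = (sum((t - mean_total) ** 2 for t in totals) / n_candidates) ** 0.5

    stats = []
    for i in range(n_items):
        scores = [r[i] for r in responses]
        p = sum(scores) / n_candidates              # difficulty (p-value)
        if sd_total == 0 or p in (0.0, 1.0):
            stats.append((round(p, 2), 0.0))        # discrimination undefined
            continue
        # mean total score of candidates who answered this item correctly
        mean_correct = sum(t for t, s in zip(totals, scores) if s) / sum(scores)
        r_pb = ((mean_correct - mean_total) / sd_total) * (p / (1 - p)) ** 0.5
        stats.append((round(p, 2), round(r_pb, 2)))
    return stats


# Example: five candidates, four items (1 = correct, 0 = incorrect)
data = [
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
]
print(item_statistics(data))
```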

Given the importance of objectivity, psychometric editing is best performed by test development professionals rather than subject-matter experts or item writers. Individuals trained in the complexities of psychometric editing evaluate items in a different, more critical light than subject-matter experts or item writers do. It is important, however, that subject-matter experts in the appropriate field review and approve the final, edited item.

With so many organizations turning to the Web for testing and assessment, it is important to consider the issues and risks specific to CBT. A proactive approach that accounts for the increased risk of candidate cheating and item overexposure serves both the organization and the testing candidate better over the long term: it increases test validity and fairness to candidates, and offers a higher level of protection against legal challenges.


