
Effectively evaluating online learning programs

By John Sener / May 2006


Effectively evaluating online learning programs starts with two simple ideas: Apply tried-and-true approaches, and take advantage of the opportunity to use new ones. In practice, while there are many effective approaches to evaluating online learning programs, many organizations do not know these approaches or how to apply them.

A theory based on cognitive science research helps explain why. Frame theory holds that "frames are mental structures that shape the way we see the world," according to George Lakoff, professor of Cognitive Science and Linguistics at the University of California at Berkeley [1]. As Lakoff notes, frames or framesets shape goals, plans, actions, and perceived quality of outcomes because they strongly influence which facts will be seen (those that fit the frame) and which will be ignored (those that don't).

To understand the power of frames, consider how the cliché expression "Sage on the Stage vs. Guide on the Side" reflects two very different ways of perceiving appropriate instructional practice. Such commonly used frames affect not only how one perceives online learning, but how one evaluates it. For instance, evaluating such factors as social presence, communities of inquiry, or discussion-thread design is far more likely to result from using a frame which views online learners as "connected" rather than "isolated," since the latter frame excludes these factors by definition (i.e., "isolated" learners are not social, do not have communities, and do not participate in discussions).

Ineffective evaluation of online learning results from faulty frames based on misperceptions about both online learning and evaluation itself. Because the frames one uses affect how online learning is perceived, effective approaches to evaluating online learning programs require using appropriate frames. Here is a quick guide to selecting useful frames and avoiding defective ones.

1. Use what you already know.
Avoid: Online Learning Evaluation Is a Different Universe
Alternative: Evaluation Is Evaluation

Online learning is not a creature from a different universe. Many familiar evaluation tools, such as the commonly used Kirkpatrick and Phillips (K/P) scales, apply just as well to online learning programs as to other types of programs. Academic programs routinely measure student satisfaction (Level 1) and learning effectiveness (Levels 1 and 2 on the K/P scales) in online courses, while some corporate and academic online learning programs attempt to evaluate impact at higher levels (behavior change, effects on business, return on investment). Collections of evaluation tools, resources, and publications, such as those maintained by The Evaluation Center at Western Michigan University, are also useful for evaluating online learning programs.

2. Seek value and meaning.
Avoid: Evaluation as Judgment
Alternative: Take Evaluation Literally

This faulty frame is more common in academia. In Barbara Wright's view, judgmental evaluation in higher education is a legacy of the accountability movement that began in the 1980s. In this view, "'evaluation' is about making value judgments" and enforcing "accountability" (often in the form of negative consequences) based on evaluation results. Evaluators are generally "external" to the process and thus supposedly "objective," despite their focus on passing judgment. In practice, evaluators working in such situations are usually resisted and sometimes co-opted. Seeking to avoid the consequences of negative findings, program administrators focus on damage control, and ownership of implementing the recommended improvements becomes disconnected from the process [2]. Approaching online learning evaluation as an occasion for judgment is unnecessary and likely to be unproductive.

A good antidote to the "evaluation as judgment" frame is to consider what evaluation literally means: to draw out ("e-") the worth (from the Latin valere) of something. This means focusing on figuring out what's of value to key "internal" and "external" stakeholders—everyone from project or program participants and their constituents to (in some cases) society at large. Thus, evaluators are in one sense "meaning makers," which is a particularly rewarding aspect of the work. Far from being some abstract philosophical construct, "meaning making" moves evaluation beyond data collection, analysis, and results reporting to explaining the significance and ramifications of program results in terms that make sense to stakeholders. Clients seek and welcome such an approach once they understand it.

3. Evaluation is an important part of a bigger picture.
Avoid: Evaluation as Episode; Evaluation as Autopsy
Alternative: Integrate Evaluation into Ongoing Practice

During a recent phone conversation, a colleague described her difficulties in a research and evaluation course she was taking for her doctoral program. She kept wanting to describe how the evaluation results could be applied, but the professor kept telling her to focus on the numbers and not be concerned with possible applications. Her efforts to toe this line were passable—she got a "B" in the course. In the world of practice, however, integrating evaluation with strategic planning, dissemination, and sustainability planning makes a lot more sense than treating evaluation as a "standalone" or episodic process.

In instructional design, such integration is taken for granted in models such as ADDIE (the "E" stands for Evaluation) and other Instructional Systems Design (ISD) approaches. Evaluating online learning programs should be integrated into strategic processes as well. This opens up evaluation to a broader range of possibilities for purposeful uses, for instance using evaluations to inform future grant proposals, to justify investment expenditures or funding increases, or to drive continuous quality improvement processes.

At the course level, relying exclusively on end-of-course evaluations is what Matt Champagne of IOTA Solutions refers to as "evaluation as autopsy." Instead of waiting until the patient has died, why not identify and address potential problems before the course is over? For example, David Sachs of Pace University uses ongoing evaluation to monitor online courses offered for the National Coalition for Telecommunications Education & Learning (NACTEL) project. These 15-week courses are evaluated at five-week intervals (Weeks 5, 10, 15) through detailed student surveys. Sachs can quickly review quantitative and narrative student feedback to identify potential problems and alert instructors to deal with them.
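
To make the "ongoing evaluation" idea concrete, here is a minimal Python sketch of how interval survey data might be summarized so that weak items can be flagged for instructor follow-up. The file layout, column names, and rating threshold are assumptions for illustration only, not part of the NACTEL process described above.

```python
# Minimal sketch (assumptions: a CSV export with 'week', 'item', and 'rating'
# columns on a 1-5 scale; the 3.5 threshold is illustrative, not prescriptive).
import csv
from collections import defaultdict

FLAG_THRESHOLD = 3.5  # assumed cut-off for "worth a closer look"

def flag_low_rated_items(path, week):
    """Average ratings for one survey interval and flag weak items."""
    totals = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if int(row["week"]) == week:
                totals[row["item"]].append(float(row["rating"]))
    return {
        item: sum(scores) / len(scores)
        for item, scores in totals.items()
        if sum(scores) / len(scores) < FLAG_THRESHOLD
    }

# Example: review the Week 5 survey and pass any flagged items to the instructor.
print(flag_low_rated_items("week5_survey.csv", week=5))
```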

Ongoing evaluation can easily be applied to shorter courses as well. For instance, a ten-day online workshop which I recently co-facilitated included a mid-workshop "critique and comments" discussion forum which invited participants to give feedback on activities conducted during the first half of the workshop. We used the feedback to make minor but helpful adjustments to the course.

Sometimes a corpse is all one has to work with, but use ongoing evaluation where possible—when done well, it enhances courses, engages learners, and improves the entire process.

4. Products and results are important; process is important too.
Avoid: Content Is King
Alternative: Evaluate Process as well as Products

Too often, vendors equate learning with content—in effect, they view learning as solely a function of learner-content interaction. An evaluation approach based on this frame naturally focuses on the efficacy of that interaction: How good is the content? How well do learners interact with it?

Recently, I attended a vendor demonstration of a learning management system (LMS) at the invitation of a client. The LMS had all sorts of features to support development of content and enable it to be packaged as re-usable learning objects (RLOs) in a variety of modes—print, SCORM, HTML, and XML among others. At one point, the vendor rep stated that with their LMS, "you can take a PowerPoint presentation and convert it into a course."

Of course, this was precisely the problem, since my client's interest was in finding an LMS that would support instructor-facilitated, case-based, and other learning approaches. Collaborative group projects, discussion forums, and qualitative assessments did not fit neatly into a "content" box, so the vendor's product was essentially incapable of supporting these approaches coherently.

The real problem with the frame is that learning does not equate with content. Effective learning environments tend to be systems, and in practice evaluating the results or products of a particular program usually involves evaluating other deliverables besides content.

However, evaluating products and results is not enough; evaluating process is often at least as valuable. One common strategy is to list key evaluation questions that pertain to specific process deliverables as well as product deliverables. Another good strategy is to include creating a process guide as a project deliverable. One good tool for capturing process is to have key project staff complete process or reflection logs, a simple journaling practice that invites them to reflect on and record observations about project activities while they are in progress.
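
As a sketch only, such a log can be as lightweight as an append-only file with a consistent set of prompts. The field names and example entry below are illustrative assumptions, not a prescribed instrument.

```python
# Minimal sketch of a process/reflection log (field names are illustrative).
import csv
from datetime import date

LOG_FIELDS = ["date", "activity", "observation", "implication"]

def add_log_entry(path, activity, observation, implication):
    """Append one dated reflection to a shared CSV log."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if f.tell() == 0:  # write a header row the first time only
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "activity": activity,
            "observation": observation,
            "implication": implication,
        })

# Hypothetical usage by a project staff member:
add_log_entry("process_log.csv",
              activity="Module 2 peer review",
              observation="Reviewers needed two extra days",
              implication="Build review time into the production schedule")
```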

Evaluating process is also important for capturing unanticipated benefits of projects. For instance, the Quality Matters (QM) project, funded by the US Department of Education's Fund for the Improvement of Post-Secondary Education (FIPSE) and administered by MarylandOnline (MOL), was originally focused on facilitating course-sharing among MOL member institutions to improve student access to online learning opportunities. In practice, an unanticipated benefit of the QM project is that the QM process is serving as a viable structure for promoting faculty development. Faculty participants in the peer review process (both peer reviewers and course developers) commonly report that it is a rewarding professional development opportunity.

Paying attention to process is also useful in uncovering significant events that would otherwise be ignored. Recently, a client had delayed the evaluation of a series of online learning modules because only one module, serving only a few students, had been offered to date, while the other modules were still in production due to development delays. Nonetheless, the client was eager to start a comparative evaluation of this module with their in-person classroom offerings because the window of opportunity was shrinking: they were already planning to make changes to their classroom offerings based on what they had learned from online course development. Rather than waiting for the results and analysis of the pilot offerings of all the modules, the client concluded that what they had learned from the development process was by itself enough evidence to make the change. As a result, documenting how this process occurred will become a focus of the project evaluation, but its significance would never have been uncovered without conscious attention to evaluating process.

5. Making it better is the ultimate aim.
Avoid: The Comparison Trap
Alternative: The Ultimate Aim = Make Things Better

Comparisons of some sort are unavoidable when evaluating education or training, but comparing delivery modes (in particular online/distance learning with traditional classroom-based instruction) for the purpose of establishing the superiority of one delivery mode over another is specious, irrelevant, and counterproductive [3]. Such comparisons assume a non-existent uniformity of practice, so even if a "significant difference" is found, there are a multitude of reasons for that difference which are independent of delivery mode.

A better frame is to focus on making programs better. This may seem too commonsensical to be worth mentioning—until one considers how commonly it is not applied in practice. In some cases, the "evaluation as judgment" frame also gets in the way. By contrast, Wright's "assessment loop" model focuses on using ongoing assessment to support improvement instead of enforcing accountability. The QM peer review process is another model which emphasizes course improvement, in this case for course design. The process is designed to be collegial rather than adversarial, and the labels used to evaluate quality are "meets expectations" or "needs improvement" rather than the more punitive pass/fail frame which is so deeply wired into the brains of educators. Administrators are cautioned against using peer review results judgmentally as part of faculty evaluation, unless it's to reward faculty for participating in the process.

Sample Approaches

A number of established approaches can be easily applied to evaluating online learning programs effectively—for instance ROI (Return on Investment)-focused methods, surveys of student engagement, and many others. However, evaluating online learning programs effectively can be much more than "do what you've always done." Just as the advent of online learning has given practitioners the opportunity to reflect and re-think how learning, training, and instructing are done, the same holds true for evaluating online learning. There are now a variety of new approaches for evaluating online learning effectively. What follows are a few examples of how such approaches can apply productive frames to effective evaluation.

Sloan-C Quality Framework. The Sloan Consortium (Sloan-C) has developed a framework for quality online education [4], a composite framework whose five key "pillars" or components (Access, Student Satisfaction, Learning Effectiveness, Faculty Satisfaction, and Cost Effectiveness) can be used separately or in combination with a variety of measures to evaluate the effectiveness of online learning. Although a major focus of this conceptual framework is supporting online learning programs that are equal to or better in quality than campus-based programs in higher education, all five pillars also address effectiveness, and the Faculty Satisfaction and Cost Effectiveness pillars also apply to quality improvement. The Sloan-C Quality Framework easily accommodates evaluating both results and process and integrates well with other evaluation tools; for example, the Kirkpatrick/Phillips (K/P) scales correspond closely with Student Satisfaction (K/P Level 1), Learning Effectiveness (K/P Level 2), and Cost Effectiveness (Kirkpatrick Level 4, Phillips Levels 4 and 5).
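
That correspondence can be captured as a simple lookup table. The sketch below merely restates the pairings above in code form; it is an illustration, not an official crosswalk between the two frameworks.

```python
# Sketch: the Sloan-C pillar / Kirkpatrick-Phillips correspondences noted above,
# expressed as a lookup table (illustrative only, not an official crosswalk).
PILLAR_TO_KP = {
    "Student Satisfaction": ["Kirkpatrick/Phillips Level 1"],
    "Learning Effectiveness": ["Kirkpatrick/Phillips Level 2"],
    "Cost Effectiveness": ["Kirkpatrick Level 4", "Phillips Levels 4-5"],
    # Access and Faculty Satisfaction have no direct K/P counterpart noted above.
    "Access": [],
    "Faculty Satisfaction": [],
}

def kp_levels_for(pillar):
    """Return the K/P levels associated with a Sloan-C pillar, if any."""
    return PILLAR_TO_KP.get(pillar, [])

print(kp_levels_for("Cost Effectiveness"))  # ['Kirkpatrick Level 4', 'Phillips Levels 4-5']
```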

The CEIT Model for Evaluating Online Learning. Another approach is the CEIT model [5], a four-stage model I developed for evaluating online learning that focuses on Comparisons, Effectiveness, Quality Improvement, and Transformation.

  • Using comparisons to understand the key attributes or capabilities of various online learning practices can be useful, so long as one avoids the comparison trap described previously. The results of comparative studies can be used to improve programs as described below.
  • Focusing on effectiveness can be deceptively simple ("Is it effective?" "What elements make it effective?" "What changes would make it more effective?"), but it enables evaluation of online learning on its own terms and can provide useful information on what works and doesn't work.
  • Evaluating quality improvement focuses on measuring the effects of implementing changes, so the "evaluating product and process" frame is particularly applicable here since both influence the results in practice. For instance, online learning object repositories often don't improve quality despite containing excellent resources because existing processes don't adequately support their use (good product, bad process). Conversely, online courses often link to a variety of learner support services, but the services themselves may not be adequate (good process, bad product).
  • Transformation focuses on how online learning programs affect the institution or organization. Although even effective, ever-improving online learning programs often remain isolated islands, some institutions decide to make a conscious and relatively comprehensive effort to create a transformative effect as a result of their experience.

The CEIT model accommodates multiple paths of progression through the four stages. Many online programs start at the comparative stage and progress more or less linearly through the other stages, while others exhibit these stages in a non-linear or concurrent fashion. Although still under development, the CEIT model accommodates application of well-established evaluation tools, determining what's valuable about a project's results and process, and focusing on the ultimate aims of improvement and transformation (making things better).

Mix and Match. These two approaches can be incorporated into an evaluation design; in fact, a "mix and match" approach can be used quite effectively to evaluate online learning. For instance, one evaluation of a series of instructor-led online bioterrorism courses incorporated student characteristics [6], media attribute theory [7], and social presence theory [8] into the measurement instruments (student and instructor surveys). Each course module gave students the opportunity to provide feedback on that module; course discussions were also analyzed using a content analysis instrument [9] to assess how effectively the courses created social presence in course discussions. The evaluation data was analyzed, and the results were used to incorporate improvements into subsequent iterations of the course [3].
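
As a rough illustration only (not the actual instrument cited in [9]), content analysis of this kind ultimately reduces coded discussion posts to indicator frequencies. The indicator categories below are hypothetical placeholders, and the coding itself is assumed to be done by human reviewers.

```python
# Rough sketch of tallying content-analysis codes applied to discussion posts.
# The categories are hypothetical placeholders, not the instrument cited in [9].
from collections import Counter

# Each coded post is a list of indicator labels assigned by a human coder.
coded_posts = [
    ["greeting", "self-disclosure"],
    ["agreement", "question"],
    ["greeting", "agreement", "humor"],
]

def indicator_frequencies(posts):
    """Count how often each social-presence indicator was coded, as proportions."""
    counts = Counter(code for post in posts for code in post)
    total = sum(counts.values())
    return {code: n / total for code, n in counts.items()}

print(indicator_frequencies(coded_posts))
```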

Conclusion

The best strategy for effectively evaluating online learning programs is to use what you already know, but also to move beyond it: seek to add value and make meaning, integrate evaluation into larger processes, evaluate process as well as results, and keep the focus on making things better. In the short run, clients are happier with results that reflect what they value; in the long run, evaluations that add value also build capacity and increase the potential for integrating evaluation more systemically into the organizational culture. Make the evaluation process more rewarding for all parties concerned by using frames and approaches that matter.

References

1. Lakoff, G. (2004). Don't Think of an Elephant! Know Your Values and Frame the Debate. White River Junction, VT: Chelsea Green, pp. xv, 37.

2. Wright, B. (2004). More Art Than Science: The Post-Secondary Assessment Movement Today. In J. Bourne & J. Moore (Eds.), Elements of Quality Online Education: Into the Mainstream, pp.185-198. Needham, MA: Sloan Center for OnLine Education.

3. Sener, J. (2004/2005). Escaping the Comparison Trap: Evaluating Online Learning on Its Own Terms. Innovate, 1(2). Retrieved June 1, 2005, from http://innovateonline.info/index.php?view=article&id=11

4. Moore, J. (2002). Elements of Quality: The Sloan-C Framework. Needham, MA: Sloan Center for OnLine Education.

5. Sener, J. (2005). Beyond the Comparison Trap: Strategies for Effective Online Learning Evaluation. Eleventh Sloan-C International Conference on Asynchronous Learning Networks, Orlando, November 2005.

6. Diaz, D. (2000, March/April). Carving a new path for distance education research. The Technology Source. Retrieved November 22, 2005, from http://technologysource.org/article/carving_a_new_path_for_distance_education_research/

7. Smith, P. L., and Dillon, C. L. (1999). Comparing distance learning and classroom learning: Conceptual considerations. The American Journal of Distance Education, 13(2), 6-23.

8. Gunawardena, C. (1995). Social presence theory and implications for interaction and collaborative learning in computer conferencing. International Journal of Educational Telecommunications, 1(2-3), 147-166.

9. Swan, K. (2002). Immediacy, social presence, and asynchronous discussion. In J. Bourne & J. C. Moore (Eds.), Elements of Quality Online Education, Volume 3. Needham, MA: Sloan Center for Online Education. Retrieved July 17, 2006, from http://www.kent.edu/rcet/Publications/upload/ISP&ADpict.pdf


