An ACM Publication

What is usable e-learning?

By Michael Feldstein / September 2002


A recent eLearn Magazine feature revealed that most major producers of e-learning are not doing substantial usability testing, probably because most major purchasers and consumers of e-learning have no way of evaluating the degree to which a course is usable. To put it another way, despite the huge and growing sums of money being spent on e-learning, nobody is really checking to see whether the courses being developed are usable and therefore useful. In fact, we don't even seem to have a way to talk about usability in the context of e-learning.

There is, however, one ray of hope in this otherwise depressing report. Usability guru and fellow eLearn Magazine Advisory Board member Don Norman gives us a clue about how we can start this critical conversation. He is quoted as saying that for e-learning, "usability is not the major issue; learnability is." I believe this is a profoundly important statement, but it is also one that is difficult to unpack and translate into practical terms. My aim in this article is to look a little deeper into Dr. Norman's statement—deep enough that I can begin to define a research program for usability in e-learning. I will not make an attempt to solve the usability problem here. Instead, my goal is to define that problem well enough to make it solvable.

In order to do this, I will look at two questions. First, how can we define "usability" for e-learning in a way that can be measured? If Dr. Norman is correct, then we must come up with one or more ways of measuring the degree to which a course makes its contents "learnable." Anyone who has made a serious attempt to measure the effectiveness of a learning intervention, whether it is in academia or in the corporate world, knows that this is a challenge. We need a definition of our goal that is narrow enough to be measurable yet broad enough to be meaningful.

Assuming that we can answer this first question, we have a second one: Can we create meaningful usability tests that are simple, quick, and cheap enough? And can we realistically hope that e-learning creators will be able to perform them, given the time and budget constraints of the e-learning market? Part of this second challenge is intimately related to the first, since we have to make the case that the usability problems we are trying to fix are costly enough to be worth the investment required to fix them. Another part of it, though, has to do with crafting tests that are quick, easy, and affordable to implement. Our goal, after all, is not merely to be able to determine how usable a course is in an academic environment. Our goal is to make sure that real-world e-learning courses are made to be more usable. This is an engineering problem, not a pure science problem.

I will tackle each of these challenges in turn and provide a couple of examples of usability research questions that show the direction I think we need to take in order to solve the usability problem.

"Usable" or "Learnable"?

At the risk of being presumptuous, I'm going to rephrase Dr. Norman's claim somewhat by saying that "learnability" is one of the most important measures of "usability" in e-learning. In other words, learning is usually the use to which e-learning is supposed to be put.

This may seem like a trivial point until you consider the possibility that the things we call "learning objects" can be, and often are, put to other uses besides learning itself. Back in the days when the phrase "learning objects" was usually preceded by "just-in-time" and "just-enough" rather than "re-usable," instructional designers tended to think of learning objects as primarily useful for performance support. If you needed to perform a task, you would call up a learning object that would walk you through that task. Because it was "just-in-time" and "just-enough," you didn't actually need to learn the content. It functioned as a job aid, holding the knowledge for you until the moment you needed it. Learning often did happen as a result of using these objects, but it was really an ancillary purpose (despite the name). In essence, the learning object becomes an extension of our brains, holding the information and procedures for us so that we don't have to cram yet more stuff into our overtaxed memories. We internalize, or "learn," the knowledge embodied in the learning object only to the degree that having the knowledge in our heads is more useful than having it in our computers. (For a much deeper discussion of this sort of use, read Don Norman's book Things That Make Us Smart and note the places where he writes about "distributed cognition.")

Let's try out a thought experiment to see how this distinction matters when we try to measure the "usability" of a particular learning object. Imagine that two different financial services companies want to create a learning object that covers how to calculate the total interest accrued over the lifetime of a savings bond. Company A wants to use the learning object as part of a course designed to help their brokers pass a certification test. Company B wants to use the learning object as a just-in-time tool that customer service representatives can call up on their screens and use interactively to answer questions on the telephone when their customers call in.
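To make the thought experiment concrete, the calculation at the heart of both companies' learning objects might look something like the following sketch. The annual-compounding model, rate, and term here are illustrative assumptions; the article does not specify how the bond accrues interest.

```python
# A minimal sketch of the computation both learning objects teach:
# total interest accrued over a savings bond's lifetime.
# ASSUMPTION: annual compounding; the rate and term below are
# illustrative, not taken from the article.

def total_bond_interest(face_value: float, annual_rate: float, years: int) -> float:
    """Return the total interest accrued over the bond's lifetime."""
    final_value = face_value * (1 + annual_rate) ** years
    return final_value - face_value

# Example: a $1,000 bond at 5% held for 10 years.
interest = total_bond_interest(1000.0, 0.05, 10)
print(round(interest, 2))  # 628.89
```

Company A's learning object would wrap this logic in explanations and practice problems; Company B's would expose it as an interactive calculator the representative fills in while on the phone.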

How would you measure the "usability" of the learning object in each of these cases? Company A might look at the percentage of students who correctly calculated the interest in the appropriate questions on the certification test. Company B, on the other hand, might look at customer satisfaction levels for calls regarding bond interest, the length of time it takes a customer service representative to answer a bond interest call, and the percentage of times that the reps actually choose to use the learning object when they have the opportunity to do so.

Given these different measures of usability, one can imagine that the two companies might design their learning objects quite differently, despite the fact that the content is the same. Company A might include detailed explanations and analogies as well as multiple simulations. Company B might choose to start instead with a wizard or calculator that has rich explanations for each step that the learner/performer can choose to either read or ignore, depending on the time pressure and the needs of the customer being serviced.

Clearly, e-learning is a tool that can be and is used to support a number of cognitive goals, only one of which is what we tend to popularly label as "learning."

Focusing on Cognitive Goals

So far, you may be thinking that my example of e-learning tools used as performance support (or as tools for distributed cognition) seems like a side problem that doesn't get to the core of what it means to talk about usability for e-learning. The example above is very…well…vocational. It is appropriate for teaching narrowly defined skills within a corporate context but not obviously relevant to the core question of e-learning usability writ large. To show that there's something deeper going on here, let's look at a more academic example.

Suppose you have two courses that will make use of an interactive CD-ROM that contains the annotated works of John Locke. Course A is called "The Class Consciousness of the American Revolutionaries" and is taught in the history department. Course B is called "Individualism and the Novel" and is taught in the comparative literature department.

The professors in both classes are specifically interested in having students know something about the fact that John Locke was the first writer to use the word "person," meaning a unique individual, as distinct from a more generic "soul." Professor A wants her students to think about that fact in relation to the justifications for the revolution that appeared in political pamphlets and other forms of political speech from 1700 to 1780. For the in-class final exam, she expects them to be able to place Locke's writings and evidence of his influence on a timeline and write generally about the reasons why middle-class colonial dissidents might have found the notion of individualism to be useful. Professor B wants his students to be sensitive to how novelists like Gide and Goethe might be responding to these ideas in their writings. He wants them to be able to write an insightful analysis for the end-of-term paper.

At the end of the semester, the students in the two classes may have "learned" much of the same content, but they will probably be able to make use of it in very different ways. Students in Class A should be able to talk off the tops of their heads about sequences of particular historical events. When they learn about a new event or historical document, they should be able to make deductions about what may have influenced that event or document and what, in turn, it may have influenced, based on what they know about historical timelines and the ways in which these particular ideas tended to cross-pollinate. They also may be more likely to be able to quote Locke, since they probably would have had to memorize passages for the in-class exam. The students in Class B, on the other hand, may have a much fuzzier grip on the timeline or on Locke's exact words, but they should be very good at identifying echoes of, or responses to, Locke's ideas when reading new works of literature of the relevant time and place. Both groups of students "learned" about Locke's idea of personhood in some sense.

Given these two different cognitive uses and consequent measures of usability, one can imagine designing the CD-ROM differently for the two classes. Class A might benefit from an interactive timeline and highlighted excerpts of Locke's writings. They will want tools that help them memorize, or "learn," the appropriate information for the in-class test. Class B, on the other hand, might find it useful to have hyperlinks to debates within French and German literary circles over how to translate the word "person." They will want reference tools that support them as they attempt to discover, or "learn," the intricacies of an intellectual debate.

Usability, then, is defined by the ability of an object to support or enable a very particular, concrete goal. Usability in e-learning is defined by the ability of a learning object to support or enable (or, to use Dr. Norman's term from The Design of Everyday Things, "afford") a very particular, concrete cognitive goal. "Learning" is a somewhat sloppy term; a concrete cognitive goal that can be supported by a "learning object" (e.g., calculating the interest on a bond or looking up a reference that helps in the analysis of a novel) is not necessarily something that we would colloquially call "learning." Nevertheless, identifying that concrete goal is essential to finding a measurable sense of usability.

The Medium is the Message

One point that I have tried to make through my examples but haven't brought out explicitly yet is that "usability" in e-learning is about the way content is presented, and not just about the content itself. The concepts in the learning objects designed for the broker and the customer service representative may be exactly the same. Many of the words may even be the same. However, one may present these ideas in the context of an interactive flowchart and calculator that walks the student through the process while the other presents the same ideas in the context of an example and "cheat sheet" that are designed to help the student memorize them. Conversely, the same "cheat sheet" that is very effective for helping learners memorize the steps in a task may not be very effective at helping them make task-related decisions in the moment.

If we want to advance the study of usability for e-learning, then we have to look at the ways in which specific presentation or interface features have measurable impact on specific cognitive tasks or goals. Here are a few examples of the kinds of research questions that I think would be practical:

  • To what extent do site maps, on-screen menus, and tables of contents help learners internalize and remember the structure of the content? (Example: Does an always-visible content menu/outline of a course that teaches a set of structured tasks make the learners more likely to remember the tasks and their order?)
  • To what extent do site maps, on-screen menus, and tables of contents help learners to find key concepts when reviewing later? (Example: Do learners use courses with site maps as post-training performance support more than they use courses without site maps?)
  • Does audio-narration doubling of a text presentation (i.e., having a narrator read the same words that are on the screen) affect the learner's ability to remember key facts and concepts? (Example: Does hearing a vocabulary word and definition along with seeing the text increase recall rates?)
  • Does audio-narration doubling of a text presentation affect the learner's ability to process complex concepts? (Example: Do the differential speeds at which we read the text and hear it spoken cause interference that makes difficult concepts less likely to be comprehended?)
  • Does a threaded discussion board interface affect the frequency with which learners "harvest" particular ideas or facts? (Example: Are people searching discussion boards for specific information more likely to find that information when the search results return hyperlinks to individual messages rather than entire conversation threads?)
  • Does a threaded discussion board interface affect the frequency with which learners synthesize various viewpoints in a conversation? (Example: Are people less likely to read an entire conversation when each message is on its own web page instead of having them all presented on one page?)

When formulating these sorts of questions, I urge researchers to resist the temptation to over-theorize the cognitive goal that they're testing for. Because we're trying to test how usable our e-learning is, the way we define the cognitive goal has to bear a reasonably close resemblance to the way the learner is defining what she is trying to accomplish (i.e., the use to which she is putting the e-learning in the moment). Think of the cognitive usability goal as something that is defined anthropologically rather than psychologically. (This is why I have tended to use the word "goal" instead of "task.") It's how we ourselves would describe what we're trying to do when we're "learning" in a particular situation. This is not to say that cognitive theory is irrelevant; quite the contrary. Cognitive theory comes in when we try to explain not what we're trying to do but how our brains are doing it (or, in some cases, why our brains are not doing it). We draw on our understanding of the learner's intentions when we decide whether the learner is trying to "memorize" or "understand" the content that she is both hearing and reading. We draw on our knowledge of visual and auditory language processing in the brain to help determine whether the audio/text combination serves the learner's cognitive goals, and why or why not.

Making Usability Studies Usable

All this talk of e-learning usability testing sounds great in theory. The hard truth is that the vast majority of e-learning courses will never be tested using typical methods because those methods are expensive and take a long time to complete. The economics of e-learning production, whether in business or in academia, simply do not support it. Under these circumstances, how can we design usability studies in such a way that their results will actually have an impact on real e-learning design and development?

I believe that at least a partial answer can be found in a technique called "heuristic usability testing," developed by usability luminary Jakob Nielsen. Sometimes referred to as "discount usability," heuristic testing is faster and cheaper than many of the more traditional methods because it doesn't require bringing in real end users to try out the application or tool (or e-learning course) being tested. Instead, a small team of experts is trained to look for violations of general guidelines. The team members get together after individually evaluating the software (or course) and compare notes, prioritizing those problems that at least several of them rated as more serious. So, for example, one of the reviewers may notice that one of the video clips on a Web page doesn't display a "loading" status message while the video is being buffered from the server, and therefore violates the usability principle that system status should always be visible. (I ran into a situation recently in which a client saw two video clips in a course. One was programmed to show the status message and the other wasn't. Both took exactly the same amount of time to load, but the client subjectively perceived that the one without the status message was taking a lot longer. Had we done a proper heuristic test, we probably would have caught and corrected this oversight rather than having it show up as a bug in the course.)
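The pooling step described above, in which experts evaluate independently and the team then prioritizes problems that several reviewers rated as serious, can be sketched in code. This is a hypothetical illustration: the function name, the 0-4 severity scale (after Nielsen's convention), and the example findings are my assumptions, not anything prescribed in this article.

```python
# Hypothetical sketch of merging independent heuristic-evaluation reports.
# Each expert's report maps a problem description to a severity rating (0-4).
# Problems flagged by several reviewers at high severity rise to the top.

from collections import defaultdict

def prioritize_findings(reports, min_reviewers=2, min_severity=3):
    """Merge per-expert reports into a prioritized problem list.

    reports: one dict per independent reviewer, mapping
             problem description -> severity rating (0-4).
    Returns (problem, max_severity, reviewer_count) tuples,
    most severe and most widely flagged first.
    """
    severities = defaultdict(list)
    for report in reports:
        for problem, severity in report.items():
            severities[problem].append(severity)

    prioritized = [
        (problem, max(ratings), len(ratings))
        for problem, ratings in severities.items()
        if len(ratings) >= min_reviewers and max(ratings) >= min_severity
    ]
    prioritized.sort(key=lambda item: (item[1], item[2]), reverse=True)
    return prioritized

# Illustrative findings from three reviewers of an e-learning course.
expert_reports = [
    {"video clip has no loading indicator": 3, "menu labels unclear": 2},
    {"video clip has no loading indicator": 4, "no way to resume course": 3},
    {"menu labels unclear": 2, "no way to resume course": 3},
]
for problem, severity, votes in prioritize_findings(expert_reports):
    print(f"{problem} (severity {severity}, flagged by {votes} reviewers)")
```

The thresholds are design choices: requiring agreement from multiple reviewers filters out idiosyncratic complaints, which is roughly what the informal "compare notes" meeting accomplishes.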

On the surface, the heuristic testing process sounds much like what goes on informally in e-learning development teams today. My colleagues and I routinely agonize together over issues like whether a course's navigation design teaches the learner about the course structure or just creates visual static that interferes with the learner's ability to process the main points on a page. However, unlike our informal conversations, in heuristic usability testing both the heuristics themselves and the process by which the team of experts arrives at a consensus (and even how big the team needs to be) have been empirically validated using more expensive and time-consuming methods of usability testing. In other words, Nielsen and his colleagues tested to make sure that their teams of experts, following their well-defined method, uncovered an acceptable percentage of the same problems that end-user testing techniques were finding in a given situation. We have nothing remotely like that kind of validation for e-learning usability testing right now. Without it, all of our carefully thought-out "expert" assertions about what design features will make a particular course more usable are mostly just whistling in the dark.

If we are serious about making our e-learning usable, we in the field must make a concerted effort to define usability questions that are related to the learners' cognitive goals and to situate the answers to those questions in a heuristic testing framework that will make them useful. This is a tall order but it is by no means impossible. It's also critical. When defined in terms of achieving cognitive goals, "usability" gets at the heart of what it is that we claim our courses (whether they have an "e" in front of them or not) are supposed to do. A "usable" course is one that teaches in the ways that the students need in order to get the value that they were looking for when they signed up.

So we have to take this on. One possible next step might be for real, trained usability experts to take the loose, layperson's perspective I have presented here, put it under a microscope, find the flaws, and see if we can construct a more rigorous description of a research agenda. We need a dialog. eLearn Magazine could be a home for such a conversation. If you are a usability expert and find the ideas in this article even mildly provocative, then I urge you to put your thoughts on (virtual) paper and submit a response. Likewise, if you are a student in a usability program and are interested in putting together an article summarizing the research to date in the field, then please consider doing so.

Let's start building a repository of e-learning usability knowledge.

