
Usability testing of e-learning
easy, effective, affordable

By Carol M. Barnum / April 2008


When an instructor hears nothing negative about students' experiences with online course tools, it's easy to assume that all is well. After all, the course or learning management system (LMS) has been reviewed, assessed, and "passed" as effective. And the same may be true for the course content, as a result of internal or external review by subject matter experts.

But these measures may not tell you much about effectiveness from the point of view of your users. In a previous article ("What Do You Mean When You Say 'Usability'?"), I described the various ways in which usability assessment is applied to e-learning and the essential-yet-often-missing element of usability testing in this process. I promised a follow-up piece describing how to add this essential component easily, effectively, and affordably. Here's that follow-up.

Why Test?

Usability testing focuses on the experience of the user engaged in an e-learning course; at its core, it is nothing more than observing the user performing typical tasks in pursuit of his or her goal. The goal could be as simple as reading a discussion thread and posting a response, posting an assignment to the dropbox, or using the course e-mail tool to send a message to team members. Typically, a usability test provides the opportunity to observe users performing a number of these small tasks and to learn how well the course structure and design support the users' goals. As I wrote in my previous article, "the user is the validator of usability." So the bottom line for determining usability is the user's perception of satisfaction with the experience.

Given the critical role that the user's perception of effectiveness and ease of learning plays in the usability of an e-learning course or LMS, how can you, as the instructional designer, learn about the user's experience?

You can set up a simple test to find out.

How to Test

Although I am the director of a state-of-the-art usability center, I will be the first to tell you that you don't need a lab to conduct an effective usability test. Sacrilege? Not really. I'd rather spread the gospel of usability to encourage as much internal testing as possible than preach that you have to use a fancy facility, such as ours, or you can't get good results. Of course, if you have the budget and want the extras that such a facility can provide, then there are advantages to testing in a lab. But there are no requirements to test in a lab.

That said, what do you need to set up a test? Here's a short list of requirements:

  • A room with privacy (a door).
  • A facilitator, who can also do double duty as official note taker.
  • A simple test plan, which identifies what you want to test, who you want to test, and how many you want to test.
  • A process for testing that establishes consistency in the script or "scenarios" and the methodology in each test session.
  • A computer for the user, set up to match the typical requirements for the course.
  • Optional: a video camera to record the session (not needed unless you want to review the results or share them with others).
  • Optional: observers, who can also be note takers. If you include observers, you should create a template for note taking so that similar sorts of notes can be compared after testing.
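One simple way to make observers' notes comparable is to give everyone the same template before the session starts. Here is a minimal sketch in Python; the column names are illustrative assumptions, not a standard, so adapt them to what your observers need to record:

```python
import csv

# Column names are illustrative assumptions, not a standard.
FIELDS = ["participant", "task", "time", "observation", "quote", "severity"]

def create_note_template(path="usability_notes.csv"):
    """Write an empty CSV so every observer records notes the same way."""
    with open(path, "w", newline="") as f:
        csv.DictWriter(f, fieldnames=FIELDS).writeheader()
    return path

create_note_template()
```

Printed copies of the same column layout work just as well as a spreadsheet; the point is only that every observer's notes line up afterward.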

When to Test

There are two types of usability tests:

  • Formative: testing that takes place while the course or LMS is under development.
  • Summative: testing that takes place after the course or LMS has been completed, either just before or just after "release."

The advantage of formative testing is that it provides ample opportunity to build the user's experience into continuing course development. Optimally, formative testing is iterative, meaning that there will be a chance to repeat the testing after further development to see if the changes are working well for users. Iterative testing answers the question, "Did we get it right?"

The advantage of summative testing is that it provides the opportunity to see how the whole course or LMS works as a cohesive product. The disadvantage, obviously, is that it is too late at this point to make any substantive changes. However, the reason to conduct a summative test is to learn about the user's experience for future course development. Summative testing is often conducted to establish a baseline before development begins on a totally new or radically revised version of a course or an LMS.

How Many to Test

This is always the big question, and the answer can be expensive if you believe you must test with large numbers for validity. When validating an LMS in a summative evaluation, large numbers may indeed be needed to "prove" its usability. If that is the case, the testing itself can become a marketing strategy: the company can promote the usability testing process and the usability of the product in its literature, at conferences, and in sales calls.

However, when usability testing is formative, it is focused on learning about the user's experience so as to build that learning into the continuing improvement of the course or LMS. In these instances, small numbers have proven very effective. Jakob Nielsen (and his colleague Tom Landauer) broke through the numbers barrier with seminal research documenting that five users typically uncover about 85 percent of the findings of a usability study.
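The Nielsen/Landauer finding rests on a simple model: if each user independently has probability p of encountering a given problem (about 31 percent in their data), then n users are expected to reveal 1 - (1 - p)^n of the problems. A quick sketch shows where the 85 percent figure comes from:

```python
def share_of_problems_found(n_users, p=0.31):
    """Expected share of usability problems seen by n users under the
    Nielsen/Landauer model; p is the chance that a single user
    encounters a given problem (about 31 percent in their data)."""
    return 1 - (1 - p) ** n_users

for n in range(1, 6):
    print(f"{n} user(s): {share_of_problems_found(n):.0%}")
# Five users reach roughly 84-85 percent of the findings.
```

The curve also shows diminishing returns: each additional user mostly re-observes problems already found, which is why iterating with small groups beats one large study.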

Many people have adopted the mantra of five, while others continue to challenge it. However, whether you are a believer or a skeptic, it should be understood that the premise for getting usable results with five (or fewer) users is based on two components:

  • Recruiting users who represent a narrow subset of the total user population.
  • Creating scenarios of use that place users in the same parts of the course or LMS as they work toward accomplishing the same goals.

Recruiting for Sub-Groups

Because any number of different users may take a particular e-learning course or use an LMS (traditional students, non-traditional students, students who have used an LMS before, and those who have not), it is important to identify the characteristics of a particular user group and recruit to those characteristics. If you want to learn about more than one sub-group's experience, you can actually reduce the number of users in each sub-group: the overlap between groups will produce similar findings, while you will also learn about the differences in each sub-group. Thus, if you recruit from two sub-groups, three or four users in each yields a combined total of six to eight users. With three sub-groups, you can drop the number to three users per sub-group, for a total of nine.
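The Nielsen/Landauer research cited above suggests why the overlap lets each sub-group stay small. Under the assumption that a problem shared across sub-groups can be found by any participant, the whole pool works on shared problems while each sub-group still covers its own:

```python
def coverage(n_users, p=0.31):
    # Expected share of problems found by n users, using the same
    # per-user detection probability as the Nielsen/Landauer research.
    return 1 - (1 - p) ** n_users

per_group, groups = 4, 2                # two sub-groups of four users each
shared = coverage(per_group * groups)   # problems common to both groups
specific = coverage(per_group)          # problems unique to one group
print(f"shared problems: {shared:.0%}, group-specific: {specific:.0%}")
# Eight users together cover ~95% of shared problems, while each
# sub-group of four still covers ~77% of its own group's problems.
```

This is only a model, of course, but it matches the practical advice: splitting a small budget across sub-groups costs little coverage on common problems and buys insight into each group's differences.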

Working from Scenarios

The reason for scenarios is to focus users' tasks on particular parts of the course or LMS. This produces overlapping findings within the space of an hour (the typical length of a test session). If you don't give users tasks, they are very likely to wander into vastly different parts of the course. Although this can be useful for a study of perceptions and what interests users on first exposure, it won't document findings that need fixing. So, if your goal is to diagnose or confirm expected issues that need to be addressed, scenarios focus users' attention on specific tasks.
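A session script can be as simple as a handful of timed tasks. This sketch uses hypothetical tasks (the thread name, file, and timings are invented for illustration), echoing the goals mentioned earlier:

```python
# A sketch of a one-hour session script. The thread name, file, and
# timings are hypothetical examples, not drawn from any particular LMS.
scenarios = [
    {"task": "Find this week's reading assignment and open it.",
     "focus": "navigation", "minutes": 10},
    {"task": "Read the newest post in the 'Week 3' discussion thread "
             "and post a reply.",
     "focus": "discussion tool", "minutes": 15},
    {"task": "Submit the provided file to the assignment dropbox.",
     "focus": "dropbox", "minutes": 10},
    {"task": "Send an e-mail to your team members from inside the course.",
     "focus": "e-mail tool", "minutes": 10},
]

task_time = sum(s["minutes"] for s in scenarios)
print(f"{len(scenarios)} tasks, {task_time} minutes of task time")
# Leaves room within the hour for a briefing and a closing debrief.
```

Keeping every participant on the same scripted tasks is what makes the findings comparable from one session to the next.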

What to Do During Testing

If you've followed my advice to this point, you've identified your goals for the test, decided the characteristics of the sub-group of the user population, screened participants who match this user profile, and determined the tasks you want your participants to do. Now, you're ready to conduct the test.

In the room you have obtained for testing, you should sit beside or off to one side of the participant. It's good to stay within the peripheral vision of your participant so that he or she can see you but not be distracted by you. If you have a laptop to take notes, you might want to sit a bit farther away so that your keyboard entry doesn't make a distracting noise. If you're using a clipboard with your note taking template clipped to it, you can sit closer. Wherever you sit, you want to position yourself to be able to observe where the participant is within the course.

Thinking Out Loud

In addition to observing what the participant does, you will want to encourage your participants to think out loud. Although this is not a normal activity for students in an e-learning course, the "think-aloud protocol," as it is called, provides you with a window into the mind of the user: an invaluable part of a usability study. It requires a little practice on the part of the participant and some encouragement from you, but with an occasional reminder, you will help participants share not only what they are doing but also their reactions, both positive and negative, to what they are doing. You can provide examples to give them the idea of what you mean by thinking out loud: "I'm clicking on the discussion icon . . . oh, my gosh, look at all those discussion postings! . . . I have no idea how to post a response to this one . . . "

As an observer and facilitator, you must resist the temptation to respond in any way other than neutrally to anything the participant says or does. If the participant turns to you and says, "Did I do that right?" your response might be: "Everything you do helps us understand the experience of using this course." Or you can throw back a question, "What do you think?" And if the participant says she's not sure, you can reply, "Help me understand your hesitation."

It's very hard to remain neutral in these situations and not to reveal your feelings. But, practice makes perfect. Or at least gets you closer to perfection with every study you facilitate.

So, What Does It Cost to Conduct a Test?

Since I promised to describe usability testing that's easy, effective, and affordable, here's the "affordable" part.

The typical usability expert's response to just about any question is, "It depends." The cost of testing is clearly one of those questions: the answer depends on a number of factors.

Let's assume, however, that you have secured the room and that you are donating your time or have approval from your management to take the time to plan and conduct the test and report the results. Let's also assume that you can recruit your participants for "free," meaning that they are readily available and will participate in the study without compensation. This is not as unusual as it might appear if your users are students: you very likely have access to them and can recruit them for perhaps the cost of refreshments. In this scenario, there is no actual cost for testing.

If you conduct a small study, you can broadcast the results widely to make the case for more testing. As Nielsen says in the article cited earlier, "The difference between zero [users in a study] and even a little bit of data is astounding." Of course, if you're doing this on your own and have never done it before, you'll want to read up on the methods to do it right. Numerous books, articles, websites, and conferences devoted to the methodology are easy to locate with a quick Web search.

If, however, you want a usability consultant to conduct the study (ideally with your involvement in planning it), you can consult the Usability Professionals Association website for a list of consultants and companies in your area. Costs vary widely, of course, but a ballpark baseline for a small study of five or six participants can run as low as $5,000, and will likely come in under $10,000.

While this may sound like a big number, it has to be weighed against the overall cost of developing the course or LMS, as well as the potential return on investment. ROI can be measured through student retention, student satisfaction surveys, passing grades, or completion data. For an LMS, it can be measured in increased sales or repeat customers.

Ready, Set, Go

This overview of the requirements for a usability study should give you the essentials to get started. Everyone has to start somewhere, and you may be the one who has to initiate the process where you work. If that's the case, I hope I have convinced you of the value of testing so that you can learn from your users about their experience. After all, the users will ultimately determine the success or failure of our efforts. Better to learn from them sooner rather than later.
