
What do you mean when you say "usability"?

By Carol M. Barnum / February 2008


Usability and e-learning seem a natural fit. Usability is one of the five subject areas on eLearn Magazine's homepage. That's a good thing. But do we all mean the same thing when we say "usability"?

If you are using the term interchangeably with "assessment"—whether by students or experts or perhaps yourself as the surrogate student and actual expert—then you are using it to mean something less than it should. If you think of usability as meaning quality, particularly as it refers to the quality of the content, you're addressing an important criterion of usability, but you are not getting at the crux of the matter. Likewise, if you equate usability with validation, particularly as it refers to the functionality of the learning management system (LMS), then you are again restricting usability to something that can be checked, confirmed, and certified as usable, but you are not getting at the true meaning of usability.

The User Validates Usability

No amount of review, assessment, validation, or any other expert evaluation can confirm the usability of a course. Only the user can do that. And you are not the user. Case in point: How many times have you heard or read about e-learning courses that have been carefully and expensively designed, assessed, and documented, only to fail miserably on roll-out? In engineering circles, this result is called "broken as designed."

While there are many worthwhile activities you can pursue to improve the usability of your e-learning course, there is only one sure-fire way to confirm the usability: You must watch and listen to your users while they are engaged in the process of using the course in pursuit of a meaningful goal.

So, to formalize a definition of usability, I offer up the following: "the result of actions taken after observing, listening, and learning from real users who are actively engaged in pursuit of a real learning goal." And let me add to the definition that usability is a process, rarely an outcome. The goal should be improvement, not perfection.

Now that we have a working definition, how do we apply it?

The answer may surprise you. To set the stage, I'll begin by working through the common practices that do not fully provide it.

Focus Groups Are Not the Answer

Although focus groups are a popular method of market research with certain useful outcomes, measuring usability is not one of these. Desirability, maybe. Usability, no. Why? Because in focus groups, you ask participants what they think they would do or might want to do with a product. They do their best to express their desires, but these desires frequently turn out to be wrong in actual use of the product. The reason for this disconnect between wishes and reality, proven time and again in usability studies, is that users really don't know what they want until they actually do something with a product. Their potential wish list of desires may, in actuality, make the product unusable, rather than usable.

Focus groups continue to have an important role to play in marketing, because highly competitive products must constantly offer new features to appeal to customers' desires, even if the result is more complicated products. For this reason, companies that build products push features over usability almost every time. Usability enters the picture, typically, only after the product fails to be usable. Or when usability becomes the central selling point, such as with the iPhone, which answers the question of how a single device can do so much, so simply and beautifully (yes, aesthetics count in the user's experience).

Conversely, think of your experience setting the alarm on a hotel clock radio (and being confident that you have done it correctly). This simple act, so frustrating to so many, has resulted in headline-grabbing news as one major hotel chain after another has commissioned custom-built clock radios with simple features that guests find easy to use, building trust and decreasing calls to the "help" desk for wake-up calls.

E-learning does suffer from the same feature-creep syndrome as many other products, particularly from the point of view of LMS providers. The thinking is that the LMS with the most features is most likely to attract buyers. If you are in the business of purchasing an LMS, you've surely experienced this trend, perhaps even succumbed to it in your final purchasing decision. Typically, however, the end user is not the buyer.

Heuristic Evaluation Is Not The Answer

What little has been written about usability and e-learning tends to focus on the benefit of heuristic evaluation, or expert review.

Heuristic evaluation has its place, but it is not a substitute for usability testing, nor should it be labeled "heuristic usability testing," as it is called by Feldstein and Neal. Its effectiveness as an inspection method derives from applying a set of rules or "heuristics" to an e-learning product to determine where rule violations exist. Its usefulness derives from the creation of a task list to clean up an e-learning course so that violations of heuristics such as consistency and user control and freedom can be minimized or eliminated before users engage with the course. It does not, however, ensure that the expert reviewers have caught all the issues that could seriously affect the user's experience. In fact, heuristic evaluation frequently produces what are called "false positives," meaning the identification of issues that do not, in actuality, turn out to be problems for users. It's really good at identifying the little things that can be annoying. It's not so good at identifying "show stoppers."
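To make the mechanics concrete, here is a minimal sketch, in Python, of what an expert review produces: each finding is tagged with the heuristic it violates and a severity rating, and the findings are sorted into a fix list. The heuristic names follow Nielsen's well-known set; the data structure and the sample findings are invented for illustration and are not drawn from any particular LMS.

    # A minimal, illustrative sketch of recording heuristic-evaluation findings.
    # Heuristic names follow Nielsen's set; the findings themselves are invented.
    from dataclasses import dataclass

    @dataclass
    class Finding:
        heuristic: str    # which heuristic the issue violates
        location: str     # where in the course the reviewer saw it
        description: str  # what the reviewer observed
        severity: int     # 1 = cosmetic ... 4 = catastrophic (reviewer's judgment)

    findings = [
        Finding("Visibility of system status", "Assignment drop box",
                "No confirmation message after an upload completes", 3),
        Finding("Consistency and standards", "Module 2 quiz",
                "'Submit' button styled differently from the rest of the course", 1),
    ]

    # Turn the review into a prioritized task list, worst problems first.
    for f in sorted(findings, key=lambda f: f.severity, reverse=True):
        print(f"[severity {f.severity}] {f.location}: {f.description} ({f.heuristic})")

Notice what such a list cannot tell you: whether real students hesitate, lose confidence, or email the instructor in a panic. Only watching users can surface that.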

Here's an example: Let's say you want to conduct an expert review of the assignment tool in your e-learning course. You consider two of Nielsen's heuristics—visibility of system status and user control and freedom—as you assess the effectiveness of this tool. While you are able to determine that users can locate the assignment icon and can upload an assignment, you are not in a position to know how easily and, more important, with how much confidence and satisfaction users accomplish this task. Remember, you're not the user.

I can speak from experience that this particular feature of a popular LMS is not intuitive to the users and that the result is a lack of confidence, which has compelled numerous students to send me email, frequently with the assignment attached, after posting or attempting to post an assignment to the drop box. In their email, they ask me to confirm that they have actually posted the assignment successfully and that I can see it.

I can also state from experience that the function that allows them to withdraw an uploaded assignment may seem intuitive and obvious, but it evidently is not for my students, as they frequently send me the assignment via an external email address, telling me that they couldn't remove the original assignment (or didn't know that they could), and that they want to be sure I have their correct assignment. This is frustrating for the students, frustrating for me, and time-consuming for all: "broken as designed." Assessing this feature via heuristic evaluation might incorrectly affirm that it is usable, because the reviewers already know how to do it or bring the correct mental model, from prior experience, of what to expect.

As those of us who teach e-learning courses know, stress and frustration on the part of the user (and the instructor) can color the whole experience of learning and teaching online. Can you, as the course designer or instructor, predict whether your users/students will have anxiety and frustration about this or any other feature of a course? Perhaps in some cases, particularly if you experience the problem yourself, but it is far more likely that you will miss this potential problem because (a) you are too familiar with the product (the course tool) and (b) you are not the user.

You Can Measure the Usability of Something, But Should You?

Measurement seems to be the watchword for evaluation of e-learning. Mark Notess states that "Usability is a measurable attribute of a product." He describes things to measure: task completion times, completion rates, errors, and user satisfaction. No question that these things can be measured, including user satisfaction, in which you ask users to rate their satisfaction with a particular course or learning module. Likewise, measuring time on task is a common and often significant indicator of success in such activities as using websites, Web applications, software, and hardware, as well as locating answers in online help. As Feldstein and Neal write, "The usability of software (or courseware) is not a matter of personal opinions. It is a matter of measurable facts that can be used to redesign a user interface to get better results."
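To show what such measurement looks like in practice, here is a minimal, hypothetical sketch that computes the metrics Notess names (task completion rate, time on task, errors, and satisfaction) from a handful of invented test-session records for a single task. The field names and numbers are illustrative only, not data from any real study.

    # Hypothetical usability-test records for one task ("upload an assignment");
    # every number here is invented for illustration.
    sessions = [
        {"participant": "P1", "completed": True,  "seconds": 95,  "errors": 1, "satisfaction": 4},
        {"participant": "P2", "completed": True,  "seconds": 210, "errors": 3, "satisfaction": 2},
        {"participant": "P3", "completed": False, "seconds": 300, "errors": 5, "satisfaction": 1},
        {"participant": "P4", "completed": True,  "seconds": 80,  "errors": 0, "satisfaction": 5},
    ]

    completed = [s for s in sessions if s["completed"]]

    completion_rate = len(completed) / len(sessions)
    mean_time_on_task = sum(s["seconds"] for s in completed) / len(completed)
    total_errors = sum(s["errors"] for s in sessions)
    mean_satisfaction = sum(s["satisfaction"] for s in sessions) / len(sessions)

    print(f"Task completion rate:  {completion_rate:.0%}")
    print(f"Mean time on task:     {mean_time_on_task:.0f} s (completed tasks only)")
    print(f"Total errors observed: {total_errors}")
    print(f"Mean satisfaction:     {mean_satisfaction:.1f} / 5")

The numbers are easy to produce. What they cannot capture is why participant P3 gave up, which is exactly the gap the next section takes up.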

Another View of Measurement

Let me play devil's advocate here. As popular as measurement is, what does measurement show or tell us about the user's experience? While I recognize that "time is [frequently] money"—as in the time it takes a customer service representative to solve a person's problem in a call center—the measurement of the time it takes a student to perform a task in an e-learning course is not necessarily an indicator of effectiveness.

Effectiveness is in the eyes of the beholder. Jared Spool, a well-known usability consultant, discovered that users perceived download times as shorter—even when the actual time was much greater—when they felt they were making progress toward their goals. I have seen this response from users in my own studies, indicating that time is a relative measure of efficiency and satisfaction, not an absolute one.

Users indicate greater satisfaction and less frustration when they feel in control of the tasks they are performing, and when they feel confident of their ability to perform these tasks successfully. When they lose confidence in performing a task in an e-learning course, their trust in the course and the instructor, not just in the course tool, plummets. Some drop out of courses because too much effort is required to learn the tool or they have too little confidence about using the course. As Feldstein and Neal assert (and I agree): "Poor usability can have measurable negative impact on course completion rates and post-test scores."

How can we find out about these potential pitfalls before students take our courses? The missing element is motivation.

How to Measure Motivation?

Motivation can be measured in fairly traditional ways: course completion rates, passing grades, higher skills acquisition, student evaluations. But these measures are all conducted after a course is developed or delivered. Even when such measurements demonstrate effective outcomes, how can the mysterious but all-important elements of satisfaction and motivation be measured? And how can we measure those who did not complete the course? Interviewing those who fail to complete courses may give us some information, but does it give us the complete picture? In other words, if we learn from our students that personal, work-related, or other issues prevented them from completing the e-learning course, we may deduce that there was nothing wrong with the course. But how can we determine whether shortcomings in usability diminished students' motivation and commitment? Might those students have persisted if the perceived obstacles to participating in the course had been removed? The same questions can be asked of students who do complete our courses. We can confirm that they are satisfied. But how will we determine how well-satisfied they are, and which elements of the course provoked needless challenges, confusion, and frustration?

I think that measuring motivation is the wrong goal. Rather, we should be learning about motivation by observing our users and listening to their reactions to our course content and design while they are engaged in the process of pursuing a learning goal.

In other words, no matter how "useful" and even "easy" it is to obtain measurements of course and student success, the missing piece is the user experience, which only usability testing can provide. And the result should not be a measurement of success, but an appreciation of the user's experience of learning, which results in a list of questions about how to improve it.

Of course, there is a cost for testing (in both time and money), but it does not have to be prohibitive, and it should be budgeted into the course development process at the same time that any other associated costs are being budgeted.

In a future article, I will tell you how you can do it cheaply, easily, and effectively.


