An ACM Publication

Usability, user experience, and learner experience

By Mark Notess / August 2001


E-learning stocks are a rare bright spot in a gloomy tech market these days. Boosters of online learning promote its lower costs, broader accessibility, and personalization potential. But much e-learning still suffers from slow adoption and high dropout rates, and it leaves many students frustrated or unenthusiastic. The good news is that concepts and processes for addressing these shortfalls in learner experience can be found in the field of usability. In this paper, I outline ways in which the field of usability, properly understood, can help online learning fulfill its promise.

Usability Defined

"Usability" is finding its way into common speech, and its adjectival cousin "usable" is widespread: We talk not only about usable websites but also about usable information and even usable buildings. I recently found a recipe on the web for "Usable Chicken." The term has been around longer than you may think—it is a by-product of the industrial revolution, given new currency in the information age. "Usable" means "able to be used." Its new popularity arises from the excruciating un-usefulness of so much software. Granted, many mechanical devices are difficult to use, VCRs and microwave ovens being the most commonly cited sources of use-anguish. But software, with its hangs, cryptic error messages, and enforced waiting, has introduced new refinements in pain. And so usability has moved from being a binary attribute—I can use it or I can't—to being a continuous one, a measurable continuum, and hence a distinct discipline with its own jargon, methods, and societies. In fact, "usability" has come to mean not just an attribute but also a set of processes, even the people who apply the processes.

Usability as an Attribute

Usability is a measurable attribute of a product, but its measurement is not standardized the way, for example, performance benchmarks are. We cannot say something is "96.3% usable" and have people know what we mean. What we can do, however, is specify a user profile, a set of tasks, and a context of use. Then we can measure and report such metrics as task completion time and rate, error rate, and user satisfaction.

These sorts of metrics form the basis for usability goals. We might state a goal specifying that "95% of users install the software correctly in fifteen minutes or less without having to call technical support." This can be measured in a lab setting. If we do our work well, we can help our company predict support-call volumes relative to unit sales.
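A goal stated this way can be checked mechanically against lab-session data. Here is a minimal sketch; the session timings and the `goal_met` helper are hypothetical illustrations, not data from any real test:

```python
# Hypothetical lab results: (minutes to complete install, needed support call?)
sessions = [
    (9.5, False), (12.0, False), (14.8, False), (8.2, False),
    (13.1, False), (11.4, False), (16.5, False), (10.0, True),
    (7.3, False), (12.9, False), (9.9, False), (11.1, False),
    (13.7, False), (10.6, False), (8.8, False), (12.2, False),
    (14.1, False), (9.1, False), (10.9, False), (11.8, False),
]

def goal_met(sessions, limit_minutes=15.0, target_rate=0.95):
    """True if enough users installed within the time limit, unaided."""
    successes = sum(1 for minutes, called_support in sessions
                    if minutes <= limit_minutes and not called_support)
    return successes / len(sessions) >= target_rate
```

With the sample data above, 18 of 20 sessions succeed (one ran over fifteen minutes, one needed a support call), so the 95% goal is not met. The point of expressing the goal this way is that it is falsifiable: the team either hits the numbers or it does not.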

Usability as a Process

If usability is a measurable attribute with a financial impact, the question arises how to ensure or improve it. Processes that attempt to ensure or improve usability—commonly called usability engineering or user-centered design—observe two principles:

  • Know your user. Before you design something, make sure you understand who your customer is. This includes user characteristics, motivations, and context.
  • Iterate. Don't expect to build something right the first time. Build a prototype and evaluate it; then fix what doesn't work. Evaluate and redesign until it is good enough to release.

Beyond these basics, usability-enhancing processes may include such methods as:

  • usability goals or metrics.
  • design guidelines.
  • heuristic evaluations.
  • cognitive walkthroughs.
  • usability testing.
  • participatory design.
  • field studies.

These methods are described later in this paper.

Usability as a Functional Group

In large organizations, people who know how to facilitate usability are sometimes in a separate functional group. "We'll send it over to Usability," the project team says (usually after it is too late to fix anything). Usability groups often have labs where prototypes and products can be subjected to user testing while the users' winces and groans can be observed through cameras or one-way mirrors, or recorded on videotape.

Usability groups within large organizations face several problems. They often have to work hard to justify their existence because they don't produce anything (other than criticism and reports). They sometimes have trouble convincing their client organizations to involve them in product development from the very beginning. They can feel isolated and undervalued. Some organizations choose instead to sprinkle usability expertise throughout their organization, making usability a part of their product development rather than an optional add-on. This has its own disadvantages—for example, it is harder to justify the cost of a user-testing lab. And the sprinkled usability people run the risk of losing some of their objectivity by being tied too closely to the product development organizations. But the usability people are sometimes happier with this model. They feel they can make more valuable contributions, and they are less likely to be seen as extraneous.

In a business context, usability must be balanced with business objectives, technical constraints, time constraints, etc. Usability is just one of many product attributes about which appropriate business tradeoffs must be made.

User Experience

I find discussions about usability often get confusing because they fail to distinguish between two usability-related concepts—ease of use and usefulness. I usually include both concepts under usability, but some people limit usability to mean ease of use. Either way, it is important to recognize that just because something is easy to use does not mean anyone will want to use it. I learned this distinction early in my career when I worked on products that people found easy to use but did not in fact use. The products weren't useful because they didn't fit well with what our customers needed to do.

It is primarily this narrow definition of usability as simply ease of use that led Don Norman and others to adopt the term "user experience" instead. User experience focuses on the user rather than on an isolated product attribute or a process. Was the user's total experience with a product successful and positive?

One might argue against introducing the new term "user experience," let alone its abbreviation, "UX" (I prefer "UE"). An enlightened view of usability has always been indistinguishable from user experience. Yet there is something to be said for rejuvenating a discipline that has too often been marginalized and whose practitioners have sometimes failed to attend to the big picture.

The term "user experience" emphasizes that when we make product design decisions, we are impacting real people (i.e., users). We are not just designing a product. We are designing an experience for a real person, who may or may not be happy with the result. User experience cannot be fully predicted from a laboratory test—we have to take off our white lab coats, venture out into the real world, and deal with the messy complexity into which our product must fit.

Learner-Centered Design

How do the concepts and processes of user experience apply to online learning? To the extent that an online learning system is just another piece of software, the applicability is straightforward: the methods that have worked well with software applications should work equally well here. But creating online learning is not identical to creating typical software applications, because we must also concern ourselves with instructional strategies, content sequencing, and quality of learning.

Web usability has been a hot topic for the past few years, but web-based learning faces some different issues. Web usability has largely concerned itself with e-commerce—product catalog navigation and converting hits to purchases. Other web usability work has focused on information seeking and finding. But web-based learning is a different experience. It raises questions like these:

  • How can we keep learners engaged with large amounts of content?
  • How can learners get oriented and effectively navigate an online learning environment consisting of dozens of learning resources, tools, and activities?
  • How do we engender effective online collaboration between learners?

The remainder of this paper shows how usability methods and concepts can help answer these and other questions raised by the challenge of e-learning.

Usability as an attribute. The usability of online learning can be specified and measured. Usability is not some amorphous, subjective quality whose ultimate determination must be left to the taste of whoever holds the most power in the organization. Usability goals for courseware can be specified, based on business objectives, and can be measured. Here are some examples:

  • 95% of first-time students connect to the ABC virtual classroom within two minutes without requiring technical support.
  • Average XYZ web-based course registration times are under 60 seconds.
  • Experienced users of QRST safety training are able to launch and complete the new lessons without any instruction on how to use the new navigation model.
  • Students completing the LMNOP web-based lessons give the new learning environment an average rating of 4 or better on a five-point Likert scale ranging from "very difficult to use" (1) to "very easy to use" (5).

These goals sound like typical usability goals and are not unique to online learning. Depending on the structure of your organization, the same people who set goals for the usability of online learning may also be in a position to set goals for pedagogical effectiveness. Characteristics to measure could include

  • learner satisfaction with the learning content.
  • learner perceptions about the applicability of the content.
  • learner enjoyment of the learning experience.
  • actual learning, measured via tests.

Many aspects of product usability and pedagogical effectiveness are measurable: Choose to measure just those few that matter most to your business or organization.
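Of these characteristics, "actual learning, measured via tests" is commonly quantified with a pre-test/post-test comparison. A minimal sketch, assuming percent-scored quizzes and using a Hake-style normalized gain (the scores below are hypothetical):

```python
# Hypothetical (pre-test, post-test) percent scores for a small cohort.
scores = [(40, 70), (55, 80), (60, 75), (30, 65), (50, 90)]

def normalized_gain(pre, post, max_score=100):
    """Fraction of the possible improvement the learner actually achieved."""
    return (post - pre) / (max_score - pre)

avg_gain = sum(normalized_gain(pre, post) for pre, post in scores) / len(scores)
```

Normalizing by the headroom above each learner's pre-test score keeps high scorers, who have little room to improve, from dragging down the cohort average. Whatever measure you choose, the same principle applies as with usability goals: pick a small number of metrics and track them consistently.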

Know your learner. All too often, we assume we know everything about the people for whom we're designing. But do we? We may know our target learners are in a certain age range, are of a certain educational level, or have a certain amount of experience with the Internet. But this type of information is not specific enough to guide learner-centered design. Learner-centered designers ask (and find answers to) questions like these:

  • What are the preferred learning styles of our target population?
  • How do they currently learn this kind of information?
  • When and where will they do online learning? What are their time constraints?
  • What motivates them to participate in this online learning?

Although questionnaires and phone interviews can be used to build a learner profile, they almost never provide the detailed level of understanding we need to create effective e-learning. A far richer source of user data is contextual observation. Go where your learners are and get to know the context within which their learning must occur. Field study methods such as contextual inquiry can be very helpful here.

Iterate. No matter how exhaustive your understanding of your learners may be, it is still nearly impossible to build it right the first time. Rapid, successive design/evaluation cycles hone the learning solution until it is good enough to release. Although usability testing is the most well-known method for evaluating the usability of a design, other methods listed below may also be appropriate.

Usability testing of the online learning with representative learners is a quick and effective method for uncovering both major and minor design flaws. Research has shown good results using as few as five or six test subjects. Traditional usability testing involves having representative users execute specified tasks in a laboratory environment. This method is useful when the framework for learning delivery is under development or when any new interaction style or navigation is introduced. Usability testing can answer the question of whether learners will be able to figure out how to do what they have to do.

Eventually your learning delivery framework may stabilize—you standardize navigation and other interaction at an adequate level of usability. Is it still valuable to run usability tests as you develop new learning content for the established framework? I believe it is, but the focus changes somewhat. Instead of looking at the usability of specific tasks (e.g., register, start the first lesson, take the quiz), you focus on the macro task: learning. Are the learners able to achieve the learning objectives? Are the objectives achieved efficiently, effectively, and enjoyably? Usability testing can help answer these questions.

Evaluating learning may move usability practitioners outside their comfort zone. To be effective, they need to familiarize themselves with the evaluation frameworks and methods of instructional design. Also essential is acquaintance with educational testing research, learning styles, and the rudiments of learning theory.

Heuristic evaluations hold promise for online learning. In a heuristic evaluation, one or more evaluators use a set of principles (heuristics) to analyze a product for shortcomings. The heuristics should be based on research and/or industry best practices. While popular usability heuristics such as Nielsen's can be used with some effectiveness, we also need to develop sets of heuristics specific to different types of online learning. The challenge for most types of online learning is that established sets of heuristics do not exist. Web design heuristics that have grown up around e-commerce should be used with caution since many assumptions about the users of e-commerce do not apply to online learners.

Design guidelines may themselves be used as heuristics by evaluators. Ideally, applicable design guidelines are also followed by the instructional designers rather than being introduced by usability people after the fact. Again, general web design guidelines can be useful, particularly when establishing a delivery framework. But pedagogical guidelines typically have to be developed by each organization. Meanwhile, we can hope for (and work toward) research-based design guidelines covering a range of e-learning categories.

The cognitive walkthrough is a technique wherein an analyst steps through a product to see whether a particular task can be accomplished without the user getting confused or lost at any step. Since the "task" in online learning is to learn, a cognitive walkthrough can reveal whether, as each new concept or exercise is introduced, the learner has enough prior context to continue making progress.

Participatory design is a broad term encompassing methods for involving the ultimate users of a system in the design of that system. In developing online learning content, end users (learners) can be brought into the development process in a variety of ways: They can be test subjects in usability tests, they can be studied during needs analysis, they can provide feedback on storyboards or wire-frame mockups, etc. Focus groups of representative learners can provide feedback on a given instructional design.

Participatory design developed in Scandinavia as a way to ensure that information workers have some say in the systems they have to adopt. Online learning runs the risk of oppressing unwilling learners whose training is mandated. Participatory design helps required online learning be more palatable and useful.

Field studies are an effective way to gather compelling data about the total user (learner) experience. Observations of real learners working with your online learning in their real environments provide unique insight into what learners struggle with and what works well. For all their popularity, usability labs are not real environments. They rarely simulate the messier side of real-world environments, with their slow or intermittent modem connections, small screens, bad lighting, background noise, interruptions, time pressures, and other distractions, any of which could significantly detract from an online learning experience.

Learner Experience

Delivering superior online learning experiences requires a careful blending of concepts and methods from the domains of both user experience and instructional design. Today, this cross-pollination is not very advanced. One lesson we can learn from more than two decades of usability work is to avoid focusing our concerns too narrowly. Online learning is often not very usable today, but if we obsess about laboratory-measured ease of use while ignoring the broader issues of the learner experience (as well as broader business issues such as return on investment or time to market), we will repeat an oft-made mistake. For online learning to achieve its promise of providing high-quality interactive learning inexpensively anywhere, we need to make great strides in usability while ensuring that the total learner experience is satisfying, successful, and humane.

