
Designing usable, self-paced e-learning courses
a practical guide

By Michael Feldstein, Lisa Neal / August 2006


This guide is designed to help teams of instructional designers and content experts create effective, self-paced e-learning. It presents best practices for improving usability, and it assumes no prior knowledge of usability: any instructional designer or content expert can apply the techniques.

Applying the techniques in this guide will reduce the likelihood of learners getting confused, getting lost, or failing to complete a course. The guide is intended to be helpful throughout the entire development cycle of e-learning courses: it includes usability engineering strategies that course designers can use early in the development process, as well as usability testing methods that can be applied to completed courses.

USABILITY BASICS

At its simplest, "usability" is a measure of how easy it is for a user to complete a task. For example, imagine that you are assessing the usability of a bill-paying feature in an online banking Web site. You might ask some of the following questions:

  • Can new customers locate the link to where they can pay a bill?
  • How long does it take a customer to pay a bill?
  • How often does a customer get confused or frustrated and give up before completing the bill payment process?
  • How likely is a customer to make a mistake while paying a bill (for example, paying the bill out of the wrong bank account)?

Notice that the answers to these questions are all quantifiable. Given time and resources, we can measure exactly how many seconds it takes a customer to pay a bill and exactly what percentage of customers start the bill-paying process but don't complete it. The usability of software (or courseware) is not a matter of personal opinion; it is a matter of measurable fact that can be used to redesign a user interface for better results.
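Given session logs, these measurements are straightforward to compute. Here is a minimal sketch in Python (the session records and field names are invented for illustration, not taken from any real banking system):

    from statistics import median

    # Hypothetical log of bill-paying sessions: whether the user finished
    # the task and how long the attempt took, in seconds.
    sessions = [
        {"completed": True, "seconds": 95},
        {"completed": True, "seconds": 140},
        {"completed": False, "seconds": 310},  # gave up partway through
        {"completed": True, "seconds": 88},
    ]

    completed = [s for s in sessions if s["completed"]]
    completion_rate = len(completed) / len(sessions)
    median_time = median(s["seconds"] for s in completed)

    print(f"Task completion rate: {completion_rate:.0%}")    # 75%
    print(f"Median time to pay a bill: {median_time}s")      # 95s

Even this much arithmetic is enough to compare two candidate designs: the one with the higher completion rate and lower time-on-task is measurably more usable for that task.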

On the other hand, there can be a measurable difference in how usable the same application is for two different people. Your mother, your teen-aged daughter, your mail carrier, and your accountant may find a particular bill-paying application much easier or harder to use than you do. There could be, for example, a measurable difference in how many young bank customers give up on paying their bills online versus how many older customers do. So part of the process of designing for usability includes asking the question, "Usable for whom?"

Why Usability Matters for Courseware

One of the most important goals of usability engineering is to increase the likelihood that users will achieve their goals for using the software (such as paying a bill or learning about a topic). If you think of completing a course or passing a test as tasks that the "users" of courseware are trying to accomplish, then it is easy to see why usability engineering matters. Courseware that is not designed for usability can create challenges for learners that have nothing to do with the difficulty of the content: learners can be distracted from the critical subject matter by having to learn how to use the courseware itself. Poor usability can have a measurable negative impact on course completion rates and post-test scores.

Testing For Usability

Usability is too often tacked onto the end of a project, with testing conducted on test subjects who are as close as possible to the eventual users. A usability test typically seeks to answer questions like the ones listed above for a bill-paying application, and it should lead to redesign if the results are unacceptable for the given application and user population. Completed software can be tested, but testing is also effective on prototypes or even paper sketches of a user interface. Traditionally, usability testing has been performed by trained experts in special labs equipped with one-way mirrors and video cameras, or with equipment that tracks users' response times or eye movements. More recently, testing techniques have been developed that require no special equipment or lab and can be used effectively by testers with less training. While some of these techniques involve giving tasks to test subjects, others require only experienced reviewers, as described below.

Designing for Usability

Designing for usability from the very beginning increases the likelihood of a usable product and reduces the need for testing at the end, when it is often too late or too costly to make substantial changes. Designers can be taught basic usability principles that they can apply from a project's onset. This training should include techniques that help them understand who they are designing for and what the learners' needs are, while enabling them to test for usability at every stage of the development process.

The Role of Expertise

While the techniques described in this guide can be applied effectively by a team of non-experts, trained and experienced human-computer interaction (HCI) professionals will get better results using the same techniques. You will get the best results by having a regular team that can gain experience together and share best practices over multiple projects. A team will gain expertise more quickly when it gets feedback from learners once the courseware has been put into production. A usability professional might also be brought in to work with your team on the first few projects, particularly if getting usability right the first time is critical.

Assembling Your Team

Your usability team should include all of the roles that are needed to design and develop courseware:

  • Subject-matter experts
  • Instructional designers
  • Developers
  • Graphic designers
  • Multimedia specialists
  • Usability consultants

Before having your kick-off meeting, have all team members read this guide. You can use the kick-off meeting to assign usability engineering tasks and put together your project plan.

Two Tools That Can Be Used By Anyone

The rest of this guide focuses on two usability engineering tools that your team can employ on just about any project. The first, personas, enables you to identify important characteristics of your users (or learners) so you can design courseware that better meets their needs. After interviewing a handful of prospective learners using the interview protocol described in this guide, you construct a small number of profiles for imaginary, but prototypical, learners. These profiles, or personas, can be used by the team during both design and testing. Persona development should take place at the very beginning of a project, before the actual courseware design begins. The next section of this document describes persona development in more detail, and the appendices have templates that can be used to develop and document the personas for your projects.

The second tool is called a heuristic usability test. A heuristic is simply a rule of thumb; in this case, usability researchers have identified a set of heuristics that describes general principles of usable software design. Research has shown that groups of reviewers who look specifically for violations of these principles tend to catch a high percentage of the same problems that usability experts find using specialized (and sometimes expensive) testing equipment. For this reason, a heuristic usability test is one of several tools collectively called "discount usability tests." Formal heuristic reviews often take place later in the design process, but knowledge of the technique can be built into the entire design process in a variety of ways. The third section of this document describes the heuristics and gives examples of how they can be applied to e-learning courseware. The appendices contain templates that can be used for your heuristic evaluation process.

Further Reading on Usability Basics

Jakob Nielsen's Web site http://www.useit.com has information about many aspects of usability. His Alertbox columns at http://www.useit.com/alertbox provide more detail about specific aspects of usability; a free subscription is available.

The US Government's site at http://www.usability.gov provides guidelines and checklists.

The Society for Technical Communication's Usability & User Experience Community site at http://stcsig.org/usability/index.html also has many resources, particularly at http://stcsig.org/usability/resources/toolkit/toolkit.html.

PERSONAS

Have you ever purchased or been given a gadget or a piece of software that you ended up abandoning because it didn't work the way you needed it to? Maybe you got a PDA, only to discover that the color screen you don't really need shortens the battery life so much that it runs out of power in the middle of your work day. Maybe you bought special project-management software, only to find that it imposes a way of organizing your projects that doesn't work for you. Whatever the item was, it just didn't fit your needs and the way you work. It's not that there was anything wrong with the product; it just wasn't right for you. The same problem of poor fit can happen with courseware. The lessons could require longer blocks of time than the learner has available in one sitting, or the course could assume a different level of comfort with computers or a different level of background knowledge of the subject matter.

The best designs are those that fit the needs of a variety of users. A digital camera can be designed for Sarah, a teacher who travels every summer and spends the rest of her year revisiting her trips through her photographs, as well as for Joe, who takes pictures of his children every weekend and emails them to relatives. The same could be said for photography courseware. At the same time, there are tradeoffs to consider. Sarah may want her camera to handle a variety of subjects and light conditions she may encounter on her travels, while Joe just wants taking pictures of his kids in the back yard to be as simple a process as possible. Designing a camera for both Sarah and Joe may be challenging.

No product and no courseware can be designed to perfectly meet the needs of every learner who might use it. However, designers can increase the likelihood of meeting the needs of most learners by thinking carefully and concretely about their target learner population. For example, an emergency room physician and a volunteer emergency medical technician (EMT) may have very different needs and preferences for their courseware, even if they are learning the same topic. They will tend to have different comfort levels with computers, different background knowledge, different amounts of time to take the course, and different motivations for taking it. Having a clear understanding of the relevant characteristics of the target learner population will enable your team to design courseware that leads to better learning outcomes.

Personas are profiles of prototypical learners that you create with your design team. They can be used for one or for many courses, depending on the courses and the applicability of the persona attributes. Ranging anywhere from a paragraph to a couple of pages in length, personas provide enough detail about relevant learner characteristics to help the team imagine the audience(s) they are designing for. The process of creating personas helps the members of the team develop a shared understanding of the learners. In the process, the team discusses many aspects of the learner population that help them in the subsequent design tasks. It also helps them avoid the mistake of designing the courseware for themselves—a common mistake that Tom Landauer calls the "egocentric intuition fallacy."


How Personas Work

Personas work through the power of storytelling. Once you have a reasonably vivid mental picture of a prototypical learner, it's fairly easy to imagine that person trying to accomplish his or her goals using the courseware that you are designing. The team can make more effective decisions about the "fit" between the course design and the intended audience when the discussion is about what Ted or Maria would like instead of having abstract conversations about "good" design.

Persona descriptions contain information designed for three purposes (a sketch of how this information might be recorded follows the list):

  • Realism: Personas have to be vivid enough for team members to remember them and use them naturally. For this reason, they should contain a few strategically chosen pieces of fictitious biography to make them realistic (e.g., a first name, an age, a few details about family or home life, and information about educational background and job). There should be enough detail that it is natural for a member of the design team to ask questions like, "Would James find this layout confusing?"
  • Goal definition: It is very important for the team to understand why learners are taking the course. Relevant questions to ask are as follows:
    • Are they required to do so as part of their job?
    • Will it help them in career advancement?
    • Are they taking it for personal satisfaction and enjoyment?
    This information will help the team identify effective ways to motivate learners and to recognize which learners may be at higher risk of dropping the course before completing it.
  • Learner strategies, competencies, and limitations: Every learner has a set of strategies for proceeding through the courseware depending on their particular skills, limitations, and environment. Some important questions are as follows:
    • Is the learner comfortable with computers?
    • Is he or she used to reading large quantities of text on the screen?
    • Is this his or her first online course?
    • Where will he or she be taking the course?
    These details can make a big difference in terms of the fit between a particular courseware design and a learner's needs.
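To make the three categories above concrete, here is a minimal sketch of how a persona's fields might be captured as a structured record (the field names and sample values are invented for illustration; Appendix II contains the actual template):

    from dataclasses import dataclass, field

    @dataclass
    class Persona:
        # Realism: a few pieces of fictitious biography to keep the persona vivid.
        name: str
        age: int
        biography: str

        # Goal definition: why this learner is taking the course.
        goals: list = field(default_factory=list)

        # Strategies, competencies, and limitations.
        computer_comfort: str = "moderate"  # "low", "moderate", or "high"
        first_online_course: bool = True
        study_environment: str = ""         # where the course will be taken

    james = Persona(
        name="James",
        age=42,
        biography="Volunteer paramedic; works as a gardener; two children.",
        goals=["meet annual recertification requirement"],
        computer_comfort="low",
        study_environment="home PC in the evenings, frequent interruptions",
    )

A record like this is only scaffolding; the short narrative version of the persona is what the team actually uses in design conversations.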

Researching Your Personas

Personas are most effective when they are based on research about real people who might take your course. The goal of the research is to gather enough data about your probable learners to see patterns in their characteristics, needs, and goals. You will be profiling your interviewees, gathering the same kinds of information that go into the personas themselves, but in more detail. Appendix I provides a sample interview protocol, including how to locate people to interview, how to identify questions to ask, who should conduct the interviews, how they should be recorded, and how the people who participate should be compensated.

Generally speaking, more is better when it comes to the number of interviews you conduct, particularly in cases where you have a broad intended audience. A dozen interviews is generally the minimum needed to get a detailed picture of your audience, and 20 or more is not uncommon.

Analyzing Your Research Data

Once you have gathered your interview data, it is a good idea to review it as a team. You will be looking for patterns that indicate the characteristics that your personas should include and how many personas to develop.

Here are some of the types of patterns the team should look for (a tallying sketch follows the list):

  • End goals: Why are people taking the course? (Out of curiosity? A job requirement?) Do they expect to be able to do anything differently after having taken the course? How, if at all, do they hope that the course knowledge will impact their lives?
  • Experience goals: How do learners believe they will feel while taking the course? Are they anxious about getting lost or not passing a test? Do they hope the content might be engaging or fun?
  • Course environment: Where will the learners take the course? (At their desks at work? In a learning lab? At home?) Will they be able to take it all in one sitting, or will they have to take it in small chunks over a period of days or weeks? How likely are they to be interrupted while taking the course?
  • Literacy and computer literacy: How comfortable are the learners with computers? How much of their day do they spend using one? What kinds of work do they do with it? How much on-screen reading do they do? And how much background knowledge do they have of the courseware content? Do they know the vocabulary? What is their experience with the skills and situations dealt with in the course?
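Once the interviews have been coded along dimensions like these, a simple tally is often enough to surface the patterns. Here is a minimal sketch (the coding categories and answers are invented examples, not a prescribed scheme):

    from collections import Counter

    # Each interview, summarized as coded attributes (invented data).
    interviews = [
        {"end_goal": "job requirement", "environment": "desk at work",
         "computer_comfort": "high"},
        {"end_goal": "job requirement", "environment": "home, evenings",
         "computer_comfort": "low"},
        {"end_goal": "career advancement", "environment": "desk at work",
         "computer_comfort": "high"},
    ]

    # Tally each attribute across interviewees; clusters of answers suggest
    # how many personas are needed and what each one should capture.
    for attribute in ("end_goal", "environment", "computer_comfort"):
        counts = Counter(i[attribute] for i in interviews)
        print(attribute, dict(counts))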


The final result of this analysis should be a list of characteristics that are relevant to your course design and that will be used to create your personas. See Appendix II for a persona template.

Creating Your Personas

Once you have identified the relevant patterns in the interview data, you are ready to create personas that reflect your target learners. For example, although the team may have interviewed a pulmonologist, a bond trader, and a physics teacher, all of these interviewees have high experience and comfort levels with computers. Therefore, at least one of the personas that your team creates should be highly computer literate.

Unless your courseware is narrowly targeted at a very homogeneous group of learners, you will need to create multiple personas to reflect the diversity of your audience. For example, if the list of interviewees also included a volunteer paramedic who has a high school education and works as a gardener, the team might construct another persona that is less computer literate. Each persona should represent a pattern of needs, abilities, and goals of a type of learner that the team interviewed.

At the same time, if the team creates too many personas, it runs the risk of designing its courseware for too broad an audience. The team will need to make a judgment on how many personas it needs to represent all of the important differences in its main audiences that will impact the ways in which the designers approach the course. Three to seven personas is a manageable number, although there are no absolute limits.

In most cases it is advisable to pick one persona from the group that will be the primary persona. It represents the group of learners who are most important for the course to be successful. In some cases, it can also be useful to create a negative persona. If there is a group of potential learners that is not a good fit for your courseware goals and design constraints, it is good to identify and document their characteristics in advance. That way the team doesn't waste time trying to design for learners who are not likely to find the course useful or be successful students.

See Appendix III for an example of a persona.

Using Your Personas

Once your team has written and named the personas, it will become quite natural to use them during design conversations. "Is this text too dense for John?" "Would Sarah get lost when she leaves the course and then comes back?" "Will Jim find this animation helpful, or just annoying because it slows him down?" Personas can be particularly useful in resolving tough design debates by comparing the different needs of the different personas.


In addition to helping at design time, personas can be useful later in the development process as an aid in heuristic usability testing. This will be described in the next section.

Further Reading on Personas

Alan Cooper, popularly credited with inventing personas, describes his method in his book The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity.

Also, his consulting company has many useful articles about how to use personas at http://www.cooper.com/content/insights/newsletters_personas.asp.

HEURISTIC USABILITY EVALUATION

Traditional usability testing methods involve watching actual users interact with a product and recording the mistakes they make. There are other testing methods, however, that rely instead on the judgment of trained reviewers. These techniques, called "usability inspection methods," allow a group of reviewers to find many of the same problems that would be revealed by testing with end users. (In fact, an inspection method is considered valid only after testing shows that it produces results similar to end-user testing.)

Heuristic usability testing is one of the easiest inspection methods for non-experts to learn. It works like this (a small bookkeeping sketch follows the steps):

  1. A group of reviewers is given a small set of guidelines for usable software design.
  2. Each reviewer goes through the software two or three times and writes down anything that he or she thinks violates any of the guidelines.
  3. All reviewers look at the combined list of all violations and provide severity ratings.
  4. An average of the severity ratings is taken to determine the priorities for fixing the detected problems. Also factored into this is the difficulty of making the fix. Thus, a minor violation that will take a minute to fix may be of higher priority than a more serious problem that will be time-consuming to correct. A severe problem, however, will always be of the highest priority.
  5. The reviewers meet as a group and develop a list of recommended fixes for the highest priority problems.
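The bookkeeping in steps 2 and 3 is light enough for paper or a spreadsheet, but a sketch may make it concrete (the record fields and example findings are invented; Appendices V and VI contain the actual templates):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ProblemReport:
        location: str    # exactly where in the course the problem appears
        heuristic: int   # which of the ten heuristics is violated (1-10)
        description: str

    # Each reviewer's findings (invented examples).
    reviewer_a = [
        ProblemReport("Unit 2, screen 4", 4,
                      "Back button labeled differently than in Unit 1"),
        ProblemReport("Quiz 1", 9,
                      "Error message shows a code instead of an explanation"),
    ]
    reviewer_b = [
        ProblemReport("Unit 2, screen 4", 4,
                      "Back button labeled differently than in Unit 1"),
    ]

    # Step 3: merge all findings into one master list, dropping exact
    # duplicates, before the group assigns severity ratings.
    master_list = sorted(set(reviewer_a) | set(reviewer_b),
                         key=lambda p: (p.location, p.heuristic))
    for problem in master_list:
        print(problem)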

The Heuristics

There are a number of different lists of usability heuristics, but the most widely known is the one developed by Jakob Nielsen. His top ten heuristics are:

  1. Visibility of System Status: The system should always keep users informed about what is going on through appropriate feedback within reasonable time.
  2. Match between System and the Real World: The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
  3. User Control and Freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
  4. Consistency and Standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
  5. Error Prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
  6. Recognition Rather Than Recall: Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
  7. Flexibility and Efficiency of Use: Accelerators—unseen by the novice user—may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
  8. Aesthetic and Minimalist Design: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
  9. Help Users Recognize, Diagnose, and Recover from Errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
  10. Help and Documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.


Appendix IV is a checklist of specific concerns that e-learning usability reviewers should examine for each of the heuristics.

Preparing for the Review

Start by assembling your team. In general, it is best to have three to five reviewers participate. Before you begin the review, discuss the heuristics to make sure that everyone has a shared understanding of them. This is particularly important for inexperienced teams. One way to normalize the group's understanding is to come up with three or four examples of violations of each of the principles. (They don't have to be examples that are specific to e-learning, but that will help.)

If the team has created personas, it should review them and discuss whether they raise any particular usability concerns. For example, if learners are likely to be interrupted or must take the course in small chunks over time, then helping them find where they left off is a challenge (Heuristic 6: recognition rather than recall), as is making it easy for them to exit the tutorial gracefully (Heuristic 3: user control and freedom). Any persona-driven concerns should be added to the generic review checklist provided in Appendix IV.

Conducting the Review

Once the group has prepared, each reviewer should go through the course several times, looking for violations of the usability heuristics. (Appendix V provides a template form for reviewers to describe the problems that they find.) Generally, they should go through the course once as a learner would, to get a general sense of its "flow," and then a second time in more detail, writing down any problems they find. Reviewers should take care to note exactly where each problem is and what they are seeing, in as much detail as possible. They may want to use screen-capture tools so that they can show as well as describe any violations they encounter.

When reviewers have completed their two or three passes through the course, they should go over their notes to make sure everything is captured and coherent. During the review, reviewers tend to focus on each screen in isolation; when they are finished, they should step back and consider the overall usability of the course. For example, while each screen may be internally consistent, there may be inconsistencies between screens or between units in the course.

Rating the Severity of the Problems

After the reviewers have written up the problems they found, their input should be compiled into one master list so the group can assign severity ratings. According to Nielsen's method, severity is a combination of three factors:

  1. The frequency with which the problem occurs: Is it common or rare?
  2. The impact of the problem if it occurs: Will it be easy or difficult for the users to overcome?
  3. The persistence of the problem: Is it a one-time problem that users can overcome once they know about it or will users repeatedly be bothered by the problem?

Considering these factors and keeping the personas in mind, each reviewer should rate every reported usability problem on a scale of zero to four:

0 = I don't agree that this is a usability problem at all

1 = Cosmetic problem only: need not be fixed unless extra time is available on project

2 = Minor usability problem: fixing this should be given low priority

3 = Major usability problem: important to fix, so should be given high priority

4 = Usability catastrophe: imperative to fix this before product can be released

Appendix VI provides a template for severity rating.

Particularly with a group of inexperienced reviewers, it is valuable to have the team discuss and compare their severity ratings for different problems. They don't need to agree, but sometimes hearing other perspectives can help them refine their own judgments.

Once all reported problems have been rated by all reviewers, the average score can be calculated to produce a final priority ranking for each item.
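That final aggregation is simple enough to show in a few lines (the problems and ratings are invented; in practice a spreadsheet works just as well):

    from statistics import mean

    # Severity ratings on the 0-4 scale above, one list per problem,
    # one entry per reviewer.
    ratings = {
        "Unit 2, screen 4: inconsistent Back button": [3, 3, 2],
        "Quiz 1: cryptic error message": [4, 4, 3],
        "Title page: hard-to-read font": [1, 0, 1],
    }

    # Average each problem's ratings and rank from most to least severe.
    # Fix difficulty may then reorder the middle of the list, but a
    # usability catastrophe (rating 4) stays at the top regardless.
    ranked = sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True)
    for problem, scores in ranked:
        print(f"{mean(scores):.1f}  {problem}")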

Recommending Solutions

Unfortunately, there is no magic formula for developing recommendations for heuristic usability problems. Sometimes solutions will be obvious: the solution to a bad font or font size is to change the font, and there are guidelines for choosing a better one. Other times, the problem may require a more creative solution. Once again, such problems are best addressed by the team in a brainstorming session. Since some solutions will be easier to implement than others, it is a good idea to categorize recommendations into "items to fix in this version" and "items to fix in the next release or the next course."

Appendix VII is a template for compiling recommendations. It should be provided to the developers along with the severity rating document.

Further Reading on Heuristics

Jakob Nielsen's seminal and still very useful articles on heuristic usability testing can be found at http://www.useit.com/papers/heuristic.

The Stanford Web Creators Users Group has a succinct summary of heuristic usability principles, along with useful links for further reading, at http://www.stanford.edu/group/web-creators/heuristics.

A paper on applying heuristic usability testing to e-learning, "Usability and Instructional Design Heuristics for E-Learning Evaluation," can be found at http://eric.ed.gov/ERICDocs/data/ericdocs2/content_storage_01/0000000b/80/21/fd/54.pdf.

Footnotes

DOI: http://doi.acm.org/10.1145/1170000.1165344




