This guide is designed to help teams of instructional designers and content experts create effective, self-paced e-learning. It teaches best practices for improving usability that any instructional designer or content expert can apply; no prior knowledge of usability is required to use the techniques.
Applying the techniques in this guide will reduce the likelihood of learners getting confused, getting lost, or failing to complete a course. The guide is intended to be helpful throughout the entire development cycle of e-learning courses: it includes usability engineering strategies that course designers can use early in the development process, as well as information about usability testing that can be applied to completed courses.
USABILITY BASICS
At its simplest, "usability" is a measure of how easy it is for a user to complete a task. For example, imagine that you are assessing the usability of a bill-paying feature in an online banking Web site. You might ask questions such as: How many seconds does it take a customer to pay a bill? What percentage of customers who start the bill-paying process fail to complete it?
Notice that the answers to these questions are all quantifiable. Given the time and the resources, we can measure exactly how many seconds it takes a customer to pay a bill and exactly what percentage of the time customers start a bill-paying process but don't complete it. The usability of software (or courseware) is not a matter of personal opinions. It is a matter of measurable facts that can be used to redesign a user interface to get better results.
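As a sketch of how such measurements work in practice, completion rate and time on task can be computed directly from logged user sessions. The session records and field names below are hypothetical, purely for illustration:

```python
# Hypothetical session log for a bill-paying task: each record notes
# whether the user completed the task and how long it took (seconds).
sessions = [
    {"completed": True,  "seconds": 95},
    {"completed": True,  "seconds": 140},
    {"completed": False, "seconds": 310},
    {"completed": True,  "seconds": 88},
    {"completed": False, "seconds": 260},
]

completed = [s for s in sessions if s["completed"]]

# Abandonment rate: percentage of started tasks that were not finished.
abandonment_rate = 100 * (len(sessions) - len(completed)) / len(sessions)

# Average time on task, measured over successful completions only.
avg_seconds = sum(s["seconds"] for s in completed) / len(completed)

print(f"Abandonment rate: {abandonment_rate:.0f}%")    # 40%
print(f"Average completion time: {avg_seconds:.1f}s")  # 107.7s
```

Numbers like these, gathered before and after a redesign, are what make usability a matter of measurable fact rather than opinion.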
On the other hand, there can be a measurable difference in how usable the same application is for two different people. Your mother, your teen-aged daughter, your mail carrier, and your accountant may find a particular bill-paying application much easier or harder to use than you do. There could be, for example, a measurable difference in how many young bank customers give up on paying their bills online versus how many older customers do. So part of the process of designing for usability includes asking the question, "Usable for whom?"
Why Usability Matters for Courseware
One of the most important goals of usability engineering is to increase the likelihood that the user will achieve his or her goal for using the software (such as paying a bill or learning about a topic). If you think of completing a course or passing a test as tasks that the "users" of courseware are trying to accomplish, then it is easy to see where usability engineering can be important. Courseware that is not designed for usability can create challenges for the learners that have nothing to do with the difficulty of the content. They can be distracted from learning the critical subject matter of the course by having to learn how to use the courseware. Poor usability can have measurable negative impact on course completion rates and post-test scores.
Testing For Usability
Too often, usability is tacked on to the end of a project by conducting usability testing with test subjects who are as close as possible to the eventual users. A usability test typically seeks to answer questions like the ones listed above for the bill-paying application, and it should lead to redesign if the results are unacceptable for the given application and user population. Completed software can be tested, but testing is also effective on prototypes or even paper sketches of a user interface. Traditionally, usability testing has been performed by trained experts in special labs equipped with one-way mirrors and video cameras, or with equipment that tracks users' response times or eye movements. More recently, testing techniques have been developed that require no special equipment or lab and can be used effectively by testers with less training. While some of these techniques involve giving tasks to test subjects, others require only experienced reviewers, as described below.
Designing for Usability
Designing for usability from the very beginning increases the likelihood of a more usable product and reduces the need for testing at the end, when it is often too late or costly to make substantial changes. Designers can be educated on basic usability principles that they can apply from a project's onset. This training should include techniques that help them to understand who they are designing for and what the learners' needs are, while enabling designers to test for usability at all stages of the development process.
The Role of Expertise
While it is true that the techniques described in this guide can be applied effectively by a team of non-experts, it is also true that trained and experienced Human-Computer Interaction (HCI) professionals will get better results using the same techniques. You will get the best results by having a regular team that can gain experience together and share best practices over multiple projects. A team will gain expertise more quickly when it gets feedback from learners once the courseware has been put into production. Also, a usability professional might be brought in to work with your team on the first few projects, particularly if getting usability right the first time is critical.
Assembling Your Team
Your usability team should include all of the roles needed to design and develop courseware, such as instructional designers, content experts, and developers.
Before having your kick-off meeting, have all team members read this guide. You can use the kick-off meeting to assign usability engineering tasks and put together your project plan.
Two Tools That Can Be Used By Anyone
The rest of this guide focuses on two usability engineering tools that your team can employ on just about any project. The first, personas, enables you to identify important characteristics of your users (or learners) so you can design courseware that better meets their needs. After interviewing a handful of prospective learners using the interview protocol described in this guide, you construct a small number of profiles for imaginary, but prototypical, learners. These profiles, or personas, can be used by the team during both design and testing. Persona development should take place at the very beginning of a project, before the actual courseware design begins. The next section of this document describes persona development in more detail, and the appendices have templates that can be used to develop and document the personas for your projects.
The second tool is called a heuristic usability test. A heuristic is simply a rule of thumb. In this case, usability researchers have identified a set of heuristics that describe general principles of usable software design. Research has shown that groups of reviewers who are specifically looking for violations of these principles tend to catch a high percentage of the same problems that usability experts find using specialized (and sometimes expensive) testing equipment. For this reason, a heuristic usability test is one of several tools that are collectively called "discount usability tests." Formal heuristic reviews often take place later in the design process, but there are a variety of ways in which knowledge of the technique can be built into the entire design process. The third section of this document describes the heuristics and gives examples of how they can be applied to e-learning courseware. The appendices contain templates that can be used for your heuristic evaluation process.
Further Reading on Usability Basics
Jakob Nielsen's Web site http://www.useit.com has information about many aspects of usability. His Alertbox columns at http://www.useit.com/alertbox provide more detail about specific aspects of usability. A free subscription is available.
The US Government's site at http://www.usability.gov provides guidelines and checklists.
The Society for Technical Communication's Usability & User Experience Community site at http://stcsig.org/usability/index.html also has many resources, particularly at http://stcsig.org/usability/resources/toolkit/toolkit.html.
PERSONAS
Have you ever purchased or been given a gadget or a piece of software that you end up abandoning because it doesn't work the way you need it to? Maybe you got a PDA but discovered that the color screen that you really don't need is shortening the battery life, causing it to run out of power in the middle of your work day. Maybe you bought special project-management software, only to find that it imposes a way of organizing your projects that really doesn't work for you. Whatever the item was, it just didn't fit with your needs and the way you work. It's not that there was anything wrong with the product; it just wasn't right for you. It didn't fit. The same problem of poor fit can happen with courseware. The course lessons could require longer periods of time than the learner has available in one sitting. Or it could assume a different level of comfort with computers. Or a different level of background knowledge of the subject matter.
The best designs are those that fit the needs of a variety of users. A digital camera can be designed for Sarah, a teacher who travels every summer and spends the rest of her year revisiting her trips through her photographs, as well as for Joe, who takes pictures of his children every weekend and emails them to relatives. The same could be said for photography courseware. At the same time, there are tradeoffs to consider. Sarah may want her camera to handle a variety of subjects and light conditions she may encounter on her travels, while Joe just wants taking pictures of his kids in the back yard to be as simple a process as possible. Designing a camera for both Sarah and Joe may be challenging.
No product and no courseware can be designed to perfectly meet the needs of every learner who might use it. However, designers can increase the likelihood of meeting the needs of most learners by thinking carefully and concretely about their target learner population. For example, an emergency room physician and a volunteer emergency management technician (EMT) may have very different needs and preferences for their courseware, even if they are learning the same topic. They will tend to have different comfort levels with computers, different background knowledge, different amounts of time to take the course, and different motivations for taking the course. Having a clear understanding of the relevant characteristics of the target learner population will enable your team to design courseware that leads to better learning outcomes.
Personas are profiles of prototypical learners that you create with your design team. They can be used for one or for many courses, depending on the courses and the applicability of the persona attributes. Ranging anywhere from a paragraph to a couple of pages in length, personas provide enough detail about relevant learner characteristics to help the team imagine the audience(s) they are designing for. The process of creating personas helps the members of the team develop a shared understanding of the learners. In the process, the team discusses many aspects of the learner population that help them in the subsequent design tasks. It also helps them avoid the mistake of designing the courseware for themselves, a common mistake that Tom Landauer calls the "egocentric intuition fallacy."
How Personas Work
Personas work through the power of storytelling. Once you have a reasonably vivid mental picture of a prototypical learner, it's fairly easy to imagine that person trying to accomplish his or her goals using the courseware that you are designing. The team can make more effective decisions about the "fit" between the course design and the intended audience when the discussion is about what Ted or Maria would like instead of having abstract conversations about "good" design.
Persona descriptions contain information designed for three purposes:
Researching Your Personas
Personas are most effective when they are based on research about real people who might take your course. The goal of the research is to gather enough data about your probable learners to see patterns in their characteristics, needs, and goals. You will be profiling your interviewees, gathering the same kinds of information that go into the personas themselves, but in more detail. Appendix I provides a sample interview protocol, including how to locate people to interview, how to identify questions to ask, who should conduct the interviews, how they should be recorded, and how the people who participate should be compensated.
Generally speaking, more is better when it comes to the number of interviews you conduct, particularly in cases where you have a broad intended audience. A dozen interviews is generally the minimum needed to get a detailed picture of your audience, and 20 or more is not uncommon.
Analyzing Your Research Data
Once you have gathered your interview data, it is a good idea to review it as a team. You will be looking for patterns that indicate the characteristics that your personas should include and how many personas to develop.
Here are some of the types of patterns the team should look for:
The final result of this analysis should be a list of characteristics that are relevant to your course design and that will be used to create your personas. See Appendix II for a persona template.
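For teams that record their interview profiles as structured data, this pattern-spotting step can be sketched as follows. The attribute names and values here are hypothetical, borrowing the example interviewees used elsewhere in this guide:

```python
from collections import Counter

# Hypothetical interviewee profiles with the kinds of attributes
# gathered during persona research.
interviewees = [
    {"role": "pulmonologist",       "computer_comfort": "high", "session_length": "short"},
    {"role": "bond trader",         "computer_comfort": "high", "session_length": "short"},
    {"role": "physics teacher",     "computer_comfort": "high", "session_length": "long"},
    {"role": "volunteer paramedic", "computer_comfort": "low",  "session_length": "long"},
]

# Tally each attribute's values to surface the clusters that
# candidate personas should represent.
for attribute in ("computer_comfort", "session_length"):
    counts = Counter(p[attribute] for p in interviewees)
    print(attribute, dict(counts))
```

A tally like this makes the clusters explicit: three of the four interviewees are highly computer literate, suggesting at least one highly computer-literate persona, while session length splits evenly and may justify two personas on that dimension.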
Creating Your Personas
Once you have identified the relevant patterns from the interview data, you are ready to create some personas, which reflect your target learners. For example, although the team may have interviewed a pulmonologist, a bond trader, and a physics teacher, all of these interviewees may have high experience and comfort levels with computers. Therefore, at least one of the personas that your team creates should be highly computer literate.
Unless your courseware is narrowly targeted at a very homogeneous group of learners, you will need to create multiple personas to reflect the diversity of your audience. For example, if the list of interviewees also included a volunteer paramedic who has a high school education and works as a gardener, the team might construct another persona that is less computer literate. Each persona should represent a pattern of needs, abilities, and goals of a type of learner that the team interviewed.
At the same time, if the team creates too many personas, it runs the risk of designing its courseware for too broad an audience. The team will need to make a judgment on how many personas it needs to represent all of the important differences in its main audiences that will impact the ways in which the designers approach the course. Three to seven personas is a manageable number, although there are no absolute limits.
In most cases it is advisable to pick one persona from the group that will be the primary persona. It represents the group of learners who are most important for the course to be successful. In some cases, it can also be useful to create a negative persona. If there is a group of potential learners that is not a good fit for your courseware goals and design constraints, it is good to identify and document their characteristics in advance. That way the team doesn't waste time trying to design for learners who are not likely to find the course useful or be successful students.
See Appendix III for an example of a persona.
Using Your Personas
Once your team has written and named the personas, it will become quite natural to use them during design conversations. "Is this text too dense for John?" "Would Sarah get lost when she leaves the course and then comes back?" "Will Jim find this animation helpful, or just annoying because it slows him down?" Personas can be particularly useful in resolving tough design debates by comparing the different needs of the different personas.
In addition to helping at design time, personas can be useful later in the development process as an aid in heuristic usability testing. This will be described in the next section.
Further Reading on Personas
Alan Cooper, popularly credited with inventing personas, has published a book describing his method called The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity.
Also, his consulting company has many useful articles about how to use personas at http://www.cooper.com/content/insights/newsletters_personas.asp.
HEURISTIC USABILITY EVALUATION
Traditional usability testing methods involve watching actual users interacting with a product and recording the mistakes that they make. However, there are other testing methods that rely instead on the judgment of trained reviewers. These techniques, called "usability inspection methods," provide ways for a group of reviewers to find many of the same problems that would be revealed by testing with end users. (In fact, inspection methods are considered valid only after testing shows that they produce results similar to end-user testing.)
Heuristic usability testing is one of the easiest inspection methods for non-experts to learn. It works like this:
The Heuristics
There are a number of different sets of usability heuristics, but the most widely known is the list developed by Jakob Nielsen. His ten heuristics are:
1. Visibility of system status
2. Match between the system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose, and recover from errors
10. Help and documentation
Appendix IV is a checklist of specific concerns that e-learning usability reviewers should examine for each of the heuristics.
Preparing for the Review
Start by assembling your team. In general, it is best to have three to five reviewers participate. Before you begin the review, discuss the heuristics to make sure that everyone has a shared understanding of them. This is particularly important for inexperienced teams. One way to normalize the group's understanding is to come up with three or four examples of violations of each of the principles. (They don't have to be examples that are specific to e-learning, but that will help.)
If the team has created personas, they should review them and discuss whether they raise any particular usability concerns. For example, if learners are likely to be interrupted or must take the courses in small chunks over time, then helping them find where they left off is a challenge (Heuristic 6: recognition rather than recall), as is making it easy for them to exit the tutorial gracefully (Heuristic 3: user control and freedom). Any persona-driven concerns should be added to the generic review checklist provided in Appendix IV.
Conducting the Review
Once the group has prepared, then each reviewer should go through the course several times, looking for violation of usability heuristics. (Appendix V provides a template form for reviewers to describe the problems that they find.) Generally, they should go through the course once as a learner would to get a general sense of the "flow" of the course and then a second time in more detail, writing down any problems that they find. Reviewers should take care to note exactly where the problem is and what they are seeing in as much detail as possible. Reviewers may want to use screen-capture tools so that they can show as well as describe any violations they encounter.
When reviewers have completed their two or three passes through the course, they should then go over their notes and make sure everything is captured there and that their notes are coherent. During the review process reviewers tend to focus on each screen, but when they are finished they should think about the overall usability of the course and any violations of usability heuristics. For example, while each screen may be consistent, there may be inconsistencies between screens or between units in the course.
Rating the Severity of the Problems
After the reviewers have written up the problems they find, their input should be compiled into one master list so the group can assign a severity rating to each problem. According to Nielsen's method, severity should be rated on a combination of three factors: the frequency with which the problem occurs, the impact on users when it does occur, and the persistence of the problem (whether users can overcome it once they know about it).
Considering these factors and keeping the personas in mind, each reviewer should rate every reported usability problem on a scale of zero to four:
0 = I don't agree that this is a usability problem at all
1 = Cosmetic problem only: need not be fixed unless extra time is available on project
2 = Minor usability problem: fixing this should be given low priority
3 = Major usability problem: important to fix, so should be given high priority
4 = Usability catastrophe: imperative to fix this before product can be released
Appendix VI provides a template for severity rating.
Particularly with a group of inexperienced reviewers, it is valuable to have the team discuss and compare their severity ratings for different problems. They don't need to agree, but sometimes hearing other perspectives can help them refine their own judgments.
Once all reported problems have been rated by all reviewers, the average score for each item can be calculated to produce a final priority ranking.
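As a sketch of this aggregation step (the problem descriptions and scores below are hypothetical), averaging each problem's ratings and sorting by the result produces the priority ranking:

```python
# Hypothetical severity ratings: each reported problem is scored 0-4
# by every reviewer, then averaged to produce a priority ranking.
ratings = {
    "No way to resume after exiting a lesson": [4, 3, 4],
    "Inconsistent 'Next' button placement":    [2, 3, 2],
    "Decorative animation slows page loads":   [1, 2, 1],
}

# Sort highest average severity first.
ranked = sorted(
    ((sum(scores) / len(scores), problem) for problem, scores in ratings.items()),
    reverse=True,
)

for avg, problem in ranked:
    print(f"{avg:.2f}  {problem}")
```

The top of the resulting list identifies the problems to fix first; items near zero may indicate reports that most reviewers did not consider real usability problems.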
Recommending Solutions
Unfortunately, there is no magic formula for developing recommendations for heuristic usability problems. Sometimes solutions will be obvious. For example, the solution to a bad font or font size is to change the font, and there are guidelines that can be used to find a better one. Other times, however, the problem may require a more creative solution. Once again, these answers are best addressed as a team in a brainstorming session. Since some solutions will be easier to implement than others, it is a good idea to categorize recommendations into "items to fix in this version" and "items to fix in the next release or the next course."
Appendix VII is a template for compiling recommendations. It should be provided to the developers along with the severity rating document.
Further Reading on Heuristics
Jakob Nielsen's seminal and still very useful articles on heuristic usability testing can be found at http://www.useit.com/papers/heuristic.
The Stanford Web Creators Users Group has a succinct summary of heuristic usability principles along with useful links for further reading at http://www.stanford.edu/group/web-creators/heuristics.
A paper on applying heuristic usability testing to e-learning, "Usability and Instructional Design Heuristics for E-Learning Evaluation," can be found at http://eric.ed.gov/ERICDocs/data/ericdocs2/content_storage_01/0000000b/80/21/fd/54.pdf.
Footnotes
DOI: http://doi.acm.org/10.1145/1170000.1165344