
Usability and e-learning: the road towards integration

By Panagiotis Zaharias / June 2004


Usability is a basic parameter for the evaluation of e-learning technologies and systems. Usability means quality and puts users and their real needs at the center. Therefore, the investigation of usability and its integration into, or contribution to, the learning process is worthwhile. In this article several questions regarding the definition of usability and usability evaluation techniques for e-learning are raised. In addition, several relevant research works are briefly reviewed, and the need for more focused research efforts and empirical validation is stressed. In conclusion, it is proposed that a usability evaluation technique for e-learning must satisfy three basic prerequisites in order to be easily adopted and used.

The Concept of Usability in E-learning

The basic inspiration for what follows is an article by Michael Feldstein entitled "What Is 'Usable' E-learning?" published in eLearn Magazine [2]. His article serves as a very good starting point for a discussion of usability in e-learning.

Feldstein poses two substantial questions to start with: "How can we define 'usability' for e-learning in a way that can be measured?" and "Can we create meaningful usability tests that will be simple, quick, and cheap enough?"

It is obvious that these crucial and straight-to-the-point questions cannot be answered within a single article. Instead, the author would like to extend the discussion initiated by Feldstein and stress the need for orchestrated and systematic initiatives and research, especially since additional questions must also be answered. For instance, it is important to understand how usability contributes (or does not contribute) to learning goals.

In order to address the issue of usability, one must first define the context of use of an e-learning course. The increasing diversity of learners, rapid technological advancements, and radical changes in learning tasks (a learner's interaction with a learning/training environment is often a one-time event) all present significant challenges and make it difficult to define the context of use of e-learning courses. It is important to point out that, unlike a traditional software product, whose users return time and again and gradually learn the interface, an instructional interface must make sense quickly, since the learner is unlikely to use the environment for an extended period of time [4].

Thus, another set of questions emerges: What is the role of usability in the context of e-learning design? Which usability attributes affect learning (if they do)?

It is obvious that major purchasers and consumers of e-learning have no way of evaluating the degree to which a course is usable. Feldstein tries to define that problem well enough to make it solvable [2]. He argues that measuring the effectiveness of a learning intervention is a challenge: a "usable" course is one that teaches in the ways students need in order to get the value they were looking for when they signed up. This argument implies that we must clearly assess learners' needs and preferences and further examine the context in which they live, work, and learn. Research and practice must build upon both past experience and the findings of several fields, above all human factors, systems design, and instructional design.

For instance, while assessing learners' needs and applying the most suitable learning and design strategies, we have to reconsider instructional systems design (ISD) theories and models, build upon their strong elements, and adjust them for the new e-learning challenge. It has been argued that ISD is "dead" [14]. In a recent interview in eLearn Magazine, Diana Laurillard said that theories of instructional design are inappropriate for the quality of learning that needs to be generated [6]. This is true for the traditional, instructivist type of theories and models, but it does not hold for constructivist and social learning approaches, which empower the learner and more effectively support advanced cognitive processes (such as problem solving or meta-cognition) and collaborative learning.

Additionally, Clark [1] has suggested certain ways of applying cognitive strategies to instructional design, and argues that the basis for the new ISD (as she calls it) is a set of models and techniques drawn from cognitive theories of learning. Squires [12] puts it another way, stressing the need for HCI practitioners to take into account the latest developments in theories of learning:

"Workers in HCI and educational computing areas rarely speak to each other or take note of each others' work: The educational computing literature is littered with naive and simplistic interpretations of interface design issues, and many writers in the HCI literature appear to be unaware of the significant developments that have been made in theories of learning."

Lohr [4] also supports the constructive combination of instructional design and usability, which helps us reconsider the definition of "usability" for e-learning.

It is known that the major dimensions of usability are effectiveness, efficiency, and satisfaction. In elaborating on the first question posed by Feldstein, we may refer to Lohr's work, in which the usability attributes are refined to fit the instructional interface design process [4]. The conventional usability dimensions as defined by ISO [3] are presented below, along with their refined meanings in the e-learning context as Lohr asserts:

Usability (ISO, 1993)

  • Effectiveness: The user's ability to achieve specific goals in the environment
  • Efficiency: The resources used (time, money, and mental effort) when performing a system-supported task
  • Satisfaction: The user's comfort level and acceptance of the system overall

Formative evaluation [4]

  • Effectiveness: The attainment of instructional objectives
  • Efficiency: How quickly and cost-effectively learning objectives were achieved
  • Satisfaction: The user's interest in the content and the desire to continue to learn

Definition for the instructional interface design process [4]

  • Effectiveness: Learner interprets instructional interface function correctly; instructional interface function performs according to the learner's expectations
  • Efficiency: Learner experiences minimal frustration interpreting instructional interface function; learner experiences minimal obstacles in using instructional interface element
  • Satisfaction: Learner seems comfortable in the environment overall

As we can see, what Lohr defines as the instructional interface design process integrates the basic constituents of the conventional usability definition and of the formative evaluation definition from the instructional design literature. Although couched in quite generic terms that require further elaboration, it can serve as a first approach: a working definition of usability in the e-learning context.
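
To make these refined dimensions concrete, the following minimal Python sketch (added here for illustration only) records and averages learner ratings along Lohr's three dimensions. The 1-5 scale, the field names, and the simple averaging are assumptions of this example, not part of Lohr's definition:

    from dataclasses import dataclass

    @dataclass
    class UsabilityRating:
        """One learner's rating along Lohr's three refined dimensions."""
        effectiveness: int  # interface functions interpreted correctly? (1-5)
        efficiency: int     # minimal frustration and obstacles? (1-5)
        satisfaction: int   # comfortable in the environment overall? (1-5)

    def summarize(ratings):
        """Average each dimension across a group of learners."""
        n = len(ratings)
        return {
            "effectiveness": sum(r.effectiveness for r in ratings) / n,
            "efficiency": sum(r.efficiency for r in ratings) / n,
            "satisfaction": sum(r.satisfaction for r in ratings) / n,
        }

    print(summarize([UsabilityRating(4, 3, 5), UsabilityRating(5, 4, 4)]))
    # -> {'effectiveness': 4.5, 'efficiency': 3.5, 'satisfaction': 4.5}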

But what about the second question that Feldstein poses? This has to do with usability evaluation methods and techniques.

Investigating Usability Evaluation Methods for E-learning

As Feldstein points out, we should ensure that the usability problems we are trying to fix are costly enough to be worth the investment required to fix them. In addition, it seems so far that the economics of e-learning production (in both business and academic settings) simply do not support usability evaluations that are costly and require a high degree of expertise on the part of the evaluator/e-learning designer. The main problems with employing usability evaluation methods in the e-learning development lifecycle have to do with very strict time and budget constraints. E-learning requires more time to design than traditional instructor-led courses, and the whole design and development process is quite complex; this adds to the cost and extends the development time.

It also seems that there is a lack of usability studies whose results actually have an impact on "real" e-learning design and development. In the same vein, it has been reported under the auspices of SIGCHI [10] that: (a) very little quality control and usability testing goes into the design of courses and of e-learning technologies, typically because of time constraints and the low perceived importance of usability; (b) there is a need to focus on how to develop useful and usable tools and environments, since the focus so far has been more on the technology than on the pedagogy; and (c) very little thought is given to usability issues at the decision-making level.

Feldstein also believes that a partial solution to creating cheap and simple usability tests can be found in the "heuristic usability testing" technique. It is interesting to see what other researchers and practitioners have already used for similar purposes. Many practitioners hold that the Web design heuristics that have grown up around e-commerce can also be used to evaluate online learning courses, but do these established sets of heuristics apply effectively in the e-learning context?

It is argued that these heuristics should be used with caution, since many assumptions about the users of e-commerce do not apply to online learners. The challenge for most types of e-learning is that established sets of heuristics do not exist; pedagogical guidelines need to be developed so that they can be combined with the aforementioned heuristics.

Squires and Preece [13] realized that a simple application of these heuristics could not be effective, because they fail to address the specific challenges of learner-centered interface design, as well as the issue of integrating usability and learning. These authors therefore proposed an adaptation of Nielsen's heuristics [7], taking into account the tenets of socio-constructivism [11]. The proposed set of "learning with software" heuristics contains the following:

  • Match between designer and learner models
  • Navigational fidelity
  • Appropriate levels of learner control
  • Prevention of peripheral cognitive errors
  • Understandable and meaningful symbolic representations
  • Support personally significant approaches to learning
  • Strategies for cognitive error recognition, diagnosis, and recovery
  • Match with the curriculum

Squires and Preece do not claim that this is a final set of heuristics. Rather, it is an initial attempt towards the integration of usability and learning that needs further discussion and elaboration.
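
For illustration, such a set of heuristics can be encoded as a simple evaluator's checklist. In the Python sketch below, only the eight heuristic names come from Squires and Preece [13]; the checklist structure and the example note are assumptions of this example:

    # Heuristic names from Squires and Preece [13]; the checklist structure
    # and the notes are illustrative assumptions, not part of the published set.
    LEARNING_WITH_SOFTWARE_HEURISTICS = [
        "Match between designer and learner models",
        "Navigational fidelity",
        "Appropriate levels of learner control",
        "Prevention of peripheral cognitive errors",
        "Understandable and meaningful symbolic representations",
        "Support personally significant approaches to learning",
        "Strategies for cognitive error recognition, diagnosis, and recovery",
        "Match with the curriculum",
    ]

    def blank_checklist():
        """Return an empty evaluation form: one notes slot per heuristic."""
        return {h: "" for h in LEARNING_WITH_SOFTWARE_HEURISTICS}

    form = blank_checklist()
    form["Navigational fidelity"] = "Course map does not match module order."
    for heuristic, note in form.items():
        status = "ISSUE" if note else "ok"
        print(f"[{status:5}] {heuristic}" + (f" -- {note}" if note else ""))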

Despite the efforts of scholars in educational research and usability, the problem still remains: many researchers and practitioners propose heuristics without further adaptation to the context of learning environments. Though Squires and Preece [13] propose such an adaptation, there is still a clear need for further elaboration and empirical validation. Notess [8] stresses that the aforementioned methods need additional consideration in the light of online learning courses and learning environments, and argues that evaluating online learning may move usability practitioners outside their comfort zone. To evaluate more effectively, usability practitioners need to familiarize themselves with the evaluation frameworks and methods of instructional design [4]; also essential is an acquaintance with educational testing research, learning styles, and the rudiments of learning theory. Thus, usability and e-learning practitioners have to collaborate and work hard towards a new learner-centered usability evaluation method.

A recent study conducted by Reeves et al. [9] provides a more elaborate tool for heuristic usability evaluation of e-learning programs. In this work, Nielsen's protocol [7] was modified and refined for evaluating e-learning programs by participants in a doctoral seminar held at The University of Georgia in 2001. The modifications primarily involved expanding Nielsen's original ten heuristics, developed for software in general, into fifteen heuristics more closely focused on e-learning programs, combining instructional design and usability heuristics. The authors propose a "Protocol for E-Learning Heuristic Evaluation" that explains how the whole process must be conducted: the process contains eight steps, and each usability problem found is rated along two scales, severity and extensiveness. The fifteen heuristics and the accompanying protocol were applied to a commercial e-learning program; several usability problems were identified and appropriate changes were made. The basic limitation of this study is that it is not clear whether the changes were brought about by the heuristic evaluation itself or by the field evaluation conducted in parallel with real users. According to the authors of the paper, "it is impossible to attribute exactly which enhancements were based upon which of two evaluations."
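
The two rating scales lend themselves to a simple prioritization scheme. The Python sketch below is illustrative only: the 0-4 ranges and the severity-times-extensiveness ranking are assumptions of this example rather than the published protocol's exact scales, and the sample problems are invented:

    from dataclasses import dataclass

    @dataclass
    class Finding:
        description: str
        severity: int       # assumed scale: 0 (no problem) to 4 (catastrophe)
        extensiveness: int  # assumed scale: 0 (isolated) to 4 (pervasive)

        @property
        def priority(self):
            # Assumed scheme: severe AND widespread problems rank first.
            return self.severity * self.extensiveness

    findings = [
        Finding("Quiz feedback appears after the learner moves on", 3, 2),
        Finding("Module map mislabels two lessons", 2, 4),
    ]
    for f in sorted(findings, key=lambda x: x.priority, reverse=True):
        print(f.priority, f.description)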

The author argues that a heuristic and/or expert-led approach may not be enough, since such techniques rely heavily on experts and reflect their views only. In order to realize and implement the learner-centered design paradigm, reflect upon learners' needs, and understand their attitudes, we should focus on e-learning evaluation techniques that include learners' perceptions as well.

Therefore, it is argued that a more learner-centered usability evaluation method is needed. It is suggested that such a method incorporate the following basic characteristics:

  1. It must be built upon the creative integration of the aforementioned fields (usability and instructional design).
  2. It must take into account the learners' own perceptions.
  3. It must be short and easy to deploy, so that the economics of e-learning can afford its use.

Lohr and Eikleberry [5] propose a three-step approach to learner-centered usability testing. In the first step, designers and usability experts do a quick run-through of the instructional interface to see whether it addresses some of the most basic types of learner questions. The second step employs a check sheet matrix to guide the usability testing; the matrix consists of two sections: (a) user actions that evaluators can observe, and (b) questions that evaluators can ask the users. Observation and interviews are the main methods proposed. The third step involves users directly and employs the think-aloud protocol, a well-known and widely used usability evaluation method.
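
As a rough illustration of the second step, the check sheet matrix can be represented as two lists: observable actions and interview questions. All entries in the Python sketch below are invented; Lohr and Eikleberry's actual matrix items [5] may differ:

    # Two sections, as described above: (a) actions to observe, (b) questions to ask.
    # All entries are invented for illustration; the published matrix items differ.
    check_sheet = {
        "observe": [
            "Learner finds the lesson menu without prompting",
            "Learner recovers after a wrong navigation choice",
        ],
        "ask": [
            "What did you expect this button to do?",
            "Where would you go next to continue the lesson?",
        ],
    }

    for section, items in check_sheet.items():
        print(section.upper())
        for item in items:
            print("  [ ]", item)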

What Lohr and Eikleberry propose seems to incorporate the first two basic characteristics mentioned above: it is built upon the integration of instructional design and usability principles, and it takes learners' perceptions into account. Although the proposed method consists of only three steps, it is not yet clear whether the approach will be affordable, as no empirical evidence has been provided so far. Nevertheless, this seems to be a step in the right direction.

Finally, it is quite clear that additional systematic research efforts and empirical validation are needed. It is imperative that researchers and practitioners with a special interest in e-learning design and evaluation further investigate the integration of learning and usability, elaborate on the findings of the reported studies, and work towards more useful and advanced tools that move usability testing for e-learning to the next level.

Conclusion

Technology innovations abound in learning, education, and training, but the need for evaluation of such technology remains. Usability is a basic parameter for the evaluation of e-learning technologies and systems; usability means quality and puts users and their real needs at the center. Therefore, the investigation of usability and its integration into, or contribution to, the learning process is worthwhile. In this article a first attempt was made to define usability in e-learning. More concrete definitions are expected to emerge, and considerable effort will then be needed to assess how measurable these definitions are (recalling Feldstein's first question: "How can we define 'usability' for e-learning in a way that can be measured?").

In parallel, intense work is needed to develop new usability evaluation methods for e-learning or to advance existing ones. So far, the major challenge seems to be to provide a usability evaluation technique that incorporates the three basic characteristics already mentioned in this article, especially the third one: a technique that is short and easy to deploy, so that the economics of e-learning can afford its use.

The whole issue of integrating usability and learning is still in its infancy; nevertheless it is believed that relevant work will provide significant output not only for research purposes but for market developments as well.

References

1. Clark, R. "Applying Cognitive Strategies to Instructional Design." Performance Improvement 41, 7 (2002), 8-14.

2. Feldstein, M. "What Is 'Usable' E-learning?" eLearn Magazine (2002).

3. ISO. ISO DIS 9241-11: Guidelines for Specifying and Measuring Usability (1993).

4. Lohr, L. L. "Designing the instructional interface." Computers in Human Behavior 16 (2000), 161-182.

5. Lohr, L. and Eikleberry, C. "Learner-Centered Usability: Tools for Creating a Learner-Friendly Instructional Environment" (2001).

6. Neal, L. "Q&A with Diana Laurillard." eLearn Magazine (2003).

7. Nielsen, J. "Heuristic evaluation." In Nielsen, J. and Mack, R. L. (eds.), Usability Inspection Methods. John Wiley & Sons, New York, 1994.

8. Notess, M. "Usability, User Experience, and Learner Experience." eLearn Magazine (2001).

9. Reeves, T., et al. "Usability and Instructional Design Heuristics for E-Learning Evaluation." In Proceedings of ED-MEDIA 2002, Vol. 2002, No. 1, 1615-1621.

10. SIGCHI. "Notes from E-learning Special Interest Group (SIG) Discussion at CHI 2001" (2001).

11. Soloway, E., et al. "Learning Theory in Practice: Case Studies in Learner-Centered Design." In Proceedings of CHI '96, ACM Press (1996), 189-196.

12. Squires, D. "Usability and Educational Software Design: Special Issue of Interacting with Computers." Interacting with Computers 11, 5 (1999), 463-466.

13. Squires, D. and Preece, J. "Predicting quality in educational software: Evaluating for learning, usability and the synergy between them." Interacting with Computers 11, 5 (1999), 467-483.

14. Zemke, R. and Rossett, A. "A hard look at ISD." Training Magazine 39, 2 (2002), 26-35.


