
Is Digital Learning Effective in the Workplace?

By Larry G. Moyer / May 2002

TYPE: CORPORATE LEARNING

Introduction

One should probably expect the title of this paper to be a statement instead of a question. However, the question is one that arises with ever-increasing frequency in both corporations and academic institutions throughout the world. Hence, it is a question that deserves more attention than a recounting of the plethora of anecdotal accounts and projections from technology and business analysts.

As a start toward providing reasoned answers to the question, this paper reviews existing research literature concerning digital learning effectiveness in academic contexts when compared to traditional classroom events. Additional objectives are to:

  • Point to strategies that enhance the effectiveness of digital learning in the workplace.
  • Suggest recommendations for incorporating digital learning in corporate training strategies.

Background

An Internet search for "digital learning" or "e-learning" produces an abundance of opinions, statistics, forecasts and experiences, most of which suggest that digital learning offers considerable advantage to corporations from a return-on-investment (ROI) perspective. The numbers touted by IBM are more than impressive; they are nearly unbelievable. As a more modest example, Campbell (2000) states that United Airlines reduced its e-ticketing training time from 40 hours to 18 by converting from classroom events to a digital learning format. Moreover, the best scores among those who attended the classroom events were lower than the worst scores among those who took the digital version.

Similar accounts produce similar conclusions: ROI is positive and grades are at least equivalent. But equivalent test scores only partially answer the question posed here: Just how effective is digital learning when compared against an equivalent classroom event? Did the United Airlines experience also produce greater student satisfaction? How about retention of what was learned? What was the effect on productivity once the students were back on the job? Was there a difference in the post-course supervision and mentoring required for those who had classroom training versus digital training? Was the learning experience even relevant to the real tasks on the job?

The answers are not easily found, in part because such questions are not usually asked. Academic research generally suggests that digital learning produces outcomes similar to traditional classroom settings (Beare, 1989; McCleary & Egan, 1989; Sonner, 1999); however, these studies focus on grades and ignore questions such as what factors account for success and to what degree competency is actually demonstrated. In short, there is a paucity of credible research by which to support a claim that digital learning is at least as effective as traditional classroom training in areas such as retention, relevance, satisfaction and performance. There is also a shortage of consistently positive reports from corporations that were early adopters of digital learning. Some have decried the initiative as a failed experiment while others are able to report only on ROI as the basis for their approval.

The growing number of accounts of high drop-out rates (failure to complete), lack of user satisfaction and no differences in performance suggests that digital learning might not be the panacea often implied by proponents of digital learning products and services. While we cannot yet establish all the reasons for the complaints, there are obvious contributing factors in the rush to "go digital": (a) poor-quality content regardless of format, (b) poor instructional design, (c) technology and infrastructure problems, (d) inappropriate software decisions, and (e) content inappropriate for the business and learning objectives.

Another part of the challenge in determining what is either right or wrong with digital learning is that we have few instances where we can make reliable comparisons. Received assumptions, such as the viability of traditional classroom instruction, obscure both reasoning and research. It is probably safe to claim that all readers are most experienced in classroom education and training. It is likely that readers will also acknowledge that not all classroom experiences are satisfactory, conducive to learning or promote increased performance. So, how shall we draw conclusions about the effectiveness of one approach versus the other?

The first step is to refine the question. Perhaps a more appropriate question is: When and where is digital learning as or more effective than traditional classroom training using equivalent learning content? While we must still define the terms "effective" and "digital learning," it is more likely that a comparison can be made when the content is controlled.

Of the required definitions, "digital learning" is probably easiest to articulate. Most often the term is considered to mean a formal course along with measurement, both being delivered by electronic means. However, such a definition is far too limiting in the sense that it is assumed to be a course. It is also limiting in that it prescribes measurement. For reasons that should later become apparent, digital learning is defined here in somewhat broader terms. It is the electronic delivery of material and/or interaction with the expectation of changing attitude, belief, thinking, and/or the behavior of the learner. This leaves the term "effectiveness."

How Do We Know if Any Learning Experience is Effective?

The question of effectiveness has plagued the learning community for decades, if not centuries. Just what is an effective learning experience? How do we determine what is a result of an essential learning skill as opposed to the contribution of the learning experience? Feuerstein (1980) asserts that a highly stimulating learning experience is not sufficient to guarantee that the person actually learns from it. Particular learning skills are also required (Howe, 1987): utilizing information in memory, remembering the past and imagining the future, understanding and looking for relationships between perceived objects, organizing and seeing patterns, regularities and other relationships. Hence, a person with very high learning skills might "learn" more readily than one without, even if the experience is highly ineffective.

The subject of learning skills demands far more exploration than is possible or necessarily useful here. The role of variables such as maturity and motivation in the development of learning skills also deserves considerable attention; and, as suggested by the research reviewed here, could be major determinants of the outcomes. Likewise, learning styles and models are important subjects; and they too deserve more attention than is possible in this paper. Hence, the focus here will be limited to understanding learning-event effectiveness in terms of expectations for outcome. How do we understand the effectiveness considering the expectations we have for the event?

To answer this question, it is useful to consider expectations in terms of the change facilitated by the experience. It is axiomatic that if learning occurs, there is change. Change might occur in attitude, thinking, beliefs and/or behavior. Something will have changed or else learning simply did not occur. Perhaps the material is captured and retained in memory but remains merely mental dust that can be recalled upon appropriate neural stimulation; but efficient recall is not learning. In the absence of change, we must question the viability of the learning event. Perhaps the material is better left in a library or digital repository until it is useful to the person or the enterprise.

Therefore, to understand effectiveness we need to understand the degree to which change is likely or has measurably occurred. To do this, it is useful to separate those learning experiences that might produce predictable results from those that produce measurable results. For purposes of this paper, learning events are described as being "survey" experiences or "competency" opportunities.

In both cases, it is often useful to measure the results of the learning experience. The traditional approach has been to provide some sort of examination. However, an exam might or might not be a useful or informative measure.

If the objective is to effect a change in attitude, thinking, or belief, we need only test for comprehension and use the results of an exam as a predictor of the possibility of change. Most learning opportunities available to students in the U.S. are treated as exercises in comprehension and measured accordingly.

If we want to verify competence, we must go further than testing for comprehension. We must actually observe that a change in behavior has occurred.

Therefore, tests might or might not point to the effectiveness of a learning experience. It all depends on the expectations: do we want to inform and expect comprehension (e.g., Cultural Diversity in the Workplace), or do we want to develop and validate competence (e.g., How to Lead a Culturally Diverse Team)?

Survey Learning Experiences

When the expectation of a learning experience is to increase understanding, the solution can be classed as a survey experience. There are few expectations that can be articulated beyond increased understanding of concepts, principles, processes or practices, though there are potentially many outcomes that are neither well understood nor measurable, such as increased motivation and better self-perception. It should also be noted that an opportunity for increased understanding for one person might be a requisite behavior change for another. For example, an introduction to abnormal psychology might be an informational course for a student focused on chemistry and an analytical competency course required of the student studying to be a clinical psychologist. In the first case, a test for comprehension is adequate. In the second case, validation of a behavior change is required, whether as part of the class or through subsequent observation.

For the survey experience, measuring effectiveness is as simple as testing for comprehension. Here we need to know that the student has comprehended (or memorized) some percentage of the questions posed on the exam. We need only hope that a change in thinking, beliefs or attitude has occurred. We need only be able to predict that such a change is possible.

Competency Learning Opportunities

When the expectation is of a measurable change in some behavior, we have a learning opportunity that is competency based. While we might predict the opportunity for a behavior change through an exam, there is no validation in the absence of direct observation, before and after the learning opportunity.

Historically, this has occurred as part of internships, apprenticeships, peer review and supervisory review. However accomplished, someone must actually attest to the achieved competency. Some courses include this observation and others anticipate that it will occur subsequent to the learning experience. For example, training call-center staff is not considered complete without some period of supervision, observation and assessment. This is true of both traditional and online learning opportunities.

Another example might be a course dealing with Respiratory Protection for Hazardous Waste Remediation Workers. In this case, even successful completion of simulation exercises does not guarantee that a person can use any one of the many safety devices in the presence of innumerable hazardous materials. Some supervision, observation and/or possibly guidance are still necessary. On the other hand, a course in Hazardous Waste Awareness for Plant Managers need not necessarily require a period of observation and guidance.

Limitations of the Research

To answer the question of digital course effectiveness, we need to examine credible research from the standpoint of the expectations for the learning opportunities. Unfortunately, there are limitations to what we can hope to conclude.

First, it should be said that no human research is perfect. There will always be some methodological deficiency, some question about the participants or some question about the motivations of the researcher. The research examined here is not an exception.

  • The research was conducted in an academic environment where students are supposedly already motivated to learn. Unfortunately, the research does not include examples from commercial initiatives. This is largely because there are so few legitimate research projects conducted in a commercial setting where we have an opportunity to compare digital modalities against classroom experiences.
  • With one exception, the participants are generally self-selected: they volunteered for the experience and their motivation is unknown.
  • There is a paucity of research by which to compare the results.
  • There is a lack of broad-based subject matter and environments by which to compare the conditions.

Moreover, it is impossible to prove a statement that digital learning opportunities are always as effective as their instructor-led counterparts. At best, we can only show that: (a) the preponderance of evidence suggests that they are as effective, or (b) the evidence clearly discounts digital learning as an effective alternative. Such is the nature of science and research, regardless of the question.

Review of the Literature

As mentioned before, the research literature offers as much theory and opinion as it does reliable research on the subject of effectiveness. Still, there is an abundance of well-considered positions on the value of, and concerns about, digital learning.

It is important to note that any such position, whether supporting or questioning digital learning, is as much a function of the dynamics between the individual's learning style and the learning environment as of the method of delivery. For example, when interaction is made part of the process, isolation is not likely to occur. When observation is inherent in the process, as is required for demonstrating competency, dishonesty is no greater than in classroom events.

The research selected for this review attempts to account for learning environments that are similar to what one might experience in a classroom setting, with similar expectations for time requirements and outcomes. Using the categories described above, four independent research projects are examined, all from academia. While it is possible to question the groupings (survey versus competency), the projects are examined in light of the usual expectations for such courses.

Survey Cases

Two projects are included here. Both projects were sponsored by professors who supposedly have a genuine interest in answering the question of effectiveness. In each case, any student focused on study that required competency in the material could consider the course a competency course. For the majority of students, it is expected that the courses were electives and not part of requisite training.

Introductory Psychology for Undergraduates

Given reports that digital learning opportunities produce results similar to classroom events in terms of grades, Stephanie Waschull (2001) was interested in understanding and comparing student attrition, performance and satisfaction in comparable sections of online versus classroom introductory psychology. Of particular importance, Waschull attempted to control for the effects of self-selection in digital courses.

To achieve these ends, Waschull (2001) conducted two studies:

  • Study 1 - Students in one section of introductory psychology received online digital courseware, and another section received the same material in the classroom. The online participants were self-selected.
  • Study 2 - Students in one section received online digital courseware, and another section received the same material in the classroom. Students were not informed of the delivery method at course registration.

In both studies, the classroom course met five times weekly for 50 minutes. Course meetings consisted of lectures supplemented by discussions. Students completed five written assignments and read assigned chapters from a textbook.

Online students visited the course Web site to read four to five lectures a week, visited 10 relevant Web sites a week, submitted five written assignments and read assigned chapters from a textbook. Aside from the differences in format, live discussion versus Web site review, the content was identical. All groups were given four identical tests and a final exam.

Results

In Study 1 (self-selection), there was no significant difference in outcomes in terms of student satisfaction or attrition. While the online students' test performance was similar to that of the classroom students, there was one difference: the proportion of students passing the course was significantly lower in the online section.

In Study 2, the proportion of students passing the course was not significantly different between the online section and the classroom version. Likewise, attrition rates, regardless of race, age or sex, were not significantly different. Though the difference in satisfaction did not reach significance, the Study 2 online students, who did not know at registration that the course would be digital, gave somewhat lower satisfaction evaluations than the classroom students.

Conclusions

In summary, Waschull (2001) concludes that when self-selection was controlled, attrition, performance and satisfaction were similar in online and classroom sections. Since there was no attempt to actually assess competency, nor a need to do so, we can only say that Waschull's research points to the effectiveness of survey-based digital learning in terms of performance, attrition and learner satisfaction.

Limitations of the Research

This study has its limitations and therefore the results are only suggestive. For example, we know little about Waschull's (2001) interest in digital learning or her biases. Hence, it is not possible to know what influence she had on the results simply through the organization and content of the digital course and the classroom version.

In addition, the results cannot be generalized to a larger population because of the small sample of students. In Study 1, there were only 14 students in the online section; in Study 2, there were 19.

Finally, there is the issue of the population: undergraduate students. These are students who are supposedly prepared to study and to focus on learning. However, age and maturity were not considered in the study and might or might not have been factors. Hence, these students might or might not be representative of learners in the commercial sector.

Unexpected Results

Even as this research, considering both studies, concludes that there was no significant difference in test performance between classroom and online groups, there was a higher failure rate in the self-selected online group (Study 1). Waschull (2001) was not able to establish a reason for this outcome, and the small sample might have contributed to the difference. It is also possible that those who chose the digital form had different and unrealistic expectations, such as assuming it would be easier.

Regulations and Policy in the Telecommunications Industry

Fallah and Ubell (2000) undertook to compare the effectiveness of digital learning with that of conventional in-class events using a "blind" test method. In a blind test, there is no opportunity for instructor influence on the exam outcome. In most research, the instructor is the researcher and is involved to some degree in the conduct of the digital course, particularly in the construction, administration and evaluation of the exams. Blind tests use independent scorers who know nothing about the student or the delivery method.

The researchers accomplished this by creating a digital (Web-based) version of the same content used for classroom instruction. The content, textbook, reading assignments and homework assignments were identical. The only difference was the lack of an instructor in the digital version. Exams were administered offline to both groups and proctored by teaching assistants. Test results were assigned a number rather than a student name, so those scoring the exams had no indication of whether the course was taken online or in class.
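
The mechanics of such a blind protocol are simple to illustrate. The sketch below is not Fallah and Ubell's actual procedure but a hypothetical rendering of the idea: each exam is keyed by an anonymous identifier, scorers see only those identifiers, and the delivery format is joined back to the scores only after grading is complete. All names and data structures here are illustrative assumptions.

```python
import random
import uuid

# Hypothetical roster: student name -> delivery format. Illustrative only,
# not data from the study.
roster = {
    "Student A": "online",
    "Student B": "classroom",
    "Student C": "classroom",
}

# Step 1: assign each exam an anonymous ID; the key is sealed away from scorers.
key = {name: uuid.uuid4().hex[:8] for name in roster}             # name -> anonymous ID
sealed_lookup = {anon_id: name for name, anon_id in key.items()}  # withheld until grading ends

# Step 2: scorers receive only anonymous IDs, in shuffled order, so neither the
# student's identity nor the delivery format is visible while grading.
exams_for_scoring = list(key.values())
random.shuffle(exams_for_scoring)

# Step 3: grades are recorded against the anonymous IDs.
scores = {anon_id: None for anon_id in exams_for_scoring}  # filled in by the graders

# Step 4: only after all grading is complete are scores joined back to the format.
def unblind(scores, sealed_lookup, roster):
    """Attach student name and delivery format to each score once grading is done."""
    return [
        (sealed_lookup[anon_id], roster[sealed_lookup[anon_id]], grade)
        for anon_id, grade in scores.items()
    ]
```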

Results and Conclusions

Though the test results from the two forms of instruction were not significantly different, the average score for the online class was five points (five percent) higher than for the on-campus class. Hence, Fallah and Ubell (2000) conclude that even with instructor influence and bias removed, digital learning can be as effective as traditional classroom instruction.
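
To see why a five-point gap in means can still fall short of statistical significance with groups this small (seven online and 12 on-campus students), consider a minimal sketch using entirely hypothetical scores. The numbers below are not the study's data; they only illustrate how a two-sample t-test behaves at these sample sizes when one group's scores are widely spread.

```python
from statistics import mean
from scipy import stats  # two-sample (Welch) t-test

# Hypothetical exam scores, NOT the data from Fallah and Ubell (2000).
# The online mean is roughly five points higher, mirroring the reported gap,
# and the on-campus scores are spread into a rough bimodal pattern.
online_scores = [88, 90, 85, 92, 87, 89, 91]                          # 7 students
campus_scores = [95, 93, 70, 72, 96, 94, 71, 92, 73, 82, 74, 95]      # 12 students

t_stat, p_value = stats.ttest_ind(online_scores, campus_scores, equal_var=False)

print(f"online mean = {mean(online_scores):.1f}")
print(f"campus mean = {mean(campus_scores):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# In this illustration, a roughly five-point difference in means does not reach
# the conventional p < .05 threshold, because the samples are small and one
# group's scores vary widely.
```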

Limitations of the Research

As with the Waschull (2001) study, this research is limited by the number and type of participants.

  • Only seven students participated in the online version of the course and they were all self-selected. Only 12 students attended the on campus class. Hence, the small sampling only points to the possibility that digital learning is effective.
  • By having a purely self-selected group for the digital learning exercise, we cannot know the expectations or motivations of the participants, though the researchers draw some interesting conclusions based on student profiles.

Unexpected Results

Unlike the Waschull (2001) study, Fallah and Ubell (2000) report that the test scores of the online group were generally higher than those of the on-campus group, though not by a significant margin. The on-campus group showed a mean exam score similar to the online group but, unlike the online students, the on-campus scores followed a bimodal distribution, with some students scoring high and some low. From this, the authors made some interesting observations when student profiles and retention were considered:

  • Retention—The number of dropouts in the online class was higher than the on campus group. The authors suspect that the rigor of an online class forced out those who were unprepared or marginal learners, leaving a homogeneous group and a tighter distribution of grades.
  • Population—The course participants were self-selected and were working professionals. Those who registered for the on-campus class were full-time students. The researchers suspect that the digital course population, particularly those who completed the course, were more mature personally and possibly professionally (some held management positions) and this strongly influenced the grades.

Since the researchers did not specifically control for these variables, we have only the observations to work with. Still, it is interesting to note the differences in experience of this study and the Waschull (2001) study. The primary difference seems to be that the student population that performed best with a digital learning course can be characterized as older, more mature and possibly more motivated than Waschull's group of self-selected undergraduate students.

Competency Cases

Two cases of competency course research are reviewed here. The single most important reason these courses are considered examples of competency training is the intent of the researcher. In both cases, the researcher approached the course as requiring demonstrated competency before a passing grade could be issued.

Undergraduate Business Communications

Shelia Tucker (2001) conducted a study to compare traditional classroom training to digital learning in an attempt to understand if digital learning is better, worse, or as good as traditional education. Tucker went further than much of the digital learning research. She was interested in understanding differences in preferred learning styles, age, homework grades, research paper grades, case-based final exam, final grades and subject matter knowledge as measured by both pre-test and post-test. It is noteworthy that the combination of all factors and grading elements shows competency, not just comprehension.

Tucker (2001) drew from 47 participants in an undergraduate "business communications" class with 23 being assigned to the traditional classroom experience (mean age of 23) and 24 electing to take the digital course (mean age of 38). Tucker, as instructor and researcher, provided the same content, course materials, assignments and the same period for completing the course to both groups. All were given the same pre and post-test, homework, research project and exams. Additionally, both groups were required to participate in discussions and problem-solving exercises to both develop skills and demonstrate mastery. The primary differences in delivery were that:

  • Online students used technology for discussion groups.
  • Online students had contact with the instructor only through e-mail, discussion group facilities, telephone, and fax.
  • Online students had access to lectures only through audio links.

Results

Statistical analysis of the measures of learning style, pre and post-test scores, homework grades, research paper grades, case-based final exam and final grades pointed to significant differences in post-test scores, final exams and age. No significant differences were found in the other variables. The following table summarizes the results:

Variable          Digital Learning
Pre-Test          No difference
Post-Test         Better
Final Exam        Better
Final Grade       No difference
Age               Better
Homework          No difference
Research Paper    No difference

The learning-styles assessment pointed to some interesting and possibly predictive results. Both groups expressed a preference for well-organized courses and an expectation of a moderately high final grade. Both desired meaningful assignments and a logical sequence of activities.

The digital learning group particularly preferred having direct contact with the materials, topics and situations. They least preferred authority and listening. They tended not to like classroom discipline or maintenance of order, nor did they like listening to lectures, tapes and speeches.

Conclusions

The statistics alone point to digital learning as being as effective as traditional classroom events. While those who took the digital learning course did better in particular areas than those in the classroom, we cannot conclude that competency is necessarily better established or more aggressively pursued among digital learners than classroom attendees. All we can suggest is that the digital learners scored higher in three categories than classroom learners.

It is interesting to note that age was a significant factor. Older students not only preferred the digital version of the course, they performed better than those in the classroom did. As with the other research, this suggests that perhaps maturity plays a role in preference and in success. To some degree, this is supported by the learning-styles assessment. With maturity and age, one would expect there to be less interest in the authority and discipline implied by classroom learning experiences and greater preference for independent activities.

Limitations

As with the other research, this study suffers from two limitations: (a) the researcher is also the instructor, and we have no information about biases that might have influenced the outcome; (b) though the studied population is somewhat larger than in the other studies, it is still small.

Unexpected Results

The research did not specifically focus on those activities that would most effectively assess competency. However, it is interesting to note that the final exam was a series of case studies in which the students were asked to propose and discuss solutions. This meant that comprehension was far less a factor in the grade than critical thinking and the ability to respond to situations with different behavior.

While we cannot make reliable assertions, we can point to the significantly higher performance of the digital learners on this aspect of the course compared to the classroom learners to offer anecdotal evidence that some digital learners might actually demonstrate higher competence than classroom learners given a suitable learning opportunity. Unfortunately, we cannot yet predict reasons for such results. Learner age, maturity, life experience and/or motivation could be the key factors and digital learning might not be a factor at all. We can only watch for trends in other research.

Instructional Design for HR Professionals

Johnson, Aragon and Palma-Rivas (2000) conducted research to test the effectiveness of digital learning against an equivalent course taught in a face-to-face format. In particular, the design attempted to determine whether properly designed environments that differ on many characteristics (media, etc.) can be equivalent in terms of learning and satisfaction. The research was interested in overall performance plus factors that have not been well studied:

  • Student ratings of the instructor
  • Course quality
  • Course interaction, structure and support
  • Competency as assessed through objective measures and student self-assessment of their ability to perform various instructional design tasks.

The researchers designed and conducted the experiment. Both the online course and the classroom event were taught by the same instructor using the same content, activities and projects. Nineteen students enrolled in the online version. All were pursuing a graduate degree in Human Resource Development (HRD) through a digital learning program at the university. Another nineteen students enrolled in the classroom course. These students were attending the same graduate program, only through the traditional face-to-face program at the same university.

Of particular importance in this study, the groups were largely equivalent before the start of the research. Prior experiences with instructional design and/or ISD learning opportunities were statistically equivalent. In contrast to the other research discussed in this paper, there were only slight and insignificant differences in age between the groups. The years of work experience and undergraduate GPAs were also equivalent.

The researchers used two primary forms of data for their conclusions:

  • Observed competency in ISD: exams and a project provided a measure of competency. Each student was required to complete a training package that represented six to eight hours of instruction, including all training materials, instructional materials and student materials. The package had to be sufficiently complete so that another instructor could deliver the course with minimal preparation. Grading was accomplished by a blind review process in which three doctoral students independently evaluated each project without knowing its source.
  • Self-assessment: students were asked to rate their level of comfort at performing various ISD tasks.

Results

The following table summarizes the results of the research for course outcomes:

Variable                              Outcome
Course Interaction                    More positive in the classroom course
Student-to-Student Interactions       More favorable in the classroom course
Student-to-Instructor Interactions    Much more favorable in the classroom course
Course Structure                      No difference in perception
Instructor Support                    More favorable in the classroom course
Course Project                        No difference in outcome
Course Grades                         No difference in outcome

The following shows the students' reported levels of comfort at performing the tasks associated with instructional design. Only five of the 29 items on the self-assessment were significantly different.

Item                                  Group Reporting More Comfort
Distinguishing among ISD models       Digital course
Preparing a learner analysis          Classroom course
Preparing a content analysis          Classroom course
Writing goal statements               Classroom course
Writing terminal objectives           Classroom course

Conclusions

The results of this study show that student satisfaction with the learning experience tends to be slightly more positive for students in the traditional classroom setting than for those taking the digital course, though there is no significant difference in the quality of learning that takes place. The blind review process showed no differences in the quality of the projects. The same claim can be made for the exams. Hence, the researchers conclude that there is no significant difference in the outcome of one format versus the other, though students taking the digital version of the course are likely to be less satisfied with their experience.

Unexpected Results

The authors included the self-assessment of students' comfort at performing ISD tasks, but they dismissed a potentially important result. The study shows that on only five of the 29 self-assessment questions was there a significant difference in levels of comfort. On four of those five, the digital learning group felt less comfortable with their competence. While this is not, on the whole, a statistically significant pattern, the nature of the four items is potentially revealing. Here the students seem to be saying that in particularly critical areas of instructional design, they feel less prepared to execute even though they have demonstrated a degree of competence at least equal to that of the classroom students.

We have no information by which to understand this result, and the researchers tend to dismiss it. Still, one might speculate that if the demonstrated competence is equal but the perception is not, the students might have had less reinforcement by, and acknowledgement from, the instructor and/or peers. Again, the data do not directly support this speculation. Rather, it rests on the received understanding that instructors can have considerably more effect than just the communication of knowledge. They can also influence the learner's perception of competence. In this study, the digital learners had no face-to-face interaction with the instructor.

Conclusions

Taking the studies together and ignoring the limitations of the research, it is possible to form some tentative conclusions:

Survey Courses

The following conclusions are offered from the studies of digital survey courses.

  • Digital learning is at least as effective as classroom courses.
  • The motivation and expectations of the learner are critical to retention and successful completion of a digital course. Age and maturity are likely to be significant factors in the successful completion of a digital course.

Competency Courses

The following are conclusions drawn from the studies of digital competency courses:

  • Digital learning is at least as effective as classroom events.
  • The motivation, age, maturity and life experience of the student can be significant determinants to the degree of success. When motivation, maturity and/or life experience are high, digital learners can sometimes show greater competency than classroom learners.
  • Though the evidence is weak, at best, there is a suspicion that the quality of instructor/mentor reinforcement and support for the digital student can affect the student's perception of their competence, even when competence is well demonstrated.

Returning to the question of when and where digital learning is as or more effective than traditional classroom training using equivalent learning content, these studies point to digital learning as being an effective alternative to traditional classroom learning for both survey and competency courses when:

  • Students are motivated and prepared to learn, even in cases where there is minimal human interaction.
  • Preferences for authority, lectures, speeches and/or listening are unimportant or minimally important.

As each of the researchers states in their conclusions, clearly more research is needed to understand when, where and for whom digital learning is most appropriate. This is particularly true for populations of learners in the commercial world. In the meantime, it is safe to suggest that the judicious and well-considered use of digital learning can offer significant advantage to organizations and to those learners prepared to exercise the same discipline and desire to learn that are requisites of all true learning opportunities.

Recommendations

The research examined here represents a small portion of the growing body of evidence in support of digital learning. Still, drawing on received wisdom and the conclusions stated above, particular recommendations can be made to organizations contemplating digital learning as an alternative to classroom events, or reviewing their digital learning programs to make them more successful.

Determine the Expected Outcome and Design Accordingly

If the expectation is of a change in attitude, beliefs or thinking, design the course to be a survey course. For such courses, limited human interaction, such as normally occurs with self-paced digital designs, is acceptable, as is perhaps the use of the ubiquitous multiple-choice exam. The conditional regarding the use of an exam follows from the question: why bother to test at all?

If the expectation is only increased understanding, why do we need to test? If Saljo (1987) is correct, the most significant predictor of success in a survey course is the fact that the student read the text. The exam is merely a confirmation that the student read that text. Still, if an exam is intended to actually measure and to record one's comprehension, it should be included as part of a formal course. If there is no particular need for such an assessment, forget the exam and rely on learner motivation (whether a function of a requirement or self-interest) and the learner's "need-to-know."

Taking the question one step further, if we do not need to test, why even bother to place the content in a digital course format? If the content is based only on an organizational need to inform the learner, organizations can save considerable money, and learners can benefit equally, by simply making the material available through a digital library of documents so that it is at hand when the learner determines it is useful. If the student is motivated, learning (a.k.a. change) will occur. If not, memorization and the consequent mental dust will prevail.

Another way to answer questions of "test or not" and "format as a course or not" is to take the least costly path by using commercial course libraries to deliver the learning opportunity. Ignore issues such as look-and-feel, exam results, branding and user interface. If the survey course meets the informational requirements of the organization, contract for the course and avoid the cost of development.

On the other hand, if the course is a competency course, it should be custom developed. Here the expectation is that the learner will change behavior in some way that benefits the organization and the learner (new skills, new capability, enhanced performance, etc.).

By the earlier definition, competency is measured by observation, not just by exam. Hence, the organization's subject matter experts or trained instructors must be involved in the design, development, delivery and assessment processes. Competency might be partially achieved through a digital library or a commercial catalog. It might be enhanced by independent research through the Internet. In the end, competency courses require human intervention and a program specifically designed to confirm that the student can perform to the required level of competency.

Rather than attempt to measure the competency of a person, the organization might want to consider measuring the competency of an entire group of people. If a group has achieved a satisfactory level of competence after having taken a course or program, some change in business performance should be observable by the leadership, even if that change is measured by subjective means.

Make it a Requirement

If the organizational expectation is genuinely to effect change, whether through a survey or competency course, make it a requirement for continued employment. Motivation is a factor, perhaps even a significant one, in determining whether a course is ever completed. Making a course a requirement of employment is one of the ways by which to increase that motivation.

Encourage, If Not Demand, Interaction

Although less important for survey courses than competency courses, interaction with instructors/mentors and peers can be important to learner satisfaction and confidence. Instructor or mentor interaction can provide the reinforcement and support the learner needs to feel competent. Keep humans involved in the learning process.

Interaction might be achieved using electronic facilities such as synchronous, interactive software, chat rooms or even bulletin boards. E-mail is another alternative. For decades, colleges and universities that offered distance-learning programs used telephones. One of the most innovative strategies was conducted on a reservation in Arizona, where classes were broadcast over a low-power radio station using a talk-show format. Students called in to interact with the instructor and to debate or ask questions. All of the above are examples of blended learning using electronic strategies.

Provide Opportunities for Practice

Competency is not achieved by just reading material and viewing pages on a computer display. Provide many opportunities for human interaction through peer-to-peer exercises, discussion groups, practice sessions and similar activities. In many cases the human interaction is possible by electronic means, whether in real-time or asynchronously. In any case, design is important. The activities must be relevant to the course objectives. In some cases, the practice must be physical.

Make the Content Relevant and Timely

It was recently asserted (Bob Allen, EDS, 2002, personal communication) that the best way to teach is to make the needed material an obstacle to getting one's work done. When the material is necessary, relevant and/or important, place it in front of the learner rather than invite the learner to take a course. In short, consider the need for the learning opportunity and push the content to the learner when appropriate. For example, if supervisors need to learn a new procedure, intercept the supervisory user's log-on to an internal system and post the following message: "New OSHA regulations go into effect today and they alter the way you do business…click here to learn how."
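
As a sketch of what pushing content at the point of need might look like, the fragment below intercepts a hypothetical supervisor log-on and surfaces an unacknowledged learning notice. The data structure, function names and URL are all illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Notice:
    """A piece of must-see content tied to a business event (hypothetical structure)."""
    audience: str        # e.g. "supervisor"
    message: str
    learn_url: str       # placeholder link to the learning content
    acknowledged_by: set = field(default_factory=set)

# Illustrative notice modeled on the OSHA example in the text.
notices = [
    Notice(
        audience="supervisor",
        message="New OSHA regulations go into effect today and they alter the way you do business.",
        learn_url="https://intranet.example.com/osha-update",  # hypothetical URL
    )
]

def on_login(user_id: str, role: str) -> list[str]:
    """Called by the log-on routine; returns any unacknowledged notices for this user."""
    return [
        f"{n.message} Click here to learn how: {n.learn_url}"
        for n in notices
        if n.audience == role and user_id not in n.acknowledged_by
    ]

# Example: a supervisor signing in sees the notice until it is acknowledged.
for line in on_login("jdoe", "supervisor"):
    print(line)
```

The same hook could just as easily attach the notice to the specific transaction it affects, keeping the content an "obstacle" only where it is actually relevant to the work.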

Summary

Much has been said about the conclusions we can extract from the studies and about consequent recommendations. More need not be said here. Rather, three important points need to be emphasized to conclude this review.

Digital learning is as effective as any other learning method when:

  • The organization's expectations are clearly identified.
  • The learner's needs and readiness are properly considered.
  • Business objectives merge with reason and sound pedagogical thought to guide the course design and decision-making processes.

In short, for any learning opportunity to be effective, regardless of delivery means and methods, it must consider these points.

References

Beare, P.L. (1989). The comparative effectiveness of videotape, audiotape and telecture. The American Journal of Distance Education, 3, 57-66.

Campbell, K. (2000, March). E-Learning: Fact or Fiction? Dynamic Business Magazine. Retrieved from http://www.smc.org/.

Fallah, M & Ubell, R. (2000, December). Blind scores in a graduate test: Conventional compared with web-based outcomes. ALN Magazine, 4(2), 1-5.

Feuerstein, R. (1980). Instrumental Enrichment: An Intervention Program for Cognitive Modifiability. Baltimore: University Park Press.

Fitzpatrick, R. (2001). "Is distance education better than the traditional classroom?"

Howe, J. (1987). Using cognitive psychology to help students learn how to learn. In J. Richardson, M. Eysenck and D. Piper (Eds), Student learning: Research in Education and Cognitive Psychology (pp. 135-146). Philadelphia: The Society for Research into Higher Education. Open University Press.

Johnson, S., Aragon, N. & Palma-Rivas, N. (2000). Comparative analysis of learner satisfaction and learning outcomes in online and face-to-face learning environments. Journal of Interactive Learning Research, 11, 29-49.

Kerka, S. (1996). Distance learning, the Internet, and the World Wide Web. Washington, D.C.: Office of Educational Research and Improvement. (ERIC Document Reproduction Service No. ED 395 214)

Knight, J., Ridley, D.R., & Davies, E.S. (1998, May). "Assessment of student academic achievement in an on-line program." Paper presented at the Association for Institutional Research Annual Forum, Minneapolis, MN.

Lawson, T. (2000). Teaching a social psychology course on the web. Teaching of Psychology, 27, 285-288.

McCleary, I. & Egan, M. (1989). Program design and evaluation: Two-way interactive television. The American Journal of Distance Education, 3, 50-60.

Ridley, D.R. (1998, June). "The 1998 assessment report on CNU on-line." Paper presented to the State Council of Higher Education for Virginia, Newport News, VA.

Ruttenberg, A. & Ruttenberg, R. (2000). "Railway Workers Hazardous Materials Training Program: Evaluation of On-line Eight-Hour Awareness Training Pilot." Unpublished paper: George Meany Center for Labor Studies-National Labor College, Bethesda, MD.

Saljo, R. (1987). The educational construction of learning. In J. Richardson, M. Eysenck and D. Piper (Eds), Student learning: Research in Education and Cognitive Psychology (pp. 101-108). Philadelphia: The Society for Research into Higher Education. Open University Press.

Sonner, B. (1999). Success in the capstone business course - assessing the effectiveness of distance learning. Journal of Education for Business, 7, 243-248.

Tucker, S. (2001). Distance education: Better, worse, or as good as traditional education. Online Journal of Distance Learning Administration, 4(4).

Waschull, S.B. (1997, September). "Teaching and learning over the WWW." Paper presented at the meeting of the National Alliance of Community and Technical Colleges, Biloxi, MS.

Waschull, S.B. (2001). The online delivery of psychology courses: Attrition, performance, and evaluation. Teaching of Psychology, 28, 143-147.


