
Synchronous Learning: Is there a future?

By Martin Sivula / September 2013


In the last five years or so, society has witnessed a proliferation of electronic devices—smartphones, iPads and other tablets, netbooks, and the like. This trend has brought more mobility to the learner in terms of what, where, when, and how to learn. In response, many schools and colleges have developed degree programs and courses to accommodate a wide variety of learner types. Although many universities offer online instruction, they predominantly offer asynchronous courses. A great opportunity also exists, however, in the synchronous, online learning environment.

While appealing to universities in terms of convenience and cost-effectiveness, asynchronous learning has drawn negative comments and criticism from students. Bound up with these sentiments is the question of students' attitudes toward using, or not using, eLearning: "Who is doing the learning, there is no engagement, this is like an independent study, there is no instruction, he doesn't get back to me, read the book and do the problems by the due date…" and so on. Shi, Du, Jiang, and Bin Saab reported: "In summary, students' perception towards online education is somewhat negative. This is due to the fact that only 29 out of 100 took online courses, and out of those 29 in the sample, only 35 percent believe that the online education is just as effective as traditional education" [1]. They went on to state: "Fifty-six percent reported having positive perception towards online education and yet only 22 percent would actually take an online course if given the opportunity."

Many of these critiques may or may not be warranted. However, they negatively affect present and future eLearners' attitudes toward the technology and their actual use of it. This is a problem affecting graduate students at our university—students who are increasingly mobile due to work and family obligations. In focus group sessions, they reported a preference for face-to-face, instructor-led courses and stated they would not take an online, asynchronous course. In further group discussions, many students reiterated that they would like to have face-to-face opportunities on their computers, but that they would prefer to be at home, school, or work, not necessarily on campus. It was apparent that a better understanding of eLearning tools and their applications in training and education was needed. With these considerations in mind, we undertook classroom research and survey research using Blackboard's Wimba collaborative software tool.

Classroom Research

Applying the principles of Cross and Angelo [2], we conducted collaborative classroom research using synchronous eLearning software (Skype and Yugma). Cross and Angelo characterize classroom research as:
  • Learner-centered "focuses its primary attention on observing and improving learning rather than on teaching."
  • Teacher-directed proposes that, "teachers are capable of conducting useful and valid research on classroom learning."
  • Collaborative "requires the active engagement of students and teachers…benefits from full discussion and participation by all [stakeholders]."
  • Context-specific "is conducted to shed light on the specific questions of an identified classroom" and "involves the teaching of a particular discipline to a known group of students."
  • Scholarly "intellectually demanding and professionally responsible. It builds on the knowledge base of research on teaching and learning. It requires the identification of a researchable question, the careful planning of an appropriate research design, and consideration of the implications of the research for practice."
  • Practical and relevant: "The questions selected…are practical questions that the teacher faces in teaching the class…the primary purpose…is to deepen personal understanding…[and make a] contribution to the knowledge and practice of the teacher."
  • Continual: ongoing, and can "raise new questions, leading to cascading investigations."

Overall, classroom research on eLearning has been shown to be successful and to positively influence student learning [3]. Additionally, the Technology Acceptance Model (TAM) provides a framework for students collaborating and discussing course content in real time: students gauge the ease of use of the software and individually decide whether or not it is personally useful for their learning. The Diffusion of Innovations (DOI) model, meanwhile, provides background on the willingness of student learners to adopt new software innovations and examines their rates of adoption over time.

Technology Acceptance Model

Emerging instructional technology cannot deliver improved learning effectiveness if it is not accepted and used by potential users. According to the Technology Acceptance Model (TAM), the easier a technology is to use, the more useful it is perceived to be, thereby influencing attitude and one's intention to use the technology [4]. TAM is informed by Fishbein and Ajzen's reasoned action model, which holds that producing a behavior depends upon two basic factors: attitudes toward the behavior and subjective norms [5].
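
To make the structure of the reasoned action model concrete, the short sketch below expresses behavioral intention as a weighted combination of attitude and subjective norm. The weights and input values are purely hypothetical assumptions for illustration; they are not drawn from Fishbein and Ajzen or from the present study.

    # A toy sketch of the reasoned action structure: behavioral intention as a
    # weighted combination of attitude toward the behavior and subjective norm.
    # The weights and inputs are hypothetical, chosen only for illustration.
    def behavioral_intention(attitude, subjective_norm, w_attitude=0.6, w_norm=0.4):
        """Inputs assumed scaled to [0, 1]; returns an intention score in [0, 1]."""
        return w_attitude * attitude + w_norm * subjective_norm

    # A learner with a favorable attitude (0.8) but lukewarm peer expectations (0.5)
    print(behavioral_intention(attitude=0.8, subjective_norm=0.5))  # 0.68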

King and He's meta-analysis of 88 published studies of the TAM confirms the model can be used in a wide variety of contexts, and that perceived usefulness affects ease of use and intent to use the technology [6]. According to Lee, Cheung, and Chen, perceived usefulness and perceived enjoyment impact students' attitude toward, and intention to use, an Internet-based learning medium (ILM) [7]. Saadé, Nebebe, and Tan state that students' involvement and participation were most important to the success of eLearning systems [8]. They further recommend that students' acceptance behaviors be assessed, and conclude that the TAM's validity could also extend to multimedia and eLearning contexts.

Kamel [9] describes the two major components addressed through TAM as follows:

  • Perceived usefulness (PU) is the prospective user's subjective probability that using a specific application system will increase his or her job performance within an organizational context. Characteristics and attributes include: work more quickly, job performance, increased productivity, effectiveness, makes job easier, and useful.
  • Perceived ease of use (PEU) is the degree to which the prospective user expects the target system to be free of effort. Characteristics and attributes of these constructs are (these were also used in the construction of the survey instrument): easy to learn, clear and understandable, easy to become skillful, easy to use, and controllable.
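
As a rough illustration of how such a survey instrument might be scored, the sketch below averages hypothetical five-point Likert responses into PEU and PU construct scores. The item names mirror the attributes listed above, but the data and the scoring scheme are illustrative assumptions, not the study's actual instrument or analysis.

    # A minimal scoring sketch (hypothetical responses, not the study's data):
    # average five-point Likert items into TAM's PEU and PU construct scores.
    from statistics import mean

    # 1 = strongly disagree ... 5 = strongly agree
    responses = {
        "easy_to_learn": 4, "clear_and_understandable": 5,
        "easy_to_become_skillful": 4, "easy_to_use": 4, "controllable": 3,
        "work_more_quickly": 4, "job_performance": 3, "increased_productivity": 3,
        "effectiveness": 4, "makes_job_easier": 5, "useful": 4,
    }

    PEU_ITEMS = ["easy_to_learn", "clear_and_understandable",
                 "easy_to_become_skillful", "easy_to_use", "controllable"]
    PU_ITEMS = ["work_more_quickly", "job_performance", "increased_productivity",
                "effectiveness", "makes_job_easier", "useful"]

    peu = mean(responses[item] for item in PEU_ITEMS)
    pu = mean(responses[item] for item in PU_ITEMS)
    print(f"PEU score = {peu:.2f}, PU score = {pu:.2f}")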

Park's analysis of 628 students' use of eLearning also showed support for TAM within the university environment [10]. Park recommends that educators and managers make a concentrated effort to boost university students' eLearning self-efficacy. Furthermore, institutions should offer a greater variety of eLearning courses and advertise the benefits of eLearning to attract more students, and more user-friendly and user-oriented content should be developed within the university's learning management system. Such developments might foster new perceptions of eLearning and consequently create more satisfaction with, and use of, eLearning. Lastly, Park recommends that researchers conduct similar studies in blended and synchronous eLearning environments employing the TAM as a model.

Diffusion of Innovations

Rogers defines diffusion as "the process by which an innovation is communicated through certain channels over time among the members of a social system" [11]. He goes on to state an innovation is "an idea, practice, or object that is perceived as new by an individual or another unit of adoption." Rogers' Diffusion of Innovations (DOI) theory posits that innovations are communicated through certain channels over time and within a particular social system. Individuals possess different degrees of willingness to adopt innovations, and it is generally observed that the portion of the population adopting an innovation is approximately normally distributed over time. Using the statistical properties of the normal distribution leads to the segregation of individuals into the following five categories of individual innovativeness: innovators (2.5%), early adopters (13.5%), early majority (34%), late majority (34%), and late adopters (16%). Members of each category typically possess distinguishing characteristics:

  • Innovators—venturesome, educated, multiple info sources
  • Early adopters—social leaders, popular, educated
  • Early majority—deliberate, many informal social contacts
  • Late majority—skeptical, traditional, lower socio-economic status
  • Late adopters—neighbors and friends are main info sources, fear of debt

When the normal adoption curve is converted to a cumulative percentage curve, a characteristic S curve (the logistic function) is generated, which represents the rate of adoption of the innovation within the population. The rate of adoption of innovations is impacted by five factors: relative advantage, compatibility, trialability, observability, and complexity [11]. The first four factors are generally positively correlated with rate of adoption, while the last factor, complexity, is generally negatively correlated with rate of adoption.
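
The sketch below shows, under the standard textbook assumptions (adoption times following a standard normal distribution, and a unit logistic curve), where the adopter-category percentages come from and how cumulative adoption traces the S curve. It is illustrative only.

    # Sketch: derive Rogers' adopter-category percentages from the normal
    # distribution, and trace the cumulative (S-curve) adoption with a logistic.
    import math

    def normal_cdf(z):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    # Category boundaries in standard deviations from the mean adoption time.
    categories = [
        ("innovators",     -math.inf, -2.0),
        ("early adopters", -2.0,      -1.0),
        ("early majority", -1.0,       0.0),
        ("late majority",   0.0,       1.0),
        ("late adopters",   1.0,       math.inf),
    ]
    for name, lo, hi in categories:
        share = normal_cdf(hi) - normal_cdf(lo)
        print(f"{name:>15}: {share:.1%}")
    # ~2.3%, 13.6%, 34.1%, 34.1%, 15.9% -- Rogers rounds these to
    # 2.5 / 13.5 / 34 / 34 / 16 percent.

    def cumulative_adoption(t, k=1.0, t0=0.0):
        """Unit logistic S curve: fraction of the population adopting by time t."""
        return 1.0 / (1.0 + math.exp(-k * (t - t0)))

    for t in range(-4, 5):
        print(f"t = {t:+d}: {cumulative_adoption(t):.1%} adopted")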

Lee, Hsieh, and Hsu combined innovation diffusion theory with the TAM [12]. The five diffusion of innovations factors—relative advantage, compatibility, trialability, observability, and complexity—explained the variance (R-squared) of three of TAM's major constructs: PU (42%), PEU (64%), and behavioral intention to use (BI, 51%). This study examined business employees (N = 552) using eLearning systems in Taiwan. One of the principal findings was that the DOI factors of relative advantage, compatibility, trialability, observability, and complexity had significant effects on the employees' behavioral intention to use eLearning systems—certainly justifying DOI's use in eLearning research. Lastly, Lee et al. conclude that combining the TAM and DOI models could offer better overall results in future studies [12].
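
For readers curious how such explained-variance figures are typically obtained, the sketch below regresses a hypothetical PU score on the five DOI factors with ordinary least squares and reports R-squared. The data and weights are synthetic assumptions; this is not Lee, Hsieh, and Hsu's analysis.

    # Illustrative only: compute the variance in PU explained (R-squared) by the
    # five DOI factors, using synthetic data and ordinary least squares.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    # Hypothetical standardized scores per respondent for relative advantage,
    # compatibility, trialability, observability, and complexity.
    doi = rng.normal(size=(n, 5))
    # Hypothetical PU scores loosely driven by the DOI factors plus noise;
    # complexity (last column) is given a negative weight.
    weights = np.array([0.5, 0.4, 0.2, 0.2, -0.3])
    pu = doi @ weights + rng.normal(scale=0.8, size=n)

    X = np.column_stack([np.ones(n), doi])          # add an intercept column
    beta, *_ = np.linalg.lstsq(X, pu, rcond=None)   # OLS fit
    pu_hat = X @ beta
    r_squared = 1 - np.sum((pu - pu_hat) ** 2) / np.sum((pu - pu.mean()) ** 2)
    print(f"R-squared for PU explained by the DOI factors: {r_squared:.2f}")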

The Wimba Study

Understanding how students perceive and react to various elements of eLearning was the goal of the Wimba study, along with assessing student intentions and beliefs about eLearning. The sample comprised graduate students (N = 66) from three sections of a research methods course taught at a southern New England university. Approximately 80 percent of the sample were international students, and English was a second language for the majority of participants. The overall research design was one of participatory observation and action research; evaluation was completed using the survey method. No random sampling techniques were used; this was a convenience sample.

Procedure

The instructor gave students the option to participate in the Wimba online classroom in one of three ways: complete participant (online from home), participant observer (laptop in classroom), or observer participant (real-time, face-to-face, no devices in classroom). Students who chose to use the collaborative tool electronically received two training interventions: (1) a YouTube video on the Wimba classroom, and (2) a PDF file on Wimba use from Blackboard Inc. Students who participated electronically were instructed to use a "wizard" prior to entering the Wimba classroom. This wizard runs a software diagnostic on a student's computer to check compatibility with the Wimba system.

The classroom sessions were three hours in length, with approximately a 10-minute break after each hour. The instructor completed two sessions with the 66 participants, using three major components: PowerPoint presentations, the electronic pen, and the digital blackboard. Wimba provides audio through the Blackboard system, so in essence the students received "show and tell" presentations. They were allowed to ask questions (audio) from their computers. Students within the face-to-face classroom could also ask questions in real time; the instructor had a microphone attached to his computer, which would pick up questions from the in-class students. The in-class (face-to-face) students viewed the PowerPoint slides and any digital writing on a Smart Board screen housed within the classroom, while students on computers and laptops saw the same material on their personal screens. For the most part we did not use any video, except when the instructor had his webcam turned on and pointed at the real, face-to-face classroom. In a sense, reality was somewhat "augmented."

Augmented reality is a live, direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics, or GPS data [13]. In the Wimba study, students had the option to augment their reality related to the classroom. If they chose to sit in the physical classroom, they could also view the classroom on their laptops while actually in it. In essence we "mixed reality" by blending virtual, instructional objects with a view of the real world, depending on where the student was viewing from. The instructor's webcam, pointed at the students in the physical classroom, augmented the reality of a student participating in the class from a home computer online. If an electronic pen and digital blackboard are used, the learner's reality is further "mediated." (We have found through previous classroom research that "show and tell" with an electronic pen and either PowerPoint or an electronic blackboard works well, combined with audio of the instructor's voice and interactive questions from students.) As eLearning expands and develops new visions for the future, new software applications such as augmented and mediated reality might provide opportunities for online learning.

Results

The students who participated in the classroom research were surveyed using the Google Docs Form tool. Eighty-two percent agreed or strongly agreed that Wimba was "easy to learn" and 73 percent stated it was "easy to use." These findings support the PEU construct of the TAM model. Seventy-three percent agreed or strongly agreed they would recommend Wimba to other persons for training, instruction, and learning. Only 31 percent of the students reported some technical problems during the three-hour session, and 55 percent reported no problems. More than 50 percent stated Wimba was favorable or beneficial for their learning, and 62 percent agreed or strongly agreed with the statement, "All things considered, Wimba was a positive experience." This finding is also consistent with the "relative advantage" construct of the DOI model. When asked, "All things considered I would recommend Wimba to other persons for training, instruction, and learning," 59 percent agreed or strongly agreed. However, when asked, "All things considered, I would enroll in an entirely Wimba-based course (term or semester)," only 24 percent agreed or strongly agreed, with the largest single group in the middle or undecided (17 percent).
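
The percentages above are simple agreement tallies. As a sketch, the snippet below shows how such "agreed or strongly agreed" figures can be computed from exported Likert responses (for example, a spreadsheet downloaded from the survey form); the responses listed are made up, not the study's raw data.

    # Sketch: tally "agreed or strongly agreed" from exported Likert responses.
    # The response list is fabricated for illustration.
    from collections import Counter

    responses = ["Strongly agree", "Agree", "Neutral", "Agree", "Disagree",
                 "Strongly agree", "Agree", "Neutral", "Agree", "Strongly disagree"]

    counts = Counter(responses)
    agree = counts["Agree"] + counts["Strongly agree"]
    print(f"Agreed or strongly agreed: {agree / len(responses):.0%}")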

Because the group was largely international, similar statements were used with slightly different adjectives and nouns. There was also a deliberate attempt to overemphasize the constructs of "perceived usefulness" and "perceived ease of use." A more detailed graphical analysis of the results, produced with the Google Docs Form Summary tool, appears in Appendix A.

Wimba provides voice, video, and data (chat). However, there were not many chat messages; audio was the preferred communication medium. Many students were too shy to have their webcams trained directly on themselves. (Wimba shows a user's webcam when they direct a question to the instructor.) Lastly, since most of the students were not native English speakers, the "show and tell" method worked well to help with translation and English problems by delivering the same message in more than one modality (such as voice and video).

Discussion

This research was successful overall, and once again TAM (PU, PEU, and behavioral intention to use) is supported as a model that is robust across many software technologies [4]. DOI's "relative advantage" construct was also supported [11]. The content of the course was delivered primarily through PowerPoint using Wimba's collaboration tool. The electronic pen allowed "inking" on the slides, accompanied by audio instruction. This simple instructional dialog seems to work well because students can ask questions, and all users can hear both the questions and the instructor's responses. This particular software/hardware combination has some delay in the audio, which is not a serious problem but might detract from the instructional presentation. Depending upon the type of content delivered, the instructor can assign "breakout rooms" for small-group discussions. Older students with work and family commitments might be able to attend a class from a remote location in this manner, and instructors might be able to teach their courses from remote locations as well.

The model used here (augmented/mediated reality) could be used in virtually any university, provided it has some type of collaboration software. A course could be delivered simultaneously to several hundred students (there may be bandwidth issues and other technical problems), some of whom would be at various locations on campus (with a TA present at each location) for face-to-face instruction. The professor would present on a large screen at each location, and students could simply view and listen (traditional lecture) or attend with their laptops and interact as well. The TA would be the "human component" at each location for personal student issues and the like. Wimba sessions can also be recorded, so professors could provide absent students with their lectures, and students could replay the lectures for review.

For the mobile learner, and eLearners in general, the synchronous environment has much to offer. Personal learning preferences might be accommodated, and universities might attract more students.

Disruptive Innovations: Synchronous and Mobile?

Christensen developed a theory of disruptive innovations in which an "innovation transforms an existing market or sector by introducing simplicity, convenience, accessibility, and affordability where complication and high cost are the status quo. Initially, a disruptive innovation is formed in a niche market that may appear unattractive or inconsequential to industry incumbents, but eventually the new product or idea completely redefines the industry" [14]. One of the best examples comes from the early 1980s, when the Apple IIe computer transformed the industry. Back then, mini and mainframe computers cost around $200,000. But ever so gradually this "disruption" improved, until smaller, more personal (and liberating) computers exceeded mini-systems' capacities, and the rest is history. Around the same time, high-level programming languages provided specific financial and business applications. Then came another disruptive innovation: Lotus 1-2-3, a spreadsheet program. Often described as the first "killer app," Lotus 1-2-3 was exclusive to the IBM PC in the 1980s. In the end Lotus held a near-monopoly on the spreadsheet market, which remained unchallenged for nearly a decade [15].

The point is that we are in the midst of many potentially disruptive innovations, depending upon how users adopt them and perceive their usefulness, ease of use, and relative advantage. One might think of extending eLearning to smartphone users. These "mobile-only Web users" rarely use a tablet, laptop, or desktop computer to access the Web [16]. In many developing nations the majority of Web users are mobile only and under 25 years old (a potential source of online learners); estimates of mobile-only users in the U.S. stood at 25 percent in 2010. This group has the potential to be a disruptive source of innovation for eLearning. A "turnkey" eLearning system could be marketed to this segment, in which the smartphone, wireless contract, downloaded instructional software, and specific Web browsers and apps all come in one package for one price for as long as the student is enrolled. A blend of synchronous and asynchronous learning activities would be standard, but all on the smartphone. Such eLearning systems could provide valuable learning opportunities for developing nations as well as for mobile-only users.

Limitations and Suggestions

This study was limited by its convenience sample of graduate students (N = 66). The main aim of the study was to use survey research to ascertain students' thoughts and opinions on collaborative software (Wimba) used for eLearning purposes. Generalization of these results should only be made to similar persons, settings, times, and places. The TAM and the DOI were addressed and appeared to be supported through the survey research method; however, no formal hypotheses were posed and no statistical tests were conducted in this study. Further research should employ an experimental design utilizing experimental/control groups and/or post-test-only groups with random assignment to each group, or at least a control/comparison group if randomization is not practical. The intervention/treatment would be synchronous instruction using some type of collaborative software; the control or comparison group would be a traditional, face-to-face class. Valued outcomes such as test scores, exam grades, and/or final grades might be considered the dependent variable(s). Lastly, new technologies such as the smartphone as an eLearning tool might prove to be valuable innovations, disruptive or not.

References

[1] Shi, N., Du, C., Jiang, X., and Bin Saab, H. Online versus face to face college courses. MBA Student Scholarship. Paper 1. Johnson & Wales University, Providence. 2011.

[2] Angelo, T. A., and Cross, K. P. Classroom Assessment Techniques: A handbook for college teachers. (2nd ed.) Jossey-Bass, San Francisco, 1993.

[3] Sivula, M. Hybrid graduate education: Assessing student comfort with technology interventions. Ubiquitous Learning: An International Journal 3 (2010), 1-11.

[4] Davis, F. D. A technology acceptance model for empirically testing new end-user information systems: Theory and results. Doctoral dissertation. MIT Sloan School of Management, Cambridge, 1986.

[5] Fishbein, M., and Ajzen, I. Belief, Attitude, Intention and Behavior: An introduction to theory and research. Addison-Wesley, Reading, MA, 1975.

[6] King, W.R., and He, J. A meta-analysis of the technology acceptance model. Information & Management 43 (2006), 740-755.

[7] Lee, M. K. O., Cheung, C. M. K., and Chen, Z. Acceptance of Internet-based learning medium: The role of extrinsic and intrinsic motivation. Information & Management 42 (2005), 1095-1104.

[8] Saadé, R. G., Nebebe, F., and Tan, W. Viability of the technology acceptance model in multimedia learning environments: Comparative study. Interdisciplinary Journal of Knowledge and Learning Objects 37 (2007), 175-184.

[9] Kamel, S. Technology Acceptance Model. [PowerPoint slides]. The American University in Cairo. 2004.

[10] Park, S. Y. An analysis of the technology acceptance model in understanding university students' behavioral intention to use e-Learning. Educational Technology & Society 12, 3 (2009), 150-162.

[11] Rogers, E. M. Diffusion of Innovations. 4th ed. Free Press, New York, 1995.

[12] Lee, Y.-H., Hsieh, Y.-C., and Hsu, C.-N. Adding Innovation Diffusion Theory to the Technology Acceptance Model: Supporting employees' intentions to use E-Learning systems. Educational Technology & Society 14, 4 (2011), 124-137.

[13] Graham, M., Zook, M., and Boulton, A. Augmented reality in urban places: Contested content and the duplicity of code. Transactions of the Institute of British Geographers 38, 3 (2012).

[14] Christensen, C. M. The Innovator's Dilemma: When new technologies cause great firms to fail. Harvard Business School Press, Cambridge, 1997.

[15] Lotus 1-2-3. (n.d.) In Wikipedia. Retrieved September 25, 2013, from http://en.wikipedia.org/wiki/Lotus_1-2-3.

[16] Global mobile statistics 2013 Part B: Mobile Web; mobile broadband penetration; 3G/4G subscribers and networks. Retrieved September 25, 2013, from http://mobithinking.com/mobile-marketing-tools/latest-mobile-stats/b#mobile-only

Appendix A

Student Evaluation of Wimba (N = 66)

About the Author

Dr. Martin Sivula is a professor of research and statistics in the College of Management at Johnson & Wales University, Providence Campus. He is a former director of academic computing and is a certified data educator (CDE). In the early 1990s he served as a quantitative researcher and data analyst for the Public Education Fund study of the Providence (Rhode Island) Public Schools, which produced the Providence Report on Blueprint for Education (PROBE) Study (1991-1995). From 1994 through 2000 he served as a researcher and grant administrator for the Health Education Leadership for Providence (HELP), an organization to implement technology applications into the Providence Public Schools. Since 1999 he has served as a PT3 grant evaluator for Wheelock College's (Boston, MA) technology implementation and capacity building efforts. He also serves as a Fulbright Scholar Campus Representative. Recent research includes: Sivula, M. W., Hybrid graduate education: Assessing student comfort with technology interventions, Ubiquitous Learning An International Journal 3, 1 (2011), 35-42.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Copyright © 2013 ACM 1535-394X/13/09-2509337 $15.00

DOI: http://dx.doi.org/10.1145/2524588


