An ACM Publication

Learning Efficiency of Video-based Learning

Special Issue: Instructional Technology in the Online Classroom

By Shulong Yan, Emily Baxter / December 2018

TYPE: EMERGING TECHNOLOGIES

What role should video play in online learning? How do we design effective online courses for undergraduate STEM-related majors that will help students learn? These are questions we faced when designing an online materials science course for Penn State University undergraduates. The goal of this article is to investigate students' video-based online learning experience. Based on our qualitative research, we propose a new perspective for looking at video as an instructional tool.

Background

It is increasingly common to deliver online lessons with video, as evidenced by its wide use in teaching subjects including language, information technology, economics, medicine, and mathematics [1]. Video accommodates individuals' learning paces by allowing them to manipulate video functions such as speeding up, stopping, and replaying [2].

While extensive research on the learning effectiveness of teaching with video has been published [3], the learning effectiveness of video-based learning (VBL) as reported in the research is mixed. Results show either significant differences or no difference at all between VBL and other methods of online instruction, such as text-based instruction [3]. Proponents of VBL posit that: 1) VBL increases students' learning effectiveness when visual and audio channels are present; 2) it improves instructor-student interaction when instructors are present; and 3) it motivates students when their learning needs are met [4, 5, 6, 7]. At the same time, opponents of VBL argue that students might not engage with video at the depth they engage with texts and might lose attention more easily.

Further, we find the common use of quantitative methods in VBL research problematic. Quantitative methods may be appropriate for measuring learning effectiveness through statistical analysis of grades or survey responses; however, this type of analysis doesn't provide a comprehensive review of how online learners perceive their use of videos. We posit that while it is important to evaluate the learning effectiveness of VBL, it is also important to expand our understanding of users' experience of using videos in online learning.

Our Journey

The focus of our research was a 200-level undergraduate materials science course. The course is mandatory for several majors within the field of engineering but may also be selected as an elective for other programs. It has been offered in residence since 1993. An online version was designed and offered, using text as the primary lecture delivery method, in the Summer of 2016. As an alternative to face-to-face instruction, students were assigned to read online content written by the instructor and chapters from a traditional textbook.

Following the initial text-based online offering, students were anonymously surveyed to provide feedback on the course design. Overwhelmingly, students reported dissatisfaction with having to read too much text-based course material. They requested video delivery as an alternative. Comments included the following, in the students' own words:

"…lectures should have been recorded and posted online so students would have a fair shot at learning the material instead of just giving a repeat of the book on a website with some added videos."

"I think that it would be much more helpful to include videos of the lectures than to have the online readings."

"I would suggest to keep away from the online readings. …It would be much better to have lecture videos posted. They are easier to follow and harder to lose interest in."

"I think it is better to record yourself giving a lecture rather than giving a lot of stuff to read, because sometimes it is hard to understand from the online reading or the book."

After reading these comments, we spent several months creating a VBL version of the course comprising the same material used in the text-based version. We settled upon a series of screencast lectures delivered across twelve lessons. According to Chen and Wu [4], screencast videos might be the most cost-effective option, but they also come with higher cognitive load and lower learning performance compared to either lecture capture or picture-in-picture video approaches. Even though we were aware of these tradeoffs, our decision to use this style was based on several factors, including available time, cost of video production, level of video expertise, availability of technology, and precedents set in other areas of the institution.

To reduce the cognitive load, we chunked the traditional one-hour resident lecture content into shorter screencast videos which did not exceed 15 minutes in length. Ninety-two total screencast videos were created, spanning 11 lessons and 2 case studies. The average video length was 10 minutes. This series of screencast videos completely replaced the online text. These choices were in line with best practices for video production, including recommendations for chunking longer content into shorter videos, adding visual interest with graphics, and providing students with accurate captions and transcripts [8]. The VBL version of the course was first offered in Summer of 2017, along with a separate section of the original text-based version.

VBL Efficacy Research Study

We were able to offer two sections of the course at the same time: one with the original text-based design, and one with the new video-based design. Students enrolled in the course without knowing whether they were registering for the video-based or text-based section. We enrolled a total of 15 students (6 in text-based, 9 in video-based) in Summer 2017. At the end of the semester, we compared students' grades in the text-based offering and the video-based offering. Student grades were based on four exams, as well as weekly homework and quiz completion. The results showed no significant difference in students' performance between these two conditions. This result corroborated others' findings [3, 9].

In the fall of 2017, we enrolled 64 students (21 in text-based, 43 in video-based). We again compared students' grades in the two conditions and found no statistically significant difference in performance. We also employed qualitative methods to gain a more thorough understanding of how students used the videos in online learning.
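A between-section comparison of this kind is typically run as a two-sample t-test. The sketch below implements Welch's t-test using only the Python standard library; the grade lists are hypothetical placeholders (the study's raw grade data are not published here), so it illustrates the method rather than reproducing our analysis.

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t-statistic and approximate degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    se2 = va / na + vb / nb  # squared standard error of the mean difference
    t = (mean(sample_a) - mean(sample_b)) / se2 ** 0.5
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical final grades for two sections (not the study's actual data)
text_based = [78, 85, 91, 74, 88, 82, 79, 90, 86, 77]
video_based = [81, 84, 88, 76, 90, 83, 80, 87, 85, 79]

t, df = welch_t(text_based, video_based)
# |t| well below the ~2.0 critical value suggests no significant
# difference at the 0.05 level for samples of this size
print(f"t = {t:.3f}, df = {df:.1f}")
```

In practice one would compute an exact p-value from the t-distribution (e.g. with `scipy.stats.ttest_ind(equal_var=False)`); the pure-stdlib version above keeps the arithmetic visible.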

We collected data using the following procedures: 1) we invited all students (in both sections) to complete an anonymous survey regarding their attitudes and study habits related to the course; 2) we identified volunteers from the survey who agreed to participate in a follow-up interview; and 3) we conducted four individual interviews and one focus group interview. All of the interview and focus group volunteers were from the VBL section; unfortunately, we did not have any volunteers from the text-based section. Due to technical issues, we were able to preserve three of the four 30-minute individual interview videos, along with the one-hour focus group video. While this number of interviews did not provide extensive data, it did corroborate some survey data and provide greater insight into the thought processes of several students. Some of the themes identified in these interview responses informed subsequent surveys and interview scripts used in Spring 2018. We decided not to include the focus group interview in the data analysis for this paper, as it mainly focused on students' perceptions of instructor and TA engagement, as well as their reported use of external support, which are less related to students' experience of VBL.

We used V-Note to analyze the themes that emerged from the individual interviews. We decided to analyze the videos without transcribing them because we were able to extract meaningful themes by watching the videos directly. An initial list of codes was generated through the first individual video analysis. As the analysis continued, we added more labels to account for individual differences in responses. After coding all of the individual interviews, we returned to the initial videos with the latest code sheet to check whether the new labels applied to those videos.

Figure 1. Using V-Note to code interviews.




Finally, we exported students' access log files from the Canvas Learning Management System (LMS) using Tampermonkey. The access log files provided evidence of students' access to all online materials in the LMS. The log files included student name and access frequency for different materials, including attachments, external resource links, quizzes, videos, lecture slides, and homework solutions.

Figure 2. LMS access log file.



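As an illustration, aggregating such an export might look like the sketch below. The CSV column names and sample rows are assumptions made for illustration; the actual Tampermonkey export format may differ.

```python
import csv
import io

# Hypothetical export: one row per (student, resource) pair
LOG = """student,resource_type,resource,access_count
Alice,video,lesson01_video1,3
Alice,quiz,lesson01_quiz,1
Bob,video,lesson01_video1,1
Bob,attachment,review_sheet.pdf,2
"""

def access_by_type(log_text):
    """Total access counts per student, broken down by resource type."""
    totals = {}
    for row in csv.DictReader(io.StringIO(log_text)):
        student = totals.setdefault(row["student"], {})
        rtype = row["resource_type"]
        student[rtype] = student.get(rtype, 0) + int(row["access_count"])
    return totals

print(access_by_type(LOG))
# {'Alice': {'video': 3, 'quiz': 1}, 'Bob': {'video': 1, 'attachment': 2}}
```

Grouping by resource type this way is what lets the analysis below separate video access from access to slides, quizzes, and other materials.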

Findings

Four themes emerged from analyzing three types of data: surveys, interviews, and access log files. The themes that captured students' experience of VBL were: varying video use frequency; balancing time and level of understanding to achieve learning efficiency; manipulating video functions to achieve learning efficiency; and relying on multiple tools.

Varying video use frequency. From the LMS access logs, we found that around 20 percent of students watched less than 50 percent of the videos, around 20 percent of students watched around 50 to 75 percent of the videos, and about 60 percent of students watched more than 85 percent of the videos. Interestingly, the lesson distributions from students who watched less than 50 percent of the videos varied. Some students watched more videos in the first couple of lessons but stopped watching thereafter. Other students watched more videos at the end of several lessons. Some students picked several lessons to watch.
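The grouping above can be sketched as a simple classifier over per-student viewing totals. The 92-video total comes from the course description earlier; the thresholds mirror the reported groups, and since no 75 to 85 percent group is reported above, that range is simply left unclassified here.

```python
def viewing_bucket(videos_watched, total_videos=92):
    """Bucket a student by the share of lecture videos they accessed."""
    share = videos_watched / total_videos
    if share < 0.50:
        return "light (<50%)"
    if share <= 0.75:
        return "moderate (50-75%)"
    if share > 0.85:
        return "heavy (>85%)"
    return "unclassified (75-85%)"

# viewing_bucket(30) → "light (<50%)"; viewing_bucket(90) → "heavy (>85%)"
```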

In the interviews, students explained that they adapted their viewing strategies based on their exam scores. The exam scores provided feedback for students to evaluate the effectiveness of their learning strategies. When their score was different than they desired, they often chose to change their strategies accordingly. The strategies varied from increasing to decreasing video viewing frequency based on the problems they identified.

Balancing time and level of understanding. We also found that time consumption and desired level of understanding were the two main factors that students considered when determining how to use video lectures for learning. Students worked to minimize their learning time while at the same time optimize their learning outcome. However, they used different methods to achieve this balance.

One student who didn't watch any of the videos explained that he made that choice because he could spend less time reading the textbook than he would have spent watching the videos to achieve the same level of understanding. Another student who started watching the videos after the first exam explained that the videos helped him understand the content better and also allowed him to distribute his learning time over the week. Finally, another student described how she preferred to learn by watching the videos twice (at different speeds) before the exam. In the week in which the material was initially presented, she watched all of the lecture videos at normal speed. One week before the exam, she re-watched the videos at higher speed to refresh her memory.

Playing with video functions. We also found that students who relied heavily on the videos used multiple video navigation functions such as speeding up, dragging to specific locations in the video, stopping, or replaying. About 85 percent of the students from the survey reported that they sped up the videos at least once. About 83 percent of students reported that they replayed the videos at least once. When asked why, they reported that they replayed the videos to clarify material they didn't understand or to re-watch material they initially missed.

Relying on multiple tools. In the interviews, we found that students who primarily relied on the lecture videos as a learning tool also reported using a variety of supplemental materials including the lecture slides, exam review sheets, textbook, and external resources (websites or YouTube videos). Even though the videos were used as the primary source for course content, students used these supplemental materials if none of the video resources provided in the course clarified their confusion. Students also reported that they used these supplemental resources to create their own self-assessments to monitor learning prior to the exams. Finally, students also stated that they used the lecture slides and exam review sheets to focus their viewing when watching the videos.

Discussion

Our analysis revealed that students attempted to be efficient learners by minimizing time spent learning while simultaneously optimizing their learning outcome. The interplay between these two goals guided students' decision making when self-regulating their online learning.

The analysis also suggests that there may be differences between the desired design goal and users' actual use of any given tool. These differences arise from students' idiosyncratic methods for achieving learning efficiency in this learning environment. The beauty of this online learning environment was that students were able to access different types of tools that allowed them to regulate their own learning.

These findings have two implications for video-based online learning design. First, both video and text are effective online learning tools, as evidenced by the lack of a statistically significant difference in students' grades between the two sections in both the summer and fall semesters. Second, and perhaps more importantly, online learning should account for individual learner differences and support autonomy instead of imposing a single method for learning. In our case, even though the lecture videos were designed as the primary learning tool, students' actual use of the videos told a different story. Based on our student feedback and grade data, we do not feel that there is a clear "winner" when comparing VBL and text-based online content. There is a place for both. We suggest that the decision of which to use should be made based upon a variety of factors, including availability of resources and instructor preference, as long as the course is pedagogically sound.

Based on these findings, we propose that we should not focus on designing one "perfect" learning tool. Instead, we should pay attention to the affordances of multiple tools we create and consider how they may be used separately and/or together by students to create their own diverse learning paths.

References

[1] Giannakos, M. N. Exploring the video-based learning research: A review of the literature. British Journal of Educational Technology 44, 6 (2013), E191-E195.

[2] Brecht, H. D. Learning from online video lectures. Journal of Information Technology Education: Innovations in Practice 11 (2012), 227-250.

[3] Yousef, A. M. F., et al. Video-based learning: A critical analysis of the research published in 2003-2013 and future visions. In eLmL 2014, The Sixth International Conference on Mobile, Hybrid, and On-line Learning. IARIA XPS Press, 2014, 112-119.

[4] Chen, C-M., and Wu, C-H. Effects of different video lecture types on sustained attention, emotion, cognitive load, and learning performance. Computers & Education 80 (2015), 108-121.

[5] Borup, J., et al. Improving online social presence through asynchronous video. Internet and Higher Education 15, 3 (2012), 195-203.

[6] Hsieh, P-A. J., and Cho, V. Comparing e-Learning tools' success: The case of instructor-student interactive vs. self-paced tools. Computers & Education 57, 3 (2011), 2025-2038.

[7] Nikopoulou-Smyrni, P., and Nikopoulos, C. Evaluating the impact of video-based versus traditional lectures on student learning. International Research Journals 1, 8 (2010), 304-311.

[8] Beheshti, M., et al. Characteristics of instructional videos. World Journal on Educational Technology: Current Issues 10, 1 (2018), 061-069.

[9] Lang, G. The relative efficacy of video and text tutorials in online computing education. Information Systems Education Journal 14, 5 (2016), 34-41.

About the Authors

Shulong Yan is a Ph.D. student in the Learning, Design, and Technology program and also the research assistant in the John A. Dutton e-Education Institute in the College of Earth and Mineral Sciences at The Pennsylvania State University. Her areas of expertise include: interaction analysis; design and evaluation of human-centered challenges for elementary school children; and evaluation of learning design to support online learning.

Emily Baxter is a learning designer in the John A. Dutton e-Education Institute in the College of Earth and Mineral Sciences at The Pennsylvania State University. In her role, Emily collaborates with college faculty to design online courses for a wide range of learners, including Penn State undergraduates and returning adults in the fields of geography, meteorology, and materials science.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

© ACM 2018. 1535-394X/18/12-3236701 $15.00


