
Helping learners as they construct knowledge: How can instructors leverage research findings?

By Viswa K. Viswanathan / March 2020

TYPE: HIGHER EDUCATION

Education scholars today accept that learners do not merely receive knowledge. Instead, learners construct knowledge by assimilating new information into their current knowledge base [1, 2, 3, 4, 5]. As such, we can delineate three main steps in the “learning-chain”: instruction design, instruction delivery, and knowledge-construction (KC), with the last phase occurring primarily inside learners’ brains, and hence the hardest one to observe or control. This paper draws on research findings in cognitive and educational psychology and suggests concrete ways for instructors to enhance their support of the KC phase. Although instructors can implement the crux of these suggestions using existing software products, we are in the process of developing software to make the process more seamless and effective.

We use the term “meaningful learning” [2] to refer to learning in which the learner acquires new knowledge and is able to apply it in novel contexts.

This paper addresses the following:

  • Research has found that mastery of factual knowledge precedes meaningful learning [4, 5, 6]. This paper provides concrete guidelines for practitioners to help students master foundational factual information by tightly integrating spaced repetition [9, 10, 13] into the fabric of a course.
  • We now have conclusive evidence that practice testing and distributed practice are among the most effective strategies for meaningful learning [7, 8, 11]. This paper suggests methods to adapt spaced repetition to the realm of meaningful learning.
  • Considering the value of practice testing for meaningful learning, this paper suggests a new type of question that harnesses the benefits of both multiple-choice and constructed response questions and can potentially be used in contexts involving large numbers of students without the associated grading burden.

Helping Students Master Important Factual Information

To achieve meaningful learning, learners need first to firmly establish important facts in long-term memory (via rote learning), and be able to recall them effortlessly [4, 6]. This kind of “automaticity” [14]—performing tasks without conscious attention to the low-level details—requires rote memory and, though much-maligned, is crucial for meaningful learning. Closely related is “chunking,” where people store large amounts of related information as a single chunk in long-term memory and can recall the whole chunk easily on cue.

Having important information in long-term memory allows learners to devote more of their working memories [5, 15, 16] and other cognitive mechanisms to internalizing new concepts.

To support learners in mastering factual information, instructors can do the following:

  • For each learning unit, explicitly enumerate the important facts that students need to commit to long-term memory, and
  • actively support students in the process of memorizing these facts.

Explicitly enumerating facts to be memorized. Today, instructors routinely specify learning outcomes for course modules. In a similar vein, they can also enumerate the essential factual information from each module that students need to commit to long-term memory.

Actively supporting students in the process of memorizing facts. Research has established that retrieval strengthens retention and recall [8, 13, 17]. Figure 1 shows the Forgetting Curve [17]. In the figure, some learning occurs at time 0. The leftmost curve shows the retention over time if the learning is not subsequently retrieved. The next curve shows the decay if the memory is retrieved after one day, and so on. The rate of decay drops with each successive retrieval.

Figure 1. The Forgetting Curve [31].


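To make the interplay of decay and retrieval concrete, the following minimal Python sketch uses the exponential forgetting model R = exp(-t/S), which is commonly fitted to Ebbinghaus-style data [17]. The assumption that each successful retrieval doubles the memory's stability S is purely illustrative, not a measured value.

    import math

    def retention(t_days, stability):
        # Exponential forgetting model: R = exp(-t / S).
        return math.exp(-t_days / stability)

    # Illustrative assumption only: each successful retrieval doubles
    # the stability S, flattening the decay curve (cf. Figure 1).
    stability = 1.0
    for retrieval in range(4):
        print(f"after retrieval {retrieval}: retention at day 7 = "
              f"{retention(7, stability):.2f}")
        stability *= 2.0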

Bahrick [12] also found better retention when retrievals are spaced further apart in time than when they are close together or evenly spaced—that is, distributed retrieval is better than massed practice. Dunlosky et al.’s meta-analysis [7] evaluated ten study strategies and found distributed practice to be one of the two with the highest impact. Figure 2, based on data from Bahrick [12] (also plotted in Dunlosky et al. [7]), shows the power of distributed practice: in the long run, repetitions spaced 30 days apart were more effective than those at shorter intervals, even though the opposite held in the short run.

Figure 2. Impact of distributed practice, based on data from Bahrick [12].



Applying the above findings to commit hundreds of facts to long-term memory (as might be needed across all the courses that a student takes) can be very complicated. Manually scheduling retrievals for that many facts would consume so much effort as to be counter-productive.

Wozniak’s SuperMemo algorithm [9] addresses this problem. SuperMemo was the first spaced repetition algorithm and formed the basis of the SuperMemo software-based memorization system. Spaced repetition systems help us retain things that we have already learned.

Spaced repetition software such as SuperMemo allows learners to create decks of questions, with an answer attached to each question. Armed with a deck, the software manages the scheduling of question presentation. It presents a question, the learner tries to recall the correct answer and then checks it against the stored answer—akin to turning over a flashcard. The learner then rates how difficult the retrieval was. Based on this rating, the system adjusts the question’s current e-factor (easiness factor) for that specific user and uses the e-factor to automatically determine when the question will be shown next. This lifts the burden of scheduling from learners and frees them to focus only on retrieval.

Scheduling is guided by the principle that there is little to be gained by presenting a question that the user can easily answer. To strengthen long-term retention, it is better to present a question when the learner is likely to have some difficulty with retrieval—that is, when the learner is close to forgetting it.

The SuperMemo algorithm is quite involved and addresses other finer details [26]. Wozniak has developed several versions of the SuperMemo algorithm, with the latest, SM-17, released in 2016. Wozniak [19] provides guidelines on how to codify knowledge for use with a spaced repetition system.
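For concreteness, here is a minimal Python sketch of a single review step of the original SM-2 algorithm as published [9, 18]; later versions such as SM-17 are considerably more elaborate, so treat this as a simplified illustration rather than production code.

    def sm2_update(e_factor, repetition, interval, quality):
        # One SM-2 review step. quality is the learner's self-rating of
        # the retrieval, from 0 (complete blackout) to 5 (perfect recall).
        if quality < 3:
            # Failed recall: restart the repetition sequence for this
            # card; SM-2 leaves the e-factor unchanged in this case.
            return e_factor, 0, 1

        # Harder retrievals lower the easiness factor, easier ones raise it.
        e_factor += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
        e_factor = max(e_factor, 1.3)  # SM-2's floor on easiness

        repetition += 1
        if repetition == 1:
            interval = 1        # days until the next presentation
        elif repetition == 2:
            interval = 6
        else:
            interval = round(interval * e_factor)
        return e_factor, repetition, interval

    # Example: a new card (default e-factor 2.5) answered with quality 4.
    print(sm2_update(2.5, 0, 0, 4))  # -> (2.5, 1, 1): show again tomorrow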

Spaced repetition has been shown to be very effective [12, 20, 21, 22], and mature software implementations are available [30]. We suggest three alternative means for instructors to integrate spaced repetition into their courses:

  • Instructors create and provide fact decks for each module of a course and students only use these.
  • Students build the decks. Instructors only highlight important facts to be memorized and let students extract and encode knowledge. Pedagogically, this option would be ideal since the process of encoding knowledge itself powerfully aids retention. However, there is a risk that students might not create effective decks.
  • Instructors provide decks and students can also add their own items to the decks.

Under all of the above options, to ensure that students retain facts from the entire course, the decks should accumulate across course modules: the deck for module 2 builds on top of the deck for module 1, and so on. This way, a student reviewing in the middle or at the end of the semester is still drilled on important facts from earlier course modules.
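A minimal sketch of this accumulation (the deck contents are invented placeholders):

    # Hypothetical module decks; each entry is a (question, answer) pair.
    module_decks = {
        1: [("Which OSI layer is the transport layer?", "Layer 4")],
        2: [("Which transport protocol is connection-oriented?", "TCP")],
    }

    def cumulative_deck(decks, up_to_module):
        # The deck drilled in module n is the union of decks 1..n, so
        # facts from earlier modules keep resurfacing all semester.
        deck = []
        for module in sorted(decks):
            if module <= up_to_module:
                deck.extend(decks[module])
        return deck

    print(cumulative_deck(module_decks, 2))  # facts from modules 1 and 2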

Implementing this will require course designers and students to be trained in knowledge-extraction and encoding for spaced repetition.

Practitioners can adapt standalone spaced-repetition software to implement the above recommendations. However, a custom implementation that is tightly integrated into a Learning Management System (LMS) might be beneficial and we are working on this.

Adapting Spaced Repetition for Meaningful Learning

In meaningful learning, learners build neural connections between several related units of knowledge and create neural structures that enable them to retrieve learned information through multiple pathways [7, 23]. Spaced repetition can be useful in promoting meaningful learning too, by reinforcing the neural pathways related to conceptual understanding.

The meta-analysis by Dunlosky et al. [7] showed that practice testing and distributed practice are the most effective learning strategies among the ten strategies that they evaluated. They use the term “practice testing” to refer both to low-stakes or no-stakes formative assessments conducted by the instructor and to any self-evaluation that students might engage in.

We are poor judges of when we are learning and when we are not [4, 24]. Objective evaluations of learning are therefore especially valuable, and practice testing can help with this aspect of learning as well.

Although spaced repetition is generally used only for rote memorization, instructors can adapt it for meaningful learning. Subrahmanyam [20] has shown a way to adapt the SM-2 algorithm [18] for this purpose.

In using spaced repetition for fact memorization, we are interested only in whether the learner correctly recalled the answer. However, raw recall bereft of understanding would not be relevant for meaningful learning, as the learner will be unable to apply this knowledge in novel contexts.

To adapt spaced repetition for meaningful learning, instead of testing mere fact retrieval, we pose questions that require reasoning to answer. However, simply putting such questions on cards might not help with meaningful learning: learners need to reason only on the first trial, and on subsequent trials they can fall back to answering from recall rather than reasoning. We propose the following way to address this issue.

As mentioned earlier, spaced-repetition systems use an individual fact as the unit of scheduling. To use spaced-repetition to aid meaningful learning, we recommend the use of “concept” as the unit of scheduling and attach multiple questions to each concept.

For a particular concept, the system keeps track of the questions the student was shown in prior attempts. When testing the same concept again, the system chooses a different question attached to that concept, so the student must answer by reasoning afresh rather than from recall. Only after cycling through all available questions for a given concept does the system start repeating questions for that concept. Clearly, the more questions we have for a concept, the more effective the approach will be.

Off-the-shelf spaced repetition software systems do not currently support this feature. However, a slight modification will suffice: allow an individual card to carry many alternative questions, and have the system automatically cycle through them whenever the card is scheduled. We are in the process of developing such a system.
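The following Python sketch conveys the modification we have in mind; the class and its interface are our own illustration, not the API of any existing product. Scheduling itself (for example, via the SM-2 step sketched earlier) would operate on the concept as a whole.

    import random

    class ConceptCard:
        # A spaced-repetition "card" whose unit is a concept carrying
        # several interchangeable questions, so each review demands
        # fresh reasoning rather than recall of a memorized answer.
        def __init__(self, concept, questions):
            self.concept = concept
            self.questions = list(questions)
            self.unseen = list(questions)

        def next_question(self):
            # Cycle through every attached question before repeating any.
            if not self.unseen:
                self.unseen = list(self.questions)
            question = random.choice(self.unseen)
            self.unseen.remove(question)
            return question

    card = ConceptCard("Bayes' theorem", [
        "A screening test is 99% sensitive and 95% specific ...",
        "An email contains the word 'lottery' ...",
        "Two factories produce defective parts at different rates ...",
    ])
    for _ in range(4):
        print(card.next_question())  # a repeat occurs only on trial 4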

Enhancing the Utility of Multiple Choice Questions

Practice testing is among the most effective strategies to assist meaningful learning. This finding supports providing many practice questions—alas, with the concomitant grading effort. To ease the burden of grading, instructors routinely deploy automatically graded question types such as multiple-choice questions (MCQ). Well-designed MCQ can test for conceptual understanding but have the shortcoming that learners need only recognize correct answers rather than generate them. Stanger-Hall [25] taught the same course to two sections of students: one was assessed with MCQ alone, the other with both MCQ and constructed-response questions (CRQ). The latter format was correlated with significantly more cognitively active study behaviors and with significantly better performance on the cumulative final examination.

To harness the convenience of MCQ while also reaping the benefits of CRQ, this paper suggests a new type of question that we call Constructive MCQ or CMCQ, which combines a CRQ with a corresponding MCQ, both of which test the same conceptual understanding. Answering a CMCQ works in two phases. In the first phase, the student answers a normal CRQ, thereby eliciting generative thought. After the student submits the answer to the CRQ, the second phase administers the corresponding MCQ. It might appear that this is nothing more than having a CRQ followed by a normal MCQ, but it can be much more, as we show below. We suggest a few alternatives for the deployment of CMCQ that avoid the need to manually grade the CRQ portion:

  • Do not grade the student’s answer to the first (CRQ) phase and only grade the MCQ portion. This runs the risk that students might not take the CRQ phase seriously.
  • Utilize natural language processing and machine learning to determine whether the student provided a serious answer (whether correct or not) to the CRQ phase, and provide appropriate feedback (a sketch follows this list). This requires a critical mass of sample “serious” and “frivolous” answers, which can be primed when a question is first created and fine-tuned as more answers become available. The score for the MCQ portion can be suitably adjusted based on this determination.
  • Once the above reaches a certain level of accuracy, we could consider the option of showing the MCQ portion only after the system determines that a student has submitted a serious answer to the CRQ.
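As a rough sketch of the second option above, a lightweight text classifier can flag frivolous CRQ answers. Everything here (the tiny training set, the choice of TF-IDF features with logistic regression) is an illustrative assumption, not a tested design; a real deployment would need a much larger pool of labeled answers.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented placeholder answers; 1 = serious attempt, 0 = frivolous.
    answers = [
        "The variance rises because the model starts fitting the noise",
        "Overfitting: training error keeps falling but test error grows",
        "idk",
        "asdf asdf asdf",
    ]
    labels = [1, 1, 0, 0]

    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(answers, labels)

    # Classify a new CRQ response before releasing the MCQ phase.
    print(classifier.predict(["test error grows while training error shrinks"]))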

Note on the Role of Learning Management Systems

By themselves, learning management systems (LMS) [27, 28, 29] offer nothing new from a pedagogical perspective. However, they can support all the steps in the learning-chain. The recommendations in this paper can all be implemented using standalone components, but in our view, integrating these tightly into the context of an LMS would be beneficial and we are in the process of developing such software. Tight integration with LMS can help in the following ways:

  • We could integrate spaced-repetition into the assessment features of the LMS so that creation and maintenance of the decks become seamless for course developers. For example, just tagging a question can add it to a deck.
  • The accumulation of the decks across course modules can occur automatically. In fact, we can even integrate the decks from all the courses that a student might be taking concurrently and assist the student in interleaved practice, which also has been shown to be very effective [7].
  • The results of a student’s spaced repetition practice can be made a part of the overall performance dashboard that the LMS provides.
  • The new question type CMCQ that we propose can also be treated as just another question type within the LMS.

Conclusion

Research findings indicate pedagogical approaches that strongly facilitate meaningful learning. Based on these, this paper has suggested for instructors:

  • Ways to utilize spaced repetition to help students master factual information that is a pre-requisite for meaningful learning.
  • A method to adapt spaced repetition for meaningful learning as well.
  • A new type of question called CMCQ that can harness the benefits of both constructed-response and multiple-choice questions.

References

[1] Piaget, J. (1971). The theory of stages in cognitive development.

[2] Ausubel, D. P., Novak, J. D., and Hanesian, H. (1968). Educational psychology: A cognitive view.

[3] Novak, J.D. (2009). Learning, Creating, and Using Knowledge: Concept Maps as Facilitative Tools in Schools and Corporations (2nd ed.). Routledge. ISBN 9780415991858.

[4] Brown, P. C., Roediger, H. L., and McDaniel, M. A. (2014). Make it stick. Harvard University Press.

[5] Willingham, D. T. (2009). Why don't students like school?: A cognitive scientist answers questions about how the mind works and what it means for the classroom. John Wiley & Sons.

[6] Gross, P. R. (2009). Learning Science. American Educator, 35.

[7] Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., and Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4-58.

[8] Roediger III, H. L., and Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological science, 17(3), 249-255.

[9] Wozniak, P. A. (1990). Optimization of learning. Master’s thesis, University of Technology in Poznan.

[10] Rawson, K. A., and Dunlosky, J. (2011). Optimizing schedules of retrieval practice for durable and efficient learning: How much is enough?. Journal of Experimental Psychology: General, 140(3), 283.

[11] Roediger III, H. L., Putnam, A. L., and Smith, M. A. (2011). Ten benefits of testing and their applications to educational practice. In Psychology of learning and motivation (Vol. 55, pp. 1-36). Academic Press.

[12] Bahrick, H. P. (1979). Maintenance of knowledge: Questions about memory we forgot to ask. Journal of Experimental Psychology: General, 108(3), 296.

[13] Kornell, N., Hays, M. J., and Bjork, R. A. (2009). Unsuccessful retrieval attempts enhance subsequent learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(4), 989.

[14] Bargh, J. A. (1994). The four horsemen of automaticity: Awareness, intention, efficiency, and control in social cognition. Handbook of social cognition, 1, 1-40.

[15] Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81-97.

[16] Ericsson, K. A., and Kintsch, W. (1995). Long-term working memory. Psychological Review, 102(2), 211-245.

[17] Murre, J. M., and Dros, J. (2015). Replication and analysis of Ebbinghaus’ forgetting curve. PLoS ONE, 10(7).

[18] SuperMemo, https://www.supermemo.com/english/ol/sm2.htm. Retrieved September 25, 2018.

[19] Wozniak, P. (1999). Effective learning: Twenty rules of formulating knowledge, https://www.supermemo.com/en/articles/20rules. Retrieved September 25, 2018.

[20] Subrahmanyam, S. (2017). Retain: building a concept recommendation system that leverages spaced repetition to improve retention in educational settings. M.S. Dissertation, University of Illinois at Urbana Champaign.

[21] Godwin-Jones, R. (2010). Emerging technologies from memory palaces to spacing algorithms: approaches to second-language vocabulary learning. Language, Learning & Technology, 14(2), 4.

[22] Carpenter, S. K., Cepeda, N. J., Rohrer, D., Kang, S. H., and Pashler, H. (2012). Using spacing to enhance diverse forms of learning: Review of recent research and implications for instruction. Educational Psychology Review, 24(3), 369-378.

[23] Carpenter, S. K. (2009). Cue strength as a moderator of the testing effect: The benefits of elaborative retrieval. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(6), 1563-1569.

[24] Butterfield, B., and Metcalfe, J. (2001). Errors committed with high confidence are hypercorrected. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27(6), 1491.

[25] Stanger-Hall, K. F. (2012). Multiple-choice exams: an obstacle for higher-level thinking in introductory science classes. CBE—Life Sciences Education, 11(3), 294-306.

[26] SuperMemo. General Principles of SuperMemo, https://www.supermemo.com/english/princip.htm. Retrieved November 1, 2018.

[27] Ellis, R. K. (2009). Field guide to learning management systems. ASTD Learning Circuits.

[28] Palmer, E. J., and Devitt, P. G. (2007). Assessment of higher order cognitive skills in undergraduate education: modified essay or multiple choice questions? Research paper. BMC medical education, 7(1), 49.

[29] Paulsen, M. F. (2002). Online Education Systems: Discussion and definition of terms. NKI distance education, 202.

[30] Spaced repetition. Wikipedia. Retrieved June 20, 2019.

[31] Forgetting curve. Wikipedia. Retrieved on June 20, 2019.

Acknowledgments

The author gratefully acknowledges helpful suggestions from the anonymous referees.

About the Author

Dr. Viswa Viswanathan, Associate Professor of Computing and Decision Sciences, is a Stillman School faculty member within the Department of Computing and Decision Sciences at Seton Hall University. He has played an active role in growing the course offerings in business analytics. He has conducted research in several fields including operations research, intelligent tutoring systems and software development. His current research interest is to explore the role that IT and analytics can play in enhancing online learning. He was awarded his Ph.D. in operations research from the Indian Institute of Management, Calcutta, India.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Copyright is held by the owner/author(s). Publication rights licensed to ACM. 1535-394X/2020/03-3369843 $15.00

https://doi.org/10.1145/3369843


