
The Importance of Setting the Stage
Maximizing the benefits of peer review of teaching

By Glenn Johnson, James L. Rosenberger, Mosuk Chow / October 2014

TYPE: MANAGEMENT

The motivation for engaging in peer review of teaching vacillates between two purposes: providing opportunities to improve teaching, and evaluating teaching performance. The first is formative in nature, where improvement of teaching is the goal. The second is a summative report used for accountability purposes. Both purposes have the potential to create productive opportunities for faculty to focus on quality teaching and learning, but when peer review is conducted in isolation it is less likely to create a culture of learning and continuous quality improvement. For this reason it is important to find ways to maximize the benefits of peer review of teaching so that valuable opportunities for peer collaboration and curricular coordination are supported, regardless of which purpose is targeted.

To this end, the graduate online program in applied statistics asks faculty in the Department of Statistics at Penn State University to participate in a range of information-gathering activities, including mid-semester student surveys, instructor satisfaction surveys, concept mapping, curricular reviews, and faculty development events. While each of these mechanisms has its own singular purpose, as a whole they can also be leveraged to involve online faculty members in thinking critically about the quality of online instruction, i.e., to "set the stage" for peer review of teaching. Once set, our peer review of teaching process promotes synergistic thinking around teaching and learning and creates a rich collaborative context for ongoing faculty discussions about the quality of online learning and our overall program goals.

What Do We Want to Accomplish?

At Penn State, our online program includes instructors who are tenure-track faculty, for whom teaching evaluations are important. It also includes instructors who do not have a research directive and for whom focusing attention on teaching strategies that are effective with online learners is a higher priority. As with other higher-education faculty, our instructors rarely have certifications related to teaching. While "teaching as they were taught" might work for face-to-face instruction on campus, these same strategies are not always successful in an online learning environment.

Both purposes for peer review of teaching, formative and summative, are important for helping to ensure quality teaching. However, whatever arguments are made for using peer review of teaching to support either purpose (or both), the way this exercise is typically implemented diminishes the success of either goal. Bernstein's report highlights this lamentable circumstance: "Historically, the peer review of teaching has typically meant only that a faculty member has watched a colleague lead a class. An observation of an hour in the life of the course yields a letter describing the performance of a teacher, and that letter becomes the peer-review component of the professor's teaching evaluation" [1]. If faculty are required to spend time visiting classrooms and reviewing class interaction, how can we make better use of this time?

Hutchings provides a comprehensive review of the research and practices of peer review of teaching [2]. Though her article was published nearly two decades ago, her characterization of the challenges that all teachers face has not changed: "...teaching is exceedingly difficult to learn alone. Recent research on what good teachers know and can do indicates that teaching is a highly complex situated activity which is learned largely and necessarily through experience" [2]. Hutchings' recommendations should also be heeded today, especially as they relate to online learning environments: "What is needed, rather, is thoughtful, ongoing discussion by relevant groups of peers, attempting to say ever more clearly what constitutes good teaching, putting forward powerful images and examples of it, and working toward definitions that can guide and be guided by concrete acts of judgment" [2]. This is our challenge: How do we establish a culture of inquiry that centers on the practices of effective online teaching?

"Setting Up" the Peer Review of Teaching

Initially, our online program replicated the peer review of teaching process in place in many of the academic departments at Penn State, but we found this "snapshot" perspective lacking; a more comprehensive assessment of teaching was needed. In looking for ways to improve the process, we found it made sense to engage our faculty in a range of information-gathering activities related to what they might see in an online course. What follows are short descriptions of the various information-gathering activities faculty engage in prior to participating in our peer review of teaching process. In essence, these activities "set up" the peer review of teaching.

Defining Overall Program Goals. While each of our online courses has articulated learning goals and objectives for what students should know and be able to do, the foundation of our program evaluation is the establishment of overall program goals. These broad goals involve higher-level learning objectives that span the student's experience throughout an entire program of study. For example, in our Master of Applied Statistics program, "drawing conclusions from data in the presence of uncertainty," "developing confidence in applying statistical analysis," and "being a proficient user of statistical software" are examples of overarching goals that are supported throughout the program. While course objectives are often assessed in exams, these overall program goals are often overlooked; however, they provide cohesiveness to the program and are considered in conversations between colleagues involved in peer review of teaching.

Mid-Semester Surveys. At the halfway point in the semester, a mid-semester survey is deployed to collect information and feedback from students about their course experience. Information about the number of courses students are taking, how many of those are online, and how much time they are spending on the course materials is an important way for our instructors to understand their students. Questions about students' perceptions of the timeliness and quality of communication alert instructors to how responsive they are in meeting student needs. Input from students about what they would change and what they would not change about the course is also collected. While the primary purpose of these surveys is to provide instructors with information so that they might better support student learning as the course progresses, the results are also combined across all of the courses in the program. In this way instructors can compare their course results with program averages. This cross-program tally gives the administration and faculty a broader view of the characteristics of the students. For example, if students are spending much more time on one course than is typical in the program, or if students report that response time or the quality of feedback in one course is lower than in the program overall, these areas would foster discussion in the peer review process.
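
As a concrete illustration of this cross-program tally, the short sketch below compares per-course workload against the program average. It is only a sketch of the idea, not our actual survey pipeline: the file name, the column names (course, hours_per_week), and the 25 percent threshold are all hypothetical.

# Illustrative sketch only: the file name, column names, and the 25-percent
# threshold are hypothetical, not taken from the actual survey system.
import pandas as pd

# Each row is one student's mid-semester survey response.
responses = pd.read_csv("midsemester_survey.csv")  # hypothetical export file

# Program-wide average of self-reported weekly hours, then per-course averages.
program_avg = responses["hours_per_week"].mean()
course_avg = responses.groupby("course")["hours_per_week"].mean()

# Flag courses whose reported workload runs well above the program average;
# these become natural discussion points in the peer review process.
flagged = course_avg[course_avg > 1.25 * program_avg]
print(f"Program average: {program_avg:.1f} hours/week")
print("Courses reporting notably higher workload:")
print(flagged.round(1))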

Instructor Satisfaction Survey. Every fall an anonymous survey is distributed to all instructors asking for feedback about departmental, instructional design, and university support structures, as well as about their overall satisfaction with teaching online. Instructors provide feedback regarding the support they feel they need or resources that may be lacking. Is teaching online developing their professional skills? Are they satisfied with the learning their students are able to achieve? Are they able to use the available online tools to adapt or enhance the online materials to support the learning needs of their students? These are all questions that get instructors to think about what it is they do as teachers and the impact they have on student learning. As with all of our surveys, submissions are tallied across the program so that everyone can review the perceived status of support and the comments for improving the program.

Rating Teaching Effectiveness. Every semester the university's central administration distributes a survey to all courses to gather student input on teaching effectiveness. Students are asked to provide feedback on the teaching and learning process from their perspective as learners. These results are accessible only to the instructor and to the Director of Online Programs, who requests a copy. While there is no mechanism in place to combine these comments across the program to provide an additional comparison point, the results are another opportunity for instructors to gain feedback about their teaching and may serve as a talking point during instructor review meetings with the Director of Online Programs or the department head.

There are several other, less formal ways that we generate discussion among our instructors about the online curriculum and teaching online.

Creating a Concept Map. In our experience, developing a concept map is a relatively easy way to give the program another opportunity to talk about what we are doing and why. We printed the syllabi from all of the courses in the Master of Applied Statistics program, cut out the topics listed as being covered in each course, scattered these small slips of paper on a table, and then asked faculty and instructors: "How do all of these go together?" After sifting through the slips of paper, groups of related topics began to appear, which were then transferred to concept-mapping software so that all of the linkages could be made explicit. It is then easier to ask questions such as: "What nodes of the concept map are covered in the course you teach?" "Where is there overlap?" "Are there any gaps in what should be covered?"
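
For readers who want a concrete picture of the overlap and gap questions, the sketch below asks them of a small, made-up map. It is purely illustrative: the course labels and topics are hypothetical, and the networkx library merely stands in for dedicated concept-mapping software.

# Illustrative sketch only: course labels and topics are hypothetical, and
# networkx stands in for whatever concept-mapping software is used.
import networkx as nx

# Bipartite graph: one set of nodes for courses, the other for syllabus topics.
coverage = {
    "Course A": ["hypothesis testing", "simple linear regression"],
    "Course B": ["simple linear regression", "multiple regression"],
    "Course C": ["ANOVA", "experimental design"],
}
G = nx.Graph()
for course, topics in coverage.items():
    for topic in topics:
        G.add_edge(course, topic)

all_topics = {t for ts in coverage.values() for t in ts}

# Overlap: topics that appear on more than one course's syllabus.
overlap = sorted(t for t in all_topics if G.degree(t) > 1)
print("Covered in multiple courses:", overlap)

# Gaps: topics the program intends to cover that no course currently claims.
intended = all_topics | {"logistic regression"}
print("Not covered by any course:", sorted(intended - all_topics))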

Establishing a Process for Course and Curriculum Review. Each semester a few courses are selected for a comprehensive review that examines all of the instructional aspects of the course: the course content, the approaches to content presentation, the plan for interaction with students, and assessment. An instructional designer facilitates regular discussion throughout the semester between faculty members who are currently teaching the course, or have recently taught it, and are assigned as reviewers. Ideally, one of these faculty members might also be teaching the same course in residence. At some point in the semester additional faculty are assigned as "outside reviewers." They look for gaps in the curriculum, such as topics that were supposed to be covered in prerequisite courses or topics that extend into the courses that follow. Recommendations about where these topics exist or should exist in the curriculum, and the degree to which these links are supported, are added to the recommendations for the course. The recommendations from the course reviewers and the outside reviewers are then used to generate a list of development and revision tasks for the course content. Whenever possible, these tasks are implemented immediately; however, when the amount of revision required is beyond the normal expectation of teaching faculty, supplemental contracts for faculty development of these items are offered. In addition, a list of concepts, methods, and/or procedures is developed to articulate what a student must know or be able to do as a result of participating successfully in the course. These concept lists then become the basis of coverage for the problems given on the final exam. Involving a group of faculty and current instructors in generating these course changes lends coherence and promotes consistency throughout the curriculum.

Engagement in Faculty Development Events. Throughout the academic year we either host or participate in events related to teaching and learning online, beyond our online instructor meetings. These events cover everything from reviews of new technologies related to teaching and learning to topics of mutual concern related to the quality of teaching and learning. The three most recent sessions were "Cheating in the Online Classroom," "Pushing Students to go Beyond without Shooting Yourself in the Foot!," and "Getting to Know Piazza." In general, our professional development focus over the past year has been specifically on interaction between students and instructors. We then examined instructor performance on this topic using our mid-semester survey (n = 186). Results indicated that 81 percent of students responded "agree" or "strongly agree" to the statement: "The e-mail replies I have received from my instructor are of high quality and helpful." Seventy-one percent of students responded "agree" or "strongly agree" to the statement: "The activity and engagement between students and between students and the instructor in the course's online discussion forum(s) are of high quality and helpful."

Having engaged in thoughtful and ongoing discussions with their peers up to this point, instructors find the peer review of teaching activity much more meaningful and worthwhile. It is against this backdrop of activities that we situate peer review of teaching. In essence, we have set the stage by building on an array of information-gathering activities that keep instructors continually involved in discussions about the quality of online instruction and overall program goals. Furthermore, within our peer review of teaching process there are assignment decisions and support structures put into place to guide the process. Whereas in a resident course you might watch a lesson, look at a syllabus, and write an assessment of what you saw, in an online classroom you can see so much more, from the content presented to the day-to-day interactions that occur online. Helping faculty make sense of and evaluate the massive volume of information available in online learning environments is a necessary part of the peer review of teaching process, and a big part of the reason why "setting the stage" in this manner is important.

Peer Review of Teaching: A Guided Process

The process begins with the Director of Online Programs assigning each instructor a course to review. The director uses a number of strategies in making these assignments. Instructors may be assigned to review another section of the same course they teach, a course they have not taught but are interested in teaching, or a course that precedes or follows the one they teach. In each of these scenarios, the idea is to motivate instructors to see connections between courses. Similarly, a weak instructor may be assigned to review a strong instructor, a good communicator may be assigned to review a good user of technologies, or resident faculty may be invited to review an online course. All of these arrangements benefit the individual instructors being reviewed, the reviewers, and the program in general.

Before a reviewer goes online to review the assigned course, the instructor fills out and shares a short form that provides the reviewer with information about the course. This includes the locations where materials and interactions may be found as well as a place for instructors to add notes to highlight items that may be of interest to the reviewer or areas where they are looking for suggestions.

With input from the instructor, the reviewer begins a note-taking stage using a version of the "Peer Review Guide for Online Teaching at Penn State" document that has been adapted to meet the needs of the Department of Statistics' online courses. This document guides instructors through the review process. Initially, there is a short checklist targeting "Penn State's Quality Standards for Online Learning," where the reviewer checks for evidence of consistent navigation, a proper syllabus, and other items that help flag basic deficiencies within an online course. Beyond this, the bulk of the guide is structured around the Seven Principles for Good Practice in Undergraduate Education [3], a summary of 50 years of higher-education research on good teaching and learning practices. Several of these principles have been topics of discussion in the activities leading up to the peer review and during the review itself, and within the document a page is devoted to each principle so the reviewer can add comments where appropriate. To assist the reviewer, the document also suggests locations within the course where materials related to each principle may be found.

Upon completion of this note-taking stage, the reviewer and the instructor find a time to discuss the instructor's involvement in the course, using the notes from the "Peer Review Guide for Online Teaching at Penn State" as the basis for the conversation. Most agree this conversation is the most beneficial stage of the process, because it is at this time that topics of interest emerge and are discussed between the paired colleagues. Additional questions are asked, suggestions are made, and advice is given. The peer reviewer summarizes this conversation in a single document which, along with a copy of the completed "Peer Review Guide for Online Teaching at Penn State" form, is submitted to the Director of Online Programs.

A Simple Conclusion

Peer review is one component of a comprehensive program to help faculty enhance their understanding of teaching and learning. Our goal is to ensure that peer review of teaching does not take place in isolation. We have been intentional in making sure there are ample opportunities for instructors to be immersed in activities and discussions that focus on the quality of online teaching. Through this increased level of engagement within a richer context for thinking about teaching and learning online, it is our hope that the peer review of teaching experience builds on and contributes to the continued professional development of our instructional faculty and the quality of our online program before, during, and after the review itself. In coming full circle, our peer review process has provided our instructors with a rich source of topics for future investigation. The ideas and strategies generated in our peer review conversations often serve as the grounds for implementing changes in what we do.

References

[1] Bernstein, D. The review and evaluation of the intellectual work of teaching. Change (March/April 2008), 48-51.

[2] Hutchings, P. The peer review of teaching: Progress, issues and prospects. Innovative Higher Education 20, 4 (1996).

[3] Chickering, A. and Gamson, Z. Seven principles for good practice in undergraduate education. American Association for Higher Education Bulletin 39, 7 (1987).

Resources

Peer Review of Online Teaching, Resources for Online Courses website, Department of Statistics, Penn State University.

Course & Curriculum Review, Resources for Online Courses website, Department of Statistics, Penn State University.

Penn State Quality Assurance e-Learning Design Standards, WebLearning @ Penn State website, Penn State University.

About the Authors

Glenn Johnson is an instructional designer in the Department of Statistics. He has been involved in the development and support of online learning efforts at the university, college and department levels at Penn State since 1995. While primarily involved in the transformation of course materials to meet the needs of students learning at a distance, he values his role leading faculty in re-examining their thinking about teaching and learning.

James L. Rosenberger is Professor of Statistics, Director of Online Programs, and Director of the Statistical Consulting Center in the Department of Statistics. While department head he initiated the development of courses for online programs, and he has been involved in developing online learning since 2002. He has developed and taught online courses in addition to the statistics courses he teaches on campus.

Mosuk Chow is a Senior Research Associate and Associate Professor in the Department of Statistics at Penn State and the Director of the Master of Applied Statistics Program for both the resident and online programs. Mosuk developed and taught the department's first online course in 2002. She has taught courses online, along with in-residence courses, for over 10 years and is heavily involved in the curriculum and student advising of the Master of Applied Statistics program. In addition to an interest in statistical education, her research interests include biostatistics, sampling methods, and statistical decision theory.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Copyright © 2014 ACM 1535-394X/14/10-2673801 $15.00


