An ACM Publication

Don't Just Teach to the Metrics

By Michael Feldstein / September 2002


The world of corporate training can pretty much be divided into two categories: "teaching to know" and "teaching to do." "Teaching to know" means that you care about whether the student has internalized (or, in many cases, memorized) certain information. The classic case is the certification course. The government, your industry, or your company may mandate that workers demonstrate a certain minimum level of knowledge about critical issues such as the handling of hazardous materials, the kinds of advice a stockbroker can and cannot give a client, or the rules surrounding patient privacy at a hospital. In these cases, the certification exam is what counts. The course exists only to help learners pass the exam. "Teaching to do," on the other hand, means that you care about how the student performs on the job. You really don't care what she knows, per se. If giving the person a calculator will get the job done faster and better than training her to multiply six-digit numbers in her head, then you give that person a calculator.

Of course, this is an over-simplification. Most organizations giving certification tests really do care whether their employees act on their knowledge after passing the test. Likewise, often the best way to improve performance is to improve a person's knowledge. It's really just a question of which on-the-job performances you care about. In a certification environment, the test itself is an on-the-job performance.

Why does this matter? It matters because the kind of performance you are trying to improve has a dramatic impact on the way you design your courses. In fact, course design should start with the performances that you are trying to improve. Every instructional design decision follows from those goals. Unfortunately, instructional designers often lose sight of this first principle of design because the on-the-job performance they would like to improve is difficult to measure. Test results, on the other hand, are very easy to measure. So we often end up focusing on the test. Sometimes we certify competencies because we have given up on measuring performance. And because we know our success will be measured by the test scores, we teach to the tests. What gets measured gets managed.

Sound Familiar?
This is exactly the same trap that we fall into in our public education system. It's impossible to come up with a standardized measure of, say, how well kids write essays. But we want to have some measure of how our schools are doing, so we give the kids multiple-choice questions instead. That's OK; it's important to try to measure our progress even if we know that our measures aren't very good. The trouble comes when we degrade our teaching practices and goals to improve scores on the imperfect metrics rather than simply acknowledging that we can't do a good job of measuring what's important. Teachers spend more and more time teaching to the standardized tests, which leaves them less and less time to teach how to write a good essay. Likewise, instructional designers spend more and more energy designing to their Level 2 evaluations, which leaves them less and less time to focus on actually improving performance.

In a live classroom, students (who are almost always smarter than we think they are, no matter what their age) can sometimes find ways to get around our shortcomings as instructors. They will ask questions. They will want to know how the content is useful—why they need to know it. In self-paced e-learning, though, there is nobody to ask. (Also, in a live classroom in a training environment, trainers often give up even on Level 2 evaluation, counting seat time as good enough to get credit. Paradoxically, this frees them up to be more responsive to the students' questions about how the course content might be useful in the real world.) As a result, it's critical that instructional designers of e-learning not fall into the school trap and teach only to the metrics.

So this month I'm going back to basics. My latest article raises some common-sense questions about instructional goals that we should all ask ourselves every time we design a course. It also shows some of the more common ways that the specific answers to those questions can affect course design, success metrics, and even how you set up access to the courses. I hope the article can serve as a reminder of some of the more essential foundations of what we strive to do in this profession.
