An ACM Publication

What Do Training Awards Tell Us?

By William Ellet / May 2010

TYPE: OPINION

People around the globe love ratings, from "Car of the Year—Slovakia" and "Los Grandes De Nuevo Mexico Music Awards" to Cook's Illustrated's ratings of catsup.

Corporate trainers are no exception. We pay attention to awards contests for training products and often make choices based on them. The contests can serve a valuable function. They help make sense of the sprawling training industry, which is awash in products but has few well-established authorities to guide buyers' choices.

Contests are also popular with companies that run them. The reason is simple: they make money.

Spurned Award
Take a recent contest to identify the most influential individuals in training. Anyone could vote on the web, and the controls were so lax that companies could stuff the electronic ballot box for their CEO.

It wasn't a surprise that some of the individuals who made the list worked for companies that are paying sponsors of the organization running the contest. An award is an effective way to ensure a sponsor's continued loyalty.

I am not saying the people who won weren't worthy. I'm saying the contest wasn't worthy of them. There was no transparency, and the results had the appearance of a conflict of interest. In fact, one of the more prominent individuals the contest recognized refused the award.

Annual Product Awards
The corporate training industry has a variety of product award contests and events, many focused on e-learning. Understandably, people tend to pay attention to the results, not the process that led to the results. When you do take a close look at the evaluation process, you encounter red flags that make you wonder how credible the competitions are.

Vendors usually pay a fee for each product they submit. Because award competitions attract attention, vendors feel pressure to participate. To enter multiple products, a single vendor can easily run up a tab of thousands of dollars. With many vendors participating, the entry fees alone can make the contests lucrative for the sponsoring company. You may notice that contests tend to have quite a few categories. These may correspond to well-defined uses of technology, but they also invite more submissions and thus more money for the contest vendor.

It's an irresistible business model; there's only one catch. If vendors pay a significant sum of money to enter, they tend to expect something in return. The contest vendor has to walk a fine line between keeping one set of customers (vendors) happy and not alienating the other set (training professionals). Contests don't disclose the number of submissions (a trade secret?), so you can't determine how selective the awards are. It does seem, though, that contests are generous with the number of products they recognize. That may be the implicit quid pro quo between contest vendors and product vendors.

To have a profitable contest, you have to keep overhead low. The biggest potential cost is judging. If you are going to run a competition with many entrants, you're going to need a large number of judges. The preferred solution is volunteer judges who do the work for free.

There is no perfect solution, but the downside of volunteers is that you have no idea how competent they are or how uniform their application of the contest criteria is. Those uncertainties plus annual turnover in judges raise the question of how comparable the contest results are year to year.

In addition, the evaluation criteria can be very broad because they have to cover all the different types of products in a contest. Judges are given wide discretion in interpreting the criteria, so inevitably you're going to get differences in what they think the standards mean. That would be less of a problem if the decisions were explained. I haven't seen a training industry contest in which the judges explain in any detail why they chose a product for an award.

And I have yet to see a critical word about winners. Every technology product involves tradeoffs and compromises that suit some users but not others. You don't get that kind of information from contests.

Another judging model is for the contest vendor to use in-house people. This could be an excellent solution, assuming that employees have the requisite expertise. It also seems a better bet to ensure year-to-year consistency. You're going to pay the employees anyway, so there's only an opportunity cost to worry about.

One well-known competition uses this model. The red flag is that the company does consulting for companies in the industry, and there's no Chinese wall between the consultants and the contest judges. They are the same people. In a recent year, six out of 10 award-winning companies also appeared on a list of consulting clients of the contest vendor. In another category, over 50 percent of the winners were clients. Note: the publicly available client list was partial; it's possible that some of the winning companies are clients but aren't listed.

A third approach is to dispense with judges altogether and run the contest as a poll. In this approach, the contest sponsor has no role in deciding the winners. The weakness, of course, is how representative voters are of the population relevant to the product being voted on. There are all kinds of bias possible in a pool of self-selecting voters. Is it more likely that people who love a product will take the trouble to vote than people who aren't crazy about it? Do you know if voters have much hands-on experience with a product—or none at all?

What Is Actually Assessed
Have you ever wondered how contest judges evaluate a large number of complex e-learning products in a short period of time?

In the case of one leading contest, the answer is that judges don't evaluate the actual product. The contest rules state that vendors submitting products can furnish a recorded demo, document, PowerPoint deck, or a combination to judges. In other words, the contest is actually a marketing competition!

To be fair, judges can test drive trial versions of an application. Still, trial versions often disable key features. It also doesn't seem quite fair to have some judges using trial software for one product while other judges view a slick Flash demo of another. For this reason, if I were a vendor I wouldn't take a chance on evaluators drawing their own conclusions—I would submit a marketing presentation that furnishes conclusions for them.

You also have to wonder how many unpaid judges with busy schedules take the marketing pitches as the path of least resistance. I'm not criticizing judges who do that because I can see myself taking that option.

Worthwhile Results?
I don't begrudge people making a buck, and many product contests make a lot of them, year in and year out. I'm not charging contest operators with running rigged competitions.

I've heard vendors complain in private that they feel compelled to participate in some high-profile awards programs even though they're queasy about the process and are well aware of the conflicts of interest. I sympathize with them, but they do have the option of not entering contests they're uneasy about. If they didn't enter one year, would they begin to lose sales? I'm skeptical that the contests have such power in the market.

Some training product contests are poorly designed and allow potential conflicts of interest and bias. The training profession deserves better. Training contests provide information that might be valuable or might not be. The trouble is that you and I have no way of determining the value. Transparency is a well-traveled word these days, but it is what training competitions need.

Models exist that could make them more useful. CINE, a nonprofit film and TV trade group with a professional staff, has been giving out awards for more than 50 years. It relies on volunteer panels that look at specific types of productions, and there is a two-level process to guard against bias and poor judgment.

The key to change is the training profession. As long as we're uncritical about awards, the status quo will prevail because contest vendors have no incentive to change. If we don't demand transparency, then we probably aren't going to get it.



Comments

  • Tue, 05 Oct 2010
    Post by Craig Howard

    I can agree with everything you said. I wrote an article in the same vein as this one (it's a couple of issues back at this point), and still I would say there's more to this question than either of us, or the two of us combined, can even allude to. Schools are relatively new human creations, but learning is not. We're shortsighted if we think schools are just about learning. We can try to work out the economics of it all, but my hunch is that there will inevitably be a number of factors we missed. My own personal soapbox is to tell educators that the learning we're not paying enough attention to is the personal change we get from complex interactions with people we respect. We've got plenty of evidence that virtual schools aren't going to take market share from F2F ones; the value of being immersed in a community of strong forward thinkers will never be replicated. The greatest teachers in my own education simply could not be fully comprehended without their gestures, nuance, and the experience of seeing them think. Online students don't get these experiences, and as a teacher, I can feel it when I talk to them. We don't have enough techniques to reach the online community with real, transformative learning. The popular media has not ignored it either. But are prospective students watching news like this:

    http://www.pbs.org/wgbh/pages/frontline/collegeinc/view/

    I am not so sure. The Long Tail is surely here, as you said, but I would just hope it will get a little better at the ends, though I don't see it happening any time soon.

    Nice article. Craig

  • Sun, 03 Oct 2010
    Post by Tom Worthington

    In "Degrees, Distance, and Dollars" Marina Krakovsky provides a down to earth analysis of experience with distance education for universities, relevant not just to the USA. The one major flaw in the analysis is that it does not address education outside the USA. Online courses open up the possibility of US students studying at universities outside the USA, or with overseas based staff of US universities. Already I have had a North American student (admittedly from Canada, not USA), in my Green ICT course offered by the Australian National University: http://cs.anu.edu.au/Student/comp7310/

    Australia and New Zealand have high-quality universities, education standards compatible with those of the USA, a similar cultural outlook, and good English skills. They also make sophisticated use of online education, being the origin of the Moodle Learning Management System and the Mahara e-Portfolio system. They could therefore credibly offer online courses to students in the USA. Perhaps the one thing stopping this is that Australian universities are busy addressing the market for Chinese and Indian students.

    India has shown how it can provide first basic telephone call-centre services and then increasingly sophisticated accounting, software, and engineering services online. It would seem a small step to providing courses to English-speaking students around the world.

  • Tue, 11 May 2010
    Post by Simon Egenfeldt-Nielsen

    Indeed, awards often seem to be nothing more than marketing. In Denmark there is a similar situation, where you can document whether the judges have even logged into the website. I think that, at the least, it qualifies you for getting your fee back. But legally I think it would actually mean that the entire competition is void and does not live up to the laws governing competitions, which are usually quite heavily regulated in most countries.

    In general, one should focus on awards where there is no economic interest. These seem to be few and far between in the e-learning space compared to other areas. It would be interesting to make a list of "green-lighted" awards that have no financial incentive (or at least a very limited one).

  • Sun, 09 May 2010
    Post by Dale Ludwig

    Thanks for writing "What Do Training Awards Tell Us?" You struck a chord because none of the judges for the USDLA awards this year logged in to look at the learners' portal I nominated. Fool me once...