
Desperately seeking software simulations

By Michael Feldstein / May 2004


When it comes to industry buzz, where there's fire, there's smoke. Lots of it. This is certainly the case in enterprise application simulation training, a perennially hot topic that seems even hotter than usual this year. At a time when there are more simulation authoring tools available than ever, it's good to ask the question, "Just what do we want our simulations—and simulation authoring tools—to do?" As usual, we can get some pretty good answers to this question simply by focusing on the learning task or, as I put it in a previous e-Learn article, on e-learning usability.

Let's look at three learning use cases and see what we can deduce. After that, I'll discuss the various types of authoring tools that are currently available, make note of how well each type matches up against the requirements of the use cases, and suggest some other general guidelines for evaluating various authoring products.

Use Case #1: Awareness Training

Bernice the stock broker just got a slick new Metatrade Xtreme workstation installed in her office. As busy as she is taking care of her clients, her manager still wants her to take just a little time out to learn the new features of her workstation, since some of those features are intended to help her serve her clients better. Bernice grudgingly agrees to sit down for a short lesson. She spends about five minutes clicking through it, identifies the two or three features that are important to her, and then puts it away.

The key word here is "grudgingly." Very few (sane) people get excited about software application training. They want you to give them the minimum that they need as quickly as possible and then get out of their way. So what is the minimum that Bernice needs? If I were writing a learning objective for this lesson, it would go something like this:

By the end of this lesson, you will be able to remember key features of the workstation that will help you serve your clients better.

There are two main elements to this learning objective: the features and the benefits of those features. This is know-that training rather than know-how training. In fact, it's closer to marketing than it is to traditional training. As such, Bernice doesn't need to remember every feature that you highlight in the lesson in order for the intervention to be successful. If Bernice thinks to herself, while working on a client's portfolio, "Hey, wasn't there a risk analysis tab in the new system that might work here? Maybe I should check that out," then you have accomplished your goal. Likewise, if Bernice finishes the lesson saying to herself, "Wow, that Monte Carlo analysis tool looks powerful; I should learn how to use it," then you win.

The role of awareness training is to spark the person's interest. If you are a fan of adult learning theory, then you might think about it as the learning object at the beginning of a module or lesson that gives adult learners the motivation to learn the application. It provides the rationale. In fact, it's important to think about it this way (i.e., as a learning object at the beginning of a course) even if Bernice won't typically take any other training for quite a while (if ever) after taking this introduction. If you make the mistake of treating your awareness training as a full-fledged course module rather than as an introduction, then you will be tempted to wrap it in all the trappings that you would usually put in a course module, like its own introduction and motivational piece, reinforcement, tests, and so on. Bernice does not need any of that, and she probably won't sit still through it, either.

Use Case #2: Full Training Course

Having completed the awareness training, Bernice moves on to spend an hour on the full training course. During that time, she learns how to use the new features in detail, practices using those new features in a simulated environment, and takes a test to see how well she has learned them.

No matter what software package you are teaching, and no matter who your audience is, there is a standard three-step best practice for application training. For me, the most intuitive way to understand and remember this practice is to imagine teaching my father how to use email:

  • Step 1: "Dad, watch me send an email. I'm going to walk through it step by step and describe the important details of what I'm doing as I do it."
  • Step 2: "OK, dad, now you take the mouse. I'm going to tell you what to do, step by step."
  • Step 3: "You're doing great, dad. I'm going to go to the kitchen now and pour myself a cup of coffee. By the time I get back, I want to see that you have sent me an email."

So the three steps could be roughly described as "show me how," "let me try," and "test me." Your online training should follow the same basic pattern:

  • Step 1: Animation. The learner watches the task through a screen cam-like interface. Each step is described in text (and possibly audio, though I personally find audio to be much less useful than you would think) as the task is being performed.
  • Step 2: Simulation. The learner sees the text prompt for each step and then is expected to perform the action. Ideally, when the learner performs the step incorrectly, the training should allow the learner to get a hint, try again, or have the software show (through animation) the correct way to perform the step. (A rough sketch of this branching logic appears just after this list.)
  • Step 3: Test. This is basically the simulation without the step descriptions. There are different approaches to giving feedback on incorrect answers and, frankly, I haven't seen any usability testing that shows one approach to be superior to the others. Be sure to review the testing feedback options when you evaluate your simulation authoring packages and decide which approach is most consistent with your instructional philosophy and learning use cases.
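
To make the hint-retry-show branching in Step 2 concrete, here is a minimal sketch of how a single simulation step might respond to an incorrect attempt. It is illustrative only; the names (SimStep, run_step, and so on) are invented, and no particular authoring package works exactly this way:

    # Illustrative sketch only -- names and structure are invented, not any vendor's API.
    # Models the "let me try" behavior: prompt the learner, check the action,
    # then offer a hint, another try, or a demonstration when the action is wrong.

    from dataclasses import dataclass

    @dataclass
    class SimStep:
        prompt: str            # text shown to the learner, e.g. "Click the Risk Analysis tab"
        expected_action: str   # the action the author recorded as correct
        hint: str              # shown after the first incorrect attempt

    def run_step(step, get_learner_action, show_animation, max_tries=3):
        """Return True if the learner eventually performs the step correctly."""
        print(step.prompt)
        for attempt in range(1, max_tries + 1):
            action = get_learner_action()
            if action == step.expected_action:
                return True
            if attempt == 1:
                print("Hint: " + step.hint)   # first miss: give a hint and let the learner retry
            else:
                show_animation(step)          # later misses: show the correct way, as in Step 1
        return False

Strip the prompts out of the same structure and you have the Step 3 test; that is essentially the only difference between the two.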

Notice that our use case assumes that Bernice is learning an upgrade to a software package that she already knows, which means that she is mostly learning how to use the software to perform business tasks that she already understands rather than learning new business tasks altogether. In this scenario, you'll want to keep your expository prose to a minimum. There's nothing more frustrating than sitting through a long explanation (especially an audio explanation that you can't skim) of how and why to do something that you already know well. However, even in a situation in which some of your learners are new to the tasks (for example, Brad, the new broker, fresh out of college), I strongly recommend that you author your courses for the experienced user first and then add separate learning modules that give more rationale and more business task details to your new learners whenever you can. This will enable you to give attention to both audiences while also allowing you to re-use as much of your work as possible. Naturally, there are exceptions to every rule; if the tasks that you are teaching are complex and absolutely require a lot of narrative at each step, then you may have to build your courses twice.

Use Case #3: Performance Support

Bernice is using her Metatrade Xtreme workstation to perform her first Monte Carlo portfolio analysis for one of her clients. She has managed to get most of the way through, but she's stuck on one step. She can't seem to remember what to do in order to adjust an important variable. She pulls up the help files and watches an animation. About two thirds of the way through, she sees the step that she's stuck on. Having been reminded of what to do, she closes the animation and finishes her analysis.

Good performance support tools are often an afterthought and considered highly secondary to training (especially certification training). This is unfortunate because, in the majority of cases, it is exactly the opposite of the best order of priorities. Given that most users are now reasonably sophisticated at figuring out new software applications, much of the time all they need is that one little hint about a tricky step (even if they haven't taken prior training). Why train and certify them on the 90 percent of the material that they can figure out on their own, wasting their time and productivity in the process, when a well-placed hint on the job may be all that they need?

A good performance support tool typically consists of two elements. The first looks very similar to the "show me how" animation in the full training. The goal is to enable the performer to watch the steps of the task being performed until that one frustrating missing step shows up, as it did for Bernice in the example above. These animations should be as clean and clutter-free as possible. Again, avoid the instructional designer's reflex of wrapping them in learning objectives, self-tests, reinforcements, and so on. Your goal is to enable your user to perform a business task correctly as quickly as possible. Learning is a secondary outcome and training is nowhere on the map.

The second element of a good performance support tool is a printable job aid. There are two reasons why this format is appropriate. First, some people work better with paper. Second, a document is often the most compact way of enabling users to find information that can't be picked up strictly from watching somebody click around a screen, e.g., proper formatting of particular input fields, trouble spots to look out for, and so on. While it is possible to cram these details into an animation, it is generally not efficient from the perspective of the performer's time. Even here, though, try to keep the document to as few pages as possible. Resist the temptation to write a treatise. This is a quick reference document that should take less than a minute to scan, use, and put away in most cases.

Three Types of Authoring Tools

Given the range of instructional tasks outlined above, it would be nice if there were one authoring tool that could accomplish all of them well. Unfortunately, if such a tool exists, I haven't found it yet. I'm not even sure that it could exist in principle. Engineering, including software engineering, is all about trade-offs. As we'll see, there are three basic approaches to software simulation authoring tools; each has its strengths and weaknesses, and they are, for the most part, mutually exclusive.

The first type of authoring tool is the one that will feel most familiar to instructional designers who have used traditional authoring packages in the Hypercard tradition, such as Authorware and Toolbook. In these packages, designers import screen grabs, assemble them along timelines, and add interactions by hand. Of the three types of tools, this first kind offers the most power and flexibility in terms of customizing the user experience. It also takes far more time to author content than the other two types do. The best of these packages have lots of clever shortcuts, like the ability to import scripts from Microsoft Word and the ability to re-use interface widgets. Some of the standard authoring packages, such as Click2Learn's Toolbook Instructor 2004, have added capabilities designed to optimize simulation authoring. However, there are also some excellent dedicated packages, such as Knowledge Quest's Xpert Author, that are designed exclusively for production of software simulations. This type of tool may be appropriate for your needs if you are focused primarily on traditional training such as that outlined in the second use case, have a robust and experienced instructional design team, want particularly elaborate training with lots of exposition and custom-designed exercises built in, and don't have especially tight deadlines to produce the material. Expect to pay anywhere from $1,500 to $3,000 per seat for these packages.

The second class of authoring tool is in the tradition of Lotus ScreenCam. These tools essentially capture what the author sees on her video monitor while she performs the business task, either by making an actual movie or by taking screen grabs every time the user clicks or the screen changes and then animating things like cursor movements in between. The granddaddy of this class of tools is probably TechSmith's Camtasia, but recently strong competitors such as Macromedia's Robodemo and Qarbon's ViewletBuilder have appeared on the scene. With this class of tool, the authoring process is very quick. The author records the business task in real time, adds text and other elements, and publishes. There is no lengthy process of writing scripts or capturing and importing screens one at a time, as you have with the Hypercard-like tools. The price you pay for this speed is power and flexibility. These tools are generally weak on creating rich simulations or other customized interactions relative to the other two tool types. They are best used for creating awareness training. Prices for these packages generally run from $200 to $700 per seat.
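
To give a rough sense of the capture approach (and only a rough sense; this is not how any particular product is implemented), the following Python sketch records a screen grab and the cursor position every time the author clicks the mouse. The pynput and pyautogui libraries are used here purely as stand-ins for a product's own capture machinery:

    # Rough illustration of ScreenCam-style capture: grab the screen at each click.
    # pynput and pyautogui are stand-ins for illustration, not what any vendor uses.

    import time
    from pynput import mouse     # global mouse-event listener
    import pyautogui             # simple screen-capture call

    frames = []   # (timestamp, cursor position, screenshot) recorded at each click

    def on_click(x, y, button, pressed):
        if pressed:
            shot = pyautogui.screenshot()              # capture the whole screen
            frames.append((time.time(), (x, y), shot))

    # Record until the author stops the listener; the frames can then be stitched
    # into an animation, with cursor movement interpolated between the grabs.
    with mouse.Listener(on_click=on_click) as listener:
        listener.join()

Notice that everything the tool knows is a picture and a pixel location, which is why descriptive text and richer interactivity have to be added by hand afterward.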

The third class of authoring tool is relatively new on the scene. Although these tools look a lot like the ScreenCam-style tools at first glance, they are far more sophisticated under the hood. Instead of just taking pictures of the video monitor, they understand the actual interactions between the user and the software. So, for example, they understand that the user has "clicked" on a "menu" and are able to grab the names of the menu and the item that was clicked on, enabling the authoring software to automatically generate sentences like "Click on 'cut' in the 'edit' menu." This is a huge time saver. Furthermore, these tools are able to automatically generate "let me try it" and "test me" simulations, and even documentation, in addition to the ScreenCam-like animations. You can typically generate rough drafts of all of these assets in minutes and final versions within hours rather than the days or weeks it can take with more traditional authoring packages. The main price that you pay for this efficiency is, well, money. These packages are much more expensive than the other types of tools, ranging from $5,000 to $15,000 per seat, with the upper half of the price range being more typical. Secondarily, they are usually somewhat less flexible than the Hypercard-style tools in terms of generating highly customized lessons, although I frankly find them to be powerful and flexible enough that I have only occasionally missed the extra authoring power. They work very well for producing both traditional training and performance support. The packages that are currently on the market tend to produce somewhat cluttered and bandwidth-heavy output for use as awareness training, but since this limitation isn't really inherent in the authoring model so much as it is just a consequence of where the vendors have put most of their energy, it may be something that the vendors can fix in future versions. The most mature packages in this class of authoring tool are Epiance's epiplex and Knowledge Products' OnDemand, though new entrants to this field seem to be coming on fast.
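
The difference between this approach and the ScreenCam-style tools is easiest to see in a toy example. In the sketch below (the event format is invented for illustration), the tool knows which control was acted on rather than just where the mouse was, so it can generate the prompt text automatically:

    # Toy example of the object-aware approach: captured UI events carry the names
    # of the controls involved, so instruction text can be generated automatically.
    # The event fields here are invented for the example.

    ui_events = [
        {"action": "select", "control_type": "menu item", "control_name": "Cut", "parent": "Edit"},
        {"action": "click",  "control_type": "button",    "control_name": "OK",  "parent": "Print dialog"},
    ]

    def describe(event):
        if event["control_type"] == "menu item":
            return "Click on '%s' in the '%s' menu." % (event["control_name"], event["parent"])
        return "Click the '%s' %s in the %s." % (event["control_name"], event["control_type"], event["parent"])

    for e in ui_events:
        print(describe(e))
    # Click on 'Cut' in the 'Edit' menu.
    # Click the 'OK' button in the Print dialog.

The same event log can then drive the animation, the "let me try it" simulation, the test, and the printed job aid, which is where the time savings come from.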

Naturally, these are just broad generalizations. Each vendor learns from its competitors (across all three tool categories) and tries to shore up competitive weaknesses as quickly as possible. You'd be wise to take these descriptions as a starting point rather than gospel truth.

Evaluating Your Options

There are many factors involved in choosing the right simulation tools for your needs—so many, in fact, that it is usually very important to have somebody on your team who has lots of simulation development experience to help with the evaluation. However, here are some general guidelines to help you get started:

  • Think about how important authoring speed is to you. If you need to produce large volumes of training or performance support for enterprise systems that are still being updated, and you have relatively few people to do it, then money spent on the higher-priced authoring tools will show a very strong return on investment very quickly. On the other hand, authors who only have to deal with a small handful of course modules and have at least six or eight weeks to do significant design and development will probably do just fine with the Hypercard-style tools.
  • Evaluate each tool's underlying instructional design philosophy. More often than not, course-building tools save time by making assumptions about instructional design goals and desired outputs. Since it's never productive to fight against your own tools, be sure to pick ones that have default outputs that are as close as possible to what you would want your finished product to look like. Ask each vendor what his or her company believes are the most important features in quality simulation training. If the salesperson doesn't have a good answer, then walk away.
  • Don't get hung up on pre-conceived fine details of how the training should look or work. We all have design habits or preferences. However, many of these are exactly that—habits or preferences. Before you assume that a simulation package's navigation menu or feedback presentation method is inferior just because it is different from what you've done before, test it out with users who don't have the strongly developed opinions and habits that you have. If they don't care, then you probably shouldn't either.
  • Evaluate each tool's underlying workflow philosophy. In addition to making assumptions about how simulation training should work, toolmakers also make assumptions about how simulation training authors should do their work. Again, ask the vendors how their customers usually organize their design and production teams. Try to find tools that best match your team's configuration, size, and levels of expertise.
  • Think about the trade-off between plug-in requirements and bandwidth requirements. Simulations tend to come in three different flavors in terms of run-time requirements: those that will run in any modern browser using dynamic HTML, those that require standard browser add-ins such as Java or Flash, and those that require proprietary plug-ins. In general, the dynamic HTML simulations require much more bandwidth—possibly ten or fifteen times more—than the plug-in-based simulations. In theory, the Java or Flash solutions should be a good compromise. In practice, however, the implementations I have seen to date tend to be weak on interactivity. Again, the field of products is changing rapidly, so don't take this rule-of-thumb for granted. The point is that you should talk to your IT staff and ask them whether keeping bandwidth down or avoiding plug-ins is more important in your environment.
  • Above all, focus on your use cases. A feature in an authoring package is only useful if it offers a benefit that you actually need for your particular situation. Try to develop a fairly clear vision of how your users will and will not interact with the training in their work environments before you evaluate the tools. Tell yourself short stories about your users, as I have done in the first half of this article. Then try to keep an open mind about what specific implementations might meet those interaction requirements. Be prepared to tell your vendor the "what" and to let the vendor tell you the "how."

