[A] high level of digital interactivity is necessary to enable international civil society to reach political maturity. This level of interactivity will support rich participation and mutual engagement by nascent global citizens, progressively and responsibly transcending traditional national borders, and bringing us closer to the vision of a true global society.
—Hardin Tibbs, "Interactivity and the Open Society"
If we are to believe much of what is being written online these days, the world is on the verge of a utopian age of digital dialogue. Whether the topic under discussion is the classroom, the office, or even democracy itself, there is a sense of optimism in the air that weblogs and other personal Internet publishing tools will lead us to a better world by enabling a massive peer-to-peer online learning network.
It's an appealing vision: If you give the participants enough opportunities to get to know each other and discuss the issues in-depth, then good ideas will naturally tend to emerge. On the surface, it's hard to argue with that vision. Any good teacher knows that class discussions can produce remarkable results. Any good manager knows that good people bring good ideas. (Apple Computer CEO Steve Jobs is purported to once have said, "It does not make sense to hire smart people and then tell them what to do. We hired smart people so that they could tell us what to do.") And anyone who has participated in a successful democratic exercise, whether it's a jury, a town hall meeting, or a board of education meeting, also knows the power that can come from a well-functioning free exchange of ideas. It's easy to believe that communications technologies such as weblogs can amplify this positive dynamic to produce more and better results faster.
But funny things can happen on the way to utopia. In this particular case, we are taking human-to-human communications that have evolved over millions of years and placing them in an entirely new medium of a global electronic network. And as we are beginning to learn, dramatically increasing the number of parts (in our case, people) interacting in a network doesn't necessarily mean that you get much more of the same thing you had with just a few parts. Networks can change things in unexpected ways.
Fortunately for us, a growing number of researchers from a wide range of disciplines are coming to study these "greater-than-the-sum-of-its-parts" phenomena. The community of online learning professionals has an opportunity to participate in this effort and contribute to it. As a small and early step in that direction, this article examines a phenomenon that behavioral economists call an "informational cascade," which is essentially a kind of logical trap that leads groups of perfectly bright, sensible people to participate in illogical "herding" behaviors such as stock-market bubbles and fashion crazes.
In the first section of the article, I describe these cascades and show how they can happen within a learning community to harmful results (even when that community is observing common-sense best practices regarding communications). In the second section, I show how increasing the communications among participants can actually make the informational cascade worse rather than better. In the third section, I describe various situations in which we are likely to encounter informational cascades in everyday life. In the fourth section, I suggest strategies for stopping or preventing cascades. And in the final section, I discuss some further implications of informational cascades for various kinds of formal and informal learning communities.
The Root of the Problem
I'm going to draw on Kathleen Gilroy's recent article here in eLearn to show how common-sense best practices for cultivating online learning communities often do little or nothing to prevent the problem of informational cascades. Ms. Gilroy suggests a three-part strategy to foster a learning community: activating the social network, giving participants weblogs with which to share their developments and progress, and displaying feedback and pattern recognition.
This framework has several virtues for our purposes. First, it is concise and easy to understand. Second, it can apply equally well to formal learning communities such as classes and informal learning communities such as work groups or the citizenry of a government. And finally, all three strategy points are fairly uncontentious and widely accepted best practices. There is nothing particularly unusual or especially flawed about Ms. Gilroy's framework, which is why it is a good example. As we'll see, it fails to prevent informational cascades because the very same conditions that foster networked social learning also foster the cascades themselves.
The easiest way to explain informational cascades is with an example. Imagine that a corporation faces a choice between implementing Project A and Project B. Imagine further that the corporation decides to let each branch manager make an independent decision about which project his or her branch will implement. The corporate leadership makes significant efforts to ensure that the branch managers get to know and respect each other through an off-site kick-off session (thus "activating the social network") and set up a "multi-limbed" weblog so that each branch manager can share developments and progress with the group. The branch managers are required to post their respective decisions on Project A versus Project B on the weblog, and a running count of how many branches have chosen each project is maintained online along with links to the appropriate blog entries (thus "displaying feedback and pattern recognition").
Suppose the first branch manager (whom we'll call Jane) decides to go with Project A. She posts her decision on the blog, along with four or five reasons why she thinks Project A is a good bet. The second branch manager (call him Amit) reads Jane's post. After thinking about her decision and adding in his own, private evaluation of the alternatives, he decides that he agrees with her. He posts to the weblog, linking to Jane's own post:
I agree with Jane (especially her second point). I'm going with Project A too.
The third branch manager, Carlos, reads Amit's and Jane's posts. Carlos, having delayed a bit in making his decision, doesn't have much time to consider the options. He trusts both Jane and Amit, so he decides just to go with their judgment. In a rush, he posts:
I'm going with Project A. See Jane's post for the reasoning.
The problem has started. In an informational cascade, individuals make decisions based on the decisions of others, discounting (or failing to develop or address) their own private information or judgment. Amit added information about Jane's judgment to his own to make a decision. Carlos, however, simply relied on the judgment of the others. Note that this is a perfectly rational decision for Carlos to have made, given his circumstances. Unfortunately, his decision has negative consequences for all of the branch managers who decide after he does. To see the harm that comes of this, we have to examine what happens with the next person in the decision chain.
Sasha happens to believe that Project B is likely to produce better results. But he knows and trusts Jane, Amit, and Carlos. Seeing that all three of them seem to think that Project A is better, Sasha lets the wisdom of three other experienced branch managers overrule his own judgment. Except that, unbeknownst to Sasha, he really only has information about the judgment of two branch managers; Carlos has no real opinion. If Sasha had known this fact, he might have been more inclined to trust his own instincts. Now the next person who follows Sasha will see that four branch managers have decided that Project A is the best when, in fact, only two have so decided.
But what if Project B would have been more profitable than Project A? Sasha might have discovered that fact and shared the information with the group had he not been swayed by the informational cascade. But since he didn't try it, each branch manager that follows what she believes to be the unanimous judgment of the group makes it less likely that somebody else will try Project B and that the group will learn from the new information. It's entirely possible that nobody will ever try Project B, leading the corporation to falsely conclude that Project A is the best choice. Economists call this an "inefficient cascade" because the group never converges on the optimal answer. All the while, the team of branch managers believes that they have been benefiting from the collected wisdom of all participants when, in fact, they have only benefited from the wisdom of the first two. The informational cascade fools them into thinking that they are getting more information than they actually are.
Of course, it doesn't have to happen this way. If Jane had chosen Project B instead of Project A, then the group would have made the right decision. However, that doesn't change the fact that they would still have an informational cascade. In other words, the process of group conversation would have done nothing to test whether either decision was the right one. Interestingly, the shorthand and highly hyperlink-dense nature of blog discourse would seem to increase the likelihood that cascades will occur. It's perfectly normal to link to somebody else's post as a shorthand way of expressing agreement. In fact, it's considered polite to do so as a way of crediting the original poster with the idea. The very same hyperlinking impulse that makes it easy to pass along an idea with a minimum of effort also makes it easy to appear as if I'm agreeing with the post I've referenced when, in fact, I'm just deferring to it.
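Sasha's trap can be made concrete with a little Bayesian arithmetic. The sketch below is my own illustration, not from the article: it treats each manager's genuine judgment as an independent private signal that identifies the better project with some assumed accuracy p (arbitrarily set to 0.7 here), and computes the probability that Project A really is better given the signal counts.

```python
def posterior_prob_a(signals_for_a, signals_for_b, p=0.7):
    """Posterior probability that Project A is the better choice, given
    counts of independent private signals favoring each project.

    Assumes a uniform prior over {A better, B better} and that each
    signal points to the truly better project with probability p. The
    posterior odds then work out to (p / (1 - p)) raised to the power
    (signals_for_a - signals_for_b).
    """
    odds = (p / (1 - p)) ** (signals_for_a - signals_for_b)
    return odds / (1 + odds)

# What Sasha believes: three A judgments against his one B signal.
believed = posterior_prob_a(3, 1)   # ~0.84 in favor of A

# The reality: Carlos merely deferred, so only two A signals are genuine.
actual = posterior_prob_a(2, 1)     # ~0.70 in favor of A
```

Either way, A still looks more likely than B from where Sasha sits, which is why his deference is individually rational. But the cascade has inflated the group's apparent confidence from roughly 0.70 to 0.84, and every manager who follows inherits that inflation.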
It's Worse Than You Think
But how often does something like this happen? Are informational cascades common, or are they just occasional flukes? To answer that question we need to do some more rigorous analysis. Luckily for us, H. Henry Cao and David Hirshleifer have done that analysis in their paper "Conversation, Observational Learning, and Informational Cascades." Using a scenario that closely resembles the one I just described, they conclude that "cascades/herding occurs with probability one." In other words, to the degree of accuracy that their statistical model allows, informational cascades are inevitable in the kind of informed serial decision-making described above. Because human beings weigh the decisions of those who have gone before them when making their own, if several people in a row make the same decision (often as few as two), enough weight eventually accumulates to overwhelm any private reservations of the people who follow.
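The "probability one" result is easy to check by simulation in a stripped-down model of sequential choice. The sketch below is my own simplification, not Cao and Hirshleifer's exact setup: the signal accuracy and the majority-counting decision rule are assumptions made for illustration. Once either option leads by two public choices, no single private signal can tip the majority, so every later decision-maker imitates.

```python
import random

def cascade_frequency(trials=500, n_agents=30, p=0.6, seed=0):
    """Estimate how often sequential public choices lock into a cascade.

    Each agent privately sees a signal that points to the better option
    with probability p, then follows the majority of all earlier public
    choices plus that one signal (ties go to the agent's own signal).
    Once either option leads by two choices, a lone signal can no longer
    tip the majority, so every later agent imitates: a cascade.
    """
    rng = random.Random(seed)
    cascaded = 0
    for _ in range(trials):
        lead = 0  # (# choices for better option) - (# for worse option)
        for _ in range(n_agents):
            if abs(lead) >= 2:  # locked in: private signals are drowned out
                lead += 1 if lead > 0 else -1
                continue
            signal = 1 if rng.random() < p else -1
            total = lead + signal  # earlier choices plus own signal
            if total > 0:
                choice = 1
            elif total < 0:
                choice = -1
            else:
                choice = signal  # tie: trust your own signal
            lead += choice
        if abs(lead) >= 2:
            cascaded += 1
    return cascaded / trials
```

With a few dozen decision-makers, virtually every simulated run locks into a cascade, and nothing in the dynamics guarantees that the cascade settles on the better option.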
But maybe this scenario isn't entirely realistic. In the story I told in the previous section, each branch manager makes only one short post relaying his or her decision and, perhaps, the reasoning behind that decision. But blogs allow for periodic updates. What if we change the scenario to allow for each branch manager to post the results of the project implementation in his or her branch? Wouldn't that improve the quality of the group's decision-making?
Not necessarily. Cao and Hirshleifer took up this question as well. They found that under certain conditions the additional information actually makes the group's decisions worse. The problem is that it's the wrong kind of information. Suppose that Project A doesn't produce a bad outcome; it's just not as good as Project B. When Jane reports her results that her branch made $125,000 on Project A, that sounds pretty good to Amit. In fact, maybe it's good enough that he figures he can't go wrong by choosing Project A. It's now a "no-brainer." Since Amit doesn't feel like he needs to weigh his own judgment against Jane's in order to make a decision, the informational cascade actually happens faster. Unfortunately, what Amit still doesn't know is that Project B would have earned his branch $200,000.
Again, it doesn't have to happen this way. If several branch managers have marginal results, or if one has a complete disaster, then the new information could be sufficient to cause the next branch manager in the chain to stop and think carefully. But the point is that, as Cao and Hirshleifer put it, "the ability to observe past payoffs can reduce average decision accuracy and welfare." Having the information about outcomes actually encourages informational cascades and leads to less optimal decision-making.
Cascades Are Everywhere
We don't have to rely on hypothetical situations to find evidence that informational cascades can happen all too easily; plenty of these phenomena can be spotted "in the wild." In "A Theory of Fads, Fashion, and Cultural Change as Informational Cascades," authors Bikhchandani, Hirshleifer, and Welch suggest that the medical practice of proactively removing children's tonsils was the result of an informational cascade. This practice was pervasive in the United States during much of the twentieth century despite the fact that there was little medical research supporting its value. Doctors removed healthy tonsils because they knew that other doctors were doing it. Similarly, in "Information Cascades and the Adoption of New Technology," authors Walden and Browne suggest that corporate IT departments tend to rely heavily on information about what other corporations have done and that cascading is common in the adoption of new technologies (though the authors also find that cascades that lead to the wrong choice tend to die out more often than not).
Another arena where we see informational cascades quite a bit is politics. Ever since Jimmy Carter employed the successful strategy of focusing on winning the early Iowa caucus to show voters that he was the front-runner, United States presidential candidates have essentially tried to actively create informational cascades as part of their campaigns. In fact, given the obsessive media coverage of the "horse race" aspect of the campaigns and the infamously closed circles of so-called "media elites" and "Washington insiders," it's possible that we experience informational cascades almost perpetually during political campaigns.
For example, informational cascading is a plausible explanation for the widespread surprise when Howard Dean lost so badly in the 2004 Iowa caucus. Remember, even if Dean had gotten 100% of Gephardt's votes, he still would have placed third in Iowa, trailing John Kerry by double digits. This occurred at a time when everyone from Wolf Blitzer to Al Gore was virtually anointing Dean as the likely winner. In order for everyone to be so wrong, one of two things had to happen. Either there was not enough information available to any of the pundits for them to correctly foresee Dean's likely defeat, or the pundits who had information suggesting that Dean would lose ignored that information in favor of the previously expressed opinions of others. Given the number of eyeballs watching the election and the amount of polling data that was available, the latter explanation seems far more plausible than the former. And if it is true that most observers ignored the facts in front of them because they fell victim to an informational cascade generated by the Dean campaign, then the campaign itself ultimately suffered from it. They spent most of their campaign funds before Iowa and did little to lower expectations about their performance there. When they lost, they had very little money to continue and very little room to argue that the outcome in Iowa was not essential to their plan to win the primary. Despite their strategy of a grass-roots decision-making process (or perhaps because of it), the Deaniacs set a fatal trap for themselves.
It's possible that Democratic voters then moved into another informational cascade as soon as the Dean cascade broke. Slate columnist William Saletan argues that John Kerry benefited hugely from an informational cascade following Dean's defeat:
OK, maybe Dean wasn't the most electable guy. But in the states that followed, voters applied the same theory to other candidates, padding Kerry's delegate count and aura of inevitability. They figured the guy who had won Iowa and New Hampshire was a winner. So, they voted for him, proving themselves right. The biggest delegate prize on Feb. 3 was Missouri, where Kerry beat John Edwards 2 to 1, filling the airwaves with talk of a juggernaut. How did Kerry thrash Edwards so badly? He won "agrees with you" voters by 10 points, a healthy but not awesome margin, largely attributable to the fact that Kerry was the candidate the media were talking about, since he had just won New Hampshire. No, the people who gave Kerry his enormous vote tally in Missouri (and nearly two-thirds of the state's delegates) were the "can defeat Bush" voters, who went for Kerry over Edwards by a ratio of more than 3 to 1.
A decisive number of the voters who chose Kerry ignored their own candidate preferences in favor of the information they were getting from other voters (who in turn, Mr. Saletan argues earlier in the article, also ignored their own preferences at least as far back as the New Hampshire primary). This is a classic example of an informational cascade.
Saletan goes on to argue that, unfortunately for the Democrats, the informational cascade that put Kerry on the top of the Democratic ticket may be an inefficient one:
Are these "can defeat Bush" voters correct? Is Kerry the most electable Democrat?
It's a hard question to answer, because most of the evidence is circular. If people support Kerry because they think he's electable, he goes up in the polls, which makes him look more electable. The best way to filter out this distortion is to focus on the voters least likely to make their decisions in November based on electability. These happen to be the same voters who hold the balance of power in most elections: independents, conservative Democrats, and moderate Republicans. They aren't principally trying to figure out which Democratic candidate can beat Bush, because they don't necessarily want the Democratic nominee to beat Bush. They're trying to decide which Democratic candidate, if any, would be a better president than Bush.
How well has Kerry done among these voters? In absolute terms, well enough. But in relative terms, the numbers show a disconcerting pattern. By and large, the closer you move to the center and center-right of the electorate, where the presidential race will probably be decided, the worse Kerry does. The opposite is true of Edwards.
By ignoring their own sensibilities about which candidate they find preferable, it's possible that many individual voters ignored the very information that would lead to choosing the most electable candidate. It's also possible that the Kerry campaign chose Edwards as a running mate in part because they, like Mr. Saletan, were able to read into the details of the polling numbers and see the possibility that the voters were caught up in an informational cascade. Whether or not this is actually the Kerry campaign's reasoning, it suggests that there may be strategies for recognizing, stopping, and even preventing informational cascades.
Breaking the Cascade
If Mr. Saletan is right, it is because he was able to recover the information that was lost in the cascade. When a voter decides electability based on the response of previous voters and ignores his or her own preferences, that voter's vote no longer contains information about his or her preferences. As a consequence, that vote also no longer gives information about how future voters with similar preferences might decide. Fortunately, the vote is not the only piece of information that we have about preferences. Mr. Saletan was able to sift through additional polling data to recover voter preference information that was (allegedly) not contained in the vote. Likewise, the same recovered information could have led the Kerry campaign to conclude that John Edwards is a more effective running mate than the primary voting numbers would indicate. (Conventional wisdom, based mainly on the distribution of votes that Edwards received in the primaries, is that he does not have the ability to help Kerry carry a single state.)
In general, then, we can stop informational cascades by recovering the specific information that is likely to be lost when participants start to disregard their own private information and judgment. This is entirely consistent with Ms. Gilroy's strategy of "displaying feedback and pattern recognition." However, unless facilitators specifically look for possible cascades and consciously plan to stop them, the feedback and pattern recognition that they elicit probably won't be the kind that the group needs. For example, in the hypothetical situation described earlier, the most natural question to ask a branch manager is what the advantages are of the project that he or she chose. However, the information that is lost in the cascade is the merits of the project that the branch manager did not choose. If Sasha had listed the reasons why he suspected Project B might be preferable even though he chose Project A, then the person who followed him might be more inclined to trust his own intuition and try Project B, knowing that Sasha sees the same advantages that he does. Because informational cascades happen as a result of information loss rather than any fundamental irrationality of human beings, giving people even just a little additional information is often enough to cause them to doubt the illusory consensus and break the cascade, provided it is the right kind of information.
Even better than stopping a cascade once it has started would be preventing it from happening in the first place. You can do that simply by not giving the participants the chance to hear other people's answers before they respond to a question. For example, the branch managers could have conducted a poll at the very beginning of the initiative, asking participants to identify which project they prefer and the reasons for their preference. If the polling results were not released until after everyone had voted, the results could only reflect each individual's preferences, unbiased by the decisions of others. Again, what matters is the information contained in those preferences. As finance journalist James Surowiecki puts it in his book The Wisdom of Crowds,
If you ask a large enough group of diverse, independent people to make a prediction or estimate a probability, and then average those estimates, the errors each of them makes in coming up with an answer will cancel themselves out. Each person's guess, you might say, has two components: information and error. Subtract the error, and you're left with the information.
This is exactly how efficient-market theory claims that the stock market (among other markets) works, and it explains why the vast majority of professional money managers tend to under-perform stock indexes on a regular basis. Groups of people "know" much more and "judge" much better as an aggregate than most of the participating individuals do most of the time. Paradoxically, this mechanism only works reliably when the individuals are not relying heavily on information about the judgments of peers in their network. On this point, Surowiecki is unequivocal: "Organizations...clearly can and should have people offer their judgments simultaneously, rather than one after another....One key to successful group decisions is getting people to pay much less attention to what everyone else is saying."
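Surowiecki's information-and-error decomposition is easy to demonstrate with a toy experiment (my own illustration; the true value, crowd size, and noise level are arbitrary assumptions): give many people independent noisy estimates of the same quantity, then compare the averaged crowd estimate's error with a typical individual's.

```python
import random
import statistics

def crowd_vs_individual(truth=100.0, n=1000, noise_sd=20.0, seed=0):
    """Compare the averaged crowd estimate's error with the typical
    error of a single estimator.

    Each guess = truth + independent zero-mean noise ("information and
    error"); averaging keeps the information and cancels the error.
    """
    rng = random.Random(seed)
    guesses = [truth + rng.gauss(0.0, noise_sd) for _ in range(n)]
    crowd_error = abs(statistics.mean(guesses) - truth)
    typical_individual_error = statistics.mean(abs(g - truth) for g in guesses)
    return crowd_error, typical_individual_error
```

With independent errors, the averaged estimate's error shrinks roughly as noise_sd divided by the square root of n, beating the typical individual by an order of magnitude. The catch, and the link back to cascades, is independence: once estimators see each other's guesses before answering, the errors correlate and the cancellation fails.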
It's also fairly easy to combine simultaneous polling with deliberate probing for potentially lost information into a belt-and-suspenders approach. A facilitator for our group of branch managers could have done something like the following:
1. Before any decisions are announced, poll every manager privately on which project he or she prefers and why, releasing the tally only after all responses are in.
2. Have each manager post his or her decision and implementation results to the weblog, as before.
3. Ask each manager to also list the merits he or she sees in the project that was not chosen, deliberately recovering the information most likely to be lost in a cascade.
In this way the branch managers still benefit from the data of knowing their peers' choices and outcomes without creating an informational cascade that devalues that data by stripping much of the information value out of it.
Implications
As the approach outlined in the previous section suggests, informational cascades can be prevented but generally only with deliberate and specific intervention. This, in turn, suggests that groups with active moderators who have the authority to direct the group's information-sharing activities have a much better chance of avoiding harmful cascades than groups with a laissez-faire approach. In classroom settings (online or otherwise), this isn't difficult. In less formal learning communities, the problem may be more difficult to solve. One of the areas where we would appear to have the least control over informational cascades is that of politics and government. This is why political scientist James Fishkin has proposed what he calls "deliberative polls" in which a representative group of voters is gathered together to discuss issues for two days before they are given a poll. The theory is that facilitated discussion of issue nuances will help break any informational cascades that are discouraging voters from giving full weight to their own assessments of candidates and issues. Short of that kind of heavy intervention, it's hard to see how informational cascades wouldn't be the norm in politics as well as other grassroots group decision-making processes.
The voter choice problem also suggests that informational cascades are problematic in cases where there is no single objectively correct answer. Whether or not there is one "right" choice for an elected official, we want to make sure that election results reflect the true considered judgment of each and every voter. Cascades can prevent that from happening. Likewise, they can be counter-productive in constructivist classrooms where the teachers are encouraging students to think creatively and independently. Following somebody else's lead is often a rational shortcut; I don't have to bother reading movie reviews and deciding if the new flick is worth my $10.50 if three of my friends have already seen it and loved it. But because imitating the decisions of others is a natural instinct, my tendency to trust the judgment of my peers may cause me to tune out my own inner voice. Whether your goal is to elicit the best group judgment or simply to elicit more diverse and authentic judgments from every individual, informational cascades are your enemy.
In closing, I'd like to emphasize that while weblogs and other social information-sharing technologies are not the ultimate cause of informational cascades, they generally don't prevent them and often can amplify and accelerate them. As weblogger Peter Merholtz explains in giving his reasons for taking time off from blogging:
I was...growing increasingly frustrated with the echo chamber effect of weblogs. A meme drifts out there, and then 38 different people post their take on that meme, and they all link to each other, and, as a reader, you bounce from post to post, the semantic feedback growing until it's deafening. I needed to remove myself from that for a while. To prune a tree. To look on as my g/f and another friend weeded my garden. To get licked in the face by a dog. To prepare my taxes. To work out while watching TeeVee.
An echo chamber is an apt metaphor for informational cascades. And there is no echo chamber larger than the World Wide Web.