
Learning 2.0: How digital networks are changing the rules

By Mélanie L. Sisley / March 8, 2011


"In the dawning knowledge age, how well we live will depend on how well we learn," said the 24th Governor General of Canada, Ray Hnatyshyn, in the 1991 Speech from the Throne. Little did we know that with the arrival of Web 2.0 this statement would become the silent concern of every knowledge worker on the planet. It's not for nothing that Time magazine named YOU, person of the year in 2006 and that Wired magazine claims we are in the age of new socialism. Web 2.0 seems to be placing the power in the hands of the individual, while being self-propelled and digitally-savvy may the most important meal-ticket of the future. The endless flow of connective knowledge is picking up strength, and finding our place in the chaos can be disorienting, noisy, tiresome and may even leave us feeling a little obtuse. So how can we harness the power of digital networks and Web 2.0 tools to thrive in the possibilities?

Ask a 10-year-old who Catherine de Medici was and they will Google the answer within minutes. But beyond easy access to information, the evolution of Web 2.0 is providing a stage for anyone to express a digital presence and contribute thoughts and opinions. Not unlike the Renaissance, the technological revolution is making the individual the central element of importance in his or her own development. But are we changing our learning strategies as a result? And if we are not, should we?

Is Web 2.0 Turning Our Brains to Mush?

"Your brain is evolving right now" is the title of the first chapter in Gary Small and Gigi Vorgan's iBrain Surving Technological Alteration of the Modern Mind. The authors claim daily exposure to "high technology" (computers, smart phones, video games, and search engines) is altering our neural pathways at a speed like never before. Along with UCLA neuropsychology and neuroimaging experts Susan Bookheimer and Teena Moody, they hypothesized the stimulation generated in new media is altering the brain's neural circuitry in an important way.

They studied both computer-savvy and computer-naïve subjects and found that while performing Google searches, the computer-savvy subjects used a specific network in the left front part of the brain, known as the dorsolateral prefrontal cortex. This is the part of the brain that allows us to make decisions and integrate complex information for a short period of time. The computer-naïve subjects showed minimal activity in that part of the brain, but after five days of Google searching, they too showed activity in the same brain region.

According to Small and Vorgan, the high-tech revolution is putting us in a constant state of "partial attention." This differs from what is commonly known as multitasking in that partial attention does not have a productivity goal. It simply puts us in a state of constantly scanning for any type of contact at every given moment. The authors surmise there is a sense of self-worth and control in maintaining constant partial attention, and that it can even lead to a hormonal boost of energy levels and short-term memory. However, unrelenting digital connectivity can actually impair cognition and create a "brain strain" that leaves people fatigued, irritable, and distracted. Despite the dangers of overuse, the authors contend the digital revolution is improving our ability to process large amounts of information and rapidly decide what is important and what is not.

However, researchers at San Diego State University found teenagers are losing the ability to recognize emotions. The lack of exposure to body language, tone, and facial expression seems to be pruning the excess synaptic connections that normally occur during puberty and develop empathy. What's more, researchers at Tokyo's Nihon University found that avid video game players appear to suppress frontal lobe activity; this is the section of the brain associated with reward, attention, long-term memory, planning, and drive. Thus on the one hand, exposure to interactive media is improving the speed with which we process information, but on the other it may be affecting our ability to retain information for long periods of time.

Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains, is also skeptical about our ability to cope with digital networks' chaotic knowledge environment. Because of the speed and distractions caused by digital connectivity, people are losing the ability to concentrate, focus, and understand. He says people who read text studded with links understand less than those who read traditional linear text, and people who watch busy multimedia presentations remember less than those who absorb information in a calmer, more focused manner. Carr adds that constant distractions (emails, instant messages, etc.) are chipping away at our ability to concentrate. In other words, multitasking is making us less productive and less effective learners. "We're becoming, in a word, shallower," says Carr. In addition, Carr believes Web 2.0 is creating "endless streams of mediocrity, eroding cultural norms about quality and acceptability, and leading to increasingly alarmed predictions of incipient chaos and intellectual collapse."

On the other hand, Clay Shirky, author of Cognitive Surplus: Creativity and Generosity in a Connected Age, reminds us: "In the history of print, we got erotic novels 100 years before we got scientific journals, and complaints about distraction have been rampant; no less a beneficiary of the printing press than Martin Luther complained, 'The multitude of books is a great evil. There is no measure or limit to this fever for writing.'"

It stands to reason that the Web 2.0 world is taking learning and epistemology beyond the traditional paradigms. Mixing learning, communicating, and socializing, the classroom equivalent of a 2.0 environment could be like holding a cocktail party in an English literature class. Just how much about English literature is being learned? The answer, I would venture, depends greatly on whom you invite to the party.

Whether we are simply in a transition stage with the Internet, in which we will eventually weed out the more frivolous content (or at least find a proper place for it), remains to be seen, but history would indicate there is a distinct possibility that we will self-regulate for the greater good.

The Five Minds of 2.0 Learners

Psychologist Howard Gardner integrates cognitive science, neuroscience, history, anthropology, politics, economics, and human values to offer five states of being he speculates will be necessary in the new age of learning and work. In his book Five Minds for the Future, Gardner admits the skills he sees as relevant for the future are skills that have been relevant in the past. However, he hypothesizes that an individual's success in the digital age will greatly depend on his or her ability to embody these states.

In researching secondary school and college students, Gardner found the students were unable to make conceptual connections with notions that were abstractly related to things previously learned. For Gardner, they had not learned to think with a "disciplined mind." With more and more knowledge lying in the "spaces between," Gardner claims the ability to make connections through disciplined thinking is a vital learning tool.

Next on Gardner's list is the "synthesizing mind." He turns to Nobel Prize-winning physicist Murray Gell-Mann, who claims the mind most at a premium in the 21st century is the mind that can synthesize. With the explosion of information and its tip-of-the-finger availability, synthesizing massive amounts of data while keeping a vision of the big picture is a key learning skill. Examples of synthesis include the ability to transform learnings into narratives, taxonomies, concepts, general rules, metaphors, theories, and metatheories. A little like Michelangelo's depiction of Christian history on the ceiling of the Sistine Chapel, impactful individuals have the ability to unify massive amounts of information in a way that is meaningful to their audience.

The third mind for the future is the "creating mind." Gardner explains that "in our global, wired society, creativity is sought after, cultivated and praised." Although the claim may seem self-evident for any generation, he reminds us creativity was not historically valued (e.g., Galileo was imprisoned and Giordano Bruno burned at the stake). The pace of innovation is blinding, and those who can tap into pressing needs will have a viral effect, says Gardner. However, creativity is not a skill given to all, so how can the "in-the-box" thinker remain a valuable contributor? Gardner qualifies the concept of creative thinking by recalling that many of our recognized creative icons (Mozart, Picasso, and, I would venture, Steve Jobs) were successful not necessarily because they had a creative gift (though some did), but because they were able to risk failure and pick themselves up and try again when they failed. According to Gardner, the creative mind should be able to envision possibilities that differ from the current reality, injecting just the right amount of critical analysis so as not to stifle visionary thinking. Gardner also contends that creativity can be cultivated in groups; thus the ability to work collaboratively may compensate for a lack of individual creativity. Charles Leadbeater, a leading authority on innovation and creativity, concurs: "the more connected we are, the richer we should be, because we should be able to connect with other people far and wide, to combine their ideas, talents and resources in ways that should expand everyone's prosperity."

The fourth state of being that requires conscious consideration is the "respectful mind." Gardner points out that although we strive to respect each other, we have a distinct proclivity to band together against one another. There is a constant tension between altruism and antagonism that irks the human mind, and according to Gardner it is the work of every generation to strive for respect. Increasingly relevant in digital environments, where the absence of body language, facial expression, and tone leaves more open to interpretation, the respectful mind in Gardner's view is not so much political correctness as a flexibility and an openness toward differences.

Lastly the "ethical mind" is among Gardner's 21st century skill that refers to the ability to recognize and strive for what he calls "good work." Essentially, through the ethical mind, Gardner is stressing the importance of citizenship and contributing to society through our talents.

Conclusion

We can't all make Leonardo da Vincis of ourselves, that much we know, but one thing has changed since the humanism of the Renaissance: maximizing the possibilities of autonomy, now through the Web 2.0 environment, is intimately tied to our economic and social potential. But with monumental possibilities come monumental responsibilities, which in turn may be creating large social gaps between those who can tap into the power of Web 2.0 and those who use it for more trivial or narcissistic purposes. How many of us feel lost in the sea of information and influences and struggle to teach ourselves how to stay current and competitive? Conversely, how many of us feel we finally hold our destiny in our own hands?

Though free and available to everyone, is the technological revolution yet another way of creating elites? We have seen that our brains and our social abilities are indeed changing as a result of our digital life. We may be better at scanning through masses of information, but our ability to empathize and to use our long-term memory may be taking a hit. More than ever before, we are responsible for making the best of all the resources available to us, which takes maturity, discipline, and a sharp critical mind.

To date, educators and policymakers have been considering how to integrate new media into education, but given the speed of innovation and the rapid adoption by the younger generation, the population may be better served if they reflected on how to prepare people to live with new media in a purposeful way. In other words, simply learning how to use the Internet to learn about the Renaissance is not enough; we must learn to paint a Sistine Chapel of our own.

About the Author

Mélanie L. Sisley has been working in the field of education and new media for more than 17 years. An educator by trade, she has designed numerous learning applications for the corporate, health, entertainment, and education sectors. She has collaborated on projects with Bell Canada, Alcan, Yellow Pages Group, Future Electronics, M.D. Anderson Cancer Institute, the Québec Bar, McGill University, Elitis Pharma, Scholastics, PBS, National Geographic, Warner Bros and l'Agence Science Presse. Now completing a master's degree in educational studies, she is focusing her research efforts on the effects of digital networks on the learning patterns of knowledge workers. More specifically, she is concerned with the socioeconomic impacts of digital learning. Aside from her studies, she works as a consultant in techno-pedagogy and designs eLearning applications through her company Neurobox.


