I have been thinking about what one can do in virtual team meetings conducted in virtual worlds to make participants feel and react as if they are face-to-face with other participants. Part of me says that this is an “ideal” that we may never reach. At the same time, I am aware of research (more) indicating that it is possible to create situations in which individuals react to computers as if they are reacting to other humans. So it may not be far-fetched to think that one day we will be able to create a virtual world environment in which participants interact with avatars of other participants as if they are face-to-face with them.
Recently I came across an article on BBC News that may be relevant to our discussion of how we could create more realism in virtual world meetings. The article, Massage Illusion Helps Amputees, reports a study by Dr. Vilayanur Ramachandran of the University of California, San Diego, with ex-soldiers who had had a limb amputated. In one experiment, the amputees put their remaining hand in front of a mirror in a device called the mirror box (image). This device tricks an amputee’s brain into thinking that the mirror image is actually another working limb. When the amputees’ intact hand was touched, they felt the sensation of being touched on the missing hand. In a second experiment, amputees experienced a stroking sensation in their missing limb when they watched someone stroking a volunteer’s hand.
This fooling of the brain into thinking that the lost limb is being touched or massaged is due to the presence of mirror neurons in our brain. These neurons fire when we perform an action (e.g., picking something up) as well as when we observe someone else perform that action. In the case of the amputees, their mirror neurons and the mirror box trickery combined to create the sensations they reported. Mirror neurons were discovered by accident in Italy in the early 1990s, when scientists observed that certain brain cells in monkeys (i.e., those involved in the planning and execution of motor actions) fired not only when the monkeys brought a peanut to their mouth but also when they observed a human or another monkey carry out that action (see the New York Times article titled Cells that Read Minds). Since then, we have discovered that humans too have mirror neurons. Moreover, there are different types of mirror neurons in humans, and they are smarter, more flexible, and more highly evolved than those found in monkeys (see PBS’s informative video on mirror neurons).
Mirror neurons in humans are believed to aid learning and social cognition. Scientists claim that the human mirror neuron system enables social interaction by helping individuals understand the actions, intentions, and emotions of others. In the New York Times article linked above, Dr. Rizzolatti, the Italian neuroscientist whose team discovered mirror neurons, says, “We are exquisitely social creatures. Our survival depends on understanding the actions, intentions and emotions of others. Mirror neurons allow us to grasp the minds of others not through conceptual reasoning but through direct simulation. By feeling, not by thinking.” Thus, when people say “I feel your pain,” it may be because the mirror neurons that usually fire when they themselves experience pain are firing.
The above model of mirror neurons implies that if, for some reason, our mirror neurons don’t fire when we observe something, then we will not reach an understanding of what we are observing. Indeed, a study conducted in Dr. Ramachandran’s lab with autistic individuals supports this idea. Autistic individuals are known to have difficulties with empathy and social interaction. The study found that the autistic subjects had a dysfunctional mirror neuron system: their mirror neurons responded only to their own actions and not to the actions of others. Bottom line: we reach an understanding of what we are observing because our mirror neurons resonate as if we ourselves are engaged in what we are observing.
How is all of this about mirror neurons related to leading and managing virtual teams? Ideally, we would like to know what being face-to-face with others means in a neurological sense. Once we know that, we can begin thinking about how to recreate that neurological stimulation in a virtual environment. Knowledge about mirror neurons may be relevant because the firing of mirror neurons is likely to be part of the stimulation in face-to-face meetings; such meetings involve not only taking motor actions (e.g., shaking hands with others, making nonverbal gestures, using the mouth to speak) but also observing the actions of others. If we could create the equivalent of the mirror box in a virtual world to create the illusion of a face-to-face meeting and stimulate the mirror neurons to fire the way they fire when we are in a face-to-face meeting, then we may be able to recreate the feeling of being with others face-to-face.
How can we create the equivalent of the mirror box in a virtual world? What we have in the mirror box is an example of mixed reality – both virtual and real. The way the mirror box is constructed, one sees both the existing limb and a virtual limb in place of the amputated limb. The real arm is used to trick the brain into thinking that the virtual arm is real. A mixed reality virtual world, like the one created at Sun Microsystems and reported in an earlier LeadingVirtually post, might be one way to create the mirror box equivalent. In a mixed reality environment, one sees both the video of some participants (the real thing) and avatars of others (the virtual thing) who are unable to transmit video of themselves. Another way might be to create an immersive virtual world using a head-mounted display and letting the user interact through a haptic interface (see a recent Popular Mechanics article on haptic interfaces). A haptic interface translates a user’s physical actions into computer commands. With an ideal haptic interface, a meeting participant would take the same physical actions that s/he would normally take in a face-to-face meeting, and the interface would then translate those physical actions into the appropriate actions in the virtual world. For instance, a user of a haptic interface would get up from her/his chair and walk a few steps in order to shake hands with another participant who is present only virtually. By mixing the real with the virtual and stimulating the mirror neurons to fire not only when we take a real action but also when we observe a virtual action (e.g., someone extending her/his hand to you), we may be able to trick the brain into believing that we are in a face-to-face meeting. At this stage, these ideas are speculative, and it will be worth researching whether they are successful at inducing a sense of reality in virtual world meetings.
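To make the haptic-interface idea a little more concrete, here is a minimal, purely illustrative sketch in Python of the kind of translation layer such an interface would need: it maps tracked physical actions (standing up, stepping, extending a hand) to avatar animations in a virtual world. All of the names here (TrackedAction, AvatarCommand, translate_action, the animation labels) are hypothetical and are not drawn from any real haptics or virtual-world API.

```python
# Hypothetical sketch: translating tracked physical actions into avatar
# actions in a virtual world. Names and mappings are illustrative only.

from dataclasses import dataclass, field

@dataclass
class TrackedAction:
    """A physical action captured by the haptic/tracking hardware."""
    kind: str            # e.g., "stand_up", "step", "extend_hand"
    magnitude: float     # e.g., step length or reach distance in meters

@dataclass
class AvatarCommand:
    """The corresponding action to play back on the user's avatar."""
    animation: str
    parameters: dict = field(default_factory=dict)

# Simple mapping from physical actions to avatar animations. A real system
# would also need to preserve timing, force, and nonverbal nuance.
ACTION_MAP = {
    "stand_up": "avatar_stand",
    "step": "avatar_walk",
    "extend_hand": "avatar_handshake_offer",
}

def translate_action(action: TrackedAction) -> AvatarCommand:
    """Translate one tracked physical action into an avatar command."""
    animation = ACTION_MAP.get(action.kind, "avatar_idle")
    return AvatarCommand(animation=animation,
                         parameters={"magnitude": action.magnitude})

if __name__ == "__main__":
    # A participant gets up, walks a few steps, and offers a handshake.
    sequence = [
        TrackedAction("stand_up", 0.0),
        TrackedAction("step", 0.7),
        TrackedAction("step", 0.7),
        TrackedAction("extend_hand", 0.4),
    ]
    for tracked in sequence:
        command = translate_action(tracked)
        print(f"{tracked.kind:>12} -> {command.animation} {command.parameters}")
```

Of course, a real haptic pipeline would be far richer than this lookup table; the point is simply that the interface’s job is to carry the participant’s genuine physical actions into the virtual meeting, where other participants’ mirror neurons can respond to them.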
The ability to create a sensation of reality even when we are immersed in a virtual environment should not be discounted. The New York Times article mentions a study that observed children watching a violent television program. The study found activation of mirror neurons and of parts of the brain involved in aggression, thereby increasing the probability that these children would behave violently. At Hadasit, an Israeli company, scientists are using virtual reality systems to let stroke patients who are unable to move their arms physically move them virtually on an LCD screen with the help of small movements of a mouse or joystick. According to the scientists, the virtual experience activates the mirror neurons and has a therapeutic effect on the patient’s brain. Preliminary results have shown increased arm functioning. Similar improvements in arm functioning have been found in another study that used virtual reality systems with stroke patients.
While what I have talked about so far might benefit us in the future as far as leading virtually is concerned, research on mirror neurons does offer us something that we might be able to use right now when we collaborate in virtual teams. A study by Lisa Aziz-Zadeh and colleagues showed that subjects’ mirror neurons responded to reading descriptions of certain hand, foot, and mouth actions (e.g., “biting the peach”) in the same way as to watching videos of those actions. If we accept that the firing of mirror neurons is critical to our reaching an understanding of something, then a corollary of this finding is that evocative language that re-enacts an action may convey the same meaning or understanding to us as observation of that action. In a virtual team, if we want others to reach the same understanding that they would reach when observing what we are describing, we should employ language that re-enacts what we are describing. For instance, to praise someone, consider using evocative words such as “I salute you for your efforts” or “Your efforts are worthy of a big hug!” How’s that for instant gratification from futuristic research on mirror neurons?