We’re Programmed to Form Teams (Right Now, That’s a Problem)

We like to believe that we form unbiased opinions, without letting our political affiliation affect our perspective. But the research overwhelmingly tells us . . . we don’t.

David Giardino
The Startup

--

Seeking membership in a group is an innate tendency in human beings. So how does our propensity to team up affect our perception of the world? (Photo: Nevada Policy Research Institute)

This is a story about bias and division. About the surprising ways the tribes we form can hinder reason and thought. It’s a story that is both timely and timeless. Timely, if you’ve wondered how our opinions of a global health crisis are still largely shaped by party lines. And timeless, in that this type of behavior is nothing new.

Many among us seem to believe that partisanship is the problem. That polarization is the disease, and more education is the cure. For each person on the opposite side of the political aisle, we tend to think that we’re one article away from changing their mind. Once they see this statistic, or that study, they’ll agree with us, we posit.

But we’re wrong. In fact, a 2015 meta-analysis found that education was associated with an increase in polarization. Yes, you read that right: the more educated the conservative, the more likely he or she was a climate-change denier. And the more educated the liberal, the less willing he or she was to accept the consensus opinion on the safe storage of nuclear waste.

No, right now we need to go beyond our opinions and beliefs.

We need to understand how they were formed in the first place.

In the early 1970s, psychologist Henri Tajfel conducted an experiment that aimed to study biases between groups. But what made this study so fascinating was that he didn’t choose groups divided by religion, or politics, or even sports. In fact, the groups didn’t really exist.

Participants were brought in to look at a picture containing clusters of dots, and were asked to estimate how many individual dots the picture contained. The subjects submitted their guesses and were told that, based on those guesses, they’d been placed in a group of “over-estimators” or “under-estimators.” In fact, the groupings were random, regardless of what anyone guessed.

Next, participants were separated and brought into a private room. They were asked to distribute funds to their fellow participants in the study — they could not distribute funds to themselves. What Tajfel found was that participants consistently distributed more funds to those who were in their own group (the “over-estimators” or the “under-estimators”). These subjects didn’t know each other before the study. They couldn’t have been grouped in a more trivial way. And yet, once they were told they were part of a team, they immediately sought to reward that team at the expense of the other group. In some versions of Tajfel’s study, participants even chose to penalize the other group when doing so meant a smaller reward for their own.

Tajfel’s research raises the question: if we’re willing to prioritize the interests of a group we were placed in for the most insignificant of reasons, what does this say about intergroup dynamics in matters of consequence?

Social identity theory, first advanced by Tajfel in the late 1970s, suggests that we have an innate tendency to categorize ourselves into groups, to build our identity on the basis of our membership in them, and to enforce boundaries against other groups. And we do this so often that it’s difficult to tally all of the disparate groups we form. Fans of sports teams. Members of political parties. Music or artist preferences. The states (or the neighborhoods) we’re from. The company we work for. Membership in these groups builds our self-esteem and, in many ways, offers us an identity. We’re quick to mention the groups we belong to that project a positive or unique image of ourselves. For better or worse, they help us signal to others: “this is my place in the world.”

But here’s the problem: we often fail to recognize the unyielding power that forming teams holds over what we believe and how we act.

Take, for example, a 2004 study by researcher Drew Westen that used fMRI scans to understand how liberals and conservatives processed information. Westen showed conservatives a series of statements that appeared to build toward a criticism of Republican President George W. Bush; he did the same for liberals and John Kerry (the Democratic challenger who ran against Bush in 2004). But the final statement in each series exonerated the politician. For example, conservative participants were first shown a quote from George W. Bush praising former Enron CEO Ken Lay (“I plan to run the country like [Enron],” he once said). But then, the final statement said that Bush “felt betrayed by Ken Lay, and was genuinely shocked to find out [about the corruption at Enron].”

Here’s what happened: when conservatives were shown the statements that threatened their perception of Bush, the areas of their brains associated with emotion — not reason — were activated (the same was true for liberals and the Kerry statements). And when the final statement exonerating their favored politician appeared, the area of the brain associated with reward was activated. In animals, this same flash of pleasure occurs when the animal does something important for its survival. And what about the areas of the brain known to be responsible for reasoning? Throughout the fMRI study, those regions showed no uptick in activity.

Westen’s findings are a big deal. They suggest that once we’ve chosen a side, our brains respond to information differently. When we’re presented with something that threatens our original views, it’s not as if we’re reasoning our way into supporting our team, rationalizing the facts in a way that defends our perspectives (which would be bad enough) — instead, our emotional brain simply switches reasoning off.

What are the byproducts of this growing, emotional divide? Consider this: in 1960, only 4% of Democrats and 4% of Republicans said they would be disappointed if their child married someone from the opposite political party. In 2018? 45% of Democrats and 35% of Republicans reported they would be unhappy if their child did the same. In politics these days, we look more like Red Sox and Yankees fans than unbiased voters. Psychologists call it motivated reasoning; I just call it “sports brain.” Just as in sports, our allegiances make us more accepting of information that supports our views, and more critical of information that contradicts them.

OK, so favoritism or bias toward our team can shape our attitudes and opinions without our even being aware of it. Now, this doesn’t mean we’re inherently bad people. In the earliest days of humanity, forming groups was often necessary for our very survival. Unfortunately, in modern times, we’ve simply applied that instinct in ways that are unnecessary or, worse, regressive.

It’s also easy to stop at the problem and go no further. Does this research mean it’s impossible to change someone’s mind? Of course not.

The solution simply doesn’t come from the method we always resort to — the appeal to more education that I referenced in the opening of this piece. Look, if we’re speaking with someone who considers themselves to be on the “opposing team,” no amount of studies, statistics, or articles will help.

The correct first step is to consider not what is being said, but who is saying it. We’re far more likely to listen to those we perceive to be part of our own group. So in the case of politics, don’t make it about conservatives or liberals — prime your audience to think about the common groups you may share membership in (mothers, perhaps, or union workers, or heck, Americans). The same is true at the office: when teams disagree, don’t highlight your differences (such as marketing versus sales), but instead reframe the group (such as a shared organization). Demonstrate that you’re part of the same team before you try to resolve your differences.

Next, recognize that influencing a change of opinion is often an act of incremental progress. You won’t typically change someone’s mind in one conversation; but if you’ve established a shared group membership and stay consistent with your message, you may begin to nudge your audience’s views. Want an example? In one famous study by social psychologist Serge Moscovici, participants were asked to name aloud the color of a series of slides. In truth, the slides were always blue, but two “plants” in the participant group consistently called them “green.” The result? By the end of the study, roughly a third of all participants declared a slide “green” at least once. Once you’re in the group, consistency matters.

Finally — and this is, perhaps, the most challenging takeaway — recognize that group membership doesn’t just influence how others think. It influences how you think, too. Consider how some of your opinions may have been shaped by the groups you support and the social incentives you receive for supporting them. If we’re willing to stake our very identity on the groups we belong to, the least we can do is try to understand how they shape what we think, and how we feel.

And, hey, the next time you get into a spirited debate with a friend on the other side of the political aisle, just tell them that their emotions are getting the best of them. At a neurological level, you wouldn’t be wrong.
