Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do

By Jennifer L. Eberhardt, PhD

We all have ideas about race, even the most open-minded among us. Those ideas have the power to bias our perception, our attention, our memory, and our actions – all despite our conscious awareness or deliberate intentions. Our ideas about race are shaped by the stereotypes to which we are exposed on a daily basis. And one of the strongest stereotypes in American society associates blacks with criminality.

(Eberhardt 2019, 6)

Confronting implicit bias requires us to look in the mirror. To understand the influence of implicit racial bias requires us to stare into our own eyes – much as the undercover police officer who found that he had been trailing himself had done – to face how readily stereotypes and unconscious associations can shape our reality. By acknowledging the distorting lens of fear and bias, we move one step closer to clearly seeing each other. And we move one step closer to clearly seeing the social harms – the devastation – that bias can leave in its wake. 

(Eberhardt 2019, 7)

We learn what’s important – the faces we see every day – and over time our brain builds a preference for those faces, at the expense of skills needed to recognize others less relevant. That experience-driven evolution of face perception skills remodels our brains so they can operate more efficiently. 

Scientists see the other-race effect as a sign that our perceptive powers are shaped by what we see. That cringe-worthy expression “They all look alike” has long been considered the province of the bigot. But it is actually a function of biology and exposure. Our brains are better at processing faces that evoke a sense of familiarity. 

(Eberhardt 2019, 14)

Race is not a pure dividing line. Children who are adopted by parents of a different race do not exhibit the classic other-race effect. For example, researchers in Belgium found that white children were better at recognizing white faces than Asian faces. But Chinese and Vietnamese children who’d been adopted by white families were equally good at recognizing white and Asian faces. 

Age and familiarity with various age-groups can also be factors. In England, a study of primary school teachers found that they were better at recognizing the faces of random eight- to eleven-year-olds than were college students who spent most of their time around other college students. And scientists in Italy found that maternity-ward nurses were better at telling infants apart by looking at their faces than were people from other professions – a proficiency that helps to ensure “mix-ups don’t happen in the nursery,” the researchers suggest.

Our experiences in the world seep into our brain over time, and without our awareness they conspire to reshape the workings of our mind.

(Eberhardt 2019, 14-15)

The fusiform face area, known as the FFA, is widely thought to be both primitive and fundamental to our survival as a species. Affiliation is a basic human need. Without the ability to track the identity of those around us, we are left alone, vulnerable, and exposed. 

The FFA has been studied extensively, yet despite decades of research there has been little attention paid to whether race might influence FFA functioning. From the narrow perspective of brain science, the primary function of the FFA is to detect faces. Race, most scientists felt, should have nothing to do with that. 

(Eberhardt 2019, 17-18)

The sort of categorization that allows such broad generalizations to somehow seem reasonable is a product not only of our personal experience and social messaging but also of our evolution as human beings. Categorization – grouping like things together – is not some abhorrent feature of the human brain, a process that some people engage in and others do not. Rather, it is a universal function of the brain that allows us to organize and manage the overload of stimuli that constantly bombard us. It’s a system that brings coherence to a chaotic world; it helps our brains make judgments more quickly and efficiently by instinctively relying on patterns that seem predictable. 

But categorization also can impede our efforts to embrace and understand people who are deemed not like us, by tuning us to the faces of people who look like us and dampening our sensitivity to those who don’t.

Our awareness of racial categories can determine what we see, and not just in the research laboratory but in the settings we find ourselves in every day. 

(Eberhardt 2019, 23-24)

To form categories is to be human, yet our unique cultures play a role in determining what categories we create in our minds, what we place in them, and how we label them. A fair-skinned person could be considered white in Brazil but black in the United States. People from Japan and China are lumped together as Asian in the United States but seen as distinctly different elsewhere. In some countries, people consider religion or social class a more important way to sort people than race. And even within one country, the rules for who is in what social category can change across decades. 

In the United States, racial categories are so significant that knowing a person is black or white, for example, can shape how we see that person’s facial features. Some years ago, my colleagues and I got interested not only in categorization but in the lay theories people use to explain others. From decades of research conducted by Carol Dweck and others, we know that some people believe human traits are fixed (people are either smart or dumb, they are responsible or irresponsible, they are mean or nice) whereas other people believe these traits are malleable (over time, a mean person can become nice). My colleagues and I wanted to know whether people’s theories about others might affect how they perceived not only personality traits but physical traits as well.

If you are presented with a face that is racially ambiguous – the face could be that of a black or a white person – does knowing that the person identifies as black change how you see that person’s face? And how might your own theories about others influence what you see?

(Eberhardt 2019, 26-27)

Although we tend to think about seeing as objective and straightforward, how and what we see can be heavily shaped by our own mind-set.

(Eberhardt 2019, 28-29)

But at the same time, categorization is a fundamental tool that our brains are wired to use. And the categorization process applies not just to people; it works on all things. Just as we place people into categories, we place other animals into categories. We place food into categories. We place furniture into categories. And we fill every category we develop with information and imbue it with feelings that guide our actions toward it.

Take the category “apples.” This category contains our beliefs about how apples grow, where they grow, what varieties exist, what colors they come in, how large they are, what they feel like, what they taste like, when we should eat them, whether we should cook them or eat them raw, how healthy they are for us, and so on. We also may like or dislike apples, depending on our experience with them and what we’ve been told about them. And this feeling, along with the beliefs we have about apples, can dictate whether we will eat an apple that is offered to us, buy an apple in a grocery store, or pick an apple off a tree. Simply seeing one apple can bring to mind the feelings and thoughts associated with the entire category. In fact, the stronger those associations are, the faster those feelings and thoughts are brought to mind.

The categories we have about social groups work in a similar way. But in this instance, we label the beliefs we have about social groups “stereotypes” and the attitudes we have about them “prejudice.” Whether bad or good, whether justified or unjustified, our beliefs and attitudes can become so strongly associated with the category that they are automatically triggered, affecting our behavior and decision making. So, for example, simply seeing a black person can automatically bring to mind a host of associations that we have picked up from our society: this person is a good athlete, this person doesn’t do well in school, this person is poor, this person dances well, this person lives in a black neighborhood, this person should be feared. The process of making these connections is called bias. It can happen unintentionally. It can happen unconsciously. It can happen effortlessly. And it can happen in a matter of milliseconds. These associations can take hold of us no matter our values, no matter our conscious beliefs, no matter what kind of person we wish to be in the world.

The concept of stereotypes dates back to the time of Plato, whose dialogues explored the question of whether one’s perceptions correspond to the actual state of affairs. But the term didn’t enter the popular discourse until the 1920s, introduced not by a scientist but by a journalist concerned that the news coverage of important issues was being filtered through the “preconceived notions” of both reporters and the public – a problem we still wrestle with today. 

Walter Lippmann was considered one of the most influential journalists of the twentieth century. He spent more than fifty years as a newspaper columnist in New York and Washington, D.C., chronicling war, politics, social upheaval, and demographic change.

He applied the term “stereotype” to what he called “the pictures in our heads” – impressions that reflect subjective perceptions but stand in for objective reality. The word comes from the old typesetting process, in which a mold of a message is cast on a metal plate and replicated in the printing process again and again – mimicking the unchecked spread of ideas that we only presume to be true. Those ideas then dictate how we interpret what we see.

(Eberhardt 2019, 31-32)

Lippmann understood the role and influence of stereotypes. “For the most part we do not first see, and then define, we define first and then see,” he wrote in his 1922 book, Public Opinion. “In the great blooming, buzzing confusion of the outer world we pick out what our culture has already defined for us, and we tend to perceive that which we have picked out in the form stereotyped for us by our culture.”

His work led him to worry that Americans might make rash and illogical civic and political choices if stereotypes blinded them to information that didn’t conform to what they already believed. And that is exactly what is happening now.

Psychologists today dub what worried Lippmann “confirmation bias.” People tend to seek out and attend to information that already confirms their beliefs. We find such information more trustworthy and are less critical of it, even when we are presented with credible, seemingly unassailable facts that suggest otherwise. Once we develop theories about how things operate, that framework is hard to dislodge.

Confirmation bias is a mechanism that allows inaccurate beliefs to spread and persist. 

(Eberhardt 2019, 33)

Lippmann was not concerned with the idea of stereotypes as a precursor to prejudice nor as a rationalization for it. In fact, the attitudes he expressed toward racial and ethnic intolerance would brand him a bigot today. He seems to have been a hostage of his own stereotypical thinking: In 1919, he belittled upwardly mobile blacks who aimed to blend into white America, labeling them victims of “the peculiar oppressiveness of recently oppressed peoples.” He advocated for the “mass evacuation and mass internment” of Japanese Americans in California after the bombing of Pearl Harbor. And his advice to other Jews wrestling with anti-Semitism was to lie low, blend in, and not call attention to their own “sharp trading and blatant vulgarity.” The son of German Jewish émigrés, Lippmann was a Phi Beta Kappa graduate of Harvard who would later applaud a plan limiting Jewish admission and suggest that “too great a concentration” would be “bad for the immigrant Jews as well as for Harvard.”

(Eberhardt 2019, 34)

The elements of that simpler model tend to rest on concepts of “us” and “them” and are driven by cultural, political, and economic forces to protect the status quo. Stereotypes help prop up the existing social order by providing us at least with the illusion of “an ordered, more or less consistent picture of the world,” Lippmann observed. It may not be the actual world, but we are comfortable there.

So comfortable that we ultimately adapt to and embrace stereotypes, rooting them so deeply that they’re passed along unquestioned to each new generation, over decades and centuries. Without our permission or even awareness, stereotypes come to guide what we see, and in so doing seem to validate themselves. That makes them stronger, more pervasive, and resistant to change.

The “fictions and symbols” they represent are the thought paths that lead to expressions of implicit bias. Yet, as Lippmann contends, we continue to “hold to our stereotypes when we might pursue a more disinterested vision” because they have become “the core of our personal tradition, the defenses of our position in society.”

Just like categorization, the process of stereotyping is universal. We all tend to access and apply stereotypes to help us make sense of other people. However, the content of those stereotypes is culturally generated and culturally specific. In the United States, blacks are so strongly associated with threat and aggression that this stereotypic association can even impact our ability to accurately read the facial expressions of black people. For example, a black man who is excited might appear angry. Fear can be misread as outrage. Silence taken as belligerence.

(Eberhardt 2019, 35-36)

In this case, researchers found that the more antiblack bias the parents exhibited on the survey, the more antiblack bias their children exhibited on the IAT. But only for children who identified more closely with their parents – children who reported that they frequently do what their parents tell them to do, want to grow up to be like them, want to make them proud, and enjoy spending time with them. As it turns out, their parents are not just sharing their time, love, and resources with their children; they are also sharing the bias they carry around in their heads.

Even dogs are exquisitely attentive to the behavior and emotions of the families they live with. Dogs are considered “best friends” to humans because of their unique ability to connect to us. They register the reactions of their owners to figure out how to read the social environment. Consistent with this idea, canine researchers in France found that dogs seize upon the subtle movements of their owners to determine how to react to approaching strangers. The researchers instructed the owners to take three steps forward at the sight of the stranger, take three steps back, or remain in place. When the owners stepped back, the researchers found that the dogs behaved in a more protective manner: They looked more quickly at the stranger, hovered around the owner more, and were more hesitant to make contact with the stranger. With three small steps, the owners were telegraphing a message to their dogs: Beware.

Well-meaning human adults can also be influenced by the non-verbal behavior of others. Let’s take media as an example. People typically assume that having black characters play more powerful, positive roles on television and in the movies will curb bias. Yet researchers have found that even in popular television shows that feature black characters playing such roles, white actors tend to react more negatively to black actors than to other white actors on-screen. This bias is exhibited through subtle, nonverbal actions – a squint, a slight grimace, a small shift of the body – yet it still has impact. It leads those viewers who tune in to those shows to exhibit more bias themselves. 

(Eberhardt 2019, 40-41)

Decades of research have shown that across a variety of professions people care as much about how they are treated during the course of an interaction as the outcome of that interaction. In the policing context, this suggests that people stopped by police care as much about how police officers treat them as they do about whether they got a ticket. In fact, both research and real-life experience have shown that if officers act in accordance with four tenets – voice, fairness, respect, trustworthiness – residents will be more inclined to think of the police as legitimate authorities and therefore be more likely to comply with the law.

(Eberhardt 2019, 83-84)

Perhaps the most famous demonstration of selective attention was developed by two cognitive psychologists named Daniel Simons and Christopher Chabris. The demo involves asking people to watch a silent, thirty-second video clip of two teams of people (one in light-colored shirts, the other in dark-colored shirts) passing around a basketball. Unsuspecting viewers are asked to count the exact number of passes made by the team in the light-colored shirts. People are so focused on accurately counting the number of passes that more than half of them completely miss the gorilla in the room: someone in a gorilla suit enters the scene on the right, pauses in the middle for a chest pound, and then exits the scene on the left. Their attention is so focused on the task at hand that their brain records the gorilla as irrelevant. The effect is so strong that those who miss the gorilla are shocked later by the realization that they never saw the giant animal enter the scene. 

(Eberhardt 2019, 85)

Racial disparities are woven through almost every layer of life on the outside after a prison term. Even the decline in the marriage rate among African Americans can be attributed in part to racial disparities in the era of mass incarceration. As Stanford legal scholar Ralph Richard Banks pointed out in his book Is Marriage for White People?, blacks and whites in the United States married at the same rate in the 1950s, but the black marriage rate has dropped dramatically over the last forty years as more black men were sent to prison and sentences increased. 

The disappearance of those men and the burden on their families have destabilized entire black communities. The ripple effects of mass incarceration on children are particularly unforgiving. Some five million children – roughly 7 percent of all children living in the United States – have a parent who is currently or was previously incarcerated, according to data from the National Survey of Children’s Health. 

Those children tend to grow up with limited social and economic resources. They might be handed off to relatives or wind up in foster care. They are likely to have poor grades and display behavioral problems at school and to experience mental and physical health issues like anxiety, depression, and asthma. And they are significantly more likely than other children to wind up behind bars themselves. 

(Eberhardt 2019, 115-116)

After crude measurements of skull size fell out of favor as a way to certify intellectual inferiority, psychologists brought a new tool to the table – the intelligence quotient test. In the early years of the twentieth century, the IQ test became an instrument that institutionalized bias as it was widely applied to a range of disfavored groups.

By 1910, American scientists had begun administering tests they believed could quantify the mental shortcomings of blacks and natives, relative to whites. Ultimately, that tool was also unleashed on newly arriving immigrants from Europe in the form of a wooden jigsaw puzzle. Those who failed to assemble it quickly and correctly could be labeled “feebleminded.” By 1915, federal law required that any immigrant who failed the test be turned away.

The puzzle was a categorization tool for “the sorting out of those immigrants who may, because of their mental makeup, become a burden to the state or who may produce offspring that will require care in prisons, asylums, or other institutions,” explained Ellis Island physician Howard A. Knox, the puzzle’s designer. He called it “our mental measuring scale.” But it was also a symbol of the sweeping influence of the eugenics movement, which aimed to weed out newcomers from countries that might pollute the American gene pool.

Scores on the timed test were used to demonstrate the superiority of the Nordics and advance the case for selective immigration. Immigrants from southern and eastern Europe – Italians, Hungarians, Jews, Slavs – were considered undesirables, a drain on the system that could bring down the entire country. Their low scores, relative to the scores of people from Nordic nations, were used to demonstrate their inferiority. With the passage of the Immigration Act of 1924, the immigration of undesirable groups to the United States was drastically reduced. 

For decades, IQ testing helped to map and tally supposedly inherent differences between ethnic groups – that is until Hitler’s “Final Solution” exposed the ultimate evil of sanctioned racism. 

(Eberhardt 2019, 142-143)

Our brains are constantly being bombarded with stimuli. And just as we categorize to impose order and coherence on that chaos, we use selective attention to tune in to what seems most salient. Science has shown that people don’t attend willy-nilly to things. We choose what to pay attention to based on the ideas that we already have in our heads. 

That makes attention a mechanism for reaffirming what we already believe to be true about the world. As William James, widely considered the “founder of modern psychology,” famously noted in 1890, 

Attention creates no idea; an idea must already be there before we can attend to it. Attention only fixes and retains what the ordinary laws of association bring “before the footlights” of consciousness.

(Eberhardt 2019, 143-144)

In Europe, immigration is being framed as a security risk as waves of people from Africa and the Middle East pour into once largely homogeneous nations. Perspectives on immigrants are shifting across the European continent, and hate crimes aimed at Muslims are rising dramatically, fueled by the same concerns about social disruption and notions of inherent inferiority that have fed the separation of races in America. 

Newly arriving immigrants are often described in terms that suggest pollution: “dirty,” “filthy,” “diseased.” Those perceptions can spontaneously trigger a cascade of protective impulses – even among people who support immigration and welcome newcomers. Yale social psychologist John Bargh and his colleagues have studied the connection between immigration status and fear of disease. It is tighter than we realize. 

(Eberhardt 2019, 163)

Just as physical space can change how we look, the particulars of a specific space can influence how our minds work and what judgments we make.

(Eberhardt 2019, 167)

In many ways, this is how bias operates. It conditions how we look at the world and the people within it, despite our conscious motivations and desires, and even when such conditioning can put us in harm’s way. Just as drivers are conditioned by how the roads are constructed in their native land, so too are we conditioned by racial narratives that narrow our vision and bias how we see the people around us. 

(Eberhardt 2019, 170)

Research shows that talking about racial issues with people of other races is particularly stressful for whites, who may feel they have to work harder to sidestep the minefields. Their physical signs of distress are measurable: Heart rates go up, blood vessels constrict, their bodies respond as if they were preparing for a threat. They demonstrate signs of cognitive depletion, struggling with simple things like word-recognition tasks.

Even thinking about talking about race can be emotionally demanding. In a study of how white people arranged the physical space when they knew they’d be in conversation with blacks, the arrangements varied based on the subject of those chats. When the study participants were told they’d be talking in small groups about love and relationships, they set the chairs close to one another. When they were told the topic was racial profiling, they put the chairs much farther apart. 

(Eberhardt 2019, 186-187)

When it comes to everyday practices that teachers are encouraged to try in school settings to address racial issues, empathy, wise feedback, affirmation, and high-quality contact tend to get short shrift. Instead, one of the most common practices schools foster is the strategy of color blindness. Try not to notice color. Try not to think about color. If you don’t allow yourself to think about race, you can never be biased.

That may sound like a fine ideal, but it’s unsupported by science and difficult to accomplish. Our brains, our culture, our instincts, all lead us to use color as a sorting tool. And yet the color-blind message is so esteemed in American society that even our children pick up the idea that noticing skin color is rude. By the age of ten, children tend to refrain from discussing race, even in situations where mentioning race would be useful, like trying to describe the only black person in a group.

Our adult discomfort is conveyed to our children and our students. When we’re afraid, unwilling, or ill equipped to talk about race, we leave young people to their own devices to make sense of the conflicts and disparities they see. In fact, the color-blind approach has consequences that can actually impede our move toward equality. When people focus on not seeing color, they may also fail to see discrimination. 

(Eberhardt 2019, 217-218)

There’s a tendency for textbooks and teachers to shrink or sanitize a subject that stains our nation’s legacy. That shields students from the true horror of the institution. But it also deprives them of the opportunity to explore both the brutality of oppression and the bravery of endurance, and to understand how the legacy of slavery still shapes our country’s racial dynamic, influencing us in ways we don’t even recognize. “Teachers – like most Americans – struggle to have open and honest conversations about race,” the survey confirmed. “How do they talk about slavery’s legacy of racial violence in their classrooms without making their black students feel singled out? How do they discuss it without engendering feelings of guilt, anger or defensiveness among their white students?”

(Eberhardt 2019, 221)

Both the biased and the target of bias are forced to dwell in the roles they play.

(Eberhardt 2019, 225)

Universities, by their very nature, have long been both drivers and reflections of broad social change. Idealistic young people – unchained from parochial views and exquisitely attuned to injustice – are not afraid to challenge authority and eager to change the world. 

The civil rights movement leaped onto the national agenda when four black college students in Greensboro, North Carolina, walked into a local Woolworth’s in 1960, sat at a “whites only” lunch counter, and refused to leave. That sparked months of protests and led to Freedom Rides and voter registration drives that drew a multiracial coalition of college students from all across the country to battle discrimination in the South.

The antiwar campaign that helped force an end to U.S. involvement in the Vietnam War was rooted in campus activism. When four students were shot to death by National Guard soldiers during a protest at Kent State University in 1970, the nation could not turn away. The wave of demonstrations that followed engaged four million students and shut down more than four hundred college campuses. At New York University, a window banner proclaimed “They Can’t Kill Us All.”

That sense of agitation is still alive today. In fact, college students’ commitment to activism and civic engagement is higher than it’s been at any time in the last fifty years, according to surveys of freshman attitudes. And more students rate themselves “liberal” today than at any time since 1973.

But the 2016 election of Donald Trump galvanized liberal students and emboldened right-wing fringe groups, turning college campuses into battlegrounds over whose rights deserve protecting and whose voices are heard. 

The clash of values came to a head at the University of Virginia on August 12, 2017, as hundreds of white nationalists, carrying guns and Confederate flags, rallied in downtown Charlottesville, Virginia, to protest the city’s proposal to do away with a monument of the Confederate general Robert E. Lee. That march turned into a maelstrom of brawls and beatings; one woman died when a white nationalist plowed through a crowd of counterprotesters in his car. 

The night before, more than a hundred torch-bearing neo-Nazis had paraded boldly through the heart of the UVA campus, in a clear challenge to egalitarian norms on race that have been developing in the United States over the last half century.

College is a place where young people discover and reinvent themselves, where the norms that guide our thinking and govern behavior are set and challenged. What starts on campuses migrates out into the larger culture. That makes universities an incubator for nascent social movements and a barometer that can measure where our country is headed.

(Eberhardt 2019, 228-229)

But we all have multiple selves that we carry around inside us. Which self dominates – to guide our thoughts, feelings, and actions – is, in part, a function of the situations we find ourselves in. The self that emerges at any given moment is not entirely under our control. 

(Eberhardt 2019, 236)

Moving forward requires continued vigilance. It requires us to constantly attend to who we are, how we got that way, and all the selves we have the capacity to be. 

(Eberhardt 2019, 250)

But you can condemn what people say without condemning their legal right to say it. That’s intrinsic to the success of many history-making campus movements. And many students felt there wasn’t enough support for those who resisted and tried to douse the marchers’ rhetorical fires. It was as if the moral high ground belonged to those who exercised their free speech rights and not to those who acted to protect the rights of others to live with dignity. 

University administrators and leaders across the country are trying to find a balance, but often seem more concerned about emphasizing the value of the legal standards than the value of the lives that are being diminished, demeaned, and dehumanized. There is a rush to protect the law but more foot-dragging when it comes to protecting egalitarian norms on their campuses. 

I understand why professors are conflicted about weighing in and why universities don’t want to fan the flames or encourage uncivil disputes. But numerous students told me that what they perceived as biased or insensitive comments made by their classmates often went unchallenged in class. When those comments are ignored, an opportunity for educating everyone is missed. 

(Eberhardt 2019, 252-253)

The research findings and the experiences of those students run counter to views on race that have been percolating in this country for years: Affirmative action has tilted the playing field so that minorities have the advantage on everything. White people are losing ground economically because of our society’s infatuation with diversity. 

More than half of white Americans – 55 percent – believe there is discrimination against white people in the United States today, according to a 2017 survey by Harvard University’s School of Public Health, the Robert Wood Johnson Foundation, and National Public Radio. “If you apply for a job, they seem to give the blacks the first crack at it,” one middle-aged white man from Ohio told interviewers. “It’s been going on for decades, and it’s been getting worse for whites.” He was disgruntled because a black man made the finals for a promotion he didn’t get. The job wound up going to a white man many years younger than he.

(Eberhardt 2019, 270-271)

But implicit bias can be layered and complicated. It’s simple to explain, but not so easy to see or to rectify. And the value of training, with all its variables, is often hard to quantify. The vast majority of implicit bias trainings are never rigorously evaluated, in part because measuring their worth is hard. There are no agreed-upon metrics developed by scientists for evaluating training effectiveness. Should the training lead to an immediate reduction in implicit bias? That’s a tall order considering that these implicit associations have been practiced over a lifetime. What would a reduction in implicit bias even look like? Should the training lead to better employee decision making? Should it lead to improvements in customer satisfaction? And how would we measure and parse blame or credit for any of that? 

Although this is an area that seems ripe for scientific discovery, many researchers are hesitant to get involved. They are concerned that the trainings are not evidence based, that they overpromise, and that they could even leave us worse off. From this perspective, everyone needs to just slow down until we can get the science right.

I have a different perspective. It’s not that social scientists are too fast to act; we are too slow. There is so much concern over the prospect of acting before we know enough about a phenomenon that we never get around to taking action. And because the scientific enterprise is iterative, we never seem to get to the point where we think we know enough. Social scientists fret so much about the purity and precision of science that we rarely throw ourselves into the messy problems of the world. From my perspective, engaging in the world, tackling thorny problems, can open the way to scientific discovery. If we don’t know enough as scientists to shed light on a problem, sometimes it is because we simply aren’t close enough to it. 

As a scientist, I don’t arrive on the scene with the answers. I arrive with questions. And my goal is to engage practitioners and encourage them to get involved in the business of putting the puzzle pieces together. 

Even beyond the difficulty of evaluating the effectiveness of training are the high financial stakes involved with declaring success or failure. Bias training is a fast-growing for-profit business, and finding fault with results could affect the bottom line of the trainers. Better to just check the box that says “Yes, we’ve trained our employees” and call it victory.

Most trainers in the business today are not scientists trying to solve the mysteries of the mind but entrepreneurs trying to deliver a message and sell a product that is in high demand. In fact, given the stakes, it may be simpler not to know whether the training works or why it may be less likely to work under certain conditions. 

(Eberhardt 2019, 279-280)

Understanding how implicit bias works and what it impacts is a good first step. But the real challenge – for both companies and individuals – is learning how to keep bias in check. Bias is not something we exhibit and act on all the time. It is conditional, and the battle begins by understanding the conditions under which it is most likely to come alive.

Among those conditions, speed and ambiguity are two of the strongest triggers of bias. When we are forced to make quick decisions using subjective criteria, the potential for bias is great. Yet more often than not, these are the very conditions under which hiring managers make initial decisions about job candidates. Due to the volume of applications and inevitable time constraints, managers may spend an average of only six seconds reviewing each resume they receive. That may lead them to resort to hunches and rely on familiar patterns to assist them in making rapid judgments about the applicant’s fit for the job. And when something as basic as a candidate’s name triggers unfavorable unconscious stereotypes, the bar to employment can rise. 

(Eberhardt 2019, 285)

Increasing diversity has long been hailed as the conventional remedy for implicit bias: all that working together will surely counter the power of stereotypes and erase our outdated ways of thinking about one another. 

It turns out that diversity itself is not a remedy for, though it may be a route to, eliminating bias. But we have to be willing to go through the growing pains that diversity entails. We’ve learned that diverse groups are more creative and reach better decisions, but they aren’t always the happiest group of people. There are more differences, so there is apt to be more discord. Privilege shifts, roles change, new voices emerge.

Success requires us to be willing to tolerate that discomfort as we learn to communicate, get to know one another, and make deeper efforts to shift the underlying cultures that lead to bias and exclusion.

It doesn’t just come down to “Am I a bigot, or am I not? Can I or can I not get trained out of this?” Bias is operating on a kind of cosmic level, connecting factors and conditions that we must individually make an effort to comprehend and control. And it deserves a cosmic response, with everyone on board.

When it comes to combating bias, it’s not enough to be alarmed by white supremacists and Nazis while ignoring the ways we rely on stereotypes to marginalize the unfamiliar “others” among us.

The battle against explicit bias is taking place on a very public stage, where icons are being brought down for what might have been indulged a generation ago. 

(Eberhardt 2019, 291-292)

But with new appreciation, I also recognized that the capacity for growth comes from our willingness to reflect, to probe in search of some actionable truth. 

(Eberhardt 2019, 301)

References

Eberhardt, Jennifer L. 2019. Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do. N.p.: Penguin Publishing Group.

ISBN 978-0-7352-2493-3



