Radiolab spun off a podcast series on Supreme Court cases. The most recent one was on racial discrimination in jury selection. It’s excellent; have a listen.
When it highlighted some relevant statistics about black jurors being struck from juries by prosecutors, I thought “yep, in the aggregate, this is clear evidence of bigotry and prejudice”.
But upon reflection, I’m less sure.
Jury selection requires attorneys to be selective and discriminate: they want to choose the juries that will increase their chances of winning.
They will use many factors to evaluate good picks: answers provided, background, behavior, and more.
If an attorney is bigoted, he will prioritize his prejudice about skin color (or some other irrational factor) over increasing his chances of winning. He will waste precious strikes that he could have used better. His selection will be worse.
So, assuming that excluding black jurors changes the perspectives represented on the jury and affects the outcomes of trials, being bigoted must hurt his win-rate.
This would hurt his career relative to more rational attorneys or firms.
Given that trials are adversarial and highly competitive, and that jury selection is an important part of the trial, it is hard for me to believe that attorneys are so incompetent as to grant the opposing party such an easy advantage. This makes the bigotry thesis less likely in my opinion.
The alternative thesis is that the observed outcome is driven by rational decisions. The podcast explained a few possible reasons of that kind.
Taking a step back, what kind of evidence would let us evaluate and distinguish those two theses?
Although those may not be conclusive data points either (given how hard it is to tell bigotry apart from rational choice), here are some I’d be curious about:
Do prosecutors tend to strike black jurors more often than defense attorneys do?
Assuming they are less bigoted (which is not obvious), do black attorneys tend to strike black jurors less often, on average?
Do attorneys who keep black jurors on their juries win more of their cases, on average? (A rough sketch of such a comparison follows this list.)
In an experiment, if you train some attorneys with this knowledge, does their win-rate improve?
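To make the last two questions concrete, here is a minimal sketch of how one might compare win rates between attorneys who kept black jurors and those who struck them. All the counts and groupings below are invented purely for illustration; a real analysis would need case-level data and controls for case type, venue, attorney experience, and so on.

```python
# Hypothetical sketch: do attorneys who keep black jurors win more often?
# Every number below is made up for illustration only.
from scipy.stats import chi2_contingency

# Rows: attorneys who kept black jurors vs. those who struck them.
# Columns: cases won vs. cases lost.
table = [
    [120, 80],   # kept black jurors: 120 wins, 80 losses (fabricated)
    [100, 100],  # struck black jurors: 100 wins, 100 losses (fabricated)
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")

# A small p-value would suggest the two win rates differ, but by itself this
# cannot distinguish bigotry from rational selection; it only probes whether
# striking black jurors correlates with worse outcomes.
```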
Aside from sharing those thoughts, I want to mention some related problems that the episode illustrates: how to define and prove instances of discrimination (beyond aggregate and general evidence), and how people adjust their behavior to specific anti-discrimination rules (it’s not clear that you can control or reduce bigotry, even more so in a monopoly service that “customers” can’t avoid).
I watched three relatively recent movies about obedience to authority and the corrupting effects of power. All three are quite chilling if not outright disturbing (not recommended for children). They show how far people can go (and how easily) when guided by “authority” or granted authority themselves.
The most obvious question is what factors (if any) shield individuals from such influence. But we know little about that, as ethical considerations have limited the pursuit of such studies.
How will you respond to such knowledge? These studies and others show that we are mistaken to think ourselves and the people around us immune, even after learning of the results.
Rigorous statistical studies have little effect on the worldviews of people who learn about them, whereas people tend to integrate anecdotes better (as Veritasium’s Derek Muller recently discussed in Why Anecdotes Trump Data). Hopefully, seeing those experiments come to life as movies will be impactful in that way.
Experimenter (2015)
Experimenter depicts the famous Stanley Milgram experiments: unknowing participants are set up in a fake teacher-learner experiment where they are asked to shock the learner (a confederate following a recorded script, who fails to learn on purpose) with increasing voltage. The question is whether the participants, in their roles as “teachers/zappers”, will go all the way to the apparently fatal shocks.
It is probably the best-established result of the experiments I’ll cover, due to its robustness (multiple variants producing similar results) and reproducibility (although very few replications have been attempted, due to ethical concerns about the possible psychological effects on participants). Milgram was trying to understand how the atrocities of Nazi Germany could happen.
Read more in Milgram’s Obedience to Authority.
The Stanford Prison Experiment (2015)
The Stanford Prison Experiment tells the story of Philip Zimbardo’s experiment at Stanford. The synopsis: “Twenty-four male students out of seventy-five were selected to take on randomly assigned roles of prisoners and guards in a mock prison situated in the basement of the Stanford psychology building”.
The original experiment was cut short, as things turned bad very fast. I don’t know whether it has ever been repeated. The movie was very disturbing.
Find out more in Zimbardo’s The Lucifer Effect: Understanding How Good People Turn Evil.
[Update 2018-07-15:] It turns out much of the experiment was manipulated, so it should not be relied on.
Compliance (2012)
I won’t go into much detail to avoid spoilers, but Compliance describes a scam that was perpetrated on fast food restaurants over 70 times.
In short: “When a prank caller claiming to be a police officer convinces a fast food restaurant manager to interrogate an innocent young employee, no one is left unharmed”.
Obviously, this is the sketchiest and most unethical of the three as far as scientific rigor goes, but it took place in real-world conditions as opposed to a lab with volunteers.