Fn 16: American as Apple Pie: How Racism Gets Baked into Technology

"In the context of criminal justice, our past forms of racial profiling and policing get encoded in new systems because that history forms the dataset, the training data that then is used to make judgements." — Dr. Ruha Benjamin

Anil speaks with Dr. Ruha Benjamin, Associate Professor in the Department of African American Studies at Princeton University and author of Race After Technology, about design discrimination. They discuss how systemic racism is replicated in the technology we use and how tools like artificial intelligence, machine learning and software used in the criminal justice system are shaped by racial bias.

Then he speaks with James Cadogan, VP of Criminal Justice, and Kristin Bechtel, Director of Criminal Justice Research, of Arnold Ventures, a non-profit that funds the Public Safety Assessment, a tool used by judges that predicts a person's likelihood to reoffend or return to court if released.

Big thanks to LinkedIn for supporting the second season of Function.


Transcript

Anil Dash: Welcome to Function. I'm Anil Dash. The last few episodes of Function, we've been talking about the ways that bias can get baked into the technology that we use. Sometimes it's a subtle influence, sometimes it is something that really changes the way the tech works completely. And the common denominator in all of these discussions about bias in technology is that it doesn't actually matter how good the intentions are of the people creating the tech. Everybody has good intentions. There's not some evil bad guy trying to make racist software, but it still ends up happening. So why does it happen?

Anil Dash: We can set out to build an app that's accessible to everyone so we can help everybody and make their lives easier, but end up with a tool that doesn't work for everyone or worse sometimes it actually causes harm. And all of that can happen simply because we haven't taken the time to challenge our own biases. Unless we're actively challenging ourselves as creators to think about what we might be getting wrong, or what harms we might cause, or what prejudices we're bringing to the table when we create technology, sometimes that tech won't have the good effect that we're hoping that it would. So today we're going to have a conversation about how systemic racism gets built into the technologies we use every day.

Anil Dash: In tech we have a saying, it's real simple, "Garbage in, garbage out."

Dr. Benjamin: We could revise that to say, "Racism in, racism out."

Anil Dash: That's Dr. Ruha Benjamin, she's a professor at Princeton, and the author of the book Race After Technology.

Dr. Benjamin: Historic patterns of discrimination and inequality are the input for many automated systems that then reproduce it.

Anil Dash: Now we're seeing those patterns of inequality get replicated into the tools that are supposed to be bringing us to this better, safer, more inclusive future.

Dr. Benjamin: We often think of technology as neutral, objective, as asocial, kind of floating above society rather than a reflection of what has existed and what exists today.

Anil Dash: And there are lots of examples of this. Take artificial intelligence. This can mean a lot of things in technology, but basically these are systems that learn by being trained on data. That data is captured from the real world and it is shared and created by people who are humans with all the flaws that humans have. So what you end up with is a dataset that these systems are being trained on, or worse, technologies that are being trained on that data, that reproduce all the biases, all the historical injustices that humans have inflicted upon one another. The end result is software, and technology, and devices that have the same blind spots that all of society does.
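
To make the "garbage in, garbage out" point concrete, here is a minimal, hypothetical sketch in Python. Everything in it is invented for illustration: two made-up applicant groups, made-up hiring rates, and a deliberately simple "model" that just memorizes the hire rate for each kind of applicant in the historical record. It is not any real system; it only shows how something trained on a biased record hands that bias back when it scores new, equally qualified people.

```python
import random
from collections import defaultdict

random.seed(0)
GROUPS = ["group_a", "group_b"]

def historical_decision(qualified, group):
    """Hypothetical past hiring decisions: equally qualified people in
    group_b were hired far less often. The rates are invented."""
    if not qualified:
        return False
    return random.random() < (0.9 if group == "group_a" else 0.5)

# The "training data" is simply the biased historical record.
history = []
for _ in range(20_000):
    group = random.choice(GROUPS)
    qualified = random.random() < 0.5
    history.append((group, qualified, historical_decision(qualified, group)))

# A deliberately simple "model": the observed hire rate for each
# (group, qualified) combination in that record.
counts = defaultdict(lambda: [0, 0])  # maps key -> [hires, total]
for group, qualified, hired in history:
    counts[(group, qualified)][0] += int(hired)
    counts[(group, qualified)][1] += 1

def predicted_hire_rate(group, qualified=True):
    hires, total = counts[(group, qualified)]
    return hires / total

# Score brand-new, equally qualified applicants: the disparity baked into
# the old decisions comes straight back out as the "prediction".
for group in GROUPS:
    print(f"{group}: predicted hire rate for a qualified applicant = "
          f"{predicted_hire_rate(group):.2f}")
```

Swapping the frequency table for a more sophisticated learner would not change the outcome; a better model would simply fit the same skewed record more faithfully.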

Anil Dash: One of the most popular categories of artificial intelligence software these days is facial recognition software, and there's variants of it for just skin recognition and things like that. Most of the time these days, those systems have been created in Western countries or trained on populations that are mostly white people, and they do a better job of facial recognition for white people than they do for those of us with darker skin. They're particularly bad at recognizing black people's faces and black people's skin.

Anil Dash: I had an experience with this myself, which was not nearly as serious as these things could be, but it was a hand dryer in a public bathroom. And whatever system they were using in that hand dryer, and I don't think it was the most cutting-edge technology, it didn't get my skin. It didn't pick me up. And so I actually had to use my sleeve, which was a lighter colored shirt, and that made the hand dryer turn on and then I could dry my hands. Now fortunately we always have the option of just, like, wiping our hands on our jeans when the hand dryer doesn't work, but think about things that are far more serious.

Anil Dash: Like right now, almost every big tech company is working on self-driving cars, and they use artificial intelligence to try and detect people in front of them so they know when to stop. Now, if that's a system that is doing a bad job of recognizing black people's faces, who's going to be most in danger when a self-driving car's on the street?

Anil Dash: Let's take an even more serious example, one that's happening today. Across the country, many different law enforcement jurisdictions are adopting software that they use to give what's called a risk assessment. This is judging whether a person is going to return to court or whether they're likely to offend again. The software often uses an algorithm that's based on historical training data, which is to say data that has that same skew, that same bias that the criminal justice system in America has always had. Studies have shown that these types of risk assessments are often biased against black people.

Anil Dash: Later in this episode, we're going to have a conversation with people who make one of these risk assessment systems. And honestly, I went in thinking I knew exactly what I was going to hear and I had my preconceived notion of how I was going to feel about it, but the conversation surprised me. Their perspective was that they were trying to help people and keep them from being exploited by the cash bail system, especially because these are people who haven't even been convicted of any crime. But before we get into all that, we're going to hear from Dr. Benjamin again, because she's going to explain how racism gets baked into our technology in the first place.

Anil Dash: Ruha Benjamin, thank you for joining us on Function.

Dr. Benjamin: It's a pleasure to be here. Thanks for having me.

Anil Dash: Discriminatory design, this idea that software and technology have values baked into them. I wanted to get started with, what are some of the ways that shows up in the algorithms or in the data that are collected around us and that can turn into building racist systems?

Dr. Benjamin: Discrimination or racism can get encoded in our automated systems at a number of points from the decisions that programmers make in terms of actually deciding what codes to include. So if you think about in the context of hiring algorithms, deciding who makes a good employee or not, that decision is not objective but it's subjective and it's often based off of historic patterns of employment. And so if you have a workplace that's predominantly male or predominantly white, then in training the algorithm how to judge for a good employee, you're necessarily baking in that history of discrimination, gender- and race-based discrimination, into the future predictions of who makes a good employee. The same in all kinds of contexts, whether it's in finance, deciding who to loan money to or not, whether it's in healthcare, what treatments to administer.

Dr. Benjamin: And then certainly in the context of criminal justice, our past forms of racial profiling and policing get encoded in new systems because that history forms the dataset, the training data that then is used to make judgements.

Anil Dash: So it's almost like we can just capture whatever we see around us in this pattern recognition and turn it into software, good or bad.

Dr. Benjamin: And the danger is that unlike with a human being that we see to be racist or prejudiced, with the algorithms, with the automated systems, we assume that they're more objective. And so we're less likely to question them. And that's where the danger lies. It's that the discrimination is shrouded in this veneer of objectivity, like it rises above human society. And what we need to do now is pull it down to earth and open these black boxes and think about how assumptions, biases, values are getting built into systems that are not at all objective.

Anil Dash: So there's sort of the sense of like, "Oh these are just tools and it's not their fault and they're neutral." Or even, "The robot is on our side." Right?

Dr. Benjamin: Exactly. Many people feel like technology is going to save us from all of our social problems and issues, and many people are worried that technology is going to slay us, the robots are coming to get our jobs, they are going to sort of devour us. Both of those are different versions of what we like to call techno determinism. So it assumes that human beings are just receivers of technology rather than shapers and designers. And so I think whether it's dystopian or utopian, both assume that technology is what's in the driver's seat rather than humans behind the scenes, behind the screens, who are making the decisions that then impact the rest of us.

Anil Dash: And some of that techno determinism comes from like what we're taught by sci-fi or what we see in popular culture images of tech being this solution to everything, or the machine makes the choice, not a person.

Dr. Benjamin: Absolutely. And it's also kind of a symptom of our faith in science as an objective arbiter of reality. That it's just a straightforward reflection of what exists rather than as a shaper and reflection of human concerns. And those concerns are also biases and values and assumptions that then get viewed as neutral and objective. And so technological determinism grows out of a sense that science is in a bubble and is not impacted by social processes.

Anil Dash: One of the things I think that's sort of implicit in what you're saying is almost everybody involved in the decision, for example, you're a company that's doing a lot of hiring and you're saying, "Well, we want to be able to handle all these applicants and also we want to be fair to them." You can come in actually pretty well intentioned if you didn't know how the technology works and say, "Oh well, the computer would be more fair than me. I might be biased but then the software won't be." And you could even do this while trying to do well. Like you could have good intentions and put in place one of these technologies that's been trained on biased data.

Dr. Benjamin: Absolutely. And so I think that's a reason why we should not focus so much on the intentionality of the designers to do harm necessarily, because a kind of obliviousness to social reality and these historic patterns is more likely to reproduce discrimination enabled by technology. So precisely because you're not thinking about the history or the sociology, you're likely to reproduce the default settings. And also a kind of indifference to whether the technology is going to harm certain groups enables the harm to proceed, rather than the idea that there is a racist programmer behind the screen. There may very well be in some contexts, but more likely it's people who are just not thinking about the issues at hand or who are indifferent to them to begin with.

Anil Dash: People want to look for the boogeyman. They want to have this sort of the mustache-twirling villain who's like... I mean, I'm sure there are people who are explicitly biased who are making these technologies, tech's no different than any other industry in that regard. But they don't have to be, they don't have to be setting out to build a biased system. They could simply, as you said, be oblivious and that's enough to replicate the problem.

Dr. Benjamin: Absolutely. And that's why it's important to know our history with respect to science and technology. Because oftentimes we revise history when we look at what we've decided are very unethical practices. We create bogeymen, where in reality at the time, the people who were engaged in, let's say, eugenics practices, or the US Public Health Service experiments in Tuskegee, or any number of experiments in which vulnerable populations were used to hone science and technology, the people doing it thought of themselves as very progressive, and the people around them often saw those practices as somehow moving society forward. So at the time they were often well-respected scientists, technologists, and on the political spectrum they often fell on the more progressive side. And so knowing that would alert us to how that can happen again today, how it is happening again today, where progressive sounding values can actually create a veneer in which we stop questioning science and technology.

Anil Dash: So there's this sense where no matter how good the intentions are, if you're not fluent enough in this history of there being injustices, you're going to probably make a system that replicates these problems.

Dr. Benjamin: Absolutely. And so when we think about just the idea of a systemic inequality or systemic racism, the issue that the systemic is meant to sort of mark is that we can just all go about doing business as usual. Clock in and out of our jobs. Not necessarily have any overwhelming feelings of malice or animus towards particular groups. But by just following the rules and doing business as usual, we are reproducing systems of inequality. And so it takes the personal motive, it sort of demotes that and says that's not really the issue. It's the impact of our actions and the fact that we're not questioning business as usual.

Anil Dash: This is interesting. I look at this example that keeps coming up and the one that's top of mind for me, which is technology is increasingly being used in the criminal justice system, in law enforcement systems, in contexts like safety assessments of people who've been accused of a crime, and other tools that judges and people in the criminal justice system have. I'm curious, in that area, is that something where the same narrative is playing out, that we're seeing well intentioned people who think of themselves as progressive, putting technology into place, trying to help?

Dr. Benjamin: What's interesting is that the adoption of automated systems in this context and others often grows out of a basic acknowledgement that human beings often get it wrong. So in some ways it grows out of an awareness that human bias and discrimination exists. And rather than deal with that squarely, we're outsourcing the decisions to technology, presuming that it's going to make better decisions and ignoring the fact that we do the designing.

Dr. Benjamin: So if we look at different points in the process of adopting, let's say, a risk assessment technology in the context of criminal justice, from the point of view of programmers who are deciding how to actually quantify risks, that's not an objective science. You have to encode your own opinions about what constitutes a risky individual and what would make them more likely to commit another crime in the future.

Dr. Benjamin: And so when we look at the surveys that are administered to parolees to decide if they're risky or not, some of the factors that are being used in the algorithmic process are whether the neighborhood that they're going back to has a high crime rate or a high unemployment rate, whether they have family members who have a criminal record, whether they have a history of unemployment. And all kinds of other factors, most of which have been shaped in one way or another by forms of racial discrimination. So a neighborhood that is predominantly black likely has a higher rate of unemployment because of employment discrimination. Or, black individuals who are being assessed are likely to have more people in their family who have a criminal record because of ongoing forms of racial profiling.

Dr. Benjamin: So for every variable that we're looking to, to quantify the risk of the individual, if we ignore how racism has shaped their life and their life chances, then what we're doing is we're building up all of these variables, and then casting them as higher risk as if the risk is embedded in them as an individual rather than in their context and in the way that racism has shaped their contexts. So for those who've studied these risk algorithms, they find that white individuals are more likely to be scored lower risk and black individuals higher risk. And then when they follow these individuals over time, that assessment, one, turns out to be wrong. But when they look at the actual input data, all of that is structured by racial domination and social inequality. But the people who are encoding this ignore that context.

Dr. Benjamin: And so one of the things to consider is that even in the way that we pose a problem or a question for technology to solve, just at that point, before you even start working with the technology itself, just asking a question itself reflects certain forms of ignorance and various kinds of assumptions. For example, the idea that an individual is risky or not, rather than our social world actually produces risk for certain populations more than others.

Anil Dash: Right. It seems like almost a double victimization to sort of say, you're already, because of circumstance, because of context, living in a neighborhood that maybe does have higher crime, that does have higher risks in some ways. And then to be penalized for that because that's where you are from or where you can afford to live seems absurd. But it seems like because there's this layer of abstraction with the technology, people aren't realizing that those kinds of absurdities are being used to make decisions.

Dr. Benjamin: It reminds me a lot of this diagnosis in the 19th century. This really well known scientist by the name of Samuel Cartwright created a condition that he called drapetomania, which diagnosed enslaved people who were found running away from slavery, who were runaways. And so he said they had a mental condition called drapetomania. And so he predicted that if slave owners mistreated them or whipped them more, they would develop this inexplicable condition of wanting to run away. It was published in the leading journals and people really took it seriously. He had all kinds of treatments for this mental condition. But this is an early example where, rather than diagnosing a sick system, in that case slavery, in our case mass incarceration and criminalization, rather than look at the systemic pathology, we pathologize the individuals who are trying to survive it and we say that, "You have a problem because you're trying to run away from this oppressive system."

Dr. Benjamin: And so in some ways what we're dealing with today are like digital versions of drapetomania by focusing on the risk factors of individuals rather than the institutions that make it just hard to live.

Anil Dash: I just want to sit with that for a minute. I mean, the heaviness of somebody doing the only thing they can to survive. I'm curious, as we sort of said, there's this huge gap between the best of intentions and the reality of implementation. And then there is the sort of nefariousness of not knowing how these systems get biased. Is it possible to build a system that is a little more just using technology? Could somebody build software that does help properly identify risk for people if it were informed by how we've built biased systems in the past? Can you get it right or is it intrinsically wrong?

Dr. Benjamin: I think so long as the input to that is inequitable and unjust, it's hard for the technology to redress that. I do think that technology can be used to assist wider processes of social change as long as we're dealing with the fundamental issues rather than trying to create tech fixes for various social problems. But I think the desire to look for technology to fix something that is so foundational-

Dr. Benjamin: ...to fix something that is so foundational is, in some ways, not a good use of our energy. We should try to deal with the root causes, whatever the problems are. And so, that's why I mentioned at the beginning, just thinking about, what is the question? What is the problem that we're trying to fix? Is it just bias, individual bias, and so, we can change the algorithm so that it somehow alerts us to that? Or is it something more rooted in the everyday operation of our organizations, of our institutions?

Dr. Benjamin: Kind of zooming the lens out to that broader context, and trying to address that, is what I would advocate in the end. For all of us, it would behoove us to question what we think human beings are capable of, and once we do that, start to actually engender our highest ideals in what we do every day, what's within our grasp. We don't have to wait to create some shiny, new gadget or shiny, new system that is going to make us better. We become better ourselves.

Anil Dash: Great. Thank you so much for joining us on Function.

Dr. Benjamin: It's been a pleasure. Thanks for having me.

Anil Dash: But there are some people who do believe that new systems and new technologies can make us better. After the break, my conversation with a nonprofit that has created a risk assessment tool used by law enforcement. They say they're reducing the number of people held on bail, but critics of this type of software say it's just enabling a newer, more high-tech version of the same old racist system.

Anil Dash: Welcome back to Function. I talked about criminal justice risk assessment software with Dr. Benjamin. In this category, risk assessment software, it's a kind of tool used by law enforcement agencies, and it gives them a risk number, sometimes literally just a number they see on a screen, that is sort of a recommendation about how likely a particular defendant is to re-offend in the future, or whether or not that person's going to return to court if they're released by the court.

Anil Dash: ProPublica did a really extraordinary story about this kind of risk assessment software, and the bottom line of what it found, you should read the article, but the takeaway was, this software often gives biased recommendations, especially for black people, and those recommendations are often wrong.

Anil Dash: But to be clear, there are several of these different kinds of risk assessments out there, and they aren't all the same. The most egregious ones don't claim to address systemic racism, and consider obviously biased factors like the neighborhood that a person lives in, or their parents' education status, and that stuff skews the data heavily.

Anil Dash: But some of the other risk assessment software like, for example, the Public Safety Assessment, or PSA, created by the nonprofit Arnold Ventures, it claims to be designed to keep people out of jail, and the way it does that is it uses data to help lower-risk individuals avoid the cash bail system entirely.

Anil Dash: The PSA doesn't ask questions about a person's neighborhood or their educational background; it just kind of sticks to the facts about past criminal records. But even with that being true, studies have shown that black people are more likely to be convicted of crimes, and to receive longer sentences for those crimes, than similar white counterparts would. But it turns out, that's a point my guests already understood.

James Cadogan: Bias in the criminal justice data is something that we're going to have to contend with across all systems.

Anil Dash: James Cadogan is the VP of Criminal Justice for Arnold Ventures, the nonprofit that makes the Public Safety Assessment.

James Cadogan: We too often let the perfect be the enemy of the good, saying that, if we can't eliminate all bias at the front end of the criminal justice system and policing, then we shouldn't use any data in reform overall.

James Cadogan: And I think that misses the forest for the trees, and that there are real people who are sitting in jail who could be helped right now...

Anil Dash: I talked to James and Kristin Bechtel, Director of Criminal Justice Research for Arnold Ventures, about their Public Safety Assessment, and whether data that was captured from a racist system can be used to fight that racist system.

Anil Dash: To get started, I want to give people a little bit of background, talking about, in particular, the platform of Public Safety Assessment, right? Which is a tool used to assess sort of this likelihood during the pre-trial phase of getting a positive outcome, and what people want from the system. Can you describe, and I'll start with Kristin here, a little bit about what its role is in that entire process?

Kristin Bechtel: Just in terms of the PSA itself, it is a risk assessment that is used to inform the release decision, and more importantly, release conditions, that can really maximize the rates of court appearance within a community, and the rates of public safety. We are very much focused on providing objective information to judges and attorneys so they can take information, consistently process it, and make informed decisions, recognizing that, especially for judicial officers, the PSA itself doesn't actually make the decision. It just provides one source of information for them to consider.

Anil Dash: Okay. And then, James, who uses this tool? Who are the people that are going to be looking at this technology and saying, "This is going to help me make a decision"?

James Cadogan: Primarily, you're looking at judges who are going to rely on the Public Safety Assessment, if it's been adopted in their jurisdiction. But one of the things I think is important is to take a step back and contextualize how the Public Safety Assessment, or the PSA, is used, and why jurisdictions are choosing to adopt the PSA or other assessments.

James Cadogan: And principally, that's because we have a mass incarceration problem in the United States. The statistics tend to roll off the tongue, that we have 2.2 million people incarcerated across the country, and that is by far the greatest per capita incarceration rate of any country in the world. But there's an even bigger number that we're now starting to focus on, and that's in pre-trial detention in jails.

James Cadogan: So, every day across the country, in jails at a state and local level, there are 475,000 people at any given moment sitting behind bars, many of whom are detained pre-trial. That's before they've gone to court, before a judge has rendered any decision, before a jury has heard any testimony, just being detained pre-trial. 475,000, which translates to between 10 and 12 million people every year who are behind bars for that reason. That's just far too many, and it far dwarfs even the astronomical numbers that we think about when we think about mass incarceration.

James Cadogan: That problem of pre-trial detention, and the fact that it touches so many people throughout our country, is something folks are now paying attention to, which is fantastic. So, our mission on the pre-trial team at Arnold Ventures is to reduce unnecessary and unjust pre-trial detention. And as a philanthropy, we invest in and support and make grants to organizations who are doing the work that we think will help do that. And the Public Safety Assessment is just one part of that.

James Cadogan: Thinking about how judges make decisions, they're legally obligated to make decisions about who's in jail and who gets to go free. And what we know for sure is that judges are human, and human decision-making is biased. What we want to be able to do is, to the extent possible, to mitigate those biases, particularly the racial biases that we know are so prevalent in the criminal justice system.

James Cadogan: And so, giving judges a consistent source of information that will allow them to make consistent decisions case to case, is part of how we can go about creating a more just pre-trial apparatus jurisdiction by jurisdiction.

Anil Dash: It's an interesting starting point, right? Because I think it's sort of an inarguable assertion, right? Humans have biases, and we bring them to bear in what we do, and that's a recurring theme here on Function. We talk about how our biases, whether visible or not, get replicated into technology.

Anil Dash: But one of the other things we talk about a lot in the world of tech ethics is that algorithms replicate our human behaviors. We train them on our human systems, right? And I want to get into that a little bit, but to start before that, there are factors that you all consider in this platform, in PSA, providing assessment. Right? There's sort of your risk factors that you calculate in.

Anil Dash: Can you speak to that? What are the inputs that you put into a system like this, to have it sort of say, "I can form an assessment about this person's risk"?

Kristin Bechtel: The PSA has nine risk factors. The data sources for those risk factors primarily come from court data and criminal history data, so most of the information across three scales, one that predicts failure to appear, one that predicts new criminal activity, and a third that predicts new violent criminal activity, pulls from those two sources.

Kristin Bechtel: So, the majority of those risk factors look at past behavior. For example, for the failure to appear scale, there's risk factors associated with prior history of failures to appear, including the recency of failures to appear.

Kristin Bechtel: There's risk factors associated with prior convictions. There's risk factors associated with whether or not the individual has a current pending case that's open at the time of the current case in which the PSA is being completed, and whether or not the current charges are violent, and if the individual has served a period of incarceration for a conviction.
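
The episode never walks through the PSA's actual factors, weights, or cut points, so purely as an illustration of the general shape Bechtel describes, a fixed, transparent point scale over a handful of court and criminal-history factors that produces separate scores, here is a hypothetical sketch in Python. Every field name, weight, and threshold below is invented and is not the real instrument.

```python
from dataclasses import dataclass

@dataclass
class CaseRecord:
    """Hypothetical inputs drawn only from court and criminal-history data."""
    prior_failures_to_appear: int
    failure_to_appear_in_last_two_years: bool
    prior_convictions: int
    pending_case_at_arrest: bool
    current_charge_is_violent: bool
    prior_incarceration: bool

def failure_to_appear_points(r: CaseRecord) -> int:
    # Invented weights: past failures to appear and an open case add points.
    points = min(r.prior_failures_to_appear, 2) * 2
    points += 1 if r.failure_to_appear_in_last_two_years else 0
    points += 1 if r.pending_case_at_arrest else 0
    return points

def new_criminal_activity_points(r: CaseRecord) -> int:
    # Invented weights: prior convictions, an open case, prior incarceration.
    points = min(r.prior_convictions, 3)
    points += 2 if r.pending_case_at_arrest else 0
    points += 1 if r.prior_incarceration else 0
    return points

def new_violent_criminal_activity_flag(r: CaseRecord) -> bool:
    # A separate signal for new *violent* activity (again, invented logic).
    return r.current_charge_is_violent and r.prior_convictions > 0

def to_scaled_score(raw: int, max_raw: int) -> int:
    """Collapse raw points onto a small 1-6 scale, the single number a judge sees."""
    return 1 + round(5 * min(raw, max_raw) / max_raw)

record = CaseRecord(
    prior_failures_to_appear=1,
    failure_to_appear_in_last_two_years=True,
    prior_convictions=2,
    pending_case_at_arrest=False,
    current_charge_is_violent=False,
    prior_incarceration=False,
)
print("Failure-to-appear score:", to_scaled_score(failure_to_appear_points(record), 6))
print("New-criminal-activity score:", to_scaled_score(new_criminal_activity_points(record), 7))
print("New-violent-activity flag:", new_violent_criminal_activity_flag(record))
```

Because the weights are fixed and the inputs come from on-the-record court data, anyone with access to the same records could recompute the same numbers, which is the transparency and reproducibility point Cadogan and Bechtel return to later in the conversation.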

Anil Dash: There's a list here, and I think on the surface, they all sound like... as a layperson, what I would expect, right? It's like, "Okay. Somebody didn't show up before. Maybe they won't show up this time. That seems fairly straightforward." But that's me not knowing.

Anil Dash: One of the first questions that sort of jumps out on that is, where do you get that data? Who's telling you about that person's history?

Kristin Bechtel: Sure. In terms of the history, and the development of the PSA, data were captured or collected from a variety of different jurisdictions, but essentially, about 1.5 million cases were pulled and extracted from these different data sources, and 750,000 were used for analysis.

Kristin Bechtel: And just as a quick takeaway, when we think about all this data, those original data sources were used to develop pre-trial risk assessments in those specific jurisdictions, just so you have a little sense of the history.

Anil Dash: So, they apply to the places where the data came from.

Kristin Bechtel: That's exactly right.

Anil Dash: Okay.

Kristin Bechtel: And so, what's really important now in moving forward, and when we think about the application of the PSA, is that jurisdictions are using the PSA locally, and then working toward validating the PSA in their own jurisdiction with their own data. Which is a really important step in terms of not just the fidelity of a risk assessment, but the jurisdiction and that community understanding how that risk assessment works, how it's predicting, and how it's informing decisions.

Anil Dash: Okay. So, sort of what I'm hearing, if I'm hearing this correctly, is you're hoping they take that data and make sure it matches their own experience, too, in that context so they're not just sort of taking what the system says, but does it match what their experience has been in that jurisdiction? Like, "Does this approach work for us?", basically.

Kristin Bechtel: Right. The validation piece is one way that that is captured, but another piece is that before jurisdictions adopt risk assessments, and specifically the PSA, there's a process in which stakeholders, the community, have to identify local policies and practices, because the PSA itself doesn't actually make a decision. It generates information.

Kristin Bechtel: It's that information, then, that local policy and practice can be integrated in to say, you know, "We're looking at this information. It appears this is where individuals perhaps may have some struggles with getting to court. Rather than detaining that individual, what can we do with this information? What's our local policy and practice? What do we care about?" So that this person is not detained. It could be something as simple as providing court reminders. It could be something...

Kristin Bechtel: And I know this may vary based on the jurisdiction, rural versus urban, but it could be something as small as providing transportation. The number of individuals who struggle with making arrangements for childcare for employment, to go to school, it just compounds itself.

Anil Dash: This comes up here in New York City, as our transit system is a mess, and talking to folks here who... I've talked about, who are like, you know, "I almost missed my date because the trains, because I had to get my kid to daycare." So, you're saying that's a thing, that they would have the sort of flexibility to factor into this.

Kristin Bechtel: That's right. That's part of a step that's essential for a proper adoption of the PSA. It's developing a release conditions matrix, it's developing a decision-making framework, where that information can be inputted into the system.

Anil Dash: I'm curious, James, about when you give them that set of tools, what happens to sort of put that into place? What are the things they're doing to tailor it for the environment and the context that it's in?

James Cadogan: I'll take another step back, if I could, for a second, on where we are. One of the things we haven't talked about that's really important... The backdrop here is that we currently have a system in most states that's predicated on money, and a lot of people have heard about money bail, but unless you've had contact with the system or are studying it, you may not realize that, for the most part, when you are arrested, you don't immediately go to trial. You go before a judge for what's called an arraignment, where the judge decides, "Can this person be released before their trial date?" Which will be set for a few weeks, or a few months out. "Or do they need to be detained for some reason?"

James Cadogan: And judges are obligated to make that decision in every state across the country at arraignment, about whether or not somebody can be released or needs to be detained. And as it currently operates, we have money bail, and bail schedules in particular, in which somebody's freedom is predicated on a dollar amount. And everybody knows the concept of setting bail, and it's a familiar thing that we hear about, but it really means that at arraignment, a judge will look at that schedule, and say, "Based on the type of offense with which you have been charged but not convicted, I'm going to set bail at a certain amount."

James Cadogan: The end result of that is that people's freedom is predicated on whether or not they're rich or they're poor. So, you could be somebody who hasn't actually committed the crime of which you are accused. You could be low-risk to society, but because you don't have enough money, you can't pay your way out of jail, and have to sit in pre-trial detention until your trial date.

James Cadogan: Judges should be releasing as many people as possible unless there is some reason so compelling that they think, "This person either presents some danger to society, or they are a risk of not showing up for their trial." Those are the two legal bases on which a judge can detain somebody.

Anil Dash: In jurisdictions where they're using a tool like PSA, a platform like this, do you see a difference in what happens at an arraignment? Are there metrics or feedback where it's like, all of a sudden, a different decision can happen?

James Cadogan: Absolutely. What we're seeing right now is, most jurisdictions who are adopting a risk assessment are adopting it as part of a comprehensive suite of interventions that they think will be good for their jurisdiction in creating more just pre-trial outcomes.

James Cadogan: So, I really want to be clear about this. A risk assessment in isolation, implemented without anything else, is not going to do anything and it is not going to get the jurisdiction that chooses it to the place it wants to be. A risk assessment has to be implemented as part of a comprehensive set of reforms to the pre-trial practices of the municipality or the state or the locality.

James Cadogan: That means thinking about, by statute, limiting the number of offenses, and the folks who will actually even be eligible for jail in the first place. It means thinking about due process and making sure that, no matter what, there's no detention decision made without a full hearing on the record in front of the judge. It's thinking about prosecution practices and parsimony, and the way that prosecutors use their extraordinary powers to charge, to make bail recommendations, to make sentencing recommendations. It's about defenders and making sure that they are well-resourced.

James Cadogan: All of that goes into the process of creating pre-trial practices that will help bring down the number of people who are unjustly held pre-trial.

Anil Dash: Right. So, you can't just throw an app at it, right? You're not going to say, "Sprinkle some data on this and that's going to solve it."

James Cadogan: Exactly.

Anil Dash: But if we come at it, and we say there's a... And you'll forgive my skepticism here, but we say there's a jurisdiction that is trying to engage in good faith in all the things they ought to be doing. You have a laundry list of all the changes they need to make in policy.

Anil Dash: But one aspect of that is, "And let's get good, informed recommendations." Do you have those results, where you can say, "In this jurisdiction, a comprehensive system that included using this kind of technology did have an impact that you could measure, that you could see"?

James Cadogan: Absolutely. We've been getting some really promising results. First and foremost in New Jersey, which implemented a comprehensive pre-trial justice reform under its Criminal Justice Reform Act two to three years ago, and they just released, a few months ago, their first comprehensive report, showing that they had reductions in levels of pre-trial detention, without any changes in the number of people who showed up for their court date or any concomitant uptick in the crime rate.

James Cadogan: Those are the kinds of things that are incredibly helpful, showing that a big state that invests in holistic reform, and in this case, that included adopting the PSA, can see the kinds of beneficial impacts for their population. They've actually reduced their pre-trial detention population nearly 40% over the past two years, after they implemented the reform. That is, quite literally, tens of thousands of people who are able to go home, who are able to go to their jobs, who are able to provide for their families, and aren't spending time in jail unnecessarily while the administration of justice continues to happen successfully.

Kristin Bechtel: Very similar to New Jersey, Mecklenburg County, or Charlotte, North Carolina, was one of our earlier adopters of the PSA, and, well, what's unique about Mecklenburg is that the use of the PSA comes almost one step right after you are booked. So, what I mean by that is, when an individual is arrested, they are brought essentially to the jail, and a magistrate hears information about why they've been brought forward, and takes into account that information, and then makes a decision to release or to detain that individual, or to set bail.

Kristin Bechtel: What's transformed in Mecklenburg is that, along with the use of the PSA...

Kristin Bechtel: ...what's transformed in Mecklenburg is that, along with the use of the PSA, which is not completed at the magistrate level, there have been a number of pretrial reforms and cultural changes that have occurred that, you know, allow the magistrates to learn more about risk assessment and risk factors and predictors for these various outcomes. They've adopted practices where they're mindful of what that information is and what the needs of their community are. So pretrial detention rates actually have gone down in Mecklenburg County, and I want to be very candid here. They were actually going down prior to the adoption of the PSA even. But even with that larger number of people being released, I think one of the fears that often comes up when a risk assessment is adopted is that we think, well, there's a larger number of people being released. How could it be that failures to appear didn't increase, or that public safety rates didn't decrease?

Kristin Bechtel: And what we found in Mecklenburg, which is very similar to what James spoke to regarding New Jersey, is that actually they've maintained their very high court appearance rates. They've maintained their high public safety rates. And it's that information, that we can use risk assessment and make informed decisions alongside other pretrial reforms, and we can ensure that people are out in the community getting the services they need or their needs met. And we're really then creating sort of a better justice system and a more fair justice system.

Anil Dash: So I'm curious in particular about some of the most vulnerable and over-sentenced communities, like Black and Latinx communities. Do you have specific ideas about the numbers or the impact of, you know, you talked about a 40% drop. How is that proportional to race, which has been this confounding factor in all these sentences?

Kristin Bechtel: Both Mecklenburg and the information that was produced from New Jersey's judiciary looking at their outcomes did show overall drops by race. But what they are mindful of is that the disparity by race is still present. And that's a fundamental piece that I think all of us want to continue to focus on. It's great to see the drop, but we can do better and we need to take those next steps and figure out what's the next piece that we need to move forward with.

James Cadogan: I think this is part of where a lot of the conversation around whether or not a jurisdiction should or should not adopt risk assessment comes in.

James Cadogan: We all need to acknowledge and recognize that the data in the criminal justice system overall is racially biased, and so as good as any assessment can be, and as helpful as it can be in giving a judge better information and allowing them to make a less biased decision with the information they have, it is still information that reflects biases from the front end of our criminal justice system, and that's something that no assessment will ever be able to correct for.

James Cadogan: So what we essentially have is this intervention, which is really useful, but it takes in what is by definition going to be data that reflects biases. The most important thing for us to remember is that the status quo is judges looking at that same information and making decisions based solely on their experience and their gut, which we know will reflect their individual biases, as good as the judge may be.

James Cadogan: What we want to do and what we want to see, and what a well-validated assessment like the PSA does, is provide better information, even if it's predicated on the same flawed information that permeates our system. So we have a tool that helps get us to a better place, but there's no assessment that's going to be perfect and that's going to eliminate bias in the way that we would want to, bias that exists from the front end of our system.

Anil Dash: So that's sort of the core of the question that I get to, right, which is one of your factors is, for example, prior conviction, right? And you know, this is America. We know prior convictions are skewed heavily on a racial basis, right? And there's plenty of documentation and evidence of this. And so if you train the system, and we have this all the time in every kind of machine learning or AI thing, whatever you train it on, it replicates, and sometimes, unfortunately, accurately. Right?

Anil Dash: I wonder about in a system like this, how do you anticipate and correct for the fact that the system is trained on data that accurately reflects the reality that a black defendant, a Latinx defendant is more likely to be convicted?

Kristin Bechtel: Yep. One of the things I want to clarify about the PSA itself is that it is not artificial intelligence, so there's not an ongoing feed of data modifying the algorithm or the tool itself.

Anil Dash: Yeah, no, I appreciate that. From a technical perspective, that's a very important clarification because those systems do evolve over time and are not sort of something that you've been able to analyze in a static way. So this is a much ... You know, speaking as a technologist, this is a system I would trust much more because you sort of know what you're putting in and what's going to come out when it comes out. Those are systems that also do exacerbate those issues. And it is that sense of like the data you put in affects what you get out.

Kristin Bechtel: Absolutely. And I agree 100%. I do think it is something we need to be mindful of. And I want to be careful also to say we're very interested in learning more about if you will, branches of artificial intelligence such as machine learning to see how they can be helpful.

Kristin Bechtel: But I think one of our core values, and I think a core value that we're trying to really ensure permeates throughout the criminal justice system, is that there be a very objective and transparent process, which I think is a fundamental key that's missing from something like a bail schedule, where we don't necessarily know what risk factors a judge perhaps may be considering in formulating a decision associated with bail.

Kristin Bechtel: You would of course, with the PSA, the assessment itself, essentially, if you had access to those same data sources, be able to replicate that information, and I think there's value in having that level of transparency.

Anil Dash: So, sort of reproducibility. It at least gives you a check.

Kristin Bechtel: That's correct.

James Cadogan: No assessment is ever going to correct for racial bias that is baked into our criminal justice system that starts with policing. It just cannot do that and that's a really important reality for us to confront and it's a frustrating reality for anybody like me who wants to see generational change in our criminal justice system and wants to see changes as quickly as possible that are sustainable and that create more opportunity and less injustice. But that reality means that we have to try to provide the kinds of tools that will get us a step in the right direction.

James Cadogan: So we first have to acknowledge that challenge that we will never be able to correct for that using an algorithm or assessment or any kind of tool. That's going to require policing policy work and the kind of policy work that we support at Arnold Ventures through our policing team and our colleagues there, that has to be done.

James Cadogan: Second of all, once you take that, we know and accept that every single day people are sitting behind bars in jail because of the way the system currently operates. The question is, how do we make that better, and can we make improvements on that system so that fewer people are detained unjustly?

Anil Dash: So, one of the things I want to sort of call out on a high level here is you know, there's a refrain ... This conversation is catalyzed because there's a technology that is provocative and new and that has promised to change things hopefully for the better. But the refrain I hear from you both sort of repeatedly here is this is about systems and this is almost a single input into a very complex system.

Anil Dash: And that immediately gets me thinking about like how do you build for accountability, right? How do you make sure this is sort of fitting together the way you hope that it will? I wonder about, you know, and I've seen this happen. I've had friends that have been through this in the courtroom with what are like much more primitive versions of this kind of system.

Anil Dash: And I know there are other tools out there that are maybe not created with the same sources of rigor as what you're doing. But they've had judges say the number came up. That was the phrasing. The number came up. Right. Which is this sort of, it's out of my hands. An algorithm that I have been told is authoritative, and that comes from well-intentioned people who keep saying they want to get things right. And as much as you can say, and rightfully so, this should be a case-by-case basis, this should be in a context, there should be a person in the room looking into the specifics of the case and making a judgment.

Anil Dash: Tech has authority and we treat it as infallible. How do you account for that? How do you train people for that? How do you accommodate that?

James Cadogan: The training matters in that if a judge is saying the number came up, this is authoritative and then follows that simply because it is the number, then they're doing it wrong. That's not the way that any assessment should be applied and the legal responsibility for the decision to release, to assign conditions or to ultimately detain, rests with the judge and they need to exercise that responsibility with their own experience.

James Cadogan: Being able to balance an input from an algorithm, which is really just a number, and that judicial experience is critical. That is what it means to be a judge. They take in information and they make decisions with that information. That information is mostly qualitative, but sometimes quantitative.

Anil Dash: If this is my arraignment, can I see what number was generated and where it came from? Is that a thing my lawyer can ask for or that I can ask or petition for?

Kristin Bechtel: So I certainly wouldn't be able to sum up for every judge or every case.

Anil Dash: Right, right, but broadly.

Kristin Bechtel: But broadly, in terms of the training that's provided for stakeholders, everybody is trained, from public defenders to prosecutors, to judicial officers, to pretrial services, on the risk assessment. It's intended to be reproducible. And judges often speak to the information from the risk assessment on the record, but that's oftentimes part of the practice. And generally speaking, the respective attorneys, prosecution and defense, have access to that information.

Anil Dash: So this is interesting because one of the refrains I hear in what you say there, and Kristin, one of the things you sort of alluded to earlier where people have concerns about, you know, oh you're releasing people, right? Is this sort of anxiety, and I don't think I'm tipping my hand very surprisingly here, is like I want more folks out and I want cash bail abolished. That's my personal view.

Anil Dash: But I imagine there are law and order types who say we shouldn't let any of these folks out and we need to, even if it's an arraignment, we need to throw the book at them and it needs to be expensive to have bail and all of these kinds of things. I'm curious about, when people resist a system like this coming in, especially as part of comprehensive reforms, which is what you advocate happening in a jurisdiction, what do they state are their objections? What are their concerns, where they feel like this is going to be dangerous, this is going to be bad for our community, if we implement this kind of technology as part of a larger structural reform?

James Cadogan: The challenge we have and the objections we see are mostly from national level advocates who are concerned about bias and racial bias specifically in the criminal justice system, which completely makes sense and I share exactly that concern.

James Cadogan: What we generally tend to see with jurisdictions who are thinking about the practicalities of reform and how to adopt new practices in their pretrial systems is open arms, saying that we want as much as we can get that will be helpful to us in managing our pretrial detention population, in bringing those numbers down, in protecting due process, and in being more fair. And that schism, that difference between the national dialogue and the local conversation, is one of the most difficult things that we have in the pretrial space, because so many jurisdictions, and we hear this all the time, want to adopt reforms, a lot of them including risk assessment, and are very clear about how they think that will be helpful to their judges, and to their court administrators, and to their populations.

James Cadogan: But that sits at odds with the dialogue that's ongoing about risk assessment writ large, which in reality is a conversation about criminal justice data and bias in criminal justice data. And so we've seen a movement, a wonderful movement, towards greater data transparency and a larger recognition of the fact that bias in the criminal justice data is something that we're going to have to contend with across all systems in CJ.

James Cadogan: Transparency is an issue that we're going to have to confront and be more transparent across all systems in CJ, but we're not even close to where we need to be. And those conversations get confused a lot of the time. And in my view, we too often let the perfect be the enemy of the good, saying that if we can't eliminate all bias at the front end of the criminal justice system in policing, then we shouldn't use any data in CJ reform overall.

James Cadogan: And I think that misses the forest for the trees and that there are real people who are sitting in jail who could be helped right now and anytime we are thinking about that criminal justice data, we have to be really careful to say that we need to do the front end advocacy that will help reduce racial bias in policing practices and reduce racial bias that is therefore reflected in data that's used throughout the system after that, but not be so shortsighted as to say that for people who are in jail right now, we're not going to try to use even the flawed data that we currently have and use it better and in a fairer way.

Anil Dash: I'll be honest, I went into this with my existing view of how the criminal justice system works and a hell of a lot of skepticism about technology playing a role in making it much better. I was pleasantly surprised, especially talking to the folks from Arnold Ventures by the thought they'd put into the fact that everybody wants a silver bullet in technology. They want to be able to say we waived some artificial intelligence at it or some software at it and all of a sudden, the system got more fair, more just.

Anil Dash: One of the questions that raised for me that I put to Dr. Benjamin was, well, if everybody's impulse is to think technology can fix it, should we take advantage of that and use that to drive a bigger, broader change?

Dr. Benjamin: Now, what do we do, as you ask, with that energy? And I think it's to really ask ourselves, is it that we just want to hone algorithms that better predict risk? What about actually trying to mitigate the underlying things that produced risks to begin with, whether it's investment in education or employment?

Dr. Benjamin: All of the things that actually create contexts that make people vulnerable rather than spend more energy creating a new technology. Let's channel that into the things that we know make for a good life.

Dr. Benjamin: I would really encourage people who feel motivated to deal with the racial disparities in the criminal justice system to know that the problem is not just that there are these disparities, that the underlying logic of punishment in these contexts has to be questioned, no matter who's harmed by it. And so what are the things that we know make for a good life and let's channel some of our resources, our imagination into developing that.

Anil Dash: I really appreciated Dr. Benjamin pushing me to think a little bit bigger.

Anil Dash: The truth of it is this seems like another one of the many big social problems where there's a tension between small incremental change within the system that we have and a bigger solution that starts by burning down the whole system.

Anil Dash: Tech's not really good at the let's-burn-it-all-down problem. It could play a role, though, in that small incremental change. I say small because it's smaller than changing the entire criminal justice system. But if you're somebody who is in that system right now and you're being treated unfairly, the difference tech can make today is not small at all.

Anil Dash: So my takeaway is that we have to do both. We have to push as hard as we can to make sure tech is making things better for people today. And that turns out to be a little bit more possible than I'd imagined, but we can't lose sight of that larger goal, which is that we have to undo unjust systems at a global scale, at a really large, huge, unimaginably big scale. And that's going to take a lot more than simply applying some software to it.

Anil Dash: Function is produced by Bridget Armstrong. Our Glitch producer is Keisha TK Dutes. Nishat Kurwa is the executive producer of audio for the Vox Media podcast network and our theme music was composed by Brandon McFarland.

Anil Dash: Thanks to the whole engineering team at Vox and a huge thanks to our team at Glitch. You can follow me on Twitter @AnilDash, but you should also follow the show @PodcastFunction, all one word.

Anil Dash: Please remember to subscribe to the show wherever you're listening to us right now, and also check out glitch.com/function. We've got transcripts for every episode up there, apps, all kinds of stuff to check out about the show. We'll be back next week and we hope you join us then.