What would any discussion of the future be without diving into technology? To many people, technology is hopeful, even utopian — a way of augmenting our existence that promises better living for everyone. Technologists emphasize that the future is frictionless, that tech will blend seamlessly into our days. However, history teaches us that technology can be a lot of things, and if there is one thing you can say technology definitely is, it's human.
To take an optimistic starting point, let's assume that the technology being developed now is intended to improve the lives of humans everywhere. How would one reconcile that with the fact that only about 2.5% of the creators and developers of technology are Black? As we have discussed every day this month, Black people do not historically fare well when white and non-Black people are in charge of making decisions about them and their lives. If racism exists both in the structures that create an almost entirely white and Asian technological workforce and in the minds of those who benefit from white supremacist structures, what would be so magical about technology that it becomes universally beneficial, or even neutral? You guessed it — nothing: technology encodes racism, often in insidious ways that may not be identifiable at first glance.
Let’s start with a new-ish form of accessible technology: home DNA testing kits. The ability to trace ancestry through genetics has been an exciting development for many, but it is particularly significant for Black people, whose family histories were so often stolen from them by slavery. Through genetic testing, Black people have been able to trace their roots to regions of origin on the African continent (contrary to popular belief, enslaved Africans did not only originate in West Africa, but were also captured well inland and brought to the coast).
Because of the special potential significance of genetic testing for Black Americans, a number of DNA testing kits are marketed specifically to them (including one co-developed by Harvard professor Henry Louis Gates). In “Mediated Science, Genetics and Identity in the US African Diaspora,” Elonda Clay covers both these tests and the narratives that surround them, including a number of documentaries centered on specific people discovering their African heritage. Clay also references sociologist and author Alondra Nelson, who has written extensively about the evolving understanding of race and identity in the age of DNA testing. Nelson points out that while DNA tests can function as a window into identity for Black people who are the descendants of enslaved Africans, these tests can also serve to support the idea of race as a genetic reality rather than a social construction. As Afrofuturist author Ytasha Womack frames it, race itself is a technology: an invention created to bolster support for the slave trade, one that continues to be used to justify oppression today. The danger of DNA testing, then, might be to provide a kind of scientific blessing on a genetic idea of race.
Technology also encodes racism even when it seemingly has nothing to do with race. Take facial recognition. Technologist Joy Buolamwini writes about the “coded gaze”: facial recognition software relies on algorithms trained on image sets that underrepresent people of color, and women of color most of all. (Buolamwini is also featured in the banner image for our post today. The image is from an MIT News article about the study on which Buolamwini is the lead author.)
Check out the interactive Gender Shades, which demonstrates this effect across several prominent commercial facial-recognition systems, showing that seemingly unbiased algorithms perform worse on darker faces and also worse on women’s faces. And, you guessed it, they are worst at recognizing Black women. One examination of facial-analysis software “shows error rate of 0.8 percent for light-skinned men, 34.7 percent for dark-skinned women.”
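The kind of audit Gender Shades performed can be sketched in a few lines: instead of reporting one overall accuracy number, you compute the error rate separately for each demographic subgroup and compare. The data below is hypothetical (the real audit used benchmark images labeled by skin type and gender); the group names and numbers are illustrative assumptions, not the study’s dataset.

```python
# Minimal sketch of a subgroup error-rate audit, using made-up data.
# Each result is a (predicted_label, actual_label) pair from a classifier.

def error_rate(results):
    """Fraction of predictions that were wrong."""
    wrong = sum(1 for predicted, actual in results if predicted != actual)
    return wrong / len(results)

# Hypothetical per-group results: an aggregate accuracy score would hide
# the gap that appears when the groups are evaluated separately.
groups = {
    "lighter-skinned men":  [("M", "M")] * 99 + [("F", "M")] * 1,
    "darker-skinned women": [("F", "F")] * 65 + [("M", "F")] * 35,
}

for name, results in groups.items():
    print(f"{name}: {error_rate(results):.1%} error rate")
```

The point of disaggregating is exactly what the quote above shows: a system can look nearly perfect on average while failing a third of the time on one subgroup.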
Why does that matter? Because facial recognition is increasingly used by law enforcement to make automated decisions about who is and is not a suspect in a crime. The Perpetual Line-Up presents this scenario in the summary of its extensive report on the use of facial recognition (and other algorithmic approaches to law enforcement):
“There is a knock on your door. It’s the police. There was a robbery in your neighborhood. They have a suspect in custody and an eyewitness. But they need your help: Will you come down to the station to stand in the line-up?
Most people would probably answer “no.” This summer, the Government Accountability Office revealed that close to 64 million Americans do not have a say in the matter: 16 states let the FBI use face recognition technology to compare the faces of suspected criminals to their driver’s license and ID photos, creating a virtual line-up of their state residents. In this line-up, it’s not a human that points to the suspect—it’s an algorithm.”
Given that law enforcement’s use of facial recognition is largely unregulated, proceeds in both targeted and open-ended ways (i.e., searching for a specific suspect versus constantly scanning surveillance video of people who are not suspected of any crime), and rests in the hands of a carceral system with pervasive racism in its very marrow, the potential for abuse of algorithmic “justice” is enormous.