A Reading List for Algorithmic Bias

Five great books to get you started

Algorithmic bias is not a new topic, but it is one that too few of us talk about. Perhaps we are intimidated by the technical terminology used to explain bias in algorithms. Or maybe we don’t recognize how systemic racism and technology intersect in our daily lives. In my last post for LibTech Insights, I provided a quick introduction to racial and gender biases in algorithms. The following books help further break down what algorithmic bias is, how it affects us, and what we can do to combat it.

Safiya Umoja Noble’s exploration of racist search results in Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press, 2018) is explosive and enlightening. She concludes that search engine results are not neutral but instead reflect the biases of their designers and of the data the engines are trained on. Users see and experience these biases, which can be used to further oppress people of color, deny them opportunities, and perpetuate harmful stereotypes about them.

Focusing on bias in library discovery tools, Matthew Reidsma brings the topic of algorithmic bias home for librarians and other information professionals in his book Masked by Trust: Bias in Library Discovery (Library Juice Press, 2019). Most powerful is his research showing inaccuracies and bias in discovery system search results, such as systems returning different results for searches like “September 11th” and “9/11,” with the latter appearing to emphasize conspiracy theories about the event. Libraries often market themselves and their products as neutral research tools, earning their users’ trust, but Reidsma clearly demonstrates how these biases and inaccuracies undermine that trust. He calls on librarians to demand transparency from discovery service vendors and to report biases and errors when they find them.

In Race After Technology: Abolitionist Tools for the New Jim Code (Polity, 2019), Ruha Benjamin argues against technological neutrality and shows instead how technology’s “New Jim Code” upholds social inequality and reinforces systemic racism. Algorithmic bias reflects the deep racial biases in our society. Benjamin rejects the idea that examples of bias are mere glitches in the system, harmless mistakes with inconsequential results. On the contrary, glitches are “a form of exclusion and subordination built into the ways in which priorities are established and solutions defined in the tech industry … glitches are not spurious, but rather a kind of signal of how the system operates” (p. 79).


Sara Wachter-Boettcher’s Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech (W. W. Norton, 2018) is eminently readable and engaging. It explores how technology can harm women and other marginalized groups and demonstrates how bias is built into tech products, apps, and systems. She calls for greater diversity in the tech industry, audits of algorithms for bias, design that puts privacy and security first, and accountability for the misinformation, hate speech, and other harms tech products may cause.

Cathy O’Neil calls mathematical models and algorithms “weapons of math destruction,” or “WMDs,” in her book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Crown, 2017) because of their ability to wreak havoc on people’s lives. The algorithms in use today are unregulated, often poorly understood, and hard to contest, and they reinforce discrimination at scale. A WMD is opaque, scales easily, and, through pernicious feedback loops, causes significant damage. O’Neil suggests solutions that include auditing algorithms to make them more transparent and accountable, diversifying the data used to train them, and establishing due process around algorithmic decision-making.

What can we do when algorithms are proprietary and offer very little transparency? Librarians, with their training in search and retrieval, may be especially adept at auditing search engine results and documenting cases of racial and gender bias. It’s even possible to report problematic search results to Google. We must recognize the limitations of algorithms and learn to approach search results with skepticism. And we must share this knowledge with our patrons and users.


🔥 Sign up for LibTech Insights (LTI) new post notifications and updates.

📅 Join us for a free webinar on AI citations and ethics for librarians.

✍️ Interested in contributing to LTI? Send an email to Deb V. at Choice with your topic idea.