---
title: "Discussion Guide: More than a Glitch"
date: 2025-01-27T00:42:00
tags:
- books
---

A guide with discussion prompts for [Meredith Broussard's](https://meredithbroussard.com/) [More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech](https://mitpress.mit.edu/9780262548328/more-than-a-glitch/).

I think it is more important now than ever to educate oneself on the topics covered by this book. Large language models (LLMs) are more widespread than ever, and the Trump administration is actively dismantling diversity, equity, and inclusion (DEI) programs, with many big tech companies following suit. You may not agree with every point made by the author, but you should at least be aware of the power you have as a technologist.

I run the engineering book club at my current company, [Attentive](https://www.attentive.com/). For the pace of our book club, we committed to reading two chapters every two weeks. I personally appreciate having multiple discussions spread over time; I find I retain content better that way. These are the exact discussion prompts I used over the few months of reading the book, so they're colored by my personal experience and the current culture at Attentive. All prompts are wholly my own and are free for anyone to use.

## Discussion ground rules

I led the discussion of this book in a workplace setting, which can be an unsafe place to discuss personally vulnerable topics. I gave this preamble at the start of the book and repeated versions of it before starting each discussion:

> As someone in the majority in tech and who isn't minoritized in really any way, I'm the exact type of person who should be reading this book, but I will likely struggle to write meaningful discussion prompts. I can promise to use my voice to create a safe and inclusive space, but I will need your help to create meaningful conversation.

Ground rules:

- **Respect the speaker** - don't interrupt, listen to the message and not the person, and avoid invalidating others' experiences by building on their story.
- **Respect other attendees** - avoid being a dominant voice, give space to others to share their experiences and opinions, and give voice to those who are seeking it.
- **Respect non-attendees** - try to share only your personal experience rather than speak authoritatively for a group of people, and keep the confidentiality of others when sharing experiences.
- **Respect yourself** - give yourself grace for not feeling as educated as others, and acknowledge that you have biases even if you might not recognize them.
- **Lead with good faith** - assume all attendees are present for good reasons who want to add to the conversation or learn from it. Recognize that people have different levels of education on different topics, and ask questions to clarify statements that don't seem constructive.
- **It is ok to disagree respectfully** - people's experiences are different, and people's level of education on these topics varies (as does the quality of material that exists online). Ask questions to seek to understand, and never attack the speaker.
- **Keep discussions confidential** - what is said in the discussion should remain in the discussion; attendees are putting themselves in a vulnerable place, and we must respect that. Coworkers' experiences are theirs to share, not yours.
  - But do recognize that vulnerable conversations amongst coworkers are inherently less safe than conversations outside the workplace; you may often work with other attendees, and it is important not to destroy working relationships.

## Chapter 1

- Much of the introduction is centered around the word "technochauvinism" and some real-world examples of it. What does the word mean to you, in your own words?

## Chapter 2

- As technologists, do you agree with the author's explanation of artificial intelligence and machine learning (ML)? Do you think there were any consequential details missed?
- Do you have any examples of ML applications you like to use when talking with non-technologists?
- Do you agree or disagree with this sentence from the book: "whoever owns the [machine learning] model has an enormous amount of power"?
- What do you think are good and just uses for machine learning?

## Chapter 3

- The author tells a story about [Robert Williams](https://www.aclumich.org/en/press-releases/farmington-hills-father-sues-detroit-police-department-wrongful-arrest-based-faulty), a southeast Michigan man who was arrested based on a low-quality surveillance video screenshot and error-prone facial recognition technology (FRT). Many people had the chance to question the algorithm's "answer" but didn't. Do you have any personal examples of a decision chain failing you, or regrets of participating in a faulty one?
- Companies that create facial recognition technology (FRT) are incentivized to continually push adoption of the technology, mostly to government agencies. Customers are incentivized to use the technology as much as possible to justify its cost. Can these incentives ever be aligned with the goal of fair and just policing?
- The lack of audit trails and accountability with various governments' use of facial recognition technology (FRT) is inexcusable at best. The author states that while facial recognition technology is provably racist, training it on a more diverse set of people would only exacerbate a racist over-policing problem. Do you think facial recognition technology could ever be used safely for good?

## Chapter 4

- [Paige Fernandez](https://www.aclu.org/bio/paige-fernandez) from the ACLU discussed in a [2020 podcast](https://www.aclu.org/podcast/why-it-so-hard-hold-police-accountable-ep-102) that the current system of American policing began to "maintain social control of black, enslaved people." This kind of history is not well known and is rarely taught. What other historical facts or stories do you have about American policing that others are unlikely to know?
- The author quotes [Hamid Khan](https://unequalcities.org/hamid-khan/) who said "[those using location-based policing] are not there to police potholes and trees. They are there to police people in the location." What predictive algorithms would you like to see local governments use for social good, such as predicting areas that are prone to potholes?
- A major theme of the chapter is how numbers and statistics can be weaponized against groups of people. This was evidenced by the NYPD's use of CompStat, and again by [Karl Pearson's](https://en.wikipedia.org/wiki/Karl_Pearson) eugenic motivations for using statistics. How have you seen statistics used to advance societal equity?

## Chapter 5

- Educational technology (edtech) companies have historically had a difficult time raising capital, but this has been changing recently, especially in the last five years. What are examples of companies that are using technology to improve educational outcomes rather than to decrease labor costs?
- The author states "one of the big misconceptions of data science is that it provides insights. It doesn't always." This is relevant to the tech industry, where many companies claim to use data to drive decisions. Have you seen data be misused or creatively interpreted to justify business decisions?

## Chapter 6

- Do you think your company is investing enough in making its products accessible?
- The book gives multiple examples of companies taken to court over their products being inaccessible. Do you know of companies that are getting it right, are ahead of the curve, or advocate for accessibility outside their workplace and products?
- Universities have started adding ethics courses to their computer science curriculums. The author would likely argue that one course is not enough. What other human-centric courses do you think should be required for computer science students?

## Chapter 7

- The "[Y2GAY](https://qntm.org/gay)" problem came about because databases storing personally identifiable information (PII) stored gender information as a boolean. What do you think about websites collecting gender information at all?
- Where does your company collect gender information? How does your company (or its clients) use that information? How does your company store that information (string, boolean, etc.)?
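The data-modeling pitfall behind these prompts can be sketched in a few lines. This is a minimal illustration, not anything from the book; the `RestrictiveUser` and `InclusiveUser` types are hypothetical names I made up for contrast:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RestrictiveUser:
    """The 'Y2GAY' shape: a boolean can only ever encode two genders."""
    name: str
    is_male: bool  # forces every person into a binary

@dataclass
class InclusiveUser:
    """Store the person's own words as an optional freeform string,
    or omit the field entirely if the product has no genuine need for it."""
    name: str
    gender: Optional[str] = None  # self-described, optional

user = InclusiveUser(name="Sam", gender="non-binary")
```

The design point is less about the specific type and more about who gets to define the valid values: a boolean (or a fixed enum) bakes the developer's assumptions into the schema, while an optional freeform field lets people describe themselves.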

## Chapter 8

- The author shared a story of encountering an electronic medical/health record (EMR/EHR) system with restrictive options to describe her race. As machine learning's understanding of unstructured data progresses, do you think we could allow for more freeform answers on medical and legal forms?
- The author gives a call to action to "'call bullshit' on claims about future technology and a rosy tech-enabled future," as a response to a [2017 interview](https://www.newyorker.com/magazine/2017/04/03/ai-versus-md) with [Geoffrey Hinton](https://en.wikipedia.org/wiki/Geoffrey_Hinton). What technology is currently over-valued, either monetarily or in terms of hype from the media?
- The author asserts that one perpetrator of bias is overvaluing experts' opinions in an area that isn't their expertise, such as technologists making statements about social science. What similar harms do you see technologists causing today?

## Chapter 9

- The author calls out that "the whole reason to do open science is so that other people can replicate or challenge your scientific results," but also notes that this rarely happens. What recent studies have you heard of that couldn't be replicated?

## Chapter 10

- As a technologist, would you consider working for the [US Digital Service (USDS)](https://www.usds.gov/) or [18F](https://18f.gsa.gov/)? Why or why not?
- The author touches on the idea of emotional attachment to one's code and the difficulty in throwing it out. What is your team or organization's history with cutting losses on code that doesn't solve its intended goal, or is even counterproductive?
- Is algorithmic auditing applicable to your organization? Do you know if your organization already pays for or participates in this type of auditing? Where do you think your organization could invest in further technical auditing?
- One theme with algorithmic auditing is organizations' unwillingness to fund it at the same time as net-new development. How does your organization approach funding these initiatives that aren't revenue-generating?

## Chapter 11

- The author paints a picture of the progress made in understanding bias in technology between 2018 and 2022. What progress would you like to see in the next five years?
- The author specifically avoids using the term "magic" when describing ML algorithms to non-technical audiences. How do you like to succinctly describe these algorithms to friends and family?