Fix your algorithmic racism, or face the FTC

Published on MediaPost, 23 April 2021.

One of the most important things I’ve ever learned is that there are two vastly different categories of racism: one that involves individual ill intent towards someone because of their race, and one that involves systemic, structural, societal power and outcome imbalances.

What’s the difference? As a white person, I don’t have to bear any individual ill will towards Black people or people of color in order to receive the benefit of, for example, not being terrified when I get pulled over by a cop. My individual good-heartedness has no bearing on my complicity in a racist system.

The difference between the two is why the FTC’s post this week about bias in algorithms is a big deal. Author Elisa Jillson calls out three particular laws and how they relate to the use of algorithms:

— The unfair or deceptive practices referenced in the Federal Trade Commission Act would include the use of racially biased algorithms.

— The Fair Credit Reporting Act can apply when an algorithm denies people employment, housing, credit, insurance or other benefits.

— And the Equal Credit Opportunity Act “makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.”

Jillson goes on to say, “Hold yourself accountable — or be ready for the FTC to do it for you.”

And your individual good-heartedness has no bearing on whether the algorithms you use are racially biased. In fact, it’s disturbingly easy for algorithms to absorb, and then reflect, existing prejudice, as developer Robyn Speer demonstrated four years ago in a tutorial called “How to make a racist AI without really trying.”

Speer built a sentiment classifier — an algorithm designed to “read” text and determine whether it conveyed a positive or a negative sentiment. She used off-the-shelf word embeddings and gold-standard training lists of positive and negative words. She followed an entirely mainstream software development process. And yet, somehow, the classifier decided that “Let’s go get Mexican food” was a more negative statement than “Let’s go get Italian food,” and that the name “Shaniqua” was more negative than the name “Emily.”
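To see just how ordinary the recipe is, here is a minimal sketch in the spirit of Speer’s tutorial: off-the-shelf word embeddings, a small gold-standard lexicon of positive and negative words, and a standard classifier. The file path, word lists and scoring function below are illustrative assumptions, not Speer’s exact code or data.

```python
# A simplified sketch of the kind of pipeline Speer describes, not her exact code.
# It assumes a GloVe-style embeddings file and short sentiment word lists;
# the file path and word lists below are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def load_embeddings(path):
    """Parse a GloVe-style text file: each line is a word followed by its vector."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

embeddings = load_embeddings("glove.42B.300d.txt")  # assumed to be available locally

# Gold-standard positive/negative word lists (e.g. an opinion lexicon), abbreviated here.
positive_words = ["good", "great", "excellent", "delicious", "wonderful", "pleasant"]
negative_words = ["bad", "awful", "terrible", "disgusting", "horrible", "unpleasant"]

# Train an ordinary classifier to separate positive word vectors from negative ones.
words = [w for w in positive_words + negative_words if w in embeddings]
X = np.stack([embeddings[w] for w in words])
y = np.array([1 if w in positive_words else 0 for w in words])
model = LogisticRegression(max_iter=1000).fit(X, y)

def sentence_sentiment(sentence):
    """Score a sentence as the mean positive-class probability of its known words."""
    tokens = [t for t in sentence.lower().split() if t in embeddings]
    if not tokens:
        return 0.5  # neutral when no words are recognised
    probs = model.predict_proba(np.stack([embeddings[t] for t in tokens]))[:, 1]
    return float(probs.mean())

# Nothing here mentions race or ethnicity, yet names and cuisine words inherit
# whatever sentiment surrounded them in the corpus the embeddings were trained on.
print(sentence_sentiment("let's go get italian food"))
print(sentence_sentiment("let's go get mexican food"))
```

Note that the bias isn’t written anywhere in this code. It rides in with the embeddings, which encode the company that words kept in the text they were trained on.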

There are tons of examples of getting it wrong. Amazon’s hiring algorithm penalised resumes that included the word “women’s.” An algorithm used to inform parole decisions flagged Black defendants as more likely than white defendants to re-offend, even when actual outcomes showed the reverse. Google Photos classified Black people as “gorillas,” and a Google Images search for “CEO” turned up almost all men, with one notable exception: Barbie.

The chance that your company uses algorithms, whether your own or furnished by others, is high. The chance that those algorithms embed and reinforce historic biases and injustices is also high.

It’s time to make algorithms that embody an aspiration for the future, rather than ones that act to continue our disgraceful past. The FTC has put us all on notice, and ignorance is no longer an excuse.

Ngā mihi mahana,
Kaila

Kaila Colbin, Certified Dare to Lead™ Facilitator
Co-founder, Boma Global // CEO, Boma NZ