Read the Email That Led to the Exit of Google A.I. Ethicist Timnit Gebru

Dec 5, 2020

‘Your life gets worse when you start advocating for underrepresented people’

Photo: Kimberly White/Stringer/Getty Images

Timnit Gebru, one of Google’s most prominent researchers on ethics and computer vision, says she was fired this week after sending an email to Google Brain Women and Allies, an internal resource group at the company.

The email alleges that Google censored one of Gebru's research papers without discussing it with her, and describes the poor treatment of employees who advocate for underrepresented people at the company. The newsletter Platformer published the email in full.

After sending the email, Gebru had an exchange with managers and privately threatened to quit unless certain undisclosed conditions were met. Instead, she was fired immediately, Gebru told OneZero's Will Oremus.

Gebru's contributions to the field have shaped the modern understanding of how artificial intelligence fails, and of the technical underpinnings of how algorithms treat underrepresented people differently. A Twitter thread by Fast.ai co-founder Rachel Thomas lays out how Gebru's years of scholarship have influenced A.I. research, including her co-authorship of a seminal study showing that facial recognition is far less accurate on women of color than on white men.

Gebru co-led Google's A.I. ethics team and co-founded Black in A.I., an international organization focused on supporting Black A.I. researchers and expanding access to the traditionally exclusive field.

According to Platformer, the email reads, in part:

Imagine this: You’ve sent a paper for feedback to 30+ researchers, you’re awaiting feedback from PR & Policy who you gave a heads up before you even wrote the work saying “we’re thinking of doing this”, working on a revision plan figuring out how to address different feedback from people, haven’t heard from PR & Policy besides them asking you for updates (in 2 months). A week before you go out on vacation, you see a meeting pop up at 4:30pm PST on your calendar (this popped up at around 2pm). No one would tell you what the meeting was about in advance. Then in that meeting your manager’s manager tells you “it has been decided” that you need to retract this paper by next week, Nov. 27, the week when almost everyone would be out (and a date which has nothing to do with the conference process). You are not worth having any conversations about this, since you are not someone whose humanity (let alone expertise recognized by journalists, governments, scientists, civic organizations such as the electronic frontiers foundation etc) is acknowledged or valued in this company.

Then, you ask for more information. What specific feedback exists? Who is it coming from? Why now? Why not before? Can you go back and forth with anyone? Can you understand what exactly is problematic and what can be changed?

And you are told after a while, that your manager can read you a privileged and confidential document and you’re not supposed to even know who contributed to this document, who wrote this feedback, what process was followed or anything. You write a detailed document discussing whatever pieces of feedback you can find, asking for questions and clarifications, and it is completely ignored. And you’re met with, once again, an order to retract the paper with no engagement whatsoever.

Then you try to engage in a conversation about how this is not acceptable and people start doing the opposite of any sort of self reflection — trying to find scapegoats to blame.

Silencing marginalized voices like this is the opposite of the NAUWU principles which we discussed. And doing this in the context of “responsible AI” adds so much salt to the wounds.
