The Future Must Be Ethical: #MakeAIEthical
Google has chosen to dismantle its AI ethics work, making it clear that the company will only tolerate research that supports its bottom line. This is a matter of urgent public concern.
We stand in solidarity with Dr. Margaret Mitchell, a world-leading AI researcher who until recently led the most diverse team at Google Research, the Ethical AI team. She is the latest casualty of Google’s attack on AI ethics.
Three months after abruptly firing, publicly disparaging, and gaslighting the co-lead of Google’s Ethical AI team, Dr. Timnit Gebru, Google has now fired Dr. Margaret Mitchell, the team’s founder. The company terminated her at the end of the day on a Friday, after locking her out of her work account for five anxious weeks.
The day Dr. Mitchell was abruptly cut off from her corporate account, she sent this document to Google’s public relations department, marshalling research that questioned the company’s stated reasons for firing her co-lead, Dr. Timnit Gebru.
This pattern makes clear that Google is working intentionally to punish, silence, and dismantle the entire Ethical AI team. During the five weeks that Dr. Mitchell was suspended, Google’s “global investigations” unit intimidated remaining members of the team, some of whom were coerced into interrogations outside of work hours and on Friday evenings with short notice, no option for postponement, and no information about the subject of the interrogation.
Google’s harsh retaliation is being directed at a team with a record of success and impact. Drs. Mitchell and Gebru’s team has produced ground-breaking work that has won awards, garnered international praise, and shaped the burgeoning research field examining the social implications of AI, not to mention being critical to founding and leading major conferences in algorithmic fairness. Their work has been loudly celebrated by Google and has led to the launch of multiple Google products. Their exceptional accomplishments go above and beyond the bar for achievement at Google Research and other industry labs. Drs. Mitchell and Gebru also built one of the most diverse teams in Google Research, people who could connect their lived experiences to practices of power, subjection, and domination which get encoded into AI and other data-driven systems.
The team has spent significant energy working in the public interest. They critically examined the benefits and risks of powerful AI systems — especially those whose potential harms outside of the Google workplace were likely to be overlooked or minimized in the name of profit or efficiency. The team also escalated pressing concerns about toxic workplace cultures directly to HR and leadership, offering solutions and suggesting methods to work together towards social good, while ensuring appropriate accountability structures were in place.
But even unparalleled research accomplishments, and visionary leadership that put the well-being of diverse workers at the forefront, could not shield them from Google’s retaliation. Instead of rewarding Drs. Mitchell and Gebru and their team for this urgently necessary work, Google responded by publicly demeaning and firing them, then harassing and intimidating the remaining members.
This is a watershed moment for the tech industry, with implications that reach far beyond it. Following in a long tradition, Google workers have been organizing from within, raising inextricably linked issues of toxic workplace conditions and unethical and harmful tech to leadership and to the public. With the firing of Drs. Mitchell and Gebru, Google has made it clear that they believe they are powerful enough to withstand the public backlash, and are not concerned with publicly damaging their employees’ careers and mental health. They have also shown that they are willing to crack down hard on anyone who would perturb the company’s quest for growth and profit. Google is not committed to making itself better, and has to be held accountable by organized workers with the unwavering support of social movements, civil society, and the research community beyond.
Therefore, we call on members of the AI community, especially those who make their careers researching the social and ethical consequences of tech, to take the following actions in solidarity with the Ethical AI team. We must stand up together now, or the precedent we set for the field — for the integrity of our own research and for our ability to check the power of big tech — bodes a grim future for us all.
WE ASK THAT:
- Academic conferences require submitting organizations to supply their publication approval policies (if they exist), refuse to review papers that have been subjected to editing by lawyers or similar corporate representatives, and decline sponsorship from organizations, such as Google, engaged in retaliatory actions towards researchers.
Industry publication procedures that require certain papers to be edited by corporate lawyers in order to mislead the public about potential harms [e.g. 1, 2] should not be accepted at academic conferences. Therefore we call for organizations, especially industry organizations, to “show their work” in describing their publication approval processes. We applaud the recent decision by the FAccT Conference to drop their Google sponsorship.
- Potential recruits to Google decline invitations from Google recruiters. To this effect, we applaud initiatives such as #RecruitMeNot that seek to break the tech talent pipeline to Google. Recruits can respond to Google recruiters with the following text:
The treatment of Drs. Mitchell, Gebru and their team has made it clear that Google is dangerously escalating its retaliation against members of marginalized communities and those who hope to stop the company from creating harmful and unethical technology. I do not plan on seeking employment at Google until it shows clear signs of reversing the dangerous course it is currently on.
- Academic institutions and other research organizations publicly commit to stop receiving funding from Google until it commits to clear, externally enforced, and validated standards of research integrity. Too many institutions of higher learning are inextricably tied to Google funding (along with that of other Big Tech companies), with many faculty holding joint appointments with Google. We call on universities, especially those that claim to be human-centered, such as Stanford’s Human Centered AI Institute and MIT’s Schwarzman College of Computing, to publicly reject Google funding.
- State and national legislatures strengthen whistleblower protections. As others have written, the existing legal infrastructure for whistleblowing at corporations developing technologies is wholly insufficient. Researchers and other tech workers need protections which allow them to call out harmful technology when they see it, and whistleblower protection can be a powerful tool for guarding against the worst abuses of the private entities which create these technologies.