AI Watch

Welcome! This website tracks people and organizations working on AI safety. The site's source code and data are available in the code repository.

This website is developed by Issa Rice with data contributions from Sebastian Sanchez, Amana Rice, and Vipul Naik, and has been partially funded by Vipul Naik.

If you like (or want to like) this website and have money: the current funder does not want to continue funding this project, so the site is currently mostly sitting idle. If you want to bring this site to the next level, contact Issa at
What you get: site improvements and recognition in the site credits.
What the site needs: money.

If you have time and want experience building websites: this website is looking for contributors. If you want to help out, contact Issa at
What you get: little or no pay (this could change if the site gets funding; see the previous paragraph), recognition in the site credits, the privilege of working with me, and knowledge of the basics of web development (MySQL, PHP, Git).
What the site needs: data collection/entry and website code improvements.

Last updated on 2022-08-02; see here for a full list of recent changes.

Agendas

Agenda name | Associated people | Associated organizations
Iterated amplification | Paul Christiano, Buck Shlegeris, Dario Amodei | OpenAI
Embedded agency | Eliezer Yudkowsky, Scott Garrabrant, Abram Demski | Machine Intelligence Research Institute
Comprehensive AI services | Eric Drexler | Future of Humanity Institute
Ambitious value learning | Stuart Armstrong | Future of Humanity Institute
Factored cognition | Andreas Stuhlmüller | Ought
Recursive reward modeling | Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, Shane Legg | Google DeepMind
Debate | Paul Christiano | OpenAI
Interpretability | Christopher Olah |
Inverse reinforcement learning | |
Preference learning | |
Cooperative inverse reinforcement learning | |
Imitation learning | |
Alignment for advanced machine learning systems | Jessica Taylor, Eliezer Yudkowsky, Patrick LaVictoire, Andrew Critch | Machine Intelligence Research Institute
Learning-theoretic AI alignment | Vanessa Kosoy |
Counterfactual reasoning | Jacob Steinhardt |

Positions grouped by person

Showing 3 people with positions.

Name | Number of organizations | List of organizations
Chris Maddison | 2 | Google DeepMind, University of Oxford
David Manheim | 2 | Association for Long Term Existence and Resilience, Future of Humanity Institute
Paul Christiano | 2 | OpenAI, Theiss Research

Positions grouped by organization

Showing 25 organizations.

Organization | Number of people | List of people
Machine Intelligence Research Institute | 11 | Linda Linsefors, Evan Hubinger, David Simmons, Daniel Demski, Alex Zhu, Alex Mennen, Alex Appel, Patrick LaVictoire, Eliezer Yudkowsky, Benya Fallenstein, Katja Grace
OpenAI | 5 | Long Ouyang, Christopher Olah, Geoffrey Irving, Paul Christiano, Dario Amodei
University of Oxford | 4 | Ruth Fong, Chris Maddison, Heather Roff, Owain Evans
Center for Human-Compatible AI | 3 | Christopher Cundy, Beth Barnes, Dmitrii Krasheninnikov
AIDEUS | 2 | Sergey Rodionov, Alexey Potapov
Association for Long Term Existence and Resilience | 2 | Vanessa Kosoy, David Manheim
Future of Humanity Institute | 2 | David Manheim, Sören Mindermann
Google DeepMind | 2 | Pedro A. Ortega, Chris Maddison
Learning Intelligent Distribution Agent | 2 | Tamas Madl, Stan Franklin
Australian National University | 1 | Jarryd Martin
Carnegie Mellon University | 1 | Noam Brown
Centre for Effective Altruism | 1 | Owen Cotton-Barratt
ETH Zurich | 1 | Felix Berkenkamp
Foundational Research Institute | 1 | Caspar Oesterheld
Georgia Institute of Technology | 1 | Fuxin Li
Google Brain | 1 | Jeremy Nixon
IDSIA | 1 | Bas R. Steunebrink
Massachusetts Institute of Technology | 1 | Jon Gauthier
Oregon State University | 1 | Thomas Dietterich
Quebec Artificial Intelligence Institute | 1 | Vincent Luczkow
Stanford University | 1 | Aditi Raghunathan
Theiss Research | 1 | Paul Christiano
University of California, Berkeley | 1 | Michael Janner
University of Cambridge | 1 | Jose Hernandez-Orallo
University of Toronto | 1 | Roger Grosse