AI Watch

Welcome! This is a website that tracks people and organizations in the AI safety/alignment/AI existential risk communities. A position or organization being listed on AI Watch does not indicate an assessment that it is actually making AI safer, or that it is good for the world in any way. Inclusion is mostly a sociological indication that the position or organization is associated with these communities, and that it claims to be working on AI safety or alignment. (There are plans to eventually introduce such assessments on AI Watch, but for now there are none.) See the code repository for the source code and data of this website.

This website is developed by Issa Rice with data contributions from Sebastian Sanchez, Amana Rice, and Vipul Naik, and has been partially funded by Vipul Naik and Mati Roy (who in July 2023 paid for the time Issa had spent answering people’s questions about AI Watch up until that point).

If you like (or want to like) this website and have money: the current funder is mostly funding only data updates for existing organizations, as well as the addition of data for some new effective altruist organizations. As a result, the site is not getting any new features or design improvements. If you want to bring this site to the next level, contact Issa. What you get: site improvements and recognition in the site credits. What the site needs: money.

If you have time and want experience building websites: this website is looking for contributors. If you want to help out, contact Issa. What you get: little or no pay (this could change if the site gets funding; see the previous paragraph), recognition in the site credits, the privilege of working with me, and knowledge of the basics of web development (MySQL, PHP, Git). What the site needs: data collection/entry and website code improvements.

Last updated on 2024-01-01; see here for a full list of recent changes.

Agendas

Agenda name | Associated people | Associated organizations
Iterated amplification | Paul Christiano, Buck Shlegeris, Dario Amodei | OpenAI
Embedded agency | Eliezer Yudkowsky, Scott Garrabrant, Abram Demski | Machine Intelligence Research Institute
Comprehensive AI services | Eric Drexler | Future of Humanity Institute
Ambitious value learning | Stuart Armstrong | Future of Humanity Institute
Factored cognition | Andreas Stuhlmüller | Ought
Recursive reward modeling | Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, Shane Legg | Google DeepMind
Debate | Paul Christiano | OpenAI
Interpretability | Christopher Olah |
Inverse reinforcement learning | |
Preference learning | |
Cooperative inverse reinforcement learning | |
Imitation learning | |
Alignment for advanced machine learning systems | Jessica Taylor, Eliezer Yudkowsky, Patrick LaVictoire, Andrew Critch | Machine Intelligence Research Institute
Learning-theoretic AI alignment | Vanessa Kosoy |
Counterfactual reasoning | Jacob Steinhardt |

Positions grouped by person

Showing 2 people with positions.

Name | Number of organizations | List of organizations
David Manheim | 2 | Association for Long Term Existence and Resilience, Future of Humanity Institute
Seán Ó hÉigeartaigh | 2 | Berkeley Existential Risk Initiative, Global Catastrophic Risk Institute

Positions grouped by organization

Showing 7 organizations.

Organization | Number of people | List of people
Global Catastrophic Risk Institute | 35 | Dakota Norris, Allan Suresh, Uliana Certan, Kyle L. Evanoff, McKenna Fitzgerald, Oliver Couttolenc, Andrea Owe, Jared Brown, Lena Wang, Jenny Mith, Matthijs Maas, Jessica Cianci, Trevor White, Gary Ackerman, Roman Yampolskiy, Caroline Zaw-Mon, Dave Denkenberger, Robert de Neufville, Arden Rowell, Jianhua Xu, U. Tuncay Alparslan, Steven Umbrello, Jacob Haqq-Misra, Mark Fusco, Kaitlin Butler, Grant Wilson, Tim Maher, Matt Moretto, Kelly Hostetler, Tony Barrett, Seth Baum, Adam Scholl, Marilyn Cotrich, Seán Ó hÉigeartaigh, John Garrick
Berkeley Existential Risk Initiative | 10 | Andrew Critch, Kyle Scott, Rebecca Raible, Kenzi Amodei, Jacob Tsimerman, Seán Ó hÉigeartaigh, Malo Bourgon, Andrew Snyder-Beattie, Michael Keenan, Gina Stuessy
Association for Long Term Existence and Resilience | 5 | Vanessa Kosoy, Joshua Fox, Gidon Kadosh, Edo Arad, David Manheim
Convergence Analysis | 5 | Ozzie Gooen, Claire Abu-Assal, Kristian Rönn, Andrew X Stewart, Justin Shovelain
Future of Humanity Institute | 2 | David Manheim, David Kristoffersson
AI Challenge | 1 | David Denkenberger
Foundational Research Institute | 1 | Max Daniel