Welcome! This is a website to track people and organizations working on AI safety. See the code repository for the source code and data of this website.
This website is developed by Issa Rice, with data contributions from Sebastian Sanchez, Amana Rice, and Vipul Naik. It has been partially funded by Vipul Naik and Mati Roy (who in July 2023 paid for the time Issa had spent answering people's questions about AI Watch up to that point).
If you like (or want to like) this website and have money: the current funder is mostly funding only data updates for existing organizations, along with adding data for some new effective altruist organizations. As a result, the site is not getting any new features or design improvements. If you want to take this site to the next level, contact Issa at email@example.com. What you get: site improvements, recognition in the site credits. What the site needs: money.
If you have time and want experience building websites: this website is looking for contributors. If you want to help out, contact Issa at firstname.lastname@example.org. What you get: little or no pay (this could change if the site receives funding; see the previous paragraph), recognition in the site credits, the privilege of working with me, and knowledge of the basics of web development (MySQL, PHP, Git). What the site needs: data collection/entry and website code improvements.
Last updated on 2023-09-07; see here for a full list of recent changes.
| Agenda name | Associated people | Associated organizations |
|---|---|---|
| Iterated amplification | Paul Christiano, Buck Shlegeris, Dario Amodei | OpenAI |
| Embedded agency | Eliezer Yudkowsky, Scott Garrabrant, Abram Demski | Machine Intelligence Research Institute |
| Comprehensive AI services | Eric Drexler | Future of Humanity Institute |
| Ambitious value learning | Stuart Armstrong | Future of Humanity Institute |
| Factored cognition | Andreas Stuhlmüller | Ought |
| Recursive reward modeling | Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, Shane Legg | Google DeepMind |
| Inverse reinforcement learning | | |
| Cooperative inverse reinforcement learning | | |
| Alignment for advanced machine learning systems | Jessica Taylor, Eliezer Yudkowsky, Patrick LaVictoire, Andrew Critch | Machine Intelligence Research Institute |
| Learning-theoretic AI alignment | Vanessa Kosoy | |
| Counterfactual reasoning | Jacob Steinhardt | |
Showing 0 people with positions.

| Name | Number of organizations | List of organizations |
|---|---|---|
Showing 5 organizations.
| Organization | Number of people | List of people |
|---|---|---|
| AI Impacts | 13 | Daniel Kokotajlo, Asya Bergal, Ronja Lutz, Richard Korzekwa, Tegan McCaslin, Paul Christiano, Jimmy Rintjema, Ben Hoffman, Justis Mills, Connor Flexman, Finan Adamson, John Salvatier, Stephanie Zolayvar |
| Machine Intelligence Research Institute | 5 | Dávid Natingga, Sebastian Nickel, Ben Hoskin, Frank Adamek, Alyssa Vance |
| Foundational Research Institute | 3 | Brian Tomasik, Kaj Sotala, Lukas Gloor |
| Future of Humanity Institute | 1 | Anders Sandberg |