AI Watch

Welcome! This is a website that tracks people and organizations working on AI safety. The source code and data behind the site are available in its code repository.

This website is developed by Issa Rice and has been partially funded by Vipul Naik.

If you like (or want to like) this website and have money: the current funder does not plan to continue funding this project, so development is currently mostly on hold. If you want to bring this site to the next level, contact Issa at riceissa@gmail.com. What you get: site improvements, recognition in the site credits. What the site needs: money.

If you have time and want experience building websites: this website is looking for contributors. If you want to help out, contact Issa at riceissa@gmail.com. What you get: little or no pay (this could change if the site gets funding; see the previous paragraph), recognition in the site credits, the privilege of working with Issa, and experience with the basics of web development (MySQL, PHP, Git). What the site needs: data collection/entry and website code improvements.

Last updated on 2019-09-14.

Table of contents

Agendas
Positions grouped by person
Positions grouped by organization

Agendas

Agenda name | Associated people | Associated organizations
Iterated amplification | Paul Christiano, Buck Shlegeris, Dario Amodei | OpenAI
Embedded agency | Eliezer Yudkowsky, Scott Garrabrant, Abram Demski | Machine Intelligence Research Institute
Comprehensive AI services | Eric Drexler | Future of Humanity Institute
Ambitious value learning | Stuart Armstrong | Future of Humanity Institute
Factored cognition | Andreas Stuhlmüller | Ought
Recursive reward modeling | Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, Shane Legg | Google DeepMind
Debate | Paul Christiano | OpenAI
Interpretability | Christopher Olah |
Inverse reinforcement learning | |
Preference learning | |
Cooperative inverse reinforcement learning | |
Imitation learning | |
Alignment for advanced machine learning systems | Jessica Taylor, Eliezer Yudkowsky, Patrick LaVictoire, Andrew Critch | Machine Intelligence Research Institute
Learning-theoretic AI alignment | Vanessa Kosoy |
Counterfactual reasoning | Jacob Steinhardt |

Positions grouped by person

Showing 25 people with positions.

Name | Number of organizations | List of organizations
Alyssa Vance | 1 | Machine Intelligence Research Institute
Anders Sandberg | 1 | Future of Humanity Institute
Anish Mohammed | 1 | EthicsNet
Asya Bergal | 1 | AI Impacts
Ben Hoffman | 1 | AI Impacts
Ben Hoskin | 1 | Machine Intelligence Research Institute
Brian Tomasik | 1 | Foundational Research Institute
Connor Flexman | 1 | AI Impacts
Daniel Kokotajlo | 1 | AI Impacts
Dávid Natingga | 1 | Machine Intelligence Research Institute
Finan Adamson | 1 | AI Impacts
Frank Adamek | 1 | Machine Intelligence Research Institute
Jimmy Rintjema | 1 | AI Impacts
John Salvatier | 1 | AI Impacts
Justis Mills | 1 | AI Impacts
Kaj Sotala | 1 | Foundational Research Institute
Katja Grace | 1 | AI Impacts
Lukas Gloor | 1 | Foundational Research Institute
Michael Wulfsohn | 1 | AI Impacts
Paul Christiano | 1 | AI Impacts
Rick Korzekwa | 1 | AI Impacts
Ronja Lutz | 1 | AI Impacts
Sebastian Nickel | 1 | Machine Intelligence Research Institute
Stephanie Zolayvar | 1 | AI Impacts
Tegan McCaslin | 1 | AI Impacts

Positions grouped by organization

Showing 5 organizations.

Organization | Number of people | List of people
AI Impacts | 15 | Asya Bergal, Rick Korzekwa, Ronja Lutz, Daniel Kokotajlo, Tegan McCaslin, Stephanie Zolayvar, Ben Hoffman, John Salvatier, Paul Christiano, Jimmy Rintjema, Justis Mills, Connor Flexman, Finan Adamson, Michael Wulfsohn, Katja Grace
Machine Intelligence Research Institute | 5 | Dávid Natingga, Sebastian Nickel, Ben Hoskin, Frank Adamek, Alyssa Vance
Foundational Research Institute | 3 | Brian Tomasik, Kaj Sotala, Lukas Gloor
EthicsNet | 1 | Anish Mohammed
Future of Humanity Institute | 1 | Anders Sandberg