Welcome! This is a website that tracks people and organizations working on AI safety. See the code repository for the site's source code and data.
This website is developed by Issa Rice and has been partially funded by Vipul Naik.
If you like (or want to like) this website and have money: the current funder does not plan to continue funding this project, so the site is currently mostly sitting idle. If you want to bring this site to the next level, contact Issa at firstname.lastname@example.org. What you get: site improvements and recognition in the site credits. What the site needs: money.
If you have time and want experience building websites: this website is looking for contributors. If you want to help out, contact Issa at email@example.com. What you get: little or no pay (this could change if the site gets funding; see the previous paragraph), recognition in the site credits, the privilege of working with me, and knowledge of the basics of web development (MySQL, PHP, Git); a sketch of the kind of code involved follows below. What the site needs: data collection/entry and website code improvements.
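To give prospective contributors a feel for the stack, here is a minimal sketch of the kind of PHP/MySQL code the site runs on. This is an illustration only: the database name, credentials, and the `positions` table with its columns are hypothetical stand-ins, not the site's actual schema, which lives in the code repository.

```php
<?php
// Minimal sketch of the PHP/MySQL style used on this site.
// The `positions` table and its columns are hypothetical stand-ins;
// the real schema is in the code repository.
$mysqli = new mysqli('localhost', 'user', 'password', 'aiwatch');
if ($mysqli->connect_errno) {
    die('Could not connect: ' . $mysqli->connect_error);
}

// For each person, count and list the organizations they hold positions
// at, mirroring the "people with positions" table further down this page.
$result = $mysqli->query('
    SELECT person,
           COUNT(DISTINCT organization) AS num_orgs,
           GROUP_CONCAT(DISTINCT organization SEPARATOR ", ") AS orgs
    FROM positions
    GROUP BY person
    ORDER BY person
');

while ($row = $result->fetch_assoc()) {
    echo htmlspecialchars($row['person']) . ' (' . $row['num_orgs'] . '): '
        . htmlspecialchars($row['orgs']) . "<br>\n";
}
```

An aggregate query of this shape (GROUP BY plus COUNT and GROUP_CONCAT) is one plausible way to produce the per-person and per-organization tables shown below.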
Last updated on 2019-11-09.
| Agenda name | Associated people | Associated organizations |
|---|---|---|
| Iterated amplification | Paul Christiano, Buck Shlegeris, Dario Amodei | OpenAI |
| Embedded agency | Eliezer Yudkowsky, Scott Garrabrant, Abram Demski | Machine Intelligence Research Institute |
| Comprehensive AI services | Eric Drexler | Future of Humanity Institute |
| Ambitious value learning | Stuart Armstrong | Future of Humanity Institute |
| Factored cognition | Andreas Stuhlmüller | Ought |
| Recursive reward modeling | Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, Shane Legg | Google DeepMind |
| Inverse reinforcement learning | | |
| Cooperative inverse reinforcement learning | | |
| Alignment for advanced machine learning systems | Jessica Taylor, Eliezer Yudkowsky, Patrick LaVictoire, Andrew Critch | Machine Intelligence Research Institute |
| Learning-theoretic AI alignment | Vanessa Kosoy | |
| Counterfactual reasoning | Jacob Steinhardt | |
Showing 20 people with positions.
| Name | Number of organizations | List of organizations |
|---|---|---|
| Andrew Critch | 1 | Berkeley Existential Risk Initiative |
| Andrew Snyder-Beattie | 1 | Berkeley Existential Risk Initiative |
| Andrew X Stewart | 1 | Convergence Analysis |
| Claire Abu-Assal | 1 | Convergence Analysis |
| David Kristoffersson | 1 | Future of Humanity Institute |
| Gina Stuessy | 1 | Berkeley Existential Risk Initiative |
| Jaan Tallinn | 1 | Berkeley Existential Risk Initiative |
| Jacob Tsimerman | 1 | Berkeley Existential Risk Initiative |
| Justin Shovelain | 1 | Convergence Analysis |
| Kenzi Amodei | 1 | Berkeley Existential Risk Initiative |
| Kristian Rönn | 1 | Convergence Analysis |
| Kyle Scott | 1 | Berkeley Existential Risk Initiative |
| Malo Bourgon | 1 | Berkeley Existential Risk Initiative |
| Max Daniel | 1 | Foundational Research Institute |
| Michael Keenan | 1 | Berkeley Existential Risk Initiative |
| Ozzie Gooen | 1 | Convergence Analysis |
| Rebecca Raible | 1 | Berkeley Existential Risk Initiative |
| Robert de Neufville | 1 | Global Catastrophic Risk Institute |
| Seán Ó hÉigeartaigh | 1 | Berkeley Existential Risk Initiative |
| Stuart Russell | 1 | Berkeley Existential Risk Initiative |
Showing 5 organizations.
| Organization | Number of people | List of people |
|---|---|---|
| Berkeley Existential Risk Initiative | 12 | Jaan Tallinn, Kyle Scott, Rebecca Raible, Kenzi Amodei, Jacob Tsimerman, Stuart Russell, Seán Ó hÉigeartaigh, Malo Bourgon, Andrew Snyder-Beattie, Michael Keenan, Gina Stuessy, Andrew Critch |
| Convergence Analysis | 5 | Ozzie Gooen, Claire Abu-Assal, Kristian Rönn, Andrew X Stewart, Justin Shovelain |
| Foundational Research Institute | 1 | Max Daniel |
| Future of Humanity Institute | 1 | David Kristoffersson |
| Global Catastrophic Risk Institute | 1 | Robert de Neufville |