AI Watch

Welcome! This website tracks people and organizations working on AI safety. The site's source code and data are available in the code repository.

This website is developed by Issa Rice and partially funded by Vipul Naik.

This site is still under active development.

Last updated on 2018-09-16.

Positions grouped by person

Showing 20 people with positions.

Name | Number of organizations | List of organizations
Andrew Critch | 1 | Berkeley Existential Risk Initiative
Andrew Snyder-Beattie | 1 | Berkeley Existential Risk Initiative
Andrew X Stewart | 1 | Convergence Analysis
Claire Abu-Assal | 1 | Convergence Analysis
David Kristoffersson | 1 | Future of Humanity Institute
Gina Stuessy | 1 | Berkeley Existential Risk Initiative
Jaan Tallinn | 1 | Berkeley Existential Risk Initiative
Jacob Tsimerman | 1 | Berkeley Existential Risk Initiative
Justin Shovelain | 1 | Convergence Analysis
Kenzi Amodei | 1 | Berkeley Existential Risk Initiative
Kristian Rönn | 1 | Convergence Analysis
Kyle Scott | 1 | Berkeley Existential Risk Initiative
Malo Bourgon | 1 | Berkeley Existential Risk Initiative
Max Daniel | 1 | Foundational Research Institute
Michael Keenan | 1 | Berkeley Existential Risk Initiative
Ozzie Gooen | 1 | Convergence Analysis
Rebecca Raible | 1 | Berkeley Existential Risk Initiative
Robert de Neufville | 1 | Global Catastrophic Risk Institute
Seán Ó hÉigeartaigh | 1 | Berkeley Existential Risk Initiative
Stuart Russell | 1 | Berkeley Existential Risk Initiative

Positions grouped by organization

Showing 5 organizations.

Organization | Number of people | List of people
Berkeley Existential Risk Initiative | 12 | Andrew Critch, Andrew Snyder-Beattie, Gina Stuessy, Jaan Tallinn, Jacob Tsimerman, Kenzi Amodei, Kyle Scott, Malo Bourgon, Michael Keenan, Rebecca Raible, Seán Ó hÉigeartaigh, Stuart Russell
Convergence Analysis | 5 | Andrew X Stewart, Claire Abu-Assal, Justin Shovelain, Kristian Rönn, Ozzie Gooen
Foundational Research Institute | 1 | Max Daniel
Future of Humanity Institute | 1 | David Kristoffersson
Global Catastrophic Risk Institute | 1 | Robert de Neufville