Welcome! This is a website that tracks people and organizations working on AI safety. See the code repository for the site's source code and data.
This website is developed by Issa Rice with data contributions from Sebastian Sanchez, Amana Rice, and Vipul Naik, and has been partially funded by Vipul Naik.
If you like (or want to like) this website and have money: the current funder does not want to continue funding this project, so it is currently mostly sitting idle. If you want to bring this site to the next level, contact Issa at riceissa@gmail.com. What you get: site improvements and recognition in the site credits. What the site needs: money.
If you have time and want experience building websites: this website is looking for contributors. If you want to help out, contact Issa at riceissa@gmail.com. What you get: little or no pay (this could change if the site gets funding; see the previous paragraph), recognition in the site credits, the privilege of working with me, and knowledge of the basics of web development (MySQL, PHP, Git). What the site needs: data collection/entry and website code improvements.
Last updated on 2021-01-04; see here for a full list of recent changes.
Note: as shown by the large number of “unknown” values, most of the positions haven’t yet been categorized by relation/subject, so this table will only become useful in the future.
Subject \ Relation | Unknown | AGI organization | GCR organization | position | Total |
---|---|---|---|---|---|
Unknown | 2883 | 238 | 20 | 245 | 3386 |
background | 0 | 0 | 0 | 25 | 25 |
general | 0 | 0 | 2 | 8 | 10 |
grant investigation | 0 | 0 | 0 | 3 | 3 |
policy | 0 | 0 | 0 | 1 | 1 |
popularization | 0 | 0 | 0 | 2 | 2 |
scientific advising | 0 | 0 | 0 | 4 | 4 |
software engineering | 0 | 2 | 0 | 10 | 12 |
strategy | 0 | 0 | 0 | 1 | 1 |
technical research | 0 | 0 | 0 | 34 | 34 |
Total | 2883 | 240 | 22 | 333 | 3478 |
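For those curious about the underlying data: a cross-tabulation like the one above can be produced with a single MySQL query (MySQL is part of the site's stack). The following is only a minimal sketch; it assumes a hypothetical `positions` table with nullable `subject` and `relation` columns, which may not match the actual schema in the code repository.

```sql
-- Minimal sketch (not the site's actual query). Assumes a hypothetical
-- `positions` table with nullable `subject` and `relation` columns;
-- a NULL value corresponds to "Unknown" in the table above.
SELECT
  subject,
  SUM(relation IS NULL)                AS unknown_relation,
  SUM(relation <=> 'AGI organization') AS agi_organization,
  SUM(relation <=> 'GCR organization') AS gcr_organization,
  SUM(relation <=> 'position')         AS position_relation,
  COUNT(*)                             AS total
FROM positions
GROUP BY subject WITH ROLLUP;  -- the ROLLUP row gives the final "Total" line
```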
Note: as shown by the large number of “unknown” values, most of the positions haven’t yet been categorized by start/end date, so this table will only become useful in the future.
Year | Positions started | Positions ended |
---|---|---|
Unknown | 1323 | 2333 |
1999 | 2 | 1 |
2000 | 6 | 0 |
2002 | 1 | 1 |
2004 | 5 | 0 |
2005 | 13 | 2 |
2006 | 12 | 13 |
2007 | 17 | 6 |
2008 | 5 | 5 |
2009 | 24 | 7 |
2010 | 31 | 23 |
2011 | 46 | 36 |
2012 | 56 | 36 |
2013 | 83 | 59 |
2014 | 124 | 43 |
2015 | 186 | 130 |
2016 | 406 | 187 |
2017 | 343 | 194 |
2018 | 374 | 235 |
2019 | 243 | 117 |
2020 | 178 | 50 |
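Similarly, the per-year counts above could come from grouping positions by the year of their start and end dates. A minimal sketch, again assuming a hypothetical `positions` table with nullable `start_date` and `end_date` columns (the actual schema may differ):

```sql
-- Minimal sketch (not the site's actual query): positions counted by the
-- year of their start date; a NULL year corresponds to "Unknown" above.
-- The same query with `end_date` gives the "Positions ended" column.
SELECT YEAR(start_date) AS year,
       COUNT(*)         AS positions_started
FROM positions
GROUP BY YEAR(start_date)
ORDER BY year;
```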
Showing 878 people with positions.
Showing 126 organizations.
Showing 1720 people.
This section lists AI safety-related “products”: interactive tools, websites, flowcharts, datasets, etc. Unlike documents, products tend to be interactive, continually updated, or reliant on input from the consumer.
Showing 33 products.
Name | Type | Creator | Creation date | Description |
---|---|---|---|---|
Clarifying some key hypotheses in AI alignment | diagram | Ben Cottier, Rohin Shah | 2019-08-15 | A diagram collecting several hypotheses in AI alignment and their relationships to existing research agendas. |
AI Alignment Forum | blog | LessWrong 2.0 | 2018-07-10 | A group blog for discussion of technical aspects of AI alignment. The forum is built using the same software as LessWrong 2.0, and is integrated with LessWrong 2.0. For creation date, see [25]. |
AI Safety Research Camp | workshop | Tom McGrath, Remmelt Ellen, Linda Linsefors, Nandi Schoots, David Kristoffersson, Chris Pasek | 2018-02-01 | A research camp to take place in Gran Canaria in April 2018 and in the United Kingdom in July–August 2018. Facebook group at [26]. The creation date is the date of announcement on LessWrong 2.0. |
“Levels of defense” in AI safety | flowchart | Alexey Turchin | 2017-12-12 | A flowchart applying multilevel defense to AI safety. There is an accompanying post on LessWrong at [27]. |
AI Alignment Prize | contest | Zvi Mowshowitz, Vladimir Slepnev, Paul Christiano | 2017-11-03 | A prize for work that advances understanding in alignment of smarter-than-human artificial intelligence. Winners for the first round, as well as announcement of the second round, can be found at [24]. Winners for the second round, as well as announcement of the third round, can be found at [28]. |
AI Watch | interactive application | Issa Rice | 2017-10-23 | A website to track people and organizations working on AI safety. |
AI Safety Open Discussion | discussion group | Mati Roy | 2017-10-23 | A Facebook discussion group about AI safety. This is an open group. |
AI safety resources | list | Victoria Krakovna | 2017-10-01 | A list of resources for long-term AI safety. Seems to have been first announced at [29]. |
Map of the AI Safety Community | graphic | Søren Elverlin | 2017-09-26 | A pictorial map that lists organizations and individuals in the AI safety community. |
Open Philanthropy Project AI Fellows Program | fellowship | Open Philanthropy Project | 2017-09-12 | A fellowship to support PhD students in AI and machine learning. For the creation date, see [30]. |
LessWrong 2.0 | blog | LessWrong 2.0 | 2017-06-18 | A community blog about rationality, decision theory, AI, the rationality community, and other topics relevant to AI safety. This is a re-launch/modernization of the original LessWrong. For the launch date, the date of the welcome post [31] is used. |
Road to AI Safety Excellence | course | Toon Alfrink | 2017-06-15 | A proposed course that is designed to produce AI safety researchers. It used to be called “Accelerating AI Safety Adoption in Academia” and was announced on LessWrong at [32]. The Facebook group was created on 2017-06-30 [33]. |
Annotated bibliography of recommended materials | list | Center for Human-Compatible AI | 2016-12-01 | An annotated and interactive bibliography of AI safety-related course materials, textbooks, videos, papers, etc. |
Extinction Risk from Artificial Intelligence | blog | Michael Cohen | 2016-06-01 | A series of pages exploring arguments for and against working on AI safety. The creation date is inferred from the URLs of images (example: [34]). |
AI Alignment | blog | Paul Christiano | 2016-05-28 | Paul Christiano’s blog about AI alignment. |
AISafety.com Reading Group | discussion group | Søren Elverlin, Erik B. Jacobsen, Volkan Erdogan | 2016-05-24 | A weekly reading group covering topics in AI safety. |
Cause prioritization app | interactive application | Michael Dickens, Buck Shlegeris | 2016-05-18 | An interactive app for quantitative cause prioritization. The app includes a section [35] on AI safety intervention. The creation date is the date of the first commit in the Git repository [36]. |
Arbital AI alignment domain | wiki | Arbital, Eliezer Yudkowsky | 2016-03-04 | A collection of wiki-like pages on topics in AI alignment. The creation date is the date of the launch announcement for Arbital [37]; it’s unclear when the AI alignment domain itself was created. |
Introductory resources on AI safety research | list | Victoria Krakovna | 2016-02-28 | A list of readings on long-term AI safety. Mirrored at [38]. There is an updated list at [39]. |
AI Safety Discussion | discussion group | Victoria Krakovna | 2016-02-21 | A Facebook discussion group about AI safety. This is a closed group so one needs to request access to see posts. |
Reinforce.js implementation of Stuart Armstrong’s toy control problem | interactive application | Gwern Branwen, FeepingCreature | 2016-02-03 | A live demo of Stuart Armstrong’s toy control problem [40]. Gwern Branwen introduced the demo in a LessWrong comment [41]. |
AI Policies Wiki | wiki | Gordon Irlam | 2015-12-14 | A wiki on AI policy. The wiki creation date can be seen in the revision history of the main page [42]. |
The Control Problem | discussion group | CyberPersona | 2015-08-29 | A subreddit about AI safety and control. For the subreddit creation date, see [43]. |
AGI Failures Modes and Levels map | flowchart | Alexey Turchin | 2015-01-01 | A flowchart about failure modes of artificial general intelligence, grouped by the stage of development. There is an accompanying post on LessWrong at [18]. |
AGI Safety Solutions Map | flowchart | Alexey Turchin | 2015-01-01 | A flowchart on potential solutions to AI safety. There is an accompanying post on LessWrong at [44]. |
Intelligent Agent Foundations Forum | discussion group | Machine Intelligence Research Institute | 2014-11-04 | A forum for technical AI safety research. The source code is hosted on GitHub [45]. The timestamp on the introductory post [46] gives the launch date. |
A flowchart of AI safety considerations | flowchart | Eliezer Yudkowsky | 2014-11-02 | The flowchart was posted to Eliezer Yudkowsky’s Essays (a Facebook group) and has no title. |
Effective Altruism Forum | blog | Centre for Effective Altruism, Rethink Charity, Ryan Carey | 2014-09-10 | A community blog about effective altruism which often has posts about AI safety. The forum was announced on LessWrong by Ryan Carey [47]. |
How to study superintelligence strategy | list | Luke Muehlhauser | 2014-07-03 | A list of project ideas in superintelligence strategy. |
Ordinary Ideas | blog | Paul Christiano | 2011-12-21 | Paul Christiano’s blog about “weird AI stuff” [48]. |
The Uncertain Future | interactive application | Machine Intelligence Research Institute | 2009-10-01 | A tool to model future technology and its effect on civilization. For more about the history of the site, see [49]. |
LessWrong Wiki | wiki | Machine Intelligence Research Institute | 2009-03-12 | A companion wiki to the community blog LessWrong. The wiki has pages about AI safety. |
LessWrong | blog | Machine Intelligence Research Institute | 2009-02-01 | A community blog about rationality, decision theory, AI, and updates from MIRI, among other topics. |