Item | Value |
---|---|
Agendas | Counterfactual reasoning |
Organization | Title | Start date | End date | AI safety relation | Subject | Employment type | Source | Notes |
---|---|---|---|---|---|---|---|---|
Stanford University | Graduate Student | | | | | graduate student | [1], [2], [3], [4] | One of 37 AGI Safety Researchers of 2015 funded by donations from Elon Musk and the Open Philanthropy Project |
Center for Human-Compatible AI | Affiliate | 2019-12-01 | | | | affiliate | [5] | |
Open Philanthropy | Technical advisor | | | | | | [6] | |