Item | Value |
---|---|
Organization | Title | Start date | End date | AI safety relation | Subject | Employment type | Source | Notes |
---|---|---|---|---|---|---|---|---|
Future of Humanity Institute | Director of Research | 2015-12-01 | | | | | [1], [2], [3], [4] | |
Future of Humanity Institute | Academic Project Manager | 2013-11-01 | 2015-12-01 | | | | [1], [2], [3] | |
Leverhulme Centre for the Future of Intelligence | Research Exercise Leader | | | | | | [5], [6] | |
Berkeley Existential Risk Initiative | Advisor | 2017-02-01 | 2019-02-01 | GCR organization | | | [7], [8], [9] | |
Longview Philanthropy | Advisor | 2021-01-01 | | | | advisor | [10] | |
Open Philanthropy | Program officer | 2019-01-01 | | | | | [11], [12] | |
Name | Creation date | Description |
---|---|---|
Title | Publication date | Author | Publisher | Affected organizations | Affected people | Document scope | Cause area | Notes |
---|---|---|---|---|---|---|---|---|
Title | Publication date | Author | Publisher | Affected organizations | Affected people | Affected agendas | Notes |
---|---|---|---|---|---|---|---|
Showing at most 20 people who are most similar in terms of which organizations they have worked at.