Item | Value |
---|---|
Organization | Title | Start date | End date | AI safety relation | Subject | Employment type | Source | Notes |
---|---|---|---|---|---|---|---|---|
Centre for the Study of Existential Risk | Executive Director | | | | position | | [1], [2], [3], [4], [5], [6], [7] | |
Future of Humanity Institute | Research Associate | | | | | | [8], [9] | |
Leverhulme Centre for the Future of Intelligence | Project Leader | | | | | | [10], [11], [7] | |
Berkeley Existential Risk Initiative | Board advisor | 2017-02-01 | | GCR organization | advisor | | [12], [13], [14] | |
Global Catastrophic Risk Institute | Senior Advisor | | | GCR organization | advisor | | [15] | |
Name | Creation date | Description |
---|---|---|
Title | Publication date | Author | Publisher | Affected organizations | Affected people | Document scope | Cause area | Notes |
---|---|---|---|---|---|---|---|---|
Title | Publication date | Author | Publisher | Affected organizations | Affected people | Affected agendas | Notes |
---|---|---|---|---|---|---|---|
Showing at most 20 people who are most similar in terms of which organizations they have worked at.