Organization | Title | Start date | End date | AI safety relation | Subject | Employment type | Source | Notes |
---|---|---|---|---|---|---|---|---|
Epoch | Staff Researcher | 2022-12-03 | | | | | [1] | |
Name | Creation date | Description |
---|---|---|
Clarifying some key hypotheses in AI alignment | 2019-08-15 | With Rohin Shah. A diagram collecting several hypotheses in AI alignment and their relationships to existing research agendas. |
Showing at most 20 people who are most similar in terms of which organizations they have worked at (a minimal sketch of this similarity computation follows the table).
Person | Number of organizations in common | List of organizations in common |
---|---|---|
Daniela Amodei | 1 | Epoch |
David Atkinson | 1 | Epoch |
Ege Erdil | 1 | Epoch |
Jenny Xiao | 1 | Epoch |
Keith Wynroe | 1 | Epoch |
Matthew Barnett | 1 | Epoch |
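The ranking above can be reproduced by counting, for each other person, the organizations they share with the subject of this page. Below is a minimal sketch in Python, assuming a hypothetical `employments` mapping from each person to the set of organizations they have worked at; the data is illustrative, seeded from the table above, and is not AI Watch's actual schema or records.

```python
# Hypothetical input: person -> set of organizations they have worked at.
# "subject" stands in for the person this page is about.
employments = {
    "subject": {"Epoch"},
    "Daniela Amodei": {"Epoch"},
    "David Atkinson": {"Epoch"},
    "Ege Erdil": {"Epoch"},
    "Jenny Xiao": {"Epoch"},
    "Keith Wynroe": {"Epoch"},
    "Matthew Barnett": {"Epoch"},
}

def most_similar(person, employments, limit=20):
    """Rank other people by the number of organizations shared with `person`."""
    own_orgs = employments[person]
    rows = []
    for other, orgs in employments.items():
        if other == person:
            continue
        common = own_orgs & orgs
        if common:
            rows.append((other, len(common), sorted(common)))
    # Most shared organizations first; ties broken alphabetically by name.
    rows.sort(key=lambda row: (-row[1], row[0]))
    return rows[:limit]

for name, count, orgs in most_similar("subject", employments):
    print(f"{name} | {count} | {', '.join(orgs)}")
```

The `limit=20` parameter mirrors the "at most 20 people" cap stated above, and the tie-breaking by name matches the apparent alphabetical ordering of the equal-count rows in the table.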