| Item | Value |
|---|---|
| Donations List Website (data still preliminary) | |
| Organization | Title | Start date | End date | AI safety relation | Subject | Employment type | Source | Notes |
|---|---|---|---|---|---|---|---|---|
| Machine Intelligence Research Institute | Staff Writer | 2016-10-01 | 2018-05-01 | position | | | [1], [2] | First appears on the team page in February 2017. |
| Center for Applied Rationality | Adjunct Instructor | 2017-04-01 | 2019-02-01 | | | | [3], [4], [5] | |
| Lightcone Infrastructure | | 2017-06-01 | 2021-07-01 | position | | | [6], [7], [8] | |
| Name | Creation date | Description |
|---|---|---|
| Title | Publication date | Author | Publisher | Affected organizations | Affected people | Document scope | Cause area | Notes |
|---|---|---|---|---|---|---|---|---|
| Comment on “Taking AI Risk Seriously” (thoughts by Critch) | 2018-02-01 | Matthew Graves | LessWrong | Machine Intelligence Research Institute | | | AI safety | Graves (who does recruiting work at MIRI) gives his personal opinion on ways to get involved in AI safety. |
| Title | Publication date | Author | Publisher | Affected organizations | Affected people | Affected agendas | Notes |
|---|---|---|---|---|---|---|---|
Showing at most 20 people who are most similar in terms of which organizations they have worked at.
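The similar-people list is ranked by overlap in organizational history, but the exact metric is not documented here. The sketch below assumes the simplest plausible version: count the organizations each pair of people has in common and keep the top 20. The function name, the metric, and all example data are hypothetical illustrations, not the site's actual implementation.

```python
from collections import Counter

def most_similar_people(person, people_to_orgs, limit=20):
    """Rank other people by the number of organizations they share
    with `person`. The raw shared-organization count used here is an
    assumption; the site does not state its formula."""
    target_orgs = people_to_orgs[person]
    overlap = Counter()
    for other, orgs in people_to_orgs.items():
        if other == person:
            continue
        shared = len(target_orgs & orgs)  # set intersection size
        if shared:
            overlap[other] = shared
    # most_common() returns the highest-overlap people first.
    return overlap.most_common(limit)

# Hypothetical example data: each person maps to a set of organizations.
people_to_orgs = {
    "Person X": {"Machine Intelligence Research Institute",
                 "Center for Applied Rationality",
                 "Lightcone Infrastructure"},
    "Person A": {"Machine Intelligence Research Institute"},
    "Person B": {"Center for Applied Rationality",
                 "Lightcone Infrastructure"},
}

print(most_similar_people("Person X", people_to_orgs))
# [('Person B', 2), ('Person A', 1)]
```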