Item | Value |
---|---|
Facebook username | tom.everitt |
Intelligent Agent Foundations Forum username | 108 |
Website | http://www.tomeveritt.se/ |
Source | [1] |
Agendas | Recursive reward modeling |
Organization | Title | Start date | End date | AI safety relation | Subject | Employment type | Source | Notes |
---|---|---|---|---|---|---|---|---|
Google DeepMind | | | | | | | [2] | |
Australian National University | | | | | | | [2], [3], [4], [5] | |
Name | Creation date | Description |
---|---|---|
Title | Publication date | Author | Publisher | Affected organizations | Affected people | Document scope | Cause area | Notes |
---|---|---|---|---|---|---|---|---|
Title | Publication date | Author | Publisher | Affected organizations | Affected people | Affected agendas | Notes |
---|---|---|---|---|---|---|---|
Scalable agent alignment via reward modeling: a research direction | 2018-11-19 | Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, Shane Legg | arXiv | Google DeepMind | | Recursive reward modeling, Imitation learning, Inverse reinforcement learning, Cooperative inverse reinforcement learning, Myopic reinforcement learning, Iterated amplification, Debate | This paper introduces the (recursive) reward modeling agenda, discussing its basic outline, challenges, and ways to overcome those challenges. The paper also discusses alternative agendas and their relation to reward modeling. |