Information for Victoria Krakovna

Basic information

Facebook username: vkrakovna
Intelligent Agent Foundations Forum username: 70
Donations List Website (data still preliminary): donor

List of positions (3 positions)

Future of Life Institute: Co-Founder. Source: [1]
Google DeepMind: Research Scientist. Sources: [2], [3], [4]
Machine Intelligence Research Institute: Research advisor (start date 2018-09-30; AI safety relation: position; employment type: advisor). Source: [5]

Products (3 products)

AI Safety Discussion (created 2016-02-21): A Facebook discussion group about AI safety. This is a closed group, so one needs to request access to see posts.

Introductory resources on AI safety research (created 2016-02-28): A list of readings on long-term AI safety. Mirrored at [6]. There is an updated list at [7].

AI safety resources (created 2017-10-01): A list of resources on long-term AI safety. Seems to have been first announced at [8].

Organization documents (0 documents)


Documents (1 document)

Title: New safety research agenda: scalable agent alignment via reward modeling
Publication date: 2018-11-20
Author: Victoria Krakovna
Publisher: LessWrong
Affected organizations: Google DeepMind
Affected people: Jan Leike
Affected agendas: Recursive reward modeling, iterated amplification
Notes: Blog post on LessWrong announcing the recursive reward modeling agenda. Some comments in the discussion thread clarify various aspects of the agenda, including its relation to Paul Christiano's iterated amplification agenda, whether the DeepMind safety team is thinking about the problem of whether the human user is a safe agent, and more details about alternating quantifiers in the analogy to complexity theory. Jan Leike is listed as an affected person because he is the lead author of the agenda, is mentioned in the blog post, and responds to several questions raised in the comments.