Information for Victoria Krakovna

Table of contents

- Basic information
- List of positions
- Products
- Organization documents
- Documents
- Similar people

Basic information

Item | Value
Facebook username | vkrakovna
Intelligent Agent Foundations Forum username | 70
Website | https://vkrakovna.wordpress.com/
Donations List Website (data still preliminary) | donor

List of positions (3 positions)

Organization | Title | Start date | AI safety relation | Employment type | Source
Future of Life Institute | Co-Founder | 2014-03-01 | | board member | [1], [2]
Google DeepMind | Research Scientist | | | | [3], [4], [5]
Machine Intelligence Research Institute | Advisor | 2018-11-26 | position | advisor | [6], [7]

Products (3 products)

Name | Creation date | Description
AI Safety Discussion | 2016-02-21 | A Facebook discussion group about AI safety. The group is closed, so one must request access to see posts.
Introductory resources on AI safety research | 2016-02-28 | A list of readings on long-term AI safety. Mirrored at [8]. An updated list is available at [9].
AI safety resources | 2017-10-01 | A list of resources on long-term AI safety. Appears to have been first announced at [10].

Organization documents (0 documents)


Documents (1 document)

Title: New safety research agenda: scalable agent alignment via reward modeling
Publication date: 2018-11-20
Author: Victoria Krakovna
Publisher: LessWrong
Affected organizations: Google DeepMind
Affected people: Jan Leike
Affected agendas: Recursive reward modeling, iterated amplification
Notes: Blog post on LessWrong announcing the recursive reward modeling agenda. Comments in the discussion thread clarify several aspects of the agenda, including its relation to Paul Christiano's iterated amplification agenda, whether the DeepMind safety team is considering the problem of whether the human user is a safe agent, and details of the alternating quantifiers in the analogy to complexity theory. Jan Leike is listed as an affected person because he is the lead author of the agenda, is mentioned in the blog post, and responds to several questions raised in the comments.

Similar people

Showing at most 20 people who are most similar in terms of the organizations they have worked at.

Person | Number of organizations in common | List of organizations in common
Nick Bostrom | 3 | Future of Life Institute, Google DeepMind, Machine Intelligence Research Institute
Jaan Tallinn | 2 | Future of Life Institute, Machine Intelligence Research Institute
Stuart Russell | 2 | Future of Life Institute, Machine Intelligence Research Institute
Max Tegmark | 2 | Future of Life Institute, Machine Intelligence Research Institute
Janos Kramar | 2 | Future of Life Institute, Machine Intelligence Research Institute
Daniel Dewey | 2 | Future of Life Institute, Machine Intelligence Research Institute
Jesse Galef | 2 | Future of Life Institute, Machine Intelligence Research Institute
Jan Leike | 2 | Google DeepMind, Machine Intelligence Research Institute
Martin Rees | 1 | Future of Life Institute
Elon Musk | 1 | Future of Life Institute
Stephen Hawking | 1 | Future of Life Institute
Francesca Rossi | 1 | Future of Life Institute
Melody Guan | 1 | Future of Life Institute
Ales Flidr | 1 | Future of Life Institute
Akhil Deo | 1 | Future of Life Institute
Alan Alda | 1 | Future of Life Institute
Alan Guth | 1 | Future of Life Institute
Alan Yan | 1 | Future of Life Institute
Alexandra Tsalidis | 1 | Future of Life Institute
Andrea Berman | 1 | Future of Life Institute