Information for Google DeepMind

Basic information

Item | Value
Country | United Kingdom
Website | https://deepmind.com/
Source | [1]

Staff count by year

This table only includes positions for which at least the start date is known. The positions count can include the same person multiple times if they held different positions. For each year, a person is included if they were at the organization during any part of that year; this means the actual staff count at any given point during the year can be lower.

Year | Positions count | Researchers | General staff | Associates | Board members | Advisors
2010 | 3 | | Demis Hassabis, Mustafa Suleyman, Shane Legg | | |
2011 | 3 | | Demis Hassabis, Mustafa Suleyman, Shane Legg | | |
2012 | 3 | | Demis Hassabis, Mustafa Suleyman, Shane Legg | | |
2013 | 3 | | Demis Hassabis, Mustafa Suleyman, Shane Legg | | |
2014 | 3 | | Demis Hassabis, Mustafa Suleyman, Shane Legg | | |
2015 | 3 | | Demis Hassabis, Mustafa Suleyman, Shane Legg | | |
2016 | 4 | Pedro A. Ortega | Demis Hassabis, Mustafa Suleyman, Shane Legg | | |
2017 | 6 | Pedro A. Ortega | Demis Hassabis, Mustafa Suleyman, Sean Legassick, Shane Legg, Verity Harding | | |
2018 | 7 | Pedro A. Ortega | Demis Hassabis, Mustafa Suleyman, Sean Legassick, Shane Legg, Verity Harding, Vishal Maini | | |
2019 | 7 | Pedro A. Ortega | Demis Hassabis, Mustafa Suleyman, Sean Legassick, Shane Legg, Verity Harding, Vishal Maini | | |

Number of full-time staff at the beginning of each year

The following table lists, for each date (the start of a year), the people who were at the organization on that date. The table may not list every person who worked for the organization (e.g., someone who joined and left within a single year would not appear). It excludes associates, interns, advisors, and board members.

Date | Staff count | Staff
2011-01-01 | 3 | Demis Hassabis, Mustafa Suleyman, Shane Legg
2012-01-01 | 3 | Demis Hassabis, Mustafa Suleyman, Shane Legg
2013-01-01 | 3 | Demis Hassabis, Mustafa Suleyman, Shane Legg
2014-01-01 | 3 | Demis Hassabis, Mustafa Suleyman, Shane Legg
2015-01-01 | 3 | Demis Hassabis, Mustafa Suleyman, Shane Legg
2016-01-01 | 4 | Demis Hassabis, Mustafa Suleyman, Pedro A. Ortega, Shane Legg
2017-01-01 | 4 | Demis Hassabis, Mustafa Suleyman, Pedro A. Ortega, Shane Legg
2018-01-01 | 6 | Demis Hassabis, Mustafa Suleyman, Pedro A. Ortega, Sean Legassick, Shane Legg, Verity Harding
2019-01-01 | 7 | Demis Hassabis, Mustafa Suleyman, Pedro A. Ortega, Sean Legassick, Shane Legg, Verity Harding, Vishal Maini

Full history of additions and subtractions

This table shows the full change history of positions. Each row corresponds to at least one addition or removal of a position. If a position's name changed, it is listed simultaneously as an addition (of the new name) and a removal (of the old name).

Date | Number of positions | Positions added | Positions removed | Names of positions added | Names of positions removed
| 14 | 14 | 0 | Andrew Lefrancq, Chris Maddison, Christiana Figueres, Diane Coyle, Edward W. Felten, James Manyika, Jan Leike, Jeffrey D. Sachs, Laurent Orseau, Miljan Martic, Nick Bostrom, Thore Graepel, Tom Everitt, Victoria Krakovna |
2010-09-23 | 17 | 3 | 0 | Demis Hassabis, Mustafa Suleyman, Shane Legg |
2016-01-01 | 18 | 1 | 0 | Pedro A. Ortega |
2017-10-04 | 20 | 2 | 0 | Sean Legassick, Verity Harding |
2018-02-01 | 21 | 1 | 0 | Vishal Maini |

List of people (21 positions)

Person | Title | Start date | End date | AI safety relation | Subject | Employment type | Source | Notes
Chris Maddison | Research Scientist | | | position | technical research | | [2] | One of the Open Philanthropy Project 2018 AI Fellows.
Laurent Orseau | Research Scientist | | | | | | [3], [4], [5], [6], [7], [8], [9] |
Victoria Krakovna | Research Scientist | | | | | | [10], [8], [9] |
Nick Bostrom | DeepMind Ethics & Society Fellow | | | | | | [11] |
Diane Coyle | DeepMind Ethics & Society Fellow | | | | | | [11] |
Edward W. Felten | DeepMind Ethics & Society Fellow | | | | | | [11] |
Christiana Figueres | DeepMind Ethics & Society Fellow | | | | | | [11] |
James Manyika | DeepMind Ethics & Society Fellow | | | | | | [11] |
Jeffrey D. Sachs | DeepMind Ethics & Society Fellow | | | | | | [11] |
Sean Legassick | Head of DeepMind Ethics & Society | 2017-10-04 | | | | | [12], [13] |
Verity Harding | Head of DeepMind Ethics & Society | 2017-10-04 | | | | | [12], [13] |
Demis Hassabis | Co-Founder and CEO | 2010-09-23 | | | | | [14], [15] |
Mustafa Suleyman | Co-Founder and Head of Applied AI | 2010-09-23 | | | | | [14], [15] |
Shane Legg | Co-Founder and Chief Scientist | 2010-09-23 | | | | | [14], [15], [16], [8], [9] |
Miljan Martic | | | | | | | [16], [8] |
Jan Leike | | | | | | | [16], [8] |
Tom Everitt | | | | | | | [8] |
Andrew Lefrancq | | | | | | | [8] |
Pedro A. Ortega | Research Scientist | 2016-01-01 | | position | technical research | full-time | [8], [17], [18], [19], [20] |
Vishal Maini | Strategic Communications Manager | 2018-02-01 | | position | popularization | | [21], [22], [23] |
Thore Graepel | | | | | | | [24] |

Products (0 products)

Name | Creation date | Description

Organization documents (0 documents)

Title | Publication date | Author | Publisher | Affected organizations | Affected people | Document scope | Cause area | Notes

Documents (2 documents)

Title | Publication date | Author | Publisher | Affected organizations | Affected people | Affected agendas | Notes
New safety research agenda: scalable agent alignment via reward modeling | 2018-11-20 | Victoria Krakovna | LessWrong | Google DeepMind | Jan Leike | Recursive reward modeling, iterated amplification | Blog post on LessWrong announcing the recursive reward modeling agenda. Some comments in the discussion thread clarify various aspects of the agenda, including its relation to Paul Christiano's iterated amplification agenda, whether the DeepMind safety team is thinking about the problem of whether the human user is a safe agent, and more details about alternating quantifiers in the analogy to complexity theory. Jan Leike is listed as an affected person for this document because he is the lead author and is mentioned in the blog post, and also because he responds to several questions raised in the comments.
Scalable agent alignment via reward modeling: a research direction | 2018-11-19 | Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, Shane Legg | arXiv | Google DeepMind | | Recursive reward modeling, imitation learning, inverse reinforcement learning, cooperative inverse reinforcement learning, myopic reinforcement learning, iterated amplification, debate | This paper introduces the (recursive) reward modeling agenda, discussing its basic outline, challenges, and ways to overcome those challenges. The paper also discusses alternative agendas and their relation to reward modeling.