Information for Gillian Hadfield

Basic information

| Item | Value |
|---|---|

List of positions (2 positions)

| Organization | Title | Start date | End date | AI safety relation | Subject | Employment type | Source | Notes |
|---|---|---|---|---|---|---|---|---|
| Center for Human-Compatible AI | Faculty Affiliate | 2017-10-01 | | | | | [1], [2] | |
| OpenAI | Senior Policy Advisor | 2018-08-01 | | AGI organization | | | [3], [4] | |

Products (0 products)

| Name | Creation date | Description |
|---|---|---|

Organization documents (0 documents)

| Title | Publication date | Author | Publisher | Affected organizations | Affected people | Document scope | Cause area | Notes |
|---|---|---|---|---|---|---|---|---|

Documents (1 document)

| Title | Publication date | Author | Publisher | Affected organizations | Affected people | Affected agendas | Notes |
|---|---|---|---|---|---|---|---|
| AI Alignment Podcast: An Overview of Technical AI Alignment with Rohin Shah (Part 2) | 2019-04-25 | Lucas Perry | Future of Life Institute | | Rohin Shah, Dylan Hadfield-Menell, Gillian Hadfield | Embedded agency, Cooperative inverse reinforcement learning, inverse reinforcement learning, deep reinforcement learning from human preferences, recursive reward modeling, iterated amplification | Part two of a podcast episode that goes into detail about some technical approaches to AI alignment. |

Similar people

Showing at most 20 people who are most similar in terms of which organizations they have worked at.

| Person | Number of organizations in common | List of organizations in common |
|---|---|---|
| Smitha Milli | 2 | Center for Human-Compatible AI, OpenAI |
| Jakob Foerster | 2 | Center for Human-Compatible AI, OpenAI |
| Pieter Abbeel | 2 | Center for Human-Compatible AI, OpenAI |
| Christina Hendrickson | 1 | OpenAI |
| Joshua Achiam | 1 | OpenAI |
| Brad Lightcap | 1 | OpenAI |
| Ethan Knight | 1 | OpenAI |
| Ingmar Kanitscheider | 1 | OpenAI |
| Lei Zhang | 1 | OpenAI |
| Bowen Baker | 1 | OpenAI |
| Daniel Ziegler | 1 | OpenAI |
| Maddie Hall | 1 | OpenAI |
| Christine McLeavey Payne | 1 | OpenAI |
| Danny Hernandez | 1 | OpenAI |
| Eric Sigler | 1 | OpenAI |
| Diane Yoon | 1 | OpenAI |
| David Luan | 1 | OpenAI |
| Larissa Schiavo | 1 | OpenAI |
| Arthur Petron | 1 | OpenAI |
| Beth Barnes | 1 | Center for Human-Compatible AI |