Information for Paul Christiano

Basic information

Item | Value
Country | United States
GitHub username | paulfchristiano
LessWrong username | paulfchristiano
Intelligent Agent Foundations Forum username | Paul_Christiano
Website | https://paulfchristiano.com
Source | [1]
Donations List Website (data still preliminary) | donor
Agendas | Iterated amplification, Debate

List of positions (12 positions)

Organization | Title | Start date | End date | AI safety relation | Subject | Employment type | Source | Notes
Theiss Research | Contractor | | | position | technical research | contractor | [2] |
Future of Humanity Institute | Research Associate | | | | | | [3], [4] |
University of California, Berkeley | | | | | | | [2], [5], [6], [7] | One of 37 AGI Safety Researchers of 2015 funded by donations from Elon Musk and the Open Philanthropy Project
80,000 Hours | Advisor | 2013-09-18 | 2015-11-26 | advisor | | | [8], [9], [10] |
AI Impacts | Contributor | | | position | background | | [11] |
Machine Intelligence Research Institute | Research Associate | 2013-05-01 | 2015-03-01 | position | | | [12], [13] |
Open Philanthropy | Technical advisor | | | | | | [14] |
OpenAI | | 2017-01-01 | | position | technical research | full-time | [1], [15], [16] | The description given is "working on alignment"
OpenAI | Intern | 2016-05-25 | | AGI organization | | | [17], [18] |
Ought | Collaborator | 2018-01-01 | | position | | | [19], [20] |
Ought | Board member | 2021-01-01 | | position | | board member | [19], [21] |
Redwood Research | Board member | 2021-01-01 | | | | board member | [22] |

Products (3 products)

Name | Creation date | Description
Ordinary Ideas | 2011-12-21 | Paul Christiano’s blog about “weird AI stuff” [23].
AI Alignment | 2016-05-28 | Paul Christiano’s blog about AI alignment.
AI Alignment Prize | 2017-11-03 | With Zvi Mowshowitz and Vladimir Slepnev. A prize for work that advances understanding in alignment of smarter-than-human artificial intelligence. Winners of the first round, along with the announcement of the second round, can be found at [24]. Winners of the second round, along with the announcement of the third round, can be found at [25].

Organization documents (2 documents)

Title | Publication date | Author | Publisher | Affected organizations | Affected people | Document scope | Cause area | Notes
Hiring engineers and researchers to help align GPT-3 | 2020-10-01 | Paul Christiano | LessWrong | OpenAI | | Hiring-related notice | AI safety | Paul Christiano posts on LessWrong a hiring note asking for engineers and researchers to work on GPT-3 alignment problems, as the language model is already being deployed in the OpenAI API.
What I’ll be doing at MIRI | 2019-11-12 | Evan Hubinger | LessWrong | Machine Intelligence Research Institute, OpenAI | Evan Hubinger, Paul Christiano, Nate Soares | Successful hire | AI safety | Evan Hubinger, who has just finished an internship at OpenAI with Paul Christiano and others, is going to start work at MIRI. His research will focus on solving inner alignment for amplification. Although MIRI's research policy is one of nondisclosure-by-default [26], Hubinger expects that his own research will be published openly and that he will continue collaborating with researchers at institutions like OpenAI, Ought, CHAI, DeepMind, FHI, etc. In a comment, MIRI Executive Director Nate Soares clarifies that "my view of MIRI's nondisclosed-by-default policy is that if all researchers involved with a research program think it should obviously be public then it should obviously be public, and that doesn't require a bunch of bureaucracy. [...] the policy is there to enable researchers, not to annoy them and make them jump through hoops." Cross-posted from the AI Alignment Forum; the original is at [27].

Documents (1 document)

Title | Publication date | Author | Publisher | Affected organizations | Affected people | Affected agendas | Notes
Challenges to Christiano’s capability amplification proposal | 2018-05-19 | Eliezer Yudkowsky | Machine Intelligence Research Institute | | Paul Christiano | Iterated amplification | This post was summarized in Alignment Newsletter #7 [28].

Similar people

Showing at most 20 people who are most similar in terms of which organizations they have worked at.

Person | Number of organizations in common | List of organizations in common
Ryan Carey | 4 | Future of Humanity Institute, Machine Intelligence Research Institute, OpenAI, Ought
Ben Weinstein-Raun | 3 | Machine Intelligence Research Institute, Ought, Redwood Research
Katja Grace | 3 | Future of Humanity Institute, AI Impacts, Machine Intelligence Research Institute
Claire Zabel | 3 | 80,000 Hours, Open Philanthropy, Redwood Research
Holden Karnofsky | 3 | Open Philanthropy, OpenAI, Redwood Research
Carl Shulman | 3 | Future of Humanity Institute, 80,000 Hours, Machine Intelligence Research Institute
Girish Sastry | 3 | Future of Humanity Institute, OpenAI, Ought
Daniel Dewey | 3 | Future of Humanity Institute, Machine Intelligence Research Institute, Open Philanthropy
Howie Lempel | 2 | 80,000 Hours, Open Philanthropy
Ajeya Cotra | 2 | Open Philanthropy, Redwood Research
Jacob Trefethen | 2 | 80,000 Hours, Open Philanthropy
Owen Cotton-Barratt | 2 | 80,000 Hours, Redwood Research
Andrew Critch | 2 | University of California, Berkeley; Machine Intelligence Research Institute
Pieter Abbeel | 2 | University of California, Berkeley; OpenAI
Smitha Milli | 2 | University of California, Berkeley; OpenAI
Qiaochu Yuan | 2 | University of California, Berkeley; Machine Intelligence Research Institute
Stuart Russell | 2 | University of California, Berkeley; Machine Intelligence Research Institute
Nick Beckstead | 2 | Future of Humanity Institute, Open Philanthropy
Helen Toner | 2 | Open Philanthropy, OpenAI
Christopher Olah | 2 | Open Philanthropy, OpenAI