Information for Paul Christiano

Basic information

Item | Value
Country | United States
GitHub username | paulfchristiano
LessWrong username | paulfchristiano
Intelligent Agent Foundations Forum username | Paul_Christiano
Website | https://paulfchristiano.com
Source | [1]
Donations List Website (data still preliminary) | donor
Agendas | Iterated amplification, Debate

List of positions (13 positions)

Organization | Title | Start date | End date | AI safety relation | Subject | Employment type | Source | Notes
University of California, Berkeley | | | | | | | [2], [3], [4], [5] | One of the 37 AGI safety researchers of 2015 funded by donations from Elon Musk and the Open Philanthropy Project
AI Impacts | Contributor | 2017-10-26 | 2017-10-26 | position | background | | [6], [7] |
Alignment Research Center | Researcher | 2021-04-26 | | | | | [8], [9], [10] |
Future of Humanity Institute | Research Associate | 2017-11-24 | | | | | [11], [12] |
Machine Intelligence Research Institute | Research Associate | 2013-05-01 | 2015-03-01 | position | | | [13], [14] |
Open Philanthropy | Technical advisor | | | | | | [15] |
OpenAI | Intern | 2016-05-25 | 2017-01-01 | AGI organization | | | [16], [17] |
OpenAI | | 2017-01-01 | 2021-01-01 | position | technical research | full-time | [1], [18], [19], [20] | The description given is "working on alignment"
Ought | Board member & collaborator | 2018-10-17 | 2019-02-02 | position | | board member | [21], [22] |
Ought | Advisor | 2021-05-14 | 2023-09-01 | position | | advisor | [23]
Redwood Research | Board Member | 2021-01-01 | 2023-01-22 | | | board member | [24], [25]
Redwood Research | Director | 2023-03-31 | 2023-08-30 | | | board member | [26], [27]
Theiss Research | Contractor | | | position | | contractor | [2]

Products (3 products)

Name | Creation date | Description
Ordinary Ideas | 2011-12-21 | Paul Christiano’s blog about “weird AI stuff” [28].
AI Alignment | 2016-05-28 | Paul Christiano’s blog about AI alignment.
AI Alignment Prize | 2017-11-03 | With Zvi Mowshowitz and Vladimir Slepnev. A prize for work that advances understanding of the alignment of smarter-than-human artificial intelligence. Winners of the first round, along with the announcement of the second round, can be found at [29]. Winners of the second round, along with the announcement of the third round, can be found at [30].

Organization documents (2 documents)

Title | Publication date | Author | Publisher | Affected organizations | Affected people | Document scope | Cause area | Notes
Hiring engineers and researchers to help align GPT-3 | 2020-10-01 | Paul Christiano | LessWrong | OpenAI | | Hiring-related notice | AI safety | Paul Christiano posts a hiring note on LessWrong asking for engineers and researchers to work on GPT-3 alignment problems, as the language model is already being deployed in the OpenAI API.
What I’ll be doing at MIRI | 2019-11-12 | Evan Hubinger | LessWrong | Machine Intelligence Research Institute, OpenAI | Evan Hubinger, Paul Christiano, Nate Soares | Successful hire | AI safety | Evan Hubinger, who has just finished an internship at OpenAI with Paul Christiano and others, is going to start work at MIRI. His research will focus on solving inner alignment for amplification. Although MIRI's research policy is one of nondisclosure-by-default [31], Hubinger expects that his own research will be published openly and that he will continue collaborating with researchers at institutions such as OpenAI, Ought, CHAI, DeepMind, and FHI. In a comment, MIRI Executive Director Nate Soares clarifies that "my view of MIRI's nondisclosed-by-default policy is that if all researchers involved with a research program think it should obviously be public then it should obviously be public, and that doesn't require a bunch of bureaucracy. [...] the policy is there to enable researchers, not to annoy them and make them jump through hoops." Cross-posted from the AI Alignment Forum; the original is at [32].

Documents (1 document)

Title | Publication date | Author | Publisher | Affected organizations | Affected people | Affected agendas | Notes
Challenges to Christiano’s capability amplification proposal | 2018-05-19 | Eliezer Yudkowsky | Machine Intelligence Research Institute | | Paul Christiano | Iterated amplification | This post was summarized in Alignment Newsletter #7 [33].

Similar people

Showing at most 20 people who are most similar in terms of which organizations they have worked at.

Person | Number of organizations in common | List of organizations in common
Ryan Carey | 4 | Future of Humanity Institute, Machine Intelligence Research Institute, OpenAI, Ought
Katja Grace | 3 | AI Impacts, Future of Humanity Institute, Machine Intelligence Research Institute
Girish Sastry | 3 | Future of Humanity Institute, OpenAI, Ought
Ben Weinstein-Raun | 3 | Machine Intelligence Research Institute, Ought, Redwood Research
Stuart Russell | 2 | University of California, Berkeley; Machine Intelligence Research Institute
Pieter Abbeel | 2 | University of California, Berkeley; OpenAI
Andrew Critch | 2 | University of California, Berkeley; Machine Intelligence Research Institute
Connor Flexman | 2 | AI Impacts, Machine Intelligence Research Institute
Jimmy Rintjema | 2 | AI Impacts, Machine Intelligence Research Institute
John Salvatier | 2 | AI Impacts, Future of Humanity Institute
Jacob Hilton | 2 | Alignment Research Center, OpenAI
Kyle Scott | 2 | Alignment Research Center, Future of Humanity Institute
Tao Lin | 2 | Alignment Research Center, Redwood Research
Nick Bostrom | 2 | Future of Humanity Institute, Machine Intelligence Research Institute
Jan Leike | 2 | Future of Humanity Institute, Machine Intelligence Research Institute
Robin Hanson | 2 | Future of Humanity Institute, Machine Intelligence Research Institute
Owain Evans | 2 | Future of Humanity Institute, Ought
Miles Brundage | 2 | Future of Humanity Institute, OpenAI
Tom McGrath | 2 | Future of Humanity Institute, Ought
Helen Toner | 2 | Future of Humanity Institute, OpenAI