Information for Rohin Shah

Basic information

Item | Value

List of positions (1 position)

Organization | Title | Start date | End date | AI safety relation | Subject | Employment type | Source | Notes
Center for Human-Compatible AI | | | | | | | [1], [2] |

Products (0 products)

Name | Creation date | Description

Organization documents (0 documents)

Title | Publication date | Author | Publisher | Affected organizations | Affected people | Document scope | Cause area | Notes

Documents (2 documents)

Title | Publication date | Author | Publisher | Affected organizations | Affected people | Affected agendas | Notes
AI Alignment Podcast: An Overview of Technical AI Alignment with Rohin Shah (Part 2) | 2019-04-25 | Lucas Perry | Future of Life Institute | | Rohin Shah, Dylan Hadfield-Menell, Gillian Hadfield | Embedded agency, Cooperative inverse reinforcement learning, inverse reinforcement learning, deep reinforcement learning from human preferences, recursive reward modeling, iterated amplification | Part two of a podcast episode that goes into detail about some technical approaches to AI alignment.
AI Alignment Podcast: An Overview of Technical AI Alignment with Rohin Shah (Part 1) | 2019-04-11 | Lucas Perry | Future of Life Institute | | Rohin Shah | iterated amplification | Part one of an interview with Rohin Shah that covers some technical agendas for AI alignment.