What I’ll be doing at MIRI |
2019-11-12 |
Evan Hubinger |
LessWrong |
Machine Intelligence Research Institute, OpenAI |
Evan Hubinger, Paul Christiano, Nate Soares |
Successful hire |
AI safety |
Evan Hubinger, who has just finished an internship at OpenAI with Paul Christiano and others, is going to start work at MIRI. His research will focus on solving inner alignment for amplification. Although MIRI's research policy is one of nondisclosure-by-default, [9] Hubinger expects that his own research will be published openly, and that he will continue collaborating with researchers at institutions like OpenAI, Ought, CHAI, DeepMind, and FHI. In a comment, MIRI Executive Director Nate Soares clarifies that "my view of MIRI's nondisclosed-by-default policy is that if all researchers involved with a research program think it should obviously be public then it should obviously be public, and that doesn't require a bunch of bureaucracy. [...] the policy is there to enable researchers, not to annoy them and make them jump through hoops." Cross-posted from the AI Alignment Forum; original is at [10] |
Is it harder to become a MIRI mathematician in 2019 compared to in 2013? |
2019-10-29 |
Issa Rice |
LessWrong |
Machine Intelligence Research Institute |
Nate Soares |
Third-party commentary on organization |
AI safety |
Issa Rice divides MIRI research employees into "mathematicians" and "engineers" and notes that recently, MIRI has hired mostly engineers rather than mathematicians. He also considers the example of Nate Soares, whose background prior to joining MIRI matched the engineer profile, but who nonetheless joined and did his initial work as a mathematician. The post asks whether this suggests it is harder to become a MIRI mathematician in 2019 (the time of writing) compared to 2013, and includes a list of potential differences between the two time periods. |
MIRI’s newest recruit: Edward Kmett! |
2018-12-01 |
yiavin |
Reddit |
Machine Intelligence Research Institute, Ought |
Edward Kmett, Nate Soares |
|
AI safety |
A post on r/haskell about Edward Kmett joining MIRI, with comments from Kmett. In comments, Kmett says that “Nate Soares came out to Boston personally, and made a very compelling argument for me going off and doing the work I’d been trying to complete solely in my evening hours full time”. Kmett also says he has “helped ought.org find at least one developer”. |
Comment on Ask MIRI Anything (AMA) |
2016-10-12 |
Nate Soares |
Effective Altruism Forum |
Machine Intelligence Research Institute |
|
|
AI safety |
In response to a question, Soares writes that MIRI has decided against hiring senior math people in a supervisory role, and also writes that MIRI is bottlenecked on technical writing ability. |
Comment on Let’s conduct a survey on the quality of MIRI’s implementation |
2016-02-19 |
Nate Soares |
Effective Altruism Forum |
Machine Intelligence Research Institute, Open Philanthropy |
Daniel Dewey |
|
AI safety |
Soares responds to a blog post calling for an evaluation of MIRI's research output and strategy. He mentions an ongoing investigation by the Open Philanthropy Project, as well as plans for "an independent evaluation of our organizational efficacy". |
Comment on I am Nate Soares, AMA! |
2015-06-11 |
Nate Soares |
Effective Altruism Forum |
Machine Intelligence Research Institute |
|
|
AI safety |
Soares gives a list of metrics that MIRI uses internally to measure its own success. |
Comment on I am Nate Soares, AMA! |
2015-06-11 |
Nate Soares |
Effective Altruism Forum |
Machine Intelligence Research Institute |
|
|
AI safety |
In response to a question, Soares writes that at the moment MIRI is talent-constrained, while noting that MIRI is taking steps to hire more researchers. |
Comment on I am Nate Soares, AMA! |
2015-06-11 |
Nate Soares |
Effective Altruism Forum |
Machine Intelligence Research Institute |
|
|
AI safety |
Soares notes that MIRI is going to hire a full-time office manager soon. He also writes that MIRI is looking for researchers who can write fast and well, and will look for “a person who can stay up to speed on the technical research but spend most of their time doing outreach and stewarding other researchers who are interested in doing AI alignment research”. |
Comment on I am Nate Soares, AMA! |
2015-06-11 |
Nate Soares |
Effective Altruism Forum |
Machine Intelligence Research Institute |
|
|
AI safety |
Soares responds to a question from a software engineer about how to get involved. |
MIRI Research Guide |
2014-11-07 |
Nate Soares |
LessWrong |
Machine Intelligence Research Institute |
|
|
AI safety |
Blog post announcing the publication of a new research guide to help people get involved in MIRI's AI safety research. |