About

AI Watch is a website to track people, organizations, products (tools, websites, etc.), and (in the future) other aspects of AI safety.

Inclusion criteria

The site is new and is still under active development, so at the moment there are no firm criteria for inclusion on this site. With that said, below I describe my best guesses for what should be included and my current decision procedure.

People

Generally I look for public output that is related to AI safety/risk/alignment and that is novel or an original summary-type work (i.e. not just a rehash of standard arguments, and not just journalistic reporting of a paper). This could be academic papers, books, blog posts, flowcharts, Facebook posts, web pages, wiki pages, substantive comments on existing blog posts, and so on. I also pay attention to how often names appear in discussions (e.g. on Facebook or LessWrong), as well as inclusion in lists such as participant lists for relevant workshops or other lists of people relevant to AI safety.

So far I have been adding people who perform general work like office management, even though they would not have been added if they did the exact same work at an organization that didn't work on AI safety. I am not sure this is ideal and might change it at some point.

I have been excluding funders because I hope these will be covered in Vipul Naik's Donations List Website (that site already tracks some donations and other funding, but the data is preliminary).

I am not sure what the most “useful” (in terms of how useful people will find this website) criteria for inclusion are. It seems like writing a couple of blog posts is a fairly low bar, and might turn out to be so low that I can’t keep up with adding everyone, or that the list feels diluted. So far this doesn’t seem to be the case though, and with proper tagging the dilution issue can be mitigated.

Organizations

I look for an explicit statement of interest in AI safety/risk/alignment (hopefully with some actual output), or someone with an explicit interest claiming the organization's work is relevant to safety.

If an organization is about global catastrophic risks, I try to find the specific people from the organization who work on AI safety. If that is not possible I add everyone, for the sake of inclusiveness.

Similarly, if an organization is about building an AGI and I can identify the specific people within the organization who work on safety, I try to add only those people. Otherwise I just add everyone.

Products

So far I have been adding products that I have encountered in the past and can remember. As long as a product is somewhat usable, useful, and polished, I have been adding all the ones I could think of.

How the site was built

The site is built using PHP for the interface and logic and MySQL for the database. The site design is influenced by Vipul Naik’s websites (e.g. his Donations List Website). You can find the Git repository on GitHub, which has all the code and data.

So far, all of the development has been done by Issa Rice.

Funding is provided by Vipul Naik. That page covers only task payment. The site is also developed during my work time for Vipul, which is funded by a stipend as well. The stipend payment attributable to AI Watch can be calculated as a percentage of the total stipend, depending on how much time I have spent working on this site. Development on Vipul's time is limited to one day per week (for several more weeks). I will add a calculation here at some point.

Finding people/positions to add to the site was an informal process.

What the site is still missing

The main AI safety organizations are covered, so the missing people are mostly individual researchers, people who used to work at these organizations, and interns (whom most organizations don't list on their team pages). I also haven't added many of the start/end dates for positions, since these are difficult to find.

I have a private list of people I want to add which I will get to soon. (I won’t bother with making that public for now, as I will get to it soon enough.)

Eventually I would like to add more features to the site, like graphs, but for now I am mostly just adding data (which will make the graphs more interesting).

I am also considering adding more tables to the database. Currently I have people, organizations, and positions. But I could also track documents, tools (e.g. websites, software), etc.
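As a rough illustration of the current three-table layout, here is a minimal sketch in Python using SQLite. Only the ai_safety_relation and subject field names come from this page; every other column name is a hypothetical placeholder, and the actual schema (in PHP/MySQL) lives in the Git repository.

```python
import sqlite3

# In-memory sketch of the three current tables. Only ai_safety_relation
# and subject are field names mentioned on this page; all other column
# names are hypothetical placeholders, not the site's real schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE organizations (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE positions (
    id INTEGER PRIMARY KEY,
    person_id INTEGER REFERENCES people(id),
    organization_id INTEGER REFERENCES organizations(id),
    ai_safety_relation TEXT,  -- relation of the position to safety
    subject TEXT              -- type of work, e.g. technical research
);
""")
print([row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")])
# -> ['organizations', 'people', 'positions']
```

A documents or tools table would follow the same pattern: one row per item, with foreign keys back to people and organizations where relevant.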

Update strategy

I don't think there is any single place to look to find additions to this site (which is part of why I wanted to make this site in the first place). Some places to look are:

History

Development for the site began on October 23, 2017.

Feedback

If you have feedback on the site, email Issa at riceissa@gmail.com. You can also open an issue on the GitHub repository. All feedback, including praise, criticism, concerns, thoughts on the usability of the site, feature requests, and people or positions that should be added, is appreciated.

Positions

The positions table in the AI Watch database has a field describing the relation of the position to safety (ai_safety_relation), and another field describing the type of work (subject). Combining these two fields gives a more intuitive idea of what each position is about, so the combinations that have been written up so far are described below.

Position + Technical research: This is a technical position that is related to AI safety by virtue of the position itself, so this is a technical safety research position. Example: Nate Soares's work at MIRI.

Position + General: This is a general position that we specifically know has to do with safety. An office management or general writing position at an organization that specifically works on AI safety counts. Since this is a general position, if the same sort of work were done at an organization that doesn't do any AI safety work, it wouldn't be considered AI safety work. Example: Aaron Silverbook's work at MIRI.

Position + Policy: This is an AI safety policy position.

AGI organization + Technical research: This is a technical position at an organization that (1) aims to develop artificial general intelligence and (2) has voiced interest in safety concerns. However, since the organization doesn't exclusively focus on AI safety, we don't know whether the position is safety-related or not.

GCR organization + Technical research: This is a technical position at an organization that (1) focuses on existential risks or global catastrophic risks and (2) has given unaligned AI as a potential global catastrophic risk. However, since the organization doesn't exclusively focus on AI safety, we don't know whether the position is safety-related or not.

The remaining combinations (including positions whose ai_safety_relation value is "Unrelated") do not yet have meanings written up.
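The filled-in combinations above can be sketched as a simple lookup, e.g. in Python. This is an illustrative sketch only, not code from the site itself (which is written in PHP); the descriptions are abbreviations of the meanings given above.

```python
# Map (ai_safety_relation, subject) pairs to a short description of the
# position. Only combinations with a written-up meaning are included;
# this is an illustration, not the site's actual code.
MEANINGS = {
    ("position", "technical research"):
        "technical AI safety research position",
    ("position", "general"):
        "general position (e.g. office management) at an AI safety "
        "organization",
    ("position", "policy"):
        "AI safety policy position",
    ("AGI organization", "technical research"):
        "technical position at a safety-interested AGI organization; "
        "may or may not be safety-related",
    ("GCR organization", "technical research"):
        "technical position at a GCR organization that names unaligned "
        "AI as a risk; may or may not be safety-related",
}

def describe(ai_safety_relation, subject):
    """Return the meaning for a combination, or None if not catalogued."""
    return MEANINGS.get((ai_safety_relation, subject))

print(describe("position", "policy"))
# -> AI safety policy position
```

Combinations without an entry (for example, any position with the "Unrelated" relation) simply return None here, mirroring the blank cells in the table.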