About

AI Watch is a website to track people, organizations, products (tools, websites, etc.), and (in the future) other aspects of AI safety.

Inclusion criteria

I have not yet put a lot of deep thought into the site, so at the moment there are no firm inclusion criteria. With that said, below I describe my best guesses for what should be included and my current decision procedure.

People

Generally I look for public output that is related to AI safety/risk/alignment and that is novel or an original summary-type work (i.e. not just a rehash of standard arguments, and not just journalistic reporting on a paper). This could be academic papers, books, blog posts, flowcharts, Facebook posts, web pages, wiki pages, substantive comments on existing blog posts, and so on. I also pay attention to how often names appear in discussions (e.g. on Facebook or LessWrong), as well as inclusion in lists of people relevant to AI safety, such as lists of participants in relevant workshops.

So far I have been adding people who perform general work like office management, even though if they did the exact same work at an organization that didn't work on AI safety, they wouldn't have been added. I am not sure this is ideal and might change it at some point.

I have been excluding funders because I hope they will be covered in Vipul Naik's Donations List Website (that site already tracks some donations and other funding, but the data is preliminary) and eventually integrated with AI Watch somehow.

I am not sure which inclusion criteria would make this website most useful to its readers. Writing a couple of blog posts seems like a fairly low bar, and might turn out to be so low that I can't keep up with adding everyone, or that the list feels diluted. So far this doesn't seem to be the case, though, and with proper tagging the dilution issue can be mitigated.

I guess I am also worried about lost purposes (i.e. the site coming to track something other than what it should be tracking, or AI safety as a field drifting in that way so that the site degrades along with the whole field). I haven't given this much thought yet, so for now I am pretty mindlessly just adding things to the site (modulo the above).

Organizations

I look for an explicit statement of interest in AI safety/risk/alignment (hopefully with some actual output), or a claim of relevance to safety work made by someone with such an explicit interest.

If an organization works on global catastrophic risks generally, I try to find the specific people within it who work on AI safety. If that is not possible, I add everyone, for the sake of inclusiveness.

Similarly, if an organization aims to build AGI and I can identify the specific people within it who work on safety, I try to add only those people. Otherwise I just add everyone.

Products

So far I have been adding products that I have seen in the past and can remember. As long as a product is somewhat usable, useful, and polished, I add it.

How the site was built

The site is built using PHP for the interface and logic and MySQL for the database. The site design is influenced by Vipul Naik's websites (e.g. his Donations List Website). The Git repository on GitHub contains all of the code and data.
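
For a rough idea of the setup, here is the kind of query the PHP layer runs against the database (the column names here are illustrative guesses, not the exact schema):

    -- Hypothetical query of the kind the PHP layer might run;
    -- the column names are illustrative, not the actual schema.
    SELECT person, organization, title, start_date, end_date
    FROM positions
    ORDER BY start_date DESC;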

So far, all of the website code has been written by Issa Rice. Sebastian Sanchez, Amana Rice, and Vipul Naik have contributed to the data.

Funding is provided by Vipul Naik (see here for the Org Watch funding). That page covers only task payments. The site is also developed during my work time for Vipul, which is funded by a stipend; the stipend payment attributable to AI Watch can be calculated as a percentage of the total stipend, depending on how much of that time I have spent working on this site. Development on Vipul's time is limited to one day per week (for several more weeks). I will add a calculation here at some point.
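
Roughly, that calculation would look like:

    AI Watch stipend share ≈ total stipend × (time spent on AI Watch / total time worked under the stipend)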

Finding people/positions to add to the site has been an informal process.

What the site is still missing

The main AI safety organizations are covered, so the missing people are mostly individual researchers, people who used to work at these organizations, and interns (whom most organizations don't list on their team pages). I also haven't added many of the start/end dates for positions, since these are difficult to find.

I have a private list of people I want to add, which I will get to soon. (I won't bother making it public for now, since I will get through it soon enough.)

Eventually I would like to add more features to the site, such as graphs, but for now I am mostly just adding data (which will make the graphs more interesting).

I am also considering adding more tables to the database. Currently there are tables for people, organizations, and positions, but I could also track documents, tools (e.g. websites, software), and so on.
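
For example, a documents table might look something like this (a hypothetical sketch, not a committed design):

    -- Hypothetical sketch of a possible additional table;
    -- the names and columns are illustrative only.
    CREATE TABLE documents (
        id INT AUTO_INCREMENT PRIMARY KEY,
        title VARCHAR(255) NOT NULL,
        url VARCHAR(255),
        author VARCHAR(255),   -- could instead reference the people table
        publication_date DATE
    );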

Update strategy

I don't think there is any single place to look to find additions to this site (which is part of why I wanted to make this site in the first place). Some places to look are the sources described under the inclusion criteria above, such as discussions on Facebook and LessWrong, organizations' team pages, and lists of participants in relevant workshops.

History

Development for AI Watch began on October 23, 2017.

Expansion to positions outside of AI safety (Org Watch) began on June 17, 2018. The Org Watch subdomain became active on June 21, 2018.

Feedback

If you have feedback for the site, email Issa at riceissa@gmail.com. You can also open an issue on the GitHub repository. All feedback, including praise, criticism, concerns, thoughts on the usability of the site, feature requests, and suggestions of people or positions to add, is appreciated.