Round 4 of the AI Fermi Prize closed March 23 at 23:59 GMT. One prize has been awarded to jotto, for substantially contributing to our mission of tracking and improving the state-of-the-art of forecasts on AI progress.
There were far fewer comments in this round of the prize, and as a result we are awarding only a second prize of $200. It goes to jotto, for his Monte Carlo model of the rate at which DeepMind publishes papers.
Jotto updated a prediction on March 20:
I thought I'd try a spreadsheet with the dates of DeepMind's publications to get a base rate. I ended up adding only 200 entries. This question didn't seem as important, so if we ever need the rest in the future, I could fill it in then. I'm pretty sure there are ways to grab this in an automated fashion, but I don't know how, so I just copy-pasted the dates and titles manually from DeepMind's publications pages.
In the second sheet (Since18Rate), I have included a table showing how many publications appeared in each month. For instance, December 2018 had 21 publications. EDIT: I only included months up to January 2019, out of convenience. DeepMind has presumably put out something since then, but it is not yet listed on their publications page, so it isn't factored into my base rate. The rate would be slightly different after including more months, such as February.
In the third sheet (Simulation), I include the results of a simulation I did: I generated monthly publication counts, normally distributed with a mean of 14.53846154 and a standard deviation of 7.816812912. Each simulated month was repeated 10,000 times. Then I tried adding progressively more simulated months to the current total of 390 publications, to see how the percentage of outcomes reaching 450 or more changes. The column titled 3.333 includes a slight addition to account for the remainder of March, as of the 20th (plus April, May, and June before the resolution date). In my simulation, 19.36% of outcomes reached 450, which is my current base rate.
For the Excel file with the actual simulation, try here.
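The kind of simulation described above can be sketched roughly as follows. The normal monthly distribution, its mean and standard deviation, the 390 starting total, the 450 target, and the 3.333 remaining months are all from the comment; how the fractional month is handled (scaling the mean by the fraction and the standard deviation by its square root) is an assumption about the spreadsheet's approach, not a detail the comment confirms:

```python
import random

def prob_reach_target(current=390, target=450, months=3.333,
                      mu=14.53846154, sigma=7.816812912,
                      trials=10_000, seed=0):
    """Estimate the probability that the publication total reaches
    `target` by resolution, given normally distributed monthly counts."""
    rng = random.Random(seed)
    full_months = int(months)       # whole months remaining
    frac = months - full_months     # fractional month remaining
    hits = 0
    for _ in range(trials):
        total = current
        for _ in range(full_months):
            total += rng.gauss(mu, sigma)
        if frac > 0:
            # Assumed handling of the partial month: scale the mean by
            # the fraction and the standard deviation by its square root.
            total += rng.gauss(mu * frac, sigma * frac ** 0.5)
        if total >= target:
            hits += 1
    return hits / trials

print(prob_reach_target())
```

With these parameters the estimate lands in the vicinity of the 19.36% figure quoted in the comment, though the exact value depends on the random seed and on how the partial month is modeled.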
But I am not that familiar with DeepMind's workings, and my analysis is fairly crude. I don't think other people here have done an analysis like mine, but they may know something I don't. So, to hedge against model limitations, and because other people might be factoring in knowledge of DeepMind's current activities and schedule, I converted my 19.36% rate into odds format and averaged it with the community prediction, also in odds format. Placing my prediction moved the community prediction significantly, and I ended up updating a bit lower than the actual average of the two.
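The odds-averaging step can be sketched as follows. The 0.1936 model rate is from the comment; the 0.35 community probability in the example is a hypothetical placeholder, since the comment does not state the community's value:

```python
def to_odds(p):
    """Convert a probability to odds, e.g. 0.25 -> 1/3."""
    return p / (1 - p)

def to_prob(odds):
    """Convert odds back to a probability, e.g. 1/3 -> 0.25."""
    return odds / (1 + odds)

def blend(p_model, p_community):
    """Average two probabilities in odds space, then convert back."""
    avg_odds = (to_odds(p_model) + to_odds(p_community)) / 2
    return to_prob(avg_odds)

# 0.35 is a hypothetical community prediction, for illustration only.
print(round(blend(0.1936, 0.35), 4))
```

Averaging in odds space rather than probability space gives less weight to values near 0 or 1, which is a common choice when pooling forecasts.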
One thing I'm a little worried about is the trend, which seems to be upward and is not accounted for in my base rate, except to the extent that I only included recent months (since the start of 2018). I just haven't thought in any detail about how to add it in. So I am glad that I added a bit of extra probability, though the amount is arbitrary. I feel a little better about it knowing that there is probably some degree of annual cyclicality here, with more publications in the second half of the year (I'm guessing conferences are a factor there, but I haven't actually researched that yet).
Following this fourth round, the prize will be taking a hiatus. We received far fewer comments than hoped in the last round, and so will be thinking through the causes and whether to run future prize rounds.