There are also versions of this question that use a >=2x (doubling) and a >=8x threshold rather than >=32x. It is also related to the numerical questions about the size of the largest compute experiment by mid-2019 and by mid-2020.
In May 2018, OpenAI reported that since 2012 the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5-month doubling time, or roughly 10x per year.
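As a quick sanity check on that figure, the doubling time converts to an annual growth factor as follows (plain arithmetic in Python; the 3.5-month doubling time from the report is the only input):

```python
# Convert the reported doubling time into an annual growth factor.
doubling_time_months = 3.5
doublings_per_year = 12 / doubling_time_months   # ~3.43 doublings per year
annual_growth = 2 ** doublings_per_year          # ~10.8x per year, i.e. "roughly 10x"
print(f"{annual_growth:.1f}x per year")
```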
In July 2018, Ryan Carey of the Future of Humanity Institute (FHI) analysed how long such a fast-growing trend could be sustained. He wrote:
> So let’s suppose that [the richest actor, the US government] could spend at most 1% of GDP, or $200B, on one AI experiment. [This is similar to what was spent on the Manhattan Project and the Apollo program.] Given the growth of one order of magnitude per 1.1-1.4 years, and the initial experiment size of $10M [for AlphaGo Zero], the AI-Compute trend predicts that we would see a $200B experiment in 5-6 years. So given a broadly similar economic situation to the present one, that would have to mark an end to the AI-Compute trend.
>
> We can also consider how long the trend can last if government is not involved. Due to their smaller size, economic barriers hit a little sooner for private actors. The largest among these are tech companies: Amazon and Google have current research and development budgets of about $20B/yr each, so we can suppose that the largest individual experiment outside of government is $20B. Then the private sector can keep pace with the AI-Compute trend for around ¾ as long as government, or ~3.5-4.5 years.
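Carey's back-of-the-envelope numbers can be reproduced in a few lines. The sketch below simply takes the quoted assumptions at face value: a ~$10M starting experiment (AlphaGo Zero), cost growing one order of magnitude every 1.1-1.4 years, and spending ceilings of $200B (government) and $20B (private sector); the function name is ours.

```python
import math

def years_until_ceiling(start_cost, ceiling, years_per_order_of_magnitude):
    """Years until an exponentially growing experiment cost reaches a spending ceiling."""
    orders_of_magnitude = math.log10(ceiling / start_cost)
    return orders_of_magnitude * years_per_order_of_magnitude

start = 10e6  # ~$10M: AlphaGo Zero-scale experiment, per the quoted analysis
for label, ceiling in [("government ($200B ceiling)", 200e9),
                       ("private sector ($20B ceiling)", 20e9)]:
    low, high = (years_until_ceiling(start, ceiling, y) for y in (1.1, 1.4))
    print(f"{label}: {low:.1f}-{high:.1f} years")
# government ($200B ceiling): 4.7-6.0 years    -> the quoted "5-6 years"
# private sector ($20B ceiling): 3.6-4.6 years -> the quoted "~3.5-4.5 years"
```

The private-sector ceiling is one order of magnitude lower, so it is reached after log10(2,000)/log10(20,000) ≈ 0.77 of the time, which is where the "around ¾ as long" figure comes from.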
In line with the Metaculus AI mission of “tracking and improving the state-of-the-art in AI forecasting”, this question attempts to be a real-time tracker of how well Carey’s sustainability estimate holds up. We ask:
Before Jan 1st 2024 (5.5 years after Carey’s initial post), but after Jan 1st 2019, will there have been a 2-year interval in which the maximum compute used in a published AI experiment did not grow >=32x?
The 2-year interval was chosen to avoid the question resolving positively merely because of a time-lag between major published experiments, rather than a genuine slowdown in the trend.
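To make the criterion concrete, here is a rough sketch (not the official resolution procedure, and the compute figures are entirely made up for illustration) of how one could scan for a 2-year window in which the running maximum of published training compute failed to grow >=32x:

```python
from datetime import date, timedelta

def running_max(records, when):
    """Largest compute published on or before `when`; records is a date-sorted list of (date, compute)."""
    best = None
    for day, compute in records:
        if day > when:
            break
        best = compute if best is None else max(best, compute)
    return best

def has_slow_window(records, start=date(2019, 1, 1), end=date(2024, 1, 1),
                    window=timedelta(days=730), factor=32, step=timedelta(days=7)):
    """True if some 2-year window inside [start, end) saw the running max grow by less than `factor`."""
    t = start
    while t + window < end:
        before, after = running_max(records, t), running_max(records, t + window)
        if before is not None and after < factor * before:
            return True
        t += step
    return False

# Entirely illustrative numbers (arbitrary compute units), NOT actual measurements.
records = [
    (date(2019, 3, 1), 1.0),
    (date(2020, 6, 1), 8.0),
    (date(2022, 9, 1), 50.0),
]
print(has_slow_window(records))  # True: windows starting around March 2019 see only ~8x growth
```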
Data
Here’s a spreadsheet extrapolating the current AI compute trend along the lines suggested by Ryan Carey.
Here are Carey’s calculations as a Guesstimate model. I especially recommend checking out the calculators (by pressing the little calculator symbol in the menu bar), which allow you to enter a target compute/cost and find out how long until it's reached.