AI progress is driven by at least three things: compute, algorithms, and data (including simulations).
Forecasting algorithmic progress is a key unsolved challenge in AI forecasting. In this thread, we invite discussion of this problem.
We don’t aim to fully solve this (hard) problem. Rather, we hope to build up an arsenal of useful mental tools for dealing with it in practice. You can compare this to the difference between proving formal theorems about Bayesian probability theory and actually doing forecasting!
If you’re unsure where to start, we encourage things like the following:
- Examples of how you have thought about algorithmic progress questions in the past, and how well that turned out
- Applications of the background material to concrete forecasts (e.g. “I used this model for this question”; see the toy sketch after this list)
- Stating your confusions about this problem to see if others can help resolve them
- Pointers to relevant blog posts/other literature
- …and more!
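
As a toy illustration of the second bullet, here is a minimal sketch of what “applying a model to a concrete question” might look like: fitting a log-linear trend to the number of frames needed for a fixed level of Atari performance and extrapolating it forward. The data points, dates, and the choice of an exponential trend are placeholder assumptions for illustration, not real measurements or a recommended method.

```python
# Toy trend-extrapolation sketch: assume the training frames needed to reach
# a fixed Atari benchmark fall exponentially over time, fit that trend to a
# few (hypothetical) historical points, and extrapolate to a target year.
import numpy as np

# Hypothetical (year, frames-to-benchmark) observations -- placeholders only.
years = np.array([2015.0, 2016.5, 2018.0])
frames = np.array([2.0e8, 8.0e7, 3.0e7])

# Fit log(frames) = a + b * year by least squares.
b, a = np.polyfit(years, np.log(frames), deg=1)

# Extrapolate the fitted trend to 2020 for a point estimate.
forecast_2020 = np.exp(a + b * 2020.0)
print(f"Extrapolated frames needed in 2020: {forecast_2020:.3g}")
```

A real forecast would of course need genuine data, an argument for why the trend should continue, and uncertainty around the point estimate, but this is the rough shape such a contribution could take.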
We have also launched a new question batch focused on this:
- Will the growth rate of conceptual AI insights remain linear in 2019?
- By mid-2020, what will be the smallest number of years of gameplay required for OpenAI Five-level Dota performance?
- By 2020, what will be the smallest number of frames required for the Atari performance of a basic DQN?