This question is related to others on algorithmic progress, including:
- Will the growth rate of conceptual AI insights remain linear in 2019?
- By 2020, what will be the smallest number of frames required for the Atari performance of a basic DQN?
- How can we forecast algorithmic progress?
Following OpenAI Five’s success against OG, OpenAI revealed that the agent was trained on the equivalent of 45,000 years of gameplay. Greg Brockman commented:
We agree the massive training data requirement is a real limitation — fixing that issue is a next focus.
As did the official blog post:
[...] we think decreasing the amount of experience is a next challenge for RL.
How successful will these attempts be? We now ask:
By July 1st 2020, what will be the smallest number of years of gameplay used in training by an agent at least as good as OpenAI Five (Finals 2019 version)?
Resolution
The number of years of gameplay should be calculated by a method as close as possible to the one that generated the 45,000 years figure.
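For reference, the 45,000-year figure counts simulated in-game time aggregated across all parallel training environments, not wall-clock time. The sketch below shows one plausible way such a figure could be computed; the step count, the frames-per-action ratio, and the helper function are illustrative assumptions, not OpenAI's published bookkeeping.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600


def gameplay_years(total_env_steps: float, game_seconds_per_step: float) -> float:
    """Convert total environment steps (summed over all parallel rollout
    workers) into equivalent years of in-game time.

    Both arguments are assumptions for illustration: OpenAI has not
    published the exact bookkeeping behind the 45,000-year figure.
    """
    return total_env_steps * game_seconds_per_step / SECONDS_PER_YEAR


# Illustrative only: an agent acting every 4th frame of 30 fps Dota 2 sees
# ~0.133 game-seconds per step; ~1.07e13 such steps works out to ~45,000 years.
print(gameplay_years(total_env_steps=1.07e13, game_seconds_per_step=4 / 30))
```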
An agent will be taken to be “at least as good as OpenAI Five” if it either...
- ...defeats OpenAI Five in >=40% of games, or...
- ...has a TrueSkill rating >= 64, or a rating on some other commonly used Dota 2 scale (e.g. MMR) >= OpenAI Five's, or…
- ...in some other way is revealed to be at this level of performance (such as by credible statements by the OpenAI team or by Dota 2 pros).
The third criterion is included because we don't want the question to end up being decided by which graph happens to be reported in a paper, etc.; nonetheless, resolving via it will require a high level of credibility.
If the agent ends up exceeding the performance of OpenAI Five after a total of X years of experience, but there is enough data to determine the number of years Y at which the two agents were evenly matched, we will use Y for resolution.
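For example, if a paper reports a training curve of skill rating against cumulative experience, Y could be read off by interpolating where the curve first crosses OpenAI Five's level. A minimal sketch, assuming TrueSkill is the reported metric and using purely hypothetical data points:

```python
import numpy as np


def years_to_match(curve_years, curve_skill, target_skill):
    """Given a training curve of (cumulative gameplay-years, skill rating)
    points, linearly interpolate the number of years at which the new agent
    first reaches `target_skill` (e.g. OpenAI Five's TrueSkill).

    Returns None if the curve never reaches the target.
    """
    curve_years = np.asarray(curve_years, dtype=float)
    curve_skill = np.asarray(curve_skill, dtype=float)
    above = np.nonzero(curve_skill >= target_skill)[0]
    if len(above) == 0:
        return None
    i = above[0]
    if i == 0:
        return float(curve_years[0])
    # Interpolate between the last point below and the first point at/above target.
    frac = (target_skill - curve_skill[i - 1]) / (curve_skill[i] - curve_skill[i - 1])
    return float(curve_years[i - 1] + frac * (curve_years[i] - curve_years[i - 1]))


# Hypothetical curve: the agent reaches OpenAI Five's level (TrueSkill ~64)
# partway through training, so Y < total training experience X.
print(years_to_match([0, 5_000, 10_000, 20_000], [20, 45, 60, 70], 64))  # -> 14000.0
```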