This question is related to the Atari question and the Atari, Go, Starcraft and poker question.
When will there be a single AI architecture that can be trained, using only self-play, to play either Go or each of the seven Atari 2600 games used in DeepMind's Playing Atari with Deep Reinforcement Learning, at a superhuman level?
The architecture must be able to learn each game. That is, the criteria of this question are met if one copy of the system is trained on Go and another copy is trained on Atari, even if no single trained system can play every game. However, the system may not be tuned or modified by a human for the differing tasks. "Superhuman", here, means performance superior to that of the best human experts in each domain.
The resolution date will be set to the earliest of the following dates:
- The publication date of a credible paper, blog post, video, or similar demonstrating an AI achieving the feat in question.
- A date earlier than the publication date, but referenced in a credible paper, blog post, video, or similar, by which the feat was achieved (similar to how DeepMind kept AlphaGo's victory over European champion Fan Hui secret from October 2015 to January 2016, to coincide with the publication of the corresponding Nature paper).
- A date such that an expert council of several senior AI researchers agrees (by majority vote) that it is >=95% likely that the feat could have been carried out by that date. This means not just the date when the computational resources and algorithmic insights were available, but the date when they could have been fully deployed to solve the problem. For example, think of the end, not the beginning, of the AlphaGo project. [1] [2]
[1] The point of this counterfactual resolution condition is as follows. Not all trajectories to advanced AI pass through a Go+Atari system. There might be a point at which it is clear that the feat in the question is achievable, even though no one has actually bothered to run the experiment.
This is similar to how the release of the AlphaZero agent (playing chess, Go and shogi) should give us >=95% confidence that it is possible to play Othello at superhuman level without domain-specific knowledge, even though no one (to the author's knowledge) ran that experiment.
[2] More details regarding the council composition will be announced on Metaculus in the coming months.