This question is related to others tracking aspects of the AI Compute trend, including:
- whether an AI safety experiment will use as much compute as AlphaGo Zero by mid-2020
- whether an experiment involving a government researcher will use as much compute as AlphaStar by mid-2020
- what the maximum compute used in training by a published AI system will be by mid-2020
- when and by how much the trend will slow down
By July 1st 2020, will there be an AI experiment (described in a published paper, pre-print, blog post, or other credible report) that used at least 1800 pfs-days of compute, and whose lead author/principal investigator is a researcher whose primary affiliation is an academic institution?
The compute figure refers to the amount of compute used to train the final system. It should be calculated as similarly as possible to the method used in the "AI and compute" article.
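For concreteness, a minimal sketch of that style of estimate: the "AI and compute" methodology approximates training compute as peak chip throughput times chip count times an assumed utilization times training time, then converts to petaflop/s-days (1 pfs-day = 10^15 FLOP/s sustained for one day). The hardware numbers below are purely illustrative, not drawn from any actual system.

```python
# Sketch of an "AI and compute"-style training compute estimate.
# 1 pfs-day = 1e15 FLOP/s sustained for one day = 8.64e19 FLOP.
PFS_DAY_FLOP = 1e15 * 86_400

def training_compute_pfs_days(peak_flops_per_chip: float,
                              num_chips: int,
                              utilization: float,
                              training_days: float) -> float:
    """Estimate training compute in petaflop/s-days."""
    total_flop = (peak_flops_per_chip * num_chips
                  * utilization * training_days * 86_400)
    return total_flop / PFS_DAY_FLOP

# Hypothetical run: 1,000 chips at 100 TFLOP/s peak each,
# 33% utilization, trained for 60 days.
estimate = training_compute_pfs_days(1e14, 1000, 0.33, 60)
print(round(estimate, 1))  # 1980.0 pfs-days, above the 1800 threshold
```

A run of roughly this scale would clear the question's 1800 pfs-days bar; the utilization factor matters a lot, so resolution should use the same utilization convention as the original article where possible.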
The resolution date will be retroactively set to one week prior to the first credible report.