This question is related to the 2018 MIRI research progress question.
The Machine Intelligence Research Institute does fundamental mathematical research to ensure smarter-than-human artificial intelligence has a positive impact.
They stand out in the AI safety research community in that the majority of their research does not focus on machine learning-based approaches similar to the architecture of current state-of-the-art systems. Instead, they use tools from logic, game theory, and decision theory, with the aim of “deconfusing” our foundational understanding of AI in the way in which Newtonian physics “deconfused” our understanding of the natural world.
Every year since 2015, MIRI has made predictions and evaluations of how much progress they made in five areas of their research agenda, rated on the following scale:
1 means “limited” progress, 2 “weak-to-modest” progress, 3 “modest,” 4 “modest-to-strong,” and 5 “sizable.”
So we ask:
What will be the average progress rating in MIRI’s evaluation of their 2019 research progress on the “agent foundations” research agenda?
Resolution
The question resolves via a post on MIRI’s blog. If ratings are reported for three or all four of the relevant areas, we will take the average of those; if ratings are reported for fewer than three areas, the question will resolve ambiguous.
The “agent foundations” research agenda comprises the areas “embedded world-models”, “decision theory”, “robust delegation” and “subsystem alignment” (but not the “other” category).
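For concreteness, here is a minimal sketch (in Python) of how the averaging in the resolution criteria would work. The area names are taken from the question text, but the per-area ratings and the `resolve` function are purely hypothetical illustrations, not MIRI’s actual numbers or an official resolution script.

```python
def resolve(ratings):
    """Average the reported per-area ratings if at least three of the four
    agent-foundations areas have one; otherwise return None, meaning the
    question resolves ambiguous."""
    reported = [r for r in ratings.values() if r is not None]
    if len(reported) < 3:
        return None
    return sum(reported) / len(reported)

# Hypothetical example: ratings on the 1-5 scale for all four areas.
example = {
    "embedded world-models": 2,   # weak-to-modest
    "decision theory": 3,         # modest
    "robust delegation": 2,       # weak-to-modest
    "subsystem alignment": 1,     # limited
}
print(resolve(example))  # 2.0
```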
The question will close retroactively one week prior to the publication of the blog post.
Data
Past predictions and evaluations can be found in this blog post, or in a spreadsheet version here.
Note that the way in which MIRI grouped their research areas for classifying progress changed in 2018 (this is described in more detail in the data sources).