What will be MIRI’s evaluation of their 2018 research progress on the “agent foundations” research agenda?
This question is related to the 2019 MIRI research progress question.
The Machine Intelligence Research Institute does fundamental mathematical research to ensure smarter-than-human artificial intelligence has a positive impact.
They stand out in the AI safety research community in that the majority of their research does not focus on machine-learning approaches resembling the architecture of current state-of-the-art systems. Instead, they use tools from logic, game theory, and decision theory, with the aim of “deconfusing” our foundational understanding of AI in the way that Newtonian physics “deconfused” our understanding of the natural world.
Every year since 2015, MIRI has published predictions and evaluations of how much progress it made in five areas of its research agenda, on the following scale:
1 means “limited” progress, 2 “weak-to-modest,” 3 “modest,” 4 “modest-to-strong,” and 5 “sizable.”
They have not yet released their 2018 evaluations, offering an opportunity for a short-term question.
So we ask:
What will be the average progress rating in MIRI’s evaluation of their 2018 research progress on the “agent foundations” research agenda?
Resolution will be by a post on MIRI’s blog. If three or four of the four area ratings are reported, we’ll take the average of those; if fewer are reported, the question will resolve ambiguous.
The “agent foundations” research agenda comprises the areas “embedded world-models”, “decision theory”, “robust delegation” and “subsystem alignment” (but not the “other” category).
The question will close retroactively one week prior to the publication of the blog post.
Note that the way in which MIRI grouped their research areas for classifying progress changed in 2018 (this is described more in the data sources).
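To make the averaging rule concrete, here is a minimal sketch in Python. The ratings below are hypothetical placeholders, not MIRI’s actual numbers; only the four area names and the “three or four reported, else ambiguous” rule come from the question text.

```python
# Sketch of the resolution rule described above, with hypothetical ratings.
# Ratings use MIRI's 1-5 scale; an unreported area is marked None.
ratings = {
    "embedded world-models": 4,   # hypothetical value
    "decision theory": 3,         # hypothetical value
    "robust delegation": None,    # suppose this one goes unreported
    "subsystem alignment": 2,     # hypothetical value
}

reported = [r for r in ratings.values() if r is not None]
if len(reported) >= 3:
    # Three or four ratings reported: resolve to their average.
    print(f"Resolves to {sum(reported) / len(reported):.2f}")
else:
    # Fewer than three ratings reported: the question resolves ambiguous.
    print("Resolves ambiguous")
```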