When will an AI system match amateur human performance on Obstacle Tower with no domain-specific hardcoded knowledge?


Question

Following the success of AI agents on classical benchmarks like chess, the game-engine company Unity has created a new video game, 'Obstacle Tower', specifically designed to showcase the weaknesses of current deep RL algorithms.

The goal of Obstacle Tower is to climb as high as possible up a tower. In order to do so, the agent must (as the interaction sketch after the lists below illustrates)…

  • Learn directly from pixel input to solve...

  • ...low-level control problems...

  • ...and high-level planning problems...

  • ...using a sparse reward signal

In addition, Obstacle Tower is designed to contain:

  • Physics-driven interactions. The movement of the agent and of other objects within the environment is controlled by a real-time 3D physics system.

  • High visual fidelity. The graphics are much closer to photo-realistic than other platforms such as the Arcade Learning Environment, VizDoom, or DeepMind Lab.

  • Procedural generation of nontrivial levels and puzzles. Instead of a stock collection of levels, new ones can be created automatically.

  • Procedurally varied graphics. The same level might look different on different runs due to changes in textures, lighting conditions and object geometry.
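
To make the task set-up concrete, below is a minimal sketch of the agent–environment loop, assuming the Gym-style Python interface the Obstacle Tower paper describes. The package name, binary path, constructor arguments, and random policy are illustrative assumptions rather than the definitive tooling.

```python
# Minimal interaction sketch (illustrative only). Assumes the Gym-style
# interface described in the Obstacle Tower paper; the import path and
# constructor arguments may differ from the released tooling.
from obstacle_tower_env import ObstacleTowerEnv  # assumed import path

env = ObstacleTowerEnv("./ObstacleTower/obstacletower", retro=True)

obs = env.reset()                        # pixel observation only; no hand-coded state
done = False
episode_return = 0.0
while not done:
    action = env.action_space.sample()   # stand-in for a learned policy
    obs, reward, done, info = env.step(action)
    episode_return += reward             # sparse: reward arrives mainly on clearing a floor

print("episode return (roughly floors cleared):", episode_return)
env.close()
```
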

So, to better understand the future progress of deep reinforcement learning (and potentially other techniques), we ask:

When will an AI system achieve a mean performance on Obstacle Tower of at least 9 floors solved in a single episode, in the Strong Generalization condition, using no domain-specific hardcoded knowledge?
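
For reference, the resolution criterion could be checked with a harness along the lines of the sketch below: average the floors cleared per episode over tower seeds the agent never trained on (the "Strong Generalization" condition) and compare against the 9-floor threshold. The helper names and seed range are hypothetical.

```python
# Hypothetical evaluation harness for the resolution criterion.
# run_episode(agent, seed) is assumed to return the number of floors cleared
# in one episode on a tower generated from `seed`.
import statistics

HELDOUT_SEEDS = range(1001, 1006)  # illustrative seeds never used during training

def mean_floors(run_episode, agent, heldout_seeds=HELDOUT_SEEDS):
    """Mean floors cleared per episode on held-out tower seeds."""
    return statistics.mean(run_episode(agent, seed) for seed in heldout_seeds)

def resolves_positively(run_episode, agent, threshold=9.0):
    """True once the agent's mean held-out performance reaches the 9-floor threshold."""
    return mean_floors(run_episode, agent) >= threshold
```
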


Data

See Tables 1 and 2 in the original Unity paper.

Human performance (measured on volunteer Unity Technologies employees) had a mean score (standard deviation) of 12 (6.8) floors reached in the training condition and 9.3 (3.1) floors in the test condition, with a maximum of 20 floors in the test condition.

The best mean scores (standard deviations) using state-of-the-art algorithms were 0.6 (0.8) floors with OpenAI’s Proximal Policy Optimisation (PPO) and 1.6 (0.5) floors with DeepMind’s Rainbow DQN.
