
Question


When will an AI system match or surpass human performance at visual question answering on the VQA2.0 dataset?

Comprehending an image involves more than recognising which objects or entities are in it; it also means recognising events, relationships, and context. The problem therefore requires sophisticated image recognition, language understanding, world-modelling, and "image comprehension". There are several datasets in use. The VQA datasets were generated by asking Amazon Mechanical Turk workers to propose questions about photos from Microsoft's COCO image collection. (Source)

Concretely, this involves questions such as looking at an image of a pizza and identifying whether it is vegetarian, or how many slices it has been cut into.

Human performance on this dataset is currently ~83%, so it is quite challenging!
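For reference, the VQA benchmarks report a consensus accuracy: each question comes with ten human answers, and a prediction scores min(#matching humans / 3, 1), so an answer given by at least three annotators counts as fully correct. Below is a minimal sketch of that metric; the answer strings are hypothetical, and the official evaluation additionally normalises answers and averages over annotator subsets.

```python
from typing import List

def vqa_accuracy(predicted: str, human_answers: List[str]) -> float:
    """Consensus accuracy used by the VQA benchmarks: a prediction is
    fully correct if at least 3 of the (typically 10) annotators gave it."""
    matches = sum(
        1 for ans in human_answers
        if ans.strip().lower() == predicted.strip().lower()
    )
    return min(matches / 3.0, 1.0)

# Hypothetical example: ten annotators answer "How many slices?"
humans = ["8", "8", "8", "6", "8", "8", "eight", "8", "8", "8"]
print(vqa_accuracy("8", humans))  # 1.0  -- at least three annotators agree
print(vqa_accuracy("6", humans))  # 0.33 -- only one annotator said "6"
```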

The question resolution date will be set to the earliest of the following dates:

* Publication date of a credible paper, blog post, video, or similar demonstrating an AI with performance >= 83.00%.

* An earlier date, referenced in a credible paper, blog post, video, or similar, by which the feat had already been achieved (similar to how DeepMind kept AlphaGo's victory over European champion Fan Hui secret from October 2015 to January 2016 so that the announcement coincided with the publication of the corresponding Nature paper).


Data

Data on previous performance on this benchmark can be found here: https://www.eff.org/ai/metrics#Visual-Questio… (data in table "COCO Visual Question Answering (VQA) real images 2.0 open ended") as well as on the VQA challenge site leaderboard (https://evalai.cloudcv.org/web/challenges/cha…).

You might improve your forecasts by also gathering data from other benchmarks for visual question answering. There is more performance data for the preceding VQA1.0 dataset, which was eventually superseded because language biases inflated performance metrics. VQA2.0 is harder: for most questions (e.g. "Is the child riding a bike?") it adds a similar image with a different answer (e.g. a child sitting next to a bike).

In addition, the Visual7W dataset, which can also be found in the EFF dataset, is based on the same underlying dataset of images (Microsoft COCO) but provides richer questions and longer answers than VQA (see https://github.com/yukezhu/visual7w-toolkit).
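One simple way to use the historical results above is to fit a trend to the best published accuracy over time and read off when it would cross 83%. The sketch below illustrates that idea with hypothetical data points standing in for the EFF table; a straight-line fit is only a first pass, since benchmark progress is irregular and tends to saturate near the human ceiling.

```python
import numpy as np

# Hypothetical (decimal year, accuracy) pairs for the best published
# VQA2.0 result -- replace with real points from the EFF table or the
# VQA challenge leaderboard.
dates = np.array([2017.0, 2017.5, 2018.0, 2018.5])
accuracy = np.array([0.66, 0.68, 0.70, 0.72])

# Fit accuracy ~ slope * year + intercept.
slope, intercept = np.polyfit(dates, accuracy, 1)

# Solve slope * year + intercept = 0.83 for the crossing year.
target = 0.83
crossing_year = (target - intercept) / slope
print(f"Naive linear extrapolation reaches {target:.0%} around {crossing_year:.1f}")
```

Treat the output as one input among many; a logistic or otherwise saturating fit, or reference-class comparisons with VQA1.0 and Visual7W, may give a more realistic picture.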
