By end of 2019, what will be the maximum batch size used in training by a published AI system?


Background

Two things determine how much compute it is useful to throw at training an AI system. The first is economics: the supply of and demand for compute, which determine at what point the cost becomes prohibitive. The second is parallelizability.

To understand this concept, suppose you wanted to add up a list of 1000 numbers. If you could get 10 friends to come in and help, each of you could add up 100 numbers and you would finish the task much faster. The task is parallelizable. However, suppose you wanted to list the numbers in the Fibonacci sequence (where each entry is the sum of the two preceding it). Having 10 friends join you wouldn’t be nearly as helpful, since computing each new number in the sequence requires knowing the previous ones, so you couldn’t all just race ahead at different speeds.
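A minimal Python sketch of this contrast (purely illustrative, not part of the original question): the sum can be split across worker processes, while the Fibonacci sequence cannot, because each step depends on the ones before it.

    # Illustrative sketch: a parallelizable sum versus an inherently
    # sequential Fibonacci computation.
    from multiprocessing import Pool

    def partial_sum(chunk):
        # Each "friend" (worker process) adds up its own chunk independently.
        return sum(chunk)

    def parallel_sum(numbers, n_workers=10):
        chunk_size = len(numbers) // n_workers
        chunks = [numbers[i:i + chunk_size] for i in range(0, len(numbers), chunk_size)]
        with Pool(n_workers) as pool:
            return sum(pool.map(partial_sum, chunks))

    def fibonacci(n):
        # Each new term needs the two before it, so extra workers cannot help:
        # the computation is a chain of dependent steps.
        a, b = 0, 1
        sequence = []
        for _ in range(n):
            sequence.append(a)
            a, b = b, a + b
        return sequence

    if __name__ == "__main__":
        print(parallel_sum(list(range(1, 1001))))  # 500500, summed in parallel
        print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]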

Similarly, for researchers to benefit from more compute, they must find ways of parallelizing their training: either by splitting the data and running many instances of the same model across many machines (data parallelism), or by splitting the model itself in a way that can be efficiently parallelized (model parallelism).
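As a concrete, purely illustrative sketch of the data-parallel case, the toy example below splits one large batch across several simulated workers, has each compute a gradient on its own shard, and averages the results, the basic pattern behind synchronous large-batch training. The linear model, loss, and all numbers are hypothetical.

    # Toy sketch of synchronous data parallelism: split a batch across
    # simulated workers, compute per-shard gradients of a least-squares
    # loss, then average them as a gradient all-reduce would.
    import numpy as np

    def shard_gradient(w, X, y):
        # Gradient of 0.5 * mean((X @ w - y)**2) with respect to w on one shard.
        residual = X @ w - y
        return X.T @ residual / len(y)

    def data_parallel_step(w, X, y, n_workers=4, lr=0.1):
        X_shards = np.array_split(X, n_workers)
        y_shards = np.array_split(y, n_workers)
        grads = [shard_gradient(w, Xs, ys) for Xs, ys in zip(X_shards, y_shards)]
        avg_grad = np.mean(grads, axis=0)  # synchronous gradient averaging
        return w - lr * avg_grad

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(4096, 8))  # one "large batch" of 4096 examples
        true_w = rng.normal(size=8)
        y = X @ true_w
        w = np.zeros(8)
        for _ in range(100):
            w = data_parallel_step(w, X, y)
        print(np.allclose(w, true_w, atol=1e-2))  # True: recovers the weights

In a real large-batch run the shards would live on different accelerators and the averaging would be done by a collective communication step, but the arithmetic is the same.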

A recent OpenAI milestone release notes that:

Batch sizes of 8 thousand [GDG+17], 16 thousand [SKYL17], 32 thousand [YGG17, YZH+17, ASF17], and even 64 thousand [JSH+18] examples have been effectively employed to train ImageNet, and batch sizes of thousands have been effective for language models and generative models [OEGA18, PKYC18, BDS18].

This phenomenon is not confined to supervised learning: in reinforcement learning, batch sizes of over a million timesteps (with tens of thousands of environments running in parallel) have been used in a Dota-playing agent [BCD+18], and even in simple Atari environments batch sizes of several thousand timesteps have proved effective [AAG+18, HQB+18, SA18].

We now ask:

By Jan 1, 2020, what will be the maximum batch size used in training by a published AI system?


Resolution

A “published AI system” is a system that is the topic of a published research paper, pre-print, or credible blog post. To be admissible, the paper or blog post must give sufficient information to estimate training compute to within some error threshold.
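As a purely hypothetical illustration of such an estimate (the question itself does not prescribe a formula), a common rule of thumb for dense networks puts training compute at roughly 6 FLOPs per parameter per example processed, about 2 for the forward pass and 4 for the backward pass:

    # Hypothetical back-of-the-envelope estimate, not specified by the question:
    # training compute ~ 6 FLOPs x parameters x examples processed
    # (roughly 2 FLOPs/parameter forward and 4 FLOPs/parameter backward).

    def estimate_training_flops(parameters, batch_size, steps, flops_per_param=6):
        examples_processed = batch_size * steps
        return flops_per_param * parameters * examples_processed

    if __name__ == "__main__":
        # Illustrative numbers only, not taken from any cited system.
        flops = estimate_training_flops(parameters=100e6, batch_size=32_000, steps=100_000)
        print(f"~{flops:.2e} FLOPs (~{flops / (86400 * 1e15):.2f} petaFLOP/s-days)")

The exact formula matters less than the requirement that the publication report enough quantities (parameters, batch size, number of steps, or their equivalents) to make such an estimate possible.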


Data

The current record is a batch size of ~8.4M observations, by OpenAI's 1v1 Dota bot.

We have yet to compile an overview of the historical growth in batch sizes. Contributions here are very welcome!


Acknowledgements

This question was suggested by Tamay in the question suggestion thread.
