Date ML model produces realistic 2-min video

Question

Unlike the field of image generation, where many studies have succeeded in generating high-resolution, high-fidelity realistic images, video generation with unconditional GANs is still a challenging problem (Saito et al., 2018). One reason videos may be harder than images is that they require greater memory and computational cost than static images (ibid.), and therefore involve increased data complexity (Clark et al., 2019).

Recently, an article from DeepMind (Clark et al., 2019) introduced the Dual Video Discriminator GAN (DVD-GAN), which scales to longer and higher-resolution videos. It beat previous attempts on several performance metrics for video synthesis on the Kinetics-600 dataset.

For synthesised video at 12 fps and a resolution of 256 × 256, DVD-GAN achieved a Fréchet Inception Distance of 3.35 (a metric that captures how closely the generated frames resemble real ones; lower is better) and an Inception Score of 64.05 (a metric of performance modelled on the judgment of human annotators; higher is better). However, the videos are very short: at most 48 frames, which amounts to only 2 seconds of video at 24 fps.
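For context, the two metrics above have standard definitions that can be sketched in a few lines of Python. This is a generic illustration over hypothetical feature and probability arrays, not the evaluation code used in the DVD-GAN paper; the function names and the NumPy/SciPy usage are assumptions.

```python
# Generic sketch of the standard FID and Inception Score definitions
# (not the DVD-GAN authors' evaluation code).
import numpy as np
from scipy import linalg

def frechet_inception_distance(feats_real, feats_gen):
    """FID between two sets of Inception feature vectors (N x D arrays)."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_g = np.cov(feats_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    covmean = covmean.real  # drop tiny imaginary parts from numerical error
    diff = mu_r - mu_g
    return diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean)

def inception_score(probs, eps=1e-12):
    """IS from an N x C array of predicted class probabilities p(y|x)."""
    p_y = probs.mean(axis=0, keepdims=True)        # marginal p(y)
    kl = probs * (np.log(probs + eps) - np.log(p_y + eps))
    return float(np.exp(kl.sum(axis=1).mean()))    # exp of mean KL divergence
```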

When will a generative model produce a video of at least 2880 frames, at a 256 × 256 resolution or better, with a reported Fréchet Inception Distance of less than 0.100, or an Inception Score of greater than 500.00?

This question resolves as the date when such a model is reported in a preprint or peer-reviewed journal.
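As a quick sanity check on the 2880-frame threshold, the arithmetic is straightforward; the 24 fps assumption below matches the 2-second figure quoted above.

```python
# Frame-count arithmetic for this question's threshold.
# Assumption: 24 fps playback, consistent with "2 seconds at 24 fps" above.
frames_required = 2880
fps = 24
print(frames_required / fps)   # 120.0 seconds, i.e. the 2-minute video in the title
print(frames_required / 48)    # 60.0, i.e. 60x longer than DVD-GAN's 48-frame samples
```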
