DAWNBench is a benchmarking suite for end-to-end deep learning systems:
Computation time and cost are critical resources in building deep models, yet many existing benchmarks focus solely on model accuracy. DAWNBench provides a reference set of common deep learning workloads for quantifying training time, training cost, inference latency, and inference cost across different optimization strategies, model architectures, software frameworks, clouds, and hardware.
As of January 27th, 2019, the top-ranked training-time entry in the Image Classification on ImageNet competition is a deep learning model trained on the Tesla V100. In fact, all five of the top spots were trained using the Tesla V100.
The Tesla V100 is NVIDIA's top-of-the-line hardware for training deep learning models:
With 640 Tensor Cores, Tesla V100 is the world’s first GPU to break the 100 teraFLOPS (TFLOPS) barrier of deep learning performance.
For now, this high-performance GPU dominates the Image Classification challenge; however, will it remain on top? With new specialized ASICs being released and growing competition from other GPU manufacturers, NVIDIA is in an increasingly crowded field. And if the hardware rankings do change, this could serve as a leading indicator of a regime shift in the availability and price of top-of-the-line AI systems.
At some point before the end of 2019, will the fastest training-time DAWNBench Image Classification entry have been trained with non-NVIDIA hardware?
Resolution
This question will resolve as true if, at any point before 12/31/2019, the DAWNBench leaderboard reports a new verified first-place entry in the "Image Classification on ImageNet" competition, in the training-time category, AND that entry uses hardware manufactured by a company other than NVIDIA.
The closing date will be retroactively set to one week before the resolution date.
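For forecasters who want to monitor the criterion mechanically, here is a minimal sketch of the check the resolution describes. It assumes a local CSV export of the training-time leaderboard with hypothetical column names ("rank", "hardware") and uses an illustrative keyword list to detect NVIDIA hardware; the real DAWNBench data format may differ, and the sketch does not check whether an entry is verified.

```python
import csv

# Hypothetical column names; the real DAWNBench leaderboard export may differ.
RANK_COL = "rank"
HARDWARE_COL = "hardware"

# Substrings indicating NVIDIA hardware (illustrative, not exhaustive).
NVIDIA_KEYWORDS = ("nvidia", "tesla", "v100", "titan", "gtx", "rtx")


def top_entry_is_non_nvidia(leaderboard_csv_path: str) -> bool:
    """Return True if the first-place training-time entry lists non-NVIDIA hardware."""
    with open(leaderboard_csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        raise ValueError("leaderboard file is empty")
    # Treat the entry with the smallest rank as first place.
    top = min(rows, key=lambda row: int(row[RANK_COL]))
    hardware = top[HARDWARE_COL].lower()
    return not any(keyword in hardware for keyword in NVIDIA_KEYWORDS)


if __name__ == "__main__":
    # "imagenet_training_time.csv" is a placeholder for a local leaderboard export.
    print(top_entry_is_non_nvidia("imagenet_training_time.csv"))
```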