This question is identical to the mid-2020 maximum compute question, except for the dates.
This question tracks increases in compute as measured using this OpenAI blog post, which delineates a methodology for estimating training compute.
By July 1st 2019, what is the maximum number of computations used by an AI system described in a published paper, pre-print or blog post, calculated using the OpenAI methodology and measured in petaflop/s-days?
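As a rough sketch of the unit involved: the OpenAI post defines one petaflop/s-day as 10^15 floating-point operations per second sustained for one day, and estimates training compute from a model's per-example cost and the number of examples processed. The figures and the factor of 3 (forward plus backward pass) below are illustrative assumptions, not values from any specific paper.

```python
# One petaflop/s-day = 1e15 FLOP/s sustained for 24 hours.
PFS_DAY = 1e15 * 86400  # ~8.64e19 FLOPs

def training_compute_pfs_days(flops_per_forward_pass, n_examples,
                              backward_multiplier=3):
    """Estimate training compute in petaflop/s-days.

    Assumes total FLOPs = forward-pass FLOPs x a forward+backward
    multiplier (commonly taken as 3) x examples processed.
    """
    total_flops = flops_per_forward_pass * backward_multiplier * n_examples
    return total_flops / PFS_DAY

# Hypothetical model: 1e9 FLOPs per forward pass, 1e10 training examples
# -> 3e19 total FLOPs, or roughly 0.35 petaflop/s-days.
print(training_compute_pfs_days(1e9, 1e10))
```

A paper qualifies for this question only if it reports enough of these inputs (or equivalents, such as GPU count, utilization, and wall-clock time) to pin the estimate down.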
A “published AI system” is a system that is the topic of a published research paper or blog post. To be admissible, the paper or blog post must give sufficient information to estimate training compute within some error threshold.
Note that the system need not be developed for research purposes. Resolution criteria also include, for example, a blog post by a major tech company revealing the amount of training compute used for an image classifier in their product.
Helpful resources for thinking about this question include this spreadsheet, which extrapolates the current AI compute trend along the lines suggested by Ryan Carey, and this Guesstimate model. I especially recommend the calculators (accessible via the small calculator symbol in the menu bar), which let you enter a target compute or cost and find out how long until it is reached.