This question is related to others concerning GPT-2, including:
- how much computation it used
- whether another lab will also avoid publishing details of state-of-the-art models
- whether there will be malicious use of the technology
- whether an open-source version of similar power will become publicly available
In their release of the GPT-2 language model, OpenAI wrote:
Other disciplines such as biotechnology and cybersecurity have long had active debates about responsible publication in cases with clear misuse potential, and we hope that our experiment will serve as a case study for more nuanced discussions of model and code release decisions in the AI community. [...] We will further publicly discuss this strategy in six months.
We’re now posting several questions to forecast the impact of this model and the policy surrounding it. Here we ask:
By December 31st 2019, will there be an agreement regulating publishing norms surrounding dual-use AI technologies, endorsed by both DeepMind and OpenAI?
Resolution
The agreement need not be legally binding. It must simply be a blog post, paper, or other document outlining a set of concrete norms for publishing results that could be used maliciously (with the norms explicitly addressing such results, as well as other potential applications). Both OpenAI and DeepMind must have publicly stated their organisational support for, and intent to abide by, the norms outlined. Endorsement by individual representatives (even at co-founder/CEO level) is not sufficient. The proposal must be more concrete than, e.g., the Asilomar AI Principles or the OpenAI Charter, which only state broad goals and commitments with little detail on how to navigate them in practice.