This question was operationalized during the May 12 question operationalization workshop, based on a suggestion by xigua.
Authorship credit goes to Ruby, James, Misha_Yagudin, ghabs, alexanderpolta, Jotto and Jacob.
To take part in a future question-writing workshop, join our Discord server here.
Resolution
By Jan 1st 2022, will there have been a confirmed case of significant damage caused by outside actors intentionally affecting a machine learning system, with coordinated/consolidated action, using adversarial examples?
- Confirmed: by a public report, or by evidence undisputed by a credible subset of researchers within the ML/AI communities.
- Case: a set of incidents involving the same system and the same outside actors, possibly distributed over time. For example:
  - Repeated attacks against the same system over the course of a year that together caused more than $10M in damage, even though each individual incident caused less, count
  - A single group of actors attacking many different targets, causing less than $10M in damage to each, does not count
  - A large group of uncoordinated actors attacking the same target (e.g. a nation-wide epidemic of kids pasting Tesla-fooling stickers on road signs) does not count
- Significant harm/damage: one acute death, or a loss of unambiguously more than $10M over the course of the attack (everyone agrees it was at least that bad), inflation-adjusted to 2019 USD.
  - Downward movements in a company's stock price do not count as damage
- Outside actors: not those who design or deploy the system
- Machine learning system: a system whose behavior is trained on data (including “self-play”) rather than explicitly programmed
- Intentionally: the actions leading to harm were intended to affect the system (rather than, say, occurring in the course of ordinary use of the system without intent to elicit unusual outcomes)
- Coordinated/consolidated: the damage/harm results from actions taken intentionally and collectively to cause some result from the system (e.g. even if the “attack” takes two years, all the actions over those two years were trying to do one thing)
- Adversarial example: an input edited with a perturbation unnoticeable to humans that causes the system to misclassify it (a brief code sketch follows this list). For example:
  - Someone adding a transparent sticker with a few pixels on it to a road sign, causing a Tesla to crash, counts as an adversarial example
  - Someone putting a political sticker on a road sign, causing a Tesla to crash, does not count (even if it is intentional)
  - Someone removing the road sign, causing the Tesla to crash, does not count
  - Someone making an elaborate chalk drawing that creates a cartoonish optical illusion of the road veering off does not count
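For readers unfamiliar with the term, here is a minimal sketch of how such a human-unnoticeable perturbation is typically constructed, using the fast gradient sign method (FGSM). It assumes a PyTorch image classifier; the names `model`, `image`, `label`, and `epsilon` are illustrative and not part of the resolution criteria.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.01):
    """Return `image` plus a small perturbation chosen to push the
    classifier toward a wrong prediction (hypothetical helper)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # how wrong the model is on the true label
    loss.backward()                              # gradient of the loss w.r.t. the input pixels
    # Nudge each pixel slightly in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()        # keep pixel values in a valid range
```

For a small enough `epsilon`, the perturbed image looks unchanged to a human but can flip the model's prediction, which is the sense of "human-unnoticeable perturbation" used above.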