By 2022, will there have been a case of >=$10M of damage caused by adversarial examples in a coordinated action?
By Jan 1st 2022, will there have been a confirmed case of significant damage caused by outside actors intentionally affecting a machine learning system, with coordinated/consolidated action, using adversarial examples?
Confirmed: by a public report, or by evidence undisputed by a credible subset of researchers in the ML/AI community.
Case: A set of incidents involving the same system and the same outside actors, but possibly distributed over time. For example:
- Repeated attacks against the same system over the course of a year that caused more than $10M of damage in total, even if each individual incident caused less, count
- A single group of actors attacking many different targets, causing less than $10M damage to each, does not count
- A large group of uncoordinated actors attacking the same target (e.g. a nation-wide epidemic of kids pasting Tesla-fooling stickers on road signs) does not count
Significant harm/damage: one acute death, or a loss of unambiguously more than $10M over the course of the attack (everyone agrees it was at least that bad). Inflation-adjusted to 2019 USD.
- Downward movements of the stock price of a company do not count as damage
Outside actors: not those who design or deploy the system
Machine learning system: a system whose behavior is learned from data (including “self-play”) rather than explicitly programmed
Intentionally: the actions leading to harm were intended to affect the system (rather than, say, occurring in the ordinary course of using the system with no intent to provoke unusual outcomes)
Coordinated/consolidated: the damage/harm results from actions taken intentionally and collectively to cause some result from the system (e.g. even if the “attack” takes two years, all the actions over those two years were trying to do one thing)
Adversarial example: an input altered by a perturbation unnoticeable to humans that causes the system to misclassify it. For example:
- Someone adding a transparent sticker with a few pixels on it to a road sign, causing a Tesla to crash, counts as an adversarial example
- Someone putting a political sticker on a road sign causing a Tesla to crash does not count (even if it's intentional)
- Someone removing the road sign causing the Tesla to crash does not count
- Someone making an elaborate chalk drawing causing a cartoonish optical illusion of the road veering off, does not count
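For readers unfamiliar with the term, the mechanics of an adversarial perturbation can be sketched with a toy linear classifier in the style of the fast gradient sign method (FGSM). Everything here is a hypothetical stand-in, not drawn from any real system or attack:

```python
import numpy as np

def fgsm_perturb(x, w, epsilon):
    """Nudge input x by epsilon in the direction that most increases the
    classifier's score. For a linear score w.x + b, the gradient with
    respect to x is simply w, so the perturbation is epsilon * sign(w)."""
    return x + epsilon * np.sign(w)

rng = np.random.default_rng(0)
w = rng.normal(size=100)        # hypothetical classifier weights
b = 0.0

# An input the classifier currently labels negative (score < 0).
x = -0.02 * np.sign(w)
score_before = w @ x + b

# A small per-coordinate perturbation (at most 0.05 per "pixel") is
# enough to flip the classification, even though x barely changes.
x_adv = fgsm_perturb(x, w, epsilon=0.05)
score_after = w @ x_adv + b
```

The point of the example is the asymmetry the question's criteria rely on: the change to the input is tiny and structured (like a few pixels on a transparent sticker), yet it is chosen specifically to exploit the learned decision boundary, which is what distinguishes it from simply defacing or removing a sign.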