The Facebook bAbI 20 QA dataset is an example of a basic reading comprehension task. It involves learning to answer simple reasoning questions like this one:
"Task 11: Basic Coreference. Daniel was in the kitchen. Then he went to the studio. Sandra was in the office. Question: Where is Daniel? Answer: studio."
A human can come very close to 100% accuracy on this task, so we shall say that a system solves it with "excellent performance" if it reaches 99% accuracy. This has been achieved with large training datasets (10,000 examples per task), but not yet with the smaller training dataset of 1,000 examples for each of the 20 task categories.
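For readers unfamiliar with the dataset, the raw bAbI files store each story as numbered lines of plain text, with question lines carrying the answer (and the IDs of the supporting facts) separated by tabs. The following is a minimal, illustrative sketch of parsing one such story; the story text and variable names here are our own, not taken from the dataset distribution.

```python
# Sketch: parse one bAbI-style story. In the raw files, statement lines
# are "<n> <sentence>" and question lines are "<n> <question>\t<answer>\t<fact ids>".
story = (
    "1 Daniel was in the kitchen.\n"
    "2 Then he went to the studio.\n"
    "3 Sandra was in the office.\n"
    "4 Where is Daniel?\tstudio\t2"
)

facts, qa_pairs = [], []
for line in story.splitlines():
    num, _, text = line.partition(" ")  # strip the leading line number
    if "\t" in text:
        question, answer, support = text.split("\t")
        qa_pairs.append((question, answer, support))
    else:
        facts.append(text)
```

After parsing, `facts` holds the three statements and `qa_pairs` holds the single question with its answer "studio" and supporting fact ID "2".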
More details and literature references can be found at the official Facebook Research bAbI page.
The question resolution date will be set to the earliest of the following dates:
* The publication date of a credible paper, blog post, video, or similar demonstrating an AI with performance >= 99.00% in this setting.
* An earlier date, referenced in a credible paper, blog post, video, or similar, by which the feat had already been achieved (similar to how DeepMind kept AlphaGo's October 2015 victory over European champion Fan Hui secret until January 2016, so that the announcement would coincide with publication of the corresponding Nature paper).
Past performance on this benchmark can be found at: https://www.eff.org/ai/metrics#Reading-Compre… This link also contains data on other similar reading comprehension tasks, such as the Stanford Question Answering Dataset (SQuAD).