snlp9-2017-09-15.pdf

penzant
September 15, 2017

Transcript

  1. Adversarial Examples for Evaluating Reading Comprehension Systems
    Robin Jia, Percy Liang (Stanford Univ.) @ EMNLP2017
    Reader: Saku Sugawara (Univ. Tokyo)
    September 15, 2017 at SNLP9
    1 / 22
  2. Abstract
    Research Question: The extent to which reading comprehension (RC) systems truly understand language remains unclear.
    Proposed Method: An adversarial evaluation scheme for the RC dataset: testing whether systems can answer questions about paragraphs that contain adversarially inserted sentences.
    Result: The accuracy of sixteen published models drops from an average of 75% F1 to 36%.
    → Experiments demonstrate that no published open-source model is robust to the addition of adversarial sentences.
    2 / 22
  3. Introduction - RC Task
    Article: Super Bowl 50
    Paragraph: “Peyton Manning became the first quarterback ever to lead two different teams to multiple Super Bowls. He is also the oldest quarterback ever to play in a Super Bowl at age 39. The past record was held by John Elway, who led the Broncos to victory in Super Bowl XXXIII at age 38 and is currently Denver’s Executive Vice President of Football Operations and General Manager.”
    Question: “What is the name of the quarterback who was 38 in Super Bowl XXXIII?”
    Answer: John Elway
    3 / 22
  4. Introduction - RC Task
    Article: Super Bowl 50
    Paragraph: “Peyton Manning became the first quarterback ever to lead two different teams to multiple Super Bowls. He is also the oldest quarterback ever to play in a Super Bowl at age 39. The past record was held by John Elway, who led the Broncos to victory in Super Bowl XXXIII at age 38 and is currently Denver’s Executive Vice President of Football Operations and General Manager.”
    Question: “What is the name of the quarterback who was 38 in Super Bowl XXXIII?”
    Answer: John Elway
    4 / 22
  5. Introduction - Adversarial Sentence
    Article: Super Bowl 50
    Paragraph: “Peyton Manning became the first quarterback ever to lead two different teams to multiple Super Bowls. He is also the oldest quarterback ever to play in a Super Bowl at age 39. The past record was held by John Elway, who led the Broncos to victory in Super Bowl XXXIII at age 38 and is currently Denver’s Executive Vice President of Football Operations and General Manager. Quarterback Jeff Dean had jersey number 37 in Champ Bowl XXXIV.”
    Question: “What is the name of the quarterback who was 38 in Super Bowl XXXIII?”
    Answer: John Elway
    5 / 22
  6. Introduction - Adversarial Sentence
    Article: Super Bowl 50
    Paragraph: “Peyton Manning became the first quarterback ever to lead two different teams to multiple Super Bowls. He is also the oldest quarterback ever to play in a Super Bowl at age 39. The past record was held by John Elway, who led the Broncos to victory in Super Bowl XXXIII at age 38 and is currently Denver’s Executive Vice President of Football Operations and General Manager. Quarterback Jeff Dean had jersey number 37 in Champ Bowl XXXIV.”
    Question: “What is the name of the quarterback who was 38 in Super Bowl XXXIII?”
    Original Prediction: John Elway
    Prediction by BiDAF model under adversary: Jeff Dean
    6 / 22
  7. Framework for Adversarial Evaluation
    AdvAcc(f) \overset{\text{def}}{=} \frac{1}{|D_{\text{test}}|} \sum_{(p,q,a) \in D_{\text{test}}} v(\mathrm{Adv}(p, q, a, f), f)
    p, q, a: paragraph, question, answer
    f: model (BiDAF (Seo+ 2016) [arXiv], Match-LSTM (Wang and Jiang, 2016) [arXiv])
    v: F1 accuracy of predicted and gold answer
    Adv: adversary (AddSent, AddAny); see the computation sketch below
    8 / 22
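A minimal sketch of how AdvAcc(f) above could be computed, assuming a hypothetical model.predict(paragraph, question) interface and an adversary(p, q, a, model) function standing in for Adv (e.g. AddSent or AddAny); the F1 here is a simplified token-level version of v (the official SQuAD metric also normalizes answers and takes the max over multiple gold spans):

```python
from collections import Counter

def f1_score(prediction, gold):
    """Simplified token-level F1 between predicted and gold answers (the metric v)."""
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def adversarial_accuracy(model, dataset, adversary):
    """AdvAcc(f): mean F1 over adversarially perturbed test examples.

    `dataset` is an iterable of (paragraph, question, answer) triples;
    `adversary(p, q, a, model)` returns the perturbed paragraph Adv(p, q, a, f).
    """
    scores = []
    for p, q, a in dataset:
        p_adv = adversary(p, q, a, model)      # e.g. AddSent or AddAny
        prediction = model.predict(p_adv, q)   # hypothetical model interface
        scores.append(f1_score(prediction, a))
    return sum(scores) / len(scores)
```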
  8. AddSent
    1. Mutate the question: noun/adjective → antonym; NE → nearest word in GloVe
    2. Generate a fake answer: 26 types (NER and POS tags) = 26 manual fake answers
    3. Convert into a statement by 50 manually-defined rules (steps 1-3 are sketched below)
    4. Fix errors by crowdworkers: 5 workers = 5 candidates; use the worst candidate for each model
    10 / 22
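A toy sketch of the AddSent recipe above (steps 1-3), not the authors' implementation: the tiny ANTONYMS / NEAREST_NEIGHBOR / FAKE_ANSWERS tables and the single conversion rule below are illustrative stand-ins for the antonym lexicon, GloVe nearest neighbours, the 26 manually chosen fake answers, and the 50 hand-written rules (step 4, crowdworker post-editing, is omitted):

```python
# Illustrative stand-ins for the real lexical resources.
ANTONYMS = {"oldest": "youngest", "first": "last"}
NEAREST_NEIGHBOR = {"Super Bowl XXXIII": "Champ Bowl XXXIV", "38": "37"}
FAKE_ANSWERS = {"PERSON": "Jeff Dean"}  # one manual fake answer per answer type

def mutate_question(question):
    """Step 1: swap nouns/adjectives for antonyms and named entities
    for their nearest GloVe neighbours (here: toy lookup tables)."""
    for word, antonym in ANTONYMS.items():
        question = question.replace(word, antonym)
    for entity, neighbor in NEAREST_NEIGHBOR.items():
        question = question.replace(entity, neighbor)
    return question

def addsent(question, answer_type):
    """Steps 2-3: pick a type-matched fake answer and convert the mutated
    question into a declarative distractor sentence."""
    mutated = mutate_question(question)
    fake_answer = FAKE_ANSWERS[answer_type]
    # One crude rule standing in for the 50 conversion rules.
    body = mutated.split("the name of ", 1)[-1].rstrip("?")
    return f"{fake_answer} was {body}."

question = "What is the name of the quarterback who was 38 in Super Bowl XXXIII?"
print(addsent(question, "PERSON"))
# -> "Jeff Dean was the quarterback who was 37 in Champ Bowl XXXIV."
```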
  9. AddAny
    1. Initialize words randomly from common English words.
    2. Greedily replace a word with {random 20 words + words in q} (see the sketch below)
    11 / 22
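A condensed sketch of an AddAny-style greedy search, assuming a hypothetical model_f1(paragraph, question, answer) callable that scores the model against the gold answer and a common_words list with at least pool_size entries; the paper runs multiple passes and optimizes the model's expected F1, which this simplifies to a single scalar score:

```python
import random

def addany(paragraph, question, answer, model_f1, common_words,
           length=10, epochs=3, pool_size=20):
    """Greedy AddAny-style search: build a word-salad distractor that,
    appended to the paragraph, minimizes the model's score."""
    # 1. Initialize the distractor with random common English words.
    words = [random.choice(common_words) for _ in range(length)]

    def score(candidate):
        return model_f1(paragraph + " " + " ".join(candidate), question, answer)

    best = score(words)
    for _ in range(epochs):
        for i in range(length):
            # 2. Candidate replacements: a few random common words
            #    plus the words of the question itself.
            pool = random.sample(common_words, pool_size) + question.split()
            for word in pool:
                trial = words[:i] + [word] + words[i + 1:]
                trial_score = score(trial)
                if trial_score < best:   # lower model score = stronger attack
                    best, words = trial_score, trial
    return " ".join(words)
```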
  10. Adversaries
    AddSent: no contradiction, grammatically correct
    AddOneSent (modified AddSent): uses a randomly selected candidate
    AddAny: may be contradictory or ungrammatical, carries no semantic content
    AddCommon (modified AddAny): uses only common words in the greedy search
    12 / 22
  11. Experiment
    Main models: BiDAF (Seo+ 2016) [arXiv], Match-LSTM (Wang and Jiang, 2016) [arXiv]
    Other models: 12 models (see the paper!)
    Data: 1000 sampled examples from the development set of SQuAD (2016)
    Code: [codalab]
    13 / 22
  12. Result - Main Models
    AddSent: model dependent (grammar: correct)
    AddOneSent: model independent (grammar: correct)
    AddAny: question dependent (grammar: incorrect)
    AddCommon: question independent (grammar: incorrect)
    17 / 22
  13. Result - Human Evaluation / Verification
    Human evaluation and manual verification for 100 samples
    Answer contradiction: 1 example
    Grammar error: 7 examples
    19 / 22
  14. Analysis - Adversarial Training
    Training data: AddSent (except crowdsourcing)
    AddSentMod: a variant of AddSent
      Uses a different set of fake answers (e.g. Jeff Dean → Charles Babbage)
      Prepends the adversarial sentence to the beginning of the paragraph (instead of appending it to the end)
    → More care must be taken to ensure that the model cannot overfit the adversary! (see the sketch below)
    21 / 22
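A small sketch of the augmentation setup discussed here, with a hypothetical make_distractor(question, answer) helper: train on paragraphs with an AddSent-style sentence appended, then evaluate with an AddSentMod-style variant that prepends the sentence (and would use a different set of fake answers), to check whether the model merely learned to ignore the final sentence:

```python
def augment(examples, make_distractor, prepend=False):
    """Attach a distractor sentence to each (paragraph, question, answer) triple.

    Training data appends AddSent-style sentences (prepend=False);
    an AddSentMod-style check prepends them instead (prepend=True)
    to probe whether the model overfits to the adversary's position.
    """
    augmented = []
    for paragraph, question, answer in examples:
        distractor = make_distractor(question, answer)  # hypothetical helper
        if prepend:
            new_paragraph = distractor + " " + paragraph
        else:
            new_paragraph = paragraph + " " + distractor
        augmented.append((new_paragraph, question, answer))
    return augmented
```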
  15. Summary
    Research Question: The extent to which reading comprehension (RC) systems truly understand language remains unclear.
    Proposed Method: An adversarial evaluation scheme for the RC dataset: testing whether systems can answer questions about paragraphs that contain adversarially inserted sentences.
    Result: The accuracy of sixteen published models drops from an average of 75% F1 to 36%.
    → Experiments demonstrate that no published open-source model is robust to the addition of adversarial sentences.
    22 / 22