
An introduction to ‘Good-Enough Compositional Data Augmentation’

A paper introduction given at the 最先端NLP勉強会 (SNLP study group)

‘Good-Enough Compositional Data Augmentation’ (Jacob Andreas, ACL 2020)

https://sites.google.com/view/snlp-jp/home/2020#h.hkh7toqi7fvv

Shuntaro Yada

September 25, 2020

Transcript

  1. 3 How GECA works

     The slide shows Figure 1 of the paper, which visualizes the proposed approach: two discontinuous sentence fragments (a–b, underlined) which appear in similar environments (a–b, highlighted) are identified; additional sentences in which the first fragment appears (c) are used to synthesize new examples (d) by substituting in the second fragment.
       (a) She picks the wug up in Fresno.
       (b) She puts the wug down in Tempe.
       (c) Pat picks cats up.
       (d) Pat puts cats down.
     Core assumption: if two (possibly discontinuous) fragments of training examples appear in some common environment, then any additional environment where the first fragment appears is also a valid environment for the second.
     Presenter's annotations: the two different fragments "picks … up" and "puts … down" occur in the same environment "(She ␣ the wug ␣ in)" (a, b); one of them, "picks … up", also occurs in another environment "(Pat ␣ cats ␣)" (c); so the other fragment, "puts … down", should be usable in that environment as well (d). A minimal code sketch of this substitution procedure is given after slide 5 below.
  2. 4 How GECA works (continued)

     Animation build step: same figure and annotations as the previous slide.
  3. 5 How GECA works (continued)

     Animation build step: same figure and annotations as the previous slide.
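
The substitution principle illustrated on slides 3–5 can be sketched in a few lines of Python. This is a deliberately simplified illustration, not the author's implementation: fragments are restricted to pairs of single tokens, the environment is a one-token window around each gap, and all names (extract, geca_augment, GAP) are my own.

# Illustrative sketch only (a simplification of the GECA idea, not the paper's code).
from collections import defaultdict
from itertools import combinations

GAP = "_"

def extract(tokens, window=1):
    """Yield (fragment, environment, template) triples for one sentence.
    The template is the whole sentence with the fragment tokens gapped out;
    the environment is a small window of context around each gap."""
    for i, j in combinations(range(len(tokens)), 2):
        fragment = (tokens[i], tokens[j])
        template = list(tokens)
        template[i] = template[j] = GAP
        env = tuple(
            (tuple(template[max(0, k - window):k]),   # left context of the gap
             tuple(template[k + 1:k + 1 + window]))   # right context of the gap
            for k in (i, j)
        )
        yield fragment, env, tuple(template)

def geca_augment(sentences, window=1):
    """If two fragments are seen in a common environment, substitute each
    into the templates where the other occurs (the GECA assumption)."""
    env_to_frags = defaultdict(set)
    frag_to_templates = defaultdict(set)
    for sent in sentences:
        for frag, env, template in extract(sent.split(), window):
            env_to_frags[env].add(frag)
            frag_to_templates[frag].add(template)

    synthesized = set()
    for frags in env_to_frags.values():
        for f1, f2 in combinations(frags, 2):
            for src, dst in ((f1, f2), (f2, f1)):
                for template in frag_to_templates[src]:
                    filled = list(template)
                    gaps = [k for k, tok in enumerate(filled) if tok == GAP]
                    for k, tok in zip(gaps, dst):
                        filled[k] = tok
                    new_sent = " ".join(filled)
                    if new_sent not in sentences:
                        synthesized.add(new_sent)
    return synthesized

data = {
    "She picks the wug up in Fresno",
    "She puts the wug down in Tempe",
    "Pat picks cats up",
}
print(geca_augment(data))  # contains "Pat puts cats down", among others

On the figure's example this sketch synthesizes "Pat puts cats down", exactly as in (d). The paper's method additionally handles multi-token and truly discontinuous fragments and is applied to whole training examples (e.g. instruction–action pairs), not just raw sentences.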
  4. 6 Related work

     • Linguistic support for the idea:
       - Chomsky: syntactic categories, well-formedness rules, phrase-structure grammar, …
       - Firth: the distributional hypothesis (the meaning of a word depends on the words that appear around it)
     • Earlier NLP techniques based on the same intuition as GECA:
       - Class-based language models (Brown et al.): n-gram LMs that take word classes / parts of speech into account
       - Unsupervised parsers (Clark, etc.): learning parsing rules by clustering
     Presenter's notes: if errors caused by data scarcity are more serious than errors caused by sloppy generation, it is fine to augment the data without worrying too much about accuracy; that said, GECA is clearly vulnerable to words that are spelled the same but mean different things.
  5. 7 Related work (continued)

     • Attempts other than data augmentation to improve the generalisation of black-box models ← task- and model-dependent:
       - Structured regularisation (Oh et al.)
       - Posterior regularisation (Ganchev et al.; Hu et al.)
       - Explicit stacks (Grefenstette et al.)
       - Composition operators (Bowman et al.; Russin et al.)
     • Data augmentation methods in NLP ← GECA aims to exploit compositionality more effectively:
       - Language-model-based word swapping (Ratner et al.)
       - Random word/character modifications (EDA; Wei and Zou)
       - Logic-based compositional rules (Jia and Liang)
     Other strong augmentation approaches: back-translation, paraphrasing.
  6. 9 SCAN

     The slide reproduces material from the original SCAN paper (Lake & Baroni, "Generalization without Systematicity"): example commands with their action sequences, the seq2seq setup, and Experiment 1 ("Generalizing to a random subset of commands"), in which networks generalize well to a random 80/20 split of the tasks while classic SRNs perform poorly. Example commands (a minimal interpreter sketch for these forms is given right after this item):
       jump ⇒ JUMP
       jump left ⇒ LTURN JUMP
       jump around right ⇒ RTURN JUMP RTURN JUMP RTURN JUMP RTURN JUMP
       turn left twice ⇒ LTURN LTURN
       jump thrice ⇒ JUMP JUMP JUMP
       jump opposite left and walk thrice ⇒ LTURN LTURN JUMP WALK WALK WALK
       jump opposite left after walk around left ⇒ LTURN WALK LTURN WALK LTURN WALK LTURN WALK LTURN LTURN JUMP

     Table 1 of the GECA paper: sequence match accuracies on SCAN datasets, in which the learner must generalize to new compositional uses of a single lexical item ("jump") or multi-word modifier ("around right") when mapping instructions to action sequences (SCAN) or vice versa (NACS; Bastings et al., 2018). Scores are averaged over 10 random seeds (± standard deviation); all improvements are significant (paired binomial test, p ≪ 0.001).

                   jump / SCAN    jump / NACS    right / SCAN   right / NACS
       seq2seq     0.00 ± 0.00    0.00 ± 0.00    0.00 ± 0.00    0.00 ± 0.00
       + GECA      0.87 ± 0.02    0.67 ± 0.01    0.82 ± 0.04    0.82 ± 0.03

     While the sequence-to-sequence model is unable to make any correct generalizations at all, applying GECA enables it to succeed most of the time.
     Presenter's note: only … examples (a small number) are added to the … examples of the original data.
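
To make the compositional structure of these commands concrete, here is a small illustrative interpreter. It is a sketch covering only the constructions that appear in the examples above (primitives, left/right, opposite, around, twice/thrice, and/after), not the official SCAN grammar or data generator; all names are my own.

# Minimal interpreter for the SCAN commands listed above (illustrative only).
PRIMITIVES = {"walk": ["WALK"], "run": ["RUN"], "look": ["LOOK"],
              "jump": ["JUMP"], "turn": []}
TURNS = {"left": "LTURN", "right": "RTURN"}

def interpret(command):
    words = command.split()
    # conjunctions bind loosest: "x and y" -> [x][y], "x after y" -> [y][x]
    for conj, swap in (("and", False), ("after", True)):
        if conj in words:
            k = words.index(conj)
            left, right = " ".join(words[:k]), " ".join(words[k + 1:])
            if swap:
                left, right = right, left
            return interpret(left) + interpret(right)
    # repetition: "x twice" / "x thrice"
    if words[-1] in ("twice", "thrice"):
        return interpret(" ".join(words[:-1])) * (2 if words[-1] == "twice" else 3)
    # direction modifiers
    if words[-1] in TURNS:
        turn, core = TURNS[words[-1]], PRIMITIVES[words[0]]
        if len(words) == 2:                    # e.g. "jump left"
            return [turn] + core
        if words[1] == "opposite":             # e.g. "jump opposite left"
            return [turn, turn] + core
        if words[1] == "around":               # e.g. "jump around right"
            return ([turn] + core) * 4
    return PRIMITIVES[words[0]]                # bare primitive, e.g. "jump"

assert interpret("jump around right") == ["RTURN", "JUMP"] * 4
assert interpret("jump opposite left and walk thrice") == \
    ["LTURN", "LTURN", "JUMP"] + ["WALK"] * 3
assert interpret("jump opposite left after walk around left") == \
    ["LTURN", "WALK"] * 4 + ["LTURN", "LTURN", "JUMP"]

Table 1's "jump" condition asks the model to generalize to new compositional uses of "jump": trivial for a rule-based interpreter like this, but not for a seq2seq model trained without GECA's synthesized examples.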
  7. 10 GeoQuery

     The slide reproduces the semantic parsing section of the paper. GEOQUERY (Zelle, 1995) consists of 880 English questions about United States geography, paired with meaning representations in the form of either logical expressions or SQL queries. The standard train–test split ensures that no natural-language question is repeated between train and test; as Finegan-Dollak et al. (2018) note, this is only a limited test of generalization, since many test examples share a logical form with the training data, so they introduce a more challenging "query" split in which neither questions nor logical forms are repeated (even after anonymizing named entities).

     Table 2: meaning representation exact-match accuracies on GEOQUERY (averaged over 10 random seeds, ± standard deviation). † = significant improvement over the seq2seq baseline (p < 0.01); ‡ = significant improvement over Jia and Liang (2016) (p < 0.001); a t-test is used for the logical-form experiments and a paired binomial test for SQL (a sketch of such a test is given after this item).

       Logical forms         Query            Question
       seq2seq               0.62 ± 0.07      0.76 ± 0.02
       + Jia et al. 16       0.61 ± 0.03      0.81 ± 0.01
       + GECA                0.65†‡ ± 0.06    0.78† ± 0.01
       + GECA + concat       0.63 ± 0.04      0.79† ± 0.01

       SQL queries           Query            Question
       Iyer et al. 17        0.40             0.66
       seq2seq               0.39 ± 0.05      0.68 ± 0.02
       + GECA                0.49† ± 0.02     0.68 ± 0.02

     On logical forms, GECA approaches the data augmentation approach of Jia and Liang (2016) on the standard split ("Question") and outperforms it on the split designed to test compositionality ("Query"). On SQL expressions, GECA leads to substantial improvements on the query split and achieves state-of-the-art results. Without specialized entity-linking machinery, the augmentation procedure successfully aligns natural-language named entities to their logical representations and abstracts over predicates (Figure 4), e.g. "rhode island" ↔ const ( B , stateid ( rhode island ) ); sometimes the synthesized examples are semantically questionable (e.g. "what is the population of the area of the largest state") but have plausible hierarchical structure. The slide also shows part of the paper's analysis of dataset statistics.
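
Several of the significance claims above rely on a paired binomial test over per-example correctness. As a hedged illustration (the paper does not spell out the exact procedure, so details such as one- vs two-sidedness are my assumption), a sign-test style version can be computed as follows.

# Hedged sketch of a paired binomial (sign) test between two systems, given
# per-example 0/1 correctness vectors; the paper's exact test may differ.
from math import comb

def paired_binomial_test(correct_a, correct_b):
    """Two-sided sign test: only examples where the systems disagree count,
    and under the null hypothesis each such example favours either system
    with probability 1/2."""
    wins_a = sum(a and not b for a, b in zip(correct_a, correct_b))
    wins_b = sum(b and not a for a, b in zip(correct_a, correct_b))
    n, k = wins_a + wins_b, max(wins_a, wins_b)
    if n == 0:
        return 1.0
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Toy usage with made-up correctness vectors (not the paper's data):
baseline  = [0] * 30
augmented = [1] * 20 + [0] * 10
print(paired_binomial_test(augmented, baseline))  # about 1.9e-06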
  8. 11 Wikipedia (low-resource language modeling)

     Table 4: perplexities on low-resource language modeling in English (ENG), Kinyarwanda (KIN), Lao, Na, Pashto (PUS) and Tok Pisin (TOK). Even with a Kneser–Ney smoothed 5-gram model (5-MKN) rather than a high-capacity neural model, applying GECA leads to small improvements in perplexity. † = significant improvement over the 5-gram MKN baseline (paired binomial test, p < 0.05).

                        ENG    KIN    LAO    NA     PUS    TOK
       # train tokens   2M     62K    10K    28K    2M     30K
       5-MKN            369    241    315    45.4   574    44.3
       + GECA           365†   239†   313†   45.4   570†   44.1

     The languages differ in morphological complexity: Kinyarwanda has a complex noun class system (Kimenyi, 1980) and Pashto rich derivational morphology (Tegey and Robson, 1996), while Lao and Tok Pisin are comparatively simple (Enfield, 2008; Verhaar, 1995). Training datasets range from 10K to 2M tokens. Like Adams et al., the authors found that a 5-gram modified Kneser–Ney model (Ney et al., 1994), implemented with KenLM (Heafield, 2011), outperformed several varieties of RNN language model, so GECA is applied to the n-gram model.

     Setup: fragments are extracted with no gaps and a maximum size of 2 tokens, with a 2-token window around the fragment as the environment; new usages are generated only for fragments that occur fewer than 20 times in the data. In Kinyarwanda, the base dataset contains 3358 sentences and GECA generates an additional 913, using 913 distinct templates and 199 distinct fragments. Rather than training directly on the augmented dataset, the best performance comes from training one language model on the original data and one on the augmented data and interpolating their probabilities, with the interpolation weight (one of 0.05, 0.1 and 0.5) chosen on a validation set (a sketch of this interpolation follows this item).

     Results: improvements are not universal and are more modest than in the preceding sections, but GECA decreases perplexity across multiple languages and never increases it, suggesting the substitution principle is useful even outside conditional tasks and neural models. In language modeling, GECA functions as a smoothing scheme, playing a role similar to the Kneser–Ney smoothing used in all LM experiments by moving mass toward n-grams that can appear in productive contexts; unlike Kneser–Ney, its notion of "context" can look forward as well as backward and capture longer-range interactions. In a manual inspection of 100 synthesized sentences, 79% were grammatical but only 51% were semantically acceptable. Figure 5 shows sentences synthesized for the English language modeling task; most are syntactically well-formed and some are semantically plausible, e.g. "various copies of portions of the code of hammurabi have been found on baked clay tablets, some possibly older than the celebrated basalt stele now in the night sky."

     The slide ends with the start of Section 7 (Discussion): GECA is a simple data augmentation scheme based on identifying local phrase substitutions that are licensed by common contexts, and extra training examples generated with GECA lead to substantial improvements on both diagnostic and natural datasets for semantic parsing.
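
The two-model interpolation described on this slide can be written out as a short sketch. The weight grid {0.05, 0.1, 0.5} comes from the slide; the function names, the convention that the weight applies to the augmented-data model, and the abstract log-probability callables (standing in for queries to the two KenLM 5-gram models) are my assumptions.

import math

def interpolated_logprob(logp_orig, logp_aug, lam):
    """log of (1 - lam) * p_orig + lam * p_aug, from natural-log inputs."""
    return math.log((1 - lam) * math.exp(logp_orig) + lam * math.exp(logp_aug))

def perplexity(token_logprobs):
    """Perplexity from per-token natural-log probabilities."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def choose_weight(valid_tokens, logp_orig_fn, logp_aug_fn, grid=(0.05, 0.1, 0.5)):
    """Pick the interpolation weight that minimises validation perplexity.
    `valid_tokens` is a list of (history, token) pairs; `logp_*_fn` return
    per-token natural-log probabilities from the two language models."""
    best = None
    for lam in grid:
        ppl = perplexity([
            interpolated_logprob(logp_orig_fn(h, w), logp_aug_fn(h, w), lam)
            for h, w in valid_tokens
        ])
        if best is None or ppl < best[1]:
            best = (lam, ppl)
    return best  # (lambda, validation perplexity)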