⚫ Training data: 1,198,149 sentence pairs; Test data: 1,812 sentence pairs
⚫ We used a vocabulary of 100K subword tokens based on BPE for both languages.
⚫ Tokenizer
  ⚫ Ja: KyTea, En: Moses Tokenizer
⚫ Dependency Parser
  ⚫ Ja: EDA, En: Stanford Dependencies
⚫ The dependency parsers are NOT used in decoding.
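⚫ A minimal sketch of how such a 100K BPE subword vocabulary could be built over the already-tokenized training text (the slide does not name the BPE tool; SentencePiece in BPE mode and the file names train.tok.ja / bpe_ja are assumptions for illustration):

```python
import sentencepiece as spm

# Assumption: train.tok.ja is the KyTea-tokenized Japanese training side;
# the English side would be processed analogously from the Moses-tokenized text.
spm.SentencePieceTrainer.train(
    input="train.tok.ja",      # hypothetical path to tokenized training data
    model_prefix="bpe_ja",     # hypothetical output prefix
    vocab_size=100000,         # 100K subword tokens, as on the slide
    model_type="bpe",          # BPE segmentation
)

# Apply the learned BPE model to segment a sentence into subword pieces.
sp = spm.SentencePieceProcessor(model_file="bpe_ja.model")
pieces = sp.encode("自然 言語 処理", out_type=str)
print(pieces)
```

The dependency parsers (EDA, Stanford Dependencies) would run on the word-level tokenized text before subword splitting, and, as noted above, are only used to prepare training data, not at decoding time.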