VAEs with Tree-structured Latent Variables

VAE: Variational Auto-Encoding
  A neural network model that builds a probability distribution over latent variables into the network
  → applied to supervised Seq2Seq learning [Miao+ 2016; Kociský+ 2016]

StructVAE
  Assumes a latent tree structure behind natural-language expressions
  → unsupervised learning with a VAE (objective sketched below)
  Reconstruction model pθ(x | z)
  Inference model qφ(z | x)
  → with qφ(·) ≜ pφ(·), the inference model is a semantic parser itself

[Screenshot of the paper's first page: "STRUCTVAE: Tree-structured Latent Variable Models for Semi-supervised Semantic Parsing", Pengcheng Yin, Chunting Zhou, Junxian He, Graham Neubig (Language Technologies Institute, Carnegie Mellon University), showing the abstract and Figure 1: Graphical Representation of STRUCTVAE — structured latent semantic space of MRs p(z), inference model qφ(z|x), reconstruction model pθ(x|z), example utterance "Sort my_list in descending order".]
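
For reference, a minimal sketch of how the reconstruction and inference models combine in the semi-supervised setup. The unlabeled-data term is the generic VAE evidence lower bound (ELBO); any paper-specific weighting of the prior term is omitted here.

  Unlabeled x:     log p(x) ≥ E_{qφ(z|x)}[ log pθ(x|z) ] − KL( qφ(z|x) ‖ p(z) )
  Labeled (x, z):  maximize log qφ(z|x) + log pθ(x|z)   (standard supervised training of parser and reconstruction model)

Because z is a discrete tree-structured MR, the expectation over qφ(z|x) has to be estimated from sampled trees (score-function-style gradients) rather than through the reparameterization trick, which does not apply to discrete structures.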