Literature review: Confidence Modeling for Neural Semantic Parsing
Yumeto Inaoka
October 24, 2018
Presented at the literature-review meeting on 2018/10/24.
Transcript
Confidence Modeling for Neural Semantic Parsing
Literature review — Nagaoka University of Technology, Natural Language Processing Laboratory — Yumeto Inaoka
Paper: Confidence Modeling for Neural Semantic Parsing. Li Dong†, Chris Quirk‡ and Mirella Lapata†. †School of Informatics, University of Edinburgh; ‡Microsoft Research, Redmond. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 743–753, 2018.
Abstract
• Confidence modeling for neural semantic parsing (seq2seq)
• Identifies which parts of the input are responsible for the uncertainty
• Outperforms methods that rely on the posterior probability or on attention
Introduction
• Neural semantic parsing produces strong results, but it behaves as a black box whose outputs are hard to interpret
• Estimating the model's confidence in its predictions could enable meaningful feedback to users
• The posterior probability p(y|x) is commonly used for confidence scoring → effective for linear models, but it works poorly for neural models
Neural Semantic Parsing
• In: natural language; Out: logical form
• Seq2seq with LSTM
• Attention mechanism
• Maximize the likelihood
• Beam search
Confidence Estimation
• Predict a confidence score s(q, a) ∈ (0, 1) from the input q and the predicted meaning representation a
• Judging confidence requires estimating "what the model does not know"
• Metrics are built from model uncertainty, data uncertainty, and input uncertainty, and a regression model maps them to a confidence score
Model Uncertainty
• Uncertainty due to the model's parameters and structure lowers confidence ← e.g. noise in the training data, stochastic learning algorithms
• Metrics built from dropout perturbation, Gaussian noise, and the posterior probability estimate this uncertainty
Dropout Perturbation
• Apply dropout at test time (at positions i, ii, iii, iv in the figure)
• One metric at the sentence level and one at the token level
• Perturb the parameters, collect the resulting predictions, and compute their variance
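The dropout-perturbation idea above can be sketched in a few lines. This is a toy stand-in, not the paper's implementation: `predict_token_probs` is an illustrative dummy decoder, and the sentence-level aggregation is one plausible reading of the slide (the actual formulas were in a figure that did not survive extraction). The rate of 0.1 and 30 runs follow the experimental settings reported later in the deck.

```python
import random
import statistics

def predict_token_probs(weights, mask):
    """Toy stand-in for a decoder: returns a probability per output token
    (one weight row per token) under a given dropout mask. Illustrative only."""
    return [min(0.99, max(0.01, sum(w * m for w, m in zip(ws, mask)) / len(ws)))
            for ws in weights]

def dropout_perturbation(weights, rate=0.1, runs=30, seed=0):
    """Monte-Carlo dropout at test time: re-run the perturbed model `runs`
    times and measure how much the output probabilities vary."""
    rng = random.Random(seed)
    samples = []
    for _ in range(runs):
        mask = [0.0 if rng.random() < rate else 1.0 for _ in weights[0]]
        samples.append(predict_token_probs(weights, mask))
    # Token-level metric: variance of each token's probability across runs.
    token_var = [statistics.pvariance([run[t] for run in samples])
                 for t in range(len(weights))]
    # Sentence-level metric: variance of the whole-sequence score.
    seq_var = statistics.pvariance([sum(run) / len(run) for run in samples])
    return token_var, seq_var
```

High variance under perturbation means the prediction is sensitive to the model's parameters, i.e. the model is uncertain about it.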
Gaussian Noise
• Add Gaussian noise to the vectors and compute the variance in the same way as for dropout ← dropout follows a Bernoulli distribution, whereas this noise follows a Gaussian
• Two ways of injecting the noise (v is the original vector, g is the Gaussian noise)
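A minimal sketch of the noise injection. The slide names two injection forms but their exact equations were in a lost figure, so the additive and multiplicative variants below are assumptions; the standard deviation of 0.05 follows the experimental settings later in the deck.

```python
import random

def perturb_vector(v, sigma=0.05, mode="additive", rng=None):
    """Inject Gaussian noise g ~ N(0, sigma^2) into a vector v.
    Two schemes are assumed here (v is the original vector, g the noise):
      additive:        v' = v + g
      multiplicative:  v' = v + v * g  (i.e. v * (1 + g))"""
    rng = rng or random.Random(0)
    g = [rng.gauss(0.0, sigma) for _ in v]
    if mode == "additive":
        return [vi + gi for vi, gi in zip(v, g)]
    return [vi * (1.0 + gi) for vi, gi in zip(v, g)]
```

As with dropout, the perturbed model is run repeatedly and the variance of its outputs serves as the uncertainty metric.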
Posterior Probability
• Use the posterior probability p(a | q) as the sentence-level metric
• Two token-level metrics:
  – the probability of the least confident token
  – the per-token perplexity
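The three probability-based metrics are straightforward to state in code. A minimal sketch, assuming the prediction is given as the list of per-token probabilities along the output sequence:

```python
import math

def sentence_posterior(token_probs):
    """Sentence-level metric: posterior p(a | q), the product of the
    per-token probabilities along the predicted sequence."""
    p = 1.0
    for tp in token_probs:
        p *= tp
    return p

def min_token_prob(token_probs):
    """Token-level metric 1: probability of the least confident token."""
    return min(token_probs)

def perplexity(token_probs):
    """Token-level metric 2: per-token perplexity of the prediction."""
    n = len(token_probs)
    return math.exp(-sum(math.log(tp) for tp in token_probs) / n)
```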
Data Uncertainty
• The coverage of the training data affects uncertainty
• Train a language model on the training data and use the LM probability of the input as a metric
• Use the number of unknown tokens in the input as a metric
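Both data-uncertainty metrics can be sketched with a few lines. The experiments use KenLM as the language model; the add-one-smoothed unigram model below is only a tiny stand-in for illustration:

```python
import math

def unknown_token_count(tokens, train_vocab):
    """Metric: number of input tokens never seen in the training data."""
    return sum(1 for t in tokens if t not in train_vocab)

def unigram_logprob(tokens, counts, total, alpha=1.0):
    """Add-one-smoothed unigram LM as a stand-in for KenLM: a low
    log-probability means the input looks unlike the training data."""
    v = len(counts)
    return sum(math.log((counts.get(t, 0) + alpha) / (total + alpha * v))
               for t in tokens)
```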
Input Uncertainty
• Even with a perfect model, ambiguous input causes uncertainty (e.g. 9 o'clock → flight_time(9am) or flight_time(9pm))
• Use the variance of the top candidates' probabilities
• Use the entropy ← approximated by sampling candidates a′
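A minimal sketch of the two input-uncertainty metrics, operating on the probabilities of a candidate set (the experiments use the 10-best beam candidates):

```python
import math

def candidate_variance(probs):
    """Variance of the top-K candidates' probabilities: when the input is
    ambiguous, probability mass spreads across candidates instead of
    concentrating on one."""
    mean = sum(probs) / len(probs)
    return sum((p - mean) ** 2 for p in probs) / len(probs)

def candidate_entropy(probs):
    """Entropy over candidate parses, approximated from a sampled or
    beam-searched candidate set after renormalising its probabilities."""
    z = sum(probs)
    return -sum((p / z) * math.log(p / z) for p in probs if p > 0)
```

A near-uniform candidate distribution (high entropy, low variance of a concentrated winner) signals an ambiguous input.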
Confidence Scoring
• Confidence is scored using all of the metrics above
• A gradient boosting model is trained on the metrics; its output is wrapped in a logistic function so that it falls in the 0–1 range
• For gradient boosting, this explainer is easy to follow ("Gradient Boosting と XGBoost": https://zaburo-ch.github.io/post/xgboost/)
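The scoring step can be sketched with a miniature gradient-boosting regressor. This is a from-scratch stand-in (squared loss, depth-1 stumps) rather than the paper's model, but it shows the structure: fit trees to residuals over the uncertainty metrics, then wrap the raw score in a logistic so s(q, a) ∈ (0, 1).

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit_stump(xs, residuals):
    """Fit a depth-1 regression tree: the best single-feature threshold
    split under squared error."""
    best = None
    for f in range(len(xs[0])):
        for thr in sorted({x[f] for x in xs}):
            left = [r for x, r in zip(xs, residuals) if x[f] <= thr]
            right = [r for x, r in zip(xs, residuals) if x[f] > thr]
            if not left or not right:
                continue
            lv, rv = sum(left) / len(left), sum(right) / len(right)
            err = sum((r - (lv if x[f] <= thr else rv)) ** 2
                      for x, r in zip(xs, residuals))
            if best is None or err < best[0]:
                best = (err, f, thr, lv, rv)
    return best[1], best[2], best[3], best[4]

def boost(xs, ys, rounds=20, lr=0.3):
    """Gradient boosting on squared loss: each round fits a stump to the
    current residuals. Leaf values are pre-scaled by the learning rate."""
    preds = [0.0] * len(ys)
    stumps = []
    for _ in range(rounds):
        resid = [y - p for y, p in zip(ys, preds)]
        f, thr, lv, rv = fit_stump(xs, resid)
        stumps.append((f, thr, lr * lv, lr * rv))
        preds = [p + (lr * lv if x[f] <= thr else lr * rv)
                 for x, p in zip(xs, preds)]
    return stumps

def confidence(stumps, x):
    """Wrap the raw boosted score in a logistic so s(q, a) lies in (0, 1)."""
    raw = sum(lv if x[f] <= thr else rv for f, thr, lv, rv in stumps)
    return sigmoid(raw)
```

Here `xs` would hold the uncertainty metrics per example and `ys` a correctness signal; in practice one would use a library implementation such as XGBoost.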
Uncertainty Interpretation
• Identify which parts of the input contribute to the uncertainty → those inputs can then be handled as special cases
• Backpropagate from the prediction back to the input tokens → reveals each token's contribution to the uncertainty
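The paper backpropagates uncertainty scores to the input tokens; as a framework-free illustration, the same idea can be approximated with finite differences: perturb each token's embedding slightly and see how much the score moves. All names here are illustrative, and `score_fn` stands in for an arbitrary differentiable scorer.

```python
def token_attribution(score_fn, embeddings, eps=1e-4):
    """Finite-difference proxy for gradient-based attribution: estimate
    how sensitive the uncertainty score is to each input token's
    embedding, and aggregate per-dimension sensitivities per token."""
    base = score_fn(embeddings)
    attributions = []
    for i, vec in enumerate(embeddings):
        grads = []
        for d in range(len(vec)):
            bumped = [list(v) for v in embeddings]
            bumped[i][d] += eps
            grads.append((score_fn(bumped) - base) / eps)
        attributions.append(sum(abs(g) for g in grads))
    return attributions
```

Tokens with large attribution scores are the ones driving the uncertainty, and can be flagged for special handling.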
Experiments (Datasets)
• IFTTT dataset (train-dev-test: 77,495 - 5,171 - 4,294)
• DJANGO dataset (train-dev-test: 16,000 - 1,000 - 1,805)
Experiments (Settings)
• Dropout perturbation: dropout rate 0.1, 30 runs to compute the variance
• Gaussian noise: standard deviation set to 0.05
• Probability of input: KenLM is used as the language model
• Input uncertainty: variance computed over the 10-best candidates
Experiments (Results)
• Model uncertainty is the most effective
• Data uncertainty has little effect → because the test data is in-domain
Experiments (Results) [results table]
Experiments (Results)
• The model-uncertainty metrics are the most important
• For IFTTT, #UNK and the variance (Var) are especially important
Experiments (Results) [results table]
Experiments (Results)
• Evaluated by the overlap between the tokens identified by noise injection and the tokens identified by backpropagation
• Higher than the attention-based comparison
• At K = 4, 80% of the tokens match
Experiments (Results) [results figure]
Conclusions
• Presented a confidence-estimation model for neural semantic parsing
• Presented a method for interpreting uncertainty at the input-token level
• Confirmed its effectiveness on the IFTTT and DJANGO datasets
• The proposed model is applicable to the many tasks that adopt seq2seq
• It can also be used for active learning in neural semantic parsing