Deep Learning 5章後半 (Deep Learning, Chapter 5, second half)
Takahiro Kawashima
May 29, 2018
Seminar reading-group slides: Sections 5.5-5.11 of the Goodfellow book.
Transcript
Chapter 5: Machine Learning Basics. Takahiro Kawashima, May 29, 2018. The University of Electro-Communications, B4.
Contents: 1. Maximum likelihood estimation, 2. Bayesian statistics, 3. Supervised learning, 4. Unsupervised learning, 5. Stochastic gradient descent (SGD), 6. Motivation for deep learning.
Maximum Likelihood Estimation
Maximum likelihood estimation: worked through on the whiteboard.
Bayesian Statistics
Bayesian statistics: worked through on the whiteboard.
Supervised Learning
Supervised learning / Probabilistic supervised learning. Review: the general linear model is y = θ^T x + ε with ε ~ N(ε | 0, σ^2), so p(y | x; θ) = N(y; θ^T x, σ^2) (5.80). The Gaussian is supported on (−∞, ∞), so it cannot be used for binary {0, 1} classification.
Supervised learning / Probabilistic supervised learning. For the reason above we want a function whose range is (0, 1): the sigmoid function f(x) = 1 / (1 + e^{−x}).
Supervised learning / Probabilistic supervised learning. Logistic regression: p(y = 1 | x; θ) = 1 / (1 + e^{−θ^T x}) = 1 / (1 + e^{−(θ_0 + θ_1 x_1 + θ_2 x_2 + ···)}), which can be used for binary {0, 1} classification.
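As a concrete illustration (not from the slides): a minimal NumPy sketch of how these class-1 probabilities could be computed, with made-up parameter values and inputs.

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid f(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(theta, X):
    """p(y = 1 | x; theta) for each row of X; theta[0] is the bias theta_0."""
    X_aug = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend x_0 = 1 for theta_0
    return sigmoid(X_aug @ theta)

# Toy usage with hand-picked theta = (theta_0, theta_1, theta_2) and two 2-D inputs.
theta = np.array([-1.0, 2.0, 0.5])
X = np.array([[0.2, 1.0],
              [3.0, -0.5]])
print(predict_proba(theta, X))  # values in (0, 1), interpretable as p(y = 1 | x)
```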
Supervised learning / Support vector machines (SVM). Consider binary classification that is linearly separable in feature space: there are many ways to draw a separating line (hyperplane).
Supervised learning / Support vector machines (SVM). (Figure: the separating hyperplane between two classes, the two supporting hyperplanes, the support vectors, and the margin to be maximized.) The separating hyperplane is learned so as to maximize the margin of the supporting hyperplanes.
Supervised learning / Support vector machines (SVM). The equation of a plane is ax + by + c = 0; generalizing, the separating hyperplane can be written using the set of support vectors x* as w_0 + w^T x* = 0. It is this w that is learned. The two supporting hyperplanes are obtained by shifting the separating hyperplane by ±k: w_0 + w^T x* = k and w_0 + w^T x* = −k, i.e. |w_0 + w^T x*| = k.
Supervised learning / Support vector machines (SVM). Multiplying the hyperplane equation by a constant describes the same hyperplane, so the learned solution is not unique. We therefore impose a constraint to make it unique: |w_0 + w^T x*| = 1. Normalizing by the weight vector gives |w_0 + w^T x*| / ‖w‖ = 1 / ‖w‖, and training maximizes this quantity (the margin); see the reformulation below.
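For reference, the same margin maximization can be stated in the standard constrained form (this reformulation, with class labels y_i ∈ {−1, +1}, is the usual textbook statement and is not spelled out on the slides):

```latex
\max_{w,\,w_0}\; \frac{1}{\lVert w \rVert}
\quad\Longleftrightarrow\quad
\min_{w,\,w_0}\; \tfrac{1}{2}\lVert w \rVert^{2}
\quad\text{s.t.}\quad y_i\,(w_0 + w^{\top} x_i) \ge 1,\qquad i = 1, \dots, n.
```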
Supervised learning / Support vector machines (SVM). So far we have assumed linear separability; we would like to handle linearly non-separable problems as well. Solution: apply a nonlinear transformation to the features and project them into a different feature space.
Supervised learning / Support vector machines (SVM). Example: the law of universal gravitation. Features: masses m_1, m_2 and distance r, with f(m_1, m_2, r) = G m_1 m_2 / r^2. We want this to be linear in each feature, so we take logarithms: log f(m_1, m_2, r) = log G + log m_1 + log m_2 − log r^2. It is now linear (a small numerical check follows).
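A quick sanity check of this log-linearization (the toy data and NumPy code are my additions, not part of the slides): least squares on the log-transformed features should recover the exponents 1, 1 and −2.

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2, r = rng.uniform(1, 10, (3, 500))   # random masses and distances
G = 6.674e-11
f = G * m1 * m2 / r**2                     # gravity formula, nonlinear in the features

# Linear regression on the log-features: log f = log G + a*log m1 + b*log m2 + c*log r
A = np.column_stack([np.ones_like(m1), np.log(m1), np.log(m2), np.log(r)])
coef, *_ = np.linalg.lstsq(A, np.log(f), rcond=None)
print(coef)  # approximately [log G, 1.0, 1.0, -2.0]
```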
Supervised learning / Support vector machines (SVM). In general we project into a space of higher dimension than the original features, so the computation blows up when there is a lot of training data. Using the "magic" of the kernel trick, however, the projection into a high-dimensional (even infinite-dimensional) space can be evaluated at small computational cost.
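A hedged illustration of the kernel trick (scikit-learn and the toy data set are my additions, not from the slides): an RBF-kernel SVM never forms the high-dimensional projection explicitly; the kernel supplies the needed inner products directly.

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.1, random_state=0)  # not linearly separable
clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # kernel trick: no explicit projection
clf.fit(X, y)
print(clf.score(X, y))  # near-perfect training accuracy on this toy problem
```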
Unsupervised Learning
Unsupervised learning / Principal component analysis: already covered earlier, so skipped here.
Unsupervised learning / Clustering. Clustering groups similar data points together.
Unsupervised learning / The k-means algorithm: 1. As initial cluster centers, pick k centroids at random from the data (k is given). 2. Assign each sample to its nearest centroid. 3. Move each centroid to the mean of the data assigned to it. 4. Repeat steps 2 and 3. (A sketch in code follows.)
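A minimal NumPy sketch of these four steps (the function and variable names are mine; the slides only give the steps above):

```python
import numpy as np

def kmeans(X, k, n_iter=100, rng=None):
    """Plain k-means following the four steps above (k is given)."""
    rng = np.random.default_rng(rng)
    # 1. pick k centroids at random from the data as initial cluster centers
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # 2. assign each sample to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # 3. move each centroid to the mean of the samples assigned to it
        new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        # 4. repeat 2-3 until the centroids stop moving
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```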
Unsupervised learning / k-means. See the demo: http://tech.nitoyon.com/ja/blog/2013/11/07/k-means/
Unsupervised learning / Supplement: k-means++. Plain k-means is very sensitive to its initialization; it can be improved by scattering the initial centroids far apart (k-means++). See the demo: https://wasyro.github.io/k-meansppVisualizer/ (an initialization sketch follows).
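A sketch of the usual k-means++ seeding rule (each new centroid is sampled with probability proportional to its squared distance from the nearest centroid already chosen); the code itself is my addition, not from the slides:

```python
import numpy as np

def kmeanspp_init(X, k, rng=None):
    """k-means++ seeding: spread the initial centroids out."""
    rng = np.random.default_rng(rng)
    centroids = [X[rng.integers(len(X))]]          # first centroid chosen uniformly
    for _ in range(k - 1):
        # squared distance from every point to its nearest already-chosen centroid
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centroids], axis=0)
        probs = d2 / d2.sum()
        centroids.append(X[rng.choice(len(X), p=probs)])
    return np.array(centroids)
```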
Stochastic Gradient Descent (SGD)
Stochastic gradient descent (SGD). The cost function can often be decomposed into a sum of per-example loss functions. For linear regression, using the negative log-likelihood L(x^(i), y^(i), θ) = −log p(y | x; θ), we have J(θ) = E_{x,y∼p̂_data}[L(x, y, θ)] = (1/m) ∑_{i=1}^{m} L(x^(i), y^(i), θ) (5.96). Gradient descent is then applied to this cost function with respect to the parameters θ.
Stochastic gradient descent (SGD). ∇_θ J(θ) = ∇_θ [(1/m) ∑_{i=1}^{m} L(x^(i), y^(i), θ)] = (1/m) ∑_{i=1}^{m} ∇_θ L(x^(i), y^(i), θ) (5.97). This computation costs O(m), which becomes quite painful as the data set grows: hence stochastic gradient descent (SGD).
Stochastic gradient descent (SGD). Regard the gradient as an expectation, and assume it can be approximated from a small subset of the samples (a minibatch). Draw a minibatch B = {x^(1), ..., x^(m′)} uniformly at random from the training set; m′ is typically around 100-300 and does not need to grow with m. The gradient estimate is g = (1/m′) ∇_θ ∑_{i=1}^{m′} L(x^(i), y^(i), θ) (5.98), and the parameter update is θ ← θ − εg. (A minimal sketch follows.)
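A minimal sketch of this update loop, assuming a user-supplied grad_loss function that returns the minibatch-averaged gradient (the function and its signature are my assumptions, not from the slides):

```python
import numpy as np

def sgd(grad_loss, theta, X, y, lr=0.01, batch_size=200, n_steps=1000, rng=None):
    """Minibatch SGD as in Eq. (5.98): g = (1/m') * sum of per-example gradients,
    then theta <- theta - lr * g."""
    rng = np.random.default_rng(rng)
    m = len(X)
    for _ in range(n_steps):
        idx = rng.choice(m, size=batch_size, replace=False)  # uniform minibatch B
        g = grad_loss(theta, X[idx], y[idx])                 # gradient estimate g
        theta = theta - lr * g                               # theta <- theta - eps * g
        # note: the step size eps (lr) is typically decayed over time in practice
    return theta
```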
Motivation for Deep Learning
Motivation for deep learning / The curse of dimensionality. As the dimension of the feature space grows, the number of configurations the features can take grows exponentially. (Figure: conceptual illustration for the case where each feature can take 10 distinct values.)
Motivation for deep learning / The curse of dimensionality. As an example, consider the k-nearest-neighbour (k-NN) learning algorithm. k-NN: for a test input, find the k training examples closest to it in feature space, and assign the test point the class chosen by majority vote among those training examples' classes. (Figure: with k = 3, the input ● is assigned the label ■.) A sketch follows.
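A minimal NumPy sketch of the k-NN decision rule described above (the names and structure are mine, not from the slides):

```python
import numpy as np

def knn_predict(x_test, X_train, y_train, k=3):
    """Classify one test point by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x_test, axis=1)   # distances in feature space
    nearest = np.argsort(dists)[:k]                    # indices of the k closest samples
    votes = y_train[nearest]
    classes, counts = np.unique(votes, return_counts=True)
    return classes[counts.argmax()]                    # majority class
```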
Motivation for deep learning / The curse of dimensionality. If the data is spread thinly over the feature space, k-NN is unlikely to work well; in the same way, many classical machine learning methods can no longer cope.
References I
[1] 須山敦志, ベイズ推論による機械学習入門 (Introduction to Machine Learning with Bayesian Inference). 講談社 (Kodansha), 2017.
[2] 小野田崇, サポートベクターマシン (Support Vector Machines). オーム社 (Ohmsha), 2007.
[3] Sebastian Raschka (translated by 株式会社クイープ), 達人データサイエンティストによる理論と実践 Python機械学習プログラミング (Python Machine Learning). インプレス (Impress), 2016.