What is Deep Learning?
urakarin
May 02, 2017
Technology
(Japanese document)
History and introduction of neural networks. Slides for an internal company seminar.
Transcript
What is Deep Learning?
[email protected]
2017.02.08
What this talk covers and doesn't cover
• Covers: the mathematical workings of neural networks; how to choose initial values and evaluate models; a feel for parameter counts and computational cost; hot topics
• Doesn't cover: tools; formula derivations; machine learning other than neural networks; the future of AI, such as the singularity
(Image source: wedge.ismedia.jp)
Agenda
• What is deep learning?
• History: from neural networks to deep neural networks
• The first AI boom
• The second AI boom
• The third AI boom
• Application examples
• Summary
What is deep learning? Also called 深層学習 in Japanese, it is a general term for techniques that build artificial intelligence using neural networks (NNs), learning algorithms modeled on the workings of neurons in the brain. Deep learning is distinguished by networks with deep, large-scale structure.
GoogLeNet, 22 layers (ILSVRC 2014)
Relationship between the terms: artificial intelligence (AI) ⊃ machine learning ⊃ neural networks ⊃ deep learning
History of neural networks (timeline slide)
• Representative publications, roughly 1950-2020: Perceptron (Rosenblatt), SGD (Amari), Neocognitron (Fukushima), Boltzmann Machine (Hinton+), Back Propagation (Rumelhart), Conv. net (LeCun+), Sparse Coding (Olshausen & Field), Deep Learning (Hinton+)
• First AI boom (reasoning and search) ended in the first NN winter: the perceptron cannot handle linearly inseparable problems, so XOR could not be solved
• Second AI boom (knowledge representation, Expert Systems) ended in the second NN winter: slow training, overfitting, and the popularity of SVMs
• Third AI boom (machine learning and deep learning), driven by Big Data, GPUs, and Cloud Computing
• Representative talent acquisitions: Google acquired DNN Research (Hinton) and Deep Mind; Baidu established the Institute of Deep Learning (Andrew Ng); Facebook established its AI Research Lab (LeCun); Microsoft acquired Maluuba (Bengio)
University of Toronto, New York University, University of Montreal
From NN to DNN: from the Neural Network to the Deep Neural Network
The First AI Boom
The simple perceptron: a neuron acting as a region classifier
The simple perceptron and the logic gates NAND, AND, OR, XOR
The first NN winter: the simple perceptron cannot represent XOR
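The gate examples above can be sketched with single perceptrons. A perceptron fires when the weighted sum of its inputs exceeds a threshold; the weights below are illustrative choices (not from the slides) that realize NAND, AND, and OR, while no single perceptron can realize XOR.

```python
# A single perceptron: fires (outputs 1) when the weighted sum exceeds 0.
def perceptron(x1, x2, w1, w2, b):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Illustrative weight choices; any weights realizing each gate would do.
def AND(x1, x2):
    return perceptron(x1, x2, 0.5, 0.5, -0.7)

def OR(x1, x2):
    return perceptron(x1, x2, 0.5, 0.5, -0.2)

def NAND(x1, x2):
    return perceptron(x1, x2, -0.5, -0.5, 0.7)
```

No choice of (w1, w2, b) makes this function compute XOR, because the four XOR points cannot be separated by a single line.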
The Second AI Boom
The multilayer perceptron (MLP): an input layer, hidden layer(s), and an output layer, trained against a teacher signal through an error function. Four key ingredients: 1. multiple layers, 2. activation functions, 3. an error function, 4. backpropagation.
1. Multiple layers: XOR realized by stacking perceptrons, y = AND(s1, s2) with s1 = NAND(x1, x2) and s2 = OR(x1, x2):
x1 x2 | s1 s2 | y
0  0  | 1  0  | 0
1  0  | 1  1  | 1
0  1  | 1  1  | 1
1  1  | 0  1  | 0
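The two-layer composition above can be sketched directly; the individual perceptron weights are illustrative assumptions.

```python
def perceptron(x1, x2, w1, w2, b):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

def XOR(x1, x2):
    s1 = perceptron(x1, x2, -0.5, -0.5, 0.7)   # NAND
    s2 = perceptron(x1, x2, 0.5, 0.5, -0.2)    # OR
    return perceptron(s1, s2, 0.5, 0.5, -0.7)  # AND of the hidden outputs
```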
2. Activation functions: an activation function transforms the weighted sum of a node's inputs into its output signal. The perceptron uses a step function, but a step function has no usable derivative, so the network cannot learn by backpropagation; smooth functions such as the sigmoid or tanh are used instead.
3. Error functions (loss functions)
• Regression: the squared error (1/2) Σ_{n=1}^{N} ||y_n − t_n||²
• Binary classification: maximum-likelihood estimation of the posterior probability p of d = 0/1, maximizing Π_{n=1}^{N} p(d_n | x_n)
• Multi-class classification: encode the teacher signal as a one-hot vector, use the softmax function as the final activation, and minimize the cross-entropy loss
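The losses above can be sketched in NumPy; this is a toy sketch and the function names are my own.

```python
import numpy as np

def mse(y, t):
    # regression: squared error (1/2) * sum((y - t)^2)
    return 0.5 * np.sum((y - t) ** 2)

def softmax(a):
    a = a - np.max(a)            # shift for numerical stability
    e = np.exp(a)
    return e / np.sum(e)

def cross_entropy(y, t):
    # multi-class: one-hot teacher t against softmax output y
    return -np.sum(t * np.log(y + 1e-12))
```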
4. Backpropagation: the error between the output y and the teacher signal t is propagated backwards from the output layer through the hidden layers, updating each weight.
The chain rule: for example, the expression z = (x + y)² is composed of the two expressions z = t² and t = x + y. The chain rule is the property of derivatives of composite functions that ∂z/∂x = (∂z/∂t)(∂t/∂x), and backpropagation applies it repeatedly.
Backpropagation through computation-graph nodes: an addition node passes the upstream gradient ∂L/∂z unchanged to both inputs (∂L/∂z · 1); a multiplication node swaps its inputs, sending ∂L/∂z · y to x and ∂L/∂z · x to y. Concrete example: apple price 100 × quantity 2 × consumption tax 1.1 = 220; flowing a gradient of 1 backwards gives 2.2 for the price, 110 for the quantity, and 200 for the tax.
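The apple example can be replayed with a tiny multiplication node; this is a sketch, and the class name is my own.

```python
class MulNode:
    # forward: z = x * y; backward: the upstream gradient times the
    # OTHER input flows to each side (the "swap" rule).
    def forward(self, x, y):
        self.x, self.y = x, y
        return x * y

    def backward(self, dout):
        return dout * self.y, dout * self.x

price_node, tax_node = MulNode(), MulNode()
subtotal = price_node.forward(100, 2)     # price x quantity = 200
total = tax_node.forward(subtotal, 1.1)   # apply consumption tax -> 220

d_subtotal, d_tax = tax_node.backward(1.0)
d_price, d_quantity = price_node.backward(d_subtotal)
```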
Stochastic gradient descent (SGD)
• Mini-batch learning
• Update-rule variants: Momentum, AdaGrad, Adam, RMSProp
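As a sketch of one of the variants, SGD with Momentum on a toy quadratic; the quadratic and all constants are illustrative assumptions, not from the slides.

```python
import numpy as np

def sgd_momentum(grad_fn, w, lr=0.1, momentum=0.9, steps=200):
    # v accumulates a decaying average of past gradients ("velocity"),
    # which damps oscillation compared with plain SGD
    v = np.zeros_like(w)
    for _ in range(steps):
        v = momentum * v - lr * grad_fn(w)
        w = w + v
    return w

# f(w) = 0.5 * ||w||^2 has gradient w and its minimum at the origin
w_final = sgd_momentum(lambda w: w, np.array([5.0, -3.0]))
```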
The second NN winter
• Training required too much computation
• Networks easily fell into local minima and overfitting
• Vanishing gradients
The Third AI Boom
Deep Belief Networks vs. Auto Encoders: countermeasures against local minima and overfitting
Two pre-training lineages (diagram)
• Hopfield Network → Boltzmann Machine (a probabilistic model) → Restricted Boltzmann Machine (RBM, reduced computation) → stacked RBMs → Deep Belief Network (DBN): pre-training + fine tuning
• Auto Encoder (AE) → Denoising Auto Encoder (DAE, adds robustness) → Stacked Auto Encoder (SAE, multi-layered): pre-training + fine tuning
What is a Hopfield Network? A network that stores patterns (e.g. images) as memories and repeatedly updates its state so that the network's energy decreases toward a minimum; given an input close to a stored pattern, it recalls that memory. The recall process can be simulated with matrix computations: http://www.gaya.jp/spiking_neuron/matrix.htm
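Following the matrix-simulation idea linked above, a minimal Hopfield sketch: one pattern stored with the Hebbian rule and recalled from a corrupted cue. The pattern and sizes are arbitrary example values.

```python
import numpy as np

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)  # Hebbian storage
np.fill_diagonal(W, 0)                        # no self-connections

state = pattern.copy()
state[0] = -state[0]                          # corrupt one bit as the cue
for _ in range(5):                            # energy-lowering updates
    state = np.where(W @ state >= 0, 1, -1)
```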
What is a Boltzmann Machine? A Hopfield-style network with a probabilistic model introduced. Training minimizes the Kullback-Leibler divergence: the region where the probability distribution p given by the actual inputs and the reconstructed distribution q fail to overlap.
What is a Restricted Boltzmann Machine (RBM)? A Boltzmann machine restricted to a bipartite structure: connections exist only between the visible layer and the hidden layer, with none within a layer, which reduces the computation required.
What is a Deep Belief Network (DBN)? A multi-layer stack of RBMs, trained layer by layer with unsupervised pre-training followed by supervised fine tuning.
What is an Auto Encoder (AE)? A network trained, without labels, to reproduce its input at the output through a narrower hidden layer; the hidden layer learns a compressed representation that can be reused for pre-training.
What is a Denoising Auto Encoder (DAE)? An autoencoder trained to reconstruct the clean input from a noise-corrupted copy, which makes the learned representation more robust.
What is a Stacked Auto Encoder (SAE)? Autoencoders stacked into multiple layers and trained with layer-wise pre-training followed by fine tuning.
Vanishing gradients: with sigmoid or tanh activations the derivative is small, so when the network is deep the gradient fades away during backpropagation. ReLU (Rectified Linear Unit): a unit that outputs 0 when not firing and passes its input through when firing; signals propagate only through firing units, and firing units pass gradients along without vanishing.
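A toy sketch of why the gradient vanishes: multiply one layer-wise derivative per layer, as backpropagation does. The depth of 20 and the evaluation point x = 1 are arbitrary assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def chain_gradient(deriv, depth=20, x=1.0):
    # product of one derivative per layer, as in backpropagation
    grad = 1.0
    for _ in range(depth):
        grad *= deriv(x)
    return grad

# sigmoid'(x) = s(x)(1 - s(x)) <= 0.25, so the product collapses;
# ReLU'(x) = 1 on the firing side, so the product survives.
sig_grad = chain_gradient(lambda x: sigmoid(x) * (1 - sigmoid(x)))
relu_grad = chain_gradient(lambda x: 1.0 if x > 0 else 0.0)
```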
Batch Normalization
• Transforms the input data of each mini-batch to mean 0 and variance 1
• Inserted before or after the activation function to reduce skew in the data
• Effects: allows a larger learning rate (training progresses faster); reduces dependence on the initial weights; suppresses overfitting
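The normalization step itself is short; a sketch of the forward pass, where gamma and beta are the learnable scale and shift (left at their defaults here) and the batch values are made up.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # per-feature normalization over the mini-batch dimension
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

batch = np.array([[1.0, 200.0],
                  [2.0, 400.0],
                  [3.0, 600.0]])
out = batch_norm(batch)   # each column now has mean 0, variance ~1
```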
Other techniques
• Dropout (DropConnect): corresponds to a form of ensemble learning
• Regularization: weight decay (adding an L2-norm penalty to the error function), sparse regularization
• Data augmentation: noise, translation, rotation, color changes
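Dropout's forward pass can be sketched as follows. This is "inverted" dropout, which rescales during training so inference needs no change; the drop ratio of 0.5 is an arbitrary default.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(x, drop_ratio=0.5, train=True):
    # each unit is zeroed with probability drop_ratio during training;
    # at inference the input passes through unchanged
    if not train:
        return x
    mask = rng.random(x.shape) >= drop_ratio
    return x * mask / (1.0 - drop_ratio)

x = np.ones(1000)
y = dropout_forward(x)    # surviving units are rescaled to 2.0
```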
How to choose initial values
Initializing the weight matrices
• Initialize to 0? Then all weights stay identical and the nodes become duplicates of one another; random initial values are needed
• With sigmoid or tanh activations, "Xavier initialization" is appropriate: a Gaussian with standard deviation √(1/n), where n is the number of nodes in the previous layer
• With ReLU, "He initialization" is appropriate: a Gaussian with standard deviation √(2/n)
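Both initializations are one-liners over a Gaussian; a sketch with function names of my own.

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(n_in, n_out):
    # std sqrt(1/n_in): keeps activations well-scaled for sigmoid/tanh
    return rng.normal(0.0, np.sqrt(1.0 / n_in), (n_in, n_out))

def he_init(n_in, n_out):
    # std sqrt(2/n_in): compensates for ReLU zeroing half its inputs
    return rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_out))

W_xavier = xavier_init(100, 50)
W_he = he_init(100, 50)
```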
Hyperparameters
• The number of neurons in each layer, the batch size, the learning rate and its schedule, weight decay, the dropout ratio, and so on
• Besides the weight and bias parameters, a NN has hyperparameters that a human must set; choosing them takes much trial and error and strongly affects model performance
• Prepare dedicated validation data; never evaluate hyperparameters with the training or test data
• Sample randomly from a log-scale range, evaluate, narrow the range, and finally pick one value
• Dataset = training data (for learning) + test data (for evaluating the learned result) + validation data (for evaluating hyperparameters)
Evaluating predictive performance
Holdout validation and K-fold cross validation (diagram): in holdout validation the dataset is split once into training, validation, and test data; in K-fold cross validation the data is split into K folds, and each fold serves as the test set once while the remaining folds are used for training.
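The K-fold split can be sketched in a few lines; indices only, with shuffling and uneven fold sizes ignored for simplicity.

```python
def k_fold_splits(n_samples, k):
    # yield (train_indices, test_indices); each fold is the test set once
    indices = list(range(n_samples))
    fold = n_samples // k
    for i in range(k):
        test = indices[i * fold:(i + 1) * fold]
        train = indices[:i * fold] + indices[(i + 1) * fold:]
        yield train, test

splits = list(k_fold_splits(10, 5))
```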
ROC curves and AUC
• TP rate: the fraction of positives judged positive; FP rate: the fraction of negatives judged positive
• ROC: Receiver Operating Characteristic; AUC: Area Under the Curve (the area under the ROC curve)
• Confusion matrix: actual positive predicted positive = TP (true positive); actual positive predicted negative = FN (false negative, type II error); actual negative predicted positive = FP (false positive, type I error); actual negative predicted negative = TN (true negative)
Recall and precision
• Recall: the fraction of actual positives judged positive, TP / (TP + FN)
• Precision: of the data predicted positive, the fraction that is actually positive, TP / (TP + FP)
• F-measure: the harmonic mean of precision and recall (the harmonic mean is the reciprocal of the mean of the reciprocals; see http://www004.upp.so-net.ne.jp/s_honma/mean/harmony2.htm); its maximum roughly coincides with the break-even point
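From the confusion-matrix counts, the three measures follow directly; the counts below are made-up example values.

```python
def classification_metrics(tp, fp, fn, tn):
    recall = tp / (tp + fn)          # TP rate
    precision = tp / (tp + fp)
    # F-measure: harmonic mean of precision and recall
    f_measure = 2 * precision * recall / (precision + recall)
    return recall, precision, f_measure

recall, precision, f1 = classification_metrics(tp=8, fp=2, fn=2, tn=88)
```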
Application examples
• Image recognition (CNN)
• Natural language processing and speech recognition (RNN)
• Caption generation for images (CNN + RNN)
• Reinforcement learning (CNN + Q-learning)
• Deep generative models (CNN)
Image recognition
• Convolutional Neural Network (CNN): Convolution + Pooling
• For small images, a conventional fully connected NN is fine; convolution is what makes larger images tractable
Convolution filter examples: an averaging (blur) filter, a filter for edges running left-right, a filter for edges running up-down, and a filter for edges regardless of orientation.
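A minimal valid-convolution sketch with an edge filter of the kind described above; the image and filter values are made up.

```python
import numpy as np

def convolve2d(image, kernel):
    # valid convolution, stride 1, no padding: slide the filter over
    # the image and take the element-wise product-sum at each position
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((5, 6))
image[:, 3:] = 1.0                     # dark left half, bright right half
edge_filter = np.array([[-1.0, 1.0]])  # responds to left-to-right changes
response = convolve2d(image, edge_filter)
```

The response is nonzero only at the column where the intensity jumps, which is exactly what an edge filter should detect.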
Surpassing humans at image classification: ILSVRC is a large-scale image-recognition competition held since 2010. At ILSVRC 2012, Prof. Hinton's team won overwhelmingly with deep learning, and in 2015 ILSVRC results surpassed human recognition performance.
Computational cost: the difference between CPU and GPU performance lies in the number of simultaneous (single-precision) operations: a CPU (Intel Core i7) with AVX 256-bit manages 8, while an NVIDIA Pascal GP100 manages 114,688.
Natural language processing and speech recognition: Recurrent Neural Network (RNN)
Reinforcement learning: CNN + Q-learning + …
Prisma
Prisma: many papers apply deep learning to art, but the most foundational is Gatys et al. 2016, "Image Style Transfer Using Convolutional Neural Networks". The CNN used is VGG19 (pre-trained for image classification) with the fully connected layers removed.
Prisma: a content image and a style image. http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf
Prisma: the loss function is the content loss plus the style loss. In ordinary optimization the input is fixed and the weights are updated; here it is the reverse: the weights are fixed and the input image is updated.
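A toy sketch of that inverted optimization: the "network" response is f(x) = W @ x with W held fixed (standing in for a pre-trained feature extractor), and gradient descent updates the input x until its response matches the response of a target content image. W, the content vector, and all sizes are made-up toy values, not the actual VGG19 setup.

```python
import numpy as np

W = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])    # fixed "feature extractor"
content = np.array([1.0, 2.0, 3.0, 4.0])
target = W @ content                    # desired feature response

x = np.zeros(4)                         # start from a blank "image"
lr = 0.1
for _ in range(200):
    residual = W @ x - target
    x -= lr * (W.T @ residual)          # gradient w.r.t. the INPUT, not W

final_loss = np.sum((W @ x - target) ** 2)
```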
Prisma: for the initial value of the generated image, A: the content image, B: the style image, and C: four white-noise patterns were compared, with the result that the choice makes almost no difference.
FaceApp
FaceApp: VAE (Variational Autoencoder), CVAE (Conditional VAE), Facial VAE
Summary
• "Deep learning" covers a wide variety of techniques and uses: image recognition (CNN), natural language (RNN), deep generative models (VAE, GAN), reinforcement learning (DQN), …
• Approaches, including small twists on existing techniques, are still a blue ocean
• Roughly 1,500 related papers appeared in the two years 2014-2015
• Combinations like CNN + RNN hint that fusing data that could not be fused before, such as pictures + sound, words + pictures, or sensors + text, will create new value
References
• Books
• ゼロから作るDeep Learning (Deep Learning from Scratch: the theory and implementation of deep learning in Python) http://amzn.asia/2CTyY4U
• 機械学習のための確率と統計 (Probability and Statistics for Machine Learning, ML Professional Series) http://amzn.asia/5SyEZVV
• オンライン機械学習 (Online Machine Learning, ML Professional Series) http://amzn.asia/2kli98b
• イラストで学ぶディープラーニング (Deep Learning, an illustrated guide, KS Information Science Series) http://amzn.asia/8Kz11LV
• イラストで学ぶ機械学習 (Machine Learning, an illustrated guide, centered on discriminative model training by least squares, KS Information Science Series) http://amzn.asia/6Zlo0pt
• 深層学習 (Deep Learning, ML Professional Series) http://amzn.asia/hZqrQ2w
• Chainerによる実践深層学習 (Practical Deep Learning with Chainer) http://amzn.asia/5xDfvVJ
• 実装ディープラーニング (Implementing Deep Learning) http://amzn.asia/7YP7FPh
• これからの強化学習 (Reinforcement Learning from Here On) http://amzn.asia/gHUDp81
• ITエンジニアのための機械学習理論入門 (An Introduction to Machine Learning Theory for IT Engineers) http://amzn.asia/7SgiMwN
• 異常検知と変化検知 (Anomaly Detection and Change Detection, ML Professional Series) http://amzn.asia/6RC0jbt
• Pythonによるデータ分析入門 (Python for Data Analysis: data processing with NumPy and pandas) http://amzn.asia/4f2ATnL
• URLs / SlideShare / PDFs: (too many to list; omitted)