What is Deep Learning?
urakarin
May 02, 2017
What is Deep Learning? (Japanese document)
History of and an introduction to neural networks. Material for an internal company study seminar.
Transcript
What is Deep Learning?
[email protected]
2017.02.08
What this talk covers, and what it doesn't
• Covered:
  • The mathematical workings of neural networks
  • How to choose initial values; evaluation methods
  • A sense of scale for parameter counts and computation
  • Hot topics
• Not covered:
  • How to use specific tools
  • Details of the equations
  • Machine learning other than neural networks
  • The future of AI, such as the singularity
(Image source: wedge.ismedia.jp)
Agenda
• What is deep learning?
• History
  • From neural networks to deep neural networks
  • The first AI boom
  • The second AI boom
  • The third AI boom
• Applications
• Summary
What is deep learning?
• Also called 深層学習 in Japanese.
• A general term for techniques that build artificial intelligence using neural networks (NN), learning algorithms modeled on the workings of neurons.
• Characterized by deep, large-scale network structures.
GoogLeNet, 22 layers (ILSVRC 2014)
How the terms relate: artificial intelligence (AI) ⊃ machine learning ⊃ neural networks ⊃ deep learning.
History of neural networks (timeline, 1950–2020)
• Representative publications: Perceptron (Rosenblatt), SGD (Amari), Neocognitron (Fukushima), Boltzmann Machine (Hinton+), Back Propagation (Rumelhart), Conv. net (LeCun+), Sparse Coding (Olshausen & Field), Deep Learning (Hinton+).
• AI booms: the first (inference and search), the second (knowledge representation, expert systems), the third (machine learning and deep learning, enabled by big data, GPUs, and cloud computing).
• NN winters: the first after it was shown that a single perceptron cannot separate linearly inseparable data and cannot solve XOR; the second caused by vanishing gradients, overfitting, and the popularity of SVMs.
• Representative talent acquisitions: Google acquires DNN Research (Hinton), Google acquires DeepMind, Baidu founds the Institute of Deep Learning (Andrew Ng), Facebook founds its AI Research Lab (LeCun), Microsoft acquires Maluuba (Bengio).
University of Toronto, New York University, University of Montreal
From NN to DNN: Neural Network → Deep Neural Network
The first AI boom
The simple perceptron: a neuron as a (linear) region classifier.
The simple perceptron as logic gates: NAND, AND, OR — and XOR.
The first winter
• XOR cannot be represented by a single perceptron.
The second AI boom
The multilayer perceptron (MLP). Diagram: inputs x1–x4 are multiplied by weights w1–w4 (plus a bias w0), summed (Σ), and passed through an activation f to produce outputs y1–y3, which are compared with the teacher signal t by an error function. Four key ingredients:
1. Multiple layers
2. Activation functions
3. Error (loss) functions
4. Backpropagation
1. Multiple layers: XOR becomes realizable by stacking layers — s1 = NAND(x1, x2), s2 = OR(x1, x2), y = AND(s1, s2).

x1 x2 | s1 s2 | y
 0  0 |  1  0 | 0
 1  0 |  1  1 | 1
 0  1 |  1  1 | 1
 1  1 |  0  1 | 0

A minimal sketch of this construction appears below.
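A minimal Python sketch of the gate construction above; each gate is a single perceptron, and the specific weight values are illustrative assumptions rather than values from the slides.

```python
def perceptron(x1, x2, w1, w2, b):
    """A single perceptron: fires (returns 1) if w1*x1 + w2*x2 + b > 0."""
    return int(w1 * x1 + w2 * x2 + b > 0)

def NAND(x1, x2): return perceptron(x1, x2, -0.5, -0.5, 0.7)
def OR(x1, x2):   return perceptron(x1, x2,  0.5,  0.5, -0.2)
def AND(x1, x2):  return perceptron(x1, x2,  0.5,  0.5, -0.7)

def XOR(x1, x2):
    # Two layers: XOR = AND(NAND(x1, x2), OR(x1, x2))
    return AND(NAND(x1, x2), OR(x1, x2))

for x1, x2 in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(x1, x2, "->", XOR(x1, x2))   # 0, 1, 1, 0
```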
2. Activation function: the function that converts the weighted sum of the input signals (x1–x4 times w1–w4, plus the bias w0, summed by Σ) into the output signal. The perceptron uses a step function, which cannot be differentiated and therefore cannot be trained with backpropagation; the sigmoid function and the hyperbolic tangent (tanh) are used instead. See the sketch below.
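A short sketch comparing the activation functions named above; the function names and the sample points are my own choices for illustration.

```python
import numpy as np

def step(x):
    """Step function used by the original perceptron (not differentiable at 0)."""
    return (x > 0).astype(np.float64)

def sigmoid(x):
    """Smooth and differentiable; its derivative is sigmoid(x) * (1 - sigmoid(x))."""
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    """Hyperbolic tangent; like the sigmoid but with outputs in (-1, 1)."""
    return np.tanh(x)

x = np.linspace(-5, 5, 11)
print(step(x))
print(sigmoid(x))
print(tanh(x))
```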
3. Error (loss) function: measures the difference between the network output y and the teacher signal t.
• Regression: use the squared error, E = (1/2) Σ_n ||y_n − t_n||².
• Binary classification: maximum-likelihood estimation of the posterior probability p of d = 0/1, i.e. maximize Π_n p(d_n | x_n).
• Multi-class classification: encode the teacher signal as a one-hot vector, use a softmax function as the final activation, and use the cross-entropy loss.
A small sketch of these losses follows.
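A minimal numpy sketch of the two losses named above; the epsilon for numerical stability is my addition, not something stated on the slides.

```python
import numpy as np

def mean_squared_error(y, t):
    """Squared error for regression: 0.5 * sum((y - t)^2)."""
    return 0.5 * np.sum((y - t) ** 2)

def softmax(a):
    """Softmax with the usual max-subtraction trick for numerical stability."""
    a = a - np.max(a)
    return np.exp(a) / np.sum(np.exp(a))

def cross_entropy_error(y, t):
    """Cross-entropy against a one-hot teacher vector t."""
    eps = 1e-7  # avoid log(0)
    return -np.sum(t * np.log(y + eps))

t = np.array([0, 0, 1])                  # one-hot teacher signal
y = softmax(np.array([0.3, 0.2, 2.0]))   # network output turned into probabilities
print(mean_squared_error(y, t), cross_entropy_error(y, t))
```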
4. Backpropagation: the error between the output y and the teacher signal t is propagated backward through the network, from the output layer through the hidden layer to the weights w of each input x.
Backpropagation is built on the chain rule. For example, z = (x + y)² is composed of two expressions, z = t² and t = x + y. The chain rule is a property of derivatives of composite functions: ∂z/∂x = (∂z/∂t)(∂t/∂x).
Backpropagation through computation-graph nodes:
• Addition node (z = x + y): the upstream gradient ∂L/∂z is passed to both inputs unchanged (∂L/∂z · 1).
• Multiplication node (z = x · y): the inputs are swapped — the gradient toward x is ∂L/∂z · y, and toward y it is ∂L/∂z · x.
Concrete example: apple price 100 × count 2 = 200, × consumption tax 1.1 = 220. Propagating a gradient of 1 backward gives 1.1 for the subtotal, 200 for the tax, 2.2 for the apple price, and 110 for the count. The sketch below works this example.
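A small sketch of the apple / consumption-tax example as a computation graph of multiplication nodes; the layer class is a common textbook formulation, written here directly from the rules stated above.

```python
class MulLayer:
    """Multiplication node: forward computes x*y, backward swaps the inputs."""
    def forward(self, x, y):
        self.x, self.y = x, y
        return x * y

    def backward(self, dout):
        return dout * self.y, dout * self.x   # gradients w.r.t. x and y

apple_price, apple_num, tax = 100, 2, 1.1

mul_apple = MulLayer()
mul_tax = MulLayer()

# forward: 100 * 2 = 200, then 200 * 1.1 = 220
subtotal = mul_apple.forward(apple_price, apple_num)
total = mul_tax.forward(subtotal, tax)

# backward: start from a gradient of 1 at the output
d_subtotal, d_tax = mul_tax.backward(1.0)
d_price, d_num = mul_apple.backward(d_subtotal)

print(round(total, 2))                                          # 220.0
print(round(d_price, 2), round(d_num, 2), round(d_tax, 2))      # 2.2 110.0 200.0
```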
Stochastic gradient descent (SGD)
• Mini-batch learning
• Update rules / ways of adapting the learning rate:
  • Momentum
  • AdaGrad
  • Adam
  • RMSProp
A minimal update-rule sketch follows.
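A minimal sketch of the plain SGD and Momentum update rules mentioned above; the parameter-dictionary interface and the example values are assumptions for illustration.

```python
import numpy as np

class SGD:
    """Plain SGD: params[k] -= lr * grads[k]."""
    def __init__(self, lr=0.01):
        self.lr = lr
    def update(self, params, grads):
        for k in params:
            params[k] -= self.lr * grads[k]

class Momentum:
    """Keep a velocity v so past gradients accumulate: v = m*v - lr*g."""
    def __init__(self, lr=0.01, momentum=0.9):
        self.lr, self.momentum, self.v = lr, momentum, {}
    def update(self, params, grads):
        for k in params:
            self.v.setdefault(k, np.zeros_like(params[k]))
            self.v[k] = self.momentum * self.v[k] - self.lr * grads[k]
            params[k] += self.v[k]

params = {"W": np.array([1.0, -2.0])}
grads = {"W": np.array([0.5, -0.5])}
Momentum(lr=0.1).update(params, grads)
print(params["W"])   # [0.95, -1.95]
```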
The second winter
• The computation required was too heavy
• Prone to local minima and overfitting
• Vanishing gradients
The third AI boom
Deep Belief Network vs. Auto Encoder: two countermeasures against local minima and overfitting.
Overview figure: two routes to deep networks.
• Hopfield Network → Boltzmann Machine (a probabilistic model) → Restricted Boltzmann Machine (RBM, reducing the computation) → stacked RBMs → Deep Belief Network (DBN), trained by pre-training + fine tuning.
• Auto Encoder (自己符号化器, AE) → Denoising Auto Encoder (DAE, adding robustness) → Stacked Auto Encoder (SAE, multiple stages), also trained by pre-training + fine tuning, with Input → Hidden → Output layers.
(The neural-network history timeline shown earlier appears again here.)
What is a Hopfield Network? A network that stores patterns (e.g., images) as memories and repeatedly updates its state so that the network energy decreases toward a minimum; given input close to a stored memory (memory 1, memory 2, ...), it recalls that memory. Demo: simulating associative memory with matrix computations — http://www.gaya.jp/spiking_neuron/matrix.htm
What is a Boltzmann Machine? It introduces a probabilistic model. Training minimizes the Kullback–Leibler divergence between two distributions: the probability p given by the actual inputs and the reconstructed distribution q — i.e. the integrated region where the two curves diverge rather than overlap. (A small divergence sketch follows.)
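A tiny numpy sketch of the KL divergence between two discrete distributions p and q, as described above; the example distributions are made up for illustration.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) = sum_i p_i * log(p_i / q_i) for discrete distributions."""
    p = np.asarray(p, dtype=np.float64)
    q = np.asarray(q, dtype=np.float64)
    return np.sum(p * np.log((p + eps) / (q + eps)))

p = np.array([0.7, 0.2, 0.1])   # distribution from the actual inputs
q = np.array([0.5, 0.3, 0.2])   # reconstructed distribution
print(kl_divergence(p, q))       # > 0; equals 0 only when p == q
```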
What is a Restricted Boltzmann Machine (RBM)? A Boltzmann machine restricted so that connections exist only between a visible layer (v1, v2, v3) and a hidden layer (h1, h2), with no connections within a layer; this restriction reduces the computation.
What is a Deep Belief Network (DBN)? A stack of RBMs (visible/hidden layer pairs), trained by unsupervised pre-training of each RBM followed by supervised fine tuning.
(The Deep Belief Network vs. Auto Encoder overview figure is shown again.)
What is an Auto Encoder (AE)? A network (Input → Hidden → Output) trained to reproduce its input at the output through a narrower hidden layer; after training, the learned encoder (Input → Hidden) is the part that is actually used.
What is a Denoising Auto Encoder (DAE)? An autoencoder trained to reconstruct the clean input from an input corrupted with noise, which makes the learned representation more robust.
What is a Stacked Auto Encoder (SAE)? Autoencoders stacked in multiple stages — each layer's hidden representation becomes the input of the next autoencoder — trained by layer-wise pre-training followed by fine tuning.
(The Deep Belief Network vs. Auto Encoder overview figure is shown once more.)
Vanishing gradients: the derivatives of the sigmoid and tanh functions are small, so when the network is deep the gradient vanishes as it is propagated back through the weights. ReLU (Rectified Linear Unit): a unit is either not firing (output 0) or firing with a gradient of 1, so it can fire without the gradient vanishing. The sketch below compares the two.
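A small numpy sketch contrasting the gradients: multiplying many sigmoid derivatives (each at most 0.25) shrinks toward zero, while ReLU passes a gradient of 1 wherever the unit fires. The 20-layer depth and the pre-activation value are arbitrary illustrative choices.

```python
import numpy as np

def sigmoid_grad(x):
    """Derivative of the sigmoid: s(x) * (1 - s(x)), at most 0.25."""
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

def relu_grad(x):
    """Derivative of ReLU: 1 where the unit fires (x > 0), else 0."""
    return (x > 0).astype(np.float64)

depth = 20
x = 0.5  # a pre-activation value, assumed the same at every layer
print(sigmoid_grad(x) ** depth)  # ~3e-13: the gradient has effectively vanished
print(relu_grad(x) ** depth)     # 1.0: no shrinkage while the unit fires
```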
Batch Normalization
• Transforms the input data of each mini-batch to mean 0 and variance 1.
• Inserted before or after the activation function, it reduces the skew of the data distribution.
• Effects:
  • The learning rate can be made larger (training progresses faster)
  • Less dependence on the initial weights
  • Suppresses overfitting
(A forward-pass sketch follows.)
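A minimal sketch of the batch-norm forward pass described above, training-time statistics only; the learnable scale gamma and shift beta are standard parts of the technique, and the epsilon is for numerical stability.

```python
import numpy as np

def batch_norm_forward(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature of the mini-batch x (shape (N, D)) to mean 0 and
    variance 1, then scale by gamma and shift by beta."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(32, 4) * 10 + 5        # a skewed mini-batch
out = batch_norm_forward(x)
print(out.mean(axis=0), out.var(axis=0))   # ~0 and ~1 per feature
```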
Other countermeasures against overfitting
• DropOut (and DropConnect) — corresponds to a form of ensemble learning
• Regularization
  • Weight decay (adding an L2-norm penalty to the error function)
  • Sparse regularization
• Data augmentation (noise, translation, rotation, color changes)
(A dropout sketch follows.)
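A minimal sketch of (inverted) dropout at training time; the drop ratio and the inverted scaling are conventional choices of mine, not taken from the slides.

```python
import numpy as np

def dropout(x, drop_ratio=0.5, train=True):
    """Randomly zero out units during training and rescale the survivors so the
    expected activation matches test time (inverted dropout)."""
    if not train:
        return x
    mask = np.random.rand(*x.shape) > drop_ratio
    return x * mask / (1.0 - drop_ratio)

x = np.ones((2, 6))
print(dropout(x, drop_ratio=0.5))   # roughly half the units zeroed, the rest scaled to 2.0
```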
How to choose initial values
Initial values of the weight matrices
• Set them to 0? → all weights become identical and the neurons end up duplicating each other, so random initial values are needed.
• With sigmoid or tanh activations, the "Xavier initialization" is suitable: for a previous layer with n nodes, a Gaussian with standard deviation √(1/n).
• With ReLU, the "He initialization" is suitable: a Gaussian with standard deviation √(2/n).
(See the sketch below.)
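A short sketch of the two initializations; the layer sizes are chosen arbitrarily for illustration.

```python
import numpy as np

def xavier_init(n_in, n_out):
    """Gaussian with std sqrt(1/n_in): suited to sigmoid / tanh layers."""
    return np.random.randn(n_in, n_out) * np.sqrt(1.0 / n_in)

def he_init(n_in, n_out):
    """Gaussian with std sqrt(2/n_in): suited to ReLU layers."""
    return np.random.randn(n_in, n_out) * np.sqrt(2.0 / n_in)

W1 = xavier_init(784, 100)
W2 = he_init(100, 10)
print(W1.std(), W2.std())   # roughly sqrt(1/784) and sqrt(2/100)
```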
Hyperparameters
Besides the weight and bias parameters, a neural network has hyperparameters that a human must set:
• The number of neurons in each layer
• Batch size
• The learning rate and how it changes during training
• Weight decay
• DropOut ratio
• and so on
Choosing them involves a lot of trial and error and strongly affects the model's performance.
• Prepare a dedicated validation set.
• Do not use the training or test data to evaluate hyperparameters.
• Sample randomly from a log-scale range, evaluate, narrow the range, and finally pick one value.
Dataset = training data (for learning) + test data (for evaluating the learned result) + validation data (for evaluating hyperparameters).
(A random-search sketch follows.)
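A small sketch of log-scale random sampling for two hyperparameters; the ranges and the evaluate() stub are placeholders of mine, not values from the slides.

```python
import numpy as np

def evaluate(lr, weight_decay):
    """Placeholder: in practice, train briefly on the training set and score on
    the validation set. Here we just fake a score for illustration."""
    return -abs(np.log10(lr) + 3) - abs(np.log10(weight_decay) + 5)

trials = []
for _ in range(20):
    lr = 10 ** np.random.uniform(-6, -1)             # sample on a log scale
    weight_decay = 10 ** np.random.uniform(-8, -3)
    trials.append((evaluate(lr, weight_decay), lr, weight_decay))

best = max(trials)   # highest validation score
print("best score %.3f with lr=%.2e, weight_decay=%.2e" % best)
```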
Evaluating prediction performance
Performance evaluation (figure)
• Hold-out validation: split the data once into training data, validation data, and test data.
• K-fold cross validation: split the data into K folds and rotate which fold is used as the test data while the remaining folds are used for training.
(A splitting sketch follows.)
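A minimal sketch of producing K-fold train/test index splits by hand; scikit-learn's KFold does the same job, and this version exists only to make the rotation explicit.

```python
import numpy as np

def k_fold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs, rotating which fold is held out."""
    indices = np.arange(n_samples)
    folds = np.array_split(indices, k)
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, test_idx

for train_idx, test_idx in k_fold_indices(10, 5):
    print("train:", train_idx, "test:", test_idx)
```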
ROC curve and AUC
• TP rate: the fraction of positives that were judged positive.
• FP rate: the fraction of negatives that were judged positive.
• ROC: Receiver Operating Characteristic (受信者操作特性).
• AUC: Area Under the Curve (the area under the ROC curve).

Confusion matrix:
                  Predicted Positive                  Predicted Negative
Actual Positive   TP (true positive)                  FN (false negative, type II error)
Actual Negative   FP (false positive, type I error)   TN (true negative)
Recall (再現率): the fraction of actual positives judged positive = TP / (TP + FN).
Precision (適合率): of the data predicted positive, the fraction that is actually positive = TP / (TP + FP).
F-measure: the harmonic mean of precision and recall (the harmonic mean is the reciprocal of the mean of the reciprocals); its maximum roughly coincides with the break-even point.
http://www004.upp.so-net.ne.jp/s_honma/mean/harmony2.htm
(A sketch computing these follows.)
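A short sketch that computes the metrics above from predicted and true binary labels; the example labels are made up.

```python
import numpy as np

def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and the F-measure from binary labels."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
print(precision_recall_f1(y_true, y_pred))   # (0.75, 0.75, 0.75)
```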
Applications
• Image recognition (CNN)
• Natural language processing and speech recognition (RNN)
• Caption generation for images (CNN + RNN)
• Reinforcement learning (CNN + Q-learning)
• Deep generative models (CNN)
Image recognition
• Convolutional Neural Network (畳み込みニューラルネットワーク, CNN)
• Convolution + Pooling

Convolution: for small images, the fully connected networks covered so far are fine; convolution matters as images grow larger.
Convolution: example filters — averaging (blur), edges running left and right, edges running up and down, and edges regardless of orientation, applied to the image with the * convolution operation. A sketch follows.
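A minimal numpy sketch of 2D convolution over the 'valid' region (no padding) with a left-right edge filter and an averaging filter; the filter values are common textbook choices, not taken from the slides.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image and sum the element-wise products."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# An image that is dark on the left and bright on the right
image = np.hstack([np.zeros((5, 3)), np.ones((5, 3))])

edge_lr = np.array([[-1, 0, 1],     # responds to left-right edges
                    [-1, 0, 1],
                    [-1, 0, 1]])
blur = np.ones((3, 3)) / 9.0        # averaging filter

print(conv2d(image, edge_lr))       # strong response at the boundary column
print(conv2d(image, blur))
```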
Surpassing humans at image classification: ILSVRC is a large-scale image recognition competition held since 2010. At ILSVRC 2012, Prof. Hinton's team won by a wide margin using deep learning, and by 2015 the ILSVRC results had surpassed human recognition performance.
Computation
• The performance gap between CPU and GPU: number of simultaneous (single-precision floating-point) operations
  • CPU (Intel Core i7): AVX 256-bit -> 8
  • NVIDIA Pascal GP100: 114,688
Natural language processing and speech recognition
• Recurrent Neural Network (RNN)
Reinforcement learning
• CNN + Q-learning + ...
Prisma
Prisma: many art-oriented papers using deep learning have appeared, but the most fundamental one is Gatys et al. 2016, "Image Style Transfer Using Convolutional Neural Networks". The CNN used is VGG19 (pre-trained for image classification) with its fully connected layers removed.
Prisma: a content image and a style image. http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf
Prisma: the loss function is the content loss plus the style loss. In ordinary optimization the input is fixed and the weights are updated; here it is the reverse — the weights are fixed and the input image is updated.
Prisma: the initial value of the generated image — A: the content image, B: the style image, C: four white-noise patterns. The result barely changes whichever is used.
Prisma
FaceApp
FaceApp: VAE (Variational Autoencoder), CVAE (Conditional VAE), Facial VAE
Summary
• "Deep learning" is a single label for a wide variety of techniques and applications: image recognition (CNN), natural language (RNN), deep generative models (VAE, GAN), reinforcement learning (DQN), ...
• In terms of approaches — including modest combinations with other techniques — it is still a blue ocean.
• Roughly 1,500 related papers appeared in the two years 2014–2015.
• As with CNN + RNN, fusing kinds of data that could not be combined before (pictures + sound, words + pictures, sensors + text, ...) promises to create new value.
References
• Books
  • ゼロから作るDeep Learning ―Pythonで学ぶディープラーニングの理論と実装 http://amzn.asia/2CTyY4U
  • 機械学習のための確率と統計 (機械学習プロフェッショナルシリーズ) http://amzn.asia/5SyEZVV
  • オンライン機械学習 (機械学習プロフェッショナルシリーズ) http://amzn.asia/2kli98b
  • イラストで学ぶ ディープラーニング (KS情報科学専門書) http://amzn.asia/8Kz11LV
  • イラストで学ぶ 機械学習 ―最小二乗法による識別モデル学習を中心に (KS情報科学専門書) http://amzn.asia/6Zlo0pt
  • 深層学習 (機械学習プロフェッショナルシリーズ) http://amzn.asia/hZqrQ2w
  • Chainerによる実践深層学習 http://amzn.asia/5xDfvVJ
  • 実装ディープラーニング http://amzn.asia/7YP7FPh
  • これからの強化学習 http://amzn.asia/gHUDp81
  • ITエンジニアのための機械学習理論入門 http://amzn.asia/7SgiMwN
  • 異常検知と変化検知 (機械学習プロフェッショナルシリーズ) http://amzn.asia/6RC0jbt
  • Pythonによるデータ分析入門 ―NumPy、pandasを使ったデータ処理 http://amzn.asia/4f2ATnL
• URL / SlideShare / PDF
  • (too many to list; omitted)