What is Deep Learning?
urakarin
May 02, 2017
Technology
(Japanese document)
History and introduction of neural networks. Material for an in-house seminar.
Transcript
What is Deep Learning?
[email protected]
2017.02.08
What this talk covers, and what it doesn't
• Covers
  • The mathematical workings of neural networks
  • How to choose initial values, and how to evaluate models
  • A feel for the volume of parameters and computation involved
  • Hot topics
• Doesn't cover
  • How to use specific tools
  • The details of the equations
  • Machine learning other than neural networks
  • The future of AI, such as the singularity
Image source: wedge.ismedia.jp
Agenda
• What is deep learning?
• History
  • From neural networks to deep neural networks
  • The first AI boom
  • The second AI boom
  • The third AI boom
• Applications
• Summary
What is deep learning?
• Also called 深層学習 ("deep learning") in Japanese
• A general term for techniques that build artificial intelligence with Neural Networks (NNs), learning algorithms modeled on the workings of the brain (nerve cells)
• Characterized by deep, large-scale network structures
GoogLeNet, 22 layers (ILSVRC 2014)
How the terms relate: artificial intelligence (AI) ⊃ machine learning ⊃ neural networks ⊃ deep learning
History of neural networks (timeline, 1950-2020)
• Representative results: Perceptron (Rosenblatt), SGD (Amari), Neocognitron (Fukushima), Back Propagation (Rumelhart), Boltzmann Machine (Hinton+), Conv. net (LeCun+), Sparse Coding (Olshausen & Field), Deep Learning (Hinton+), plus Big Data, GPUs, and cloud computing
• AI booms: the first AI boom (reasoning and search), the second AI boom (knowledge representation, expert systems), the third AI boom (machine learning, deep learning)
• NN winters: the first NN winter (linearly inseparable problems — XOR cannot be solved), the second NN winter (overfitting, the popularity of SVMs, and related issues)
• Representative talent acquisitions: Google acquires DNN Research (Hinton) and Deep Mind, Baidu founds the Institute of Deep Learning (Andrew Ng), Facebook founds its AI Research Lab (LeCun), Microsoft acquires Maluuba (Bengio)
University of Toronto, New York University, University of Montreal
From NN to DNN: Neural Network → Deep Neural Network
The first AI boom
The simple perceptron = a neuron acting as a region (decision-boundary) discriminator
The simple perceptron: NAND, AND, OR, XOR
The first winter
• XOR cannot be represented
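To make that limitation concrete, here is a minimal Python/NumPy sketch (not part of the original deck; the weights and thresholds are illustrative choices): a single perceptron can realize AND, OR, and NAND, but no single choice of weights and bias separates XOR with one linear boundary.

```python
# A single perceptron: step function over a weighted sum of two inputs.
import numpy as np

def perceptron(x1, x2, w, b):
    return int(np.dot(np.array([x1, x2]), w) + b > 0)

AND  = lambda x1, x2: perceptron(x1, x2, np.array([0.5, 0.5]), -0.7)
OR   = lambda x1, x2: perceptron(x1, x2, np.array([0.5, 0.5]), -0.2)
NAND = lambda x1, x2: perceptron(x1, x2, np.array([-0.5, -0.5]), 0.7)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "AND:", AND(x1, x2), "OR:", OR(x1, x2), "NAND:", NAND(x1, x2))
```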
The second AI boom
The multi-layer perceptron (MLP)
(Diagram: input layer x1..x4, hidden layer, output layer y1..y3, weights w, teacher signal t, error function.)
1. Multiple layers
2. Activation functions
3. Error functions
4. Backpropagation
1. Multiple layers: realizing XOR by stacking layers
XOR is built from NAND, OR, and AND: s1 = NAND(x1, x2), s2 = OR(x1, x2), y = AND(s1, s2).

x1  x2 | s1  s2 | y
 0   0 |  1   0 | 0
 1   0 |  1   1 | 1
 0   1 |  1   1 | 1
 1   1 |  0   1 | 0
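A minimal, self-contained Python sketch (not from the deck; the gate weights are illustrative) of the same idea: stacking two layers of simple perceptrons makes XOR representable.

```python
# One perceptron-style gate: step function over a weighted sum.
def gate(x1, x2, w1, w2, b):
    return int(w1 * x1 + w2 * x2 + b > 0)

def XOR(x1, x2):
    s1 = gate(x1, x2, -0.5, -0.5, 0.7)   # NAND (first layer)
    s2 = gate(x1, x2, 0.5, 0.5, -0.2)    # OR   (first layer)
    return gate(s1, s2, 0.5, 0.5, -0.7)  # AND  (second layer)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "->", XOR(x1, x2))     # prints 0, 1, 1, 0
```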
2. Activation functions
An activation function converts the weighted sum of a node's input signals into its output signal.
The step function used by the perceptron cannot be differentiated, so the network cannot be trained with backpropagation; smooth functions such as the sigmoid and the hyperbolic tangent are used instead.
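A rough NumPy sketch (not from the deck) contrasting the perceptron's step function with the smooth, differentiable sigmoid:

```python
# Step function (perceptron) vs sigmoid (smooth, differentiable).
import numpy as np

def step(x):
    return (x > 0).astype(np.float64)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(step(x))      # [0. 0. 0. 1. 1.]
print(sigmoid(x))   # smooth values between 0 and 1
```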
3. Error (loss) functions
• Regression: use the sum-of-squares error between output y and teacher signal t, E = (1/2) Σ_{n=1}^{N} ||y_n - t_n||^2
• Binary classification: perform maximum-likelihood estimation of the posterior probability p of d = 0/1, i.e. maximize Π_{n=1}^{N} p(d_n | x_n)
• Multi-class classification: represent the teacher signal as a one-hot vector, use the softmax function as the final activation, and minimize the cross-entropy error
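A minimal NumPy sketch (not from the deck; the logits and labels are made up) of the multi-class case: softmax output followed by the cross-entropy error against a one-hot teacher signal.

```python
# Softmax output + cross-entropy error against a one-hot teacher signal.
import numpy as np

def softmax(a):
    a = a - np.max(a)               # subtract max for numerical stability
    exp_a = np.exp(a)
    return exp_a / np.sum(exp_a)

def cross_entropy(y, t):
    eps = 1e-12                     # avoid log(0)
    return -np.sum(t * np.log(y + eps))

scores = np.array([2.0, 1.0, 0.1])  # raw network outputs
t = np.array([1, 0, 0])             # one-hot teacher signal
y = softmax(scores)
print(y, cross_entropy(y, t))
```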
4. Backpropagation
(The same MLP diagram: input, hidden, and output layers with weights w, output y, teacher signal t, and the error function.)
4. Backpropagation: the chain rule
For example, the expression z = (x + y)^2 is composed of the two expressions z = t^2 and t = x + y.
The chain rule is a property of the derivative of such composite functions:
∂z/∂x = (∂z/∂t) · (∂t/∂x)
4. Backpropagation on a computational graph
• Addition node: the upstream gradient ∂L/∂z is passed to both inputs unchanged (∂L/∂z · 1).
• Multiplication node: the upstream gradient is multiplied by the other input (x receives ∂L/∂z · y, y receives ∂L/∂z · x).
Concrete example: an apple costs 100, you buy 2, and the consumption tax is 1.1.
Forward: 100 × 2 = 200, then 200 × 1.1 = 220. Backward, starting from 1: the gradient is 2.2 with respect to the apple price, 110 with respect to the count, and 200 with respect to the tax rate.
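A minimal Python sketch (not from the deck) of the multiplication node used in the apple example: the forward pass stores its inputs, and the backward pass swaps them.

```python
# Multiplication node: forward stores the inputs, backward swaps them.
class MulLayer:
    def forward(self, x, y):
        self.x, self.y = x, y
        return x * y

    def backward(self, dout):
        return dout * self.y, dout * self.x   # swapped inputs

apple_price, apple_num, tax = 100, 2, 1.1
layer1, layer2 = MulLayer(), MulLayer()

subtotal = layer1.forward(apple_price, apple_num)   # 200
total = layer2.forward(subtotal, tax)               # 220.0

dsubtotal, dtax = layer2.backward(1.0)              # 1.1, 200
dprice, dnum = layer1.backward(dsubtotal)           # 2.2, 110
print(total, dprice, dnum, dtax)
```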
Stochastic gradient descent
• Mini-batch learning
• How the update step (learning rate) is adapted:
  • Momentum
  • AdaGrad
  • Adam
  • RMSProp
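A minimal NumPy sketch (not from the deck; `params` and `grads` are assumed to be dicts of arrays) of two of these update rules, plain SGD and Momentum:

```python
# Plain SGD and Momentum updates over dicts of parameter arrays.
import numpy as np

def sgd(params, grads, lr=0.01):
    for k in params:
        params[k] -= lr * grads[k]

class Momentum:
    def __init__(self, lr=0.01, momentum=0.9):
        self.lr, self.momentum, self.v = lr, momentum, {}

    def update(self, params, grads):
        for k in params:
            v = self.v.get(k, np.zeros_like(params[k]))
            v = self.momentum * v - self.lr * grads[k]   # keep a velocity term
            self.v[k] = v
            params[k] += v
```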
The second winter
• Too much computation
• Prone to local minima and overfitting
• Vanishing gradients
The third AI boom
Deep Belief Network vs Auto Encoder: countermeasures against local minima and overfitting
Overview of the two lineages:
• Deep Belief Network lineage: Hopfield Network → Boltzmann Machine (probabilistic model) → Restricted Boltzmann Machine (RBM, reduced computation) → stacked RBMs → Deep Belief Network (DBN), trained with pre-training + fine tuning.
• Auto Encoder lineage: Auto Encoder (AE) → Denoising Auto Encoder (DAE, added robustness) → Stacked Auto Encoder (SAE, multiple stages), trained with pre-training + fine tuning.
(The "History of neural networks" timeline slide is shown again here.)
What is a Hopfield Network?
• The network repeatedly updates its state so that its energy is minimized.
• A network that has memorized images (memory 1, memory 2, ...) recalls the stored pattern when it is given nearby data.
• Try simulating this memory with matrix computations: http://www.gaya.jp/spiking_neuron/matrix.htm
What is a Boltzmann Machine?
• Introduces a probabilistic model on top of the Hopfield Network.
• Kullback-Leibler divergence: for two distributions, minimize the region where they differ (the divergence) — the integral of the difference between p, the distribution given by the actual inputs, and q, the distribution reconstructed by the model.
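A minimal NumPy sketch (not from the deck; p and q are made-up discrete distributions) of the KL divergence between a data distribution and a model's reconstruction:

```python
# KL divergence between two discrete distributions p (data) and q (model).
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p = np.asarray(p, dtype=np.float64)
    q = np.asarray(q, dtype=np.float64)
    return np.sum(p * np.log((p + eps) / (q + eps)))

p = np.array([0.7, 0.2, 0.1])   # empirical distribution from the inputs
q = np.array([0.6, 0.3, 0.1])   # distribution reconstructed by the model
print(kl_divergence(p, q))      # small positive value; 0 only when p == q
```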
What is a Restricted Boltzmann Machine (RBM)?
• A Boltzmann Machine restricted to a visible layer (v1, v2, v3) and a hidden layer (h1, h2), with no connections within a layer, which reduces the computation required.
What is a Deep Belief Network (DBN)?
• RBMs stacked layer by layer (visible/hidden pairs).
• Trained with unsupervised pre-training followed by supervised fine tuning.
(The Deep Belief Network / Auto Encoder overview diagram is shown again here.)
What is an Auto Encoder (AE)?
• A network (Input → Hidden → Output) trained to reproduce its own input.
• The learned hidden representation is then adopted as a building block, i.e. pre-training + fine tuning.
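A rough NumPy sketch (not from the deck; the data, layer sizes, learning rate, and iteration count are all illustrative) of a one-hidden-layer autoencoder trained by gradient descent to reproduce its input:

```python
# A tiny autoencoder: encode 8 features into 3, then reconstruct them.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 8))                   # toy data: 100 samples, 8 features
W1 = rng.normal(0, 0.1, (8, 3))            # encoder weights (8 -> 3)
W2 = rng.normal(0, 0.1, (3, 8))            # decoder weights (3 -> 8)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

for _ in range(2000):
    H = sigmoid(X @ W1)                    # hidden code
    Y = H @ W2                             # reconstruction
    diff = Y - X                           # reconstruction error
    dW2 = H.T @ diff / len(X)
    dH = diff @ W2.T * H * (1 - H)
    dW1 = X.T @ dH / len(X)
    W1 -= 0.5 * dW1
    W2 -= 0.5 * dW2

print(np.mean((sigmoid(X @ W1) @ W2 - X) ** 2))   # reconstruction MSE
```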
What is a Denoising Auto Encoder (DAE)?
• Noise is added to the input, and the network is trained to reconstruct the clean input, which makes the learned representation more robust.
What is a Stacked Auto Encoder (SAE)?
• Auto encoders stacked in multiple stages (multi-layering + robustness), trained with pre-training + fine tuning.
(The Deep Belief Network / Auto Encoder overview diagram is shown again here.)
Vanishing gradients
• With sigmoid or hyperbolic tangent activations, the derivative is small, so when the network is deep the gradient vanishes...
ReLU (Rectified Linear Unit)
• With sigmoid/tanh, the gradient vanishes when the network is deep; a ReLU unit is either "not firing" (output 0) or "firing" (output equals its input), and firing units pass the gradient through without vanishing.
• The gradient propagates only along the paths of units that fired.
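A minimal NumPy sketch (not from the deck; the input value and depth are arbitrary) of why the gradient shrinks through many sigmoid layers but survives through ReLU layers:

```python
# Product of local derivatives across 20 layers: sigmoid vs ReLU.
import numpy as np

def sigmoid_grad(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1 - s)            # at most 0.25

def relu_grad(x):
    return float(x > 0)           # 0 or 1

x = 0.5
print(np.prod([sigmoid_grad(x)] * 20))  # vanishingly small (~1e-13)
print(np.prod([relu_grad(x)] * 20))     # 1.0: the gradient passes through
```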
Batch Normalization
• Transforms the input data of each mini-batch so that it has mean 0 and variance 1.
• Inserted before (or after) the activation function, it reduces the bias in the data distribution.
• Effects:
  • The learning rate can be made larger (training proceeds faster)
  • Less dependence on the initial weight values
  • Suppresses overfitting
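A minimal NumPy sketch (not from the deck; it omits the learned scale/shift parameters) of the Batch Normalization forward pass for one mini-batch:

```python
# Normalize each feature of a mini-batch to mean 0 and variance 1.
import numpy as np

def batch_norm(x, eps=1e-7):
    mu = x.mean(axis=0)                   # per-feature mean over the mini-batch
    var = x.var(axis=0)                   # per-feature variance
    return (x - mu) / np.sqrt(var + eps)

x = np.random.default_rng(0).normal(5.0, 3.0, size=(32, 4))  # a skewed batch
xn = batch_norm(x)
print(xn.mean(axis=0).round(6), xn.var(axis=0).round(6))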
Other countermeasures against overfitting
• DropOut (Drop Connect)
  • Corresponds to a form of ensemble learning
• Regularization
  • Weight decay (add an L2-norm term to the error function)
  • Sparse regularization
• Data augmentation (noise, translation, rotation, color changes)
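A minimal NumPy sketch (not from the deck) of inverted DropOut applied to a layer's activations during training:

```python
# Inverted DropOut: drop units at train time, rescale to keep the expectation.
import numpy as np

def dropout(x, ratio=0.5, train=True):
    if not train:
        return x                        # at test time, use the full network
    mask = np.random.rand(*x.shape) > ratio  # drop each unit with prob. `ratio`
    return x * mask / (1.0 - ratio)     # rescale so the expected sum is unchanged

h = np.ones((2, 6))
print(dropout(h, ratio=0.5))
```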
How to choose initial values
Initial values of the weight matrices
• Set them to 0? Then all weights become identical and the nodes end up duplicated.
• Random initial values are needed.
• With sigmoid or tanh activations, the "Xavier initialization" is suitable: with n nodes in the previous layer, a Gaussian with standard deviation sqrt(1/n).
• With ReLU, the "He initialization" is suitable: with n nodes in the previous layer, a Gaussian with standard deviation sqrt(2/n).
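A minimal NumPy sketch (not from the deck; the layer sizes are arbitrary) of Xavier and He initialization for a layer with n_in inputs and n_out outputs:

```python
# Xavier (sigmoid/tanh) and He (ReLU) weight initialization.
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(n_in, n_out):
    return rng.normal(0.0, np.sqrt(1.0 / n_in), size=(n_in, n_out))

def he_init(n_in, n_out):
    return rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))

print(xavier_init(100, 50).std(), he_init(100, 50).std())
```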
Hyperparameters
• Number of neurons in each layer
• Batch size
• Learning rate and how it changes during training
• Weight decay
• DropOut ratio
• etc.
Besides the weight and bias parameters, a NN has hyperparameters that a human must set. Deciding them involves a lot of trial and error and strongly affects the model's performance.
• Prepare a dedicated validation set.
• The training and test data must not be used to evaluate hyperparameters.
• Sample randomly from a log-scale range, evaluate, narrow the range, and finally pick one value (a rough sketch follows below).
Dataset = training data (for learning) + test data (for evaluating the learned result) + validation data (for evaluating hyperparameters).
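A minimal Python sketch (not from the deck; the ranges and the two hyperparameters chosen are illustrative) of random sampling on a log scale for hyperparameter search:

```python
# Random search: sample learning rate and weight decay on a log scale.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    lr = 10 ** rng.uniform(-6, -2)            # learning rate in [1e-6, 1e-2]
    weight_decay = 10 ** rng.uniform(-8, -4)  # weight decay in [1e-8, 1e-4]
    # train a model with (lr, weight_decay) and score it on the validation set here
    print(f"lr={lr:.2e}, weight_decay={weight_decay:.2e}")
```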
Evaluating prediction performance
Performance evaluation
• Hold-out validation: split the dataset once into training, validation, and test data.
• K-fold cross validation: split the data into K folds and rotate which fold serves as the test data.
(Diagram: rows of training / test / validation splits with the test fold moving each round.)
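A minimal NumPy sketch (not from the deck; the sample count and K are arbitrary) of generating K-fold cross-validation index splits:

```python
# K-fold cross validation: rotate which fold is held out as the test set.
import numpy as np

def k_fold_indices(n_samples, k):
    idx = np.arange(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

for train, test in k_fold_indices(10, 5):
    print("train:", train, "test:", test)
```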
ROC curve and AUC
• TP rate: the fraction of positives judged positive.
• FP rate: the fraction of negatives judged positive.
• ROC: Receiver Operating Characteristic curve; AUC: Area Under the Curve (the area under the ROC curve).
Confusion matrix:
                        Predicted Positive    Predicted Negative
True condition Positive TP                    FN (type II error)
True condition Negative FP (type I error)     TN
Recall, Precision, and the F-measure
• Recall: the fraction of actual positives judged positive = TP / (TP + FN)
• Precision: of the data predicted positive, the fraction that is actually positive = TP / (TP + FP)
• F-measure: the harmonic mean of precision and recall (the harmonic mean is the reciprocal of the mean of the reciprocals); its maximum roughly coincides with the break-even point. http://www004.upp.so-net.ne.jp/s_honma/mean/harmony2.htm
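A minimal Python sketch (not from the deck; the counts are made up) computing recall, precision, and the F-measure from confusion-matrix counts:

```python
# Recall, precision, and F-measure from TP / FP / FN / TN counts.
def metrics(tp, fp, fn, tn):
    recall = tp / (tp + fn)                       # positives correctly found
    precision = tp / (tp + fp)                    # predicted positives that are correct
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return recall, precision, f1

print(metrics(tp=40, fp=10, fn=20, tn=30))
```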
Applications
• Image recognition (CNN)
• Natural language processing and speech recognition (RNN)
• Caption generation for images (CNN + RNN)
• Reinforcement learning (CNN + Q-learning)
• Deep generative models (CNN)
Image recognition
• Convolutional Neural Network
• Convolution + Pooling
Convolution: small images could be handled by the conventional fully-connected NN.
Convolution: example filters — averaging, edges running left-right, edges running up-down, edges regardless of orientation.
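A minimal NumPy sketch (not from the deck; the image and kernels are toy examples) of a 2D convolution with an averaging filter and a vertical-edge filter:

```python
# Naive 2D convolution (valid padding) with two example filters.
import numpy as np

def conv2d(img, kernel):
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.zeros((6, 6))
img[:, 3:] = 1.0                          # a vertical edge in the middle

average = np.ones((3, 3)) / 9.0           # blur filter
v_edge = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]])           # responds to left-right intensity changes

print(conv2d(img, average))
print(conv2d(img, v_edge))
```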
Surpassing humans at image classification
• ILSVRC: a large-scale image-recognition competition held since 2010.
• In the 2012 ILSVRC, Hinton's team won overwhelmingly with Deep Learning.
• In 2015, ILSVRC results exceeded human recognition performance.
Amount of computation
• The performance difference between CPUs and GPUs
• Number of simultaneous operations (single precision):
  • CPU (Intel Core i7): AVX 256 bit -> 8
  • nVIDIA Pascal GP100: 114,688
Natural language processing and speech recognition
• Recurrent Neural Network (RNN)
Reinforcement learning
• CNN + Q-learning + …
Prisma
Prisma
• Many papers on deep-learning-based art have appeared, but the most fundamental one is Gatys et al. 2016, "Image Style Transfer Using Convolutional Neural Networks".
• The CNN used is VGG19 (pre-trained for image classification) with the fully-connected layers removed.
Prisma: content image and style image
http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf
Prisma
• The loss function is the content loss plus the style loss.
• During optimization the input is normally fixed and the weights are updated; here it is the reverse: the weights are fixed and the input image is updated.
Prisma
• Initial value of the generated image: A: the content image, B: the style image, C: four white-noise patterns. The result barely changes whichever is used.
Prisma
FaceApp
FaceApp: VAE (Variational Autoencoder), CVAE (Conditional VAE), Facial VAE
Summary
• "Deep learning" covers a wide variety of techniques and uses: image recognition (CNN), natural language (RNN), deep generative models (VAE, GAN), reinforcement learning (DQN), …
• Combining it with other techniques is still a blue ocean as far as approaches are concerned.
• Roughly 1,500 related papers appeared over the two years 2014-2015.
• Combinations such as CNN + RNN let previously separate kinds of data (pictures + sound, words + pictures, sensors + text, ...) be fused, and that fusion promises to create new value.
References
• Books
  • Deep Learning from Scratch: theory and implementation of deep learning in Python — http://amzn.asia/2CTyY4U
  • Probability and Statistics for Machine Learning (Machine Learning Professional Series) — http://amzn.asia/5SyEZVV
  • Online Machine Learning (Machine Learning Professional Series) — http://amzn.asia/2kli98b
  • An Illustrated Guide to Deep Learning (KS Information Science Series) — http://amzn.asia/8Kz11LV
  • An Illustrated Guide to Machine Learning: classification models via least squares (KS Information Science Series) — http://amzn.asia/6Zlo0pt
  • Deep Learning (Machine Learning Professional Series) — http://amzn.asia/hZqrQ2w
  • Practical Deep Learning with Chainer — http://amzn.asia/5xDfvVJ
  • Implementing Deep Learning — http://amzn.asia/7YP7FPh
  • Reinforcement Learning Going Forward — http://amzn.asia/gHUDp81
  • An Introduction to Machine Learning Theory for IT Engineers — http://amzn.asia/7SgiMwN
  • Anomaly Detection and Change Detection (Machine Learning Professional Series) — http://amzn.asia/6RC0jbt
  • Python for Data Analysis: data processing with NumPy and pandas — http://amzn.asia/4f2ATnL
• URLs / SlideShare / PDFs
  • (Too many to list; omitted.)