Deep Learning
Abhinav Tushar
September 10, 2015
Research

Introductory talk on deep learning
Transcript
DEEP LEARNING

Models: AE / SAE, RBM / DBN, CNN, RNN / LSTM, Memnet / NTM
Agenda: What? Why? How? Next?
What? An AI technique for learning multiple levels of abstraction directly from raw information.
Primitive rule-based AI (tailored systems): Input → Hand-crafted program → Output

Classical machine learning (learning from custom features): Input → Hand-crafted features → Learning system → Output

Deep learning based AI (learn everything): Input → Learned features (lower level) → Learned features (higher level) → Learning system → Output
https://www.youtube.com/watch?v=Q70ulPJW3Gk (link to video)
"With the capacity to represent the world in signs and symbols comes the capacity to change it." (Elizabeth Kolbert, The Sixth Extinction)
Why the buzz?
Google Trends: Deep Learning
A crude timeline of neural networks:
1950: Perceptron
1980: Backprop & applications
1990-2000: NN winter
2010: Stacking RBMs, the deep learning fuss
HUGE DATA: Large Synoptic Survey Telescope (2022), 30 TB/night
HUGE CAPABILITIES: GPGPU (~20x speedup), powerful clusters
HUGE SUCCESS: speech and text understanding, robotics / computer vision, business / big data, Artificial General Intelligence (AGI)
How is it done?
Shallow network:
h = f(x; W0)
y = f'(h; W1)
minimize L(y, t)
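The shallow network on the slide can be sketched in NumPy. The activation f is assumed to be the sigmoid and L the squared error; the shapes are purely illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def shallow_forward(x, W0, W1):
    """One hidden layer: h = f(x; W0), y = f'(h; W1)."""
    h = sigmoid(W0 @ x)   # hidden representation
    y = sigmoid(W1 @ h)   # output
    return h, y

def loss(y, t):
    """Squared-error loss L(y, t) to minimize."""
    return 0.5 * np.sum((y - t) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=3)        # input
t = np.array([1.0])           # target
W0 = rng.normal(size=(4, 3))  # input -> hidden weights
W1 = rng.normal(size=(1, 4))  # hidden -> output weights

h, y = shallow_forward(x, W0, W1)
```

Training would then adjust W0 and W1 by gradient descent on the loss (backpropagation).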
Deep Network
Deep network: more abstract features and stellar performance, but also vanishing gradients and overfitting.
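The vanishing-gradient problem mentioned on the slide can be seen numerically: backpropagating through n sigmoid layers multiplies n derivative factors, each at most 0.25, so the signal shrinks geometrically with depth. A minimal illustration (layer pre-activations assumed fixed at z = 0, where the sigmoid derivative is largest):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_scale(depth, z=0.0):
    """Product of sigmoid derivatives through `depth` layers.
    sigmoid'(z) = s(z) * (1 - s(z)) <= 0.25, so even in the best
    case the product decays geometrically with depth."""
    s = sigmoid(z)
    d = s * (1.0 - s)
    return d ** depth

scales = {depth: gradient_scale(depth) for depth in (1, 5, 10, 20)}
```

This is one reason naive deep networks were hard to train before pretraining, better activations, and better initialization.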
Autoencoder: unsupervised feature learning
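The autoencoder's encode/decode round trip can be sketched as follows. Tied weights (decoder uses W transposed) and sigmoid activations are assumptions for illustration; training would minimize the reconstruction error.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def autoencoder_forward(x, W, b, b_prime):
    """Tied-weight autoencoder: encode x to a code h, decode h
    back to a reconstruction x_hat."""
    h = sigmoid(W @ x + b)              # hidden code (the learned feature)
    x_hat = sigmoid(W.T @ h + b_prime)  # reconstruction of the input
    return h, x_hat

def reconstruction_error(x, x_hat):
    return np.mean((x - x_hat) ** 2)

rng = np.random.default_rng(1)
x = rng.uniform(size=6)
W = rng.normal(scale=0.1, size=(3, 6))  # 6 -> 3 bottleneck
b = np.zeros(3)
b_prime = np.zeros(6)

h, x_hat = autoencoder_forward(x, W, b, b_prime)
```

Because no labels are involved, the hidden code h serves as an unsupervised feature representation of x.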
Stacked autoencoder. Y. Bengio et al.; Greedy Layer-Wise Training of Deep Networks
Stacked autoencoder:
1. Unsupervised, layer-by-layer pretraining
2. Supervised fine-tuning
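Step 1 above can be sketched as a loop: each autoencoder is trained on the codes produced by the previous one, and the collected encoder weights then initialize a deep network for supervised fine-tuning. The `train_autoencoder` stub below is a placeholder assumption; a real implementation would minimize reconstruction error instead of drawing random weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, rng):
    """Stub: fit an autoencoder on X and return its encoder weights.
    Here we only draw small random weights to show the data flow;
    real training would minimize reconstruction error on X."""
    return rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))

def greedy_pretrain(X, layer_sizes, rng):
    """Unsupervised, layer-by-layer pretraining: layer k's autoencoder
    is trained on the codes produced by layer k-1."""
    weights = []
    codes = X
    for n_hidden in layer_sizes:
        W = train_autoencoder(codes, n_hidden, rng)
        codes = sigmoid(codes @ W.T)  # these codes feed the next layer
        weights.append(W)
    return weights, codes  # weights initialize supervised fine-tuning

rng = np.random.default_rng(2)
X = rng.uniform(size=(5, 8))  # 5 samples, 8 features
weights, codes = greedy_pretrain(X, [6, 4], rng)
```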
Deep Belief Network, the 2006 breakthrough: stacking Restricted Boltzmann Machines (RBMs). Hinton, G. E., Osindero, S. and Teh, Y.; A fast learning algorithm for deep belief nets
Rethinking Computer Vision
Traditional image classification pipeline: feature extraction (SIFT, SURF, etc.) → classifier (SVM, NN, etc.)
Convolutional Neural Network (images taken from deeplearning.net)
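The core operation of a CNN layer, sliding a small learned filter over the image, can be sketched directly. This is a minimal 'valid' cross-correlation (no padding, stride 1); the edge filter is an illustrative assumption, not a learned one.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D cross-correlation: slide the kernel over the
    image and take a dot product at each position. CNNs learn the
    kernel weights instead of hand-crafting them."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
edge = np.array([[1.0, -1.0]])  # horizontal difference filter
fmap = conv2d_valid(image, edge)
```

A full CNN layer stacks many such feature maps, applies a nonlinearity, and pools; libraries implement the same operation far more efficiently.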
The Starry Night, Vincent van Gogh. Leon A. Gatys, Alexander S. Ecker and Matthias Bethge; A Neural Algorithm of Artistic Style
Scene description with CNN + RNN. Oriol Vinyals et al.; Show and Tell: A Neural Image Caption Generator
Learning Sequences
Recurrent Neural Network (simple Elman version):
h_t = f(x_t, h_{t-1}; W0, W1)
y_t = f'(h_t; W2)
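The Elman recurrence above can be sketched in NumPy. Choosing f = tanh and f' = identity is an assumption for illustration; the key point is that the hidden state h carries information across time steps.

```python
import numpy as np

def elman_step(x_t, h_prev, W0, W1, W2):
    """One Elman RNN step:
    h_t = f(x_t, h_{t-1}; W0, W1), y_t = f'(h_t; W2),
    with f = tanh and f' = identity for illustration."""
    h_t = np.tanh(W0 @ x_t + W1 @ h_prev)  # new state mixes input and memory
    y_t = W2 @ h_t                         # readout from the state
    return h_t, y_t

rng = np.random.default_rng(3)
W0 = rng.normal(size=(4, 2))  # input -> hidden
W1 = rng.normal(size=(4, 4))  # hidden -> hidden (the recurrence)
W2 = rng.normal(size=(1, 4))  # hidden -> output

h = np.zeros(4)  # initial state
outputs = []
for x_t in rng.normal(size=(5, 2)):  # a sequence of 5 inputs
    h, y = elman_step(x_t, h, W0, W1, W2)
    outputs.append(y)
```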
Long Short-Term Memory (LSTM): add memory cells and learn the access mechanism. Sepp Hochreiter and Jürgen Schmidhuber; Long short-term memory
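The "memory cells with a learned access mechanism" can be sketched as a single LSTM step. The fused weight matrix layout (one matrix producing all four gate pre-activations) is a common convention assumed here for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step: gates (input i, forget f, output o) learn when
    to write to, keep, and read from the memory cell c."""
    z = W @ np.concatenate([x_t, h_prev]) + b
    i, f, o, g = np.split(z, 4)   # four gate pre-activations
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)                # candidate cell update
    c_t = f * c_prev + i * g      # memory: gated keep + gated write
    h_t = o * np.tanh(c_t)        # gated read of the memory
    return h_t, c_t

rng = np.random.default_rng(4)
n_in, n_hid = 3, 4
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in rng.normal(size=(6, n_in)):  # a sequence of 6 inputs
    h, c = lstm_step(x_t, h, c, W, b)
```

Because the cell c is updated additively (f * c_prev + i * g), gradients flow through time much better than in the plain Elman recurrence.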
Fooling deep networks. Anh Nguyen, Jason Yosinski, Jeff Clune; Deep Neural Networks are Easily Fooled
Next: cool things to try
Hyperparameter optimization: Bayesian optimization
Optimization methods: AdaDelta, RMSProp, ...
Regularization: dropout, dither, ...
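Of the regularizers listed, dropout is simple enough to sketch in a few lines. This is the "inverted dropout" variant (scaling at train time so test time needs no change), an assumption about the exact formulation.

```python
import numpy as np

def dropout(h, p_drop, rng, train=True):
    """Inverted dropout: at train time, zero each unit with
    probability p_drop and scale survivors by 1/(1 - p_drop);
    at test time, pass activations through unchanged."""
    if not train:
        return h
    mask = rng.uniform(size=h.shape) >= p_drop
    return h * mask / (1.0 - p_drop)

rng = np.random.default_rng(5)
h = np.ones(10000)                            # toy layer activations
h_train = dropout(h, 0.5, rng)                # randomly zeroed + rescaled
h_test = dropout(h, 0.5, rng, train=False)    # identity at test time
```

The rescaling keeps the expected activation the same at train and test time, which is why no correction is needed when the network is deployed.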
Attention & memory: NTMs, Memory Networks, Stack RNNs, ...
NLP: translation, description
Cognitive hardware: FPGA, GPU, neuromorphic chips
Scalable DL: map-reduce, compute clusters
Deep reinforcement learning: DeepMind-ish things, deep Q-learning
Energy models: RBMs, DBNs, ...
https://www.reddit.com/r/MachineLearning/wiki
Theano (Python) | Torch (Lua) | Caffe (C++). GitHub is a friend.
Questions? @AbhinavTushar