Deep Learning
Introductory talk on deep learning
Abhinav Tushar
September 10, 2015
Transcript
DEEP LEARNING
Models covered: AE / SAE, RBM / DBN, CNN, RNN / LSTM, MemNet / NTM. Agenda: What? Why? How? Next? Questions.
what why how next What? An AI technique for learning multiple levels of abstraction directly from raw information
what why how next Primitive rule-based AI: tailored systems. Input → Hand-Crafted Program → Output
what why how next Classical machine learning: learning from custom features. Input → Hand-Crafted Features → Learning System → Output
what why how next Deep Learning based AI: learn everything. Input → Learned Features (Lower Level) → Learned Features (Higher Level) → Learning System → Output
(link to video) https://www.youtube.com/watch?v=Q70ulPJW3Gk
"With the capacity to represent the world in signs and symbols, comes the capacity to change it." (Elizabeth Kolbert, The Sixth Extinction)
Why the buzz?
what why how next Google Trends Deep Learning
what why how next
Crude timeline of Neural Networks: 1950 Perceptron; 1980 Backprop & applications; 1990 NN winter; 2000s Stacking RBMs; 2010 Deep Learning fuss
HUGE DATA: Large Synoptic Survey Telescope (2022), 30 TB/night
HUGE CAPABILITIES: GPGPU (~20x speedup), powerful clusters
HUGE SUCCESS: speech and text understanding, robotics / computer vision, business / big data, Artificial General Intelligence (AGI)
How it's done?
what why how next Shallow Network: h = f(x, W0), ŷ = f'(h, W1); minimize the loss L(ŷ, y)
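A minimal numpy sketch of the shallow network above; the sigmoid activations, layer sizes, and squared-error loss are my own illustrative choices, not stated on the slide:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer: h = f(x, W0), y_hat = f'(h, W1)
rng = np.random.default_rng(0)
W0 = rng.normal(scale=0.1, size=(4, 8))   # input dim 4 -> hidden dim 8
W1 = rng.normal(scale=0.1, size=(8, 1))   # hidden dim 8 -> output dim 1

def forward(x):
    h = sigmoid(x @ W0)       # hidden representation
    y_hat = sigmoid(h @ W1)   # prediction
    return h, y_hat

x = rng.normal(size=(1, 4))
y = np.array([[1.0]])
h, y_hat = forward(x)
loss = 0.5 * np.sum((y_hat - y) ** 2)     # the quantity to minimize, L(y_hat, y)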
what why how next Deep Network
what why how next Deep Network: more abstract features, stellar performance; but vanishing gradients and overfitting
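A toy illustration (my own, not from the slides) of why the vanishing gradient bites: with sigmoid units the backpropagated gradient picks up a factor of at most 0.25 times the weight at every layer, so it shrinks roughly geometrically with depth:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Track the gradient magnitude flowing back through a deep chain of sigmoid layers.
rng = np.random.default_rng(0)
grad = 1.0
for layer in range(20):
    w = rng.normal(scale=0.5)
    z = rng.normal()
    grad *= w * sigmoid(z) * (1 - sigmoid(z))   # one chain-rule factor per layer
    print(f"layer {layer:2d}: |grad| = {abs(grad):.2e}")
# The magnitude collapses toward zero long before 20 layers: the lower
# layers barely learn, which is what made deep nets hard to train.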
what why how next Autoencoder: x → h → x̂, unsupervised feature learning
what why how next Stacked Autoencoder Y. Bengio et. all;
Greedy Layer-Wise Training of Deep Networks
what why how next Stacked Autoencoder: 1. unsupervised, layer-by-layer pretraining; 2. supervised fine-tuning
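A runnable sketch of the greedy recipe above, assuming a tiny autoencoder (sigmoid encoder, linear decoder) trained by plain gradient descent; all sizes and hyperparameters are arbitrary stand-ins:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=200, seed=0):
    """Train a one-layer autoencoder and return its encoder weights."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    We = rng.normal(scale=0.1, size=(n_in, n_hidden))   # encoder
    Wd = rng.normal(scale=0.1, size=(n_hidden, n_in))   # decoder
    for _ in range(epochs):
        H = sigmoid(X @ We)                  # codes
        X_hat = H @ Wd                       # reconstruction
        dX_hat = 2.0 * (X_hat - X) / X.shape[0]
        dWd = H.T @ dX_hat
        dH = dX_hat @ Wd.T
        dZ = dH * H * (1.0 - H)              # sigmoid derivative
        dWe = X.T @ dZ
        We -= lr * dWe
        Wd -= lr * dWd
    return We

# Step 1: greedy, unsupervised, layer-by-layer pretraining.
rng = np.random.default_rng(1)
X = rng.random((100, 32))                    # stand-in for unlabelled data
weights, codes = [], X
for n_hidden in (16, 8):                     # two stacked layers
    W = train_autoencoder(codes, n_hidden)
    weights.append(W)
    codes = sigmoid(codes @ W)               # this layer's codes feed the next layer
# Step 2 (not shown): put a classifier on top of `codes` and fine-tune
# the whole stack with labels via backprop.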
what why how next Deep Belief Network: the 2006 breakthrough, stacking Restricted Boltzmann Machines (RBMs) (Hinton, G. E., Osindero, S. and Teh, Y.; A fast learning algorithm for deep belief nets)
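For flavour, a minimal sketch of how one RBM layer in such a stack can be trained with contrastive divergence (CD-1); the sizes are mine and biases are omitted for brevity:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_vis, n_hid = 16, 8
W = rng.normal(scale=0.1, size=(n_vis, n_hid))

def cd1_update(v0, W, lr=0.1):
    """One contrastive-divergence step for a binary RBM (biases omitted)."""
    ph0 = sigmoid(v0 @ W)                          # hidden probabilities given data
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T)                        # reconstructed visibles
    ph1 = sigmoid(pv1 @ W)                         # hidden probabilities given reconstruction
    grad = v0.T @ ph0 - pv1.T @ ph1                # positive phase minus negative phase
    return W + lr * grad / v0.shape[0]

v = (rng.random((32, n_vis)) < 0.5).astype(float)  # a batch of binary "data"
for _ in range(10):
    W = cd1_update(v, W)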
Rethinking Computer Vision
what why how next Traditional image classification pipeline: Feature Extraction (SIFT, SURF, etc.) → Classifier (SVM, NN, etc.)
what why how next Convolutional Neural Network Images taken from
deeplearning.net
what why how next Convolutional Neural Network
what why how next Convolutional Neural Network Images taken from
deeplearning.net
what why how next Convolutional Neural Network
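A plain-numpy sketch of the two operations a CNN layer composes, convolution over local patches followed by pooling; the image size, single filter, ReLU, and 2x2 max-pooling are illustrative choices of mine:

import numpy as np

def conv2d_valid(image, kernel):
    """Slide a kernel over the image, taking dot products over local patches."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

def max_pool(fmap, size=2):
    """Downsample a feature map by taking the max over non-overlapping blocks."""
    H, W = fmap.shape
    out = np.zeros((H // size, W // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = fmap[i * size:(i + 1) * size, j * size:(j + 1) * size].max()
    return out

rng = np.random.default_rng(0)
image = rng.random((28, 28))         # e.g. an MNIST-sized grayscale image
kernel = rng.normal(size=(5, 5))     # one learnable filter
fmap = np.maximum(conv2d_valid(image, kernel), 0)   # convolution + ReLU
pooled = max_pool(fmap)              # 24x24 feature map -> 12x12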
what why how next The Starry Night Vincent van Gogh
Leon A. Gatys, Alexander S. Ecker and Matthias Bethge; A Neural Algorithm of Artistic Style
what why how next
what why how next Scene Description: CNN + RNN (Oriol Vinyals et al.; Show and Tell: A Neural Image Caption Generator)
Learning Sequences
what why how next Recurrent Neural Network (simple Elman version): h_t = f(x_t, h_{t-1}; W0, W1), y_t = f'(h_t; W2)
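A short numpy sketch of one unrolled pass of the Elman network above; the tanh nonlinearity and all dimensions are my own assumptions:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 4
W0 = rng.normal(scale=0.1, size=(n_in, n_hid))    # input -> hidden
W1 = rng.normal(scale=0.1, size=(n_hid, n_hid))   # hidden -> hidden (recurrence)
W2 = rng.normal(scale=0.1, size=(n_hid, n_out))   # hidden -> output

def rnn_forward(xs):
    """h_t = f(x_t, h_{t-1}; W0, W1),  y_t = f'(h_t; W2)."""
    h = np.zeros(n_hid)
    ys = []
    for x in xs:
        h = np.tanh(x @ W0 + h @ W1)   # new state depends on input and previous state
        ys.append(h @ W2)              # per-step output
    return ys

sequence = rng.normal(size=(10, n_in))   # a length-10 input sequence
outputs = rnn_forward(sequence)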
what why how next Long Short-Term Memory (LSTM): add memory cells, learn the access mechanism (Sepp Hochreiter and Jürgen Schmidhuber; Long short-term memory)
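The "memory cells with a learned access mechanism" amount to gated updates; a single-step sketch following the commonly used gate layout (sizes are mine, biases omitted):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hid = 8, 16
# One weight matrix per gate, acting on the concatenation [x_t, h_{t-1}].
Wi, Wf, Wo, Wc = (rng.normal(scale=0.1, size=(n_in + n_hid, n_hid)) for _ in range(4))

def lstm_step(x, h_prev, c_prev):
    z = np.concatenate([x, h_prev])
    i = sigmoid(z @ Wi)                    # input gate: what to write into the cell
    f = sigmoid(z @ Wf)                    # forget gate: what to erase from the cell
    o = sigmoid(z @ Wo)                    # output gate: what to expose as h_t
    c = f * c_prev + i * np.tanh(z @ Wc)   # memory cell update
    h = o * np.tanh(c)
    return h, c

h = c = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):       # run over a short sequence
    h, c = lstm_step(x, h, c)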
what why how next
what why how next Fooling Deep Networks Anh Nguyen, Jason
Yosinski, Jeff Clune; Deep Neural Networks are Easily Fooled
Next: cool things to try
what why how next Hyperparameter optimization (Bayesian); optimization methods (AdaDelta, RMSProp, ...); regularization (dropout, dither, ...)
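As a concrete taste of two items on that list, a small sketch of an RMSProp-style parameter update and inverted dropout on a hidden activation; all hyperparameters here are arbitrary illustrative values:

import numpy as np

rng = np.random.default_rng(0)

def rmsprop_update(w, grad, cache, lr=1e-3, decay=0.9, eps=1e-8):
    """Scale each step by a running average of squared gradients."""
    cache = decay * cache + (1 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

def dropout(h, p_keep=0.5):
    """Randomly zero hidden units during training (inverted dropout scaling)."""
    mask = (rng.random(h.shape) < p_keep) / p_keep
    return h * mask

w, cache = rng.normal(size=10), np.zeros(10)
grad = rng.normal(size=10)                 # stand-in gradient
w, cache = rmsprop_update(w, grad, cache)
h = dropout(rng.random((4, 10)))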
what why how next Attention & Memory (NTMs, Memory Networks, Stack RNNs, ...); NLP (translation, description)
what why how next Cognitive hardware (FPGA, GPU, neuromorphic chips); scalable DL (map-reduce, compute clusters)
what why how next Deep Reinforcement Learning (DeepMind-ish things, deep Q-learning); energy models (RBMs, DBNs, ...)
https://www.reddit.com/r/MachineLearning/wiki
Theano (Python) | Torch (Lua) | Caffe (C++). GitHub is a friend.
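Of the three, Theano is the Python option; a minimal, hedged example of its define-then-compile style circa 2015 (a single logistic unit trained by gradient descent; data and sizes are made up):

import numpy as np
import theano
import theano.tensor as T

# Symbolic inputs and shared (learnable) parameters.
x = T.matrix('x')
y = T.vector('y')
w = theano.shared(np.zeros(3), name='w')
b = theano.shared(0.0, name='b')

p = T.nnet.sigmoid(T.dot(x, w) + b)                 # predicted probability
cost = T.nnet.binary_crossentropy(p, y).mean()
gw, gb = T.grad(cost, [w, b])                       # symbolic gradients

# Compile a training step that updates the shared parameters in place.
train = theano.function([x, y], cost,
                        updates=[(w, w - 0.1 * gw), (b, b - 0.1 * gb)])

X = np.random.rand(20, 3)
Y = (X.sum(axis=1) > 1.5).astype('float64')
for _ in range(100):
    train(X, Y)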
@AbhinavTushar ?