A place for study-group lightning talk (LT) materials
Notes on making effective use of "knowledge" in machine learning:
model compression, transfer learning, and related topics
■ モデル圧縮
Cheng et al., 2017 "A Survey of Model Compression and Acceleration for Deep Neural Networks" (IEEE Signal Processing Magazine).
https://arxiv.org/abs/1710.09282
A survey paper for getting a quick overview of the whole field.
■ Knowledge Distillation
- "Knowledge Distillation in Deep Learning" (blog post, in Japanese; a minimal distillation-loss sketch follows this list)
http://codecrafthouse.jp/p/2018/01/knowledge-distillation/
- Chen et al., 2017 “Learning Efficient Object Detection Models with Knowledge Distillation”.
https://papers.nips.cc/paper/6676-learning-efficient-object-detection-models-with-knowledge-distillation.pdf
- Zagoruyko and Komodakis, 2017 "Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer" (an attention-transfer sketch also follows this list).
https://arxiv.org/pdf/1612.03928.pdf
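
As background for the entries above, a minimal sketch of the classic soft-target distillation loss (Hinton-style, as described in the blog post): the student is trained on a blend of hard-label cross-entropy and a KL term toward the teacher's temperature-softened outputs. This assumes a PyTorch setting; the function name and the T/alpha hyperparameters are illustrative, not taken from the cited papers.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Hard-label cross-entropy on the student's raw logits.
    hard = F.cross_entropy(student_logits, labels)
    # KL divergence between temperature-softened distributions;
    # the T*T factor keeps soft-target gradients on the same scale as T grows.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1.0 - alpha) * soft
```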
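
And a sketch of the attention-transfer idea from Zagoruyko and Komodakis: spatial attention maps are formed by summing squared activations over channels and L2-normalizing them, and the distance between matched student/teacher maps is penalized. The layer pairing and the mean reduction are assumptions of this sketch, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def attention_map(feature):
    # feature: (B, C, H, W) -> (B, H*W) spatial attention map,
    # built from channel-wise sums of squared activations.
    a = feature.pow(2).sum(dim=1).flatten(1)
    return F.normalize(a, p=2, dim=1)

def attention_transfer_loss(student_feats, teacher_feats):
    # Sum of distances between normalized attention maps over
    # matched layer pairs (spatial sizes must agree per pair).
    return sum(
        (attention_map(s) - attention_map(t)).pow(2).mean()
        for s, t in zip(student_feats, teacher_feats)
    )
```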
■ Knowledge Transfer (KT)
Li et al., 2019 "DELTA: DEep Learning Transfer using Feature Map with Attention for Convolutional Networks".
https://arxiv.org/pdf/1901.09229.pdf
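
A rough sketch of a DELTA-style regularizer: during fine-tuning, the distance between the student's feature maps and those of the frozen pretrained source network is penalized, weighted per channel by an attention weight (in the paper these weights come from measuring how much disabling a channel hurts source-task performance; here `channel_weights` is simply taken as given, and all names are illustrative).

```python
import torch

def delta_regularizer(student_feats, source_feats, channel_weights):
    # student_feats / source_feats: lists of (B, C, H, W) tensors from
    # matched layers; channel_weights: list of (C,) tensors.
    reg = 0.0
    for s, src, w in zip(student_feats, source_feats, channel_weights):
        # Per-channel squared distance between feature maps; the
        # pretrained source features are treated as constants.
        d = (s - src.detach()).pow(2).flatten(2).sum(dim=2)  # (B, C)
        reg = reg + (w * d).sum(dim=1).mean()
    return reg
```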