
Latest AI Papers Reading Group: 2021 Summary


Transcript

1. Top Recent in Last Year

   1. Transformers in Vision: A Survey
   2. The Modern Mathematics of Deep Learning
   3. High-Performance Large-Scale Image Recognition Without Normalization
   4. Cross-validation: what does it estimate and how well does it do it?
   5. How to avoid machine learning pitfalls: a guide for academic researchers
   6. How to represent part-whole hierarchies in a neural network
   7. Point Transformer
   8. Every Model Learned by Gradient Descent Is Approximately a Kernel Machine
   9. Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity
   10. A Survey on Vision Transformer
2. Top Recent in Last Year (same list as slide 1)

   Papers covered in this reading group.
3. Top Recent in Last Year (same list as slide 1)

   ① A review of applying Transformers to computer vision (item 1).
4. Top Recent in Last Year (same list as slide 1)

   ② A state-of-the-art model that surpasses EfficientNet without batch normalization (item 3); a sketch of its gradient-clipping idea follows this slide.
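A key ingredient of that paper (Brock et al., 2021) is adaptive gradient clipping (AGC): instead of batch normalization, each unit's gradient is rescaled whenever its norm grows too large relative to the corresponding weight norm. Below is a minimal PyTorch sketch of that rule, not the authors' implementation; the unit-wise row norms and the default values (clipping=0.01, eps=1e-3) reflect my reading of the paper and should be treated as illustrative.

    import torch

    def adaptive_gradient_clip_(parameters, clipping=0.01, eps=1e-3):
        """AGC sketch: rescale g so that ||g|| <= clipping * max(||w||, eps), unit-wise."""
        for p in parameters:
            if p.grad is None:
                continue
            if p.dim() > 1:
                # One norm per output unit (first tensor dimension).
                dims = tuple(range(1, p.dim()))
                w_norm = p.detach().norm(dim=dims, keepdim=True)
                g_norm = p.grad.detach().norm(dim=dims, keepdim=True)
            else:
                w_norm = p.detach().norm()
                g_norm = p.grad.detach().norm()
            max_norm = clipping * w_norm.clamp(min=eps)
            # Shrink only gradients whose norm exceeds the allowed ratio.
            scale = (max_norm / g_norm.clamp(min=1e-6)).clamp(max=1.0)
            p.grad.detach().mul_(scale)

In use it would sit between loss.backward() and optimizer.step(), e.g. adaptive_gradient_clip_(model.parameters()).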
5. Top Recent in Last Year (same list as slide 1)

   EfficientNetV2: Smaller Models and Faster Training
   ② EfficientNetV2, which in turn surpasses even that model; its progressive-training recipe is sketched after this slide.
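EfficientNetV2's faster training comes partly from progressive learning: training starts with small images and weak regularization, and both are ramped up together across stages. A minimal sketch of such a schedule, assuming linear interpolation; the endpoint values here are illustrative, not the paper's exact settings.

    def progressive_schedule(num_stages=4, image_size=(128, 300), dropout=(0.1, 0.3)):
        """Yield (image_size, dropout_rate) per training stage, linearly interpolated."""
        for i in range(num_stages):
            t = i / max(num_stages - 1, 1)
            yield (round(image_size[0] + t * (image_size[1] - image_size[0])),
                   dropout[0] + t * (dropout[1] - dropout[0]))

    for stage, (size, drop) in enumerate(progressive_schedule()):
        print(f"stage {stage}: train at {size}px, dropout {drop:.2f}")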
6. Top Recent in Last Year (same list as slide 1)

   ③ Fundamentals: for beginners and researchers.
7. Top Recent in Last Year (same list as slide 1)

   ・A paper in which Prof. Hinton proposes a model design (item 6).
8. Top Recent in Last Year (same list as slide 1)

   ・Feature extraction by gradient descent resembles kernel methods (item 8); the paper's central statement is sketched after this slide.
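The headline result of item 8 (Domingos, 2020) can be stated in two lines; the notation below is a paraphrase rather than the paper's verbatim formulation. The tangent kernel dot-products the model's weight-gradients at two inputs, and the path kernel integrates it along the gradient-descent trajectory c(t) in weight space:

    \[
      K^{g}_{f,w}(x, x') = \nabla_w f_w(x) \cdot \nabla_w f_w(x'),
      \qquad
      K^{p}_{f,c}(x, x') = \int_{c(t)} K^{g}_{f,w(t)}(x, x') \, dt .
    \]

The trained model is then approximately a kernel machine over the training points x_i, with coefficients a_i derived from the loss derivatives accumulated along the path:

    \[
      f(x) \approx \sum_{i=1}^{m} a_i \, K^{p}_{f,c}(x, x_i) + b .
    \]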
9. Review of Transformers for Vision

   Classification: CCNet (Criss-Cross Attention), Stand-Alone Self-Attention, Local Relation Networks, Attention Augmented Convolutional Networks, Vectorized Self-Attention, ViT (Vision Transformer), DeiT (Data-efficient image Transformer)
   Detection: DETR (Detection Transformer), D-DETR (Deformable DETR)
   Segmentation: Axial-Attention for Panoptic Segmentation, CMSA (Cross-Modal Self-Attention)
   Image generation: iGPT (image GPT), Image Transformer, High-Resolution Image Synthesis, SceneFormer
   Super resolution: TTSR
   (A minimal ViT-style sketch follows this slide.)
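Several entries above (ViT, DeiT) share one backbone idea: cut the image into non-overlapping patches, linearly embed each patch as a token, prepend a CLS token, and run a standard Transformer encoder. A minimal PyTorch sketch under those assumptions; the layer sizes are illustrative, not any paper's configuration.

    import torch
    import torch.nn as nn

    class MiniViT(nn.Module):
        """Patchify -> linear embed -> Transformer encoder -> classify from CLS token."""
        def __init__(self, image_size=32, patch_size=8, dim=64, depth=2, heads=4, num_classes=10):
            super().__init__()
            num_patches = (image_size // patch_size) ** 2
            self.patch_size = patch_size
            self.embed = nn.Linear(3 * patch_size * patch_size, dim)
            self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
            self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
            self.head = nn.Linear(dim, num_classes)

        def forward(self, x):
            b, c, h, w = x.shape
            p = self.patch_size
            # (B, C, H, W) -> (B, num_patches, C*p*p): flatten non-overlapping patches.
            x = x.unfold(2, p, p).unfold(3, p, p)              # (B, C, H/p, W/p, p, p)
            x = x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)
            x = self.embed(x)
            x = torch.cat([self.cls_token.expand(b, -1, -1), x], dim=1) + self.pos_embed
            return self.head(self.encoder(x)[:, 0])

    logits = MiniViT()(torch.randn(2, 3, 32, 32))
    print(logits.shape)  # torch.Size([2, 10])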