▪ Training recipes for ViT were established; ImageNet alone is enough to match CNNs [2]
▪ MLP-Mixer [3]
▪ MLPs work instead of attention!
▪ A flood of ViT improvements and attention alternatives (MLP, pool, shift, LSTM)
Background
[1] A. Dosovitskiy, et al., "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale," in Proc. of ICLR'21.
[2] H. Touvron, et al., "Training Data-efficient Image Transformers & Distillation Through Attention," in Proc. of ICML'21.
[3] I. Tolstikhin, et al., "MLP-Mixer: An all-MLP Architecture for Vision," in Proc. of NeurIPS'21.
MBConv downsamples with a strided conv, while the attention blocks downsample with pooling
Early improvement example: CoAtNet
Z. Dai, et al., "CoAtNet: Marrying Convolution and Attention for All Data Sizes," in Proc. of NeurIPS'21.
identity residual
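Below is a minimal sketch (my own PyTorch simplification, not the CoAtNet release; the module names and the residual handling are assumptions) contrasting the two downsampling styles on this slide: a stride-2 depthwise conv inside the MBConv block versus pooling the map before self-attention.

```python
import torch
import torch.nn as nn

def mbconv_downsample(dim_in, dim_out, expand=4):
    """Inverted bottleneck that halves resolution with a stride-2 depthwise conv."""
    hidden = dim_in * expand
    return nn.Sequential(
        nn.Conv2d(dim_in, hidden, 1), nn.BatchNorm2d(hidden), nn.GELU(),
        nn.Conv2d(hidden, hidden, 3, stride=2, padding=1, groups=hidden),  # strided DW conv
        nn.BatchNorm2d(hidden), nn.GELU(),
        nn.Conv2d(hidden, dim_out, 1), nn.BatchNorm2d(dim_out),
    )

class PooledAttentionDown(nn.Module):
    """Downsample with max-pool first, then run self-attention on the smaller map."""
    def __init__(self, dim_in, dim_out, heads=8):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.proj = nn.Conv2d(dim_in, dim_out, 1)          # shortcut/channel projection
        self.attn = nn.MultiheadAttention(dim_out, heads, batch_first=True)

    def forward(self, x):                                   # x: (B, C, H, W)
        x = self.proj(self.pool(x))                         # pool 2x2, then 1x1 projection
        B, C, H, W = x.shape
        t = x.flatten(2).transpose(1, 2)                    # (B, H*W/4, C)
        out, _ = self.attn(t, t, t)
        out = out.transpose(1, 2).reshape(B, C, H, W)
        return x + out                                       # residual on the pooled map
```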
As everyone devised Vision Transformers that clear this issue, the result was...
Background
B. Heo, et al., "Rethinking Spatial Dimensions of Vision Transformers," in Proc. of ICCV'21.
Next, speed-ups and the like become the trend
There are various approaches
▪ Relative or absolute × fixed (sinusoidal) or learnable [1]
▪ Conditional positional encodings [2] (interesting; covered in the appendix of this material)
▪ Implicit embedding via a conv in the FFN [3]
▪ Absolute positional encoding is added to the input tokens
▪ The original ViT uses this
▪ Relative positional encoding is added to the dot-product term of attention (see the sketch below)
Positional Encoding
[1] K. Wu, et al., "Rethinking and Improving Relative Position Encoding for Vision Transformer," in Proc. of ICCV'21.
[2] X. Chu, et al., "Conditional Positional Encodings for Vision Transformers," in arXiv:2102.10882.
[3] E. Xie, et al., "SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers," in Proc. of NeurIPS'21.
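A minimal PyTorch sketch (illustrative only, not any cited paper's code; `AbsolutePE` and `RelativeBiasAttention` are hypothetical names) of where the two kinds of positional information enter: absolute PE is added to the tokens themselves, while a relative bias is added to the QK^T logits.

```python
import torch
import torch.nn as nn

class AbsolutePE(nn.Module):
    """Absolute PE: a learnable embedding added to the input tokens (original ViT style)."""
    def __init__(self, num_tokens, dim):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, num_tokens, dim))

    def forward(self, x):                     # x: (B, N, C)
        return x + self.pos

class RelativeBiasAttention(nn.Module):
    """Relative PE: a learnable bias added to the attention logits (bias-table lookup omitted)."""
    def __init__(self, num_tokens, dim, heads=8):
        super().__init__()
        self.heads, self.scale = heads, (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        self.rel_bias = nn.Parameter(torch.zeros(heads, num_tokens, num_tokens))

    def forward(self, x):                     # x: (B, N, C)
        B, N, C = x.shape
        q, k, v = self.qkv(x).reshape(B, N, 3, self.heads, C // self.heads).permute(2, 0, 3, 1, 4)
        attn = (q @ k.transpose(-2, -1)) * self.scale + self.rel_bias  # bias enters the dot product
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```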
Two Successive Swin Transformer Blocks
This is the key point
Z. Liu, et al., "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows," in Proc. of ICCV'21.
Also check out the companion slides "Swin Transformer (ICCV'21 Best Paper) を完璧に理解する資料"!
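A minimal sketch (assuming a square map whose side is divisible by the window size, and omitting the attention mask and the reverse shift) of the cyclic shift that alternates between the two successive blocks: the first block uses regular windows (shift = 0), the second uses windows shifted by half the window size.

```python
import torch

def window_partition(x, ws):
    """(B, H, W, C) -> (num_windows*B, ws*ws, C)"""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

def shifted_window_tokens(x, ws, shift):
    """Cyclically shift the map, then split it into non-overlapping windows."""
    if shift > 0:
        x = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))
    return window_partition(x, ws)

x = torch.randn(1, 8, 8, 96)                        # toy feature map
local   = shifted_window_tokens(x, ws=4, shift=0)   # block 1: W-MSA (regular windows)
shifted = shifted_window_tokens(x, ws=4, shift=2)   # block 2: SW-MSA (shift = ws // 2)
print(local.shape, shifted.shape)                   # (4, 16, 96) each
```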
(PVT) W. Wang, et al., "Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions," in Proc. of ICCV'21.
The key point is Spatial-Reduction Attention (SRA)
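A minimal sketch (simplified from the paper's description, not the released PVT code) of Spatial-Reduction Attention: the keys and values are spatially downsampled by a strided conv with ratio R before attention, so the attention cost drops by roughly R² compared with full self-attention.

```python
import torch
import torch.nn as nn

class SRAttention(nn.Module):
    def __init__(self, dim, heads=8, sr_ratio=4):
        super().__init__()
        self.heads, self.scale = heads, (dim // heads) ** -0.5
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, dim * 2)
        self.proj = nn.Linear(dim, dim)
        self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)  # spatial reduction
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, H, W):                      # x: (B, N, C) with N = H*W
        B, N, C = x.shape
        q = self.q(x).reshape(B, N, self.heads, C // self.heads).transpose(1, 2)
        x_ = x.transpose(1, 2).reshape(B, C, H, W)   # back to a 2D map
        x_ = self.sr(x_).flatten(2).transpose(1, 2)  # (B, N/R^2, C) reduced tokens
        x_ = self.norm(x_)
        k, v = self.kv(x_).reshape(B, -1, 2, self.heads, C // self.heads).permute(2, 0, 3, 1, 4)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```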
A conv is inserted and the positional embedding is removed (implicit positional encoding)
PVTv2
W. Wang, et al., "PVTv2: Improved Baselines with Pyramid Vision Transformer," in Computational Visual Media, 2022.
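A minimal sketch (simplified; layer names are mine) of the PVTv2-style feed-forward block: a 3×3 depthwise conv between the two linear layers injects positional information through its zero padding, which is what lets the explicit positional embedding be dropped.

```python
import torch
import torch.nn as nn

class ConvFFN(nn.Module):
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.dwconv = nn.Conv2d(hidden_dim, hidden_dim, 3, padding=1, groups=hidden_dim)
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, x, H, W):                          # x: (B, N, C), N = H*W
        x = self.fc1(x)
        B, N, C = x.shape
        x = x.transpose(1, 2).reshape(B, C, H, W)
        x = self.dwconv(x).flatten(2).transpose(1, 2)    # zero padding leaks absolute position
        return self.fc2(self.act(x))
```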
Addition of E(rel)
▪ Instead of holding an H×W×T table, assume the axes are independent and keep one table per dimension
MViTv2
Y. Li, et al., "MViTv2: Improved Multiscale Vision Transformers for Classification and Detection," in Proc. of CVPR'22.
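A minimal sketch for the image case (H×W tokens; the shapes and names are my assumptions, not the paper's code) of the decomposed relative position embedding: one small table per axis instead of a joint table over all relative offsets, with the bias q_i · (R^h + R^w) added to the attention logits.

```python
import torch
import torch.nn as nn

class DecomposedRelPosBias(nn.Module):
    def __init__(self, dim_head, H, W):
        super().__init__()
        self.H, self.W = H, W
        self.rel_h = nn.Parameter(torch.zeros(2 * H - 1, dim_head))  # per-axis table (height)
        self.rel_w = nn.Parameter(torch.zeros(2 * W - 1, dim_head))  # per-axis table (width)

    def forward(self, q):                                # q: (B, heads, H*W, dim_head)
        B, heads, N, d = q.shape
        dh = torch.arange(self.H)[:, None] - torch.arange(self.H)[None, :] + self.H - 1
        dw = torch.arange(self.W)[:, None] - torch.arange(self.W)[None, :] + self.W - 1
        Rh, Rw = self.rel_h[dh], self.rel_w[dw]          # (H, H, d), (W, W, d)
        q = q.reshape(B, heads, self.H, self.W, d)
        bias_h = torch.einsum("bnhwd,hkd->bnhwk", q, Rh)            # query (h, w) vs key row k
        bias_w = torch.einsum("bnhwd,wkd->bnhwk", q, Rw)            # query (h, w) vs key col k
        bias = bias_h[..., :, None] + bias_w[..., None, :]          # (B, heads, H, W, H, W)
        return bias.reshape(B, heads, N, N)              # added to the attention logits
```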
Global sub-sampled attention (GSA): essentially PVT's spatial-reduction attention
Twins
X. Chu, et al., "Twins: Revisiting the Design of Spatial Attention in Vision Transformers," in Proc. of NeurIPS'21.
"...layer, we introduce two levels, one for fine-grain local attention and one for coarse-grain global attention" (quote from the paper)
Focal Transformer (reality)
J. Yang, et al., "Focal Self-attention for Local-Global Interactions in Vision Transformers," in Proc. of NeurIPS'21.
Although the number of levels is generalized as L and the figure even shows L = 3, the actual model uses only 2 levels...
Also used in [2]
▪ Long ago, in the CNN world, there was this thing called ShuffleNet...
CrossFormer [1]
[1] W. Wang, et al., "CrossFormer: A Versatile Vision Transformer Based on Cross-scale Attention," in Proc. of ICLR'22.
[2] Z. Huang, et al., "Shuffle Transformer: Rethinking Spatial Shuffle for Vision Transformer," in arXiv:2106.03650.
Y. Chen, et al., "Mobile-Former: Bridging MobileNet and Transformer," in Proc. of CVPR'22.
The MobileNet stream and the global-token stream are bridged by cross-attention in both directions (see the sketch below)
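A minimal sketch (heavily simplified; the real blocks use lightweight attention variants, and the module and parameter names here are mine) of the bridge idea: a handful of learnable global tokens and the MobileNet feature map exchange information via cross-attention in both directions.

```python
import torch
import torch.nn as nn

class MobileFormerBridge(nn.Module):
    def __init__(self, dim, num_tokens=6):
        super().__init__()
        self.tokens = nn.Parameter(torch.zeros(1, num_tokens, dim))   # learnable global tokens
        self.mobile_to_token = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.token_to_mobile = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, feat):                       # feat: (B, C, H, W) MobileNet feature map
        B, C, H, W = feat.shape
        x = feat.flatten(2).transpose(1, 2)        # (B, H*W, C) local features
        z = self.tokens.expand(B, -1, -1)          # (B, M, C) global tokens
        z, _ = self.mobile_to_token(z, x, x)       # Mobile -> Former: tokens attend to the map
        y, _ = self.token_to_mobile(x, z, z)       # Former -> Mobile: map attends to the tokens
        return (x + y).transpose(1, 2).reshape(B, C, H, W), z
```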
▪ Prior methods such as S2-MLP [2] and AS-MLP [3] exist, but ShiftViT really is nothing but a shift (see the sketch below)
ShiftViT [1]
[1] G. Wang, et al., "When Shift Operation Meets Vision Transformer: An Extremely Simple Alternative to Attention Mechanism," in Proc. of AAAI'22.
[2] T. Yu, et al., "S2-MLP: Spatial-Shift MLP Architecture for Vision," in Proc. of WACV'22.
[3] D. Lian, et al., "AS-MLP: An Axial Shifted MLP Architecture for Vision," in Proc. of ICLR'22.
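A minimal sketch (following the paper's description; the exact channel fraction below is illustrative) of the partial shift that replaces attention in ShiftViT: a small group of channels is shifted by one pixel in each of the four directions, vacated positions are zero-filled, and the remaining channels pass through unchanged.

```python
import torch

def partial_shift(x, ratio=1.0 / 12):
    """x: (B, C, H, W); shift ratio*C channels per direction, zero-fill the vacated border."""
    B, C, H, W = x.shape
    g = int(C * ratio)                                  # channels per direction
    out = torch.zeros_like(x)
    out[:, 0*g:1*g, :, :-1] = x[:, 0*g:1*g, :, 1:]      # shift left
    out[:, 1*g:2*g, :, 1:]  = x[:, 1*g:2*g, :, :-1]     # shift right
    out[:, 2*g:3*g, :-1, :] = x[:, 2*g:3*g, 1:, :]      # shift up
    out[:, 3*g:4*g, 1:, :]  = x[:, 3*g:4*g, :-1, :]     # shift down
    out[:, 4*g:] = x[:, 4*g:]                           # remaining channels unchanged
    return out
```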
▪ https://twitter.com/akivajp/status/1442241252204814336
▪ Rethinking and Improving Relative Position Encoding for Vision Transformer, ICCV'21. thanks to @sasaki_ts
▪ CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows, arXiv'21. thanks to @Ocha_Cocoa
Discussion on Positional Encoding
A report that CNNs retain absolute coordinates in their feature maps thanks to zero-padded convs [2]
▪ Inspired by this, PVTv2 inserts a DWConv into the FFN and removes the PE (see the sketch below)
Conditional Positional Encoding (CPE) [1]
[1] X. Chu, et al., "Conditional Positional Encodings for Vision Transformers," in arXiv:2102.10882.
[2] M. Islam, et al., "How Much Position Information Do Convolutional Neural Networks Encode?," in Proc. of ICLR'20.
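A minimal sketch (following the paper's description) of the Positional Encoding Generator (PEG) used by CPE: a depthwise conv applied to the token map and added back residually; because of its zero padding it produces a position-dependent signal, matching the observation in [2].

```python
import torch
import torch.nn as nn

class PEG(nn.Module):
    def __init__(self, dim, k=3):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, k, padding=k // 2, groups=dim)  # depthwise conv

    def forward(self, x, H, W):                  # x: (B, N, C), N = H*W (class token excluded)
        B, N, C = x.shape
        feat = x.transpose(1, 2).reshape(B, C, H, W)
        # conditional PE: a position-dependent signal generated from the tokens themselves
        return x + self.dwconv(feat).flatten(2).transpose(1, 2)
```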