AI/ML - When is having attention better than having memory? (original title: Quando ter atenção é melhor que ter memória)

This presentation explains the attention mechanism and the Transformer approach to building neural network models for machine translation use cases.

Leonardo

June 08, 2021

Transcript

  1. Globalcode – Open4education | AI / ML Track
     When is having attention better than having memory?
     Lúcio Sanchez Passos, Data Science Manager, Santander
     Leonardo Piedade, Solutions Architect, AWS
  2. Agenda
     • Background: Sequences
     • Recurrent Process Units: RNN, LSTM and GRU
     • Seq2Seq Overview
     • Attention & Transformers: How, When, and Why?
     • Demo
  3. Recurrent Process Units (credits: Michael Phi)
     • Recurrent Neural Network (RNN): good at modeling sequence data, but suffers from the short-term memory problem
     • Long Short-Term Memory (LSTM): addresses the short-term memory problem, with a more complex training process
     • Gated Recurrent Units (GRU): also addresses the short-term memory problem, with a more complex training process
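A minimal sketch, assuming TensorFlow/Keras is available, of the three recurrent units listed above applied to the same toy batch of sequences; the layer width and input shape are arbitrary choices, not values from the deck.

```python
# Toy comparison of the three recurrent units (TensorFlow/Keras assumed).
import numpy as np
import tensorflow as tf

x = np.random.rand(32, 10, 8).astype("float32")  # (batch, timesteps, features)

rnn = tf.keras.layers.SimpleRNN(16)   # plain RNN: simplest, short-term memory problem
lstm = tf.keras.layers.LSTM(16)       # gated cell state keeps longer-range information
gru = tf.keras.layers.GRU(16)         # fewer gates than the LSTM, similar behavior

print(rnn(x).shape, lstm(x).shape, gru(x).shape)  # each output: (32, 16)
```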
  4. Applications of RNNs (credits: Andrej Karpathy)
     • One-to-one: object classification
     • One-to-many: music generation
     • Many-to-one: sentiment analysis
     • Many-to-many: named entity recognition
     • Many-to-many: machine translation
  5. Attention – Definition (source: Attn: Illustrated Attention)
     [Diagram: encoder hidden states feed an attention layer that sits between the encoder and the decoder]
     "Attention is an interface between the encoder and decoder that provides the decoder with information from every encoder hidden state."
  6. Attention Mechanism (source: Attn: Illustrated Attention)
     [Diagram: each encoder hidden state is scored against the decoder hidden state, the scores go through a softmax, the resulting weights multiply the encoder hidden states, and the weighted states are added into a context vector that is passed to the decoder]
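To make the pictured flow concrete (score, softmax, multiplication, addition, context vector), here is a minimal NumPy sketch assuming simple dot-product scoring over toy shapes; the array sizes are illustrative, not from the deck.

```python
# Dot-product attention over one decoder step and five encoder states.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

encoder_states = np.random.rand(5, 64)   # one hidden state per source token
decoder_state = np.random.rand(64)       # current decoder hidden state

scores = encoder_states @ decoder_state                     # "score"
weights = softmax(scores)                                   # "softmax"
context = (weights[:, None] * encoder_states).sum(axis=0)   # "multiplication" + "addition"

print(weights.round(2), context.shape)   # context vector sent "to decoder"
```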
  7. Attention is great... (credits: Abigail See, Stanford CS224n)
     • Attention significantly improves performance (in many applications)
     • Attention solves the bottleneck problem
     • Attention helps with the vanishing gradient problem
     • Attention provides some interpretability
  8. Seq2Seq + Attention – Drawbacks
     • Sequential computation of the data prevents parallelism
     • Even with LSTM/GRU + Attention, the vanishing gradient problem is not completely solved
     But if attention already gives us access to all the states... why use an RNN at all?
  9. Positional Encoding (credits: Amirhossein Kazemnejad)
     • pos – position in the sequence
     • d – size of the token vector
     • i – position within the token vector
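The variables above belong to the sinusoidal encoding of "Attention Is All You Need": PE(pos, 2i) = sin(pos / 10000^(2i/d)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d)). A minimal NumPy sketch of that formula, with an arbitrary sequence length and vector size:

```python
# Sinusoidal positional encoding from "Attention Is All You Need".
import numpy as np

def positional_encoding(seq_len, d):
    pos = np.arange(seq_len)[:, None]            # pos: position in the sequence
    i = np.arange(d)[None, :] // 2               # i: index of the sin/cos pair in the vector
    angles = pos / np.power(10000.0, 2 * i / d)  # d: size of the token vector
    pe = np.zeros((seq_len, d))
    pe[:, 0::2] = np.sin(angles[:, 0::2])        # even dimensions use sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])        # odd dimensions use cosine
    return pe

print(positional_encoding(seq_len=50, d=128).shape)  # (50, 128)
```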
  10. Self-Attention (source: https://arxiv.org/pdf/1706.03762.pdf)
     [Diagram: query and key go through a dot product, the result is scaled and passed through a softmax, and the output multiplies the value]
     Self-attention measures the relevance of the interactions among all inputs.
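A minimal NumPy sketch of the pictured computation, softmax(Q K^T / sqrt(d_k)) V; the projection matrices, token count, and dimensions are illustrative assumptions, not values from the deck.

```python
# Scaled dot-product self-attention over a toy set of embedded tokens.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project every input to query/key/value
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # dot product + scale
    weights = softmax(scores)                 # relevance of every input to every other input
    return weights @ V                        # weighted sum of the values

d_model, d_k = 64, 16
X = np.random.rand(7, d_model)                # 7 tokens, already embedded
Wq, Wk, Wv = (np.random.rand(d_model, d_k) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (7, 16)
```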
  11. Multi-headed Attention (source: https://arxiv.org/pdf/1706.03762.pdf)
     "Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions."
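Building on the self_attention sketch from the previous slide, a minimal illustration of multi-head attention: each head projects the inputs into its own representation subspace and the head outputs are concatenated; the final output projection used in the paper is omitted here.

```python
# Multi-head attention sketch; reuses self_attention() from the previous sketch.
import numpy as np

def multi_head_attention(X, heads):
    # 'heads' is a list of (Wq, Wk, Wv) projection triples, one per head
    outputs = [self_attention(X, Wq, Wk, Wv) for Wq, Wk, Wv in heads]
    return np.concatenate(outputs, axis=-1)   # concatenate the per-head outputs

d_model, d_k, n_heads = 64, 16, 4
X = np.random.rand(7, d_model)
heads = [tuple(np.random.rand(d_model, d_k) for _ in range(3)) for _ in range(n_heads)]
print(multi_head_attention(X, heads).shape)   # (7, 64) = 4 heads x 16 dims each
```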
  12. Transformers Summary
     • Easier to train (parallel training)
     • No vanishing or exploding gradients
     • Allows transfer learning
  13. Original Papers and Presentations...
     • Attention Is All You Need
     • Long Short-Term Memory
     • Attn: Illustrated Attention
     • Illustrated Guide to Transformers
     • Attentional Neural Network Model
     • TransCoder: Facebook's Unsupervised Programming Language Translator
  14. When is having attention better than having memory?
     linkedin.com/in/luciopassos/ | linkedin.com/in/leoap/
     Thank you!