
Code 7 Landmark NLP Papers in PyTorch (Full NMT Course)
This course is a comprehensive journey through the evolution of sequence models and neural machine translation (NMT). It blends historical breakthroughs, architectural innovations, mathematical insights, and hands-on PyTorch replications of landmark papers that shaped modern NLP and AI.
The course features:
- A detailed narrative tracing the history and breakthroughs of RNNs, LSTMs, GRUs, Seq2Seq, Attention, GNMT, and Multilingual NMT.
- Replications of 7 landmark NMT papers in PyTorch, so learners can code along and rebuild history step by step (see the minimal sketch after this list).
- Explanations of the math behind RNNs, LSTMs, GRUs, and Transformers.
- Conceptual clarity with architectural comparisons, visual explanations, and interactive demos like the Transformer Playground.
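As a taste of what the PyTorch labs build, here is a minimal, self-contained sketch of the GRU encoder–decoder pattern behind several of these papers. All names and sizes are illustrative assumptions, not code from the course repo:

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hid_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)
    def forward(self, src):
        # src: (batch, src_len) token ids; the final hidden state summarizes the source
        _, hidden = self.gru(self.embed(src))
        return hidden

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hid_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)
    def forward(self, tgt, hidden):
        # teacher forcing: condition each step on the gold target prefix
        output, hidden = self.gru(self.embed(tgt), hidden)
        return self.out(output), hidden

# Toy usage
enc, dec = Encoder(1000, 32, 64), Decoder(1000, 32, 64)
src = torch.randint(0, 1000, (2, 7))  # 2 toy source sentences of length 7
tgt = torch.randint(0, 1000, (2, 5))  # matching target sentences of length 5
logits, _ = dec(tgt, enc(src))
print(logits.shape)  # torch.Size([2, 5, 1000]), per-token logits over the vocab

The labs in the course extend this skeleton with attention, large vocabularies, and multilingual training.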
🔗 Atlas Page: https://programming-ocean.com/knowledge-hub/neural-machine-translation-atlas.php
💻 Code Source on GitHub: https://github.com/MOHAMMEDFAHD/Pytorch-Collections/tree/main/Neural-Machine-Translation
❤️ Support for this channel comes from our friends at Scrimba – the coding platform that's reinvented interactive learning: https://scrimba.com/freecodecamp
⭐️ Chapters ⭐️
– 0:01:06 Welcome
– 0:04:27 Intro to Atlas
– 0:09:25 Evolution of RNN
– 0:15:08 Evolution of Machine Translation
– 0:26:56 Machine Translation Techniques
– 0:34:28 Long Short-Term Memory (Overview)
– 0:52:36 Learning Phrase Representations using RNN (Encoder–Decoder for SMT)
– 1:00:46 Learning Phrase Representations (PyTorch Lab – Replicating Cho et al., 2014)
– 1:23:45 Seq2Seq Learning with Neural Networks
– 1:45:06 Seq2Seq (PyTorch Lab – Replicating Sutskever et al., 2014)
– 2:01:45 NMT by Jointly Learning to Align & Translate (Bahdanau et al., 2015)
– 2:32:36 NMT by Jointly Learning to Align & Translate (PyTorch Lab – Replicating Bahdanau et al., 2015)
– 2:42:45 On Using Very Large Target Vocabulary (Jean et al., 2015)
– 3:03:45 Large Vocabulary NMT (PyTorch Lab – Replicating Jean et al., 2015)
– 3:24:56 Effective Approaches to Attention-based NMT (Luong et al., 2015)
– 3:44:06 Attention Approaches (PyTorch Lab – Replicating Luong et al., 2015)
– 4:03:17 Long Short-Term Memory Network (Deep Explanation)
– 4:28:13 Attention Is All You Need (Vaswani et al., 2017)
– 4:47:46 Google Neural Machine Translation System (GNMT – Wu et al., 2016)
– 5:12:38 GNMT (PyTorch Lab – Replicating Wu et al., 2016)
– 5:29:46 Google’s Multilingual NMT (Johnson et al., 2017)
– 6:00:46 Multilingual NMT (PyTorch Lab – Replicating Johnson et al., 2017)
– 6:15:49 Transformer vs GPT vs BERT Architectures
– 6:36:38 Transformer Playground (Tool Demo)
– 6:38:31 Seq2Seq Idea from Google Translate Tool
– 6:49:31 RNN, LSTM, GRU Architectures (Comparisons)
– 7:01:08 LSTM & GRU Equations (summarized in standard notation below)
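For quick reference, the gate equations that final chapter walks through, in standard notation (\sigma is the logistic sigmoid, \odot is elementwise multiplication); this is a textbook summary, not a transcript of the video:

LSTM:
f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)            (forget gate)
i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)            (input gate)
o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)            (output gate)
\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)     (candidate cell state)
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
h_t = o_t \odot \tanh(c_t)

GRU:
z_t = \sigma(W_z x_t + U_z h_{t-1} + b_z)            (update gate)
r_t = \sigma(W_r x_t + U_r h_{t-1} + b_r)            (reset gate)
\tilde{h}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h)
h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t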
🎉 Thanks to our Champion and Sponsor supporters:
👾 Drake Milly
👾 Ulises Moralez
👾 Goddard Tan
👾 David MG
👾 Matthew Springman
👾 Claudio
👾 Oscar R.
👾 jedi-or-sith
👾 Nattira Maneerat
👾 Justin Hual
--
Learn to code for free and get a developer job: https://www.freecodecamp.org
Read hundreds of articles on programming: https://freecodecamp.org/news