English | MP4 | AVC 1280×720 | AAC 44 kHz 2ch | 52 Lessons (10h 13m) | 2.53 GB
Learn how to apply state-of-the-art transformer-based models including BERT and GPT to solve modern NLP tasks.
Introduction to Transformer Models for NLP LiveLessons provides a comprehensive overview of transformers and the mechanisms (attention, embedding, and tokenization) that set the stage for state-of-the-art NLP models like BERT and GPT to flourish. The focus of these lessons is on building a practical, comprehensive, and functional understanding of transformer architectures and how they are used to create modern NLP pipelines. Throughout the series, instructor Sinan Ozdemir brings the theory to life through illustrations, worked mathematical examples, and straightforward Python examples within Jupyter notebooks.
All lessons in the course are grounded in real-life case studies and hands-on code examples. After completing the course, you will be in a great position to understand and build cutting-edge NLP pipelines using transformers. You will also be provided with extensive resources and curriculum details, all of which can be found in the course's GitHub repository.
Learn How To:
- Recognize which type of transformer-based model is best for a given task
- Understand how transformers process text and make predictions
- Fine-tune a transformer-based model
- Create pipelines using fine-tuned models (see the sketch after this list)
- Deploy fine-tuned models and use them in production
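The last three bullets compress into a few lines with the Hugging Face transformers library. Below is a minimal sketch, not code from the course itself; the checkpoint name my-org/my-finetuned-bert is a placeholder for your own fine-tuned model:

```python
from transformers import pipeline

# Load a fine-tuned checkpoint (placeholder name) into a ready-to-use pipeline
classifier = pipeline("text-classification", model="my-org/my-finetuned-bert")

print(classifier("Transformers make building NLP pipelines remarkably simple."))
# -> [{'label': '...', 'score': ...}]
```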
Who Should Take This Course:
- Intermediate/advanced machine learning engineers with experience with ML, neural networks, and NLP
- Those interested in state-of-the-art NLP architecture
- Those interested in productionizing NLP models
- Those comfortable using libraries like TensorFlow or PyTorch
- Those comfortable with linear algebra and vector/matrix operations
Topics:
- Introduction to Attention and Language Models
- How Transformers Use Attention to Process Text
- Transfer Learning
- Natural Language Understanding with BERT
- Pre-training and Fine-tuning BERT
- Hands-on BERT
- Natural Language Generation with GPT
- Hands-on GPT
- Further Applications of BERT + GPT
- T5 – Back to Basics
- Hands-on T5
- The Vision Transformer
- Deploying Transformer Models
Table of Contents
Introduction
1 Introduction to Transformer Models for NLP: Introduction
Lesson 1: Introduction to Attention and Language Models
2 Topics
3 1.1 A brief history of NLP
4 1.2 Paying attention with attention
5 1.3 Encoder-decoder architectures
6 1.4 How language models look at text
Lesson 2: How Transformers Use Attention to Process Text
7 Topics
8 2.1 Introduction to transformers
9 2.2 Scaled dot product attention
10 2.3 Multi-headed attention
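For a sense of what Lessons 2.2 and 2.3 build toward, here is a from-scratch sketch of scaled dot-product attention, softmax(QK^T / sqrt(d_k))V, in PyTorch. It assumes toy shapes and is illustrative only; the course's own notebooks may structure it differently:

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.size(-1)
    # Similarity of every query with every key, scaled to keep the softmax stable
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)
    weights = F.softmax(scores, dim=-1)  # attention weights sum to 1 per query
    return weights @ V                   # weighted average of the values

Q = K = V = torch.randn(1, 5, 64)        # toy batch: 5 tokens, 64-dim vectors
print(scaled_dot_product_attention(Q, K, V).shape)  # torch.Size([1, 5, 64])
```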
Lesson 3: Transfer Learning
11 Topics
12 3.1 Introduction to Transfer Learning
13 3.2 Introduction to PyTorch
14 3.3 Fine-tuning transformers with PyTorch
Lesson 4: Natural Language Understanding with BERT
15 Topics
16 4.1 Introduction to BERT
17 4.2 Wordpiece tokenization
18 4.3 The many embeddings of BERT
Lesson 5: Pre-training and Fine-tuning BERT
19 Topics
20 5.1 The Masked Language Modeling Task
21 5.2 The Next Sentence Prediction Task
22 5.3 Fine-tuning BERT to solve NLP tasks
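A quick way to preview the masked language modeling objective from Lesson 5.1 is Hugging Face's fill-mask pipeline. This sketch is illustrative and simply uses the public bert-base-uncased checkpoint:

```python
from transformers import pipeline

# BERT predicts the token hidden behind [MASK] -- the MLM pre-training task
unmasker = pipeline("fill-mask", model="bert-base-uncased")
for pred in unmasker("Pre-training teaches BERT general [MASK] representations."):
    print(pred["token_str"], round(pred["score"], 3))
```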
Lesson 6: Hands-on BERT
23 Topics
24 6.1 Flavors of BERT
25 6.2 BERT for sequence classification
26 6.3 BERT for token classification
27 6.4 BERT for question answering
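As a taste of Lesson 6.2, the sketch below loads BERT with a sequence classification head. Note that the head attached here is randomly initialized; in practice you would fine-tune it (or load an already fine-tuned checkpoint) before trusting its predictions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# num_labels=2 attaches a fresh (untrained) binary classification head
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

inputs = tokenizer("A single sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index (0 or 1)
```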
Lesson 7: Natural Language Generation with GPT
28 Topics
29 7.1 Introduction to the GPT family
30 7.2 Masked multi-headed attention
31 7.3 Pre-training GPT
32 7.4 Few-shot learning
Lesson 8: Hands-on GPT
33 Topics
34 8.1 GPT for style completion
35 8.2 GPT for code dictation
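Lesson 8.1's style completion boils down to open-ended text generation. A minimal sketch with the public gpt2 checkpoint (the course fine-tunes for specific styles; this only shows the mechanics):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("Once upon a midnight dreary,", max_new_tokens=30, do_sample=True)
print(out[0]["generated_text"])
```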
Lesson 9: Further Applications of BERT + GPT
36 Topics
37 9.1 Siamese BERT-networks for semantic searching
38 9.2 Teaching GPT multiple tasks at once with prompt engineering
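Siamese BERT networks (Lesson 9.1) are what the sentence-transformers library packages up. Here is a hedged sketch of semantic search, assuming the public all-MiniLM-L6-v2 checkpoint stands in for whatever model the course uses:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
docs = ["BERT excels at understanding text.", "GPT excels at generating text."]

# Encode documents and query into the same embedding space
doc_emb = model.encode(docs, convert_to_tensor=True)
query_emb = model.encode("Which model is best for generation?", convert_to_tensor=True)

scores = util.cos_sim(query_emb, doc_emb)[0]  # cosine similarity per document
print(docs[int(scores.argmax())])             # best-matching document
```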
Lesson 10: T5 – Back to Basics
39 Topics
40 10.1 Encoders and decoders welcome: T5’s architecture
41 10.2 Cross-attention
Lesson 11: Hands-on T5
42 Topics
43 11.1 Off-the-shelf results with T5
44 11.2 Using T5 for abstractive summarization
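Lesson 11.2's abstractive summarization maps directly onto the summarization pipeline. A minimal sketch with the public t5-small checkpoint (the pipeline automatically applies T5's "summarize:" prefix):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")
text = (
    "Transformers process entire sequences in parallel using attention, "
    "replacing the recurrence of earlier NLP architectures and enabling "
    "large-scale pre-training on unlabeled text."
)
print(summarizer(text, max_length=30, min_length=5)[0]["summary_text"])
```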
Lesson 12: The Vision Transformer
45 Topics
46 12.1 Introduction to the Vision Transformer (ViT)
47 12.2 Fine-tuning an image captioning system
Lesson 13: Deploying Transformer Models
48 Topics
49 13.1 Introduction to MLOps
50 13.2 Sharing our models on HuggingFace
51 13.3 Deploying a fine-tuned BERT model using FastAPI
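To preview Lesson 13.3, here is a skeletal FastAPI service around a fine-tuned BERT classifier. This is a sketch rather than the course's code; the checkpoint name is a placeholder, and you would serve the app with uvicorn (e.g., uvicorn app:app):

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
# Placeholder checkpoint -- point this at your own fine-tuned model
classifier = pipeline("text-classification", model="my-org/my-finetuned-bert")

class PredictRequest(BaseModel):
    text: str

@app.post("/predict")
def predict(req: PredictRequest):
    # Returns e.g. {"label": "...", "score": 0.98}
    return classifier(req.text)[0]
```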
Summary
52 Introduction to Transformer Models for NLP: Summary