English | MP4 | AVC 1920×1080 | AAC 44 kHz 2ch | 39 Lessons (6h 33m) | 1.91 GB
Learn the Hugging Face ecosystem from scratch, including Transformers, Datasets, the Hub, Spaces, and more, by building and customizing your own AI text classification model and launching it for real-world use!
Gain hands-on experience using Hugging Face to solve practical problems with your own AI text classification model!
What you’ll learn
- How to prepare and process datasets using Hugging Face Datasets
- Techniques for training and fine-tuning text classification models with Hugging Face Transformers
- Methods for evaluating model performance using Hugging Face Evaluate
- Steps to deploy your trained model to the Hugging Face Hub
- How to create interactive demos for machine learning models using Gradio
- Practical experience in the full lifecycle of a machine learning project, from data preparation to deployment
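The first data-preparation steps listed above (turning string labels into numbers and creating train/test splits) can be sketched in plain Python. This is an illustrative toy example, not the course's actual dataset or code: the sample texts, labels, and variable names are made up, and in the course itself these steps use Hugging Face Datasets (e.g. its `train_test_split()` method) rather than manual shuffling.

```python
import random

# Toy records standing in for a Hugging Face Dataset (illustrative labels).
samples = [
    {"text": "Patient reports mild headache.", "label": "medical"},
    {"text": "Invoice attached for Q3.", "label": "billing"},
    {"text": "Severe chest pain, urgent.", "label": "medical"},
    {"text": "Refund request for order 123.", "label": "billing"},
]

# Turn string labels into integer ids (and back) -- the same idea as the
# label2id / id2label mappings a Transformers model config expects.
label_names = sorted({s["label"] for s in samples})
label2id = {name: i for i, name in enumerate(label_names)}
id2label = {i: name for name, i in label2id.items()}

for s in samples:
    s["label_id"] = label2id[s["label"]]

# Simple shuffled 75/25 train/test split (Datasets offers train_test_split()).
random.seed(42)
random.shuffle(samples)
split = int(0.75 * len(samples))
train_set, test_set = samples[:split], samples[split:]

print(label2id)                      # {'billing': 0, 'medical': 1}
print(len(train_set), len(test_set))  # 3 1
```

The integer ids are what the model trains on; `id2label` is kept so predictions can be mapped back to human-readable class names at inference time.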
Table of Contents
1 Introduction (Hugging Face Ecosystem and Text Classification)
2 More Text Classification Examples
3 What We’re Going To Build!
4 Getting Set Up: Adding Hugging Face Tokens to Google Colab
5 Getting Set Up: Importing Necessary Libraries to Google Colab
6 Downloading a Text Classification Dataset from Hugging Face Datasets
7 Preparing Text Data for Use with a Model – Part 1: Turning Our Labels into Numbers
8 Preparing Text Data for Use with a Model – Part 2: Creating Train and Test Sets
9 Preparing Text Data for Use with a Model – Part 3: Getting a Tokenizer
10 Preparing Text Data for Use with a Model – Part 4: Exploring Our Tokenizer
11 Preparing Text Data for Use with a Model – Part 5: Creating a Function to Tokenize Our Data
12 Setting Up an Evaluation Metric (to measure how well our model performs)
13 Introduction to Transfer Learning (a powerful technique to get good results quickly)
14 Model Training – Part 1: Setting Up a Pretrained Model from the Hugging Face Hub
15 Model Training – Part 2: Counting the Parameters in Our Model
16 Model Training – Part 3: Creating a Folder to Save Our Model
17 Model Training – Part 4: Setting Up Our Training Arguments with TrainingArguments
18 Model Training – Part 5: Setting Up an Instance of Trainer with Hugging Face Transformers
19 Model Training – Part 6: Training Our Model and Fixing Errors Along the Way
20 Model Training – Part 7: Inspecting Our Model's Loss Curves
21 Model Training – Part 8: Uploading Our Model to the Hugging Face Hub
22 Making Predictions on the Test Data with Our Trained Model
23 Turning Our Predictions into Prediction Probabilities with PyTorch
24 Sorting Our Model’s Predictions by Their Probability
25 Performing Inference – Part 1: Discussing Our Options
26 Performing Inference – Part 2: Using a Transformers Pipeline (one sample at a time)
27 Performing Inference – Part 3: Using a Transformers Pipeline on Multiple Samples at a Time (Batching)
28 Performing Inference – Part 4: Running Speed Tests to Compare One at a Time vs. Batched Predictions
29 Performing Inference – Part 5: Performing Inference with PyTorch
30 OPTIONAL – Putting It All Together: From Data Loading to Model Training to Making Predictions on Custom Data
31 Turning Our Model into a Demo – Part 1: Gradio Overview
32 Turning Our Model into a Demo – Part 2: Building a Function to Map Inputs to Outputs
33 Turning Our Model into a Demo – Part 3: Getting Our Gradio Demo Running Locally
34 Making Our Demo Publicly Accessible – Part 1: Introduction to Hugging Face Spaces and Creating a Demos Directory
35 Making Our Demo Publicly Accessible – Part 2: Creating an App File
36 Making Our Demo Publicly Accessible – Part 3: Creating a README File
37 Making Our Demo Publicly Accessible – Part 4: Making a Requirements File
38 Making Our Demo Publicly Accessible – Part 5: Uploading Our Demo to Hugging Face Spaces and Making it Publicly Available
39 Summary, Exercises and Extensions
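Lessons 23 and 24 cover turning raw model outputs (logits) into probabilities and ranking them. The course does this with PyTorch's `torch.softmax`; the sketch below implements the same softmax idea in plain Python so the mechanics are visible. The logits and class names here are made up for illustration, not real model output.

```python
import math

def softmax(logits):
    """Convert raw logits to probabilities (mirrors torch.softmax(x, dim=-1))."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits for a three-class classifier (hypothetical labels).
logits = [2.0, 0.5, -1.0]
probs = softmax(logits)

id2label = {0: "positive", 1: "neutral", 2: "negative"}

# Sort predictions by probability, highest first (the idea behind lesson 24).
ranked = sorted(zip(id2label.values(), probs), key=lambda p: p[1], reverse=True)
print(ranked[0][0])  # most likely class: positive
```

Because softmax is monotonic, sorting by probability gives the same order as sorting by logit; the probabilities are still useful because they sum to 1 and can be read as the model's confidence per class.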