English | MP4 | AVC 1280×720 | AAC 44KHz 2ch | 230 lectures (25h 16m) | 20.87 GB
Become an LLM Engineer in 8 weeks: Build and deploy 8 LLM apps, mastering Generative AI and key theoretical concepts.
Mastering Generative AI and LLMs: An 8-Week Hands-On Journey
Accelerate your career in AI with practical, real-world projects led by industry veteran Ed Donner. Build advanced Generative AI products, experiment with over 20 groundbreaking models, and master state-of-the-art techniques like RAG, QLoRA, and Agents.
What you’ll learn
- Build advanced Generative AI products using cutting-edge models and frameworks.
- Experiment with over 20 groundbreaking AI models, including Frontier and Open-Source models.
- Develop proficiency with platforms like HuggingFace, LangChain, and Gradio.
- Implement state-of-the-art techniques such as RAG (Retrieval-Augmented Generation), QLoRA fine-tuning, and Agents.
- Create real-world AI applications, including:
  - A multi-modal customer support assistant that interacts with text, sound, and images.
  - An AI knowledge worker that can answer any question about a company based on its shared drive.
  - An AI programmer that optimizes software, achieving performance improvements of over 60,000 times.
  - An ecommerce application that accurately predicts prices of unseen products.
- Transition from inference to training, fine-tuning both Frontier and Open-Source models.
- Deploy AI products to production with polished user interfaces and advanced capabilities.
- Level up your AI and LLM engineering skills to be at the forefront of the industry.
Projects:
- Project 1: AI-powered brochure generator that scrapes and navigates company websites intelligently.
- Project 2: Multi-modal customer support agent for an airline with UI and function-calling.
- Project 3: Tool that creates meeting minutes and action items from audio using both open- and closed-source models.
- Project 4: AI that converts Python code to optimized C++, boosting performance by 60,000x!
- Project 5: AI knowledge-worker using RAG to become an expert on all company-related matters.
- Project 6: Capstone Part A – Predict product prices from short descriptions using Frontier models.
- Project 7: Capstone Part B – Fine-tuned open-source model to compete with Frontier in price prediction.
- Project 8: Capstone Part C – Autonomous agent system collaborating with models to spot deals and notify you of special bargains.
Table of Contents
Five Pre-Black Friday Extras (Download Them Now)
1 Data Science & LLM Career Guide – Extra Video
2 Difference Between ML, DL, and AI
3 Future AI Trends – Extra Video
4 Practical Guide Building LLM Apps Using Tools like PyCharm and Streamlit
5 Types of Neural Networks
Week 1 – Build Your First LLM Product Exploring Top Models & Transformers
6 Day 1 – Cold Open Jumping Right into LLM Engineering
7 Day 1 – Setting Up Ollama Running Your First LLM Locally
8 Day 1 – Exploring Ollama and Building a Language Tutor
9 Day 1 – Overview of the 8-Week LLM Engineering Journey
10 Day 1 – Commercial Projects and Exercises Applying Your Skills
11 Day 1 – Introduction to Your Instructor Ed Donner’s Background
12 Day 1 – Starting Your LLM Environment Setup
13 Day 1 – Mac Setup Creating a Data Science Environment with Anaconda
14 Day 1 – Windows Setup Building Your Data Science Environment
15 Day 1 – Alternative Setup Using Virtualenv for Python Environment
16 Day 1 – Setting Up OpenAI API Connecting to Powerful Frontier Models
17 Day 1 – Creating a .env File for Storing API Keys Safely
18 Day 1 – Running JupyterLab for Interactive Coding Projects
19 Day 1 – First LLM Experiment Building a Summarization Project
20 Day 1 – Wrapping Up Day 1 Key Takeaways and Next Steps in LLM Engineering
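The Day 1 entries above walk through storing API keys safely in a .env file before connecting to the OpenAI API. A library such as python-dotenv normally handles this; the stdlib-only parser below is a simplified sketch of the same idea, and the `load_env` name is illustrative, not the course's code:

```python
from pathlib import Path

def load_env(path: str = ".env") -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env: dict[str, str] = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Strip surrounding quotes, which .env files sometimes use.
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env
```

The point of the pattern is that keys live in a git-ignored file rather than in notebook cells, so they never end up in version control.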
21 Day 2 – Recap and Day’s Overview Building on Day 1 Achievements
22 Day 2 – Introduction to Frontier LLMs Exploring Cutting-Edge Models
23 Day 2 – Exercise Using Python to Call Ollama Locally for Summarization
24 Day 2 – Wrapping Up Summarization with OpenAI and Ollama Compared
25 Day 3 – Introduction to Frontier Models Understanding LLM Capabilities
26 Day 3 – Comparing Leading LLMs Strengths and Business Applications
27 Day 3 – Exploring GPT-4o vs O1 Preview Key Differences in Performance
28 Day 3 – Creativity and Coding Leveraging GPT-4o’s Canvas Feature
29 Day 3 – Claude 3.5’s Alignment and Artifact Creation A Deep Dive
30 Day 3 – Insights into Gemini and Cohere Strengths and Limitations
31 Day 3 – Evaluating Meta AI and Perplexity Nuances of Model Outputs
32 Day 3 – Wrapping Up Day 3 Key Learnings and Comparative Analysis
33 Day 4 – Revealing the Leadership Winner A Fun LLM Challenge
34 Day 4 – Exploring the Journey of AI From Early Models to Transformers
35 Day 4 – Understanding LLM Parameters From Weights to Model Complexity
36 Day 4 – Tokens Explained The Building Blocks of LLM Input and Output
37 Day 4 – The Context Window Managing Conversation Length in LLMs
38 Day 4 – API Costs Demystified Budgeting for Your LLM Projects
39 Day 4 – Comparing Model Costs and Context Window Capabilities
40 Day 4 – Wrapping Up Day 4 Key Takeaways and Practical Insights
41 Day 5 – Kicking Off Day 5 Overview of Brochure Project and Objectives
42 Day 5 – Back to JupyterLab Preparing for the Business Brochure Project
43 Day 5 – Gathering Information from Web Links Using LLMs
44 Day 5 – Creating and Formatting Responses for Brochure Content
45 Day 5 – Final Adjustments Optimizing Markdown and Streaming in JupyterLab
46 Day 5 – Challenge Enhancing the Project with Multi-shot Prompting
47 Day 5 – Assignment Developing Your Customized LLM-Based Tutor
48 Day 5 – Wrapping Up Week 1 Achievements and Next Steps
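The Week 1 brochure project above starts by gathering links from a company website before asking an LLM which ones belong in a brochure. A minimal stdlib sketch of that link-collection step, using `html.parser` rather than whatever scraping stack the lectures use (`LinkCollector` and the sample HTML are illustrative):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags, as a scraper might before
    handing the link list to an LLM for relevance filtering."""
    def __init__(self):
        super().__init__()
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = '<a href="/about">About</a> <a href="/careers">Careers</a>'
parser = LinkCollector()
parser.feed(html)
print(parser.links)  # -> ['/about', '/careers']
```

In the full project the resulting list would be passed to the model in a prompt, which decides which pages to fetch for brochure content.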
Week 2 – Build a Multi-Modal Chatbot LLMs, Gradio UI, and Agents in Action
49 Day 1 – Mastering Multiple AI APIs OpenAI, Claude, and Gemini for LLM Engineers
50 Day 1 – Streaming AI Responses Implementing Real-Time LLM Output in Python
51 Day 1 – How to Create Adversarial AI Conversations Using OpenAI and Claude APIs
52 Day 1 – AI Tools Exploring Transformers & Frontier LLMs for Developers
53 Day 2 – Building AI UIs with Gradio Quick Prototyping for LLM Engineers
54 Day 2 – Gradio Tutorial Create Interactive AI Interfaces for OpenAI GPT Models
55 Day 2 – Implementing Streaming Responses with GPT and Claude in Gradio UI
56 Day 2 – Building a Multi-Model AI Chat Interface with Gradio GPT vs Claude
57 Day 2 – Building Advanced AI UIs From OpenAI API to Chat Interfaces with Gradio
58 Day 3 – Building AI Chatbots Mastering Gradio for Customer Support Assistants
59 Day 3 – Build a Conversational AI Chatbot with OpenAI & Gradio Step-by-Step
60 Day 3 – Enhancing Chatbots with Multi-Shot Prompting and Context Enrichment
61 Day 3 – Mastering AI Tools Empowering LLMs to Run Code on Your Machine
62 Day 4 – Using AI Tools with LLMs Enhancing Large Language Model Capabilities
63 Day 4 – Building an AI Airline Assistant Implementing Tools with OpenAI GPT-4
64 Day 4 – How to Equip LLMs with Custom Tools OpenAI Function Calling Tutorial
65 Day 4 – Mastering AI Tools Building Advanced LLM-Powered Assistants with APIs
66 Day 5 – Multimodal AI Assistants Integrating Image and Sound Generation
67 Day 5 – Multimodal AI Integrating DALL-E 3 Image Generation in JupyterLab
68 Day 5 – Build a Multimodal AI Agent Integrating Audio & Image Tools
69 Day 5 – How to Build a Multimodal AI Assistant Integrating Tools and Agents
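The Week 2 Day 4 entries above cover equipping LLMs with custom tools via OpenAI function calling. That involves two halves: a JSON-schema tool definition in the shape the chat-completions API expects, and local code that executes the call the model requests. The sketch below mirrors the airline-assistant example from the lecture titles; the `get_ticket_price` tool, its prices, and all names are invented for illustration:

```python
import json

def get_ticket_price(destination_city: str) -> str:
    """Toy implementation of the tool the model can ask us to run."""
    prices = {"london": "$799", "paris": "$899"}
    return prices.get(destination_city.lower(), "unknown")

# OpenAI-style tool schema the model would be shown (structure follows the
# chat-completions "tools" format; this is a sketch, not the course's code).
price_tool = {
    "type": "function",
    "function": {
        "name": "get_ticket_price",
        "description": "Get the price of a return ticket to a city.",
        "parameters": {
            "type": "object",
            "properties": {"destination_city": {"type": "string"}},
            "required": ["destination_city"],
        },
    },
}

TOOLS = {"get_ticket_price": get_ticket_price}

def handle_tool_call(name: str, arguments_json: str) -> str:
    """Dispatch a model-issued tool call to the matching Python function.
    The model returns the tool name and JSON-encoded arguments."""
    args = json.loads(arguments_json)
    return TOOLS[name](**args)

print(handle_tool_call("get_ticket_price", '{"destination_city": "Paris"}'))  # -> $899
```

The model never runs code itself: it emits a structured request, your code executes it, and the result goes back into the conversation as a tool message.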
Week 3 – Open-Source Gen AI Building Automated Solutions with HuggingFace
70 Day 1 – Hugging Face Tutorial Exploring Open-Source AI Models and Datasets
71 Day 1 – Exploring HuggingFace Hub Models, Datasets & Spaces for AI Developers
72 Day 1 – Intro to Google Colab Cloud Jupyter Notebooks for Machine Learning
73 Day 1 – Hugging Face Integration with Google Colab Secrets and API Keys Setup
74 Day 1 – Mastering Google Colab Run Open-Source AI Models with Hugging Face
75 Day 2 – Hugging Face Transformers Using Pipelines for AI Tasks in Python
76 Day 2 – Hugging Face Pipelines Simplifying AI Tasks with Transformers Library
77 Day 2 – Mastering HuggingFace Pipelines Efficient AI Inference for ML Tasks
78 Day 3 – Exploring Tokenizers in Open-Source AI Llama, Phi-2, Qwen, & Starcoder
79 Day 3 – Tokenization Techniques in AI Using AutoTokenizer with LLAMA 3.1 Model
80 Day 3 – Comparing Tokenizers Llama, PHI-3, and QWEN2 for Open-Source AI Models
81 Day 3 – Hugging Face Tokenizers Preparing for Advanced AI Text Generation
82 Day 4 – Hugging Face Model Class Running Inference on Open-Source AI Models
83 Day 4 – Hugging Face Transformers Loading & Quantizing LLMs with Bits & Bytes
84 Day 4 – Hugging Face Transformers Generating Jokes with Open-Source AI Models
85 Day 4 – Mastering Hugging Face Transformers Models, Pipelines, and Tokenizers
86 Day 5 – Combining Frontier & Open-Source Models for Audio-to-Text Summarization
87 Day 5 – Using Hugging Face & OpenAI for AI-Powered Meeting Minutes Generation
88 Day 5 – Build a Synthetic Test Data Generator Open-Source AI Model for Business
Week 4 – LLM Showdown Evaluating Models for Code Generation & Business Tasks
89 Day 1 – How to Choose the Right LLM Comparing Open and Closed Source Models
90 Day 1 – Chinchilla Scaling Law Optimizing LLM Parameters and Training Data Size
91 Day 1 – Limitations of LLM Benchmarks Overfitting and Training Data Leakage
92 Day 1 – Evaluating Large Language Models 6 Next-Level Benchmarks Unveiled
93 Day 1 – HuggingFace OpenLLM Leaderboard Comparing Open-Source Language Models
94 Day 1 – Master LLM Leaderboards Comparing Open Source and Closed Source Models
95 Day 2 – Comparing LLMs Top 6 Leaderboards for Evaluating Language Models
96 Day 2 – Specialized LLM Leaderboards Finding the Best Model for Your Use Case
97 Day 2 – LLAMA vs GPT-4 Benchmarking Large Language Models for Code Generation
98 Day 2 – Human-Rated Language Models Understanding the LMSYS Chatbot Arena
99 Day 2 – Commercial Applications of Large Language Models From Law to Education
100 Day 2 – Comparing Frontier and Open-Source LLMs for Code Conversion Projects
101 Day 3 – Leveraging Frontier Models for High-Performance Code Generation in C++
102 Day 3 – Comparing Top LLMs for Code Generation GPT-4 vs Claude 3.5 Sonnet
103 Day 3 – Optimizing Python Code with Large Language Models GPT-4 vs Claude 3.5
104 Day 3 – Code Generation Pitfalls When Large Language Models Produce Errors
105 Day 3 – Blazing Fast Code Generation How Claude Outperforms Python by 13,000x
106 Day 3 – Building a Gradio UI for Code Generation with Large Language Models
107 Day 3 – Optimizing C++ Code Generation Comparing GPT and Claude Performance
108 Day 3 – Comparing GPT-4 and Claude for Code Generation Performance Benchmarks
109 Day 4 – Open Source LLMs for Code Generation Hugging Face Endpoints Explored
110 Day 4 – How to Use HuggingFace Inference Endpoints for Code Generation Models
111 Day 4 – Integrating Open-Source Models with Frontier LLMs for Code Generation
112 Day 4 – Comparing Code Generation GPT-4, Claude, and CodeQwen LLMs
113 Day 4 – Mastering Code Generation with LLMs Techniques and Model Selection
114 Day 5 – Evaluating LLM Performance Model-Centric vs Business-Centric Metrics
115 Day 5 – Mastering LLM Code Generation Advanced Challenges for Python Developers
Week 5 – Mastering RAG Build Advanced Solutions with Vector Embeddings & LangChain
116 Day 1 – RAG Fundamentals Leveraging External Data to Improve LLM Responses
117 Day 1 – Building a DIY RAG System Implementing Retrieval-Augmented Generation
118 Day 1 – Understanding Vector Embeddings The Key to RAG and LLM Retrieval
119 Day 2 – Unveiling LangChain Simplify RAG Implementation for LLM Applications
120 Day 2 – LangChain Text Splitter Tutorial Optimizing Chunks for RAG Systems
121 Day 2 – Preparing for Vector Databases OpenAI Embeddings and Chroma in RAG
122 Day 3 – Mastering Vector Embeddings OpenAI and Chroma for LLM Engineering
123 Day 3 – Visualizing Embeddings Exploring Multi-Dimensional Space with t-SNE
124 Day 3 – Building RAG Pipelines From Vectors to Embeddings with LangChain
125 Day 4 – Mastering Retrieval-Augmented Generation Hands-On LLM Integration
126 Day 4 – Master RAG Pipeline Building Efficient RAG Systems
127 Day 4 – Implementing RAG Pipeline LLM, Retriever, and Memory in LangChain
128 Day 5 – Optimizing RAG Systems Troubleshooting and Fixing Common Problems
129 Day 5 – Switching Vector Stores FAISS vs Chroma in LangChain RAG Pipelines
130 Day 5 – Demystifying LangChain Behind-the-Scenes of RAG Pipeline Construction
131 Day 5 – Debugging RAG Optimizing Context Retrieval in LangChain
132 Day 5 – Build Your Personal AI Knowledge Worker RAG for Productivity Boost
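The RAG week above builds retrieval on top of vector embeddings and a vector store. As a dependency-free toy sketch of the retrieve-then-prompt idea only: real systems use dense embeddings and a store such as Chroma or FAISS, as the lectures cover, whereas the bag-of-words "embedding" and the document snippets here are purely illustrative:

```python
import math
import re
from collections import Counter

DOCS = [  # toy stand-ins for a company's shared-drive snippets
    "Insurellm was founded in 2015 and sells insurance software.",
    "Carllm is Insurellm's product for auto insurance portals.",
    "The company headquarters is in San Francisco.",
]

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by similarity to the question; keep the top k."""
    q = embed(question)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

context = retrieve("When was Insurellm founded?")[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: When was Insurellm founded?"
print(context)
```

The retrieved context is stuffed into the prompt so the model answers from company data instead of its training set, which is the whole trick behind RAG.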
Week 6 – Fine-Tuning Frontier Large Language Models with LoRA/QLoRA
133 Day 1 – Fine-Tuning Large Language Models From Inference to Training
134 Day 1 – Finding and Crafting Datasets for LLM Fine-Tuning Sources & Techniques
135 Day 1 – Data Curation Techniques for Fine-Tuning LLMs on Product Descriptions
136 Day 1 – Optimizing Training Data Scrubbing Techniques for LLM Fine-Tuning
137 Day 1 – Evaluating LLM Performance Model-Centric vs Business-Centric Metrics
138 Day 2 – LLM Deployment Pipeline From Business Problem to Production Solution
139 Day 2 – Prompting, RAG, and Fine-Tuning When to Use Each Approach
140 Day 2 – Productionizing LLMs Best Practices for Deploying AI Models at Scale
141 Day 2 – Optimizing Large Datasets for Model Training Data Curation Strategies
142 Day 2 – How to Create a Balanced Dataset for LLM Training Curation Techniques
143 Day 2 – Finalizing Dataset Curation Analyzing Price-Description Correlations
144 Day 2 – How to Create and Upload a High-Quality Dataset on HuggingFace
145 Day 3 – Feature Engineering and Bag of Words Building ML Baselines for NLP
146 Day 3 – Baseline Models in ML Implementing Simple Prediction Functions
147 Day 3 – Feature Engineering Techniques for Amazon Product Price Prediction Models
148 Day 3 – Optimizing LLM Performance Advanced Feature Engineering Strategies
149 Day 3 – Linear Regression for LLM Fine-Tuning Baseline Model Comparison
150 Day 3 – Bag of Words NLP Implementing Count Vectorizer for Text Analysis in ML
151 Day 3 – Support Vector Regression vs Random Forest Machine Learning Face-Off
152 Day 3 – Comparing Traditional ML Models From Random to Random Forest
153 Day 4 – Evaluating Frontier Models Comparing Performance to Baseline Frameworks
154 Day 4 – Human vs AI Evaluating Price Prediction Performance in Frontier Models
155 Day 4 – GPT-4o Mini Frontier AI Model Evaluation for Price Estimation Tasks
156 Day 4 – Comparing GPT-4 and Claude Model Performance in Price Prediction Tasks
157 Day 4 – Frontier AI Capabilities LLMs Outperforming Traditional ML Models
158 Day 5 – Fine-Tuning LLMs with OpenAI Preparing Data, Training, and Evaluation
159 Day 5 – How to Prepare JSONL Files for Fine-Tuning Large Language Models (LLMs)
160 Day 5 – Step-by-Step Guide Launching GPT Fine-Tuning Jobs with OpenAI API
161 Day 5 – Fine-Tuning LLMs Track Training Loss & Progress with Weights & Biases
162 Day 5 – Evaluating Fine-Tuned LLMs Metrics Analyzing Training & Validation Loss
163 Day 5 – LLM Fine-Tuning Challenges When Model Performance Doesn’t Improve
164 Day 5 – Fine-Tuning Frontier LLMs Challenges & Best Practices for Optimization
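The Day 5 entries above cover preparing JSONL files for OpenAI fine-tuning: one JSON object per line, each holding a short conversation. A small sketch of that packaging step, assuming the chat-style `messages` format OpenAI's fine-tuning endpoint documents; the product descriptions and prices here are invented:

```python
import json

# Toy (description, target answer) pairs for the price-prediction task.
examples = [
    ("OLED 55-inch smart TV with 4K HDR", "Price is $1,299.00"),
    ("Stainless steel 8-quart pressure cooker", "Price is $89.99"),
]

def to_jsonl(rows) -> str:
    """Serialize rows into fine-tuning JSONL: one chat per line."""
    lines = []
    for description, answer in rows:
        record = {
            "messages": [
                {"role": "system", "content": "You estimate prices of items."},
                {"role": "user", "content": description},
                {"role": "assistant", "content": answer},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

jsonl = to_jsonl(examples)
print(jsonl.splitlines()[0])
```

The resulting file is what gets uploaded before launching a fine-tuning job; each line teaches the model one desired completion.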
Week 7 – Fine-Tuned Open-Source Model to Compete with Frontier in Price Prediction
165 Day 1 – Mastering Parameter-Efficient Fine-Tuning LoRA, QLoRA & Hyperparameters
166 Day 1 – Introduction to LoRA Adaptors Low-Rank Adaptation Explained
167 Day 1 – QLoRA Quantization for Efficient Fine-Tuning of Large Language Models
168 Day 1 – Optimizing LLMs R, Alpha, and Target Modules in QLoRA Fine-Tuning
169 Day 1 – Parameter-Efficient Fine-Tuning PEFT for LLMs with Hugging Face
170 Day 1 – How to Quantize LLMs Reducing Model Size with 8-bit Precision
171 Day 1 – Double Quantization & NF4 Advanced Techniques for 4-Bit LLM Optimization
172 Day 1 – Exploring PEFT Models The Role of LoRA Adapters in LLM Fine-Tuning
173 Day 1 – Model Size Summary Comparing Quantized and Fine-Tuned Models
174 Day 2 – How to Choose the Best Base Model for Fine-Tuning Large Language Models
175 Day 2 – Selecting the Best Base Model Analyzing HuggingFace’s LLM Leaderboard
176 Day 2 – Exploring Tokenizers Comparing LLAMA, QWEN, and Other LLM Models
177 Day 2 – Optimizing LLM Performance Loading and Tokenizing Llama 3.1 Base Model
178 Day 2 – Quantization Impact on LLMs Analyzing Performance Metrics and Errors
179 Day 2 – Comparing LLMs GPT-4 vs LLAMA 3.1 in Parameter-Efficient Tuning
180 Day 3 – QLoRA Hyperparameters Mastering Fine-Tuning for Large Language Models
181 Day 3 – Understanding Epochs and Batch Sizes in Model Training
182 Day 3 – Learning Rate, Gradient Accumulation, and Optimizers Explained
183 Day 3 – Setting Up the Training Process for Fine-Tuning
184 Day 3 – Configuring SFTTrainer for 4-Bit Quantized LoRA Fine-Tuning of LLMs
185 Day 3 – Fine-Tuning LLMs Launching the Training Process with QLoRA
186 Day 3 – Monitoring and Managing Training with Weights & Biases
187 Day 4 – Keeping Training Costs Low Efficient Fine-Tuning Strategies
188 Day 4 – Efficient Fine-Tuning Using Smaller Datasets for QLoRA Training
189 Day 4 – Visualizing LLM Fine-Tuning Progress with Weights and Biases Charts
190 Day 4 – Advanced Weights & Biases Tools and Model Saving on Hugging Face
191 Day 4 – End-to-End LLM Fine-Tuning From Problem Definition to Trained Model
192 Day 5 – The Four Steps in LLM Training From Forward Pass to Optimization
193 Day 5 – QLoRA Training Process Forward Pass, Backward Pass and Loss Calculation
194 Day 5 – Understanding Softmax and Cross-Entropy Loss in Model Training
195 Day 5 – Monitoring Fine-Tuning Weights & Biases for LLM Training Analysis
196 Day 5 – Revisiting the Podium Comparing Model Performance Metrics
197 Day 5 – Evaluation of our Proprietary, Fine-Tuned LLM against Business Metrics
198 Day 5 – Visualization of Results Did We Beat GPT-4?
199 Day 5 – Hyperparameter Tuning for LLMs Improving Model Accuracy with PEFT
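The Day 5 entries above explain softmax and cross-entropy loss, the machinery behind every training step in the week. A small stdlib-only worked sketch of both:

```python
import math

def softmax(logits: list[float]) -> list[float]:
    """Numerically stable softmax: subtract the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits: list[float], target: int) -> float:
    """Negative log of the probability the model assigns to the correct token."""
    return -math.log(softmax(logits)[target])

probs = softmax([2.0, 1.0, 0.1])
assert abs(sum(probs) - 1.0) < 1e-9  # softmax always yields a distribution

loss_good = cross_entropy([5.0, 0.0, 0.0], 0)  # confident and correct: low loss
loss_bad = cross_entropy([0.0, 5.0, 0.0], 0)   # confident and wrong: high loss
print(round(loss_good, 3), round(loss_bad, 3))
```

The training loss charts in Weights & Biases are averages of exactly this quantity over the tokens in each batch, which is why a falling curve means the model is assigning more probability to the right next token.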
Week 8 – Build an Autonomous Multi-Agent System Collaborating with Models
200 Day 1 – From Fine-Tuning to Multi-Agent Systems Next-Level LLM Engineering
201 Day 1 – Building a Multi-Agent AI Architecture for Automated Deal Finding Systems
202 Day 1 – Unveiling Modal Deploying Serverless Models to the Cloud
203 Day 1 – LLAMA on the Cloud Running Large Models Efficiently
204 Day 1 – Building a Serverless AI Pricing API Step-by-Step Guide with Modal
205 Day 1 – Multiple Production Models Ahead Preparing for Advanced RAG Solutions
206 Day 2 – Implementing Agentic Workflows Frontier Models and Vector Stores in RAG
207 Day 2 – Building a Massive Chroma Vector Datastore for Advanced RAG Pipelines
208 Day 2 – Visualizing Vector Spaces Advanced RAG Techniques for Data Exploration
209 Day 2 – 3D Visualization Techniques for RAG Exploring Vector Embeddings
210 Day 2 – Finding Similar Products Building a RAG Pipeline without LangChain
211 Day 2 – RAG Pipeline Implementation Enhancing LLMs with Retrieval Techniques
212 Day 2 – Random Forest Regression Using Transformers & ML for Price Prediction
213 Day 2 – Building an Ensemble Model Combining LLM, RAG, and Random Forest
214 Day 2 – Wrap-Up Finalizing Multi-Agent Systems and RAG Integration
215 Day 3 – Enhancing AI Agents with Structured Outputs Pydantic & BaseModel Guide
216 Day 3 – Scraping RSS Feeds Building an AI-Powered Deal Selection System
217 Day 3 – Structured Outputs in AI Implementing GPT-4 for Detailed Deal Selection
218 Day 3 – Optimizing AI Workflows Refining Prompts for Accurate Price Recognition
219 Day 3 – Mastering Autonomous Agents Designing Multi-Agent AI Workflows
220 Day 4 – The 5 Hallmarks of Agentic AI Autonomy, Planning, and Memory
221 Day 4 – Building an Agentic AI System Integrating Pushover for Notifications
222 Day 4 – Implementing Agentic AI Creating a Planning Agent for Automated Workflows
223 Day 4 – Building an Agent Framework Connecting LLMs and Python Code
224 Day 4 – Completing Agentic Workflows Scaling for Business Applications
225 Day 5 – Autonomous AI Agents Building Intelligent Systems Without Human Input
226 Day 5 – AI Agents with Gradio Advanced UI Techniques for Autonomous Systems
227 Day 5 – Finalizing the Gradio UI for Our Agentic AI Solution
228 Day 5 – Enhancing AI Agent UI Gradio Integration for Real-Time Log Visualization
229 Day 5 – Analyzing Results Monitoring Agent Framework Performance
230 Day 5 – AI Project Retrospective 8-Week Journey to Becoming an LLM Engineer
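Week 8 above chains a deal scanner, a pricing model, and a notification step under a planning agent. The sketch below is a stand-in for that pipeline shape, not the course's actual framework; every agent name, price, and threshold is invented:

```python
from dataclasses import dataclass

@dataclass
class Deal:
    description: str
    asking_price: float
    estimated_price: float = 0.0

def scanner_agent() -> list[Deal]:
    """Stand-in for the RSS-scraping agent: returns candidate deals."""
    return [Deal("Toy 4K monitor", 150.0), Deal("Toy laptop", 700.0)]

def pricer_agent(deal: Deal) -> Deal:
    """Stand-in for the fine-tuned pricing model: estimate true value."""
    estimates = {"Toy 4K monitor": 320.0, "Toy laptop": 720.0}
    deal.estimated_price = estimates.get(deal.description, deal.asking_price)
    return deal

def planner(threshold: float = 100.0) -> list[Deal]:
    """Planning agent: run the pipeline, keep deals with a big discount."""
    deals = [pricer_agent(d) for d in scanner_agent()]
    return [d for d in deals if d.estimated_price - d.asking_price >= threshold]

for deal in planner():
    saving = deal.estimated_price - deal.asking_price
    print(f"Notify: {deal.description} (save ${saving:.0f})")
```

The hallmarks the Day 4 lectures name (autonomy, planning, memory) show up even in this toy: the planner decides which sub-agents run and which results warrant a push notification, with no human in the loop.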