English | MP4 | AVC 3840×2160 | AAC 44.1 kHz 2ch | 24 Lessons (3h 50m) | 2.58 GB
Build production-ready AI apps. Write evals to measure LLM and tool accuracy. Implement a Retrieval Augmented Generation (RAG) pipeline and explore how structured outputs give LLM responses a predictable schema (both are sketched below). Manage costs and token limits responsibly with sound context and history management. Strengthen the system's guardrails with human-in-the-loop best practices.
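The listing itself ships no code, but a small sketch shows what a "predictable schema for LLM responses" looks like in practice. Everything here is an assumption, not taken from the course: it uses the Vercel AI SDK (`ai` plus `@ai-sdk/openai`) with a Zod schema, and the movie shape is a hypothetical example.

```ts
// Structured-output sketch (assumed stack: `ai`, `@ai-sdk/openai`, `zod`).
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// The schema the model's response must conform to (hypothetical shape).
const MovieSchema = z.object({
  title: z.string(),
  year: z.number().int(),
  genres: z.array(z.string()),
});

// generateObject validates the model's JSON against the schema,
// so `movie` is a typed object rather than free-form text.
const { object: movie } = await generateObject({
  model: openai('gpt-4o-mini'),
  schema: MovieSchema,
  prompt: 'Recommend one sci-fi movie from the 1980s.',
});

console.log(movie.title, movie.year, movie.genres.join(', '));
```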
Table of Contents
1 Introduction
2 LLM & Agents Review
3 Evals
4 What to Measure with Evals
5 Setting Up an Eval Framework
6 Creating an Eval
7 Viewing Eval Results
8 Handling Evals on Subjective Inputs
9 Eval Multiple Tools
10 RAG Overview
11 RAG Pipeline
12 Create an Upstash Vector Database
13 Ingesting Data into Vector DB
14 Create a Movies Query
15 Create a Movie Search Tool
16 Using Structured Outputs
17 Limitations of Structured Outputs
18 Using Human in the Loop
19 Interpreting Approvals using LLMs
20 Adding Approvals to Agent
21 History Management Strategies
22 Summarizing Messages
23 Advanced RAG & Fine-Tuning
24 Wrapping Up
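Lessons 10–15 build the RAG side: create an Upstash Vector database, ingest movie data, then query it from a search tool. As a taste of that flow, here is a minimal ingest-and-query sketch. It assumes the `@upstash/vector` JS SDK and an index created with a built-in embedding model (so plain text can be upserted as `data`); the record and query text are illustrative, not from the course.

```ts
// RAG ingest/query sketch (assumed: `@upstash/vector`, embedding-enabled index).
import { Index } from '@upstash/vector';

const index = new Index({
  url: process.env.UPSTASH_VECTOR_REST_URL!,
  token: process.env.UPSTASH_VECTOR_REST_TOKEN!,
});

// Ingest: with an embedding-enabled index, Upstash embeds the `data` text server-side.
await index.upsert([
  {
    id: 'movie-1',
    data: 'The Matrix (1999): a hacker learns his world is a simulation.',
    metadata: { title: 'The Matrix', year: 1999 },
  },
]);

// Query: semantic search over the ingested records; a movie-search tool
// would wrap this call and hand the top matches to the LLM as context.
const matches = await index.query({
  data: 'cyberpunk film about simulated reality',
  topK: 3,
  includeMetadata: true,
});

for (const m of matches) {
  console.log(m.score, m.metadata);
}
```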