Open-source LLMs: Uncensored & secure AI locally with RAG

English | MP4 | AVC 1920×1080 | AAC 44KHz 2ch | 86 lectures (10h 1m) | 10.00 GB

Private ChatGPT Alternatives: Llama3, Mistral, and more with Function Calling, RAG, Vector Databases, LangChain, and AI Agents

ChatGPT is useful, but have you noticed that many topics are censored, you are nudged in certain political directions, some harmless questions go unanswered, and your data may not be secure with OpenAI? This is where open-source LLMs like Llama3, Mistral, Grok, Falcon, Phi3, and Command R+ can help!

Are you ready to master the nuances of open-source LLMs and harness their full potential for various applications, from data analysis to creating chatbots and AI agents? Then this course is for you!

Introduction to Open-Source LLMs

This course provides a comprehensive introduction to the world of open-source LLMs. You’ll learn about the differences between open-source and closed-source models and discover why open-source LLMs are an attractive alternative. Topics such as ChatGPT, Llama, and Mistral will be covered in detail. Additionally, you’ll learn about the available LLMs and how to choose the best models for your needs. The course places special emphasis on the disadvantages of closed-source LLMs and the pros and cons of open-source LLMs like Llama3 and Mistral.

Practical Application of Open-Source LLMs

The course guides you through the simplest way to run open-source LLMs locally and what you need for this setup. You will learn about the prerequisites, the installation of LM Studio, and alternative methods for operating LLMs. Furthermore, you will learn how to use open-source models in LM Studio, understand the difference between censored and uncensored LLMs, and explore various use cases. The course also covers finetuning an open-source model with Huggingface or Google Colab and using vision models for image recognition.
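To give a feel for what running an LLM locally looks like in practice, here is a minimal Python sketch. LM Studio (like Ollama) can expose an OpenAI-compatible HTTP endpoint on localhost; the port (1234), endpoint path, and model name below are assumptions that depend on your own setup.

```python
import json
import urllib.request

# Assumed local endpoint; LM Studio's default server port is configurable.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(system_prompt, user_prompt, model="local-model"):
    """Assemble an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.7,
    }

def ask_local_llm(prompt):
    """Send the payload to the local server and return the model's reply."""
    payload = build_chat_request("You are a helpful assistant.", prompt)
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Because the request format is OpenAI-compatible, the same sketch works against other local servers (Ollama, for instance) by changing only the URL and model name.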

Prompt Engineering and Cloud Deployment

An important part of the course is prompt engineering for open-source LLMs. You will learn how to use HuggingChat as an interface, utilize system prompts in prompt engineering, and apply both basic and advanced prompt engineering techniques. The course also provides insights into creating your own assistants in HuggingChat and using open-source LLMs with fast LPU chips instead of GPUs.
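As a small illustration of the techniques named above, the sketch below combines a system prompt with few-shot examples into one structured prompt. The function name and the Q/A layout are illustrative choices, not a fixed format required by HuggingChat or any particular model.

```python
def build_prompt(system, examples, question):
    """Combine a system prompt, few-shot Q/A pairs, and the real question."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{system}\n\n{shots}\n\nQ: {question}\nA:"

prompt = build_prompt(
    "You are a concise translator from German to English.",
    [("Hallo", "Hello"), ("Danke", "Thanks")],
    "Guten Morgen",
)
```

The few-shot pairs demonstrate the expected output format, so the model can continue the pattern for the final question.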

Function Calling, RAG, and Vector Databases

Learn what function calling is in LLMs and how to implement vector databases, embedding models, and retrieval-augmented generation (RAG). The course shows you how to install Anything LLM, set up a local server, and create a RAG chatbot with Anything LLM and LM Studio. You will also learn to perform function calling with Llama 3 and Anything LLM, summarize data, store it, and visualize it with Python.
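The retrieval step of a RAG pipeline can be sketched in a few lines. For readability this example uses a toy bag-of-words "embedding"; a real pipeline like the one built in the course would use a dedicated embedding model and a vector database instead.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: word-count vectors.
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_rag_prompt(query, chunks):
    """Prepend the retrieved context to the user's question."""
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Tools like Anything LLM automate exactly these steps: chunking documents, embedding them, retrieving the most relevant chunks, and injecting them into the prompt.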

Optimization and AI Agents

For optimizing your RAG apps, you will receive tips on data preparation and efficient use of tools like LlamaIndex and LlamaParse. Additionally, you will be introduced to the world of AI agents. You will learn what AI agents are, what tools are available, and how to install and use Flowise locally with Node.js. The course also offers practical insights into creating an AI agent that generates Python code and documentation, as well as using function calling and internet access.
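The core of function calling, which the agents above rely on, is a dispatch loop: the model emits a JSON tool call, and your code executes the matching function. The sketch below shows only that dispatch side with made-up tool names; the actual tool schema depends on the model and framework (Anything LLM, Flowise, LangChain) you use.

```python
import json

# Hypothetical tool registry; real agents would register web search,
# code execution, file access, and similar functions here.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "add": lambda a, b: a + b,
}

def dispatch(tool_call_json):
    """Parse a JSON tool call emitted by the LLM and run the named tool."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])
```

In a full agent loop, the tool's return value is fed back to the model, which then either answers the user or requests another tool call.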

Additional Applications and Tips

Finally, the course introduces text-to-speech (TTS) and finetuning of open-source LLMs, both using Google Colab. You will learn how to rent GPUs from providers like Runpod or Massed Compute if your local PC isn’t sufficient. Additionally, you will explore innovative tools like Microsoft Autogen and CrewAI and how to use LangChain for developing AI agents.

Harness the transformative power of open-source LLM technology to develop innovative solutions and expand your understanding of their diverse applications. Sign up today and start your journey to becoming an expert in the world of large language models!

What you’ll learn

  • Why Open-Source LLMs? Differences, Advantages, and Disadvantages of Open-Source and Closed-Source LLMs
  • What are LLMs like ChatGPT, Llama, Mistral, Phi3, Qwen2-72B-Instruct, Grok, Gemma, etc.
  • Which LLMs are available and what should I use? Finding “The Best LLMs”
  • Requirements for Using Open-Source LLMs Locally
  • Installation and Usage of LM Studio, Anything LLM, Ollama, and Alternative Methods for Operating LLMs
  • Censored vs. Uncensored LLMs
  • Finetuning an Open-Source Model with Huggingface or Google Colab
  • Vision (Image Recognition) with Open-Source LLMs: Llama3, Llava & Phi3 Vision
  • Hardware Details: GPU Offload, CPU, RAM, and VRAM
  • All About HuggingChat: An Interface for Using Open-Source LLMs
  • System Prompts in Prompt Engineering + Function Calling
  • Prompt Engineering Basics: Semantic Association, Structured & Role Prompts
  • Groq: Using Open-Source LLMs with a Fast LPU Chip Instead of a GPU
  • Vector Databases, Embedding Models & Retrieval-Augmented Generation (RAG)
  • Creating a Local RAG Chatbot with Anything LLM & LM Studio
  • Linking Ollama & Llama 3, and Using Function Calling with Llama 3 & Anything LLM
  • Function Calling for Summarizing Data, Storing, and Creating Charts with Python
  • Using Other Features of Anything LLM and External APIs
  • Tips for Better RAG Apps with Firecrawl for Website Data, More Efficient RAG with LlamaIndex & LlamaParse for PDFs and CSVs
  • Definition and Available Tools for AI Agents, Installation and Usage of Flowise Locally with Node (Easier Than Langchain and LangGraph)
  • Creating an AI Agent that Generates Python Code and Documentation, and Using AI Agents with Function Calling, Internet Access, and Three Experts
  • Hosting and Usage: Which AI Agent Should You Build and External Hosting, Text-to-Speech (TTS) with Google Colab
  • Finetuning Open-Source LLMs with Google Colab (Alpaca + Llama-3 8b, Unsloth)
  • Renting GPUs with Runpod or Massed Compute
  • Security Aspects: Jailbreaks and Security Risks from Attacks on LLMs with Jailbreaks, Prompt Injections, and Data Poisoning
  • Data Privacy and Security of Your Data, as well as Policies for Commercial Use and Selling Generated Content

Table of Contents

Introduction and Overview
1 Welcome
2 Course Overview
3 My Goal and Some Tips
4 Explanation of the Links

Why Open-Source LLMs Differences Advantages and Disadvantages
5 What is this Section about
6 What are LLMs like ChatGPT Llama Mistral etc
7 Which LLMs are available and what should I use Finding The Best LLMs
8 Disadvantages of Closed-Source LLMs like ChatGPT Gemini and Claude
9 Advantages and Disadvantages of Open-Source LLMs like Llama3 Mistral and more
10 Open-Source LLMs Get Better DeepSeek R1 Infos
11 Recap Don't Forget This

The Easiest Way to Run Open-Source LLMs Locally and What You Need
12 Requirements for Using Open-Source LLMs Locally GPU CPU and Quantization
13 Installing LM Studio and Alternative Methods for Running LLMs
14 Using Open-Source Models in LM Studio Llama 3 Mistral Phi-3 and more
15 Censored vs Uncensored LLMs Llama3 with Dolphin Finetuning
16 The Use Cases of classic LLMs like Phi-3 Llama and more
17 Vision Image Recognition with Open-Source LLMs Llama3 Llava and Phi3 Vision
18 Some Examples of Image Recognition Vision
19 More Details on Hardware GPU Offload CPU RAM and VRAM
20 Summary of What You Learned and an Outlook to Local Servers and Prompt Engineering

Prompt Engineering for Open-Source LLMs and Their Use in the Cloud
21 HuggingChat An Interface for Using Open-Source LLMs
22 System Prompts An Important Part of Prompt Engineering
23 Why is Prompt Engineering Important An Example
24 Semantic Association The Most Important Concept You Need to Understand
25 The Structured Prompt Copy My Prompts
26 Instruction Prompting and some Cool Tricks
27 Role Prompting for LLMs
28 Shot Prompting Zero-Shot One-Shot and Few-Shot Prompts
29 Reverse Prompt Engineering and the OK Trick
30 Chain of Thought Prompting Let's Think Step by Step
31 Tree of Thoughts ToT Prompting in LLMs
32 The Combination of Prompting Concepts
33 Creating Your Own Assistants in HuggingChat
34 Groq Using Open-Source LLMs with a Fast LPU Chip Instead of a GPU
35 Recap What You Should Remember

Function Calling RAG and Vector Databases with Open-Source LLMs
36 What Will Be Covered in This Section
37 What is Function Calling in LLMs
38 Vector Databases Embedding Models and Retrieval-Augmented Generation RAG
39 Installing Anything LLM and Setting Up a Local Server for a RAG Pipeline
40 Local RAG Chatbot with Anything LLM and LM Studio
41 Function Calling with Llama 3 and Anything LLM Searching the Internet
42 Function Calling Summarizing Data Storing and Creating Charts with Python
43 Other Features of Anything LLM TTS and External APIs
44 Downloading Ollama and Llama 3 Creating and Linking a Local Server
45 Recap Don't Forget This

Optimizing RAG Apps Tips for Data Preparation
46 What Will Be Covered in This Section Better RAG Data and Chunking
47 Tips for Better RAG Apps Firecrawl for Your Data from Websites
48 More Efficient RAG with LlamaIndex and LlamaParse Data Preparation for PDFs and more
49 LlamaIndex Update LlamaParse made easy
50 Chunk Size and Chunk Overlap for a Better RAG Application
51 Recap What You Learned in This Section

Local AI Agents with Open-Source LLMs
52 What Will Be Covered in This Section on AI Agents
53 AI Agents Definition and Available Tools for Creating Open-Source AI Agents
54 We Use Langchain with Flowise Locally with Node.js
55 Installing Flowise with Node.js JavaScript Runtime Environment
56 Problems with Flowise installation
57 How to Fix Problems on the Installation with Node
58 The Flowise Interface for AI-Agents and RAG ChatBots
59 Local RAG Chatbot with Flowise Llama 3 and Ollama A Local Langchain App
60 Our First AI Agent Python Code and Documentation with Supervisor and 2 Workers
61 AI Agents with Function Calling Internet and Three Experts for Social Media
62 Which AI Agent Should You Build and External Hosting with Render
63 Chatbot with Open-Source Models from Huggingface and Embeddings in HTML Mixtral
64 Insanely fast inference with the Groq API
65 How to use DeepSeek R1 Locally in Browser and the API
66 Recap What You Should Remember

Finetuning Renting GPUs Open-Source TTS Finding the BEST LLM and More Tips
67 What Is This Section About
68 Text-to-Speech TTS with Google Colab
69 Moshi Talk to an Open-Source AI
70 Finetuning an Open-Source Model with Huggingface or Google Colab
71 Finetuning Open-Source LLMs with Google Colab Alpaca Llama-3 8b from Unsloth
72 What is the Best Open-Source LLM I Should Use
73 Llama 3.1 Infos and What Models Should You Use
74 Grok from xAI
75 Renting a GPU with Runpod or Massed Compute if Your Local PC Isn't Enough
76 Recap What You Should Remember

Data Privacy Security and What Comes Next
77 THE LAST SECTION What is This About
78 Jailbreaks Security Risks from Attacks on LLMs with Prompts
79 Prompt Injections Security Problem of LLMs
80 Data Poisoning and Backdoor Attacks
81 Data Privacy and Security Is Your Data at Risk
82 Commercial Use and Selling of AI-Generated Content
83 My Thanks and Whats Next
84 Bonus

Homepage