Securing Generative AI (Video Course)

English | MP4 | AVC 1280×720 | AAC 44 kHz 2ch | 3h 31m | 845 MB

Get the strategies, methodologies, tools, and best practices for AI security.

  • Explore security for deploying and developing LLMs, RAG pipelines, and other AI implementations
  • Gain hands-on, practical skills drawn from real-life AI and machine learning use cases
  • Incorporate security at every stage of AI development, deployment, and operation

This course offers a comprehensive exploration of the security measures necessary for developing and deploying AI implementations, including large language models (LLMs) and Retrieval-Augmented Generation (RAG). It addresses critical considerations and mitigations that reduce the overall risk in an organization's AI system development process. Experienced author and trainer Omar Santos emphasizes secure-by-design principles, focusing on security outcomes, radical transparency, and building organizational structures that prioritize security. You will be introduced to AI threats, LLM security, prompt injection, insecure output handling, and red teaming AI models. The course concludes by teaching you how to protect RAG implementations.

Table of Contents

Lesson 1: Introduction to AI Threats and LLM Security
1.1 Understanding the Significance of LLMs in the AI Landscape
1.2 Exploring the Resources for this Course – GitHub Repositories and Others
1.3 Introducing Retrieval Augmented Generation (RAG)
1.4 Understanding the OWASP Top-10 Risks for LLMs
1.5 Exploring the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) Framework
1.6 Understanding the NIST Taxonomy and Terminology of Attacks and Mitigations
Lesson 2: Understanding Prompt Injection & Insecure Output Handling
2.1 Defining Prompt Injection Attacks
2.2 Exploring Real-life Prompt Injection Attacks
2.3 Using ChatML for OpenAI API Calls to Indicate to the LLM the Source of Prompt Input
2.4 Enforcing Privilege Control on LLM Access to Backend Systems
2.5 Best Practices Around API Tokens for Plugins, Data Access, and Function-level Permissions
2.6 Understanding Insecure Output Handling Attacks
2.7 Using the OWASP ASVS to Protect Against Insecure Output Handling
Lesson 3: Training Data Poisoning, Model Denial of Service & Supply Chain Vulnerabilities
3.1 Understanding Training Data Poisoning Attacks
3.2 Exploring Model Denial of Service Attacks
3.3 Understanding the Risks of the AI and ML Supply Chain
3.4 Best Practices when Using Open-Source Models from Hugging Face and Other Sources
3.5 Securing Amazon Bedrock, SageMaker, Microsoft Azure AI Services, and Other Environments
Lesson 4: Sensitive Information Disclosure, Insecure Plugin Design, and Excessive Agency
4.1 Understanding Sensitive Information Disclosure
4.2 Exploiting Insecure Plugin Design
4.3 Avoiding Excessive Agency
Lesson 5: Overreliance, Model Theft, and Red Teaming AI Models
5.1 Understanding Overreliance
5.2 Exploring Model Theft Attacks
5.3 Understanding Red Teaming of AI Models
Lesson 6: Protecting Retrieval Augmented Generation (RAG) Implementations
6.1 Understanding RAG, LangChain, LlamaIndex, and AI Orchestration
6.2 Securing Embedding Models
6.3 Securing Vector Databases
6.4 Monitoring and Incident Response
