Build a Simple Neural Network & Learn Backpropagation

English | MP4 | AVC 1920×1080 | AAC 44.1 kHz 2ch | 38 Lessons (4h 34m) | 610 MB

Learn about backpropagation and gradient descent by coding your own simple neural network from scratch in Python – no libraries, just fundamentals. Ideal for aspiring Machine Learning Engineers, Data Scientists, and AI Specialists.

What you’ll learn

  • Coding neural networks from scratch using only Python
  • What backpropagation is and how it helps machines learn
  • How to break down complicated math into simple, doable steps
  • The easiest way to understand gradients and why they matter
  • What’s really happening when a machine makes predictions
  • How to train a smarter model by adjusting tiny details in code

This course strips neural networks to their fundamental core: math and raw Python.

You’ll dive into the inner workings of backpropagation, gradient descent, and the math that powers modern neural networks. No pre-built frameworks, no black boxes. Just you, the math, and your code.

Step-by-step, you’ll build neural networks by hand and implement them from scratch. From partial derivatives to weight updates, every concept is broken down and coded in Python (no libraries like PyTorch required!). If you’re looking to truly understand how machine learning works—and prove it by building your own neural network—this course is your launchpad.
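The "derivatives to weight updates" loop can be sketched in a few lines of plain Python. This is a hypothetical illustration, not the course's actual code: a single neuron `y_hat = w * x` fit to the target `y = 2x` with a squared-error loss and gradient descent.

```python
# Hypothetical sketch (not the course's code): one weight, one input,
# trained with gradient descent on a squared-error loss.
def train(x, y, w=0.0, alpha=0.1, steps=50):
    for _ in range(steps):
        y_hat = w * x               # forward pass
        loss = (y_hat - y) ** 2     # squared-error loss
        grad = 2 * (y_hat - y) * x  # dL/dw via the chain rule
        w -= alpha * grad           # step opposite the gradient
    return w

w = train(x=1.0, y=2.0)
print(round(w, 3))  # converges toward 2.0
```

Every piece of this loop (forward pass, loss, gradient, update) is a lesson in the table of contents below.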

The course is broken down into three main sections:

Introduction
Start by understanding the goals of the course and why backpropagation is central to modern machine learning. This section sets expectations and explains how mastering the math will give you a competitive edge.

Foundational Concepts and Simple Neural Network Implementation
Get hands-on with the theory. Learn how neural networks process data, calculate losses, and update weights using gradient descent. You’ll manually compute everything—forward pass, gradients, and backpropagation—before coding a working network in Python.
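As a flavor of the manual computation this section covers, here is a hedged sketch (values and variable names are illustrative, not the course's worked example) of a two-weight chain `y_hat = w2 * (w1 * x)`, showing how the chain rule carries the loss gradient back through each weight:

```python
# Illustrative example: forward pass and hand-computed gradients
# for y_hat = w2 * (w1 * x) with squared-error loss.
x, y = 1.0, 4.0
w1, w2 = 0.5, 0.5

# forward pass
h = w1 * x                   # intermediate value
y_hat = w2 * h               # prediction
loss = (y_hat - y) ** 2      # squared-error loss

# backward pass (chain rule)
dL_dyhat = 2 * (y_hat - y)   # dL/dy_hat
dL_dw2 = dL_dyhat * h        # y_hat = w2 * h, so dy_hat/dw2 = h
dL_dh = dL_dyhat * w2        # dy_hat/dh = w2
dL_dw1 = dL_dh * x           # h = w1 * x, so dh/dw1 = x

print(loss, dL_dw1, dL_dw2)
```

Checking each partial derivative by hand against numbers like these is exactly the drill this section walks through before any code is written.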

Advanced Neural Network Implementation
Scale up your skills. This section walks you through implementing a deeper neural network with non-linear activation functions. You’ll use advanced backpropagation techniques to train more complex models and understand how real-world neural networks are built from the ground up.
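The non-linear activation used in this section is the sigmoid. As a minimal sketch (assuming the standard logistic sigmoid), its derivative has the convenient closed form σ'(z) = σ(z)·(1 − σ(z)), which is what makes backpropagating through it cheap:

```python
import math

# Logistic sigmoid and its derivative; the derivative reuses the
# forward-pass value, so backprop needs no extra exp() calls.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

print(sigmoid(0.0))        # 0.5
print(sigmoid_prime(0.0))  # 0.25
```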

Table of Contents

1 Introduction
2 Introduction to Our Simple Neural Network
3 Why We Use Computational Graphs
4 Conducting the Forward Pass
5 Roadmap to Understanding Backpropagation
6 Derivatives Theory
7 Numerical Example of Derivatives
8 Partial Derivatives
9 Gradients
10 Understanding What Partial Derivatives Do
11 Introduction to Backpropagation
12 (Optional) Chain Rule
13 Gradient Derivation of Mean Squared Error Loss Function
14 Visualizing the Loss Function and Understanding Gradients
15 Using the Chain Rule to See How w2 Affects the Final Loss
16 Backpropagation of w1
17 Introduction to Gradient Descent Visually
18 Gradient Descent
19 Understanding the Learning Rate (Alpha)
20 Moving in the Opposite Direction of the Gradient
21 Calculating Gradient Descent by Hand
22 Coding our Simple Neural Network Part 1
23 Coding our Simple Neural Network Part 2
24 Coding our Simple Neural Network Part 3
25 Coding our Simple Neural Network Part 4
26 Coding our Simple Neural Network Part 5
27 Introduction to Our Complex Neural Network
28 Conducting the Forward Pass
29 Getting Started with Backpropagation
30 Getting the Derivative of the Sigmoid Activation Function (Optional)
31 Implementing Backpropagation with the Chain Rule
32 Understanding How w3 Affects the Final Loss
33 Calculating Gradients for Z1
34 Understanding How w1 and w2 Affect the Loss
35 Implementing Gradient Descent by Hand
36 Coding our Advanced Neural Network Part 1 (Implementing Forward Pass)
37 Coding our Advanced Neural Network Part 2 (Implementing Backpropagation)
38 Coding our Advanced Neural Network Part 3 (Implementing Gradient Descent)
39 Coding our Advanced Neural Network Part 4 (Training our Neural Network)
