English | MP4 | AVC 3840×2160 | AAC 44 kHz 2ch | 21 Lessons (3h 40m) | 5.98 GB
Develop an under-the-hood understanding of the principles behind AI – neural networks, GPTs and LLMs – to stand out as the software engineer who can truly integrate these models into software: building new products, augmenting your workflows and solving the hardest business problems.
The fullstack engineer's domain (frontend, backend, infrastructure) has gained a new component – prediction – spanning everything from predicting user behavior to generating text and pixels: 'generative' AI.
To stand out as a fullstack software engineer in this era, you need to begin developing an under-the-hood understanding of these new tools – particularly the 'models' at their heart: neural networks and transformers.
We’ll cover the nature of data, probability, training and prediction in Machine Learning. We’ll then explore the way these principles play out in the neural networks used in deep learning, including the core concepts of gradient descent and backpropagation.
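To give a flavor of the gradient descent idea mentioned above, here is a minimal sketch (not course code; the data and loss are invented for illustration) that fits a single weight by repeatedly stepping against the gradient of a mean-squared-error loss:

```python
# Minimal gradient descent: fit y = w * x to data generated with w = 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]

w = 0.0    # initial guess for the weight
lr = 0.01  # learning rate (step size)

for _ in range(1000):
    # Gradient of mean squared error with respect to w:
    # d/dw (1/N) * sum((w*x - y)^2) = (2/N) * sum((w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step downhill against the gradient

print(round(w, 3))  # converges toward 3.0
```

Backpropagation is this same idea applied through many layers: the gradient of the loss with respect to every weight is computed by the chain rule, and each weight takes a small step downhill.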
We’ll then explore how and why to use large language models (LLMs) by understanding tokenization, embeddings, self-attention, pre-training and fine-tuning, as well as the heuristics necessary for reliable model prompting.
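As a toy illustration of tokenization and embeddings (the vocabulary and vectors below are invented for the example, not taken from any real model), text is first mapped to token ids, and each id then looks up a learned vector whose geometry encodes meaning:

```python
# Hypothetical word-level vocabulary; real LLMs use subword tokenizers
# with tens of thousands of entries.
vocab = {"the": 0, "cat": 1, "dog": 2, "sat": 3}

# Each token id maps to a small embedding vector; real models learn
# hundreds or thousands of dimensions per token during pre-training.
embeddings = [
    [0.1, 0.0],  # "the"
    [0.9, 0.8],  # "cat"
    [0.8, 0.9],  # "dog"
    [0.2, 0.1],  # "sat"
]

def tokenize(text):
    return [vocab[word] for word in text.split()]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

ids = tokenize("the cat sat")
print(ids)  # [0, 1, 3]

# Related meanings get similar vectors: "cat" vs "dog" scores higher
# than "cat" vs "the".
print(dot(embeddings[1], embeddings[2]) > dot(embeddings[1], embeddings[0]))  # True
```

Self-attention builds on exactly these dot products between vectors, letting each token weigh the other tokens in the sequence when computing its representation.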
We’ll also explore how software engineering teams are evolving to incorporate this new part of the stack. With your first-principles understanding of the tools involved, you will be able to make informed judgments on how to integrate ML/AI models, communicate those judgments to your team and gain an invaluable edge in tech interviews.
You’ll learn:
- How fullstack engineering is evolving to incorporate prediction (ML/AI) into the stack
- How to use a first-principles understanding of the models involved to make informed judgments in your software engineering work and career
- How data science and ML are used to build products using classical models that don’t use neural networks
- The principles behind neural networks (the core tool of deep learning) – data representation, weights and activation, gradient descent and backpropagation
- How LLMs represent data through tokenization, embeddings, self-attention and the transformer architecture, and how this representation informs our decisions around how and why to use LLMs
- How LLMs are guided to generate text through pre-training and fine-tuning and how to interact with LLMs in the most effective and efficient way
- Which heuristics should guide our iterative process for prompting models to reliably produce our desired outputs
- What knowledge, skills and mindset shifts AI requires of the modern fullstack engineer, and how engineers fit into AI-driven team structures
Table of Contents
1 Introduction
2 Refund Request Filtering
3 Creating the Data Converter
4 Applying the Data Converter
5 Boundaries in a Decision Model
6 Challenges in Creating a Production Model
7 Neural Network Data
8 Training a Pixel Data Model
9 Inferring Using the Pixel Data Model
10 Validating Pixel Data Neural Network
11 Learning Multipliers with Larger Sample
12 Applying All Weight Changes
13 Improve Accuracy of Model
14 Sigmoid Function
15 Applying Gradient Descent
16 Validating Weight Accuracy
17 Preprocessing Sample Data
18 Developing a Production Model
19 Combining Models
20 Extending the Principles of Machine Learning
21 Wrapping Up