English | MP4 | AVC 1280×720 | AAC 44 kHz 2ch | 35 Lessons (4h 37m) | 3.99 GB
Code-along sessions move you through the development of classification and regression methods
Machine learning is moving from futuristic AI projects to data analysis on your desk. You need to go beyond following along in discussions to coding machine learning tasks. Developing Classification and Regression Systems LiveLessons (Machine Learning with Python for Everyone Series) Part 3 shows you how to turn introductory machine learning concepts into concrete code using Python, scikit-learn, and friends.
You will learn about fundamental classification and regression methods, including decision tree classifiers and regressors, support vector classifiers and regressors, logistic regression, penalized regression, and discriminant analysis. You will see techniques for feature engineering, including scaling, discretization, and interactions. You will learn how to implement pipelines for more complex processing and nested cross-validation for tuning hyperparameters.
Learn How To
- Use fundamental classification methods including decision trees, support vector classifiers, logistic regression, and discriminant analysis
- Recognize bias and variability in classifiers
- Compare classifiers
- Use fundamental regression methods including penalized regression and regression trees
- Recognize bias and variability in regressors
- Manually engineer features through feature scaling, discretization, categorical coding, analysis of interactions, and target manipulations
- Tune hyperparameters
- Use nested cross-validation
- Develop pipelines
Lesson 1: Fundamental Classification Methods I
In Lesson 1, Mark dives into two important classification methods: decision trees and support vector machines. Decision trees capture the idea of using if-then rules to put examples into buckets. Support vector machines are based on some pretty heavy mathematics, but Mark addresses them from a very graphical point of view.
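In scikit-learn these two methods are `DecisionTreeClassifier` and `SVC`. A minimal sketch of fitting both on a toy dataset (the iris data and the specific parameter values are illustrative assumptions, not taken from the lessons):

```python
# Sketch: the two classifiers from Lesson 1, fit on scikit-learn's
# bundled iris dataset (dataset and parameters are illustrative).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Decision tree: learns a hierarchy of if-then rules on the features.
tree = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_train, y_train)

# Support vector classifier: finds a wide-margin boundary between classes.
svc = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)

print("tree accuracy:", tree.score(X_test, y_test))
print("svc accuracy:", svc.score(X_test, y_test))
```

Both estimators share the same `fit`/`predict`/`score` interface, which is what makes the comparisons later in the course straightforward.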
Lesson 2: Fundamental Classification Methods II
In Lesson 2, Mark continues to add to your classification toolbox. He discusses two methods: logistic regression and discriminant analysis. These methods can classify examples and they can also give us an idea of the certainty of that classification. Mark also compares many classification techniques.
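The "certainty" these two methods report is exposed in scikit-learn through `predict_proba`. A small sketch (again using iris as an illustrative dataset):

```python
# Sketch: logistic regression and linear discriminant analysis both
# report class probabilities, not just a hard label. Dataset is illustrative.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

logreg = LogisticRegression(max_iter=1000).fit(X, y)
lda = LinearDiscriminantAnalysis().fit(X, y)

# One probability per class for each example; each row sums to 1.
print("logreg:", logreg.predict_proba(X[:1]))
print("lda:   ", lda.predict_proba(X[:1]))
```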
Lesson 3: Fundamental Regression Methods
In Lesson 3, Mark turns his attention toward regression models, also called regressors. He considers modifications of linear regression called penalized regression and a variation of decision trees called decision tree regressors. Decision tree regressors build on the idea of if-then rules but are adapted to numerical outcomes. He also compares many regression techniques.
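In scikit-learn, penalized regression appears as `Ridge` (L2 penalty) and `Lasso` (L1 penalty), and the tree variant as `DecisionTreeRegressor`. A comparison sketch on the bundled diabetes data (the dataset, penalty strengths, and tree depth are illustrative assumptions):

```python
# Sketch: comparing plain, penalized, and tree-based regressors with
# cross-validation. Dataset and hyperparameter values are illustrative.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)

models = {
    "linear": LinearRegression(),
    "ridge (L2 penalty)": Ridge(alpha=1.0),
    "lasso (L1 penalty)": Lasso(alpha=0.1),
    "tree regressor": DecisionTreeRegressor(max_depth=3, random_state=42),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # R^2 score by default
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```

The penalty term shrinks the linear coefficients toward zero, trading a little bias for lower variance, which is exactly the trade-off the lesson's bias/variance segment examines.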
Lesson 4: Manual Feature Engineering
In Lesson 4, Mark looks at the various ways we can manually modify our learning data. He discusses ways to rewrite features by adjusting their numerical scale, by converting them from numeric to categorical values, or by manipulating the way we encode categories. He also talks about adding new features: feature construction. Finally, Mark covers ways to modify the target to support learning.
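Each of those manual steps maps to a transformer in `sklearn.preprocessing`. A minimal sketch on made-up data (the tiny arrays here are illustrative, not from the lessons):

```python
# Sketch: one scikit-learn transformer per feature-engineering idea
# from Lesson 4, applied to tiny illustrative arrays.
import numpy as np
from sklearn.preprocessing import (StandardScaler, KBinsDiscretizer,
                                   OneHotEncoder, PolynomialFeatures)

X = np.array([[1.0], [5.0], [10.0], [20.0]])

# Rescaling: shift and scale a numeric feature to zero mean, unit variance.
scaled = StandardScaler().fit_transform(X)

# Discretization: convert a numeric feature into ordinal bins.
binned = KBinsDiscretizer(n_bins=2, encode="ordinal",
                          strategy="quantile").fit_transform(X)

# Categorical coding: one-hot encode a categorical column.
cats = np.array([["red"], ["blue"], ["red"], ["green"]])
onehot = OneHotEncoder().fit_transform(cats).toarray()

# Feature construction: add interaction terms between columns.
X2 = np.array([[1.0, 2.0], [3.0, 4.0]])
inter = PolynomialFeatures(degree=2, interaction_only=True,
                           include_bias=False).fit_transform(X2)
print(inter)  # columns: x1, x2, x1*x2
```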
Lesson 5: Hyperparameters and Pipelines
In Lesson 5, Mark addresses fine-tuning your models by selecting good hyperparameters and creating longer processing chains with pipelines. Hyperparameters control the inner workings of our models: two models that differ only in their hyperparameters may give very different results. Creating pipelines enables us to couple feature engineering, model selection, and model building into one streamlined object.
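In scikit-learn that streamlined object is a `Pipeline`, tuned with `GridSearchCV`; wrapping the search in an outer `cross_val_score` gives nested cross-validation. A sketch under illustrative assumptions (iris data, a scale-then-SVC pipeline, and a small grid over `C`):

```python
# Sketch: a pipeline tuned with grid search inside nested cross-validation.
# Dataset, pipeline steps, and the hyperparameter grid are illustrative.
from sklearn.datasets import load_iris
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = load_iris(return_X_y=True)

# The pipeline couples feature engineering and model building in one object.
pipe = Pipeline([("scale", StandardScaler()), ("svc", SVC())])

# "svc__C" names the C hyperparameter of the pipeline's "svc" step.
grid = GridSearchCV(pipe, param_grid={"svc__C": [0.1, 1.0, 10.0]}, cv=3)

# Nested cross-validation: the inner loop (inside GridSearchCV) picks the
# hyperparameter; the outer loop estimates how well the whole
# tune-then-fit procedure performs on unseen data.
outer = cross_val_score(grid, X, y, cv=5)
print("nested CV accuracy:", outer.mean())
```

Because the scaler sits inside the pipeline, it is refit on each training fold, so no information from the held-out fold leaks into the scaling step.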
Table of Contents
1 Developing Classification and Regression Systems (Machine Learning with Python for Everyone Series), Part 3 LiveLessons – Introduction
2 Topics
3 Revisiting Classification
4 Decision Trees I
5 Decision Trees II
6 Support Vector Classifiers I
7 Support Vector Classifiers II
8 Topics
9 Logistic Regression I
10 Logistic Regression II
11 Discriminant Analysis I
12 Discriminant Analysis II
13 Bias and Variance of Classifiers
14 Comparing Classifiers
15 Topics
16 Penalized Regression I
17 Penalized Regression II
18 Piecewise Constant Regression
19 Regression Trees
20 Bias and Variance of Regressors
21 Comparing Regressors
22 Topics
23 Overview of Feature Engineering
24 Feature Scaling
25 Discretization
26 Categorical Coding
27 Interactions
28 Target Manipulations
29 Topics
30 Models, Parameters, and Hyperparameters
31 Tuning Hyperparameters
32 Nested Cross-validation
33 Pipelines
34 Tuning Pipelines
35 Developing Classification and Regression Systems (Machine Learning with Python for Everyone Series), Part 3 LiveLessons – Summary