Accelerate Model Training with PyTorch 2.X: Build more accurate models by boosting the model training process

English | 2024 | ISBN: 978-1805120100 | 230 Pages | PDF, EPUB | 17 MB

Dramatically accelerate the process of building complex models with PyTorch and extract the best performance from any computing environment

Key Features

  • Reduce model-building time by applying optimization techniques
  • Harness the computing power of multiple devices and machines to boost the training process
  • Focus on model quality by quickly evaluating different model configurations

Penned by an expert in High-Performance Computing (HPC) with over 25 years of experience, this book is your guide to enhancing the performance of model training using PyTorch, one of the most widely adopted machine learning frameworks.

You’ll start by understanding how model complexity impacts training time, and then discover the distinct levels of performance tuning that can expedite the training process. You’ll also learn how to use a new PyTorch feature to compile the model and train it faster, as well as how to benefit from specialized libraries to optimize the training process on the CPU. As you progress, you’ll gain insights into building an efficient data pipeline that keeps accelerators busy for the entire training run, and explore strategies for reducing model complexity and adopting mixed precision to minimize computing time and memory consumption. Finally, the book will get you acquainted with distributed training and show you how to use PyTorch to harness the computing power of multicore systems and multi-GPU environments, whether on a single machine or across several.
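To give a flavor of the compilation feature mentioned above, here is a minimal, hedged sketch of torch.compile in a training step (illustrative code, not taken from the book; the model, data, and hyperparameters are placeholders):

    import torch
    import torch.nn as nn

    # A small placeholder model; any nn.Module can be compiled the same way.
    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

    # torch.compile (introduced in PyTorch 2.0) captures the model's graph
    # and generates optimized kernels; the first iterations pay a one-time
    # compilation cost, after which each step typically runs faster.
    compiled_model = torch.compile(model)

    optimizer = torch.optim.SGD(compiled_model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # Dummy batch just to exercise the compiled forward and backward passes.
    inputs = torch.randn(32, 128)
    targets = torch.randint(0, 10, (32,))

    optimizer.zero_grad()
    loss = loss_fn(compiled_model(inputs), targets)
    loss.backward()
    optimizer.step()

Because the compiled module shares parameters with the original model, the rest of the training loop stays unchanged.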

By the end of this book, you’ll be equipped with a suite of techniques, approaches, and strategies to speed up training, so you can focus on what really matters: building stunning models!

What you will learn

  • Compile the model to train it faster
  • Use specialized libraries to optimize the training on the CPU
  • Build a data pipeline to boost GPU execution
  • Simplify the model through pruning and compression techniques
  • Adopt automatic mixed precision without penalizing the model’s accuracy (a brief sketch follows this list)
  • Distribute the training step across multiple machines and devices
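
As a hedged illustration of the automatic mixed precision item above (again illustrative, not the book’s own code; the model and data are placeholders), a minimal PyTorch sketch could look like this:

    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Linear(128, 10).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # The gradient scaler counters float16 underflow; it is a no-op on CPU.
    scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

    inputs = torch.randn(32, 128, device=device)
    targets = torch.randint(0, 10, (32,), device=device)

    optimizer.zero_grad()
    # autocast runs eligible ops in float16 to cut memory use and compute
    # time while keeping numerically sensitive ops in float32.
    with torch.autocast(device_type=device, dtype=torch.float16,
                        enabled=(device == "cuda")):
        loss = loss_fn(model(inputs), targets)

    # Scale the loss before backward, then unscale for the optimizer step.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()

The scaled backward pass and scaler-driven optimizer step are the only changes to a standard training loop, which is what makes the technique cheap to adopt.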