English | MP4 | AVC 1920×1080 | AAC 44.1 kHz 2ch | 10h 58m | 2.46 GB
Accelerate deep learning and other number-intensive tasks with JAX, Google’s awesome high-performance numerical computing library.
The JAX numerical computing library tackles the core performance challenges at the heart of deep learning and other scientific computing tasks. By combining Google’s Accelerated Linear Algebra (XLA) compiler with a hyper-optimized, NumPy-compatible array API and a variety of other high-performance features, JAX delivers a huge performance boost for low-level computations and transformations.
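To make that concrete, here is a minimal sketch (not taken from the book or course; the function name, array shapes, and sizes are purely illustrative): the computation is written with jax.numpy exactly as it would be with NumPy, and jax.jit compiles it through XLA for whatever backend is available.

```python
import jax
import jax.numpy as jnp

def predict(w, b, x):
    # Written exactly as it would be with NumPy arrays
    return jnp.tanh(x @ w + b)

# Compile with XLA for the available backend (CPU, GPU, or TPU)
fast_predict = jax.jit(predict)

key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (784, 128))
b = jnp.zeros(128)
x = jax.random.normal(key, (32, 784))
print(fast_predict(w, b, x).shape)  # (32, 128)
```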
In Deep Learning with JAX you will learn how to:
- Use JAX for numerical calculations
- Build differentiable models with JAX primitives
- Run distributed and parallelized computations with JAX
- Use high-level neural network libraries such as Flax
- Leverage libraries and modules from the JAX ecosystem
Deep Learning with JAX is a hands-on guide to using JAX for deep learning and other mathematically intensive applications. Google Developer Expert Grigory Sapunov steadily builds your understanding of JAX’s concepts. The engaging examples introduce the fundamental concepts on which JAX relies and then show you how to apply them to real-world tasks. You’ll learn how to use JAX’s ecosystem of high-level libraries and modules, as well as how to combine TensorFlow and PyTorch with JAX for data loading and deployment.
Google’s JAX offers a fresh vision for deep learning. This powerful library gives you fine control over low-level processes like gradient calculations, delivering fast and efficient model training and inference, especially on large datasets. JAX has transformed how research scientists approach deep learning. Now boasting a robust ecosystem of tools and libraries, JAX makes evolutionary computations, federated learning, and other performance-sensitive tasks approachable for all types of applications.
Deep Learning with JAX teaches you to build effective neural networks with JAX. In this example-rich book, you’ll discover how JAX’s unique features help you tackle important deep learning performance challenges, like distributing computations across a cluster of TPUs. You’ll put the library into action as you create an image classification tool, an image filter application, and other realistic projects. The nicely annotated code listings demonstrate how JAX’s functional programming mindset improves composability and parallelization.
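As a hedged illustration of that composability (a minimal sketch, not an excerpt from the book; the loss function, parameter shapes, and batch size are made up), the snippet below composes jax.grad, jax.vmap, and jax.jit over a tiny pure loss function, so the per-example gradient is vectorized over a batch without any manual batching code.

```python
import jax
import jax.numpy as jnp

def loss(params, x, y):
    # Tiny linear model; params is a (weights, bias) tuple, i.e. a simple pytree
    w, b = params
    pred = jnp.dot(x, w) + b
    return (pred - y) ** 2

# Gradient of the per-example loss with respect to params
grad_fn = jax.grad(loss)

# Auto-vectorize over a batch of (x, y) pairs while sharing the same params,
# then compile the whole thing with XLA
batched_grads = jax.jit(jax.vmap(grad_fn, in_axes=(None, 0, 0)))

params = (jnp.ones(3), jnp.array(0.0))
xs = jnp.arange(12.0).reshape(4, 3)   # batch of 4 inputs, 3 features each
ys = jnp.ones(4)                      # batch of 4 targets
grad_w, grad_b = batched_grads(params, xs, ys)
print(grad_w.shape, grad_b.shape)     # (4, 3) (4,)
```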
What’s Inside
- Use JAX for numerical calculations
- Build differentiable models with JAX primitives
- Run distributed and parallelized computations with JAX
- Use high-level neural network libraries such as Flax
Table of Contents
1 Part 1. First steps
2 Chapter 1. When and why to use JAX
3 Chapter 1. How is JAX different from NumPy?
4 Chapter 1. How is JAX different from TensorFlow and PyTorch?
5 Chapter 1. Summary
6 Chapter 2. Your first program in JAX
7 Chapter 2. An overview of a JAX deep learning project
8 Chapter 2. Loading and preparing the dataset
9 Chapter 2. A simple neural network in JAX
10 Chapter 2. vmap: Auto-vectorizing calculations to work with batches
11 Chapter 2. Autodiff: How to calculate gradients without knowing about derivatives
12 Chapter 2. JIT: Compiling your code to make it faster
13 Chapter 2. Saving and deploying the model
14 Chapter 2. Pure functions and composable transformations: Why are they important?
15 Chapter 2. Summary
16 Part 2. Core JAX
17 Chapter 3. Working with arrays
18 Chapter 3. Arrays in JAX
19 Chapter 3. Differences from NumPy
20 Chapter 3. High-level and low-level interfaces: jax.numpy and jax.lax
21 Chapter 3. Summary
22 Chapter 4. Calculating gradients
23 Chapter 4. Calculating gradients with autodiff
24 Chapter 4. Forward- and reverse-mode autodiff
25 Chapter 4. Summary
26 Chapter 5. Compiling your code
27 Chapter 5. JIT internals
28 Chapter 5. JIT limitations
29 Chapter 5. Summary
30 Chapter 6. Vectorizing your code
31 Chapter 6. Controlling vmap() behavior
32 Chapter 6. Real-life use cases for vmap()
33 Chapter 6. Summary
34 Chapter 7. Parallelizing your computations
35 Chapter 7. Controlling pmap() behavior
36 Chapter 7. Data-parallel neural network training example
37 Chapter 7. Using multihost configurations
38 Chapter 7. Summary
39 Chapter 8. Using tensor sharding
40 Chapter 8. MLP with tensor sharding
41 Chapter 8. Summary
42 Chapter 9. Random numbers in JAX
43 Chapter 9. Differences with NumPy
44 Chapter 9. Generating random numbers in real-life applications
45 Chapter 9. Summary
46 Chapter 10. Working with pytrees
47 Chapter 10. Functions for working with pytrees
48 Chapter 10. Creating custom pytree nodes
49 Chapter 10. Summary
50 Part 3. Ecosystem
51 Chapter 11. Higher-level neural network libraries
52 Chapter 11. Image classification using a ResNet
53 Chapter 11. Using the Hugging Face ecosystem
54 Chapter 11. Summary
55 Chapter 12. Other members of the JAX ecosystem
56 Chapter 12. Machine learning modules
57 Chapter 12. JAX modules for other fields
58 Chapter 12. Summary
59 Appendix A. Installing JAX
60 Appendix A. Installing JAX on GPUs
61 Appendix A. Installing JAX on TPUs
62 Appendix B. Using Google Colab
63 Appendix C. Using Google Cloud TPUs
64 Appendix C. Running a Cloud TPU instance and connecting it to Google Colab
65 Appendix C. Resources
66 Appendix D. Experimental parallelization
67 Appendix D. Using pjit() for tensor parallelism
68 Appendix D. Summary