Regularization in Deep Learning
By Liu Peng
For data scientists, machine learning engineers, and researchers with basic model development experience who want to improve their training efficiency and avoid overfitting.
Regularization in Deep Learning delivers practical techniques to help you build more general and adaptable deep learning models. It goes beyond basic techniques like data augmentation and explores regularization strategies based on model architecture, the objective function, and the optimization procedure.
You will turn regularization theory into practice using PyTorch, following guided implementations that you can easily adapt and customize to your own model's needs.
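As a taste of what such implementations look like, here is a minimal sketch (not taken from the book) of two common regularization techniques in PyTorch: dropout inside the architecture and L2 weight decay in the optimizer. The layer sizes and hyperparameter values are purely illustrative.

```python
# Minimal sketch of dropout + weight decay in PyTorch.
# Layer sizes and hyperparameters are illustrative, not the book's recipe.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # randomly zeroes activations during training
    nn.Linear(256, 10),
)

# weight_decay adds an L2 penalty on the parameters at each update step
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
```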
Key features include:
- Insights into model generalizability
- A holistic overview of regularization techniques and strategies
- Classical and modern views of generalization, including the bias-variance tradeoff
- When and where to use different regularization techniques
- The background knowledge you need to understand cutting-edge research
Along the way, you will get just enough of the theory and mathematics behind regularization to understand the new research emerging in this important area.
About the technology
Deep learning models that produce highly accurate results on their training data can struggle with messy real-world test data. Regularization strategies counter this by making your models more robust to noisy data and changing requirements. By learning to tweak training data and loss functions, and to apply other regularization approaches, you can ensure a model generalizes well and avoid overfitting.
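The phrase "tweak training data and loss functions" can be made concrete with a short, hedged sketch: random image augmentations via torchvision and label smoothing in the cross-entropy loss. The transform choices and smoothing factor are assumptions for illustration, not the book's specific recommendations.

```python
# Sketch of data- and loss-level regularization; values are illustrative.
import torch.nn as nn
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),     # augment: random horizontal flips
    transforms.RandomCrop(32, padding=4),  # augment: random padded crops
    transforms.ToTensor(),
])

# label_smoothing softens one-hot targets, a simple loss-level regularizer
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
```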