Machine learning has led to incredible achievements in many different fields of science and technology. These varied methods of machine learning all offer powerful new tools to scientists and engineers and open new paths in geomechanics.

The two volumes of Machine Learning in Geomechanics aim to demystify machine learning. They present its main methods and provide examples of their applications in mechanics and geomechanics. Most of the chapters offer a pedagogical introduction to the most important machine learning methods and uncover the fundamental notions underlying them. Building from the simplest to the most sophisticated methods of machine learning, the books give several hands-on coding examples to help readers understand both the methods and their potential, and to identify possible pitfalls.
Ioannis Stefanou is Professor at ECN, France, and leads several geomechanics projects. His main research interests include mechanics, geomechanics, control, induced seismicity and machine learning.

Félix Darve is Emeritus Professor at the Soils Solids Structures Risks (3SR) laboratory, Grenoble-INP, Grenoble Alpes University, France. His research focuses on computational geomechanics.
Preface ix
Ioannis STEFANOU and Félix DARVE

Chapter 1 Overview of Machine Learning in Geomechanics 1
Ioannis STEFANOU
1.1. What exactly is machine learning? 1
1.2. Classification of ML methods 7
1.2.1. Supervised versus unsupervised ML 7
1.2.2. Batch versus online ML 9
1.2.3. Instance-based versus model-based ML 10
1.3. ML and geomechanics 12
1.4. Libraries for ML 16
1.5. Bias in ML and limitations 16
1.6. What to expect from these volumes? 19
1.7. Acknowledgments 20
1.8. References 20

Chapter 2 Introduction to Regression Methods 31
Filippo MASI
2.1. Introduction 32
2.2. Linear regression 34
2.2.1. Example 38
2.3. Gradient descent 41
2.3.1. Batch GD 46
2.3.2. Stochastic GD 48
2.3.3. Mini-batch GD 50
2.4. Data preprocessing and model validation 54
2.4.1. Feature scaling 54
2.4.2. Test and validation of a model 56
2.5. Nonlinear regression 58
2.5.1. End-to-end example 59
2.6. Regularization techniques 66
2.6.1. Over- and under-determined systems 66
2.6.2. Regularized regression 69
2.7. Challenges in generalization and extrapolation 72
2.7.1. Interpretable models and where to find them 74
2.8. Bayesian regression 81
2.8.1. Linear Bayesian regression 82
2.8.2. GP regression 85
2.9. Conclusions 89
2.10. References 90

Chapter 3 Unsupervised Learning: Basic Concepts and Application to Particle Dynamics 93
Noel JAKSE
3.1. Introduction 93
3.2. Basic concepts 95
3.2.1. Representation of the data: Feature extraction and selection 95
3.2.2. Distance and similarity metrics 97
3.3. Unsupervised learning techniques 99
3.3.1. Clustering 99
3.3.2. Dimensionality reduction 102
3.4. Application to particle dynamics 104
3.4.1. Topological description of local structures 106
3.4.2. Clustering local environments during nucleation 109
3.5. Conclusion 111
3.6. Acknowledgements 112
3.7. References 112

Chapter 4 Classification Techniques in Machine Learning 117
Noel JAKSE
4.1. Introduction 117
4.2. Classification techniques 119
4.2.1. General considerations 119
4.2.2. Typical workflow for classification 120
4.2.3. Evaluation metrics 122
4.2.4. Standard classification algorithms 125
4.3. AL in classification 137
4.3.1. Application of classification 141
4.4. Conclusion 142
4.5. Acknowledgments 143
4.6. References 143

Chapter 5 Artificial Neural Networks: Learning the Optimum Statistical Model from Data 145
Filippo GATTI
5.1. Why PyTorch? 146
5.2. Introduction to sampling theory 150
5.2.1. Statistical models and maximum likelihood estimator 156
5.2.2. On the MLE optimization problem 173
5.2.3. The Fisher information: geometric interpretation 176
5.2.4. The Fisher information: statistical interpretation 177
5.2.5. The principle of maximum entropy (MaxEnt) 184
5.3. Optimizing a neural network 188
5.3.1. First-order gradient descent for empirical loss minimization 195
5.3.2. Second-order gradient descent methods 202
5.3.3. The role of BatchNorm 209
5.3.4. Stochastic gradient descent 212
5.3.5. Beyond SGD: the role of "momentum" 215
5.3.6. Beyond classical momentum SGD: the Nesterov algorithm 218
5.3.7. Optimizing with adaptive learning rates 220
5.4. References 232

List of Authors 237
Index 239
Summary of Volume 2 243
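To give a flavor of the hands-on coding examples the volumes promise, here is a minimal sketch, not taken from the book, of batch gradient descent for linear regression (the topic of sections 2.2 and 2.3.1). All data and parameter values are illustrative assumptions:

```python
import numpy as np

# Synthetic 1D data from y = 2x + 1 plus a little noise (illustrative values)
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 1.0 + 0.05 * rng.normal(size=100)

# Fit the model y ≈ w*x + b by batch gradient descent on the mean squared error
w, b = 0.0, 0.0
lr = 0.1  # learning rate
for _ in range(500):
    err = w * x + b - y
    # Gradients of the MSE loss with respect to w and b over the full batch
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # close to the true slope 2 and intercept 1
```

Stochastic and mini-batch variants (sections 2.3.2 and 2.3.3) differ only in computing the gradients on a single sample or a small random subset per step instead of the full batch.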