The aim of this book is to discuss the fundamental ideas which lie behind the statistical theory of learning and generalization.
Omitting proofs and technical details, the author concentrates on discussing the main results of learning theory and their connections to fundamental problems in statistics.
These include:

* the setting of learning problems based on the model of minimizing the risk functional from empirical data;
* a comprehensive analysis of the empirical risk minimization principle, including necessary and sufficient conditions for its consistency;
* non-asymptotic bounds on the risk achieved using the empirical risk minimization principle;
* principles, based on these bounds, for controlling the generalization ability of learning machines from small sample sizes;
* the Support Vector methods, which control the generalization ability when estimating functions from small sample sizes.
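To give a flavor of the first two items, the empirical risk minimization principle can be sketched in a few lines: given a sample and a class of candidate functions, one selects the function with the smallest average loss on the sample. The following toy example (the data, the threshold hypothesis class, and all names are invented here for illustration, not taken from the book) applies this principle to a finite class of one-dimensional threshold classifiers:

```python
# A minimal sketch of empirical risk minimization (ERM), assuming a
# hypothetical finite hypothesis class of threshold classifiers
# h_t(x) = +1 if x >= t else -1, and an invented toy sample.

def empirical_risk(h, sample):
    """Empirical risk: fraction of examples (x, y) misclassified by h."""
    return sum(1 for x, y in sample if h(x) != y) / len(sample)

def erm(hypotheses, sample):
    """ERM principle: pick the hypothesis with minimal empirical risk."""
    return min(hypotheses, key=lambda h: empirical_risk(h, sample))

# Toy sample: mostly x < 2.5 labeled -1 and x >= 2.5 labeled +1,
# with one noisy point (1.5, +1).
sample = [(0.5, -1), (1.0, -1), (2.0, -1), (3.0, 1), (3.5, 1), (1.5, 1)]

# Finite class of threshold classifiers.
thresholds = [1.0, 2.0, 3.0, 4.0]
hypotheses = [lambda x, t=t: 1 if x >= t else -1 for t in thresholds]

best = erm(hypotheses, sample)
```

The statistical questions the book addresses begin exactly here: under what conditions does the minimizer of the *empirical* risk also have small *expected* risk, and how does that depend on the sample size and the capacity of the hypothesis class?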
Written in a readable and concise style, the book is intended for statisticians, mathematicians, physicists, and computer scientists.
"The aim of the book is to introduce a wide range of readers to the fundamental ideas of statistical learning theory. … Each chapter is supplemented by ‘Reasoning and Comments’ which describe the relations between classical research in mathematical statistics and research in learning theory. … The book is well suited to promote the ideas of statistical learning theory and can be warmly recommended to all who are interested in computer learning problems."