Product Details

| Product Information | |
| Title: | Neural Networks and Learning Machines (English edition, 3rd edition) |
| Author: | Simon Haykin (Canada) |
| List Price: | ¥69.00 |
| ISBN: | 9787111265283 |
| Edition/Printing: | 1-3 |
| Publication Date: | 2009-04 |
| Pages: | 906 |
| Word Count: | |
| Publisher: | China Machine Press (机械工业出版社) |

| Table of Contents | |

Preface
Acknowledgements xiv
Abbreviations and Symbols xvi
Glossary xxi

Introduction 1
1. What Is a Neural Network? 1
2. The Human Brain 6
3. Models of a Neuron 10
4. Neural Networks Viewed As Directed Graphs 15
5. Feedback 18
6. Network Architectures 21
7. Knowledge Representation 24
8. Learning Processes 34
9. Learning Tasks 38
10. Concluding Remarks 45
Notes and References 46

Chapter 1 Rosenblatt's Perceptron 47
1.1 Introduction 47
1.2 Perceptron 48
1.3 The Perceptron Convergence Theorem 50
1.4 Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment 55
1.5 Computer Experiment: Pattern Classification 60
1.6 The Batch Perceptron Algorithm 62
1.7 Summary and Discussion 65
Notes and References 66
Problems 66

Chapter 2 Model Building through Regression 68
2.1 Introduction 68
2.2 Linear Regression Model: Preliminary Considerations 69
2.3 Maximum a Posteriori Estimation of the Parameter Vector 71
2.4 Relationship Between Regularized Least-Squares Estimation and MAP Estimation 76
2.5 Computer Experiment: Pattern Classification 77
2.6 The Minimum-Description-Length Principle 79
2.7 Finite Sample-Size Considerations 82
2.8 The Instrumental-Variables Method 86
2.9 Summary and Discussion 88
Notes and References 89
Problems 89

Chapter 3 The Least-Mean-Square Algorithm 91
3.1 Introduction 91
3.2 Filtering Structure of the LMS Algorithm 92
3.3 Unconstrained Optimization: A Review 94
3.4 The Wiener Filter 100
3.5 The Least-Mean-Square Algorithm 102
3.6 Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter 104
3.7 The Langevin Equation: Characterization of Brownian Motion 106
3.8 Kushner's Direct-Averaging Method 107
3.9 Statistical LMS Learning Theory for Small Learning-Rate Parameter 108
3.10 Computer Experiment I: Linear Prediction 110
3.11 Computer Experiment II: Pattern Classification 112
3.12 Virtues and Limitations of the LMS Algorithm 113
3.13 Learning-Rate Annealing Schedules 115
3.14 Summary and Discussion 117
Notes and References 118
Problems 119

Chapter 4 Multilayer Perceptrons 122
4.1 Introduction 123
4.2 Some Preliminaries 124
4.3 Batch Learning and On-Line Learning 126
4.4 The Back-Propagation Algorithm 129
4.5 XOR Problem 141
4.6 Heuristics for Making the Back-Propagation Algorithm Perform Better 144
4.7 Computer Experiment: Pattern Classification 150
4.8 Back Propagation and Differentiation 153
4.9 The Hessian and Its Role in On-Line Learning 155
4.10 Optimal Annealing and Adaptive Control of the Learning Rate 157
4.11 Generalization 164
4.12 Approximations of Functions 166
4.13 Cross-Validation 171
4.14 Complexity Regularization and Network Pruning 175
4.15 Virtues and Limitations of Back-Propagation Learning 180
4.16 Supervised Learning Viewed as an Optimization Problem 186
4.17 Convolutional Networks 201
4.18 Nonlinear Filtering 203
4.19 Small-Scale Versus Large-Scale Learning Problems 209
4.20 Summary and Discussion 217
Notes and References 219
Problems 221

Chapter 5 Kernel Methods and Radial-Basis Function Networks 230
5.1 Introduction 230
5.2 Cover's Theorem on the Separability of Patterns 231
5.3 The Interpolation Problem 236
5.4 Radial-Basis-Function Networks 239
5.5 K-Means Clustering 242
5.6 Recursive Least-Squares Estimation of the Weight Vector 245
5.7 Hybrid Learning Procedure for RBF Networks 249
5.8 Computer Experiment: Pattern Classification 250
5.9 Interpretations of the Gaussian Hidden Units 252
5.10 Kernel Regression and Its Relation to RBF Networks 255
5.11 Summary and Discussion 259
Notes and References 261
Problems 263

Chapter 6 Support Vector Machines 268
6.1 Introduction 268
6.2 Optimal Hyperplane for Linearly Separable Patterns 269
6.3 Optimal Hyperplane for Nonseparable Patterns 276
6.4 The Support Vector Machine Viewed as a Kernel Machine 281
6.5 Design of Support Vector Machines 284
6.6 XOR Problem 286
6.7 Computer Experiment: Pattern Classification 289
6.8 Regression: Robustness Considerations 289
6.9 Optimal Solution of the Linear Regression Problem 293
6.10 The Representer Theorem and Related Issues 296
6.11 Summary and Discussion 302
Notes and References 304
Problems 307

Chapter 7 Regularization Theory 313
7.1 Introduction 313
7.2 Hadamard's Conditions for Well-Posedness 314
7.3 Tikhonov's Regularization Theory 315
7.4 Regularization Networks 326
7.5 Generalized Radial-Basis-Function Networks 327
7.6 The Regularized Least-Squares Estimator: Revisited 331
7.7 Additional Notes of Interest on Regularization 335
7.8 Estimation of the Regularization Parameter 336
7.9 Semisupervised Learning 342
7.10 Manifold Regularization: Preliminary Considerations 343
7.11 Differentiable Manifolds 345
7.12 Generalized Regularization Theory …

| Book Description | |

Neural networks are an important branch of computational intelligence and machine learning and have seen great success in many fields. Among the many books on neural networks, the most influential is Simon Haykin's Neural Networks: A Comprehensive Foundation, retitled Neural Networks and Learning Machines in this third edition. Drawing on recent advances in neural networks and machine learning, the author gives a comprehensive, systematic introduction to the basic models, methods, and techniques of neural networks from both theoretical and practical standpoints, and integrates neural networks and machine learning into a single coherent treatment.

The book emphasizes mathematical analysis and theory while also paying close attention to applications of neural networks to practical engineering problems such as pattern recognition, signal processing, and control systems. It is highly readable: the author examines the basic models and the principal learning theories in depth yet with a light touch, and a wealth of computer experiments, worked examples, and end-of-chapter problems help the reader master the material.

This edition has been extensively revised and provides an up-to-date treatment of two increasingly important subjects: neural networks and machine learning. Highlights include:

- On-line learning algorithms based on stochastic gradient descent, and small-scale versus large-scale learning problems (a minimal sketch of one such algorithm follows this list).
- Kernel methods, including support vector machines and the representer theorem.
- Information-theoretic learning models, including copulas, independent-components analysis (ICA), coherent ICA, and information bottleneck.
- Stochastic dynamic programming, including approximate and neurodynamic programming.
- Sequential state-estimation algorithms, including Kalman and particle filters.
- Recurrent neural networks trained using sequential state-estimation algorithms.
- Insightful computer-oriented experiments.
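The simplest of the stochastic-gradient on-line learners highlighted above is the least-mean-square (LMS) algorithm covered in Chapter 3. The following is a minimal sketch of that idea only, not code from the book: the system-identification setup, the function name `lms`, and all parameter values are our own illustrative assumptions.

```python
import numpy as np

def lms(x, d, eta=0.05, num_taps=3):
    """Least-mean-square adaptive filtering (the Chapter 3 topic).

    x        -- input signal, shape (n,)
    d        -- desired response, shape (n,)
    eta      -- learning-rate parameter
    num_taps -- length of the tap-weight vector
    Returns the learned tap-weight vector w.
    """
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1 : n + 1][::-1]  # tap-input vector [x[n], ..., x[n-M+1]]
        e = d[n] - w @ u                       # estimation error e(n) = d(n) - w^T u(n)
        w = w + eta * e * u                    # stochastic-gradient (LMS) update
    return w

# Toy usage: identify an unknown 3-tap FIR system from noisy observations.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.1])                # the "unknown" system (illustrative)
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.standard_normal(len(x))
print(lms(x, d))                              # the learned weights approach h
```

The per-sample update is just the error scaled into the tap inputs, the computational simplicity the book counts among the LMS algorithm's virtues (Section 3.12).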