Science Press Flagship Store
Science Press upholds its long-standing tradition of "high level, high standard, and high quality" work carried out in a "serious, rigorous, and strict" manner, and remains committed to serving scientific and technological innovation, the dissemination and popularization of scientific knowledge, and scientists and readers at large.

Applied Iterative Analysis (English Edition)

Price: ¥77.40
Shipping: ¥0.00-18.00

Product Details

Title: Applied Iterative Analysis (English Edition)
List price: ¥98.00
ISBN: 9787030420787
Author: 袁锦昀 (Yuan Jinyun)
Edition: 1
Publication date: November 2014

Synopsis:




Media Reviews:

Editor's Recommendation


No related content available.



Table of Contents:
Preface to the Series in Information and Computational Science
Preface
Chapter 1 Introduction
  1.1 Background in linear algebra
    1.1.1 Basic symbols, notations, and definitions
    1.1.2 Vector norm
    1.1.3 Matrix norm
    1.1.4 Spectral radii
  1.2 Spectral results of matrix
  1.3 Special matrices
    1.3.1 Reducible and irreducible matrices
    1.3.2 Diagonally dominant matrices
    1.3.3 Nonnegative matrices
    1.3.4 p-cyclic matrices
    1.3.5 Toeplitz, Hankel, Cauchy, Cauchy-like and Hessenberg matrices
  1.4 Matrix decomposition
    1.4.1 LU decomposition
    1.4.2 Singular value decomposition
    1.4.3 Conjugate decomposition
    1.4.4 QZ decomposition
    1.4.5 S & T decomposition
  1.5 Exercises
Chapter 2 Basic Methods and Convergence
  2.1 Basic concepts
  2.2 The Jacobi method
  2.3 The Gauss-Seidel method
  2.4 The SOR method
  2.5 M-matrices and splitting methods
    2.5.1 M-matrix
    2.5.2 Splitting methods
    2.5.3 Comparison theorems
    2.5.4 Multi-splitting methods
    2.5.5 Generalized Ostrowski-Reich theorem
  2.6 Error analysis of iterative methods
  2.7 Iterative refinement
  2.8 Exercises
Chapter 3 Non-stationary Methods
  3.1 Conjugate gradient methods
    3.1.1 Steepest descent method
    3.1.2 Conjugate gradient method
    3.1.3 Preconditioned conjugate gradient method
    3.1.4 Generalized conjugate gradient method
    3.1.5 Theoretical results on the conjugate gradient method
    3.1.6 Generalized product-type methods based on Bi-CG
    3.1.7 Inexact preconditioned conjugate gradient method
  3.2 Lanczos method
  3.3 GMRES method and QMR method
    3.3.1 GMRES method
    3.3.2 QMR method
    3.3.3 Variants of the QMR method
  3.4 Direct projection method
    3.4.1 Theory of the direct projection method
    3.4.2 Direct projection algorithms
  3.5 Semi-conjugate direction method
    3.5.1 Semi-conjugate vectors
    3.5.2 Left conjugate direction method
    3.5.3 One possible way to find left conjugate vector set
    3.5.4 Remedy for breakdown
    3.5.5 Relation with Gaussian elimination
  3.6 Krylov subspace methods
  3.7 Exercises
Chapter 4 Iterative Methods for Least Squares Problems
  4.1 Introduction
  4.2 Basic iterative methods
  4.3 Block SOR methods
    4.3.1 Block SOR algorithms
    4.3.2 Convergence and optimal factors
    4.3.3 Example
  4.4 Preconditioned conjugate gradient methods
  4.5 Generalized least squares problems
    4.5.1 Block SOR methods
    4.5.2 Preconditioned conjugate gradient method
    4.5.3 Comparison
    4.5.4 SOR-like methods
  4.6 Rank deficient problems
    4.6.1 Augmented system of normal equation
    4.6.2 Block SOR algorithms
    4.6.3 Convergence and optimal factor
    4.6.4 Preconditioned conjugate gradient method
    4.6.5 Comparison results
  4.7 Exercises
Chapter 5 Preconditioners
  5.1 LU decomposition and orthogonal transformations
    5.1.1 Gilbert and Peierls algorithm for LU decomposition
    5.1.2 Orthogonal transformations
  5.2 Stationary preconditioners
    5.2.1 Jacobi preconditioner
    5.2.2 SSOR preconditioner
  5.3 Incomplete factorization
    5.3.1 Point incomplete factorization
    5.3.2 Modified incomplete factorization
    5.3.3 Block incomplete factorization
  5.4 Diagonally dominant preconditioner
  5.5 Preconditioner for least squares problems
    5.5.1 Preconditioner by LU decomposition
    5.5.2 Preconditioner by direct projection method
    5.5.3 Preconditioner by QR decomposition
  5.6 Exercises
Chapter 6 Singular Linear Systems
  6.1 Introduction
  6.2 Properties of singular systems
  6.3 Splitting methods for singular systems
  6.4 Nonstationary methods for singular systems
    6.4.1 Symmetric and positive semidefinite systems
    6.4.2 General systems
  6.5 Exercises
Bibliography
Index
《信息与计算科学丛书》(Series in Information and Computational Science)
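The contents above list the classical stationary iterations developed in Chapter 2 (Jacobi, Gauss-Seidel, SOR). For readers new to the topic, here is a minimal NumPy sketch of the standard Jacobi iteration; it illustrates the textbook algorithm generally, is not code from this book, and the test system is an assumption chosen to be strictly diagonally dominant so the iteration converges.

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Solve Ax = b by the Jacobi iteration x_{k+1} = D^{-1}(b - (A - D) x_k).

    Converges, e.g., when A is strictly diagonally dominant (cf. Chapter 2).
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    D = np.diag(A)              # diagonal entries of A
    R = A - np.diagflat(D)      # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# Example: a strictly diagonally dominant system (hypothetical test data).
A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi(A, b), np.linalg.solve(A, b))   # the two results should agree
```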

Online Excerpt:
Chapter 1 Introduction

In this chapter, we first give an overview of relevant concepts and some basic results of matrix linear algebra. The material contained here will be used throughout the book.

1.1 Background in linear algebra

1.1.1 Basic symbols, notations, and definitions

Let $\mathbb{R}$ be the set of real numbers; $\mathbb{C}$, the set of complex numbers; and $\mathrm{i} \equiv \sqrt{-1}$. The symbol $\mathbb{R}^n$ denotes the set of real $n$-vectors and $\mathbb{C}^n$, the set of complex $n$-vectors. $\alpha$, $\beta$, $\gamma$, etc., denote real numbers or constants. Vectors are almost always column vectors. We use $\mathbb{R}^{m\times n}$ to denote the linear vector space of all $m$-by-$n$ real matrices and $\mathbb{C}^{m\times n}$, the linear vector space of all $m$-by-$n$ complex matrices. The symbol $\dim(S)$ denotes the dimension of a linear vector space $S$. Upper case letters $A$, $B$, $C$, $\Delta$, $\Lambda$, etc., denote matrices, and lower case letters $x$, $y$, $z$, etc., denote vectors. Let $(A)_{ij} = a_{ij}$ denote the $(i,j)$th entry of a matrix $A = (a_{ij})$. For any $n$-by-$n$ matrix, the indices usually run from 1 to $n$, but sometimes run from 0 to $n-1$ for convenience. Let $A^T$ be the transpose of $A$; $A^*$, the conjugate transpose of $A$; $\mathrm{rank}(A)$, the rank of $A$; and $\mathrm{tr}(A)$, the trace of $A$. An $n$-by-$n$ diagonal matrix is denoted by $\mathrm{diag}(d_1,\dots,d_n)$. We use the notation $I_n$ for the $n$-by-$n$ identity matrix; when there is no ambiguity, we write it as $I$. The symbol $e_j$ denotes the $j$th unit vector, i.e., the $j$th column of $I$.

A matrix $A \in \mathbb{R}^{n\times n}$ is symmetric if $A^T = A$, and skew-symmetric if $A^T = -A$. A symmetric matrix $A$ is positive definite (semidefinite) if $x^T A x > 0$ ($\geq 0$) for any nonzero vector $x \in \mathbb{R}^n$. A matrix $A \in \mathbb{C}^{n\times n}$ is Hermitian if $A^* = A$. A Hermitian matrix $A$ is positive definite (semidefinite) if $x^* A x > 0$ ($\geq 0$) for any nonzero vector $x \in \mathbb{C}^n$. A number $\lambda \in \mathbb{C}$ is an eigenvalue of $A \in \mathbb{C}^{n\times n}$ if there exists a nonzero vector $x \in \mathbb{C}^n$ such that $Ax = \lambda x$; $x$ is called the eigenvector of $A$ associated with $\lambda$. It is well known that the eigenvalues of Hermitian matrices are real. Let $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the smallest and largest eigenvalues of a Hermitian matrix $A$, respectively. We use $\rho(A) = \max_i |\lambda_i(A)|$ to denote the spectral radius of $A$, where $\lambda_i(A)$ runs through the spectrum of $A$; recall that the spectrum of $A$ is the set of all eigenvalues of $A$. We use $\|\cdot\|$ to denote a norm of a vector or matrix. The symbols $\|\cdot\|_1$, $\|\cdot\|_2$, $\|\cdot\|_\infty$ denote the $p$-norm with $p = 1, 2, \infty$, respectively. We also use $\kappa_\alpha(A)$, defined by $\kappa_\alpha(A) = \|A\|_\alpha \|A^{-1}\|_\alpha$, to denote the condition number of the matrix $A$; when $\alpha$ is omitted, the norm is understood from the context, most often the 2-norm. We use $\mathcal{N}(A)$ and $\mathcal{R}(A)$ to represent the null space and the image space (or range) of a given matrix $A \in \mathbb{R}^{m\times n}$, respectively, where $\mathcal{N}(A) = \{x \in \mathbb{R}^n : Ax = 0\}$ and $\mathcal{R}(A) = \{y \in \mathbb{R}^m : y = Ax \text{ for some } x \in \mathbb{R}^n\}$.

For matrix iterative analysis, we need some tools, such as vector norms, matrix norms and their extensions, and spectral radii.

1.1.2 Vector norm

A norm is an extension of the length of a vector in $\mathbb{R}^2$ or of the absolute value in $\mathbb{R}$. It is well known that for all $x \in \mathbb{R}$, $|x|$ satisfies the following properties: $|x| \geq 0$, with $|x| = 0$ if and only if $x = 0$; $|\alpha x| = |\alpha|\,|x|$; and $|x + y| \leq |x| + |y|$. We generalize these three properties to the vector space $\mathbb{R}^n$ as follows.

Definition 1.1.1 $\mu : \mathbb{R}^n \to \mathbb{R}$ is a vector norm on $\mathbb{R}^n$ if (1) $\mu(x) \geq 0$ for all $x \in \mathbb{R}^n$, and $\mu(x) = 0$ if and only if $x = 0$; (2) $\mu(\alpha x) = |\alpha|\,\mu(x)$ for all $\alpha \in \mathbb{R}$ and $x \in \mathbb{R}^n$; (3) $\mu(x + y) \leq \mu(x) + \mu(y)$ for all $x, y \in \mathbb{R}^n$.

Example 1.1.1 Three common norms on $\mathbb{R}^n$ are defined by
$$\|x\|_1 = \sum_{i=1}^n |x_i|, \qquad \|x\|_2 = \Big(\sum_{i=1}^n x_i^2\Big)^{1/2}, \qquad \|x\|_\infty = \max_{1\leq i\leq n} |x_i|.$$

There are some important elementary consequences of Definition 1.1.1, such as the following.

Proposition 1.1.1 For any vector norm $\mu$ on $\mathbb{R}^n$ and any $x, y \in \mathbb{R}^n$,
$$|\mu(x) - \mu(y)| \leq \mu(x - y), \qquad (1.1.1)$$
$$|\mu(x) - \mu(y)| \leq \mu(x + y). \qquad (1.1.2)$$

Proof By the triangle inequality,
$$\mu(x) = \mu((x - y) + y) \leq \mu(x - y) + \mu(y), \quad \text{so} \quad \mu(x) - \mu(y) \leq \mu(x - y). \qquad (1.1.3)$$
By interchanging $x$ and $y$, we obtain
$$\mu(y) - \mu(x) \leq \mu(y - x) = \mu(x - y). \qquad (1.1.4)$$
The result (1.1.1) follows from (1.1.3) and (1.1.4) together. We can prove (1.1.2) by replacing $y$ with $-y$ in (1.1.1).
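As a concrete illustration of Example 1.1.1 and Proposition 1.1.1, here is a small numerical check (a minimal sketch of mine using NumPy, not code from the book) that computes the three common norms and verifies the bound (1.1.1) on random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
y = rng.standard_normal(5)

# The three common norms of Example 1.1.1, computed from their definitions.
norm_1 = np.sum(np.abs(x))           # ||x||_1
norm_2 = np.sqrt(np.sum(x**2))       # ||x||_2 (Euclidean norm)
norm_inf = np.max(np.abs(x))         # ||x||_inf (Chebyshev norm)

# Cross-check against NumPy's built-in norms.
assert np.isclose(norm_1, np.linalg.norm(x, 1))
assert np.isclose(norm_2, np.linalg.norm(x, 2))
assert np.isclose(norm_inf, np.linalg.norm(x, np.inf))

# Proposition 1.1.1, bound (1.1.1): | ||x|| - ||y|| | <= ||x - y||.
for p in (1, 2, np.inf):
    lhs = abs(np.linalg.norm(x, p) - np.linalg.norm(y, p))
    rhs = np.linalg.norm(x - y, p)
    assert lhs <= rhs + 1e-12
print("norms of x:", norm_1, norm_2, norm_inf)
```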
The 2-norm is the natural generalization of the Euclidean length of a vector in $\mathbb{R}^2$ or $\mathbb{R}^3$ and is called the Euclidean norm. The $\infty$-norm is also sometimes called the maximum norm or the Chebyshev norm. In fact, they are special cases of the $p$-norm, defined for $p \geq 1$ by
$$\|x\|_p = \Big(\sum_{i=1}^n |x_i|^p\Big)^{1/p}.$$

Sometimes the usual norms are not enough for our research, and we have to construct a new norm. One useful technique for constructing new norms from a well-known norm is given in the following theorem.

Theorem 1.1.2 Let $\nu$ be a norm on $\mathbb{R}^m$ and let $A \in \mathbb{R}^{m\times n}$ have linearly independent columns. Then $\mu(x) = \nu(Ax)$ is a norm on $\mathbb{R}^n$.

The proof is easy: just check the properties of the norm in Definition 1.1.1. We leave it to the reader. The same technique works for matrix norms in the next subsection.

Corollary 1.1.3 Let $A \in \mathbb{R}^{n\times n}$ be symmetric and positive definite. Then $\mu(x) = \sqrt{x^T A x}$ is a norm on $\mathbb{R}^n$, denoted $\|x\|_A$ and called the weighted norm (with $A$).

When we study iterative methods, we have to know whether the sequence generated by an iterative method converges to the solution. For this purpose, we give some concepts about limits of sequences in vector spaces.

Definition 1.1.2 Let $\{x^{(k)}\}$ be a sequence of $n$-vectors and $x \in \mathbb{R}^n$. Then $x$ is a limit of the sequence $\{x^{(k)}\}$ (written $x = \lim_{k\to\infty} x^{(k)}$) if $\lim_{k\to\infty} x_i^{(k)} = x_i$ for each component $x_i$ ($i = 1, 2, \dots, n$) of $x$. Furthermore, it follows from the equivalence of vector norms that
$$x = \lim_{k\to\infty} x^{(k)} \iff \lim_{k\to\infty} \mu(x - x^{(k)}) = 0,$$
where $\mu$ is any norm on $\mathbb{R}^n$.

1.1.3 Matrix norm

Definition 1.1.3 $\mu : \mathbb{R}^{m\times n} \to \mathbb{R}$ is a matrix norm on $\mathbb{R}^{m\times n}$ if (1) $\mu(A) \geq 0$ for all $A \in \mathbb{R}^{m\times n}$, and $\mu(A) = 0$ if and only if $A = 0$; (2) $\mu(\alpha A) = |\alpha|\,\mu(A)$; (3) $\mu(A + B) \leq \mu(A) + \mu(B)$.

Example 1.1.2 Let $A = (a_{ij}) \in \mathbb{R}^{n\times n}$ and suppose $\mu(A) = \max_{1\leq i,j\leq n} |a_{ij}|$. Then $\mu(A) \geq 0$, and $\mu(A) = 0$ if and only if $A = 0$. Also $\mu(\alpha A) = |\alpha|\,\mu(A)$. Since $|a_{ij} + b_{ij}| \leq |a_{ij}| + |b_{ij}|$, we have $\mu(A + B) \leq \mu(A) + \mu(B)$. Thus $\mu$ is a matrix norm.

The most frequently used matrix norms in numerical analysis are the Frobenius norm
$$\|A\|_F = \Big(\sum_i \sum_j a_{ij}^2\Big)^{1/2}$$
and the $p$-norm
$$\|A\|_p = \max_{x \neq 0} \frac{\|Ax\|_p}{\|x\|_p}.$$
The $p$-norm is also called the spectral norm when $p = 2$. The $p$-norm depends on the underlying matrix set; for example, the 2-norm on $\mathbb{R}^{3\times 5}$ is different from the 2-norm on $\mathbb{R}^{4\times 6}$. The inequality
$$\|AB\| \leq \|A\|\,\|B\| \qquad (1.1.8)$$
is an observation on the relationship between three different norms (on the spaces of $A$, of $B$, and of $AB$). But not all norms satisfy inequality (1.1.8). For example, take the norm $\|A\| = \max_{i,j} |a_{ij}|$ of Example 1.1.2 and let $A = B$ be the 2-by-2 matrix of all ones. Then $\|A\|\,\|B\| = 1 < 2 = \|AB\|$. This motivates us to introduce the following definition.

Definition 1.1.4 Norms $\mu$, $\nu$ and $\rho$ on $\mathbb{R}^{l\times m}$, $\mathbb{R}^{m\times n}$ and $\mathbb{R}^{l\times n}$, respectively, are mutually consistent if
$$\rho(AB) \leq \mu(A)\,\nu(B)$$
holds for all $A \in \mathbb{R}^{l\times m}$ and $B \in \mathbb{R}^{m\times n}$. A matrix norm on $\mathbb{R}^{n\times n}$ is consistent if it is consistent with itself.

Theorem 1.1.4 The Frobenius norm $\|\cdot\|_F$ is consistent.

Analysis For the proof, we note that $\|A\|_F^2 = \sum_{i=1}^m \|a_i\|_2^2$, where $a_i$ is the $i$th column of $A \in \mathbb{R}^{l\times m}$. Since $C = AB = (Ab_1, \dots, Ab_n)$, what we need to show is $\|Ab_i\|_2 \leq \|A\|_F \|b_i\|_2$, where $b_i$ is the $i$th column of $B \in \mathbb{R}^{m\times n}$.

Proof We first show that
$$\|Ax\|_2 \leq \|A\|_F \|x\|_2 \qquad (1.1.9)$$
for all $A \in \mathbb{R}^{l\times m}$ and $x \in \mathbb{R}^m$. Let $A$ be partitioned by rows: $A^T = (\alpha_1, \alpha_2, \dots, \alpha_l)$. Then
$$\|Ax\|_2^2 = \sum_{i=1}^l |\alpha_i^T x|^2.$$
It follows from the Cauchy inequality $|\alpha_i^T x| \leq \|\alpha_i\|_2 \|x\|_2$ that
$$\|Ax\|_2^2 \leq \sum_{i=1}^l \|\alpha_i\|_2^2\, \|x\|_2^2 = \|A\|_F^2 \|x\|_2^2,$$
so (1.1.9) is satisfied. Now let $C = AB$, where $A \in \mathbb{R}^{l\times m}$ and $B \in \mathbb{R}^{m\times n}$. If $B = (b_1, b_2, \dots, b_n)$ is partitioned by columns, then
$$\|C\|_F^2 = \sum_{i=1}^n \|Ab_i\|_2^2 \leq \|A\|_F^2 \sum_{i=1}^n \|b_i\|_2^2 = \|A\|_F^2 \|B\|_F^2.$$

For $p$-norms, we have the property $\|Ax\|_p \leq \|A\|_p \|x\|_p$, where $A \in \mathbb{R}^{m\times n}$ and $x \in \mathbb{R}^n$. In fact, the F-norm and $p$-norms satisfy certain inequalities that will be used frequently without justification; in particular, for $A \in \mathbb{R}^{m\times n}$,
$$\|QAZ\|_F = \|A\|_F \quad \text{and} \quad \|QAZ\|_2 = \|A\|_2,$$
where $Q$ and $Z$ are orthogonal.

Similar to the vector case, there are also some definitions for matrix sequences $\{A_k\}$.

Definition 1.1.5 Let $\{A_k = (a_{ij}^{(k)})\}$ be a sequence of $m \times n$ matrices and $A = (a_{ij}) \in \mathbb{R}^{m\times n}$. Then $A$ is a limit of the sequence $\{A_k\}$ (written $A = \lim_{k\to\infty} A_k$) if $\lim_{k\to\infty} a_{ij}^{(k)} = a_{ij}$ for all $i, j$. By the definition and the equivalence of norms, this is the same as $\lim_{k\to\infty} \|A_k - A\| = 0$ for any matrix norm $\|\cdot\|$.
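The constructions above lend themselves to quick numerical checks. The following sketch (mine, not the book's; NumPy assumed) verifies the norm construction of Theorem 1.1.2 with $\nu$ the 2-norm, the weighted norm of Corollary 1.1.3 using the SPD matrix $S = A^T A$ (so $\|x\|_S = \|Ax\|_2$ exactly), and the Frobenius consistency of Theorem 1.1.4:

```python
import numpy as np

rng = np.random.default_rng(1)

# Theorem 1.1.2: mu(x) = ||Ax||_2 is a norm when A has full column rank.
A = rng.standard_normal((6, 3))   # random 6x3; almost surely full column rank
x, y = rng.standard_normal(3), rng.standard_normal(3)
mu = lambda v: np.linalg.norm(A @ v, 2)
assert mu(x + y) <= mu(x) + mu(y) + 1e-12     # triangle inequality
assert np.isclose(mu(2.5 * x), 2.5 * mu(x))   # homogeneity

# Corollary 1.1.3: ||x||_S = sqrt(x^T S x) with S symmetric positive definite.
S = A.T @ A                        # SPD because A has full column rank
weighted = np.sqrt(x @ S @ x)
assert np.isclose(weighted, mu(x))  # since x^T A^T A x = ||Ax||_2^2

# Theorem 1.1.4: Frobenius consistency, ||AB||_F <= ||A||_F ||B||_F.
B = rng.standard_normal((3, 4))
assert (np.linalg.norm(A @ B, 'fro')
        <= np.linalg.norm(A, 'fro') * np.linalg.norm(B, 'fro'))
print("all checks passed")
```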
Proposition 1.1.5 Let $\lim_{k\to\infty} A_k = A$ and $\lim_{k\to\infty} B_k = B$. Then, (1) $\lim_{k\to\infty} (A_k + B_k) = A + B$; (2) $\lim_{k\to\infty} A_k B_k = AB$; (3) if $A$ is nonsingular, then for all sufficiently large $k$, $A_k$ is nonsingular and $\lim_{k\to\infty} A_k^{-1} = A^{-1}$.
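Proposition 1.1.5 can likewise be observed numerically. A small sketch of mine (illustrative matrices are assumptions, not from the book): perturb fixed matrices by $E/k$ and watch the sums, products, and inverses converge as $k$ grows.

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])    # nonsingular limit matrix
B = np.array([[1.0, -1.0], [4.0, 0.5]])
E = np.ones((2, 2))                        # fixed perturbation direction

for k in (10, 100, 1000, 10000):
    Ak = A + E / k    # A_k -> A componentwise (Definition 1.1.5)
    Bk = B - E / k    # B_k -> B
    err_sum = np.linalg.norm((Ak + Bk) - (A + B))
    err_prod = np.linalg.norm(Ak @ Bk - A @ B)
    err_inv = np.linalg.norm(np.linalg.inv(Ak) - np.linalg.inv(A))
    print(f"k={k:6d}  sum {err_sum:.2e}  product {err_prod:.2e}  inverse {err_inv:.2e}")
```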