LINEAR ALGEBRA
Power series
Can be written in three forms; the general form, centred at x = x0, is ∑_(n=0)^∞ a_n (x − x0)^n
When we do this we say that we’ve stripped out the n = 0, or first, term. We don’t need to stop at the first term either; if we strip out the first three terms we get ∑_(n=0)^∞ a_n (x − x0)^n = a_0 + a_1(x − x0) + a_2(x − x0)^2 + ∑_(n=3)^∞ a_n (x − x0)^n
A power series converges for x = c if the series ∑_(n=0)^∞ a_n (c − x0)^n converges, i.e. if the limit of its partial sums, lim_(N→∞) ∑_(n=0)^N a_n (c − x0)^n, exists and is a finite number.
A power series will always converge if x = x0; in that case every term with n ≥ 1 vanishes and the series reduces to a_0.
Radius of convergence
Ratio test
By adding or subtracting two power series term by term (adding or subtracting the coefficients) we get a new power series.
Derivatives in power series
first derivative
Second derivative
We need power series written in terms of (x−x0)^n, and they often won’t, initially at least, be in that form. To get them into the form we need, we perform an index shift.
If ∑_(n=0)^∞ a_n (x − x0)^n = 0 for all x, then a_n = 0 for all n.
Taylor and Maclaurin Series
Definition: f(x) = ∑_(n=0)^∞ f^(n)(c)/n! · (x − c)^n
Binomial
f(x)=(1+x)^k
Deriving a power series
f(x) = cos(x^0.5)
substitute x^0.5 for x in the known series for cos x
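As a quick way to check expansions like these, here is a minimal sympy sketch; the functions, expansion points, and number of terms are arbitrary choices, not the map's own examples:

```python
import sympy as sp

x = sp.symbols('x')

# Maclaurin series of cos(x) up to x^8 (higher terms appear as O(x**9))
print(sp.series(sp.cos(x), x, 0, 9))

# Taylor series of ln(x) about x = 1, built directly from f^(n)(c)/n! * (x - c)^n
f = sp.log(x)
c = 1
taylor = sum(f.diff(x, n).subs(x, c) / sp.factorial(n) * (x - c)**n for n in range(5))
print(sp.expand(taylor))
```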
Multiplication and division
Multiplication
multiply the 2 series term by term
Division
use long division
divide the bottom series into the top
Approximation from an integral
integrate each term of the series
evaluate the result like a definite integral
Fourier, Sine and Cosine Series
Fourier series equation
f(x) is even
f(x) is odd
Let f(x) be a function defined and integrable on [0, π]
Fourier sine series
Fourier cosine series
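As an illustration of how the sine and cosine coefficients can be computed, a small sympy sketch; f(x) = x on [0, π] is just an example function, not one from the map:

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', integer=True, positive=True)
f = x                                   # example function on [0, pi]

# Fourier sine series on [0, pi]:  b_n = (2/pi) * integral_0^pi f(x) sin(n x) dx
b_n = 2 / sp.pi * sp.integrate(f * sp.sin(n * x), (x, 0, sp.pi))
print(sp.simplify(b_n))                 # for f(x) = x this works out to 2*(-1)**(n+1)/n

# Fourier cosine series on [0, pi]:  a_n = (2/pi) * integral_0^pi f(x) cos(n x) dx  (a_0 handled separately)
a_n = 2 / sp.pi * sp.integrate(f * sp.cos(n * x), (x, 0, sp.pi))
print(sp.simplify(a_n))
```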
Systems of linear equations and matrices
Leontief input-output model
Step 1: form the consumption matrix C and the outside demand vector d
Step 2: solve (I − C)x = d for the production vector x
Example
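A minimal numpy sketch of the open Leontief model, assuming the analysis amounts to solving (I − C)x = d; the consumption matrix C and demand vector d below are made up:

```python
import numpy as np

# Hypothetical consumption matrix: column j lists the inputs needed per unit of output of sector j
C = np.array([[0.5, 0.2],
              [0.3, 0.4]])
d = np.array([50.0, 30.0])          # outside demand

# Open Leontief model: x = C x + d  =>  (I - C) x = d
x = np.linalg.solve(np.eye(2) - C, d)
print(x)                            # production needed to meet the demand
```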
Diagonal
Symmetric
Triangular
System of linear equations
Matrix equation
Many solutions
No solution
One solution
Linear system
Augmented matrix
theorem 1.3.1
If A is an m × n matrix, and if x is an n × 1 column vector, then the product Ax can be expressed as a linear combination of the column vectors of A in which the coefficients are the entries of x.
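A small numpy check of this statement; the matrix and vector are arbitrary:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # 2 x 3
x = np.array([2, -1, 3])           # entries are the coefficients of the combination

# Ax computed directly ...
direct = A @ x

# ... equals the linear combination x1*col1 + x2*col2 + x3*col3 of the columns of A
combo = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]

print(direct, combo)               # both give [9, 21]
```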
Methods to solve
Gauss-Jordan elimination
Gaussian elimination
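As a sketch, sympy's rref() carries out Gauss-Jordan elimination on an augmented matrix; the system below is a made-up example:

```python
import sympy as sp

# Augmented matrix [A | b] for a made-up system
aug = sp.Matrix([[1, 1,  2, 9],
                 [2, 4, -3, 1],
                 [3, 6, -5, 0]])

rref_matrix, pivot_cols = aug.rref()   # Gauss-Jordan: reduced row echelon form
print(rref_matrix)                     # the last column holds the unique solution here (x=1, y=2, z=3)
```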
Elementary Matrix
Matrix operation
Multiplying Matrices
Scalar Multiple
Addition and Subtraction
Inverse
Row Operation
Matrix form
Trace
Transpose
Theorem 1.5.3
Equivalent Statements
If A is an n × n matrix, then the following statements are equivalent, that is, all true or all false.
A is invertible.
Ax = 0 has only the trivial solution.
The reduced row echelon form of A is In.
A is expressible as a product of elementary matrices.
Watch Here! :star:
Infinite Series
Definition
An infinite series is an infinite sum of the form ∑_(n=1)^∞ a_n = a_1 + a_2 + a_3 + …
Nth partial sum of a series: s_N = a_1 + a_2 + … + a_N
P-Series
If p is a real number, then the series ∑_(n=1)^∞ 1/n^p is called a p-series.
Harmonic Series
The series ∑_(n=1)^∞ 1/n = 1 + 1/2 + 1/3 + … is called the harmonic series.
Geometric Series
In a geometric series each term is found by multiplying the previous term by a constant ratio r, so the series has the form ∑ a r^n; it converges to a/(1 − r) when |r| < 1 and diverges when |r| ≥ 1.
Alternating Series
An alternating series is an infinite series of the form ∑ (−1)^n a_n or ∑ (−1)^(n+1) a_n with a_n > 0 for all n.
Integral Test
The integral test for convergence is a method used to test infinite series of non-negative terms for convergence.
It provides a way of deducing the convergence or divergence of an infinite series or an improper integral.
The test works by comparing the given series or integral to one whose convergence properties are known.
Ratio Test
Suppose we have the series ∑ a_n. Define L = lim_(n→∞) |a_(n+1) / a_n|. Then:
if L<1, the series is absolutely convergent (and hence convergent)
if L>1, the series is divergent.
if L=1, the series may be divergent, conditionally convergent, or absolutely convergent
Root Test
Suppose we have the series ∑ a_n. Define L = lim_(n→∞) |a_n|^(1/n). Then:
if L<1, the series is absolutely convergent (and hence convergent)
if L>1, the series is divergent
if L=1, the series may be divergent, conditionally convergent, or absolutely convergent
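A rough numerical illustration of the ratio test; the series ∑ n/2^n is an arbitrary example, and estimating the limit this way is only a sanity check, not a proof:

```python
# Estimate L = lim |a_{n+1} / a_n| for a_n = n / 2**n
def a(n):
    return n / 2**n

for n in (10, 100, 1000):
    print(n, abs(a(n + 1) / a(n)))   # approaches 0.5 < 1, so the series converges absolutely
```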
General Vector Space
4.1 (Axiom)
For each u in V, there is an object –u in V, called a negative of u, such that u + (-u) = (-u) +u = 0
If k is any scalar and u is any object in V, then ku is in V.
There is an object 0 in V, called a zero vector for V, such that 0 + u = u + 0 = u for all u in V
k(u + v) = ku + kv
u + (v + w) = (u + v) + w
(k1 + k2 )u = k1u + k2u
u + v = v + u
k1(k2u) = (k1k2)u
If u and v are objects in V, then u + v is in V
1u = u
4.2 (Subspaces)
Subspaces
If u and v are vectors in W, then u + v is in W.
If k is any scalar and u is any vector in W, then ku is in W.
When W is itself a vector space under the addition and scalar multiplication defined on V.
Linear Combination
It can be expressed as w = k1v1 + k2v2 + … + krvr
the k’s are scalar coefficients
A vector w in a vector space V is a linear combination of the vectors v1, v2, …, vr in V if it can be written in this form.
Span of S
Formed from all possible linear combinations of the vectors in a nonempty set S
If S = {w1, w2, …, wr}, then the span is denoted by span{w1, w2, …, wr} or span(S).
4.5 (Coordinates and basis)
If S = {v1, v2, …, vn} is a set of vectors in a finite‐dimensional vector space V, then S is called a basis for V if:
S is linearly independent.
S spans V.
Theorem 4.5.1 - All bases for a finite-dimensional vector space have the same number of vectors.
Theorem 4.5.2 - Let V be a finite-dimensional vector space, and let {v1, v2, …, vn} be any basis.
(a) If a set has more than n vectors, then it is linearly dependent.
(b) If a set has fewer than n vectors, then it does not span V.
4.6 (Dimension)
Dimension
defined to be the number of vectors in a basis for V.
The dimension of a finite-dimensional vector space V is denoted by dim(V).
Zero vector space = dimension zero
Familiar Vector Spaces
dim(R^n) = n (the standard basis has n vectors)
dim(P_n) = n + 1 (the standard basis has n + 1 vectors)
dim(M_(m×n)) = mn (the standard basis has mn vectors)
Theorem 4.6.3 - Plus/Minus Theorem
Let S be a nonempty set of vectors in a vector space V.
(a) If S is a linearly independent set, and if v is a vector in V that is outside of span(S), then the set S ∪ {v} that results by inserting v into S is still linearly independent.
(b) If v is a vector in S that is expressible as a linear combination of other vectors in S, and if S − {v} denotes the set obtained by removing v from S, then S and S − {v} span the same space; that is, span(S) = span(S − {v}).
Theorem 4.6.5 - Let S be a finite set of vectors in a finite‐dimensional vector space V.
If S spans V but is not a basis for V, then S can be reduced to a basis for V by removing appropriate vectors from S
If S is a linearly independent set that is not already a basis for V, then S can be enlarged to a basis for V by inserting appropriate vectors into S
4.3 (Linear Independence)
S = {v1, v2, …, vr} in a vector space V is called linearly independent if the vector equation k1v1 + k2v2 + … + krvr = 0 has only the trivial solution k1 = 0, k2 = 0, …, kr = 0.
If there are also nontrivial solutions, then S is called linearly dependent.
4.7 (Change of basis)
If P is the transition matrix from a basis B to a basis B′ for a finite-dimensional vector space V, then P is invertible and P^(-1) is the transition matrix from B′ to B.
A Procedure for Computing Transition Matrices
Step 1. Form the partitioned matrix [new basis | old basis] in which the basis vectors are in column form.
Step 2. Use elementary row operations to reduce the matrix in Step 1 to reduced row echelon form.
Step 3. The resulting matrix will be [I | transition matrix from old to new] where I is an identity matrix.
Step 4. Extract the matrix on the right side of the matrix obtained in Step 3.
Let B = {u1, u2, …, un} be any basis for the vector space R^n and let S = {e1, e2, …, en} be the standard basis for R^n. If the vectors in these bases are written in column form, then P_(B→S) = [u1 | u2 | … | un].
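A sympy sketch of the row-reduction procedure above; the two bases for R^2 are made up:

```python
import sympy as sp

# Old basis B and new basis B' for R^2, written as columns (made-up vectors)
old = sp.Matrix([[1, 1],
                 [0, 2]])
new = sp.Matrix([[2, 1],
                 [1, 1]])

# Steps 1-4: row reduce [new basis | old basis]; the right block of [I | P] is the
# transition matrix from the old basis to the new basis
aug = new.row_join(old)
rref_matrix, _ = aug.rref()
P_old_to_new = rref_matrix[:, 2:]
print(P_old_to_new)
```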
4.4 (Basis for Vector Space)
If S = { v1,v2, …,vr} is a finite set of vectors in V, then S is called a basis for V if the following two conditions hold:
(a) S is linearly independent.
(b) S spans V
4.8 (Row space, column space and null space)
Theorem 4.8.1
A system of linear equations Ax = b is consistent if and only if b is in the column space of A.
Theorem 4.8.2 - If x0 is any solution of a consistent linear system Ax = b, and if S = {v1, v2, …, vk} is a basis for the null space of A, then every solution of Ax = b can be expressed in the form x = x0 + c1v1 + c2v2 + … + ckvk.
Theorem 4.8.3 - Row equivalent matrices have the same row space and row equivalent matrices have the same null space.
Theorem 4.8.4 - If a matrix R is in row echelon form, then the row vectors with the leading 1's (the nonzero row vectors) form a basis for the row space of R, and the column vectors with the leading 1's of the row vectors form a basis for the column space of R.
Theorem 4.8.5 - If A and B are row equivalent matrices, then:
(a) A given set of column vectors of A is linearly independent if and only if the corresponding column vectors of B are linearly independent.
(b) A given set of column vectors of A forms a basis for the column space of A if and only if the corresponding column vectors of B form a basis for the column space of B.
INNER PRODUCT
6.3 Gram-Schmidt Process; QR-Decomposition
Theorem 6.3.1
〈vi, vi〉 ≠ 0
Theorem 6.3.2
orthogonal basis S
orthonormal basis S
Orthogonal and Orthonormal Sets
Orthogonal Projections
Theorem 6.3.3 (Projection Theorem)
orthogonal projection of u on W⊥
orthogonal projection of u on W
Theorem 6.3.4
If {v1, v2, …, vr} is an orthogonal basis for W, then proj_W u = (〈u, v1〉/‖v1‖^2)v1 + … + (〈u, vr〉/‖vr‖^2)vr
If {v1, v2, …, vr} is an orthonormal basis for W, then proj_W u = 〈u, v1〉v1 + … + 〈u, vr〉vr
Theorem 6.3.5
Every nonzero finite‐dimensional inner product space has an orthonormal basis.
The Gram–Schmidt Process
Step 1: v1 = u1
Step 2: v2 = u2 − (〈u2, v1〉/‖v1‖^2)v1
Step 3: v3 = u3 − (〈u3, v1〉/‖v1‖^2)v1 − (〈u3, v2〉/‖v2‖^2)v2
Step 4: continue in this way for r steps; normalizing each vi then gives an orthonormal basis
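A minimal numpy sketch of the process for the Euclidean inner product; the starting basis is arbitrary, and normalizing at the end gives an orthonormal basis:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a list of linearly independent vectors (Euclidean inner product)."""
    ortho = []
    for u in vectors:
        v = u.astype(float)
        for w in ortho:
            v = v - (np.dot(u, w) / np.dot(w, w)) * w   # subtract the projection onto each earlier v_i
        ortho.append(v)
    return [v / np.linalg.norm(v) for v in ortho]       # normalize -> orthonormal basis

basis = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, 0.0, 1.0]),
         np.array([0.0, 1.0, 1.0])]
for q in gram_schmidt(basis):
    print(np.round(q, 4))
```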
6.4 Best Approximations; Least Squares
Finding Least Squares Solutions
Theorem 6.4.2
normal equation A^T A x = A^T b
least squares solutions
Conditions for Uniqueness of Least Squares Solutions
Theorem 6.4.3
If A is an m × n matrix, then the following statements are equivalent:
the column vectors of A are linearly independent
A^T A is invertible
Theorem 6.4.4
If A is an m × n matrix with linearly independent column vectors, then for every m × 1 matrix b, the linear system Ax = b has a unique least squares solution
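A numpy sketch of this situation: with independent columns, the unique least squares solution can be obtained from the normal equation A^T A x = A^T b; the matrix and right-hand side are arbitrary:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])           # independent columns, m = 3 > n = 2
b = np.array([6.0, 0.0, 0.0])

# Unique least squares solution from the normal equation A^T A x = A^T b
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)

# Same answer from numpy's built-in least squares routine
print(np.linalg.lstsq(A, b, rcond=None)[0])
```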
Least Squares Solutions of Linear Systems
least squares error vector b − Ax
least squares error ‖b − Ax‖
least squares solution of Ax = b (a vector x that minimizes ‖b − Ax‖)
Theorem 6.4.1
Best Approximation Theorem
More on the Equivalence Theorem
Theorem 6.4.6
If A is an m × n matrix with linearly independent column vectors, and if A = QR is a QR-decomposition of A, then for every b in R^m the system Ax = b has a unique least squares solution given by x̂ = R^(-1) Q^T b.
Theorem 6.4.5
If A is an n × n matrix in which there are no duplicate rows and no duplicate columns
6.2 Angle and Orthogonality in Inner Product Spaces
Properties of Length and Distance
Triangle inequality for vectors ‖u + v‖ ≤ ‖u‖ + ‖v‖
Triangle inequality for distances d(u, v) ≤ d(u, w) + d(w, v)
Orthogonality
u and v are orthogonal if 〈u, v〉 = 0.
Cauchy–Schwarz Inequality
〈u, v〉² ≤ 〈u, u〉〈v, v〉
〈u, v〉² ≤ ‖u‖²‖v‖²
Angle Between Vectors
θ = cos^(-1)(〈u, v〉 / (‖u‖ ‖v‖))
Theorem of Pythagoras
If u and v are orthogonal, then ‖u + v‖² = ‖u‖² + ‖v‖²
Orthogonal Complements
Theorem 6.2.4
If W is a subspace of a real inner product space V, then:
W ⊥ is a subspace of V
W ∩ W ⊥ = {0}.
Theorem 6.2.5
If W is a subspace of a real finite‐dimensional inner product space V
then the orthogonal complement of W⊥ is W; that is, (W⊥)⊥ = W.
6.5 Mathematical Modeling Using Least Squares
Fitting a Curve to Data
A quadratic polynomial
A straight line
A cubic polynomial
Theorem 6.5.1
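A sketch of fitting a straight line (and a quadratic) to data by least squares with numpy; the data points below are made up:

```python
import numpy as np

# Made-up data points (x_i, y_i)
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.1, 2.9, 4.2])

# Fit y = a + b x: the design matrix M has columns [1, x]; solve the least squares problem M v = y
M = np.column_stack([np.ones_like(x), x])
a, b = np.linalg.lstsq(M, y, rcond=None)[0]
print(a, b)

# For a quadratic y = a + b x + c x^2, just add a column x**2 to the design matrix
M2 = np.column_stack([np.ones_like(x), x, x**2])
print(np.linalg.lstsq(M2, y, rcond=None)[0])
```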
6.1 Inner Products
Symmetry axiom 〈u, v〉 = 〈v, u〉
Additivity axiom 〈u + v, w〉 = 〈u, w〉 + 〈v, w〉
Homogeneity axiom 〈ku, v〉 = k〈u, v〉
Positivity axiom 〈v, v〉 ≥ 0 and 〈v, v〉 = 0 if and only if v = 0
6.6 Application
Measurements of Error
The deviation between f and g at x0.
General Linear Transformation
Defines a matrix transformation from R^n to R^m
A transformation with the linearity properties
Definition 1: T : V → W is a linear transformation if T(u + v) = T(u) + T(v) and T(ku) = kT(u) for all vectors u, v in V and every scalar k
Theorem 8.1.1
Matrix transformation
The zero transformation
The identity operator : The mapping I : V → V defined by I(v) = v is called the identity operator on V.
Dilation and contraction operators :
Linear Transformation from Pn to Pn+1 :
Linear transformation using an inner product :
Translation is not linear :
Theorem 8.1.2
If T : V → W is a linear transformation, then the set of vectors in V that T maps into 0 is called the kernel of T and is denoted by ker(T). The set of all vectors in W that are images under T of at least one vector in V is called the range of T and is denoted by R(T).
If TA : Rn → Rm is multiplication by the m × n matrix A, then the kernel of TA is the null space of A, and the range of TA is the column space of A.
If T0 : V → W is the zero transformation, then every vector in V is mapped to 0, so ker(T0) = V and R(T0) = {0}. (Kernel & range of the zero transformation)
Let I : V → V be the identity operator. Since I(v) = v for all vectors in V, every vector in V is the image of some vector (namely, itself); thus R(I) = V. Since the only vector that I maps into 0 is 0, it follows that ker(I) = {0}.
Theorem 8.1.4
Eigenvalues & Eigenvectors
If A is an n × n matrix, then a nonzero vector x in Rn is called an eigenvector of A
if Ax is a scalar multiple of x
The scalar λ is called an eigenvalue of A
If A is an n × n matrix, then λ is an eigenvalue of A if and only if it satisfies the characteristic equation det(λI − A) = 0.
If A is an n × n triangular matrix,the eigenvalues of A are the entries on the main diagonal of A.
the eigenvalues are:
λ = 1/2, λ = 2/3, λ = −1/4
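A quick numpy check; the triangular matrix below is an assumed example chosen so that its diagonal entries match the eigenvalues listed above:

```python
import numpy as np

# Upper triangular example: its eigenvalues are the diagonal entries
A = np.array([[0.5, 1.0,  2.0],
              [0.0, 2/3,  1.0],
              [0.0, 0.0, -0.25]])

values, vectors = np.linalg.eig(A)
print(values)                        # 0.5, 0.6667, -0.25

# Each pair satisfies A x = lambda x
for lam, x in zip(values, vectors.T):
    print(np.allclose(A @ x, lam * x))
```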
Diagonalization
A square matrix A is said to be diagonalizable if it is similar to some diagonal matrix; that is, if there exists an invertible matrix P such that P^(-1)AP is diagonal. In this case the matrix P is said to diagonalize A.
If λ1, λ2, …, λk are distinct eigenvalues of a matrix A, and if v1, v2, …, vk are corresponding eigenvectors, then {v1, v2, …, vk} is a linearly independent set.
An n × n matrix with n distinct eigenvalues is diagonalizable.
Finding a matrix P that diagonalizes a matrix A
Step 1: find the eigenvalues from the characteristic equation det(λI − A) = 0.
Step 2: find a basis for each eigenspace and form P with these basis vectors as its columns.
Step 3: verify using the formula P^(-1)AP = D, where D is diagonal with the eigenvalues on its diagonal.
If k is a positive integer, λ is an eigenvalue of a matrix A, and x is a corresponding eigenvector, then λ^k is an eigenvalue of A^k and x is a corresponding eigenvector.
Finding a Power of a Matrix
Step 1: diagonalize the matrix (find P and D).
Step 2: calculate using the formula A^k = P D^k P^(-1).
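A numpy sketch of both procedures; the matrix is made up and has distinct eigenvalues, so it is diagonalizable:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])               # distinct eigenvalues (5 and 2) -> diagonalizable

# Steps 1-2: eigenvalues and an eigenvector basis; the columns of P are the eigenvectors
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# Step 3: verify P^{-1} A P = D
print(np.allclose(np.linalg.inv(P) @ A @ P, D))

# Power of a matrix: A^k = P D^k P^{-1}
k = 5
A_k = P @ np.diag(eigvals**k) @ np.linalg.inv(P)
print(np.allclose(A_k, np.linalg.matrix_power(A, k)))
```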
Dynamical System and Markov Chains
A dynamical system is a finite set of variables whose values change with time. The value of a variable at a point in time is called the state of the variable at that time, and the vector formed from these states is called the state vector (or state) of the dynamical system at that time.
What is Dynamical system? :star:
A Markov chain is a dynamical system whose state vectors at a succession of equally spaced times are probability vectors and for which the state vectors at successive times are related by an equation of the form x_(k+1) = P x_k, where P is the transition matrix.
Watch Markov Chain Here! :star:
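A small sketch of iterating the state equation x_(k+1) = P x_k; the transition matrix and initial state vector are made up:

```python
import numpy as np

# Columns sum to 1: P[i, j] = probability of moving to state i from state j
P = np.array([[0.9, 0.3],
              [0.1, 0.7]])
x = np.array([0.5, 0.5])          # initial probability (state) vector

for k in range(20):
    x = P @ x                     # x_{k+1} = P x_k
print(x)                          # approaches the steady-state vector (about [0.75, 0.25])
```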
Diagonalization And Quadratic Forms
Optimization Using Quadratic Forms
Constrained Extremum Theorem
The quadratic form x^T A x has a maximum value of λ1 (the largest eigenvalue of A) and a minimum value of λn (the smallest), both of which are attained on the set of vectors for which ‖x‖ = 1.
The maximum value of x^T A x occurs at an eigenvector corresponding to the eigenvalue λ1.
The minimum value of x^T A x occurs at an eigenvector corresponding to the eigenvalue λn.
Second Derivative Test
if f ' (c) = 0 and f '' (c) >0
there is a local minimum at x=c
if f ' (c) = 0 and f '' (c) <0
there is a local maximum at x=c
if f ' (c) = 0 and f '' (c) =0 or f '' (c) does not exist
the test is inconclusive.
there might be a local min/max or a point of inflection.
Hessian Form of the Second Derivative Test
Suppose that (x0, y0) is a critical point of f(x, y) and that f has continuous second‐order partial derivatives in some circular region centered at (x0, y0). If H(x0, y0) is the Hessian of f at (x0, y0), then:
f has a relative minimum at (x0, y0) if
H(x0, y0) is positive definite.
f has a relative maximum at (x0, y0) if
H(x0, y0) is negative definite.
f has a saddle point at (x0, y0) if H(x0, y0) is indefinite.
The test is inconclusive otherwise.
Watch Example Here! :star:
Orthogonal Matrices
A square matrix A is said to be orthogonal if its transpose is the same as its inverse, that is, A^T = A^(-1).
[Watch Example of Orthogonal Matrices Here!](https://www.youtube.com/watch?v=2kKEow9Q6-Q) :star:
Orthogonal Diagonalizations
If A and B are square matrices, then we say that A and B are orthogonally similar if there is an orthogonal matrix P such that P^T A P = B.
Watch Example of Orthogonal Diagonalization Here ! :star:
Quadratic Forms
Watch Here!
The Principal Axes Theorem
If A is a symmetric n × n matrix, then there is an orthogonal change of variable that transforms the quadratic form x^T A x into a quadratic form y^T D y with no cross product terms. Specifically, if P orthogonally diagonalizes A, then making the change of variable x = Py in the quadratic form x^T A x yields the quadratic form y^T D y = λ1 y1^2 + λ2 y2^2 + … + λn yn^2, where λ1, λ2, …, λn are the eigenvalues of A.
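A numpy sketch of the principal axes idea: orthogonally diagonalizing a symmetric A removes the cross-product term of the quadratic form; the matrix here is an arbitrary symmetric example:

```python
import numpy as np

# Symmetric matrix of the quadratic form  q(x) = x^T A x = 2*x1^2 + 2*x1*x2 + 2*x2^2
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# For a symmetric matrix, eigh returns real eigenvalues and an orthogonal P
eigvals, P = np.linalg.eigh(A)
print(np.allclose(P.T @ A @ P, np.diag(eigvals)))   # P^T A P = D

# Substituting x = P y turns x^T A x into y^T D y = lambda_1*y1^2 + lambda_2*y2^2 (no cross term)
y = np.array([1.0, -2.0])
x = P @ y
print(x @ A @ x, eigvals @ y**2)                    # same value either way
```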
DETERMINANT
Properties :check:
THEOREM 2.3.3
A square matrix A is invertible if and only if det(A) ≠ 0.
THEOREM 2.2.1
If A (a square matrix) has a row of zeros or a column of zeros, then det(A) = 0.
THEOREM 2.2.2
If A is a square matrix, then
det(A) = det(A^T)
THEOREM 2.2.3
If B is obtained from A by interchanging two rows or two columns, then
det(B) = −det(A)
How To Find :question:
Cofactor
Row Reduction
Arrow technique
det = [(1)(1)(6)+(3)(0)(2)+(7)(1)(4)] − [(7)(1)(2)−(1)(0)(4)−(3)(1)(6)]
det = 24
Cramer's Rule
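A numpy sketch of Cramer's rule on a small made-up system (for anything large, elimination is preferred):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

det_A = np.linalg.det(A)
x = np.empty(2)
for i in range(2):
    A_i = A.copy()
    A_i[:, i] = b                       # replace column i of A with b
    x[i] = np.linalg.det(A_i) / det_A   # Cramer's rule: x_i = det(A_i) / det(A)
print(x)                                # matches np.linalg.solve(A, b)
```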
Numerical Methods
9.1(LU decomposition)
A factorization of a square matrix A as A=LU
If A is a square matrix that can be reduced to a row echelon form U by Gaussian elimination without row interchanges, then A can be factored as A = LU, where L is a lower triangular matrix.
The preceding example illustrates that once an LU‐decomposition of A is obtained, a linear system Ax = b can be solved by one forward substitution and one backward substitution. The main advantage of this method over Gaussian and Gauss–Jordan elimination is that it “decouples” A from b, so that for solving a sequence of linear systems with the same coefficient matrix A, say Ax = b1, Ax = b2, …, Ax = bk, the factorization needs to be computed only once.
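A sketch of this reuse with scipy; note that scipy's lu_factor uses partial pivoting, so it computes PA = LU rather than the textbook no-interchange factorization, and the matrix and right-hand sides here are made up:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[ 2.0,  6.0, 2.0],
              [-3.0, -8.0, 0.0],
              [ 4.0,  9.0, 2.0]])
b1 = np.array([2.0, 2.0, 3.0])
b2 = np.array([1.0, 0.0, -1.0])

# Factor A once ...
lu, piv = lu_factor(A)

# ... then reuse the factorization for each right-hand side (forward + back substitution only)
print(lu_solve((lu, piv), b1))
print(lu_solve((lu, piv), b2))
```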
9.2 (The Power Method)
If the distinct eigenvalues of a matrix A are λ1, λ2, …, λk, and if |λ1| is larger than |λ2|, …, |λk|, then λ1 is called a dominant eigenvalue of A. Any eigenvector corresponding to a dominant eigenvalue is called a dominant eigenvector of A.
Let A be a symmetric n × n matrix that has a positive dominant eigenvalue λ. If x0 is a unit vector in R^n that is not orthogonal to the eigenspace corresponding to λ, then the normalized power sequence x_(k+1) = A x_k / ‖A x_k‖ converges to a unit dominant eigenvector, and the Rayleigh quotients 〈A x_k, x_k〉 / 〈x_k, x_k〉 converge to λ.
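A minimal sketch of the normalized power sequence; the symmetric matrix and starting vector are arbitrary:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])           # symmetric, with a dominant eigenvalue
x = np.array([1.0, 0.0])             # unit starting vector, not orthogonal to the dominant eigenspace

for _ in range(25):
    x = A @ x
    x = x / np.linalg.norm(x)        # normalized power sequence

rayleigh = x @ A @ x                 # Rayleigh quotient estimates the dominant eigenvalue
print(x, rayleigh)                   # compare with np.linalg.eigh(A)
```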