MAS1602 - Introductory Algebra

Vectors and matrices

Complex numbers

Geometry and operations

Multiplying \(z_1\) by \(z_2=r_2e^{i\theta_2}\) is analogous to a counterclockwise rotation of \(z_1\) in the complex plane by \(\theta_2\), together with a scalar multiplication of its modulus by \(r_2\)
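For example, multiplying by \(i=e^{i\frac{\pi}{2}}\) gives a quarter-turn counterclockwise: \(i(1+i)=-1+i\), which has the same modulus \(\sqrt{2}\) but argument \(\frac{3\pi}{4}\) rather than \(\frac{\pi}{4}\)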

For \(z=\cos\theta+i\sin\theta\), de Moivre's theorem states that \[ z^n = (\cos\theta+i\sin\theta)^n = \cos(n\theta)+i\sin(n\theta)=e^{in\theta}\]
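For example, with \(\theta=\frac{\pi}{2}\) and \(n=2\): \(\left(\cos\frac{\pi}{2}+i\sin\frac{\pi}{2}\right)^2=i^2=-1=\cos\pi+i\sin\pi\)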

Complex numbers can be expressed in modulus-argument (or polar) form as \[ x + iy = r(\cos\theta + i\sin\theta)\]
...where \(r=|z|=\sqrt{x^2+y^2}\) and \(\theta = \arg z\)
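For example, \(z=1+i\) has \(r=\sqrt{1^2+1^2}=\sqrt{2}\) and \(\theta=\frac{\pi}{4}\), so \(1+i=\sqrt{2}\left(\cos\frac{\pi}{4}+i\sin\frac{\pi}{4}\right)\)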


Using the Taylor series of \(e^x\), \(\sin\theta\) and \(\cos\theta\), it can in turn be derived that \[ \cos\theta + i\sin\theta = e^{i\theta} \;\therefore\; x+iy = re^{i\theta}\]

...where the former is known as Euler's formula
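Combining the exponential form with de Moivre's theorem makes powers straightforward, e.g. \[ (1+i)^8=\left(\sqrt{2}e^{i\frac{\pi}{4}}\right)^8=2^4e^{2\pi i}=16 \]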

Roots

Any non-zero complex or real number \(z=re^{i\theta}\) has exactly \(n\) distinct \(n\)th roots \(\zeta_0,\zeta_1,\dots,\zeta_{n-1}\), where \[\zeta_0=r^{\frac{1}{n}}e^{\frac{i\theta}{n}}\]

The remaining \(n-1\) roots can be determined by multiplying \(\zeta_0\) by the \(n\)th roots of unity, the complex roots of \(z^n=1\). These form the set \[ \{\zeta:\zeta^n=1\} = \{e^{\frac{2k\pi i}{n}}:k=0,1,2,\dots,n-1\} \]
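For example, the cube roots of \(8=8e^{i0}\) are \(\zeta_0=8^{\frac{1}{3}}e^{0}=2\) and \[ \zeta_1=2e^{\frac{2\pi i}{3}}=-1+\sqrt{3}i,\qquad \zeta_2=2e^{\frac{4\pi i}{3}}=-1-\sqrt{3}i \]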

Vectors


Operations

Dot product

The dot product of two vectors \(\underline{u},\underline{v}\in \mathbb{R}^n\) is defined as \[\underline{u}\cdot\underline{v}=u_1v_1+u_2v_2+...+u_nv_n\]
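For example, \(\begin{pmatrix}1\\2\\3\end{pmatrix}\cdot\begin{pmatrix}4\\5\\6\end{pmatrix}=1\cdot4+2\cdot5+3\cdot6=32\)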

Its properties include

\((\underline{u}+\underline{v})\cdot \underline{w}=\underline{u}\cdot \underline{w} + \underline{v}\cdot \underline{w}\)

\(\underline{u}\cdot \underline{v}=\underline{v}\cdot \underline{u}\)

\(k(\underline{u}\cdot\underline{v}) = (k\underline{u})\cdot \underline{v} = \underline{u}\cdot (k\underline{v})\)

Cross product


The cross product (or vector product) is defined only for vectors in \(\mathbb{R}^3\), where \( \underline{u}\times\underline{v}=\begin{pmatrix} u_2v_3-u_3v_2\\ u_3v_1-u_1v_3\\ u_1v_2-u_2v_1 \end{pmatrix} \)
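For example, with \(\underline{u}=\begin{pmatrix}1\\2\\3\end{pmatrix}\) and \(\underline{v}=\begin{pmatrix}4\\5\\6\end{pmatrix}\): \[ \underline{u}\times\underline{v}=\begin{pmatrix}2\cdot6-3\cdot5\\3\cdot4-1\cdot6\\1\cdot5-2\cdot4\end{pmatrix}=\begin{pmatrix}-3\\6\\-3\end{pmatrix} \]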

\(\underline{u}\times\underline{u}=\underline{0}\)

\((\underline{v}+\underline{w})\times\underline{u}=(\underline{v}\times\underline{u})+(\underline{w}\times\underline{u})\)

\(\underline{u}\times(k\underline{v})=k(\underline{u}\times\underline{v})=(k\underline{u})\times\underline{v}\)

\(\underline{u}\times\underline{v}=-\underline{v}\times\underline{u}\)

\(\underline{u}\times(\underline{v}+\underline{w})=(\underline{u}\times\underline{v})+(\underline{u}\times\underline{w})\)


It can further be shown that \(||\underline{u}\times\underline{v}||^2=||\underline{u}||^2||\underline{v}||^2-(\underline{u}\cdot\underline{v})^2\) and by extension \[ ||\underline{u}\times\underline{v}||=||\underline{u}||\,||\underline{v}||\sin\theta \]
...where \(\theta\) is the angle between \(\underline{u}\) and \(\underline{v}\)

\(\underline{u}\cdot\underline{v}=||\underline{u}||\,||\underline{v}||\cos\theta\)
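For the example above, \(||\underline{u}\times\underline{v}||^2=9+36+9=54\) and \(||\underline{u}||^2||\underline{v}||^2-(\underline{u}\cdot\underline{v})^2=14\cdot77-32^2=1078-1024=54\), verifying the identity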

Matrices

Geometry

Planes in \(\mathbb{R}^3\) can be determined by \(P=\{\underline{x}\in\mathbb{R}^3:\underline{n}\cdot(\underline{x}-\underline{p_0})=0\}\) where \(\underline{p_0}=\begin{pmatrix}x_0\\y_0\\z_0\end{pmatrix}\) is a point on the plane, \(\underline{n}=\begin{pmatrix}a\\b\\c\end{pmatrix}\) is a normal vector and \(\underline{x}=\begin{pmatrix}x\\y\\z\end{pmatrix}\)

Hence the point-normal equation of a plane (or of a line in \(\mathbb{R}^2\), by omitting the \(z\) term) is \[ \underline{n}\cdot(\underline{x}-\underline{p_0})=0\iff a(x-x_0)+b(y-y_0)+c(z-z_0)=0 \]
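For example, the plane through \(\underline{p_0}=\begin{pmatrix}1\\0\\2\end{pmatrix}\) with normal \(\underline{n}=\begin{pmatrix}1\\2\\-1\end{pmatrix}\) is \(1(x-1)+2(y-0)-1(z-2)=0\), i.e. \(x+2y-z+1=0\)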


Since \(\underline{u}\times\underline{v}\) is orthogonal to both \(\underline{u}\) and \(\underline{v}\) (hence \(\underline{u}\cdot(\underline{u}\times\underline{v})=0\)), a normal vector to a plane containing the directions \(\underline{u}\) and \(\underline{v}\) is given by \(\underline{n}=\underline{u}\times\underline{v}\)


Determinant

The minor \(A_{ij}\) of a matrix \(A\) is the determinant of the matrix obtained from \(A\) by deleting row \(i\) and column \(j\)

The cofactor \(C_{ij} = (-1)^{i+j}A_{ij}\), where the signs \((-1)^{i+j}\) alternate in a chessboard pattern

If \(B\) is obtained from \(A\) by interchanging two rows, then \(\det(B)=-\det(A)\)

If \(B\) is obtained from \(A\) by multiplying a row by a constant \(k\), then \(\det(B)= k\cdot\det(A)\)

If \(B\) is obtained from \(A\) by adding a multiple of one row to another row, then \(\det(B)=\det(A)\)


The determinant of a matrix holds the property that \(\det(A)\neq 0\iff \exists\; A^{-1}:A^{-1}A = I_n\)


It can be shown that for a fixed \(i\in\{1,2,\dots,n\}\), the determinant can be expanded along any row or column: \[\det(A)=\sum_{j=1}^na_{ij}C_{ij}=\sum_{j=1}^na_{ji}C_{ji}\]
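For example, expanding along the first row: \[ \det\begin{pmatrix}1&2&0\\3&1&2\\0&4&1\end{pmatrix}=1\begin{vmatrix}1&2\\4&1\end{vmatrix}-2\begin{vmatrix}3&2\\0&1\end{vmatrix}+0=1(1-8)-2(3-0)=-13 \]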

The inverse matrix \(A^{-1}\) of a matrix \(A\) can be found by performing elementary row operations on an \(n\times 2n\) augmented matrix of \(A\) and \(I_n\) until the left hand side has become \(I_n\), at which point the right hand side is \(A^{-1}\)
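For example, for \(A=\begin{pmatrix}1&2\\3&4\end{pmatrix}\): \[ \left(\begin{array}{cc|cc}1&2&1&0\\3&4&0&1\end{array}\right)\rightarrow\left(\begin{array}{cc|cc}1&2&1&0\\0&-2&-3&1\end{array}\right)\rightarrow\left(\begin{array}{cc|cc}1&0&-2&1\\0&1&\frac{3}{2}&-\frac{1}{2}\end{array}\right) \] so \(A^{-1}=\begin{pmatrix}-2&1\\\frac{3}{2}&-\frac{1}{2}\end{pmatrix}\)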


The transpose matrix \(A^t\) of an \(m\times n\) matrix \(A\) is an \(n\times m\) matrix where \(a^t_{ij}=a_{ji}\)

\((AB)^t = B^tA^t\)

\(\underline{v}^t\underline{w} = \underline{v}\cdot \underline{w}\)

If \(A\) has dimensions \(n\times n\), \((A\underline{v})\cdot\underline{w}=\underline{v}\cdot(A^t\underline{w})\)

An \(n\times n\) matrix \(A\) is orthogonal if \(||A\underline{x}||=||\underline{x}||\) for all \(\underline{x}\in\mathbb{R}^n\), or equivalently \((A\underline{v})\cdot(A\underline{w})=\underline{v}\cdot\underline{w}\), and holds the property \[AA^t=I_n=A^tA\]
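For example, the rotation matrix \(A=\begin{pmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{pmatrix}\) is orthogonal, since \(AA^t=\begin{pmatrix}\cos^2\theta+\sin^2\theta&0\\0&\sin^2\theta+\cos^2\theta\end{pmatrix}=I_2\)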


Rotation

\(r_{\underline{n},\theta}:\mathbb{R}^3\rightarrow\mathbb{R}^3\) denotes the function that rotates any point in \(\mathbb{R}^3\) counterclockwise about the line \(L\) by \(\theta\), where \(L\) has equation \(\underline{x}=t\underline{n}\) for a unit vector \(\underline{n}=\begin{pmatrix}a\\b\\c\end{pmatrix}\)

The corresponding infinitesimal rotation matrix is \(M_{\underline{n}}= \begin{pmatrix} 0 & -c & b \\ c & 0 & -a \\ -b & a & 0 \\ \end{pmatrix} \)

\(\Rightarrow r_{\underline{n},\theta}(\underline{x})=A\underline{x}\) with \(A=I_3 + \sin\theta\, M_{\underline{n}} + (1-\cos\theta){M_{\underline{n}}}^2\)
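For example, taking \(\underline{n}=\begin{pmatrix}0\\0\\1\end{pmatrix}\) (the \(z\)-axis) gives \(M_{\underline{n}}=\begin{pmatrix}0&-1&0\\1&0&0\\0&0&0\end{pmatrix}\) and \({M_{\underline{n}}}^2=\begin{pmatrix}-1&0&0\\0&-1&0\\0&0&0\end{pmatrix}\), so \[ A=I_3+\sin\theta\,M_{\underline{n}}+(1-\cos\theta){M_{\underline{n}}}^2=\begin{pmatrix}\cos\theta&-\sin\theta&0\\\sin\theta&\cos\theta&0\\0&0&1\end{pmatrix} \] the familiar rotation about the \(z\)-axis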


Eigenvectors

A non-zero vector \(\underline{x}\in\mathbb{R}^n\) is an eigenvector of an \(n\times n\) matrix \(A\) if \(A\underline{x}=\lambda \underline{x}\) for some corresponding eigenvalue \(\lambda\in\mathbb{R}\)

If \(\underline{x}\) is an eigenvector of \(A\) then so is \(c\underline{x}\) for any non-zero \(c\in\mathbb{R}\)

Since \(A\underline{x}=\lambda\underline{x}\) has a non-zero solution \(\underline{x}\) if and only if \(\det(A-\lambda I_n)=0\), this equation can be used to solve a system for its eigenvalues and hence determine its eigenvectors
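For example, for \(A=\begin{pmatrix}2&1\\1&2\end{pmatrix}\): \(\det(A-\lambda I_2)=(2-\lambda)^2-1=(\lambda-1)(\lambda-3)=0\) gives \(\lambda=1,3\), with eigenvectors \(\begin{pmatrix}1\\-1\end{pmatrix}\) and \(\begin{pmatrix}1\\1\end{pmatrix}\) respectively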