In [17]:
from IPython.display import display
from sympy import init_printing, pprint

init_printing(use_latex='mathjax')

Vector Spaces

Definition: A non-empty set $E$ is called a $\underline{\text{Vector Space (vs)}}$ over a field $ \mathbb{K}$ if the following $\underline{\text{Composition Laws}}$ are defined between its elements ($\underline{\text{Vectors}}$):

1. (Inner) Sum (+): $ E \times E \rightarrow E$

$ (\underline{u},\underline{v}) \rightarrow \underline{w} = \underline{u} + \underline{v}$

with the following properties:

  1. Commutative:

    $ \underline{u} + \underline{v} = \underline{v} + \underline{u}, \quad \forall \underline{u}, \underline{v} \in E$

  2. Associative:

    $ (\underline{u} + \underline{v}) + \underline{w} = \underline{u} + (\underline{v} + \underline{w}), \quad \forall \underline{u},\underline{v},\underline{w} \in E$

  3. Neutral Element $\underline{0}$ s.t.

    $\underline{u} + \underline{0} = \underline{0} + \underline{u} = \underline{u}, \quad \forall \underline{u} \in E$

  4. Inverse Element $(-\underline{u})$ s.t.

    $\underline{u} + (-\underline{u}) = (-\underline{u}) + \underline{u} = \underline{0}, \quad \forall \underline{u} \in E$

2. (External) Product by a scalar of the field $\mathbb{K}$: $\mathbb{K} \times E \rightarrow E$

$(\lambda, \underline{u}) \rightarrow \underline{w} = \lambda\underline{u}$

with the following properties:

  1. Distributive w.r.t. the sum:

    $\lambda(\underline{u} + \underline{v}) = \lambda\underline{u} + \lambda\underline{v}, \quad \forall \lambda \in \mathbb{K}, \quad \forall \underline{u},\underline{v} \in E$

  2. Distributive w.r.t. the sum in $\mathbb{K}$:

    $(\lambda + \mu)\underline{v} = \lambda\underline{v} + \mu\underline{v}, \quad \forall \lambda,\mu \in \mathbb{K}, \quad \forall \underline{v} \in E$

  3. Associative

    $\lambda (\mu \underline{v}) = (\lambda \mu) \underline{v}, \quad \forall \lambda,\mu \in \mathbb{K}, \quad \forall \underline{v} \in E$

  4. Neutral element w.r.t. field $\mathbb{K}$

    $1 \cdot \underline{u} = \underline{u}, \quad \forall \underline{u} \in E$

Note: $(E, +)$ has the algebraic structure of an "Abelian Group"

Note: $\mathbb{K}$ can be real $\mathbb{R}$ or complex $\mathbb{C}$

Example A


$\mathbb{R}^n = \underbrace{ \mathbb{R} \times \mathbb{R} \times \dots \times \mathbb{R} }_{n \text{ times}} $ is the set of all $n$-tuples of real numbers $(x_1, \dots, x_n) \in \mathbb{R}^n \Rightarrow $ it is a Vector Space $\rightarrow$ indeed:

$(x_1, \dots, x_n) + (y_1, \dots, y_n) = (x_1 + y_1, \dots, x_n + y_n)$ with $(0, \dots, 0)$ as neutral element and $(-x_1, \dots, -x_n)$ as inverse element.
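
As a quick illustration, here is a minimal numpy sketch of these operations in $\mathbb{R}^3$ (the vectors below are example values chosen here):

In [ ]:
import numpy as np

# two example vectors of R^3
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# component-wise sum, as in the definition above
print(x + y)                        # [5. 7. 9.]

# neutral element (0, ..., 0) and inverse element (-x_1, ..., -x_n)
zero = np.zeros(3)
print(np.allclose(x + zero, x))     # True
print(np.allclose(x + (-x), zero))  # True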


Example B


Consider the set $C(a, b)$ of all real-valued continuous functions on the interval $(a, b) \subset \mathbb{R}$. Such a set is a vector space:

$C(a, b) \times C(a, b) \rightarrow C(a, b)$

$\big(f(x), g(x)\big) \rightarrow h(x) = f(x) + g(x)$

with $0(x) = 0$ in $(a, b)$ as the neutral function and $-f(x)$ s.t. $f(x) + (-f(x)) = 0(x)$ as the inverse function.


Linear dependence & independence

Definition: Let $\lbrace \underline{v}_1, \dots, \underline{v}_r \rbrace$ be a set of $r$ vectors of $E$. The vector $\lambda_1\underline{v}_1 + \dots + \lambda_r\underline{v}_r$ is called a $\underline{\text{Linear Combination}}$ of the vectors over $\mathbb{K}$.

Definition: $r$ vectors $\lbrace \underline{v}_1, \dots, \underline{v}_r \rbrace$ of $E$ are said to be $\underline{\text{Linearly Dependent (LD)}}$ if $\exists \, \lambda_1, \dots, \lambda_r$ in $\mathbb{K}$, not all zero, s.t.: $\lambda_1 \underline{v}_1 + \dots + \lambda_r \underline{v}_r = \underline{0}$

Definition: If $\lambda_1 \underline{v}_1 + \dots + \lambda_r \underline{v}_r = \underline{0}$ only for $\lambda_1 = \dots = \lambda_r = 0 \Rightarrow \lbrace \underline{v}_1, \dots, \underline{v}_r \rbrace$ are $\underline{\text{Linearly Independent (LI)}}$.

Definition: A $\underline{\text{Basis}}$ of a vector space $E$ is any LI system of vectors capable of generating the whole space by linear combination: $\mathbb{B} = \lbrace \underline{e}_1, \dots, \underline{e}_n \rbrace \Rightarrow \underline{v} = v_1\underline{e}_1 + \dots + v_n\underline{e}_n, \quad \forall \underline{v} \in E$

$v_1, \dots, v_n$ are the components of $\underline{v}$ w.r.t. the basis $\mathbb{B}$, and $n$ is the $\underline{\text{Dimension}}$ of the vector space $E$.

Theorem: Let $E_n$ be a vector space of dimension $n$. The vectors $\lbrace \underline{v}_1, \dots, \underline{v}_r \rbrace$ are LI iff the rank of the following matrix is $r$:

$ \begin{bmatrix} v_1^1 & \dots & v_r^1 \\ \vdots & \ddots & \vdots \\ v_1^n & \dots & v_r^n \\ \end{bmatrix}\rightarrow$ Matrix formed with the components of the vectors $\underline{v}_i\,,\ i = 1, \dots, r$, with respect to any basis $\mathbb{B} = \lbrace \underline{e}_1, \dots, \underline{e}_n \rbrace$

$ \underline{v}_i = \displaystyle\sum_{j=1}^n v_i^j \thinspace \underline{e}_j, \quad i = 1, \dots, r $
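
A short numpy check of this criterion; the three vectors below are example values chosen here, stacked as the columns of the matrix:

In [ ]:
import numpy as np

# columns are the components of v_1, v_2, v_3 w.r.t. the canonical basis of R^3
V = np.column_stack([[1, 0, 1], [0, 1, 1], [1, 1, 2]])

# rank < r = 3, so the three vectors are linearly dependent (v_3 = v_1 + v_2)
print(np.linalg.matrix_rank(V))   # 2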

Change of Basis

Consider $E_n$ to be a vector space and let $B = \lbrace \underline{e}_1, \dots, \underline{e}_n\rbrace$ and ${B'} = \lbrace \underline{e}_{1'}, \dots, \underline{e}_{n'} \rbrace$ be two bases. By definition, every vector of ${B'}$ can be expressed as a linear combination of the vectors of $B$, i.e.

$\displaystyle\underline{e}_{i'} = \sum_{r=1}^n A_{i'}^r \, \underline{e}_r, \quad i' = 1, \dots, n$

Inversely we have:

$\displaystyle\underline{e}_s = \sum_{i'=1}^n A_s^{i'} \underline{e}_{i'}, \quad s = 1, \dots, n$ (from ${B'} \rightarrow B$)

Substituting we have:

$\displaystyle\underline{e}_s = \sum_{i'=1}^n \sum_{r=1}^n A_{i'}^r A_s^{i'} \underline{e}_r \Rightarrow$ but since the vectors of $B$ are LI we have:

$\displaystyle \sum_{i'=1}^n A_{i'}^r A_s^{i'} = \delta_s^r = \begin{cases} 1 & \quad \text{if } r = s\\ 0 & \quad \text{if } r \neq s \end{cases} $

The matrices $\lbrace A_s^{i'} \rbrace$ and $\lbrace A_{i'}^r \rbrace$ are inverse to each other $\rightarrow$ we derive the following laws for change of basis:

$\lbrace B' = B\underline{\underline{A}} \Leftrightarrow B = B'\underline{\underline{A}}^{-1} \rbrace \text{& } \underline{\underline{A}}\underline{\underline{A}}^{-1} = \underline{\underline{I}}$

It is easy to show that the components of a vector $\underline{v} \in E_n$ change in a $\underline{\text{Contravariant}}$ fashion when changing the basis, i.e.:

$\underline{v} = v^1 \underline{e}_1 + \dots + v^n \underline{e}_n = \displaystyle\sum_{i=1}^n v^i \underline{e}_i = \sum_{i'=1}^n v^{i'} \underline{e}_{i'}, \quad \text{and } \quad \lbrace V' = A^{-1}V \Leftrightarrow V = AV' \rbrace \Rightarrow$ Contravariant!
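
A small numpy sketch of the contravariant behaviour, assuming an example change-of-basis matrix $A$ chosen here: the basis transforms with $A$, the components with $A^{-1}$, and the vector itself is unchanged.

In [ ]:
import numpy as np

# columns of E are the old basis vectors e_1, e_2 expressed in a reference frame
E = np.eye(2)

# example change-of-basis matrix A: B' = B A
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
E_prime = E @ A

# components of the same vector in the two bases: V' = A^{-1} V (contravariant)
V = np.array([3.0, 5.0])
V_prime = np.linalg.solve(A, V)

# the vector itself is unchanged: sum_i v^i e_i = sum_i' v^i' e_i'
print(np.allclose(E @ V, E_prime @ V_prime))   # True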

Linear Transformation between Vector Spaces

Definition: Let $E$ and $F$ be two Vector Spaces on $\mathbb{K}$. A $\underline{\text{Linear Transformation}}$ of $E$ into $F$, $L: E \rightarrow F$, is a map such that $\forall \underline{x}, \underline{y} \in E$ and $\forall \lambda \in \mathbb{K}$:

  1. $L(\underline{x} + \underline{y}) = L(\underline{x}) + L(\underline{y})$
  2. $L(\lambda\underline{x}) = \lambda L(\underline{x})$

or equivalently,

$L(\lambda_1 \underline{x} + \lambda_2 \underline{y}) = \lambda_1 L(\underline{x}) + \lambda_2 L(\underline{y})$

Example A


$L: \begin{matrix} C^1(0, 1) \rightarrow C^0(0, 1) \\ f(x) \rightarrow \frac{\mathrm d}{\mathrm d x}f(x) \end{matrix} $ is linear

$\frac{\mathrm d}{\mathrm d x} \big(\lambda_1 f(x) + \lambda_2 g(x) \big) = \lambda_1 f'(x) + \lambda_2 g'(x)$


Example B


$L: \begin{matrix} C^0(0,1) \rightarrow C^0(0,1) \\ f(x) \rightarrow \displaystyle\int_0^x f(x')dx' \end{matrix} $ is linear.

$\displaystyle \int_0^x \big(\lambda_1 f(x') + \lambda_2 g(x')\big) dx' = \lambda_1 \int_0^x f(x')dx' + \lambda_2 \int_0^x g(x') dx'$
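
Both example maps can be checked symbolically. The sketch below uses sympy with arbitrary example functions $f$ and $g$ (chosen here only to test linearity):

In [ ]:
from sympy import symbols, sin, exp, diff, integrate, simplify

x, t = symbols('x t')
lam1, lam2 = symbols('lambda_1 lambda_2')
f, g = sin(t), exp(t)   # example functions, chosen only for the check

# Example A: the derivative operator is linear
print(simplify(diff(lam1*f + lam2*g, t)
               - (lam1*diff(f, t) + lam2*diff(g, t))))          # 0

# Example B: integration from 0 to x is linear
print(simplify(integrate(lam1*f + lam2*g, (t, 0, x))
               - (lam1*integrate(f, (t, 0, x)) + lam2*integrate(g, (t, 0, x)))))  # 0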

Nullspace and Range of a Linear Transformation

Definition: Let $L: E_n \rightarrow F_m$ be a linear map. The $\underline{\text{nullspace}} \text{ } N(L)$ (also called kernel) is the subspace of $E_n$ s.t. $N(L) = \big\lbrace \underline{x} \in E_n \, | \, L \underline{x} = \underline{0} \big\rbrace$

Definition: Let $L: E_n \rightarrow F_m$ be a linear map. The $\underline{\text{Range}}$ space of $L$, $R(L)$, is the subspace of $F_m$ s.t.

$R(L) = \big\lbrace \underline{y} \in F_m \, | \, \exists\, \underline{x} \in E_n \text{ s.t. } L \underline{x} = \underline{y} \big\rbrace$

The dimension of $R(L)$ is called $\underline{\text{rank}}$

Theorem: Let $L: E_n \rightarrow F_m$ with $\dim E_n = n$ and $\dim F_m = m$. Then $\dim E_n = \dim N(L) + \dim R(L)$
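
A sympy illustration of this dimension theorem for an example $2 \times 3$ matrix (values chosen here):

In [ ]:
from sympy import Matrix

# example map L: E_3 -> F_2 represented by a 2 x 3 matrix
L = Matrix([[1, 2, 3],
            [2, 4, 6]])

dim_N = len(L.nullspace())     # dimension of the nullspace N(L)
dim_R = L.rank()               # dimension of the range R(L) (the rank)
print(dim_N, dim_R, dim_N + dim_R)   # 2 1 3 = dim E_3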

Theorem: Two finite-dimensional vector spaces (over the same field) are isomorphic iff they have the same dimension.

Isomorphism: Linear application $L: \begin{matrix} E \rightarrow F \\ \underline{x} \rightarrow y = L \underline{x} \end{matrix} $ s.t.

  1. one-to-one:

    $L(\underline{x}_1) \neq L(\underline{x}_2)$ if $\underline{x}_1 \neq \underline{x}_2 \quad \forall \underline{x}_1, \underline{x}_2 \in E_n$

  2. Onto:

    For any $\underline{y} \in F, \exists\, \underline{x} \in E$ s.t. $L(\underline{x}) = \underline{y}$

    Note that "onto" implies that $R(L)$ (the Range) covers the whole space $F$. But then $\dim R(L) = \dim F = m = n \Rightarrow \dim E_n = n = \dim N(L) + \dim R(L) \Rightarrow \dim N(L) = 0$

TODO: Add figures or python graphics

Linear Transformation and Matrices: Relationships

Let $E_n$ and $F_m$ be two vector spaces on $\mathbb{K}$

Theorem: If $\lbrace \underline{e}_1, \dots, \underline{e}_n \rbrace$ is a basis of $E_n$ and $\lbrace \underline{f}_1, \dots , \underline{f}_m \rbrace$ is a basis of $F_m$, the linear map $L: E_n \rightarrow F_m$ is uniquely determined by the $n$ transformed basis vectors $L\underline{e}_i$. The linear map $L$ is uniquely represented by an $m \times n$ matrix. Conversely, any $(m \times n)$ matrix represents a linear map $L: E_n \rightarrow F_m$

Set the following basis in $E_n$: $\lbrace \underline{e}_1, \dots, \underline{e}_n \rbrace$. Any vector $\underline{x}$ in $E_n$ can be written as $\underline{x} = \displaystyle \sum_{i=1}^n x^i \underline{e}_i$. Its transform under $L$ is: $L \underline{x} = \displaystyle \sum_{i=1}^n x^i \underbrace{L \underline{e}_i}_{\underline{\mathcal{E}}_i} = \sum_{i=1}^n x^i \underline{\mathcal{E}}_i$

$\underline{\mathcal{E}}_i = L \underline{e}_i$ can be represented in the basis $\big\lbrace \underline{f}_1, \dots, \underline{f}_m \big\rbrace$:

$\underline{\mathcal{E}}_i = \displaystyle \sum_{j=1}^m \alpha_i^j\, \underline{f}_j, \quad i = 1, \dots, n \quad \quad \lbrace \alpha_i^j \rbrace_{ \begin{matrix} i\, = 1,\, \dots\,,\, n \\ j\, = 1,\, \dots\,,\, m \end{matrix} }$ is an $m \times n$ matrix!

The matrix $\lbrace \alpha_i^j \rbrace$ uniquely represents the transformation $L$.
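
A numpy sketch of this construction: the $i$-th column of the matrix is $L\underline{e}_i$ expressed in the target basis. The map $L$ below is an example chosen here, and both spaces use their canonical bases:

In [ ]:
import numpy as np

# an example linear map L: R^3 -> R^2 (defined here only for illustration)
def L(x):
    return np.array([x[0] + 2*x[1], 3*x[2] - x[0]])

# build the (2 x 3) matrix column by column from the transformed basis vectors L(e_i)
basis = np.eye(3)
A = np.column_stack([L(e) for e in basis])
print(A)

# the matrix reproduces the map on any vector
x = np.array([1.0, -2.0, 0.5])
print(np.allclose(A @ x, L(x)))   # True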

Application to Linear Systems: Geometric Interpretation

$ \begin{cases} a_1^1 x^1 + \dots + a_n^1 x^n = y^1 \\ \quad \vdots \\ a_1^m x^1 + \dots + a_n^m x^n = y^m \end{cases} \Rightarrow \quad \overbrace{A}^{(m \times n)} \underbrace{\underline{x}}_{(n \times 1)} = \underbrace{\underline{y}}_{(m \times 1)}$ is a linear system.

Problem: Does the System Have a Solution?

If $A$ represents a map $\underline{x} \rightarrow \underline{y}$, a solution can exist if and only if $\underline{y}$ is in the range of $A$, i.e. $\underline{y} \in R(A)$. Since the range of $A$ is spanned by the columns of $A$, this implies that $\underline{y}$ must be expressible as a linear combination of the columns of $A$.

$ \begin{bmatrix} A\underline{e}_1 & A \underline{e}_2 & \dots & A \underline{e}_n \\ \vdots & \vdots & \vdots& \vdots \end{bmatrix} \quad \underline{x} = \underline{y} \Rightarrow x^1 (A \underline{e}_1) + \dots + x^n (A \underline{e}_n) = \underline{y} $

Homogeneous Systems: $A \underline{x} = \underline{0} \rightarrow$ Solutions of $m \times n$ homogeneous systems form a vector space $\rightarrow$ the Null space of the application.

TODO: Insert Figure here

$A \underline{x} = \underline{0}$: if $m = n$ and $\dim R(A) = \operatorname{rank}(A) = n \Rightarrow$

$\Rightarrow \dim E_n = \dim N(A) + \dim R(A) \Rightarrow n = \dim N(A) + n$

$\Rightarrow \dim N(A) = 0 \Rightarrow $ only possible solution for $A \underline{x} = \underline{0}$ is $\underline{x} = \underline{0}$

General Solution for $A \underline{x} = \underline{y}$:

The general solution is the sum of any solution for $A \underline{x} = \underline{y}$ plus the general solution for the homogeneous system $A \underline{x} = \underline{0}$

Indeed: if $\underline{x} = \underline{x}_P + \underline{x}_H$, we have:

$A \underline{x} = A \underline{x}_P + A \underline{x}_H = \underline{y} + \underline{0} = \underline{y}$

thus the solution can be expressed as:

$\underline{x} = \underline{x}_P + \underbrace{c_1 \underline{x}_1 + \dots + c_k \underline{x}_k}_{\text{Fundamental (LI) solutions of } N(A)} \quad \text{where } k = n - \dim R(A)$
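
A sympy sketch for an example underdetermined system (values chosen here), combining a particular solution with the fundamental solutions of $N(A)$:

In [ ]:
from sympy import Matrix, symbols, simplify

# example system A x = y with m = 2 equations and n = 3 unknowns
A = Matrix([[1, 1, 0],
            [0, 1, 1]])
y = Matrix([2, 3])

x_p = A.pinv() * y           # a particular solution of A x = y
null_basis = A.nullspace()   # fundamental (LI) solutions of A x = 0; here k = 3 - 2 = 1

# the general solution x_p + c_1 x_1 still satisfies A x = y
c1 = symbols('c_1')
x = x_p + c1*null_basis[0]
print(simplify(A*x - y))     # zero vector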

Representation of an Operator in different basis

A linear transformation $L: E_n \rightarrow E_n$ is also called an $\underline{Operator}$.

Let ${\lbrace \underline{e}_i \rbrace}_{i = 1, \dots, n}$ and ${\lbrace \underline{e}_{i'} \rbrace}_{i' = 1, \dots, n}$ be two bases of $E_n$. Let $C$ be the $(n \times n)$ matrix representing the change of basis $B \rightarrow B'$. That is:

$\lbrace \underline{e}_1', \dots, \underline{e}_n' \rbrace = \lbrace \underline{e}_1, \dots, \underline{e}_n \rbrace C$

Let $\underline{\underline{A}}$ be the matrix representing a linear operator $L$ in $B$ and, conversely, let $\underline{\underline{A}}'$ be the matrix representing the same operator in $B'$. The following relationship holds:

$ \begin{cases} X' = C^{-1}X \\ X = CX' \end{cases} \text{,}\quad \text{if } Y = \underline{\underline{A}}X \text{ it is also true that } \begin{cases} Y' = C^{-1}Y \\ Y = CY' \end{cases} $

Thus we have:

$ Y = \underline{\underline{A}} X \Rightarrow CY' = \underline{\underline{A}} CX' \Rightarrow Y' = C^{-1} \underline{\underline{A}} C X' = \underline{\underline{A}}' X'$

thus the law of transformation is the following:

$\big\lbrace A' = C^{-1} A C \Longleftrightarrow A = C A'C^{-1}\big\rbrace$

Definition: Two $(n \times n)$ matrices $A, B$ s.t. $A = CBC^{-1}$ where $C$ is an $(n \times n)$ invertible matrix, are said to be $\underline{similar}$

Eigenvalue and Eigenvectors

Definition: For a given linear map $A: E_n \rightarrow E_n$ (operator), an element $\lambda \in \mathbb{K}$ is called an $\underline{eigenvalue}$ if $\exists \, \underline{v} \in E_n, \,\, \underline{v} \neq \underline{0} \,\,$ s.t. $A \underline{v} = \lambda \underline{v}$ where $\underline{v}$ is called the $\underline{eigenvector}$

Theorem: Let $E_\lambda$ be the set of all vectors $\underline{u}$ s.t. $A \underline{u} = \lambda \underline{u}$. The set $E_\lambda$ plus $\underline{0}$ is a vector subspace of $E$. $E_\lambda$ is called the $\underline{eigenspace}$ of $\lambda$

How to find eigen-values (e-values):

Set the basis $\lbrace \underline{e}_i \rbrace_{i=1}^n$ for $E_n$. $A$ is then represented by an $(n \times n)$ matrix $A$. To find the e-values, solve the following:

$A \underline{v} = \lambda \underline{v}$ or $(A - \lambda I) \underline{v} = \underline{0} \rightarrow \text{ homogeneous system of linear equations}.$

The solution is non-trivial iff $\det(A - \lambda I) = 0$

The equation $\det (A - \lambda I) = 0$ is an equation of degree $n$ called the $\underline{\text{characteristic equation}}$, and $\det(A-\lambda I)$ is the $\underline{\text{characteristic polynomial}}$ of $A$, sometimes indicated with $P_A(\lambda)$

Theorem: Similar matrices $A' = C^{-1} A C$ have the same characteristic polynomial, i.e.:

$P_{A'}(\lambda) = P_A(\lambda)$
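
A quick sympy check of this theorem, with an example matrix $A$ and an example invertible $C$ (both chosen here):

In [ ]:
from sympy import Matrix, symbols, factor

lam = symbols('lambda')

# example operator A and example invertible change-of-basis matrix C
A = Matrix([[2, 1], [0, 3]])
C = Matrix([[1, 1], [1, 2]])
A_prime = C.inv() * A * C

# similar matrices share the same characteristic polynomial
print(factor(A.charpoly(lam).as_expr()))        # (lambda - 2)*(lambda - 3)
print(factor(A_prime.charpoly(lam).as_expr()))  # same polynomial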

Finding Eigenvalues and Eigenvectors using Python:

Using the numpy linear algebra library, the eigenvalues and eigenvectors can be determined:


In [3]:
import numpy as np

# example matrix; np.linalg.eig returns the eigenvalues and a matrix
# whose columns are the corresponding (normalized) eigenvectors
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
v, k = np.linalg.eig(A)

print("The eigen values are: {v[0]}, {v[1]}, {v[2]}".format(v=v))


The eigen values are: 16.116843969807043, -1.1168439698070427, -1.3036777264747022e-15

In [21]:
from sympy import Matrix
K = Matrix(k)
display(K)  # use print or pprint in iPython console


$$\left[\begin{matrix}-0.231970687246286 & -0.785830238742067 & 0.408248290463864\\-0.525322093301234 & -0.0867513392566283 & -0.816496580927726\\-0.818673499356181 & 0.61232756022881 & 0.408248290463863\end{matrix}\right]$$

Diagonal form of an operator

Question: Is it possible to choose a basis in $E_n$ s.t. $A: E_n \rightarrow E_n$ is represented by the simplest matrix?

Definition: An operator $A$ is said to be $\underline{diagonalizable}$ if $\exists$ a basis of $E_n$ s.t. the $(n \times n)$ matrix $A$ is diagonal.

Theorem: $A$ is diagonalizable iff it admits $n$ linearly independent e-vectors, i.e. there exists a basis of $E_n$ formed by e-vectors.

Definition: We call $\underline{\text{Geometric Multiplicity}} \text{ (GM)}$ of e-value $\lambda$ for $A$, the dimension of $E(\lambda) = \lbrace \underline{x} \in E | A \underline{x} = \lambda\underline{x} \rbrace$ Since $A \underline{x} = \lambda \underline{x} \Rightarrow GM = \dim N(A - \lambda I)$

Definition: We call $\underline{\text{Algebraic Multiplicity}} \text{ (AM)}$ of e-value $\lambda$ of $A$ the multiplicity of $\lambda$ as a root of the characteristic equation $\det (A - \lambda I) = 0$

Theorem: Let $A$ be an operator $A: E_n \rightarrow E_n$, and let $\lambda_1$ be an e-value. Then:

$GM(\lambda_1) \leq AM(\lambda_1)$

Theorem: Let $A$ be an operator in $E_n$. $A$ is $\underline{diagonalizable}$ iff, for every e-value $\lambda_i$ of $A$:

$GM(\lambda_i) = AM(\lambda_i) \quad \forall \lambda_i$ s.t. $A \underline{x} = \lambda_i \underline{x}$

If $A$ is diagonalizable then $\exists$ a basis of e-vectors such that $A$ is similar to a diagonal matrix:

$\begin{cases} D = C^{-1} A C \\ A = C D C^{-1} \end{cases} \quad$ with the change of coordinates $\underline{x} = C\underline{z}$
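
A sympy sketch contrasting a diagonalizable example with one where $GM < AM$ (both matrices are example values chosen here):

In [ ]:
from sympy import Matrix

# diagonalizable example: GM = AM for every e-value
A = Matrix([[2, 0, 0],
            [0, 3, 4],
            [0, 4, 9]])
C, D = A.diagonalize()       # A = C D C^{-1}
print(D)

# non-diagonalizable example: lambda = 1 has AM = 2 but GM = 1
B = Matrix([[1, 1],
            [0, 1]])
print(B.is_diagonalizable())                  # False
print(len((B - Matrix.eye(2)).nullspace()))   # GM = 1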

Jordan form of a linear operator

Definition: We call a $\underline{\text{Jordan matrix}}$ (Jordan block) of order $n$, a matrix with the structure:

$J_n(\lambda) = \begin{bmatrix} \lambda & 1 & & 0 \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ 0 & & & \lambda \end{bmatrix} \quad$ where $J_n$ is an $n \times n$ matrix with the e-value $\lambda$ on the diagonal and $1$ on the superdiagonal.

Theorem: Let $A$ be a linear operator in $E_n$ and $\displaystyle P_A(\lambda) = \prod_{i=1}^z(\lambda - \lambda_i)^{z_i}$. Then $A$ is similar to a matrix comprising Jordan blocks, i.e. there exists a basis of vectors of $E_n$ (called generalized e-vectors), represented by a matrix $C$, such that the operator $A$ is similar to the following:

$\displaystyle J = \begin{bmatrix} J_{n_1}(\lambda_1) & & 0 \\ & \ddots & \\ 0 & & J_{n_z}(\lambda_z) \end{bmatrix} \quad$ where each $J_{n_i}(\lambda_i)$ is a Jordan block associated with the e-value $\lambda_i$.

The similarity condition is expressed as:

$J = C^{-1} A C \Longleftrightarrow A = CJC^{-1}$
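
A sympy example of the Jordan form for a non-diagonalizable matrix (example values chosen here); `jordan_form()` returns the matrix of generalized e-vectors $C$ and the block matrix $J$:

In [ ]:
from sympy import Matrix, simplify

# example operator with a repeated e-value and too few e-vectors (not diagonalizable)
A = Matrix([[5, 4, 2],
            [0, 1, -1],
            [-1, -1, 3]])
C, J = A.jordan_form()                 # A = C J C^{-1}
print(J)
print(simplify(C.inv() * A * C - J))   # zero matrix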

Theorem: (Cayley - Hamilton)

Definition: For a given polynomial $P(t) = a_0 t^n + a_1 t^{n-1} + \dots + a_{n-1}t + a_n$ and an operator $A$ in $E_n$, we define the $\underline{\text{polynomial of the operator}}$, $p(A)$, as the following operator:

$p(A) = a_0 A^{n} + a_1 A^{n-1} + \dots + a_{n-1} A + a_n I$

Consider the operator $A$ represented by matrix $\underline{\underline{A}}$ in a specified basis $\displaystyle \lbrace \underline{e}_i \rbrace_{i=1}^n$. Then the following is true:

$P_A(\underline{\underline{A}}) = \underline{\underline{0}} \text{ where } P_A(\lambda) = \det (\underline{\underline{A}} - \lambda \underline{\underline{I}})$
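
A short sympy check of the Cayley-Hamilton statement for an example $2 \times 2$ matrix (values chosen here):

In [ ]:
from sympy import Matrix, symbols, eye

lam = symbols('lambda')

# example matrix and its characteristic polynomial
A = Matrix([[1, 2], [3, 4]])
p = (A - lam*eye(2)).det()
print(p.expand())                 # lambda**2 - 5*lambda - 2

# evaluate the polynomial on the operator itself: A^2 - 5 A - 2 I = 0
print(A**2 - 5*A - 2*eye(2))      # zero matrix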

Euclidean Vector Space

Let $E$ be a vector space on $\mathbb{R}$.

Definition: We define the scalar product or inner product of $E$ as a bilinear transformation: $g: E \times E \rightarrow \mathbb{R}$ s.t. the following properties are true:

  1. $(\underline{u}, \underline{v}) = (\underline{v}, \underline{u}), \quad \forall \underline{u}, \underline{v} \in E$ (Commutative)
  2. $(\alpha \underline{u}, \underline{v}) = \alpha(\underline{u}, \underline{v}) = (\underline{u}, \alpha \underline{v}), \quad \forall \underline{u}, \underline{v} \in E, \quad \forall \alpha \in \mathbb{R}$
  3. $(\underline{u}, \underline{v} + \underline{w}) = (\underline{u}, \underline{v}) + (\underline{u}, \underline{w}), \quad \forall \underline{u}, \underline{v}, \underline{w} \in E$
  4. $(\underline{u}, \underline{u}) \geq 0, \quad\forall \underline{u} \in E$ and if $(\underline{u}, \underline{u}) = 0 \Rightarrow \underline{u} = \underline{0}$

Basically $ g(\underline{u}, \underline{v}) = \underline{u} \cdot \underline{v}$ is a bilinear form on $E \times E$ that is symmetric and positive definite.

Definition: A vector space $E$ is said to be $\underline{Euclidean}$ if a scalar/inner product is defined on it.

Definition: Two vectors in $E$, $\underline{u}, \underline{v}$ are said to be orthogonal iff $(\underline{u}, \underline{v}) = \underline{u} \cdot \underline{v} = 0$

Definition: We define the $\underline{norm}$ of a vector $\underline{u} \in E$ as $||\underline{u}|| = \sqrt{(\underline{u}, \underline{u})} = \sqrt{\underline{u} \cdot \underline{u}}$

Definition: A set of vectors $\lbrace \underline{e}_i \rbrace_{i=1}^n$ is said to be orthonormal iff:

$(\underline{e}_i, \underline{e}_k) = \delta_{ik} = \begin{cases} 1, & \text{if } i=k\\0, & \text{if } i \neq k\end{cases}$
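
A numpy sketch: QR factorization orthonormalizes a set of example LI vectors (values chosen here), and the condition $(\underline{e}_i, \underline{e}_k) = \delta_{ik}$ becomes $Q^T Q = I$:

In [ ]:
import numpy as np

# columns are three example LI vectors of R^3
V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# QR factorization orthonormalizes the columns (Gram-Schmidt-like procedure)
Q, R = np.linalg.qr(V)

# (e_i, e_k) = delta_ik  <=>  Q^T Q = I
print(np.allclose(Q.T @ Q, np.eye(3)))   # True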

Adjoint Operators

Note: A vector space on $\mathbb{C}$ is said to be a Hilbert space if there is an inner product $g: E \times E \rightarrow \mathbb{C}$ that satisfies the following properties:

  1. $(\underline{u}, \underline{v}) = \overline{(\underline{v}, \underline{u})} \quad \forall \underline{u}, \underline{v} \in E$
  2. $(\lambda \underline{u} + \mu \underline{v}, \underline{w}) = \lambda (\underline{u}, \underline{w}) + \mu (\underline{v}, \underline{w}) \quad \forall \underline{u}, \underline{v} \in E, \quad \forall \lambda, \mu \in C$
  3. $(\underline{u}, \underline{u}) \geq 0 \text{ and } (\underline{u}, \underline{u}) = 0 \Longleftrightarrow \underline{u} = \underline{0}$

Consider now a Hilbert (Euclidean) space equipped with inner product.

Definition: We call $A^*$ adjoint operator of $A$, an operator that satisfies the following property:

$(A \underline{x}, \underline{y}) = (\underline{x}, A^*\underline{y}), \quad \forall \underline{x}, \underline{y} \in E$

An important class of operators is the $\underline{\text{self-adjoint}}$ operators, i.e. operators s.t. $A = A^*$:

$(A\underline{x}, \underline{y}) = (\underline{x}, A \underline{y})$

Note that if the space $E$ is Euclidean, then $A^* = A^T$ and $(A\underline{x},\underline{y}) = (\underline{x}, A^T\underline{y})$

In the Euclidean (real) case, self-adjoint operators are therefore $\underline{Symmetric}$: $A=A^T$
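
A quick numerical check of the defining property $(A\underline{x}, \underline{y}) = (\underline{x}, A^T\underline{y})$ for example values chosen here:

In [ ]:
import numpy as np

# example matrix and example vectors
A = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([1.0, -1.0])
y = np.array([0.5, 2.0])

# (A x, y) == (x, A^T y)
print(np.isclose((A @ x) @ y, x @ (A.T @ y)))   # True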

Spectral Theorem

A symmetric operator $A=A^T$ is always diagonalizable. Its e-values are all real and the e-vectors are orthogonal, i.e. $\exists$ a matrix $C$ made of e-vectors, $C = \big[\underline{v}_1, \dots, \underline{v}_n\big]$, s.t. the operator $A$ is similar to a diagonal operator (it can be represented by a diagonal matrix in the basis of e-vectors).

For $A\underline{v} = \lambda \underline{v}$ we can consider a transformation $\underline{z} = M\underline{v}$ (i.e. $\underline{v} = M^{-1}\underline{z}$) s.t.

$AM^{-1}\underline{z} = \lambda M^{-1}\underline{z} \Rightarrow MAM^{-1}\underline{z} = \lambda MM^{-1}\underline{z} \Rightarrow D\underline{z} = \lambda \underline{z}$

OR

$\big\lbrace MAM^{-1} = D \Longleftrightarrow A = M^{-1}DM \big\rbrace \rightarrow \text{ similarity condition}$
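
A numpy illustration of the spectral theorem for an example symmetric matrix (values chosen here): the e-values are real, the e-vector matrix $C$ is orthogonal, and $C^T A C$ is diagonal:

In [ ]:
import numpy as np

# example symmetric operator A = A^T
A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])

# eigh is specialised for symmetric matrices: real e-values, orthonormal e-vectors
w, C = np.linalg.eigh(A)
print(w)                                  # real eigenvalues
print(np.allclose(C.T @ C, np.eye(3)))    # e-vectors are orthonormal: True

# similarity to a diagonal matrix: C^{-1} A C = C^T A C = D
D = C.T @ A @ C
print(np.allclose(D, np.diag(w)))         # True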