Singular Value Decomposition - Perturbation Theory

Prerequisites

The reader should be familiar with eigenvalue decomposition, singular value decomposition, and perturbation theory for the eigenvalue decomposition.

Competences

The reader should be able to understand and verify the facts about perturbations of singular values and singular vectors.

Perturbation bounds

For more details and the proofs of the Facts below, see R.-C. Li, Matrix Perturbation Theory, and the references therein.

Definitions

Let $A\in\mathbb{C}^{m\times n}$ and let $A=U\Sigma V^*$ be its SVD.

For a matrix $B\in\mathbb{C}^{m\times n}$, the set of its singular values is $sv(B)=\{\sigma_1,\sigma_2,\ldots\}$, where $\sigma_1\geq \sigma_2\geq \cdots\geq 0$. Let $sv_{ext}(B)=sv(B)$ if $m\leq n$, and $sv_{ext}(B)=sv(B)\cup \{0,\ldots,0\}$ ($m-n$ additional zeros) if $m>n$.

A triplet $(u,\sigma,v)\in\mathbb{C}^{m}\times\mathbb{R}\times\mathbb{C}^{n}$ is a singular triplet of $A$ if $\|u\|_2=1$, $\|v\|_2=1$, $\sigma\geq 0$, $Av=\sigma u$, and $A^*u=\sigma v$.
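This definition is easy to verify numerically. A minimal sketch (the random matrix, seed, and dimensions are our own choices): the leading columns of the SVD factors satisfy the defining relations.

```julia
using LinearAlgebra, Random
Random.seed!(497)
A = rand(5, 4)
F = svd(A)
# The leading left/right singular vectors and the largest singular value
u, σ₁, v = F.U[:, 1], F.S[1], F.V[:, 1]
# (u, σ₁, v) is a singular triplet: Av = σu and A*u = σv
norm(A * v - σ₁ * u), norm(A' * u - σ₁ * v)
```

Both residual norms are at the level of machine precision.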

$\tilde A=A+\Delta A$ is the perturbed matrix, where $\Delta A$ is the perturbation. The same notation is used for $\tilde A$, with all symbols carrying tildes.

The spectral condition number of $A$ is $\kappa_2(A)=\sigma_{\max}(A)/ \sigma_{\min}(A)$.

Let $X,Y\in\mathbb{C}^{n\times k}$ with $\mathop{\mathrm{rank}}(X)=\mathop{\mathrm{rank}}(Y)=k$. The canonical angles between their column spaces are $\theta_i=\cos^{-1}\sigma_i$, where $\sigma_i$ are the singular values of $(Y^*Y)^{-1/2}Y^*X(X^*X)^{-1/2}$. The canonical angle matrix between $X$ and $Y$ is

$$\Theta(X,Y)=\mathop{\mathrm{diag}}(\theta_1,\theta_2,\ldots,\theta_k). $$
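The canonical angles can be computed directly from this definition. The following sketch (the helper name `canonical_angles` is ours) uses orthonormal bases from QR factorizations, which makes the factors $(X^*X)^{-1/2}$ and $(Y^*Y)^{-1/2}$ unnecessary:

```julia
using LinearAlgebra

# Canonical angles between the column spaces of X and Y:
# with orthonormal bases Qx and Qy, the cosines of the angles are the
# singular values of Qy' * Qx, equivalent to the formula above.
function canonical_angles(X, Y)
    Qx = Matrix(qr(X).Q)
    Qy = Matrix(qr(Y).Q)
    c = svdvals(Qy' * Qx)
    acos.(clamp.(c, 0.0, 1.0))  # clamp guards against rounding just above 1
end

# span{e₁,e₂} vs. span{e₁,e₃}: the angles are 0 and π/2
X = [1.0 0; 0 1; 0 0]
Y = [1.0 0; 0 0; 0 1]
θ = canonical_angles(X, Y)
```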

Facts

  1. Mirsky Theorem. $\|\Sigma-\tilde\Sigma\|_2\leq \|\Delta A\|_2$ and $\|\Sigma-\tilde\Sigma\|_F\leq \|\Delta A\|_F$.

  2. Residual bounds. Let $\|\tilde u\|_2=\|\tilde v\|_2=1$ and $\tilde \mu=\tilde u^* A \tilde v$. Let residuals $r=A\tilde v-\tilde \mu \tilde u$ and $s=A^*\tilde u - \tilde \mu \tilde v$, and let $\varepsilon=\max\{\|r\|_2,\|s\|_2\}$. Then $|\tilde \mu -\mu|\leq \varepsilon$ for some singular value $\mu$ of $A$.

  3. The smallest error matrix $\Delta A$ for which $(\tilde u, \tilde \mu, \tilde v)$ is a singular triplet of $\tilde A$ satisfies $\| \Delta A\|_2=\varepsilon$.

  4. Let $\mu$ be the closest singular value in $sv_{ext}(A)$ to $\tilde \mu$ and $(u,\mu,v)$ be the associated singular triplet, and let $$\eta=\mathop{\mathrm{gap}}(\tilde\mu)= \min_{\mu\neq\sigma\in sv_{ext}(A)}|\tilde\mu-\sigma|.$$ If $\eta>0$, then \begin{align*} |\tilde\mu-\mu |&\leq \frac{\varepsilon^2}{\eta},\\ \sqrt{\sin^2\theta(u,\tilde u)+ \sin^2\theta(v,\tilde v)} & \leq \frac{\sqrt{\|r\|_2^2 + \|s\|_2^2}}{\eta}. \end{align*}

  5. Let $$ A=\begin{bmatrix} M & E \\ F & H \end{bmatrix}, \quad \tilde A=\begin{bmatrix} M & 0 \\ 0 & H \end{bmatrix}, $$ where $M\in\mathbb{C}^{k\times k}$, and set $\eta=\min |\mu-\nu|$ over all $\mu\in sv(M)$ and $\nu\in sv_{ext}(H)$, and $\varepsilon =\max \{ \|E\|_2,\|F\|_2 \}$. Then $$ \max |\sigma_j -\tilde\sigma_j| \leq \frac{2\varepsilon^2}{\eta+\sqrt{\eta^2+4\varepsilon^2}}. $$

  6. Let $m\geq n$ and let $$ \begin{bmatrix} U_1^*\\ U_2^* \end{bmatrix} A \begin{bmatrix} V_1 & V_2 \end{bmatrix}= \begin{bmatrix} A_1 & \\ & A_2 \end{bmatrix}, \quad \begin{bmatrix} \tilde U_1^*\\ \tilde U_2^* \end{bmatrix} \tilde A \begin{bmatrix} \tilde V_1 & \tilde V_2 \end{bmatrix}= \begin{bmatrix} \tilde A_1 & \\ & \tilde A_2 \end{bmatrix}, $$ where $\begin{bmatrix} U_1 & U_2 \end{bmatrix}$, $\begin{bmatrix} V_1 & V_2 \end{bmatrix}$, $\begin{bmatrix} \tilde U_1 & \tilde U_2 \end{bmatrix}$, and $\begin{bmatrix} \tilde V_1 & \tilde V_2 \end{bmatrix}$ are unitary, and $U_1,\tilde U_1\in \mathbb{C}^{m\times k}$, $V_1,\tilde V_1\in \mathbb{C}^{n\times k}$. Set $$ R=A\tilde V_1-\tilde U_1\tilde A_1,\quad S=A^*\tilde U_1-\tilde V_1 \tilde A_1. $$ Let $\eta=\min|\tilde \mu-\nu|$ over all $\tilde \mu\in sv(\tilde A_1)$ and $\nu\in sv_{ext}(A_2)$. If $\eta > 0$, then $$ \sqrt{\|\sin\Theta(U_1,\tilde U_1)\|_F^2 + \|\sin\Theta(V_1,\tilde V_1)\|_F^2} \leq \frac{\sqrt{\|R\|_F^2 + \|S\|_F^2 }}{\eta}. $$
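Fact 6 can be illustrated numerically. The following sketch (our own construction, separate from the example below) takes $\tilde A$ to be a small random perturbation of $A$, forms the residuals, and compares both sides of the bound:

```julia
using LinearAlgebra, Random
Random.seed!(431)
m, n, k = 8, 5, 2
A = float(rand(-9:9, m, n))
B = A + rand(m, n) / 100      # the perturbed matrix Ã
FA, FB = svd(A), svd(B)
U1, V1 = FA.U[:, 1:k], FA.V[:, 1:k]
tU1, tV1 = FB.U[:, 1:k], FB.V[:, 1:k]
tA1 = Diagonal(FB.S[1:k])
# Residuals from Fact 6
R = A * tV1 - tU1 * tA1
S = A' * tU1 - tV1 * tA1
# Gap between sv(Ã₁) and sv_ext(A₂): trailing singular values of A plus m-n zeros
svextA2 = [FA.S[k+1:end]; zeros(m - n)]
η = minimum(abs(μ - ν) for μ in FB.S[1:k], ν in svextA2)
# ‖sin Θ‖_F from the cosines of the canonical angles
sinΘF(Q1, Q2) = sqrt(sum(1 .- clamp.(svdvals(Q2' * Q1), 0, 1).^2))
lhs = sqrt(sinΘF(U1, tU1)^2 + sinΘF(V1, tV1)^2)
rhs = sqrt(norm(R)^2 + norm(S)^2) / η
lhs, rhs
```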

Example


In [1]:
import Random
Random.seed!(421)
m=8
n=5
k=min(m,n)
A=rand(-9:9,m,n)


Out[1]:
8×5 Array{Int64,2}:
  9  -8  -6  -8   6
 -7  -3  -3  -5   1
 -6   5   4  -4   9
 -6  -9   9   4   2
  6  -3   8   2  -9
 -5   3   4  -9   9
  5   4   6  -6   4
 -3  -1   5   4   3

In [2]:
ΔA=rand(m,n)/100
B=A+ΔA


Out[2]:
8×5 Array{Float64,2}:
  9.00708  -7.99113   -5.9908   -7.9925    6.00039
 -6.99401  -2.99656   -2.99585  -4.99217   1.00968
 -5.99831   5.00369    4.00099  -3.99453   9.00176
 -5.99876  -8.99785    9.0021    4.00802   2.00052
  6.00337  -2.99211    8.00542   2.00362  -8.99755
 -4.99083   3.00391    4.00915  -8.99509   9.00952
  5.00833   4.0062     6.00446  -5.99618   4.00705
 -2.99155  -0.997366   5.00548   4.00731   3.00406

In [3]:
using LinearAlgebra
U,σ,V=svd(A)
U₁,σ₁,V₁=svd(B)


Out[3]:
SVD{Float64,Float64,Array{Float64,2}}
U factor:
8×5 Array{Float64,2}:
  0.149248      0.696477    -0.450212   0.451151   -0.196549
  0.224144     -0.00330314   0.208539   0.409252    0.674625
  0.536105     -0.224513     0.01025   -0.167696   -0.145676
  0.000779566  -0.559597    -0.415961   0.562063   -0.0795517
 -0.440244     -0.137175    -0.551596  -0.216061    0.463066
  0.619353     -0.0773222   -0.165858  -0.085511    0.287698
  0.24481       0.0940152   -0.484513  -0.473739    0.0171393
  0.0522446    -0.343101    -0.119043   0.0807253  -0.425683
singular values:
5-element Array{Float64,1}:
 22.714973658969672
 19.43948922504952
 14.748139691631417
 13.979222590956905
  7.366312473695588
Vt factor:
5×5 Array{Float64,2}:
 -0.356946   0.21649     0.0561989  -0.535417    0.732047
  0.620365  -0.0384243  -0.651221   -0.432837    0.0472716
 -0.517625   0.41339    -0.692762    0.248441   -0.139755
 -0.332565  -0.881079   -0.289929    0.0553775   0.161166
 -0.330458  -0.0668079   0.0936769  -0.679111   -0.645265

In [4]:
# Mirsky's Theorem
maximum(abs,σ-σ₁),opnorm(Diagonal(σ)-Diagonal(σ₁)),
opnorm(ΔA), norm(σ-σ₁), norm(ΔA)


Out[4]:
(0.00799982856752468, 0.00799982856752468, 0.03472318158204487, 0.010462619462297785, 0.03749141691124676)

In [5]:
# Residual bounds - how close is (x,ζ,y) to (U[:,j],σ[j],V[:,j])
j=rand(2:k-1)
x=round.(U[:,j],digits=3)
y=round.(V[:,j],digits=3)
x=x/norm(x)
y=y/norm(y)
ζ=(x'*A*y)[]
σ, j, ζ


Out[5]:
([22.71443872538939, 19.439175375504053, 14.741722158615167, 13.981197372822301, 7.374312302263113], 4, 13.981196477128567)

In [6]:
# Fact 2
r=A*y-ζ*x
s=A'*x-ζ*y
ϵ=max(norm(r),norm(s))


Out[6]:
0.010564193333948667

In [7]:
minimum(abs,σ.-ζ), ϵ


Out[7]:
(8.956937342929905e-7, 0.010564193333948667)

In [8]:
# Fact 4
η=min(abs(ζ-σ[j-1]),abs(ζ-σ[j+1]))


Out[8]:
0.7605256814865999

In [9]:
ζ-σ[j], ϵ^2/η


Out[9]:
(-8.956937342929905e-7, 0.0001467434743017443)

In [10]:
# Eigenvector bound
# cos(θ)
cosθU=dot(x,U[:,j])
cosθV=dot(y,V[:,j])
# Bound
sqrt(1-cosθU^2+1-cosθV^2), sqrt(norm(r)^2+norm(s)^2)/η


Out[10]:
(0.0007359748346531112, 0.014416518399856987)

In [11]:
# Fact 5 - we create small off-diagonal block perturbation
j=3
M=A[1:j,1:j]
H=A[j+1:m,j+1:n]
B=cat(M,H,dims=(1,2))


Out[11]:
8×5 Array{Int64,2}:
  9  -8  -6   0   0
 -7  -3  -3   0   0
 -6   5   4   0   0
  0   0   0   4   2
  0   0   0   2  -9
  0   0   0  -9   9
  0   0   0  -6   4
  0   0   0   4   3

In [12]:
E=rand(Float64,size(A[1:j,j+1:n]))/100
F=rand(Float64,size(A[j+1:m,1:j]))/100
C=map(Float64,B)
C[1:j,j+1:n]=E
C[j+1:m,1:j]=F
C


Out[12]:
8×5 Array{Float64,2}:
  9.0         -8.0          -6.0          0.00521644   0.00229319
 -7.0         -3.0          -3.0          0.00368332   0.00264824
 -6.0          5.0           4.0          0.00130663   0.00403048
  0.00541968   0.00523952    0.0094073    4.0          2.0
  0.00584898   0.00697791    0.00233598   2.0         -9.0
  0.00916926   0.00831539    0.00563637  -9.0          9.0
  0.00512033   0.000574522   0.00351357  -6.0          4.0
  0.00438309   0.000464975   0.00187312   4.0          3.0

In [13]:
svdvals(M)


Out[13]:
3-element Array{Float64,1}:
 16.166941117157883
  7.9748473893074925
  0.17839291355117629

In [14]:
svdvals(H)'


Out[14]:
1×2 Adjoint{Float64,Array{Float64,1}}:
 16.6354  8.20136

In [15]:
svdvals(M).-svdvals(H)'


Out[15]:
3×2 Array{Float64,2}:
  -0.468496   7.96559
  -8.66059   -0.226508
 -16.457     -8.02296

In [16]:
ϵ=max(norm(E), norm(F))
β=svdvals(B)
γ=svdvals(C)
η=minimum(abs,svdvals(M).-svdvals(H)')
display([β γ])
maximum(abs,β-γ), 2*ϵ^2/(η+sqrt(η^2+4*ϵ^2))


5×2 Array{Float64,2}:
 16.6354    16.6354
 16.1669    16.1669
  8.20136    8.20138
  7.97485    7.97485
  0.178393   0.178431
Out[16]:
(3.7713327345700876e-5, 0.002101834711812092)

Relative perturbation theory

Definitions

Matrix $A\in\mathbb{C}^{m\times n}$ is multiplicatively perturbed to $\tilde A$ if $\tilde A=D_L^* A D_R$ for some $D_L\in\mathbb{C}^{m\times m}$ and $D_R\in\mathbb{C}^{n\times n}$.

Matrix $A$ is (highly) graded if it can be scaled as $A=GS$ such that $\kappa_2(G)$ is of modest magnitude. The scaling matrix $S$ is often diagonal. Interesting cases are when $\kappa_2(G)\ll \kappa_2(A)$.

Relative distances between two complex numbers $\alpha$ and $\tilde \alpha$ are:

\begin{align*} \zeta(\alpha,\tilde \alpha)&=\frac{|\alpha-\tilde\alpha|}{\sqrt{|\alpha\tilde \alpha|}}, \quad \textrm{for } \alpha\tilde\alpha\neq 0,\\ \varrho(\alpha,\tilde \alpha)&=\frac{|\alpha-\tilde\alpha|} {\sqrt{|\alpha|^2 + |\tilde \alpha|^2}}, \quad \textrm{for } |\alpha|+|\tilde\alpha|> 0. \end{align*}
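These distances are straightforward to compute; a small sketch (the helper names `ζdist` and `ϱdist` are ours):

```julia
# Relative distances from the definitions above
ζdist(α, α̃) = abs(α - α̃) / sqrt(abs(α * α̃))
ϱdist(α, α̃) = abs(α - α̃) / sqrt(abs2(α) + abs2(α̃))

# Note that ϱ is also defined when one of the arguments is zero
ζdist(100.0, 100.1), ϱdist(100.0, 100.1), ϱdist(0.0, 1e-10)
```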

Facts

  1. If $D_L$ and $D_R$ are non-singular and $m\geq n$, then \begin{align*} \frac{\sigma_j}{\|D_L^{-1}\|_2\|D_R^{-1}\|_2}& \leq \tilde\sigma_j \leq \sigma_j \|D_L\|_2\|D_R\|_2, \quad \textrm{for } j=1,\ldots,n, \\ \| \mathop{\mathrm{diag}}(\zeta(\sigma_1,\tilde \sigma_1),\ldots, \zeta(\sigma_n,\tilde \sigma_n))\|_{2,F} & \leq \frac{1}{2}\|D_L^*-D_L^{-1}\|_{2,F} + \frac{1}{2}\|D_R^*-D_R^{-1}\|_{2,F}. \end{align*}

  2. Let $m\geq n$ and let $$ \begin{bmatrix} U_1^*\\ U_2^* \end{bmatrix} A \begin{bmatrix} V_1 & V_2 \end{bmatrix}= \begin{bmatrix} A_1 & \\ & A_2 \end{bmatrix}, \quad \begin{bmatrix} \tilde U_1^*\\ \tilde U_2^* \end{bmatrix} \tilde A \begin{bmatrix} \tilde V_1 & \tilde V_2 \end{bmatrix}= \begin{bmatrix} \tilde A_1 & \\ & \tilde A_2 \end{bmatrix}, $$ where $\begin{bmatrix} U_1 & U_2 \end{bmatrix}$, $\begin{bmatrix} V_1 & V_2 \end{bmatrix}$, $\begin{bmatrix} \tilde U_1 & \tilde U_2 \end{bmatrix}$, and $\begin{bmatrix} \tilde V_1 & \tilde V_2 \end{bmatrix}$ are unitary, and $U_1,\tilde U_1\in \mathbb{C}^{m\times k}$, $V_1,\tilde V_1\in \mathbb{C}^{n\times k}$. Set $$ R=A\tilde V_1-\tilde U_1\tilde A_1,\quad S=A^*\tilde U_1-\tilde V_1 \tilde A_1. $$ Let $\eta=\min \varrho(\mu,\tilde \mu)$ over all $\mu\in sv(A_1)$ and $\tilde \mu\in sv_{ext}(A_2)$. If $\eta > 0$, then \begin{align*} & \sqrt{\|\sin\Theta(U_1,\tilde U_1)\|_F^2 + \|\sin\Theta(V_1,\tilde V_1)\|_F^2} \\ & \leq \frac{1}{\eta}( \|(I-D_L^*)U_1\|_F^2+ \|(I-D_L^{-1})U_1\|_F^2 \\ & \quad +\|(I-D_R^*)V_1\|_F^2+ \|(I-D_R^{-1})V_1\|_F^2 )^{1/2}. \end{align*}

  3. Let $A=GS$ and $\tilde A=\tilde GS$, and let $\Delta G=\tilde G-G$. Then $\tilde A=DA$, where $D=I+(\Delta G) G^{\dagger}$, and Fact 1 applies with $D_L=D$, $D_R=I$, and $$ \|D^*-D^{-1}\|_{2,F} \leq \bigg(1+\frac{1}{1-\|(\Delta G) G^{\dagger}\|_{2}}\bigg) \frac{\|(\Delta G) G^{\dagger}\|_{2,F}}{2}. $$ According to the notebook on Jacobi Method and High Relative Accuracy, nearly optimal diagonal scaling is such that all columns of $G$ have unit norms, $S=\mathop{\mathrm{diag}} \big( \| A_{:,1}\|_2,\ldots,\|A_{:,n}\|_2 \big)$.

  4. Let $A$ be an real upper-bidiagonal matrix with diagonal entries $a_1,a_2,\ldots,a_n$ and the super-diagonal entries $b_1,b_2, \ldots,b_{n-1}$. Let the diagonal entries of $\tilde A$ be $\alpha_1 a_1,\alpha_2 a_2,\ldots,\alpha_n a_n$, and its super-diagonal entries be $\beta_1 b_1,\beta_2 b_2,\ldots,\beta_{n-1} b_{n-1}$. Then $\tilde A=D_L^* A D_R$ with \begin{align*} D_L &=\mathop{\mathrm{diag}} \bigg(\alpha_1,\frac{\alpha_1 \alpha_2}{\beta_1}, \frac{\alpha_1 \alpha_2 \alpha_3}{\beta_1 \beta_2},\cdots\bigg),\\ D_R &=\mathop{\mathrm{diag}} \bigg(1, \frac{\beta_1}{\alpha_1}, \frac{\beta_1 \beta_2}{\alpha_1 \alpha_2},\cdots\bigg). \end{align*} Let $\alpha=\prod\limits_{j=1}^n \max\{\alpha_j, 1/\alpha_j\}$ and $\beta=\prod\limits_{j=1}^{n-1} \max\{\beta_j, 1/\beta_j\}$. Then $$ (\alpha\beta)^{-1}\leq \| D_L^{-1}\|_2 \|D_R^{-1}\|_2 \leq \| D_L\|_2 \|D_R\|_2 \leq \alpha\beta, $$ and Fact 1 applies. This is a result by Demmel and Kahan.

  5. Consider the block partitioned matrices \begin{align*} A & =\begin{bmatrix} B & C \\ 0 & D\end{bmatrix}, \\ \tilde A & = \begin{bmatrix} B & 0 \\ 0 & D\end{bmatrix} =A \begin{bmatrix} I & -B^{-1} C \\ 0 & I \end{bmatrix}\equiv A D_R. \end{align*} By Fact 1, $\zeta(\sigma_j,\tilde \sigma_j) \leq \frac{1}{2} \|B^{-1}C\|_2$. This is used as a deflation criterion in the SVD algorithm for bidiagonal matrices.
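Fact 1 can be checked numerically. A sketch (our own construction, with $D_L$ and $D_R$ taken as random perturbations of the identity):

```julia
using LinearAlgebra, Random
Random.seed!(467)
m, n = 8, 5
A = rand(m, n)
# D_L and D_R close to the identity
DL = I + (rand(m, m) .- 0.5) * 1e-5
DR = I + (rand(n, n) .- 0.5) * 1e-5
B = DL' * A * DR
σ, μ = svdvals(A), svdvals(B)
# First inequality of Fact 1
ok = all(σ ./ (opnorm(inv(DL)) * opnorm(inv(DR))) .≤ μ .≤ σ .* (opnorm(DL) * opnorm(DR)))
# Second inequality (spectral-norm version)
ζvals = abs.(σ - μ) ./ sqrt.(σ .* μ)
bound = (opnorm(DL' - inv(DL)) + opnorm(DR' - inv(DR))) / 2
ok, maximum(ζvals), bound
```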
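To illustrate the scaling in Fact 3, consider a matrix whose columns have widely varying norms (our own toy construction). Scaling by the column norms gives $A=GS$ with $\kappa_2(G)\ll\kappa_2(A)$:

```julia
using LinearAlgebra, Random
Random.seed!(541)
n = 6
# A graded matrix: random columns scaled by widely varying factors
A = rand(n, n) * Diagonal(10.0 .^ (0:-3:-15))
# Nearly optimal diagonal scaling: the columns of G have unit norms
S = Diagonal([norm(A[:, j]) for j = 1:n])
G = A / S          # A = G * S
cond(G), cond(A)   # κ₂(G) is modest, κ₂(A) is huge
```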
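A small sketch of the deflation criterion in Fact 5 (the block matrices are our own toy example). Zeroing the off-diagonal block $C$ changes the singular values only by a relative amount of at most $\frac{1}{2}\|B^{-1}C\|_2$:

```julia
using LinearAlgebra
B = [4.0 1.0; 0.0 3.0]
C = 1e-8 * ones(2, 2)
D = [2.0 0.5; 0.0 1.0]
A  = [B C; zeros(2, 2) D]        # block upper-triangular
Ã  = [B zeros(2, 2); zeros(2, 2) D]  # off-diagonal block deflated
σ, σ̃ = svdvals(A), svdvals(Ã)
# Relative distances ζ(σⱼ, σ̃ⱼ) versus the bound ½‖B⁻¹C‖₂
ζvals = abs.(σ - σ̃) ./ sqrt.(σ .* σ̃)
maximum(ζvals), opnorm(B \ C) / 2
```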

Example - Bidiagonal matrix

In order to illustrate Facts 1 to 3, we need an algorithm which computes the singular values with high relative accuracy. Such an algorithm, the one-sided Jacobi method, is discussed in the notebook on Jacobi and Lanczos Methods.

The algorithm actually used in the function svdvals() for Bidiagonal is the zero-shift bidiagonal QR algorithm, which attains the accuracy given by Fact 4: if $1-\varepsilon \leq \alpha_i,\beta_j \leq 1+\varepsilon$ for all $i$ and $j$, then $$ (1-\varepsilon)^{2n-1} \leq (\alpha\beta)^{-1} \leq \alpha\beta \leq (1-\varepsilon)^{-(2n-1)}. $$ In other words, relative changes of order $\varepsilon$ in the diagonal and super-diagonal elements cause relative changes of at most about $(2n-1)\varepsilon$ in the singular values.

However, if singular values and vectors are desired, the function svd() calls the standard algorithm, described in the next notebook, which does not attain this accuracy.


In [17]:
Random.seed!(135)
n=50
δ=100000
# The starting matrix
a=exp.(50*(rand(n).-0.5))
b=exp.(50*(rand(n-1).-0.5))
A=Bidiagonal(a,b,'U')
# Multiplicative perturbation
DL=ones(n)+(rand(n).-0.5)/δ
DR=ones(n)+(rand(n).-0.5)/δ
# The perturbed matrix
α=DL.*a.*DR
β=DL[1:end-1].*b.*DR[2:end]
B=Bidiagonal(α,β,'U')
# B-Diagonal(DL)*A*Diagonal(DR)
(A.dv-B.dv)./A.dv


Out[17]:
50-element Array{Float64,1}:
 -5.852262815675105e-6
 -4.432971830256208e-6
  9.499233270831072e-7
  4.567575931093151e-6
  2.770326572618551e-6
 -4.92171847733396e-7
  5.266957243325909e-6
  5.025519572543715e-6
  5.129129842939982e-6
 -8.441270187290216e-6
 -1.4503440775795408e-6
 -6.554283522405492e-6
 -3.3632431352673292e-6
  ⋮
 -5.3786193925786096e-6
 -7.616636534367934e-7
  1.9783011184963243e-6
  2.4010262830165367e-8
  4.6799063473791355e-6
 -1.4899813743325476e-6
  1.954030295660262e-6
  2.474624635502817e-6
 -6.348705399626838e-6
 -4.658776693320952e-6
 -2.6292527550221562e-8
 -3.79509939253771e-6

In [18]:
cond(A)


Out[18]:
5.011206458136967e52

In [19]:
(a-α)./a, (b-β)./b


Out[19]:
([-5.852262815675105e-6, -4.432971830256208e-6, 9.499233270831072e-7, 4.567575931093151e-6, 2.770326572618551e-6, -4.92171847733396e-7, 5.266957243325909e-6, 5.025519572543715e-6, 5.129129842939982e-6, -8.441270187290216e-6  …  1.9783011184963243e-6, 2.4010262830165367e-8, 4.6799063473791355e-6, -1.4899813743325476e-6, 1.954030295660262e-6, 2.474624635502817e-6, -6.348705399626838e-6, -4.658776693320952e-6, -2.6292527550221562e-8, -3.79509939253771e-6], [-6.708070137549648e-6, -6.705622242219345e-8, 5.0597840777682445e-6, 1.77341001219051e-6, 4.4799783729450925e-6, -1.306160026031243e-6, 5.397290559204566e-6, 5.846980913854804e-6, -2.23905465453556e-6, -5.02853017915449e-6  …  1.9548430888157477e-6, -4.048182171195338e-7, 5.9021178617452265e-6, -9.020255874414939e-7, 6.99460777191916e-6, 6.823528776892489e-8, -5.1238063340278575e-6, -5.401524825650969e-6, -1.2326342751066054e-6, -3.2077499709355445e-6])

In [20]:
@which svdvals(A)


Out[20]:
svdvals(A::AbstractArray{#s664,2} where #s664<:Union{Complex{Float32}, Complex{Float64}, Float32, Float64}) in LinearAlgebra at C:\Users\Ivan_Slapnicar\AppData\Local\Programs\Julia\Julia-1.4.1\share\julia\stdlib\v1.4\LinearAlgebra\src\svd.jl:194

In [21]:
@which svdvals!(copy(A))


Out[21]:
svdvals!(M::Bidiagonal{#s664,V} where V<:AbstractArray{#s664,1} where #s664<:Union{Float32, Float64}) in LinearAlgebra at C:\Users\Ivan_Slapnicar\AppData\Local\Programs\Julia\Julia-1.4.1\share\julia\stdlib\v1.4\LinearAlgebra\src\bidiag.jl:198

In [22]:
σ=svdvals(A)
μ=svdvals(B)
[σ (σ-μ)./σ]


Out[22]:
50×2 Array{Float64,2}:
 4.64298e10   -1.23263e-6
 3.10187e10   -4.32527e-6
 2.67815e10    1.84558e-7
 2.19952e10    5.02965e-6
 2.12924e9     1.9783e-6
 1.09703e9     4.56758e-6
 4.87714e8     5.12913e-6
 3.504e8      -7.46928e-6
 3.38076e8    -1.1964e-6
 1.52855e8    -3.53586e-6
 1.51249e8    -6.7203e-6
 6.65116e7    -8.03924e-6
 1.09161e7    -6.70807e-6
 ⋮            
 1.49347e-6   -5.067e-6
 6.26626e-8   -5.19851e-6
 4.6482e-8     5.16824e-7
 2.8816e-8     1.70642e-6
 1.58533e-8    1.94282e-6
 5.9872e-9    -3.20284e-7
 2.3034e-9    -4.60262e-6
 3.00645e-11   2.13156e-6
 2.42344e-11   1.67547e-6
 2.57861e-15  -3.19695e-6
 2.8134e-19   -7.00209e-6
 9.26519e-43   3.20759e-6

In [23]:
# The standard algorithm
U,ν,V=svd(A);

In [24]:
ν


Out[24]:
50-element Array{Float64,1}:
 4.64297988814722e10
 3.1018732553641674e10
 2.6781484235726624e10
 2.1995195415186718e10
 2.1292405702699533e9
 1.0970258726486578e9
 4.8771438635928595e8
 3.5039953017282873e8
 3.380763163975408e8
 1.5285479961405435e8
 1.5124895582047072e8
 6.651164429234651e7
 1.091614470029362e7
 ⋮
 4.639272457574271e-6
 4.6392685998815156e-6
 4.639268599364849e-6
 4.6392685424922195e-6
 4.6392683628936985e-6
 4.261717617689395e-6
 2.0843733304851733e-8
 1.160878502592858e-8
 4.7881986966198696e-11
 2.666020498475744e-12
 3.5067969761154713e-15
 8.699874466163866e-35

In [25]:
(σ-ν)./σ


Out[25]:
50-element Array{Float64,1}:
     -3.2864215288662434e-16
      1.2298043638720966e-16
      0.0
      0.0
      0.0
      0.0
      0.0
      1.7010480792023783e-16
      0.0
      0.0
      0.0
      0.0
      0.0
      ⋮
     -2.106379463708313
    -73.03564453873132
    -98.80781911453913
   -159.99625795670377
   -291.63662182925435
   -710.8045478679006
     -8.049124179013239
   -385.129757532541
     -0.9757893105870773
  -1032.8990821023888
 -12463.61962348508
     -9.389846110860628e7

In [ ]: