Usual boilerplate.
In [1]:
%matplotlib inline
import matplotlib.pyplot as pl
from matplotlib import rcParams
from IPython.display import HTML, Latex, display
from scipy import *
from scipy.linalg import eigh
set_printoptions(linewidth=180)
import json ; s = json.load(open("mplrc.json")) ; del json
rcParams.update(s)
rcParams['figure.figsize'] = 6,10
def css_styling():
    styles = open("custom.css", "r").read()
    return HTML(styles)
css_styling()
Out[1]:
Next, some utility functions:
In [2]:
def redeigh(K, M, phi):
    """Solve the reduced eigenproblem in the subspace iteration method.

    Input: phi, a 2-d array containing the current subspace;
    output: 1. 1-d array of eigenvalue estimates;
            2. 2-d array of eigenvector estimates in Ritz coordinates."""
    # compute the reduced matrices
    Mr = phi.T*M*phi
    Kr = phi.T*K*phi
    # solve the reduced eigenproblem, using a library function
    return eigh(Kr, Mr)
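As a standalone sanity check of what `redeigh` computes, here is the same Ritz reduction worked on a toy 3-DOF system with a 2-vector subspace (the names `K3`, `M3`, `phi3` are invented for this sketch and do not appear elsewhere in the notebook):

```python
import numpy as np
from scipy.linalg import eigh

# Toy 3-DOF system (matrices invented for this sketch)
K3 = np.array([[ 2., -1.,  0.],
               [-1.,  2., -1.],
               [ 0., -1.,  1.]])
M3 = np.eye(3)
phi3 = np.array([[1., 0.],
                 [1., 1.],
                 [1., 2.]])               # two independent trial vectors
Kr = phi3.T.dot(K3).dot(phi3)             # reduced 2x2 stiffness
Mr = phi3.T.dot(M3).dot(phi3)             # reduced 2x2 mass
ritz_vals, ritz_vecs = eigh(Kr, Mr)       # Ritz values, in ascending order
exact = eigh(K3, M3, eigvals_only=True)   # exact eigenvalues, for comparison
print(ritz_vals)
print(exact[:2])
```

By the Rayleigh-Ritz property, each Ritz value is an upper bound on the corresponding exact eigenvalue, so the printed estimates sit at or above the exact lowest two.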
In [3]:
def plot_eigenvectors(evecs):
    floors = range(shape(evecs)[0]+1)
    # note: iterate over the argument, not a global variable
    for i, v in enumerate(evecs.transpose()):
        pl.plot(concatenate(([0], ravel(v))), floors, linewidth=2, label='Mode #%d' % (i+1,))
    pl.vlines(0, 0, floors[-1])
    xmn, xmx = pl.xlim() ; pl.xlim(xmn, 1.6*xmx)
    pl.yticks(floors) ; pl.legend(loc=0)
    pl.show()
In [4]:
def same_sign(arr):
    "Modifies \"in place\" a 2-D array, forcing the last row to be non-negative."
    for col in asarray(arr).transpose():
        col *= sign(col[-1])
    return None
In [5]:
def error(arr):
    err = []
    for col in arr.transpose():
        err.append(sqrt(sum(ravel(col)*ravel(col))))
    return err
We analyse a 2D frame in which flexural deformations in the beams and shear and axial deformations are negligible: all deformations are due to the lateral flexibility of the columns. Under these assumptions we can study a dynamic model with 1 DOF per floor.
Our frame has twelve floors and hence 12 DOF. The DOF are numbered from the bottom up to the top of the frame.
We need only the lowest 4 eigenvalue-eigenvector pairs, so we can use the subspace iteration method with a base phi that is a 12x4 array.
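In formulas, each sweep of the method can be summarised as follows (standard subspace iteration notation; $\Phi_n$ is the current base, $D = K^{-1}M$ the dynamic matrix):

$$K_r = \Phi_n^T K \Phi_n, \qquad M_r = \Phi_n^T M \Phi_n, \qquad K_r x_i = \lambda_i M_r x_i,$$

where the Ritz values $\lambda_i$ estimate the lowest eigenvalues, the Ritz vectors $\psi_i = \Phi_n x_i$ estimate the eigenvectors, and the base for the next sweep is $\Phi_{n+1} = D\,[\psi_1\;\ldots\;\psi_4]$.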
The floor masses are all the same, while the storey stiffnesses decrease with height, from 23$k$ for storey 1, i.e., between the ground and the first floor, down to 12$k$ for storey 12.
We start by creating a list containing the lateral stiffnesses, taking into account also the (zero!) stiffness of the (non-existent) storey above the top, as this trick is handy when defining the coefficients of the stiffness matrix.
In [6]:
story_stiffness = range(23,11,-1)
story_stiffness.append(0)
story_stiffness = array(story_stiffness)
y = array(range(13))
print "Storey: ",','.join([" %2d"%(i+1,) for i in y])
print "Stiffness: ",",".join([" %2d"%(s,) for s in story_stiffness])
We construct the structural matrices: M is a unit matrix, while K is assembled as the superposition of 3 matrices, for each of which we specify one diagonal. The index set [:-1] means from the first to the penultimate element, [1:] from the second to the last, and [1:-1] from the second to the penultimate. While we are at it, we also compute the dynamic matrix.
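As a quick standalone check of those index sets (plain NumPy, independent of the notebook state):

```python
import numpy as np

a = np.arange(5)   # array([0, 1, 2, 3, 4])
print(a[:-1])      # first to penultimate  -> [0 1 2 3]
print(a[1:])       # second to last        -> [1 2 3 4]
print(a[1:-1])     # second to penultimate -> [1 2 3]
```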
In [7]:
M = matrix(eye(12))
ss = story_stiffness
K = (diag(+ss[:-1] + ss[1:]) +
     diag(-ss[1:-1], k=+1) +
     diag(-ss[1:-1], k=-1)) * 1.0
K = matrix(K)
D = K.I*M
In [8]:
print "normalized mass matrix M/m:"
print M
print
print "normalized stiffness matrix K/k:"
print K
To have something to compare our approximate results with, we compute, using a library function, a very good approximation to the eigenvalues and eigenvectors of our system.
In [9]:
evals, evecs = eigh(K,M,eigvals=(0,3))
same_sign(evecs)
print "first four eigenvalues of the system"
print evals,"\n"
print "first four eigenvectors of the system"
print evecs
The initial base is a 12x4 matrix with linearly independent columns. To start with a very bad set, we choose a matrix of random numbers. The call to random.seed ensures repeatability of the results.
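A minimal illustration of why the seeding matters (a standalone sketch; the seed value below is arbitrary): reseeding the generator replays exactly the same stream of draws.

```python
import numpy as np

np.random.seed(2004)                  # any fixed seed; 2004 is arbitrary here
first = np.random.random((2, 2))
np.random.seed(2004)                  # reseeding replays the same stream
second = np.random.random((2, 2))
print(np.array_equal(first, second))  # the two draws are identical
```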
In [10]:
random.seed(8+8+1988) # good luck
phi = matrix(random.random((12,4)))-0.5
print "initial subspace vectors"
print phi
It's time to iterate, starting from a very bad choice for the initial base vectors.
In [11]:
description = Latex(r'$$\text{error}_i = \sqrt{\textstyle{\sum_j} \Delta\psi_{ji}^2}$$')
for i in range(5):
    display(HTML("<h3>Iteration #%2.2d</h3>" % (i+1,)))
    ritz_evals, ritz_evecs = redeigh(K, M, phi)
    # "_e" is for "estimate"
    evals_e = ritz_evals
    evecs_e = phi*ritz_evecs
    same_sign(evecs_e)  # force the same sign in the last component of evecs
    # compute the new base
    phi = D*evecs_e
    # show what we have done
    print "\"Real\" eigenvalues     ", evals
    print "Estimated eigenvalues  ", evals_e
    print "Relative error (e-r)/r ", (evals_e-evals)/evals_e
    print "2-norm of the difference between estimated eigenvectors and \"real\" ones"
    display(description)
    print error(evecs_e-evecs)
    display(HTML("<h5>The normalised shapes at iteration #%2.2d</h5>" % (i+1,)))
    plot_eigenvectors(evecs_e)
Notice how slowly the error index decreases for the last two eigenvectors... we have done almost our best for them.
Using a Ritz base with just a few vectors, we expect (from the last slide shown in class) fast convergence for the lower half of the eigenvectors only, and slower convergence for the remaining ones (eventually all the eigenvectors converge; what is under discussion is the speed of convergence).
This can indeed be seen in this example.
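This behaviour is consistent with the standard convergence estimate for subspace iteration (see, e.g., Bathe's treatment of the method): with a subspace of $q$ vectors, the $i$-th eigenvector converges asymptotically with ratio $\lambda_i/\lambda_{q+1}$ per sweep. Here $q=4$, so modes 1 and 2 enjoy small ratios $\lambda_1/\lambda_5$ and $\lambda_2/\lambda_5$, while modes 3 and 4 have ratios closer to one and hence converge more slowly.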