
Exercise 9: Singular Value Decomposition


Before you actually start working on this Exercise, it is essential that you have read Chapter 2 of Numerical Recipes (particularly the section Singular Value Decomposition), and that you understand the various concepts mentioned there (rank, range, orthogonality, etc.). That will make it much easier to understand what is going on when the various control parameters are changed in the experiments below.

Here are your 5 bonus points. Credits: 5/5

Subsections (with approximate times):
  • Preparation section: 15 min
  • SVD experimentarium: 30 min
  • Improving SVD: 60 min
  • Setting up the matrix system from scratch: 25 min
  • Home Work: 2 hours


CVS

[about 1 minute]

To extract this week's exercise files, do

    cd ~/ComputerPhysics
    cvs update -d

In case of problems, see the CVS update help page.


SVD preparatory section

[about 15 minutes]

Before you start playing with the SVD experimentarium, let's check that you picked up enough from the lecture and Numerical Recipes to make the exercise meaningful.

The following is a short summary, combined with questions. If you have prepared enough, you should have no problem with the questions. Conversely, if you find the summary too condensed and incomprehensible, you probably have not prepared enough.

Basic Principles of SVD

The basic theorem on which the SVD method is based says that an arbitrary MxN (M rows, N columns) matrix A may be decomposed into a product of three matrices, denoted U, W, and Vt in the Lecture Notes and in Numerical Recipes:

	A = U W Vt

One of the three matrices is MxN, the other two are NxN. Which one is MxN? Credits: 2/-1

One of these is a diagonal matrix; which one is it? Credits: 2/-1

One of the matrices is both row- and column-orthonormal; which one? Credits: 2/-1

The linear equation system

	A x = b

may be interpreted as a mapping from an N-dimensional x-space to an M-dimensional b-space, where we assume that M is larger than N. Using the SVD decomposition of A into U W Vt, we may write A x as U ( W ( Vt x)), where the innermost operation is a generalized rotation in N-space, the next one is a scaling of each of the N components, and the last one is a linear combination of the N column vectors in U, each with M components.

Since, according to this formula, b is a linear combination of the column vectors of U, it can never lie outside the N-dimensional space spanned by these. But b has M (> N) components; hence it formally belongs to an M-dimensional space.
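For concreteness, here is a minimal sketch of the decomposition in IDL, using the built-in SVDC routine on a made-up 5x3 system (the variable names are ours, not from the exercise files; check the SVDC documentation for its row/column convention and the /COLUMN keyword):

    ; Decompose a random M x N system (M = 5 equations, N = 3 unknowns)
    m = 5 & n = 3
    a = randomu(seed, n, m)    ; IDL arrays are indexed [column, row]
    SVDC, a, w, u, v, /DOUBLE  ; A = U W Vt; w holds the diagonal of W as a vector
    help, u, w, v              ; u has the shape of a, w is an N-vector, v is NxN
    print, w                   ; the N singular values w_i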

Which of the following statements are true?

The rank of A is equal to: Credits: 2/-1

Least-squares Solutions

If M > N, an M-dimensional vector has more components than can be expressed in terms of the N orthogonal column vectors of U. Hence there can be b vectors for which the equation system has no solution; no x will produce a b outside the range of A. However, the expression for the inversion of the equation,

	x = A^-1 b = V (W^-1 (Ut b))

may still be formally applied, and indeed still produces a useful solution; the solution for which the remainder (A x - b) is "smallest", in the sense that it has the smallest sum of squares of its components. To see that this is indeed the case, consider the following:

The three successive steps indicated by the parentheses may be interpreted as finding the components of b along the N column vectors of U, then scaling the components, and finally rotating the answer back to the original coordinate system.
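In IDL this back-substitution is packaged in the built-in SVSOL function. A minimal sketch, continuing the hypothetical a, u, w, v names from the sketch above:

    ; Least-squares solution: project b on the columns of U,
    ; scale the components by 1/w_i, and rotate back with V.
    b = randomu(seed, m)            ; some right-hand side with M components
    x = SVSOL(u, w, v, b, /DOUBLE)  ; the x that minimizes |A x - b|^2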

Which of the following statements are true?

The components of b that lie outside the range of A:
  • make no difference to the projection, and hence do not influence x
  • make a difference to the projection, and hence do influence x
Credits: 1/-1

A change of x will result in a contribution to the remainder that lies:
  • inside the range of A
  • outside the range of A
Credits: 1/-1

Contributions to the sum of squares from inside and outside the range of A are:
  • dependent
  • independent
Credits: 1/-1

The minimum is obtained when:
  • the contribution from inside the range vanishes
  • the contribution from outside the range vanishes
Credits: 1/-1

The Singular Values w_i

Each column vector in U is multiplied by a singular value w_i when going from x to b. It may happen that some changes of x have practically no effect on b (think of b as a measurement and x as the "true" function; a limited resolution in the measurement means that details of x can hardly be "seen" in b).

These x patterns will correspond to:
  • very large singular values w_i
  • very small singular values w_i
Credits: 1/-1

Correspondingly, when going from b to x, the projections of b onto those column vectors get multiplied by very large 1/w_i factors; i.e., they lead to:
  • very small contributions to x
  • very large contributions to x
Credits: 1/-1

Thus, if b is obtained from measurements, and is influenced by noise in the measurements, then the components of the noise along those particular U column vectors get multiplied by very large coefficients, which leads to huge uncertainties in the answer.

At this point, it should be easy to understand that one is better off ignoring these components completely than blowing up their accidental values with large factors. Hence the rule: set 1/w_i = 0 if w_i is too small.
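A hedged sketch of this edit rule in IDL (the tolerance eps is an assumption to tune; SVSOL, like the svbksb routine in Numerical Recipes on which it is modeled, treats zeroed singular values as 1/w_i = 0):

    ; Zero all singular values below a fraction eps of the largest one
    eps = 1e-6
    wp  = w
    bad = where(wp lt eps*max(wp), nbad)
    if nbad gt 0 then wp[bad] = 0.0
    x = SVSOL(u, wp, v, b, /DOUBLE)  ; 1/w_i = 0 is used for the zeroed values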

In the experimentarium, synthetic measurements b are produced by first smearing the input profile x with a (slightly noisy) Gaussian profile, and then adding some noise to the "measurements".
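For illustration, a hypothetical sketch of how such synthetic data could be generated; the kernel width, noise levels, and the top-hat input profile are made-up choices, not the experimentarium's actual values:

    ; A square system: each row of the matrix is a Gaussian smearing kernel
    n = 64
    t = dindgen(n)
    a = dblarr(n, n)
    for j = 0, n-1 do a[*, j] = exp(-((t - t[j]) / 3d0)^2)
    a = a * (1d0 + 0.01d0 * randomn(seed, n, n))      ; slightly noisy kernel
    x_true = dblarr(n)
    x_true[n/4:3*n/4] = 1d0                           ; profile with sharp corners
    b = dblarr(n)
    for j = 0, n-1 do b[j] = total(a[*, j] * x_true)  ; smear the input profile
    b = b + 0.02d0 * randomn(seed, n)                 ; add measurement noise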


SVD experimentarium

[about 30 minutes]

To run the SVD experimentarium:


Improving the SVD solution

[about 60 minutes]

The SVD solution is much better than the LU-decomposition solution (it is less noise-sensitive), but it shows some ringing near the sharp corners of the original function.

In order to pass the test below, your xsvd.pro needs to have:
  • a new widget slider
  • a widget label for the slider
  • a new widget base for the slider to sit in
  • a new field called (exactly) 'p' in the 'data' structure
All of these can be made by essentially copying the corresponding parts for the existing sliders; a sketch of the kind of code involved follows below.
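A hypothetical sketch of the kind of additions meant here; the parent base 'controls', the slider range, and the initial value of p are assumptions, so copy the actual patterns from the existing sliders in xsvd.pro:

    ; A new base holding a label and the new slider for the parameter p
    pbase   = WIDGET_BASE(controls, /COLUMN)
    plabel  = WIDGET_LABEL(pbase, VALUE='p')
    pslider = WIDGET_SLIDER(pbase, MINIMUM=0, MAXIMUM=20, VALUE=2, UVALUE='p')

    ; Extend the 'data' state structure with the new field 'p'
    data = CREATE_STRUCT(data, 'p', 2)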

I have a version that works with the new additions mentioned above. Locate and upload your xsvd.pro file. Credits: 5/-5



Setting up the equations from scratch

[about 25 minutes]

In the previous part of this exercise you became familiar with SVD using a prebuilt setup. In this part you perform the basic calculations yourself, starting from a series of data files and information about how these relate to the final result.

In order to pass the test below, your svd.jou journal file must assign the RMS value to an IDL variable named 'rms'.
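A minimal sketch of the final assignment; how you form the residual depends on your own setup, and the explicit loop below assumes the matrix is stored so that a[*, j] is row j (this sidesteps any ambiguity in IDL's # and ## conventions):

    ; RMS of the residual A x - b over all M equations
    resid = dblarr(m)
    for j = 0, m-1 do resid[j] = total(a[*, j] * x) - b[j]
    rms = sqrt(total(resid^2) / n_elements(resid))
    print, 'rms = ', rms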

My working journal file svd.jou is ready for uploading. Locate and upload your svd.jou file. Credits: 5/-5

Use any remaining time, for example, to play with the X-widget interface, either this week's interface or the one from last week.


Home Work

[about 2 hours]

Spend at least one hour this week reading the chapter about Linear Algebra in Numerical Recipes; understanding what goes on in this exercise is non-trivial! Details about what to read are given in the Lecture Notes.

If you did not have enough time to play with IDL last week, or during the exercise hours, then do it in your homework time. The X-widget interface this week is similar to the one from last week; in fact, if you would like to experiment with modifications of the X-widgets, you can pick either one of the two exercises.

If you want to prepare for the next Lecture, then check out the Chapter in Numerical Recipes about Ordinary Differential Equations.

