# An attempt at an intuitive description of the QR decomposition using Householder reflectors

I really like the Householder matrix. I’ve blogged about its ability to generate an orthonormal basis containing a particular vector in a previous blog post. This blog post is a bit of a tutorial which will use the Householder matrix to perform a QR decomposition. Applying Householder reflectors to compute a QR decomposition is nothing new, but I want this blog post to attempt to provide some intuition into how the algorithm works starting from almost nothing. We’ll briefly visit inner products, matrix multiplication, the Householder matrix and then build a QR decomposition in C. I’m not going to use formal language to define these operations unless it’s necessary. I’m going to assume that you understand things like matrix multiplication and what a matrix inverse is, but not much more than that. The post will be written assuming complex numbers are being used (the C implementation will not… maybe I will add one later).

## The inner product

The inner product takes two vectors and produces a scalar. You can think of a vector as being an array of $N$ values representing coordinates in some N-dimensional space. For two vectors $u$ and $v$, the inner product is:

$$ \langle u, v \rangle = \sum_{i} u_i \bar{v}_i $$

Note the bar in the above indicates conjugation. If $u$ and $v$ are row vectors, we could write the above as:

$$ \langle u, v \rangle = u v^H $$

Where $v^H$ is the transposed conjugate of $v$. Two vectors are said to be orthogonal if they have an inner product of zero. If the inner product of a vector with itself is equal to one, the vector is said to be a unit-vector. The inner product of a vector with a unit-vector is the proportion of that vector that is pointing in the same direction as the unit-vector i.e. if $q$ is some unit vector and I define a new vector $y$ from a vector $x$ as:

$$ y = x - \langle x, q \rangle q $$

The vector $y$ is now orthogonal to $q$ i.e. $\langle y, q \rangle = 0$.

There are some good geometric visualisations of these for two and three dimensional vectors… but everything continues to work in arbitrary dimensions. The above image contains two plots which both contain two perpendicular unit vectors (red and green) and an arbitrary vector (black). In both plots we can verify the red and green vectors are unit vectors by applying the inner product. We can also verify they are orthogonal by applying the inner product. If we compute the inner product of the black vector on the two unit vectors, then multiply each of the scalar results by the corresponding unit vectors used to compute them and sum the results, we should get back the black vector.
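These properties are easy to check numerically. The following sketch (the vectors and helper names are my own, real-valued for brevity) projects an arbitrary vector onto two orthonormal vectors and rebuilds it from the results:

```c
#include <math.h>

/* Inner product of two real n-dimensional vectors. */
double inner(const double *u, const double *v, int n)
{
    double acc = 0.0;
    int i;
    for (i = 0; i < n; i++)
        acc += u[i] * v[i];
    return acc;
}

/* Rebuild a 2-D vector from its projections onto two orthonormal
 * basis vectors: out = <x, q1> q1 + <x, q2> q2. */
void rebuild2(double *out, const double *x, const double *q1, const double *q2)
{
    double a = inner(x, q1, 2);
    double b = inner(x, q2, 2);
    int i;
    for (i = 0; i < 2; i++)
        out[i] = a * q1[i] + b * q2[i];
}
```

With $q_1 = (0.6, 0.8)$ and $q_2 = (-0.8, 0.6)$ (orthonormal), rebuilding $(3, -1)$ returns it exactly.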

## Matrix multiplications

We can define matrix multiplication using inner products of the vectors which make up the rows of the left-hand side matrix with the vectors which make up the columns of the right-hand side matrix.

We can use this to demonstrate that a matrix that is made up of rows (or columns) that are all orthogonal to each other is trivially invertible – particularly if the vectors are unit vectors, in which case the inverse of the matrix $A$ is $A^H$. To make this particularly clear, consider the product $A A^H$: entry $(i, j)$ is the inner product of row $i$ of $A$ with row $j$ of $A$.

If the rows are orthogonal to each other, the inner products off the main diagonal will all be zero. If the rows are unit vectors, the diagonal entries are by definition 1 and $A A^H = I$.
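A quick numerical illustration of this (a real-valued sketch; the helper name is my own): multiplying a matrix with orthonormal rows by its transpose yields the identity, so the transpose is the inverse.

```c
/* Compute m = a * a^T for a square row-major n-by-n matrix a.
 * Entry (i, j) of the result is the inner product of rows i and j. */
void aat(double *m, const double *a, int n)
{
    int i, j, k;
    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++) {
            double acc = 0.0;
            for (k = 0; k < n; k++)
                acc += a[i*n+k] * a[j*n+k];
            m[i*n+j] = acc;
        }
    }
}
```

For $A = \begin{pmatrix} 0.6 & 0.8 \\ -0.8 & 0.6 \end{pmatrix}$, whose rows are orthonormal, `aat` produces the identity.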

## The Householder matrix

Define a matrix $P$ as:

$$ P = I - 2 v v^H $$

Where $v$ is a unit vector. Then:

$$ P P = \left(I - 2 v v^H\right)\left(I - 2 v v^H\right) = I - 4 v v^H + 4 v \left(v^H v\right) v^H = I $$

Therefore $P$ is its own inverse ($P^{-1} = P$) and, because it is also Hermitian ($P^H = P$), it must be unitary ($P^H P = I$).

If we can work out a method to choose $v$ such that $P$ will contain a particular unit vector $q$ as its first row and we multiply $P$ by any scaled version of the vector $q$, we will get a vector which has only one non-zero entry (because all other rows of the matrix will be orthogonal to $q$). I have described this process before in a previous blog post but will repeat it here with some more detail. Define a vector $q$ that we want the first column of $P$ to equal:

$$ q = P e_1 = e_1 - 2 \bar{v}_1 v $$

If $v_1$ is real, we can break the above into its individual coordinates to solve for their values:

$$ v_1 = \sqrt{\frac{1 - q_1}{2}} \qquad v_i = \frac{-q_i}{2 v_1} \quad \left(i > 1\right) $$

Given the way the matrix is defined, the computation of the square root is unnecessary in an implementation. This is best seen if we expand out the matrix:

$$ P = I - 2 v v^H = I - \frac{\left(e_1 - q\right)\left(e_1 - q\right)^H}{1 - q_1} $$

Define a scalar $\alpha$ and a vector $w$:

$$ \alpha = \frac{1}{1 - q_1} \qquad w = e_1 - q $$

Then $P = I - \alpha w w^H$. This is a particularly useful formulation because we rarely want to compute $P$ in its entirety and perform a matrix multiplication. Rather, we compute:

$$ P x = x - \alpha w \left(w^H x\right) $$

Which is substantially more efficient. One final change is necessary: if $q_1$ is close to one (which implies all other coefficients of $q$ are approaching zero), $\alpha$ will begin to approach infinity. There is a relatively simple change here which is to recognise that $q$ and $-q$ are parallel but pointing in different directions (their inner-product is -1). If we don’t care about the direction of the vector, we can change its sign to be the most numerically suitable for the selection of $\alpha$ i.e. if $q_1 > 0$ use $-q$, otherwise use $q$.

This is actually very useful in many applications, one of which is the QR decomposition.

## The QR decomposition

The QR decomposition breaks down a matrix $A$ into a unitary matrix $Q$ and an upper-triangular matrix $R$ such that $A = QR$. There are a number of ways to do this, but we are going to use the Householder matrix.

Let’s start by defining a series of unitary transformations applied to $A$ as:

$$ A_1 = P_1 A \qquad A_2 = P_2 A_1 \qquad \dots \qquad A_{N-1} = P_{N-1} A_{N-2} $$

Where $P_1 \dots P_{N-1}$ are all N-by-N unitary matrices. Notice that the first row of $A_1$ will be unmodified by all the transforms that follow. The first two rows of $A_2$ will be unmodified by all the transforms that follow – and so-on.

If we can somehow force the first row of $P_1$ to be the first column of $A$ scaled to be a unit vector (while keeping $P_1$ unitary), the first column of $A_1$ will contain only one non-zero entry. We then set about finding a way to select $P_2$ such that the second column of $A_2$ contains only two non-zero entries… and so on. Once the process is finished, $A_{N-1} = R$ is upper triangular and the unitary matrix which converts $A$ into this form can be written as:

$$ Q^H = P_{N-1} \dots P_2 P_1 $$

And:

$$ A = Q R \qquad Q = P_1 P_2 \dots P_{N-1} $$

When we follow the above process and the matrices are chosen to be Householder matrices, we have performed the QR decomposition using Householder matrices.

What follows is the C code that implements the decomposition. Because the algorithm repeatedly applies transformations to $A$ to eventually arrive at $R$, we will design an API that operates in-place on the supplied matrix. The orthogonal matrix $Q$ will be built in parallel. The N-1 iterations over the i variable correspond to the N-1 transformation steps presented earlier. I make no claims about the computational performance of this code. The numerical performance could be improved by using a different method to compute the length of the column vector (the first thing that happens in the column loop).

```c
#include <math.h>

/* Perform the QR decomposition of the given square A matrix.
 * A/R is stored and written in columns i.e.
 *   a[0] a[3] a[6]
 *   a[1] a[4] a[7]
 *   a[2] a[5] a[8]
 *
 * q is stored in rows i.e.
 *   q[0] q[1] q[2]
 *   q[3] q[4] q[5]
 *   q[6] q[7] q[8] */
void qr(double *a_r, double *q, int N)
{
    int i, j, k;

    /* Initialise q to be the identity matrix. */
    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++) {
            q[i*N+j] = (i == j) ? 1.0 : 0.0;
        }
    }

    for (i = 0; i < N-1; i++) {
        double norm, invnorm, alpha;
        double *ccol = a_r + i*N;

        /* Compute length of vector (needed to perform normalisation). */
        for (norm = 0.0, j = i; j < N; j++) {
            norm += ccol[j] * ccol[j];
        }
        norm = sqrt(norm);

        /* Flip the sign of the normalisation coefficient to have the
         * effect of negating the u vector. */
        if (ccol[i] > 0.0)
            norm = -norm;
        invnorm = 1.0 / norm;

        /* Normalise the column
         * Convert the column in-place into the w vector
         * Compute alpha */
        ccol[i] = (1.0 - ccol[i] * invnorm);
        for (j = i+1; j < N; j++) {
            ccol[j] = -ccol[j] * invnorm;
        }
        alpha = 1.0 / ccol[i];

        /* Update the orthogonal matrix */
        for (j = 0; j < N; j++) {
            double acc = 0.0;
            for (k = i; k < N; k++) {
                acc += ccol[k] * q[k+j*N];
            }
            acc *= alpha;
            for (k = i; k < N; k++) {
                q[k+j*N] -= ccol[k] * acc;
            }
        }

        /* Update the upper triangular matrix */
        for (j = i+1; j < N; j++) {
            double acc = 0.0;
            for (k = i; k < N; k++) {
                acc += ccol[k] * a_r[k+j*N];
            }
            acc *= alpha;
            for (k = i; k < N; k++) {
                a_r[k+j*N] -= ccol[k] * acc;
            }
        }

        /* Overwrite the w vector with the column data. */
        ccol[i] = norm;
        for (j = i+1; j < N; j++) {
            ccol[j] = 0.0;
        }
    }
}
```

The algorithm follows almost everything described on this page – albeit with some minor obfuscations (the $w$ vector is stored temporarily in the output matrix to avoid needing any extra storage). It’s worth mentioning that the first step of initialising the $Q$ matrix to the identity isn’t necessary. The initialisation could be performed explicitly in the first iteration of the loop over i – but that introduces more conditional code.

# Release Alignment in Sampled Pipe Organs – Part 1

At the most basic level, a sample from a digital pipe organ contains:

• an attack transient leading into
• a looped sustain block and
• a release which will be cross-faded into when the note is released.

The release cross-fade must be fast (otherwise it will not sound natural or transient details may be lost) and it must also be phase-aligned to the point where the cross-fade begins.

## The necessity for phase alignment

Without phase aligning the release, disturbing artefacts will likely be introduced. The effects are different with short and long cross-fades but are always unpleasant.

The following image shows an ideal cross-fade into a release sample. The crossfade begins at 0.1 seconds and lasts for 0.05 seconds. The release is aligned properly and the signal looks continuous.

A good crossfade into a release.

The following image shows a bad release where the cross-fade is lagging an ideal release offset by half-a-period. Some cancellation occurs during the cross-fade and the result will either sound something like a “pluck” for long cross-fades or a “click” for short cross-fades.

A worst-case crossfade into a release.

(The cross-fade used in generating the above data sets was a raised cosine – linear cross-fades can be used but will result in worse distortions).

The problem of aligning release cross-fades in virtual pipe organs is an interesting one. As an example: at the time of writing this article, release alignment in the GrandOrgue project is not particularly good; it uses a lookup-table taking the value and first-order estimated derivative (both quantised heavily) of the last sample of the last played block as keys. This is not optimal as a single sample says nothing about phase and the first-order derivative estimate could be completely incorrect in the presence of noise.

## Another approach for handling release alignment

If the pitch of a pipe were completely stable and known ($f_0$), and we knew one point in time where the release was perfectly aligned ($t_0$), we know that we could cross-fade into the start of the release at:

$$ t = t_0 + \frac{k}{f_0} \quad \text{for any integer } k $$

Hence, for any sample offset we could compute an offset into the release to cross-fade into.
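In samples rather than seconds, the ideal-case computation might be sketched as follows (hypothetical names, assuming a perfectly stable period):

```c
/* Given the sample offset n at which the note is released, a known
 * aligned offset n0 and a (perfectly stable) period in samples, return
 * how far into the current period we are - i.e. how far into the
 * release to begin the cross-fade to stay phase-aligned. */
int release_phase_offset(int n, int n0, int period)
{
    int d = (n - n0) % period;
    if (d < 0)
        d += period;
    return d;
}
```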

In reality, pipe pitch wobbles around a bit and so the above would not strictly hold all the time – that being said, it is true for much of the time. If we could take a pipe sample and find all of the points where the release is aligned we could always find the best way to align the release.

It turns out that a simple way to do this is to find the cross-correlation of the attack and sustain segment with a short portion of the release. Taking the whole release would be problematic because as it decays it becomes less similar to the sustaining segment (which leads to an unhelpful correlation signal).

The first 25000 samples of the signal used for the cross-correlation.

The above image shows the attack and some sustain of bottom-C of the St. Augustine’s Closed Horn. This shows visually why single sample amplitude and derivative matching is a poor way to align releases. During one period of the closed horn, there are 14 zero crossings and 16 obvious zero crossings in the derivative. One sample gives hardly enough information.

A 1024 sample cut from the start of the release.

The above image shows a 1024 sample segment taken from the release marker of the same Closed Horn sample. It contains just over a single period of the horn.

The next image shows the cross-correlation of this release segment with the sample itself. My analysis program does correlation of the left and right channels and sums them to provide an overall correlation. Positive maximums correspond to points where the release will phase-align well. Minimums correspond to points where the signal has the least correlation to the release.

Normalised cross correlation of the signal with the release segment.

Using the correlation and a pitch guesstimate, we could construct a function which given any sample offset in the attack/sustain could produce an offset into the release which we should cross-fade into. This is for next time.

# Understanding the Modified Discrete Cosine Transform (MDCT)

After playing around with discrete cosine transforms, I thought I would implement an MDCT and document my understanding of how everything works. I use some similar techniques to those used on the Wikipedia page as they are helpful for understanding but will add some scanned drawings which I think help (I’m not even close to being clever enough to get a computer to draw these for me).

## Prerequisites

The only real background knowledge which I think is relevant to understanding the MDCT is the data extensions which the DCT-4 transform assumes.

First DCT-4 Basis Function with Shifted 2N Sample Input

I’ll refer to the above image in the Forward Transform overview, but for the mean time, only pay attention to the solid quarter wave. This is the first basis function (i.e. $k = 0$) for an $N$ length DCT-4. If the basis is continued past $N$, it has a repeating symmetrical pattern (the dashed line in the image) which repeats every $4N$ samples. The symmetry is even around $n = -\frac{1}{2}$ and odd around $n = N - \frac{1}{2}$ and holds for every basis function of the DCT-4. i.e. The DCT-4 assumes that the input data continues on forever, repeating itself in the following manner: $x$, $-x_R$, $-x$, $x_R$ (where the subscript $R$ denotes a reversal of the sequence).

## Forward Transform

The MDCT takes in $2N$ real data points and produces $N$ real outputs. These inputs are designed to overlap, so the first half of the input data should be the second half of the input data of the previous call. The definition is:

$$ X_k = \sum_{n=0}^{2N-1} x_n \cos\left( \frac{\pi}{N} \left(n + \frac{1}{2} + \frac{N}{2}\right)\left(k + \frac{1}{2}\right) \right) $$

It should be trivial to see from the above that the MDCT can be computed using a DCT-4 with an extended number of input data points, all of which have been shifted by half a basis. Go back to the crappy drawing and notice the concatenated length $\frac{N}{2}$ sequences $a$, $b$, $c$ and $d$. The total length of this sequence is $2N$ and it begins at $\frac{N}{2}$ (or half the length of a basis function). We need to get $b$, $c$ and $d$ back into the $N$ point region if we want to compute the MDCT using a DCT-4; this can be achieved with the following concatenated sequence (I will subscript these sequences with $R$ to denote a reversal of the sequence):

$$ \left( -c_R - d, \; a - b_R \right) $$

If we take the DCT-4 of this concatenated sequence, we have found the MDCT of the input sequence.

## Inverse Transform

The inverse MDCT or IMDCT takes in $N$ real data points and produces $2N$ real outputs. In this transform, the outputs should overlap such that the first half of the output should be added to the second half of the output data in the previous call. The definition is:

$$ y_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k \cos\left( \frac{\pi}{N} \left(n + \frac{1}{2} + \frac{N}{2}\right)\left(k + \frac{1}{2}\right) \right) $$

Because we know how the DCT-4 assumes the input and output data repeats in a symmetric pattern, we can get this data trivially in exactly the same fashion as we did in the forward transform. In the following illustration, we take the output from the forward transform and extend it along the basis:

Extended Projection of the MDCT Output on the First DCT-4 Basis

In output row zero, we can see how to extend the input sequence to obtain the 2N points required. We then see in rows two and three how summing the overlapping blocks causes the aliased sequences to cancel in subsequent calls to the IMDCT.

## World’s Dumbest C MDCT Implementation

I validated all this actually works with a small C program. What follows are the MDCT/IMDCT implementations I came up with… ignore the “twid” input; I cache the modulation factors for the FFT which gets called in the dct4 routine:

```c
/* state should contain double the number of elements as the input
 * buffer (N) and should have all elements initialized to zero prior to
 * calling. The output buffer is actually the first N elements of state
 * after calling. */
void mdct(double *state, const double *input, double *twid, unsigned lenbits)
{
    unsigned rl = 1u << lenbits;
    unsigned i;
    /* Alias the input data with the previous block. */
    for (i = 0; i < rl / 2; i++) {
        state[i]      = - input[rl/2+i] - input[rl/2-i-1];
        state[rl/2+i] = state[rl+i] - state[rl+rl-i-1];
    }
    /* Save the input block */
    for (i = 0; i < rl; i++)
        state[rl+i] = input[i];
    /* DCT-4 */
    dct4(state, lenbits, twid);
}

/* state should contain double the number of elements as the input
 * buffer (N) and should have all elements initialized to zero prior to
 * calling. The output buffer is actually the first N elements of state
 * after calling. */
void imdct(double *state, const double *input, double *twid, unsigned lenbits)
{
    unsigned rl = 1u << lenbits;
    unsigned i;
    /* Collect contributions from the previous frame to the output buffer */
    for (i = 0; i < rl / 2; i++) {
        state[i]      = - state[rl+rl/2-i-1];
        state[rl/2+i] = - state[rl+i];
    }
    /* Load the input and run the DCT-4 */
    for (i = 0; i < rl; i++)
        state[rl+i] = input[i];
    dct4(state + rl, lenbits, twid);
    /* Sum contributions from this frame to the output buffer and perform
     * the required scaling. */
    for (i = 0; i < rl / 2; i++) {
        state[i]      = (state[i] + state[rl+rl/2+i]) / rl;
        state[rl/2+i] = (state[rl/2+i] - state[rl+rl-i-1]) / rl;
    }
}
```

## Windowed MDCT Implementation

Typical MDCT implementations will window the input and output data (this can also be thought of as windowing the basis functions – which I think is a more helpful way to understand what is happening). It is really important to note that the window function must be carefully chosen to ensure that the basis functions remain orthogonal! The window makes the basis functions always begin and end near zero. The process has the side effect of de-normalising the basis functions (unless the window is rectangular) and means there will be a window-dependent scaling factor which will need to be applied at the output to achieve perfect reconstruction. The following images show the second basis function of the MDCT both un-windowed and windowed with a half-sine window (given at the end of the post).

Second MDCT Basis Function

Sine Windowed Second MDCT Basis Function

In a lossy codec this windowing process is somewhat necessary because if the start and end points are not close to zero, the output is likely to periodically glitch for even the slightest errors in the reconstructed MDCT data. This glitching will occur at the boundaries of the transform (i.e. every N points).

We can work out the necessary conditions for the window to obtain perfect reconstruction using the previous drawings (I’d steer away from equations for this one – it’s easier to validate the results visually) by applying a window function split into 4 segments to each of the input blocks. I’ll do the generic case for a symmetrical window which is applied to both the input and the output. We split the window (which has a length of $2N$) into four segments which will be applied to our original input segments $a$, $b$, $c$ and $d$. Because we are defining this window to be symmetric, we can call the pieces:

$$ \left( u, \; v, \; v_R, \; u_R \right) $$

Symmetrical Window Impact on MDCT

The above illustration shows how our window segments are applied to the input data and the impact that has on the DCT-4 analysed data blob. Following that are the output segments from two sequential IMDCT calls with the windows applied to the output here as well.

We need to make the overlapping terms equal the required output segment i.e.

$$ \left( u^2 + v_R^2 \right) c = c \qquad \left( v^2 + u_R^2 \right) d = d $$

It is clear from the above that the necessary condition to achieve reconstruction is for $u^2 + v_R^2 = 1$ element-wise (which implies in this case that $v^2 + u_R^2 = 1$ must also be true).

A simple solution to this is the half-sine window:

$$ w_n = \sin\left(\frac{\pi}{2N}\left(n + \frac{1}{2}\right)\right) \qquad 0 \le n < 2N $$

The output requires a scaling factor of 2 for this window.

# Derivation of fast DCT-4 algorithm based on DFT

It’s well known that an $N$ point DCT-4 can be computed using an $\frac{N}{2}$ point complex FFT. Although the algorithm is widespread, the texts which I have read on the subject have not provided the details as to how it works. I’ve been trying to understand this for a while (I tend not to use algorithms until I understand them) and finally figured it out, so I thought I would share (please provide comments/links if you can find a better or shorter explanation – this is as good as I could get).

Start with the definition:

$$ X_k = \sum_{n=0}^{N-1} x_n \cos\left(\frac{\pi}{N}\left(n+\frac{1}{2}\right)\left(k+\frac{1}{2}\right)\right) $$

Trig functions are hard to manipulate in expressions (for me anyway), so knowing that $x_n$ is real, we change the $\cos$ into a complex exponential and obtain the following:

$$ X_k = \Re\left\{ \sum_{n=0}^{N-1} x_n e^{-j\frac{\pi}{N}\left(n+\frac{1}{2}\right)\left(k+\frac{1}{2}\right)} \right\} $$

It is worth noting that the sign of the exponent is irrelevant.

We then split the expression up to operate over two half-length sequences composed of $x_{2n}$ and $x_{N-1-2n}$. The second sequence is reversed and decimated because when the $n$ is replaced by $N-1-n$ in the $\left(n+\frac{1}{2}\right)$ term of the exponential, the expression is negated rather than the offset being modified. We move the $\pi\left(k+\frac{1}{2}\right)$ term into another exponential which becomes trivial as $N$ cancels the denominator. i.e.:

$$ X_k = \Re\left\{ \sum_{n=0}^{\frac{N}{2}-1} x_{2n} e^{-j\frac{\pi}{N}\left(2n+\frac{1}{2}\right)\left(k+\frac{1}{2}\right)} + \sum_{n=0}^{\frac{N}{2}-1} x_{N-1-2n} e^{-j\pi\left(k+\frac{1}{2}\right)} e^{j\frac{\pi}{N}\left(2n+\frac{1}{2}\right)\left(k+\frac{1}{2}\right)} \right\} $$

Or:

$$ X_k = \Re\left\{ \sum_{n=0}^{\frac{N}{2}-1} \left( x_{2n} e^{-j\frac{\pi}{N}\left(2n+\frac{1}{2}\right)\left(k+\frac{1}{2}\right)} - j \left(-1\right)^k x_{N-1-2n} e^{j\frac{\pi}{N}\left(2n+\frac{1}{2}\right)\left(k+\frac{1}{2}\right)} \right) \right\} $$

The exponentials still contain no term which resembles an $\frac{N}{2}$ point DFT (over length $\frac{N}{2}$ anyway). To get one, we break up $k$ into the terms $2k$ and $N-1-2k$. Again we choose to reverse the second sequence because it will keep the exponential terms in a similar but negated form:

$$ X_{2k} = \Re\left\{ \sum_{n=0}^{\frac{N}{2}-1} \left( x_{2n} e^{-j\frac{\pi}{N}\left(2n+\frac{1}{2}\right)\left(2k+\frac{1}{2}\right)} - j x_{N-1-2n} e^{j\frac{\pi}{N}\left(2n+\frac{1}{2}\right)\left(2k+\frac{1}{2}\right)} \right) \right\} $$

$$ X_{N-1-2k} = \Re\left\{ \sum_{n=0}^{\frac{N}{2}-1} \left( -j x_{2n} e^{j\frac{\pi}{N}\left(2n+\frac{1}{2}\right)\left(2k+\frac{1}{2}\right)} - x_{N-1-2n} e^{-j\frac{\pi}{N}\left(2n+\frac{1}{2}\right)\left(2k+\frac{1}{2}\right)} \right) \right\} $$

Now because we are only interested in the real terms, we can ignore or conjugate anything which contributes to the imaginary term of the above expressions. This means that we can conjugate the exponentials when they are only modulating a real term. Doing this, we obtain:

$$ X_{2k} = \Re\left\{ \sum_{n=0}^{\frac{N}{2}-1} \left( x_{2n} + j x_{N-1-2n} \right) e^{-j\frac{\pi}{N}\left(2n+\frac{1}{2}\right)\left(2k+\frac{1}{2}\right)} \right\} $$

$$ X_{N-1-2k} = \Re\left\{ \sum_{n=0}^{\frac{N}{2}-1} \left( j x_{2n} - x_{N-1-2n} \right) e^{-j\frac{\pi}{N}\left(2n+\frac{1}{2}\right)\left(2k+\frac{1}{2}\right)} \right\} $$

Almost there, but to obtain the full benefit we need the inner terms to be the same. If we now multiply the $\left( x_{2n} + j x_{N-1-2n} \right)$ term by $j$ (noting that $j\left(x_{2n} + j x_{N-1-2n}\right) = j x_{2n} - x_{N-1-2n}$), we get:

$$ X_{N-1-2k} = \Re\left\{ j \sum_{n=0}^{\frac{N}{2}-1} \left( x_{2n} + j x_{N-1-2n} \right) e^{-j\frac{\pi}{N}\left(2n+\frac{1}{2}\right)\left(2k+\frac{1}{2}\right)} \right\} = -\Im\left\{ \sum_{n=0}^{\frac{N}{2}-1} \left( x_{2n} + j x_{N-1-2n} \right) e^{-j\frac{\pi}{N}\left(2n+\frac{1}{2}\right)\left(2k+\frac{1}{2}\right)} \right\} $$

Done. If we expand out the $n$ and $k$ terms completely we get:

$$ V_k = e^{-j\frac{\pi\left(4k+1\right)}{4N}} \sum_{n=0}^{\frac{N}{2}-1} \left[ \left( x_{2n} + j x_{N-1-2n} \right) e^{-j\frac{\pi n}{N}} \right] e^{-j\frac{2\pi nk}{N/2}} $$

$$ X_{2k} = \Re\left\{ V_k \right\} \qquad X_{N-1-2k} = -\Im\left\{ V_k \right\} $$

Which gives us the steps for a DFT based DCT-4 algorithm:

• Transform the $N$ point real sequence $x_n$ into the $\frac{N}{2}$ point complex sequence $z_n = x_{2n} + j x_{N-1-2n}$.
• Multiply each element of the sequence $z_n$ by $e^{-j\frac{\pi n}{N}}$.
• Find $Z_k$, the DFT of the sequence $z_n$.
• Multiply each element of $Z_k$ by $e^{-j\frac{\pi\left(4k+1\right)}{4N}}$ to give $V_k$.
• The DCT-4 outputs are given by: $X_{2k} = \Re\left\{ V_k \right\}$ and $X_{N-1-2k} = -\Im\left\{ V_k \right\}$.