An attempt at an intuitive description of the QR decomposition using Householder reflectors

I really like the Householder matrix. I’ve blogged about its ability to generate an orthonormal basis containing a particular vector in a previous blog post. This post is a bit of a tutorial which will use the Householder matrix to perform a QR decomposition. Applying Householder reflectors to compute a QR decomposition is nothing new, but I want this post to provide some intuition into how the algorithm works, starting from almost nothing. We’ll briefly visit inner products, matrix multiplication and the Householder matrix, and then build a QR decomposition in C. I’m not going to use formal language to define these operations unless it’s necessary. I’m going to assume that you understand things like matrix multiplication and what a matrix inverse is, but not much more than that. The post will be written assuming complex numbers are being used (the C implementation will not be… maybe I will add a complex version later).

The inner product

The inner product takes two vectors and produces a scalar. You can think of a vector as an array of N values representing coordinates in some N-dimensional space.

    \[ \langle \mathbf{a}, \mathbf{b} \rangle = \sum^{N}_{k=1} a_k \overline{b_k} \]

Note the bar in the above indicates conjugation. If \mathbf{a} and \mathbf{b} are row vectors, we could write the above as:

    \[ \langle \mathbf{a}, \mathbf{b} \rangle = \mathbf{a} \mathbf{b}^H \]

Where \mathbf{b}^H is the conjugate transpose of \mathbf{b}. Two vectors are said to be orthogonal if they have an inner product of zero. If the inner product of a vector with itself is equal to one, the vector is said to be a unit vector. The inner product of a vector with a unit vector gives the proportion of that vector pointing in the same direction as the unit vector, i.e. if \mathbf{u} is some unit vector and I define a new vector as:

    \[ \mathbf{b} = \mathbf{a} - \langle \mathbf{a}, \mathbf{u} \rangle \mathbf{u} \]

The vector \mathbf{b} is now orthogonal to \mathbf{u} i.e. \langle\mathbf{b},\mathbf{u}\rangle = 0.

There are some good geometric visualisations of these for two- and three-dimensional vectors… but everything continues to work in arbitrary dimensions. The above image contains two plots, each showing two perpendicular unit vectors (red and green) and an arbitrary vector (black). In both plots we can verify that the red and green vectors are unit vectors, and that they are orthogonal to each other, by applying the inner product. If we compute the inner product of the black vector with each of the two unit vectors, multiply each scalar result by the unit vector used to compute it and sum the results, we should get back the black vector.
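
Here is a quick real-valued sketch of that last claim (the vectors and the inner() helper are just made up for illustration): it checks the unit-vector and orthogonality properties and reconstructs an arbitrary vector from its projections onto an orthonormal pair.

#include <math.h>
#include <stdio.h>

/* Real inner product of two length-n vectors. */
static double inner(const double *a, const double *b, int n) {
	double acc = 0.0;
	int i;
	for (i = 0; i < n; i++)
		acc += a[i] * b[i];
	return acc;
}

int main(void) {
	/* Two perpendicular unit vectors and an arbitrary vector in their span
	 * (example values only). */
	const double u1[3] = {1.0/sqrt(2.0), 1.0/sqrt(2.0), 0.0};
	const double u2[3] = {-1.0/sqrt(2.0), 1.0/sqrt(2.0), 0.0};
	const double a[3]  = {3.0, -1.0, 0.0};
	double p1 = inner(a, u1, 3); /* proportion of a along u1 */
	double p2 = inner(a, u2, 3); /* proportion of a along u2 */
	int i;
	printf("<u1,u1>=%f <u2,u2>=%f <u1,u2>=%f\n",
	       inner(u1, u1, 3), inner(u2, u2, 3), inner(u1, u2, 3));
	/* Reconstruct a from its projections onto the two unit vectors. */
	for (i = 0; i < 3; i++)
		printf("a[%d]=%f reconstructed=%f\n", i, a[i], p1 * u1[i] + p2 * u2[i]);
	return 0;
}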

Matrix multiplications

We can define matrix multiplication using inner products of the vectors which make up the rows of the left-hand side matrix with the vectors which make up the columns of the right-hand side matrix.

    \[ \begin{tiny} \begin{pmatrix} \langle r_0, \overline{c_0} \rangle & \langle r_0, \overline{c_1} \rangle & \dots & \langle r_0, \overline{c_{N-1}} \rangle \\ \langle r_1, \overline{c_0} \rangle & \langle r_1, \overline{c_1} \rangle & \dots & \langle r_1, \overline{c_{N-1}} \rangle \\ \vdots & \vdots &  & \vdots \\ \langle r_{M-1}, \overline{c_0} \rangle & \langle r_{M-1}, \overline{c_1} \rangle & \dots & \langle r_{M-1}, \overline{c_{N-1}} \rangle \end{pmatrix} = \begin{pmatrix} \dots & r_0 & \dots \\ \dots & r_1 & \dots \\   & \vdots &  \\ \dots & r_{M-1} & \dots \end{pmatrix} \begin{pmatrix} \vdots & \vdots &  & \vdots \\ c_0 & c_1 & \dots & c_{N-1} \\ \vdots & \vdots &  & \vdots \end{pmatrix} \end{tiny} \]

We can use this to demonstrate that a matrix made up of rows (or columns) that are all orthogonal to each other is trivially invertible – particularly so if the rows are unit vectors, in which case the inverse of \mathbf{A} is \mathbf{A}^H. To make this clear:

    \[ \begin{tiny} \begin{pmatrix} \langle r_0, r_0 \rangle & \langle r_0, r_1 \rangle & \dots & \langle r_0, r_{N-1} \rangle \\ \langle r_1, r_0 \rangle & \langle r_1, r_1 \rangle & \dots & \langle r_1, r_{N-1} \rangle \\ \vdots & \vdots &  & \vdots \\ \langle r_{N-1}, r_0 \rangle & \langle r_{N-1}, r_1 \rangle & \dots & \langle r_{N-1}, r_{N-1} \rangle \end{pmatrix} = \begin{pmatrix} \dots & r_0 & \dots \\ \dots & r_1 & \dots \\   & \vdots &  \\ \dots & r_{N-1} & \dots \end{pmatrix} \begin{pmatrix} \vdots & \vdots &  & \vdots \\ \overline{r_0} & \overline{r_1} & \dots & \overline{r_{N-1}} \\ \vdots & \vdots &  & \vdots \end{pmatrix} \end{tiny} \]

If the rows are orthogonal to each other, the inner products off the main diagonal will all be zero. If the rows are unit vectors, the diagonal entries are by definition 1 and \mathbf{I} = \mathbf{A}\mathbf{A}^H.
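
As a tiny real-valued sanity check, here is a sketch that multiplies a 2-by-2 rotation matrix – whose rows are orthogonal unit vectors – by its transpose and prints the result, which should be the identity:

#include <math.h>
#include <stdio.h>

int main(void) {
	/* The rows of a are orthogonal unit vectors (a rotation by theta). */
	double theta = 0.7;
	double a[2][2] = {{cos(theta), -sin(theta)}, {sin(theta), cos(theta)}};
	int i, j, k;
	for (i = 0; i < 2; i++) {
		for (j = 0; j < 2; j++) {
			double acc = 0.0;
			for (k = 0; k < 2; k++)
				acc += a[i][k] * a[j][k]; /* entry (i,j) of A A^T */
			printf("%8.5f ", acc);
		}
		printf("\n");
	}
	return 0;
}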

The Householder matrix

Define a matrix \mathbf{P} as:

    \[\mathbf{P}=\mathbf{I} - 2\mathbf{v}\mathbf{v}^H\]

Where \mathbf{v} is a unit vector. Then:

    \[\begin{array}{@{}ll@{}} \mathbf{P}\mathbf{P} &= \mathbf{I}-2\left(\mathbf{v}\mathbf{v}^H+\mathbf{v}\mathbf{v}^H\right)+4\mathbf{v}\mathbf{v}^H\mathbf{v}\mathbf{v}^H \\ &= \mathbf{I}-4\mathbf{v}\mathbf{v}^H+4\mathbf{v}\left(\mathbf{v}^H\mathbf{v}\right)\mathbf{v}^H \\ & = \mathbf{I} \end{array}\]

Therefore \mathbf{P} is its own inverse (\mathbf{P}=\mathbf{P}^{-1}). Since \mathbf{P} is also Hermitian by construction (\mathbf{P}=\mathbf{P}^H), it follows that \mathbf{P} is unitary (\mathbf{P}^H\mathbf{P}=\mathbf{I}).

If we can work out a method to choose \mathbf{v} such that the first column of \mathbf{P} equals a particular unit vector \mathbf{u}, then multiplying \mathbf{P} by any scaled version of \mathbf{u} will give a vector with only one non-zero entry (because all other rows of the matrix will be orthogonal to \mathbf{u}). I have described this process before in a previous blog post but will repeat it here with some more detail. Define a vector \mathbf{u} that we want the first column of \mathbf{P} to equal:

    \[ \begin{array}{@{}ll@{}} \mathbf{u} &= \left( \mathbf{I} - 2 \mathbf{v} \mathbf{v}^H \right) \mathbf{e}_1 \\ \frac{\mathbf{e}_1-\mathbf{u}}{2} &= \mathbf{v} \overline{v_1} \end{array} \]

If u_1 is real, we can break \mathbf{v} into its individual coordinates to solve for their values:

    \[ \begin{array}{@{}ll@{}} v_1 &= \sqrt{\frac{1-u_1}{2}} \\ v_2 &= \frac{-u_2}{2\sqrt{\frac{1-u_1}{2}}} \\ &\vdots \\ v_n &= \frac{-u_n}{2\sqrt{\frac{1-u_1}{2}}} \end{array} \]

Given the way the matrix is defined, the computation of the square root is unnecessary in an implementation. This is best seen if we expand out the 2 \mathbf{v} \mathbf{v}^H matrix:

    \[ \begin{array}{@{}ll@{}} 2 \mathbf{v} \mathbf{v}^H &= \begin{pmatrix} 2 v_1 \overline{v_1} & 2 v_1 \overline{v_2} & \dots & 2 v_1 \overline{v_n} \\ 2 v_2 \overline{v_1} & 2 v_2 \overline{v_2} & \dots & 2 v_2 \overline{v_n} \\ \vdots & \vdots & \ddots & \vdots \\ 2 v_n \overline{v_1} & 2 v_n \overline{v_2} & \dots & 2 v_n \overline{v_n} \end{pmatrix} \\ &= \begin{pmatrix} 1 - u_1 & -\overline{u_2} & \dots & -\overline{u_n} \\ -u_2 & \frac{u_2 \overline{u_2}}{1-u_1} & \dots & \frac{u_2 \overline{u_n}}{1-u_1} \\ \vdots & \vdots & \ddots & \vdots \\ -u_n & \frac{u_n \overline{u_2}}{1-u_1} & \dots & \frac{u_n \overline{u_n}}{1-u_1} \end{pmatrix} \end{array} \]

Define a scalar \alpha=\frac{1}{1-u_1} and a vector \mathbf{w}:

    \[\mathbf{w} = \begin{pmatrix}1 - u_1 \\ -u_2 \\ \vdots \\ -u_n \end{pmatrix} \]

Then 2 \mathbf{v} \mathbf{v}^H = \alpha \mathbf{w} \mathbf{w}^H. This is a particularly useful formulation because we rarely want to compute \mathbf{P} in its entirety and perform a matrix multiplication. Rather, we compute:

    \[ \mathbf{P}\mathbf{A} = \left(\mathbf{I}-\alpha\mathbf{w}\mathbf{w}^H\right)\mathbf{A}=\mathbf{A}-\alpha\mathbf{w}\mathbf{w}^H\mathbf{A} \]

Which is substantially more efficient. One final change is necessary: if u_1 is close to one (which implies all other coefficients of \mathbf{u} are approaching zero), \alpha will begin to approach infinity. There is a relatively simple fix, which is to recognise that \mathbf{u} and -\mathbf{u} are parallel but point in opposite directions (their inner product is -1). If we don’t care about the direction of the vector, we can change its sign to be the most numerically suitable for the selection of \alpha, i.e. if u_1 < 0 use \mathbf{u}, otherwise use -\mathbf{u}.
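
To make this concrete, here is a small real-valued sketch (the function names are my own) that builds \mathbf{w} and \alpha from a given vector and applies \mathbf{P} without ever forming the matrix, including the sign choice just described:

#include <math.h>
#include <stdio.h>

/* Build the reflector described above for a length-n real vector x: w holds
 * the scaled Householder vector and alpha the scalar, such that
 * P = I - alpha*w*w^T maps x onto a multiple of e_1. */
static void make_reflector(const double *x, double *w, double *alpha, int n) {
	double norm = 0.0;
	int i;
	for (i = 0; i < n; i++)
		norm += x[i] * x[i];
	norm = sqrt(norm);
	/* Sign choice: keep 1 - u_1 well away from zero. */
	if (x[0] > 0.0)
		norm = -norm;
	w[0] = 1.0 - x[0] / norm;
	for (i = 1; i < n; i++)
		w[i] = -x[i] / norm;
	*alpha = 1.0 / w[0];
}

/* Apply P = I - alpha*w*w^T to y in-place: y <- y - w * (alpha * <w, y>). */
static void apply_reflector(const double *w, double alpha, double *y, int n) {
	double acc = 0.0;
	int i;
	for (i = 0; i < n; i++)
		acc += w[i] * y[i];
	acc *= alpha;
	for (i = 0; i < n; i++)
		y[i] -= w[i] * acc;
}

int main(void) {
	double x[3] = {3.0, 4.0, 12.0}, y[3] = {3.0, 4.0, 12.0};
	double w[3], alpha;
	make_reflector(x, w, &alpha, 3);
	apply_reflector(w, alpha, y, 3);
	/* Expect (-13, 0, 0): the length of x with the flipped sign. */
	printf("%f %f %f\n", y[0], y[1], y[2]);
	return 0;
}

Applying the reflector to the vector it was built from leaves only the first entry non-zero (the signed length of the vector), which is exactly what the QR decomposition below relies on.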

This is actually very useful in many applications, one of which is the QR decomposition.

The QR decomposition

The QR decomposition breaks down a matrix \mathbf{A} into a unitary matrix \mathbf{Q} and an upper-triangular matrix \mathbf{R}. There are a number of ways to do this, but we are going to use the Householder matrix.

    \[\mathbf{A} = \mathbf{Q}\mathbf{R}\]

Let’s start by defining a series of unitary transformations applied to \mathbf{A} as:

    \[\begin{array}{@{}ll@{}} \mathbf{X}_1&=\mathbf{P}_N \mathbf{A} \\ \mathbf{X}_{2} &= \begin{pmatrix} \mathbf{I}_{1} & \mathbf{0} \\ \mathbf{0} & \mathbf{P}_{N-1} \end{pmatrix} \mathbf{X}_{1} \\ \mathbf{X}_{3} &= \begin{pmatrix} \mathbf{I}_{2} & \mathbf{0} \\ \mathbf{0} & \mathbf{P}_{N-2} \end{pmatrix} \mathbf{X}_{2} \\ & \vdots \\ \mathbf{X}_{N-1} &= \begin{pmatrix} \mathbf{I}_{N-2} & \mathbf{0} \\ \mathbf{0} & \mathbf{P}_{2} \end{pmatrix} \mathbf{X}_{N-2} \end{array} \]

Where \mathbf{P}_n are all n-by-n unitary matrices. Notice that the first row of \mathbf{X}_1 will be unmodified by all the transforms that follow. The first two rows of \mathbf{X}_2 will be unmodified by all the transforms that follow – and so-on.

If we can somehow force the first column of \mathbf{P}_N to be the first column of \mathbf{A} scaled to be a unit vector (while keeping \mathbf{P}_N unitary), the first column of \mathbf{X}_1 will contain only one non-zero entry. We then set about finding a way to select \mathbf{P}_{N-1} such that the second column contains only two non-zero entries… and so on. Once the process is finished, \mathbf{X}_{N-1} is upper triangular and the unitary matrix which converts \mathbf{A} into this form can be written as:

    \[ \mathbf{Q}^H = \begin{pmatrix} \mathbf{I}_{N-2} & \mathbf{0} \\ \mathbf{0} & \mathbf{P}_{2} \end{pmatrix} \dots \begin{pmatrix} \mathbf{I}_{2} & \mathbf{0} \\ \mathbf{0} & \mathbf{P}_{N-2} \end{pmatrix}  \begin{pmatrix} \mathbf{I}_{1} & \mathbf{0} \\ \mathbf{0} & \mathbf{P}_{N-1} \end{pmatrix} \mathbf{P}_N \]

And:

    \[ \begin{array}{@{}ll@{}} \mathbf{X}_{N-1} &= \mathbf{Q}^H\mathbf{A} \\ \mathbf{Q}\mathbf{X}_{N-1}&=\mathbf{A} \end{array} \]

When we follow the above process and the \mathbf{P}_n matrices are chosen to be Householder matrices, we have performed the QR decomposition using Householder matrices.

What follows is the C code that implements the decomposition. Because the algorithm repeatedly applies transformations to \mathbf{A} to eventually arrive at \mathbf{R}, we will design an API that operates in-place on the supplied matrix. The orthogonal matrix will be built in parallel. The N-1 iterations over the i variable correspond to the N-1 transformation steps presented earlier. I make no claims about the computational performance of this code. The numerical performance could be improved by using a different method to compute the length of the vector (the first thing that happens in the column loop).

#include <math.h>

/* Perform the QR decomposition of the given square A matrix.
 * A/R is stored and written in columns i.e.
 * a[0] a[3] a[6]
 * a[1] a[4] a[7]
 * a[2] a[5] a[8]
 *
 * q is stored in rows i.e.
 * q[0] q[1] q[2]
 * q[3] q[4] q[5]
 * q[6] q[7] q[8] */
void qr(double *a_r, double *q, int N) {
	int i, j, k;
 
	/* Initialise q to be the identity matrix. */
	for (i = 0; i < N; i++) {
		for (j = 0; j < N; j++) {
			q[i*N+j] = (i == j) ? 1.0 : 0.0;
		}
	}
 
	for (i = 0; i < N-1; i++) {
		double  norm, invnorm, alpha;
		double *ccol = a_r + i*N;
 
		/* Compute length of vector (needed to perform normalisation). */
		for (norm = 0.0, j = i; j < N; j++) {
			norm += ccol[j] * ccol[j];
		}
		norm = sqrt(norm);
 
		/* Flip the sign of the normalisation coefficient to have the
		 * effect of negating the u vector. */
		if (ccol[i] > 0.0)
			norm = -norm;
		invnorm = 1.0 / norm;
 
		/* Normalise the column
		 * Convert the column in-place into the w vector
		 * Compute alpha */
		ccol[i] = (1.0 - ccol[i] * invnorm);
		for (j = i+1; j < N; j++) {
			ccol[j] = -ccol[j] * invnorm;
		}
		alpha = 1.0 / ccol[i];
 
		/* Update the orthogonal matrix */
		for (j = 0; j < N; j++) {
			double acc = 0.0;
			for (k = i; k < N; k++) {
				acc += ccol[k] * q[k+j*N];
			}
			acc *= alpha;
			for (k = i; k < N; k++) {
				q[k+j*N] -= ccol[k] * acc;
			}
		}
 
		/* Update the upper triangular matrix */
		for (j = i+1; j < N; j++) {
			double acc = 0.0;
			for (k = i; k < N; k++) {
				acc += ccol[k] * a_r[k+j*N];
			}
			acc *= alpha;
			for (k = i; k < N; k++) {
				a_r[k+j*N] -= ccol[k] * acc;
			}
		}
 
		/* Overwrite the w vector with the column data. */
		ccol[i] = norm;
		for (j = i+1; j < N; j++) {
			ccol[j] = 0.0;
		}
	}
}

The algorithm follows almost everything described on this page – albeit with some minor obfuscations (the \mathbf{w} vector is stored temporarily in the output matrix to avoid needing any extra storage). It’s worth mentioning that the first step of initialising the \mathbf{Q} matrix to \mathbf{I} isn’t necessary. The initialisation could be performed explicitly in the first iteration of the loop over i – but that introduces more conditional code.
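
As a quick usage sketch (assuming the qr() function above is in scope), the following test program decomposes a 3-by-3 matrix and reconstructs \mathbf{A} = \mathbf{Q}\mathbf{R}, remembering that R is stored in columns and Q is stored in rows:

#include <stdio.h>

int main(void) {
	/* 3x3 test matrix stored in columns, as qr() expects (arbitrary values). */
	double a[9] = { 12.0,   6.0,  -4.0,   /* first column  */
	               -51.0, 167.0,  24.0,   /* second column */
	                 4.0, -68.0, -41.0 }; /* third column  */
	double orig[9], q[9];
	int r, c, k;
	for (k = 0; k < 9; k++)
		orig[k] = a[k];
	qr(a, q, 3);
	/* Reconstruct A = Q*R: q is stored in rows, a (now R) in columns. */
	for (r = 0; r < 3; r++) {
		for (c = 0; c < 3; c++) {
			double acc = 0.0;
			for (k = 0; k < 3; k++)
				acc += q[r*3+k] * a[c*3+k];
			printf("%10.4f (expected %10.4f) ", acc, orig[c*3+r]);
		}
		printf("\n");
	}
	return 0;
}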

Circular Convolution, Discrete Fourier Transforms and Toeplitz Matrix Multiplication

Circular Convolution

It’s fairly well known that multiplying the results of two discrete Fourier transforms element-by-element and taking the inverse transform results in a circular convolution. This is fairly straightforward to prove given the definitions of the DFT and its inverse.

Given the DFT:

    \[ X \left[ k \right] = \sum_{n=0}^{N-1} e^{-\frac{j 2 \pi n k}{N}} x \left[ n \right] \]

And the inverse DFT:

    \[ x \left[ n \right] = \frac{1}{N} \sum_{k=0}^{N-1} e^{\frac{j 2 \pi n k}{N}} X \left[ k \right] \]

We can derive convolution as:

    \[ \begin{array}{@{}ll@{}} y \left[ n \right] &= \frac{1}{N} \sum_{k=0}^{N-1} e^{\frac{j 2 \pi n k}{N}} \left( \sum_{m=0}^{N-1} e^{-\frac{j 2 \pi m k}{N}} a \left[ m \right] \right) \left( \sum_{l=0}^{N-1} e^{-\frac{j 2 \pi l k}{N}} b \left[ l \right] \right) \\ &= \sum_{m=0}^{N-1} \sum_{l=0}^{N-1} a \left[ m \right] b \left[ l \right] \frac{1}{N} \sum_{k=0}^{N-1} e^{\frac{j 2 \pi \left( n - l - m \right) k}{N}} \\ &= \sum_{l=0}^{N-1} a \left[ \left( n - l \right) \bmod N \right] b \left[ l \right] \end{array} \]

The last step in the above recognises that the summation over k is only non-zero when n - l - m is a multiple of N, so for each l only the term with m = \left( n - l \right) \bmod N survives. We can write the above in matrix form as:

    \[ \begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_{N-1} \end{pmatrix} = \begin{pmatrix} a_0 & a_{N-1} & a_{N-2} & \dots & a_1 \\ a_1 & a_0 & a_{N-1} & & \\ a_2 & a_1 & a_0 & \ddots & \\ \vdots & & \ddots & \ddots & a_{N-1} \\ a_{N-1} & & & a_1 & a_0 \end{pmatrix} \begin{pmatrix} b_0 \\ b_1 \\ b_2 \\ \vdots \\ b_{N-1} \end{pmatrix} \]

The matrix of a coefficients is a circulant matrix: each row is a shifted copy of the preceding row. Given that there exist \mathcal{O} \left( n \log n \right) algorithms for computing the DFT, we have shown that multiplying a vector by a circulant matrix has an efficient algorithm (note – this is only a computational win for large N).
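
To see the identity numerically, here is a small sketch which uses a naive \mathcal{O}(N^2) DFT (an FFT would be used in practice) to compare the transform-domain product against the circulant matrix product above:

#include <complex.h>
#include <math.h>
#include <stdio.h>

#define N 4

/* Naive forward (inverse=0) or inverse (inverse=1) DFT, for illustration only. */
static void dft(const double complex *x, double complex *X, int inverse) {
	const double pi = 3.14159265358979323846;
	int n, k;
	for (k = 0; k < N; k++) {
		X[k] = 0.0;
		for (n = 0; n < N; n++)
			X[k] += x[n] * cexp((inverse ? I : -I) * 2.0 * pi * n * k / N);
		if (inverse)
			X[k] /= N;
	}
}

int main(void) {
	double complex a[N] = {1, 2, 3, 4}, b[N] = {5, 6, 7, 8};
	double complex A[N], B[N], P[N], y[N];
	int n, l;
	dft(a, A, 0);
	dft(b, B, 0);
	for (n = 0; n < N; n++)
		P[n] = A[n] * B[n]; /* element-wise product in the transform domain */
	dft(P, y, 1);
	/* Compare against the direct circulant-matrix product. */
	for (n = 0; n < N; n++) {
		double complex direct = 0.0;
		for (l = 0; l < N; l++)
			direct += a[(n - l + N) % N] * b[l];
		printf("%8.4f %8.4f\n", creal(y[n]), creal(direct));
	}
	return 0;
}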

Circular Convolution with a Generalised DFT

Let’s redefine our DFT as:

    \[ X \left[ k \right] = \sum_{n=0}^{N-1} e^{-\frac{j 2 \pi n \left( k + \alpha \right) }{N}} x \left[ n \right] \]

Which has an inverse of:

    \[ x \left[ n \right] = \frac{1}{N} \sum_{k=0}^{N-1} e^{\frac{j 2 \pi n \left( k + \alpha \right)}{N}} X \left[ k \right] \]

This generalisation gives us some control over the boundary conditions of the DFT and hence the assumed periodicity of the data, i.e. the ordinary DFT assumes the transformed data repeats verbatim forever – we can change this behaviour using \alpha. Let’s derive the convolution again:

    \[ \begin{array}{@{}ll@{}} y \left[ n \right] &= \frac{1}{N} \sum_{k=0}^{N-1} e^{\frac{j 2 \pi n \left( k + \alpha \right)}{N}} \left( \sum_{m=0}^{N-1} e^{-\frac{j 2 \pi m \left( k + \alpha \right)}{N}} a \left[ m \right] \right) \left( \sum_{l=0}^{N-1} e^{-\frac{j 2 \pi l \left( k + \alpha \right)}{N}} b \left[ l \right] \right) \\ &= \sum_{m=0}^{N-1} \sum_{l=0}^{N-1} a \left[ m \right] b \left[ l \right] \frac{1}{N} \sum_{k=0}^{N-1} e^{\frac{j 2 \pi \left( n - l - m \right) \left( k + \alpha \right)}{N}} \\ \end{array} \]

Given this, we can now draw up some matrices for various values of \alpha. Some interesting (and perhaps useful) values are \frac{1}{4}, \frac{3}{4} and \frac{1}{2}. We will restrict our attention to \frac{1}{2} which has a slightly different form once we obliterate the exponential:

    \[ y \left[ n \right] = \sum_{l=0}^{n} a \left[ n - l \right] b \left[ l \right] - \sum_{l=n+1}^{N-1} a \left[ n - l + N \right] b \left[ l \right] \]

We find that the matrix operation for this looks like:

    \[ \begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_{N-1} \end{pmatrix} = \begin{pmatrix} a_0 & -a_{N-1} & -a_{N-2} & \dots & -a_1 \\ a_1 & a_0 & -a_{N-1} & & \\ a_2 & a_1 & a_0 & \ddots & \\ \vdots & & \ddots & \ddots & -a_{N-1} \\ a_{N-1} & & & a_1 & a_0 \end{pmatrix} \begin{pmatrix} b_0 \\ b_1 \\ b_2 \\ \vdots \\ b_{N-1} \end{pmatrix} \]

This matrix is no longer circulant; all entries to the right of the main diagonal have been negated. This convolution might not have much practical value by itself, but the symmetry suggests that it could be useful when combined with another.
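
Here is a companion sketch to the previous one which implements the generalised DFT with \alpha = \frac{1}{2} and checks it against the negated-upper-triangle matrix product above:

#include <complex.h>
#include <math.h>
#include <stdio.h>

#define N 4

/* Forward generalised DFT: X[k] = sum_n exp(-j*2*pi*n*(k+alpha)/N) * x[n]. */
static void gdft(const double complex *x, double complex *X, double alpha) {
	const double pi = 3.14159265358979323846;
	int n, k;
	for (k = 0; k < N; k++) {
		X[k] = 0.0;
		for (n = 0; n < N; n++)
			X[k] += x[n] * cexp(-I * 2.0 * pi * n * (k + alpha) / N);
	}
}

/* Inverse generalised DFT: x[n] = (1/N) * sum_k exp(j*2*pi*n*(k+alpha)/N) * X[k]. */
static void igdft(const double complex *X, double complex *x, double alpha) {
	const double pi = 3.14159265358979323846;
	int n, k;
	for (n = 0; n < N; n++) {
		x[n] = 0.0;
		for (k = 0; k < N; k++)
			x[n] += X[k] * cexp(I * 2.0 * pi * n * (k + alpha) / N);
		x[n] /= N;
	}
}

int main(void) {
	double complex a[N] = {1, 2, 3, 4}, b[N] = {5, 6, 7, 8};
	double complex A[N], B[N], P[N], y[N];
	int n, l;
	gdft(a, A, 0.5);
	gdft(b, B, 0.5);
	for (n = 0; n < N; n++)
		P[n] = A[n] * B[n];
	igdft(P, y, 0.5);
	/* Compare against the matrix product where entries above the diagonal
	 * pick up a sign flip. */
	for (n = 0; n < N; n++) {
		double complex direct = 0.0;
		for (l = 0; l < N; l++)
			direct += (l <= n ? a[n-l] : -a[n-l+N]) * b[l];
		printf("%8.4f %8.4f\n", creal(y[n]), creal(direct));
	}
	return 0;
}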

Toeplitz Matrix Vector Multiplication

A Toeplitz matrix has the form:

    \[ \begin{pmatrix} t_0 & t_{-1} & t_{-2} & \dots & t_{-(N-1)} \\ t_1 & t_0 & t_{-1} & & \\ t_2 & t_1 & t_0 & \ddots & \\ \vdots & & \ddots & \ddots & t_{-1} \\ t_{N-1} & & & t_1 & t_0 \end{pmatrix} \]

There are efficient algorithms for performing Toeplitz matrix by vector multiplication that use a circular convolution algorithm. These algorithms end up throwing away much of the computed result (this can be seen in the previous link in the multiplication by a zero matrix). We can avoid this by using the symmetry defined in the previous section.

If we take the sum of a regular DFT convolution of a and x and the previously defined \alpha = \frac{1}{2} convolution of b and x, we are effectively computing the following matrix operation:

    \[ \begin{small} \begin{pmatrix} a_0+b_0 & a_{N-1}-b_{N-1} & a_{N-2}-b_{N-2} & \dots & a_1-b_1 \\ a_1+b_1 & a_0+b_0 & a_{N-1}-b_{N-1} & & \\ a_2+b_2 & a_1+b_1 & a_0+b_0 & \ddots & \\ \vdots & & \ddots & \ddots & a_{N-1}-b_{N-1} \\ a_{N-1}+b_{N-1} & & & a_1+b_1 & a_0+b_0 \end{pmatrix} \begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ \vdots \\ x_{N-1} \end{pmatrix} \end{small} \]

We can select values for a and b to create a multiplication of any matrix with equal elements on each diagonal – such as our Toeplitz matrix – using only DFTs of the length of the input data sequence. There are some interesting optimisations that can be made when the matrix is Hermitian to eliminate some of the DFTs entirely.
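
Matching the entries of the matrix above against the Toeplitz matrix, one possible choice is:

    \[ a_0 = t_0, \quad b_0 = 0, \quad a_k = \frac{t_k + t_{k-N}}{2}, \quad b_k = \frac{t_k - t_{k-N}}{2} \quad \text{for } k = 1, \dots, N-1 \]

With this choice, a_k + b_k = t_k on and below the main diagonal and a_{N-k} - b_{N-k} = t_{-k} above it, so the sum of the two length-N convolutions reproduces the Toeplitz product.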

On the perils of cross-fading loops in organ samples

One common strategy when looping problematic organ samples is to employ a cross-fade. This is an irreversible audio modification that gradually transitions the samples leading up to the end of a loop into the samples that were leading into the start. The goal is to completely eliminate any sort of impulsive glitch and hopefully also create a “good spectral match”. While eliminating glitches this way is certainly possible, creating a good spectral match might not be so simple.

We can think of an organ sample as being a linear combination of two signals:

  • A voiced/correlated/predictable component (choose your word) which represents the tonal part of the sample. Strictly speaking, the tonal component is not entirely predictable – there are continuous subtle variations in the speech of a pipe… but they are typically small and we will assume predictability.
  • An unvoiced/uncorrelated/unpredictable component which represents the pipe-noise of the sample.

Both of these components are necessary for realism in a sample.

The following image shows a cross-fade of two entirely correlated signals. The top picture contains the input signals and the cross-fade transition that will be used; the bottom contains the input signals with the cross-faded signal overlaid on top. There is nothing interesting about this picture: the two input signals are identical, so the cross-fade output is equal to both of them.

Crossfade of correlated signal

The next image shows a cross-fade of the same correlated signal with some uniform white noise overlaid on top. What is going on in the middle of the output signal? It looks like it’s been attenuated a little bit and doesn’t appear to have a uniform distribution anymore.

Crossfade of correlated signal with uniform noise

This final image is a cross-fade consisting purely of two uniform random signals, to add some more detail.

Crossfade of uniform noise

It turns out that summing two uniformly distributed random variables yields a new random variable with a triangular distribution (this is how noise for triangular dither gets generated). During the cross-fade, the distribution of the uncorrelated components of the signal is actually changed. Not only that, but the power of the signal is reduced in the middle of the transition. Here is an audio sample of two white noise streams being cross-faded over 2 seconds:

Listen for the level drop in the middle.
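
A quick back-of-the-envelope check: at the midpoint of a linear cross-fade both gains are 0.5, so two uncorrelated noise sources each contribute a quarter of their original power and the total power is halved – roughly a 3 dB dip. The toy program below cross-fades two uniform noise buffers and prints the RMS level near the start, middle and end of the transition:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define LEN 200000
#define WIN 10000

/* Uniform noise in [-1, 1). rand() is crude but fine for a rough illustration. */
static double noise(void) {
	return 2.0 * ((double)rand() / RAND_MAX) - 1.0;
}

int main(void) {
	static double y[LEN];
	int i, j;
	for (i = 0; i < LEN; i++) {
		double g = (double)i / (LEN - 1); /* linear cross-fade gain */
		y[i] = (1.0 - g) * noise() + g * noise();
	}
	/* RMS over short windows at the start, middle and end of the fade. */
	for (j = 0; j < 3; j++) {
		int start = j * (LEN - WIN) / 2;
		double rms = 0.0;
		for (i = start; i < start + WIN; i++)
			rms += y[i] * y[i];
		printf("window %d: rms = %f\n", j, sqrt(rms / WIN));
	}
	return 0;
}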

If the cross-fade is too long when looping a sample, it could make the pipe wind-noise duck over the duration of the cross-fade. While the effect is subtle, I have heard it in some samples and it’s not particularly natural. I suppose the TLDR advice I can give is:

  • If you can avoid cross-fading – avoid it.
  • If you cannot avoid cross-fading – make the cross-fade as short as possible (milliseconds) to avoid an obvious level-drop of noise during the transition.

Arbitrary polynomial-segment signal generation (e.g. square, triangle, sawtooth waves)

There is a tool which is part of my Open Diapason project called “sampletune” which is designed to allow sample-set creators to tune samples by ear and save the tuning information back into the sampler chunk of the wave file. The utility plays back a fixed tone and there are keyboard controls to shift the pitch of the sample up and down until the beats disappear. I was hoping to prototype a sawtooth wave as the tuning signal, but realised during implementation that creating arbitrary-frequency piecewise-linear waveforms is actually a somewhat difficult problem – fundamentally due to aliasing… so I parked that idea and ended up summing several harmonic sinusoids at some arbitrary levels.

Recently, I was thinking about this again and set about solving the problem. It was only in the last couple of weeks that I discovered tools like Band Limited Impulse Trains (see link at bottom of post) which seem to be the common way of generating these signals. These are fundamentally FIR techniques. I approached the problem using IIR filters and came up with a solution which I think is kind-of interesting – please let me know if you’ve seen it done this way so I can make some references!

The idea

If we generate a piecewise-linear waveform at a frequency which is not a factor of the sampling rate, we will get aliasing. In fact, we can work out the number of shifted aliasing spectrum copies which we can expect to see by taking f_0 / \text{gcd}(f_0, f_s). If f_0 is irrational, we have an infinite number of copies. Let’s make the assumption that more often than not, we are going to be generating a very large number of aliases. The next assumption we are going to make is that most of the signals we are interested in generating have a spectral envelope which more-or-less decays monotonically. With this assumption, we can say: if we generate our signal at a much higher sampling rate, we effectively push the level of the aliasing down in the section of the spectrum we care about (i.e. the audible part). Below is an image which hopefully illustrates this; the upper image shows the spectral envelope of the signal we are trying to create in blue, and all of the black lines which follow are envelopes of the aliasing. The second image shows what would have happened if we created the same frequency signal at three times the sampling rate. The interesting content again is in blue and everything in black is either aliasing or components we don’t care about. What we can see is that by increasing the sample rate we generate our content at, we effectively push the aliasing components down in the frequency region we care about.


So let’s assume that instead of generating our waveforms at f_s, we will:

  • Generate them at a sample rate of M f_s where M is some large number.
  • Filter out the high-frequency components above f_s / 2 using some form of IIR filter structure.
  • Decimate the output sequence by a factor of M to get back to f_s (which we can do because of the previous operation).

Now that we’ve postulated a method, let’s figure out if it makes any sense by answering the questions “does it have an efficient implementation?” and “does it actually produce reasonable output for typical signals we may want to generate?”.

Does it have an efficient implementation?

Denote a state-space representation of our SISO discrete system as:

    \[   \begin{array}{@{}ll@{}}     \hat{q}\left[n\right] & = \mathbf{A} \hat{q}\left[n-1\right] + \mathbf{B} x\left[n\right] \\     y\left[n\right] & = \mathbf{C} \hat{q}\left[n-1\right] + D x\left[n\right]   \end{array} \]

For clarity, let’s just consider a single update of the system state where \hat{q}\left[-1\right] represents the “previous state” and x\left[0\right] represents the first sample of new data we are pushing in.

    \[   \begin{array}{@{}ll@{}}     \hat{q}\left[0\right] & = \mathbf{A} \hat{q}\left[-1\right] + \mathbf{B} x\left[0\right] \\     y\left[0\right] & = \mathbf{C} \hat{q}\left[-1\right] + D x\left[0\right]   \end{array} \]

We can derive an expression for inserting two samples via substitution, and from that, expressions for inserting groups of samples.

    \[   \begin{array}{@{}ll@{}}     \hat{q}\left[0\right] & = \mathbf{A} \hat{q}\left[-1\right] + \mathbf{B} x\left[0\right] \\     \hat{q}\left[1\right] & = \mathbf{A} \hat{q}\left[0\right] + \mathbf{B} x\left[1\right] \\  & = \mathbf{A}^2 \hat{q}\left[-1\right] + \mathbf{A}\mathbf{B} x\left[0\right] + \mathbf{B} x\left[1\right] \\ \hat{q}\left[\sigma\right] & = \mathbf{A}^{\sigma+1}\hat{q}\left[-1\right] + \sum_{n=0}^{\sigma}\mathbf{A}^{\sigma-n}\mathbf{B}x\left[n\right]  \\   \end{array} \]

    \[   \begin{array}{@{}ll@{}}     y\left[0\right] & = \mathbf{C} \hat{q}\left[-1\right] + D x\left[0\right] \\     y\left[1\right] & = \mathbf{C} \hat{q}\left[0\right] + D x\left[1\right] \\  & = \mathbf{C}\mathbf{A}\hat{q}\left[-1\right] + \mathbf{C}\mathbf{B}x\left[0\right] + D x\left[1\right] \\ y\left[\sigma\right] & = \mathbf{C}\mathbf{A}^{\sigma}\hat{q}\left[-1\right] + \mathbf{C}\sum_{n=0}^{\sigma-1}\mathbf{A}^{\sigma-1-n}\mathbf{B}x\left[n\right] + D x\left[\sigma\right]   \end{array} \]

We’re interested in inserting groups of samples which form linear segments into our filter. So let’s replace x\left[n\right] with some arbitrary polynomial. This gives us some flexibility; we need only a polynomial of zero degree (constant) for square waves, first degree for saw-tooth and triangle – but we could come up with more interesting waveforms which are defined by higher order polynomials. i.e.

    \[ x\left[n\right] = \sum_{c=0}^{C-1} a_c n^c \]

Let’s substitute this into our state expressions:

    \[   \begin{array}{@{}ll@{}}     \hat{q}\left[0\right] & = \mathbf{A} \hat{q}\left[-1\right] + \mathbf{B} a_0 \\     \hat{q}\left[\sigma\right] & = \mathbf{A}^{\sigma+1}\hat{q}\left[-1\right] + \sum_{n=0}^{\sigma}\mathbf{A}^{\sigma-n}\mathbf{B}\sum_{c=0}^{C-1} a_c n^c \\  & = \left(\mathbf{A}^{\sigma+1}\right)\hat{q}\left[-1\right] + \sum_{c=0}^{C-1} a_c \left(\sum_{n=0}^{\sigma}\mathbf{A}^{\sigma-n}\mathbf{B} n^c\right) \\   \end{array} \]

    \[   \begin{array}{@{}ll@{}}     y\left[0\right] & = \mathbf{C} \hat{q}\left[-1\right] + D a_0 \\     y\left[\sigma\right] & = \mathbf{C}\mathbf{A}^{\sigma}\hat{q}\left[-1\right] + \mathbf{C}\sum_{n=0}^{\sigma-1}\mathbf{A}^{\sigma-1-n}\mathbf{B}\sum_{c=0}^{C-1} a_c n^c + D \sum_{c=0}^{C-1} a_c \sigma^c \\  & = \left(\mathbf{C}\mathbf{A}^{\sigma}\right)\hat{q}\left[-1\right] + \sum_{c=0}^{C-1} a_c \left(\sum_{n=0}^{\sigma-1}\mathbf{C}\mathbf{A}^{\sigma-1-n}\mathbf{B} n^c\right) + \sum_{c=0}^{C-1} a_c \left(D \sigma^c\right)   \end{array} \]

There are some interesting things to note here:

  • The first output sample (y\left[0\right]) is only dependent on a_0 and the previous state. If we restrict our attention to always getting the value of the first sample, we don’t need to do any matrix operations to get an output sample – just vector operations.
  • All of the parenthesised bits in the equations have no dependency on the coefficients of the polynomial we are inserting into the filter – they only depend on the number of samples we want to insert (\sigma), the system state matrices and/or the index of the polynomial coefficient. This means that we can build lookup tables for the parenthesised matrices/vectors that are indexed by \sigma and the polynomial coefficient index, and effectively insert a large number of samples of any polynomial into the filter state with a single matrix-vector multiply and a number of scalar-vector multiplies dependent on the order of the polynomial.

It’s looking like we have an efficient implementation – particularly if we are OK with allowing a tiny (almost one sample at f_s) amount of latency into the system.
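
Here is a rough sketch of the block-insertion idea (the state-space coefficients are arbitrary placeholders rather than a properly designed low-pass filter). It builds the \sigma-indexed lookup tables by brute force and checks that inserting a whole linear segment in one go gives the same state as stepping the filter sample-by-sample:

#include <stdio.h>

#define ORDER 2   /* state dimension (kept small for the sketch) */
#define SIGMA 63  /* each block inserts SIGMA+1 samples */
#define NCOEF 2   /* polynomial a0 + a1*n, enough for linear segments */

/* Placeholder state-space matrices for some stable SISO IIR filter. These
 * numbers are arbitrary for illustration, not a designed filter. */
static const double A[ORDER][ORDER] = {{1.6, -0.7}, {1.0, 0.0}};
static const double B[ORDER] = {1.0, 0.0};
static const double C[ORDER] = {0.05, 0.04};
static const double D = 0.02;

static double T[ORDER][ORDER];   /* A^(SIGMA+1) */
static double U[NCOEF][ORDER];   /* sum_{n=0..SIGMA} A^(SIGMA-n) * B * n^c */

/* Ordinary per-sample update: q <- A q + B x. */
static void step(double *q, double x) {
	double nq[ORDER];
	int i, j;
	for (i = 0; i < ORDER; i++) {
		nq[i] = B[i] * x;
		for (j = 0; j < ORDER; j++)
			nq[i] += A[i][j] * q[j];
	}
	for (i = 0; i < ORDER; i++)
		q[i] = nq[i];
}

/* Build the lookup tables by brute force: run the recurrence once per basis
 * state vector (for T) and once per polynomial power (for U). */
static void build_tables(void) {
	int c, n, i, j, k;
	for (j = 0; j < ORDER; j++) {
		double q[ORDER] = {0.0};
		q[j] = 1.0;
		for (n = 0; n <= SIGMA; n++)
			step(q, 0.0);
		for (i = 0; i < ORDER; i++)
			T[i][j] = q[i];
	}
	for (c = 0; c < NCOEF; c++) {
		double q[ORDER] = {0.0};
		for (n = 0; n <= SIGMA; n++) {
			double nc = 1.0;
			for (k = 0; k < c; k++)
				nc *= n;
			step(q, nc);
		}
		for (i = 0; i < ORDER; i++)
			U[c][i] = q[i];
	}
}

/* Insert SIGMA+1 samples of the polynomial a[0] + a[1]*n in one go and return
 * the first output sample of the block. */
static double insert_block(double *q, const double *a) {
	double y = D * a[0], nq[ORDER];
	int i, j, c;
	for (i = 0; i < ORDER; i++)
		y += C[i] * q[i];
	for (i = 0; i < ORDER; i++) {
		nq[i] = 0.0;
		for (j = 0; j < ORDER; j++)
			nq[i] += T[i][j] * q[j];
		for (c = 0; c < NCOEF; c++)
			nq[i] += a[c] * U[c][i];
	}
	for (i = 0; i < ORDER; i++)
		q[i] = nq[i];
	return y;
}

int main(void) {
	double a[NCOEF] = {0.25, -0.01}; /* an arbitrary linear segment */
	double qfast[ORDER] = {0.3, -0.2}, qslow[ORDER] = {0.3, -0.2};
	int n, i;
	build_tables();
	printf("block y[0] = %f\n", insert_block(qfast, a));
	/* Reference: push the same samples through one at a time. */
	for (n = 0; n <= SIGMA; n++)
		step(qslow, a[0] + a[1] * n);
	for (i = 0; i < ORDER; i++)
		printf("state %d: block %f  per-sample %f\n", i, qfast[i], qslow[i]);
	return 0;
}

A real implementation would precompute tables for every block length it needs and use a properly designed filter for \mathbf{A}, \mathbf{B}, \mathbf{C} and D.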

Does it work?

Below is a screen capture of Adobe Audition showing the output of a small program which implemented the above scheme. The system is configured using an elliptic filter with 1 dB of passband ripple and a stop-band gain of -60 dB.

Waveform with frequency of 600*pi Hz. Left channel generated using naive modulo, right generated using the proposed scheme. The spectrum of the left channel is green.

The waveform has a frequency of 600\pi which is irrational and therefore has an infinite number of aliases when sampled (this is not strictly true in a digital implementation – but it’s still pretty bad). We can see that after the cutoff frequency of the filter, the aliasing components are forced to decay at the rate of the filter – but we predicted this should happen.

Below is another example using a 4th order Bessel filter instead of an elliptic filter. This preserves the shape of the time-domain signal much more closely than the elliptic system, but rolls off much more slowly. The performance is still fairly good and might be useful.

Waveform with frequency of 600*pi Hz. Left channel generated using naive modulo, right generated using the proposed scheme with a Bessel filter. The spectrum of the left channel is green.

Here is a link to a good paper describing how classic waveforms have been generated previously: “Alias-Free Digital Synthesis of Classic Analog Waveforms” [T. Stilson and J. Smith, 1996].