When writing eigen solvers, it’s useful to be able to generate an orthonormal basis containing a particular vector. Another way of saying this is: given some fixed vector, find other vectors that are perpendicular to it and each other. I am under the impression from some websites that this is also useful in computer graphics (but I know nothing about that - let me know if you do!).

I’ve seen people ask how to do this on the internet (for example: here and here) and in general the recommended process is:

- If only one perpendicular vector is required, use a heuristic to come up with something perpendicular (see previous links).
- If two perpendicular vectors are required, create one perpendicular vector using the above heuristic then use the cross-product to come up with the final vector.
- For more than two perpendicular vectors, generate random vectors and use Gram Schmidt to (hopefully) orthogonalise them.
**For 3D graphics, see the “Update” heading below…**

The first two usually require branching in the heuristic. The third may not work (if you’re really *really* unlucky with the random vectors) and requires multiple normalisation operations. What follows is an approach which I haven’t seen documented elsewhere and which requires a single square root for a vector set of arbitrary dimension.

What we want is some technique which creates an orthogonal matrix from a vector. This is exactly what the Householder transformation matrix does:

\[ \mathbf{P} = \mathbf{I} - 2 \mathbf{v} \mathbf{v}^* \]

Where \(\mathbf{v}\) is some unit vector. The matrix \(\mathbf{P}\) is Hermitian and unitary, so its columns form an orthonormal set. If we can find a simple expression for \(\mathbf{v}\) which forces the first column of \(\mathbf{P}\) to be equal to some unit vector \(\mathbf{q}\), we have all the information we need to create a basis containing \(\mathbf{q}\). This turns out to be trivial:

\[ \mathbf{q} = \left( \mathbf{I} - 2 \mathbf{v} \mathbf{v}^* \right) \mathbf{e}_1 \]

\[ \mathbf{q} = \mathbf{e}_1 - 2 \mathbf{v} \mathbf{v}^* \mathbf{e}_1 \]

\[ \frac{\mathbf{e}_1 - \mathbf{q}}{2} = \mathbf{v} v^*_1 \]

Explicitly, this looks like:

\[ \begin{pmatrix} \frac{1 - q_1}{2} \\ -\frac{q_2}{2} \\ -\frac{q_3}{2} \\ \vdots \\ -\frac{q_N}{2} \end{pmatrix} = v^*_1 \begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_N \end{pmatrix} \]

If \(\mathbf{q}\) is real (i.e. \(v^*_1 = v_1\)), we set:

\[ v_n = \left\{ \begin{array}{@{}ll@{}} \sqrt{\frac{1 - q_1}{2}}, & \text{if}\ n = 1 \\ -\frac{q_n}{2 v_1}, & \text{if}\ n > 1 \end{array}\right. \]

Here is a very inefficient C implementation. (The square root is not actually necessary: two \(v_n\) entries are always multiplied together and the term inside the square root is constant. Also, the innermost loop over j could start from 1, because when j == 0 the value is simply \(q_{i+1}\).)

```
/* The first n values in m are a normalised vector which is desired to be
 * part of the basis set. Following this vector, n-1 new vectors of length n
 * will be added which are orthogonal to it. */
void house_ortho(double *m, unsigned n)
{
    const double x = sqrt(1.0 - m[0]);
    /* As m[0] approaches 1, x approaches 0. This detects that and will end
     * up forcing the result to be the identity matrix. */
    const double s = (x < 1e-20) ? 0.0 : (-1.0 / x);
    unsigned i, j;
    /* We first convert the m vector into the v vector. Note that we have
     * incorporated the multiply by 2 present in the Householder operation
     * into v by multiplying every element by sqrt(2). */
    for (m[0] = x, i = 1; i < n; i++)
        m[i] *= s;
    /* Create all of the orthogonal vectors using the definition of the
     * Householder transformation matrix: I - 2 v v*. */
    for (i = 1; i < n; i++) {
        for (j = 0; j < n; j++) {
            m[i*n+j] = ((i == j) ? 1.0 : 0.0) - m[i] * m[j];
        }
    }
    /* Convert the v vector back into m. Probably could have written this
     * code in a way where this didn't need to happen. */
    for (m[0] = 1.0 - x * x, j = 1; j < n; j++)
        m[j] = -m[j] * x;
}
```

The termination condition for the for loop (over *i*) in the middle of the code sample could be modified to generate a smaller number of vectors.

This process could be modified to work with a complex-valued \(\mathbf{q}\), but it is slightly more complicated because \(\mathbf{P}\) is Hermitian (which implies that all values on the diagonal are real). The modification amounts to finding a unit-magnitude scalar which forces \(q_1\) to be real, multiplying \(\mathbf{q}\) by this scalar, then dividing all resultant vectors by the same factor. I haven’t tested this, but it should work.
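To make that concrete (equally untested, and using my own notation for the phase factor \(c\)): choose \(c = q_1^* / |q_1|\), so that \(c q_1 = |q_1|\) is real, apply the real-valued construction to \(\mathbf{q}' = c \mathbf{q}\), then undo the phase:

\[ v_1 = \sqrt{\frac{1 - |q_1|}{2}}, \qquad v_n = -\frac{q'_n}{2 v_1} \quad \text{for}\ n > 1 \]

\[ \mathbf{B} = c^* \left( \mathbf{I} - 2 \mathbf{v} \mathbf{v}^* \right) \]

\(\mathbf{B}\) is unitary (a unitary matrix times a unit-magnitude scalar) and its first column is \(c^* \mathbf{q}' = |c|^2 \mathbf{q} = \mathbf{q}\), as desired.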

## Update

I posted a link to this blog on Twitter and reached out to see if anyone else had used this method. Warren Moore helpfully responded and pointed me at this paper “Building an Orthonormal Basis, Revisited” (Pixar) which describes numerical and implementation improvements to an algorithm proposed in this paper “Building an Orthonormal Basis from a 3D Unit Vector Without Normalization” (Frisvad, J. R.).

The algorithm which I’ve described, in terms of finding the vector \(\mathbf{v}\) which creates the Householder matrix \(\mathbf{P}\) whose first column equals some unit vector \(\mathbf{q}\), ends up producing exactly the same equations (for a vector of dimension 3) as those given in Frisvad’s paper, up to a few sign changes, albeit arrived at by defining the problem in a slightly different way. We can demonstrate the equivalence by expanding out the resulting Householder matrix using the \(\mathbf{v}\) vector we derived earlier:

\[ v_n = \left\{ \begin{array}{@{}ll@{}} \sqrt{\frac{1 - q_1}{2}}, & \text{if}\ n = 1 \\ -\frac{q_n}{2 v_1}, & \text{if}\ n > 1 \end{array}\right. \]

\[ \begin{pmatrix} q_1 & q_2 & q_3 \\ q_2 & 1 - 2 v_2 v_2 & -2 v_2 v_3 \\ q_3 & -2 v_2 v_3 & 1 - 2 v_3 v_3 \\ \end{pmatrix} \]

Note: we don’t actually need \(v_1\), because the Householder matrix is symmetric: the first row and column are simply \(\mathbf{q}\), and the remaining entries only involve the products \(v_i v_j = \frac{q_i q_j}{2(1-q_1)}\) for \(i, j > 1\). The matrix can therefore be expressed purely in terms of \(\mathbf{q}\) as:

\[ \begin{pmatrix} q_1 & q_2 & q_3 \\ q_2 & 1 - \frac{q^2_2}{1-q_1} & -\frac{q_2 q_3}{1-q_1} \\ q_3 & -\frac{q_3 q_2}{1-q_1} & 1 - \frac{q^2_3}{1-q_1} \\ \end{pmatrix} \]

The Pixar paper addresses the singularity which occurs as \(q_1\) tends towards 1. We can do the same thing here by negating the definition of the Householder matrix which still produces a matrix with the same properties. i.e.

\[ \mathbf{P} = 2 \mathbf{v} \mathbf{v}^* - \mathbf{I} \]

In this case, the vector \(v\) is now defined as:

\[ v_n = \left\{ \begin{array}{@{}ll@{}} \sqrt{\frac{1 + q_1}{2}}, & \text{if}\ n = 1 \\ \frac{q_n}{2 v_1}, & \text{if}\ n > 1 \end{array}\right. \]

Which moves the singularity to occur as \(q_1\) tends towards \(-1\). The matrix still looks very similar:

\[ \begin{pmatrix} q_1 & q_2 & q_3 \\ q_2 & \frac{q^2_2}{1+q_1} - 1 & \frac{q_2 q_3}{1+q_1} \\ q_3 & \frac{q_3 q_2}{1+q_1} & \frac{q^2_3}{1+q_1} - 1 \\ \end{pmatrix} \]

## Efficient implementation of 3-dimension basis finder

As shown in the Pixar paper, there is no need to test the sign of \(q_1\) and pick a code path to follow (which could incur a branch misprediction penalty); we can simply extract the sign and use it as a multiplier, flipping signs only where they need to be flipped.

Modifying some signs in the explicit matrices given previously, we can re-write the matrices as:

\[ \begin{pmatrix} q_1 & q_2 & q_3 \\ q_2 & \frac{q^2_2}{q_1-1} + 1 & \frac{q_2 q_3}{q_1-1} \\ q_3 & \frac{q_3 q_2}{q_1-1} & \frac{q^2_3}{q_1-1} + 1 \\ \end{pmatrix} \]

\[ \begin{pmatrix} q_1 & q_2 & q_3 \\ q_2 & \frac{q^2_2}{q_1+1} - 1 & \frac{q_2 q_3}{q_1+1} \\ q_3 & \frac{q_3 q_2}{q_1+1} & \frac{q^2_3}{q_1+1} - 1 \\ \end{pmatrix} \]

Or more simply:

\[ \begin{pmatrix} q_1 & q_2 & q_3 \\ q_2 & \frac{q^2_2}{q_1+s} - s & \frac{q_2 q_3}{q_1+s} \\ q_3 & \frac{q_3 q_2}{q_1+s} & \frac{q^2_3}{q_1+s} - s \\ \end{pmatrix} \]

Where:

\[ s = \left\{ \begin{array}{@{}ll@{}} 1, & \text{if}\ q_1 > 0 \\ -1, & \text{if}\ q_1 < 0 \\ \pm 1, & \text{if}\ q_1 = 0 \end{array}\right. \]

And here’s the C implementation:

```
/* The first three values in v are a 3-dimensional unit vector. The
 * function appends two more 3-dimensional vectors in the same
 * coordinate space which are orthogonal to it and to each other. */
void ortho3(float *v)
{
    float q1 = v[0];
    float q2 = v[1];
    float q3 = v[2];
    float sign = copysignf(1.0f, q1);
    float scale = 1.0f / (q1 + sign);
    float q2scale = q2 * scale;
    float q3scale = q3 * scale;
    float q3q2scale = q2scale * q3;
    v[3] = q2;
    v[4] = q2 * q2scale - sign;
    v[5] = q3q2scale;
    v[6] = q3;
    v[7] = q3q2scale;
    v[8] = q3 * q3scale - sign;
}
```
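A usage sketch (repeating `ortho3` here so the example compiles standalone): pass in a unit vector and verify the three resulting rows are orthonormal, for both signs of \(q_1\):

```
#include <math.h>

/* ortho3 as given above, repeated so this example is self-contained. */
void ortho3(float *v)
{
    float q1 = v[0];
    float q2 = v[1];
    float q3 = v[2];
    float sign = copysignf(1.0f, q1);
    float scale = 1.0f / (q1 + sign);
    float q2scale = q2 * scale;
    float q3scale = q3 * scale;
    float q3q2scale = q2scale * q3;
    v[3] = q2;
    v[4] = q2 * q2scale - sign;
    v[5] = q3q2scale;
    v[6] = q3;
    v[7] = q3q2scale;
    v[8] = q3 * q3scale - sign;
}

/* Largest deviation of the three rows of v from orthonormality.
 * Checker written for this example only. */
float ortho3_error(const float *v)
{
    float worst = 0.0f;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            float dot = 0.0f;
            for (int k = 0; k < 3; k++)
                dot += v[i*3 + k] * v[j*3 + k];
            float err = fabsf(dot - ((i == j) ? 1.0f : 0.0f));
            if (err > worst)
                worst = err;
        }
    return worst;
}
```

For inputs such as (0.6, 0.8, 0) and (-0.6, 0.8, 0), the error comes back at single-precision round-off level with no branch taken anywhere in `ortho3`.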