Sean showed us last time that lattices are additive subgroups of \mathbb{R}^d and that any lattice \Gamma is of the form

\Gamma=\{\alpha_1 e_1+\ldots +\alpha_k e_k \mid \alpha_i\in\mathbb{Z}, 1\leq i\leq k\}

for k linearly independent vectors e_i in \mathbb{R}^d. The number k is called the rank of \Gamma. If k=d then we say that \Gamma has full rank. The vectors e_1,\ldots,e_k are called a basis for \Gamma.

Example Take e_1=(1,1) and e_2=(1,0). These are linearly independent in \mathbb{R}^2, so they form the basis of a lattice \Gamma of full rank in \mathbb{R}^2.

This lattice looks like (and is) \mathbb{Z}^2. We can also think of e_1, e_2 as vectors in \mathbb{R}^3:

e_1=(1,1,0)\qquad e_2=(1,0,0).

They’re still linearly independent so form the basis of a lattice, but this lattice won’t have full rank.

In the above example we noticed that the lattice with basis (1,1), (1,0) looked just like \mathbb{Z}^2. And clearly any vector (a,b)\in\mathbb{Z}^2 can be written as

(a-b)\binom{1}{0}+b\binom{1}{1},

so \Gamma=\mathbb{Z}^2. And in fact infinitely many pairs of basis vectors give the same lattice, so it would be helpful if these lattices had some kind of invariant which didn’t depend on the choice of basis, and indeed they do.
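For concreteness, here is a small numerical sanity check (an illustrative sketch of my own, using numpy's linear solver) that every integer vector (a,b) has integer coordinates with respect to the basis (1,1), (1,0):

```python
import numpy as np

# Basis vectors written as the columns of a matrix.
A = np.array([[1, 1],
              [1, 0]])  # columns are e_1 = (1,1) and e_2 = (1,0)

def coords(v):
    """Coordinates of v with respect to the columns of A."""
    return np.linalg.solve(A.astype(float), np.array(v, dtype=float))

# Every integer vector should have integer coordinates,
# confirming that the lattice is all of Z^2.
for a in range(-3, 4):
    for b in range(-3, 4):
        c = coords((a, b))
        assert np.allclose(c, np.round(c)), (a, b)
```

The coordinates come out as (b, a-b), matching the decomposition (a-b)\binom{1}{0}+b\binom{1}{1} above.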

Given d vectors and a basis of \mathbb{R}^d, we can form a matrix whose columns are the coordinates of the vectors with respect to that basis. For example, the vectors (1,3) and (4,7), written in the standard basis of \mathbb{R}^2, give the matrix

\left(\begin{array}{ c c }1 & 4 \\ 3 & 7\end{array}\right).

Changing the order of our vectors or the order of our basis will change the matrix, and may flip the sign of its determinant, but not the determinant's absolute value. Moreover this determinant is nonzero precisely when our vectors are linearly independent. So perhaps the determinant of this matrix is the invariant we're looking for, but first we need to show that different bases for our lattice give the same determinant up to sign.
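As a quick check of that sign behaviour (an illustrative sketch, not part of the original notes), swapping the two columns flips the determinant's sign but leaves its absolute value alone:

```python
import numpy as np

# The vectors (1,3) and (4,7) as columns, in the standard basis.
M = np.array([[1.0, 4.0],
              [3.0, 7.0]])
M_swapped = M[:, ::-1]            # same vectors, opposite order

d = np.linalg.det(M)              # 1*7 - 4*3 = -5
d_swapped = np.linalg.det(M_swapped)

assert abs(d + d_swapped) < 1e-9             # the sign flips...
assert abs(abs(d) - abs(d_swapped)) < 1e-9   # ...but |det| is unchanged
```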

Let \{a_1,\ldots,a_n\} and \{a_1^\prime,\ldots,a_n^\prime\} be two bases for the same lattice \Gamma. Since they generate the same lattice, any vector in one basis can be written as an integer combination of the other; that is to say, for each i,j=1,\ldots,n there exist integers u_{ij} and v_{ij} such that

a_j^\prime=\sum_{i=1}^{n}u_{ij}a_i

and

a_j=\sum_{i=1}^n v_{ij}a_i^\prime.

Then we can write the vectors in terms of themselves as follows

a_j=\sum_{k=1}^n v_{kj}a_k^\prime

=\sum_{k=1}^n v_{kj}\sum_{i=1}^n u_{ik}a_i

=\sum_{i=1}^n\left(\sum_{k=1}^n u_{ik}v_{kj}\right)a_i .

The vectors a_i are linearly independent, so the sum inside the brackets must be zero whenever i\neq j, and one when i=j; that is, \sum_{k=1}^n u_{ik}v_{kj}=\delta_{ij}. Similarly, using the other set of vectors, we have

\sum_{k=1}^n v_{ik}u_{kj}=\delta_{ij}.

If we let U and V be the matrices (u_{ij}) and (v_{ij}) respectively, then the above tells us that U=V^{-1}. Both matrices have integer entries, so \det(U) and \det(V) are integers, and the fact that \det(U)\det(V)=1 then forces \det(U)=\det(V)=\pm1. So these two bases are related by a unimodular matrix (a matrix with integer entries and determinant \pm1). Specifically, we get the matrix for one basis by right-multiplying the matrix of the other basis by a certain unimodular matrix.
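Here is a concrete instance (a hypothetical example of my own): the matrix U below is unimodular, and its inverse V=U^{-1} again has integer entries, just as the argument above requires:

```python
import numpy as np

# A unimodular matrix: integer entries, determinant +-1.
U = np.array([[2, 1],
              [1, 1]])
assert round(np.linalg.det(U)) in (1, -1)

# Its inverse is again an integer matrix.
V = np.linalg.inv(U)
assert np.allclose(V, np.round(V))       # integer entries
assert np.allclose(U @ V, np.eye(2))     # U = V^{-1}
```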

Conversely if we have a basis \{a_1,\ldots,a_n\} for a lattice \Gamma and take a unimodular matrix U=(u_{ij}) then the lattice with basis \{a_1^\prime,\ldots,a_n^\prime\} where

a_j^\prime=\sum_{i=1}^n u_{ij}a_i

is again \Gamma. To see this let \Gamma^\prime be the lattice with basis \{a_1^\prime,\ldots,a_n^\prime\} and note that each a_j^\prime\in\Gamma, so \Gamma^\prime\subseteq\Gamma. Let U^{-1}=V=(v_{ij}), then this is also a unimodular matrix and we have

\sum_{k=1}^n v_{ki}a_k^\prime=\sum_{k=1}^n v_{ki} \sum_{j=1}^n u_{jk}a_j

=\sum_{j=1}^n \left(\sum_{k=1}^n u_{jk}v_{ki}\right)a_j

=\sum_{j=1}^n\delta_{ji}a_j

=a_i\in\Gamma^\prime.

So \Gamma\subseteq\Gamma^\prime, and the two lattices are in fact the same.
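We can check this converse numerically too (an illustrative sketch, with a unimodular matrix of my choosing): each basis vector of the new basis lies in the old lattice and vice versa, so the two lattices coincide.

```python
import numpy as np

A = np.array([[1, 1],
              [1, 0]])            # columns: the original basis
U = np.array([[0, 1],
              [1, -1]])           # unimodular: integer entries, det = -1
B = A @ U                         # columns: the new basis a'_j = sum_i u_ij a_i

def in_lattice(M, v):
    """Is v an integer combination of the columns of M?"""
    c = np.linalg.solve(M.astype(float), np.array(v, dtype=float))
    return np.allclose(c, np.round(c))

# Each new basis vector lies in the old lattice, and vice versa.
for j in range(2):
    assert in_lattice(A, B[:, j])
    assert in_lattice(B, A[:, j])
```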

So we’ve shown that two bases give the same lattice if and only if their matrices are related by a unimodular matrix. In the simple example where we had (1,1),(1,0) as a basis for \mathbb{Z}^2 we can now note that

\left(\begin{array}{ c c }1 & 1 \\ 1 & 0\end{array}\right)\left(\begin{array}{ c c }0 & 1 \\ 1 & -1\end{array}\right)=\left(\begin{array}{ c c }1 & 0 \\ 0 & 1\end{array}\right).

Recall we proposed that the determinant of the matrix given by the basis vectors might be an invariant of the lattice. Well the above shows this is the case. Given two bases of a lattice we’ve seen their matrices A and A^\prime are related by a unimodular matrix U by A=A^\prime U. And so

\det(A)=\det(A^\prime U)=\det(A^\prime)\det(U)=\pm\det(A^\prime).

So the determinant – up to sign – does not depend on the basis chosen. We call the absolute value of this the determinant of the lattice, denoted \det(\Gamma).
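To see the invariance in action (a sketch with an example basis and unimodular matrix of my own choosing), we can change basis by a unimodular matrix and compare determinants:

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [3.0, 7.0]])        # one basis, written as columns
U = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # unimodular change of basis, det(U) = 1
B = A @ U                         # matrix of the changed basis

dA, dB = np.linalg.det(A), np.linalg.det(B)
# The determinants agree up to sign, so |det| is basis-independent:
# this common absolute value is det(Gamma).
assert abs(abs(dA) - abs(dB)) < 1e-9
```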

Another very important feature of a lattice \Gamma is its fundamental parallelepiped. This depends on the basis chosen, and given a basis \{a_1,\ldots,a_n\} it is the set:

F(\Gamma)=F(\Gamma;a_1,\ldots,a_n)

=\left\{\sum_{i=1}^n x_i a_i \mid 0\leq x_i < 1\textrm{ for }i=1,\ldots,n \right\}\subseteq\mathbb{R}^n.

In \mathbb{R}^2 this is the parallelogram cut out by the points 0, a_1, a_2, and a_1+a_2, and in higher dimensions we get generalisations of this, hence the name.

The parallelepiped itself depends on the basis, but its volume does not. If a_j=\sum_{i=1}^n\alpha_{ij}e_i, where e_1,\ldots,e_n is the standard basis of \mathbb{R}^n, then we have

\textrm{vol}(F(\Gamma;a_1,\ldots,a_n))=\int\cdots\int_{F(\Gamma;a_1,\ldots,a_n)}dV

=\int_0^1\cdots\int_0^1 |\det(\alpha_{ij})|dx_1\cdots dx_n

=|\det(\alpha_{ij})|

=\det(\Gamma).

And so the volume is independent of the basis used.

The fundamental parallelepiped is especially useful because every point in \mathbb{R}^n can be written uniquely as the sum of a lattice point and a point in the fundamental parallelepiped. That is,

\mathbb{R}^n=\Gamma+F(\Gamma;a_1,\ldots,a_n).

This is intuitively clear and is an extension of the idea of writing any real number as its integer part plus its fractional part, except now we have the lattice replacing integers and the parallelepiped replacing the fractional part. The proof uses this basic principle and simply applies it to n dimensions.
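The integer-part-plus-fractional-part idea translates directly into a computation (a sketch of my own, using the basis from the earlier example): solve for the coordinates of a point, take floors to get the lattice part, and the leftover coordinates in [0,1) give the parallelepiped part.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])        # basis of the lattice, as columns

def decompose(x):
    """Split x into a lattice point plus a point of the fundamental
    parallelepiped, mirroring integer part + fractional part."""
    c = np.linalg.solve(A, np.asarray(x, dtype=float))
    k = np.floor(c)               # integer coordinates -> lattice part
    lattice_part = A @ k
    frac_part = A @ (c - k)       # coordinates in [0,1) -> parallelepiped part
    return lattice_part, frac_part

lp, fp = decompose((2.7, -0.4))
assert np.allclose(lp + fp, (2.7, -0.4))   # the two parts recover the point
```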
