After five days’ travel in Germany, I should continue writing my posts. This post is concerned with quadratic forms, especially those over the $p$-adic fields. It is mainly based on the book ‘A Course in Arithmetic’ by Jean-Pierre Serre.

Quadratic forms can be viewed from two different points of view. The first is a map from a vector space to its ground field satisfying some additional conditions. The other is a homogeneous polynomial of degree $2$. The translation from one to the other is straightforward, but sometimes it is easier to understand things from one point of view, and sometimes from the other. We should keep this in mind and move freely between them.

First of all, some definitions. We denote by $k$ a field of characteristic $\neq 2$, and by $V$ a vector space over $k$. We say that a map $Q: V \to k$ is a quadratic form if $Q(ax) = a^2Q(x)$ for all $a \in k, x \in V$, and $B(x, y) = \frac{1}{2}(Q(x + y) - Q(x) - Q(y))$ is a bilinear form on $V$. We say that $B$ is the bilinear form associated to this quadratic form (that is why we require that the characteristic of $k$ be different from $2$). It is clear that this bilinear form is symmetric, and the corresponding matrix $A = (B(e_i, e_j))$ under a basis $(e_i)$ (we also write it as $Q$, when there is no confusion) is a symmetric matrix. So, in this way, we give the vector space an additional structure, $Q$, and perhaps we should view the couple $(V, Q)$ as one single object. We call it a quadratic space. Then, we should try to create a category. Objects of this category have been determined; next we should find the morphisms. Suppose that $(V, Q), (V', Q')$ are two quadratic spaces over the same field $k$; we call a linear map $f: V \to V'$ a morphism between these two quadratic spaces if it preserves the quadratic forms, i.e. $Q'(f(x)) = Q(x)$ for all $x \in V$. Clearly this makes the quadratic spaces into a category. Note that, if $(V, Q) = (V', Q')$, then $f$ is just a transformation of $(V, Q)$. If we restrict to the cases where $\dim V = \dim V'$, then we see easily that the matrices satisfy $A = {}^tFA'F$, where $F$ is the matrix of $f$ under some chosen bases. So, we see that $A$ and $A'$ share many invariants as matrices. And if $f$ is a bijection, then $f$ is an isomorphism between these two quadratic spaces, and thus these two matrices have even more in common. For example, we see that $\det(A) = \det(F)^2\det(A')$, so their discriminants differ by only a square, and thus $\det(A)$ is well defined for quadratic spaces modulo isomorphism (taking values in $k^*/(k^*)^2$ when it is non-zero). We denote it by $d(Q)$.
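Since all of this is plain linear algebra, the behaviour of the discriminant under a change of basis can be checked numerically. Here is a minimal Python sketch (the matrices `A`, `M` and the helper names are invented for illustration):

```python
def det3(m):
    """Determinant of a 3x3 integer matrix by cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def mat_mul(x, y):
    """Product of two 3x3 matrices."""
    return [[sum(x[i][k] * y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(m):
    return [list(row) for row in zip(*m)]

# A: symmetric matrix of a quadratic form; M: an invertible change of basis
A = [[1, 2, 0], [2, -1, 3], [0, 3, 5]]
M = [[1, 1, 0], [0, 1, 2], [1, 0, 1]]

# In the new basis the form has matrix A' = {}^t M A M
A_new = mat_mul(transpose(M), mat_mul(A, M))

# det(A') = det(M)^2 * det(A): the discriminant only moves by a square
assert det3(A_new) == det3(M) ** 2 * det3(A)
print(det3(A), det3(M), det3(A_new))  # -34, 3, -306
```

The identity $\det({}^tMAM) = \det(M)^2\det(A)$ is exactly the statement that the discriminant is well defined modulo squares.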

We have encountered many quadratic spaces before, for example, the Euclidean spaces. The inner product is clearly induced by a quadratic form. In Euclidean spaces, we have a notion of orthogonality, and in the general case we can also talk about this concept. That is, if for $x, y \in V$ we have that $B(x, y) = 0$, then we say that $x$ is orthogonal to $y$ (and $y$ is orthogonal to $x$, too). And for any subspace $H \subset V$, we set $H^{\perp} = \{x \in V \mid B(x, h) = 0, \forall h \in H\}$. In the Euclidean case, this is just the orthogonal complement of $H$. There we usually have $V^{\perp} = 0$, yet in general this is not the case. If indeed $V^{\perp} = 0$, we say that $(V, Q)$ is non-degenerate; if it is not the case, we say that it is degenerate. So, the Euclidean spaces are all non-degenerate. Moreover, if for two subspaces $U_1, U_2$ we have that $B(u_1, u_2) = 0$ for all $u_1 \in U_1, u_2 \in U_2$, we say that $U_1, U_2$ are orthogonal. Just as in linear algebra, where we have decompositions of a space into a direct sum of subspaces, here we also have this concept. That is, if subspaces $U_1, \ldots, U_m$ of $V$ are orthogonal one to another with $V = U_1 \oplus \cdots \oplus U_m$, then we say that $V$ is an orthogonal direct sum of the $U_i$ and write it as $V = U_1 \hat{\oplus} \cdots \hat{\oplus} U_m$. The last important concept is that of isotropic vectors. A non-zero vector $x$ is an isotropic vector if $Q(x) = 0$. What does this concept mean? Suppose that $x$ expands to a basis $(x, e_2, \ldots, e_n)$ of $V$; under this basis, the matrix of $Q$ has its $(1,1)$-entry equal to $B(x, x) = Q(x) = 0$. This reminds us of a particular type of homogeneous polynomial, one with no term in $x_1^2$. Clearly, the equation $xy = 1$ determines a hyperbola. If we just have $Q(x, y) = x^2 - y^2$, then considering that $x^2 - y^2 = (x + y)(x - y)$, we can make the coordinate change $u = x + y, v = x - y$, and we still have a hyperbolic form $Q = uv$. In fact, we can do something similar in general, that is

**If $x$ is an isotropic vector of $(V, Q)$ (we suppose that $x \neq 0$, and the quadratic space is non-degenerate), then there exists another isotropic vector $y$ linearly independent of $x$ such that $B(x, y) = 1$.**

It is easy to find some ‘counter-example’ if we don’t assume that $(V, Q)$ is non-degenerate. For example, when $Q = 0$. So, this assumption is very important. This assumption means that there exists some element $z \in V$ such that $B(x, z) \neq 0$ (it is easy to see that these two vectors are linearly independent). Yet this $z$ is not necessarily isotropic. But we can modify it a little bit. For example, take $y = \lambda x + z$; then $Q(y) = \lambda^2Q(x) + 2\lambda B(x, z) + Q(z)$. Luckily we have that $Q(x) = 0$, so this is not, in fact, a quadratic polynomial in $\lambda$; it is only a linear equation, and since $B(x, z) \neq 0$, a solution $\lambda$ with $Q(y) = 0$ always exists. And thus we can take such a $y$ (rescaling it so that $B(x, y) = 1$). So we see that if $(V, Q)$ is non-degenerate, then either it has no isotropic vectors, or it has a subspace of dimension $2$ such that the restriction of $Q$ to this subspace has a matrix of the form $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, or in other words $Q(u, v) = 2uv$, just as above.

Another important result concerning non-degenerate quadratic forms is this: suppose that $(V, Q)$ is non-degenerate, $(V', Q')$ is another quadratic space, and $f: V \to V'$ is a morphism. If $f(x) = 0$, then for every $y \in V$ we must have $B(x, y) = B'(f(x), f(y)) = 0$, thus $x \in V^{\perp}$. Yet we have assumed that $(V, Q)$ is non-degenerate, so we must have $x = 0$. What does this mean? This means that no non-zero vector in $V$ can be sent to zero. In other words, $f$ is injective as a linear map from $V$ to $V'$. This, in some sense, characterizes the non-degeneracy of $(V, Q)$: if for any other $(V', Q')$ and any morphism $f: V \to V'$, $f$ is injective, then $(V, Q)$ is non-degenerate, and vice versa. This is not hard to prove. If $(V, Q)$ is degenerate, then we can choose a subspace $W$ complementary to $V^{\perp}$, and we see easily that $V = V^{\perp} \hat{\oplus} W$; let $p: V \to W$ be the projection. Then for any $x = x_0 + w$ with $x_0 \in V^{\perp}, w \in W$, we have $Q(x) = Q(w) = Q(p(x))$. So $p$ is a morphism, yet it is not injective.

Now we restrict ourselves to the case $k = \mathbb{Q}_p$, where $p$ is a prime number. We suppose that $(V, Q)$ is a quadratic space over $\mathbb{Q}_p$, non-degenerate, of finite dimension $n$. Note that, as we have said above, $d(Q)$ is an element defined up to multiplication by a square. What is more, since $Q$ is non-degenerate, the matrix $A$ associated to $Q$ is non-degenerate, thus $\det(A) \neq 0$. So, $d(Q) \in \mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$. We shall now construct another invariant of $(V, Q)$ (we have seen that if two quadratic spaces are isomorphic, then their discriminants are equal; we will show that, together with the dimension and the following quantity, the converse is true, too). If we choose an orthogonal basis $e = (e_1, \ldots, e_n)$ for $V$ and set $a_i = Q(e_i)$, then we define $\epsilon(e) = \prod_{i < j}(a_i, a_j)$, where $(\cdot, \cdot)$ is the Hilbert symbol in $\mathbb{Q}_p$. We will show that this quantity doesn’t depend on the choice of this orthogonal basis, that is
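Since we will manipulate Hilbert symbols constantly from now on, it may help to have a computable version at hand. The following Python sketch implements $(a, b)_p$ for non-zero integers via the standard explicit formulas (one for odd $p$, one for $p = 2$); the function names are mine:

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p and a prime to p."""
    return 1 if pow(a % p, (p - 1) // 2, p) == 1 else -1

def hilbert(a, b, p):
    """Hilbert symbol (a, b)_p for non-zero integers a, b.
    Write a = p^alpha * u and b = p^beta * v with u, v prime to p."""
    assert a != 0 and b != 0
    alpha, u = 0, a
    while u % p == 0:
        alpha, u = alpha + 1, u // p
    beta, v = 0, b
    while v % p == 0:
        beta, v = beta + 1, v // p
    if p != 2:
        # (a,b)_p = (-1)^(alpha*beta*(p-1)/2) * (u/p)^beta * (v/p)^alpha
        sign = (-1) ** (alpha * beta * ((p - 1) // 2))
        return sign * legendre(u, p) ** beta * legendre(v, p) ** alpha
    # p = 2: exponent eps(u)eps(v) + alpha*omega(v) + beta*omega(u) mod 2,
    # where eps(w) = (w-1)/2 and omega(w) = (w^2-1)/8
    eps = lambda w: ((w - 1) // 2) % 2
    omega = lambda w: ((w * w - 1) // 8) % 2
    return (-1) ** (eps(u) * eps(v) + alpha * omega(v) + beta * omega(u))

# classical values: (-1,-1)_2 = -1, (-1,-1)_3 = 1, (2,5)_5 = -1
print(hilbert(-1, -1, 2), hilbert(-1, -1, 3), hilbert(2, 5, 5))
```

One can also check bilinearity numerically, e.g. $(6, 5)_5 = (2, 5)_5 \cdot (3, 5)_5$.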

**If $e = (e_1, \ldots, e_n)$ and $e' = (e'_1, \ldots, e'_n)$ are two orthogonal bases for $(V, Q)$, then $\epsilon(e) = \epsilon(e')$.**

We start from a simple case: $e$ and $e'$ share an element, say $e_1 = e'_1$. Then we have that $a_1 = a'_1$. What is more, it is easy to see that $a_1a_2\cdots a_n = d = a'_1a'_2\cdots a'_n$ modulo squares. Thus $a_2\cdots a_n = da_1$ in $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$, and similarly for $a'_2\cdots a'_n$. But $\epsilon(e) = (a_1, a_2\cdots a_n)\prod_{2 \leq i < j}(a_i, a_j)$. As for the first factor, we have $(a_1, a_2\cdots a_n) = (a_1, da_1) = (a_1, d)(a_1, a_1) = (a_1, -d)$, since $(a_1, a_1) = (a_1, -1)$; so this factor is the same for $e$ and $e'$. Now, if we can use induction on the dimension of $V$, we see that the subspace generated by $e_2, \ldots, e_n$ is the same as the subspace generated by $e'_2, \ldots, e'_n$ (expand $e'_i$, $i \geq 2$, in terms of $e$, and we see that the coefficient before $e_1$ is $B(e'_i, e_1)/a_1 = 0$, showing that $e'_i \in (\mathbb{Q}_pe_1)^{\perp}$, and vice versa). So, using induction (for the first steps: when $n = 1$, we always have $\epsilon = 1$, an empty product; when $n = 2$, we have $\epsilon = (a_1, a_2)$. By the definition of the Hilbert symbol, $(a_1, a_2) = 1$ is equivalent to saying that $Z^2 = a_1X^2 + a_2Y^2$ has non-trivial solutions, that is, $Q$ represents $1$ — and this doesn’t depend on the choice of basis, so for the first two cases we are done), we prove the result. So, we have to show that for those $V$ of dimension $n \geq 3$, we can create the situation in the proof above, that is, consecutive bases sharing an element. Indeed, we can prove

**If $(V, Q)$ is a non-degenerate quadratic space of dimension $n \geq 3$, and $e, e'$ two orthogonal bases, then we can find a series of orthogonal bases $e = e^{(0)}, e^{(1)}, \ldots, e^{(m)} = e'$ such that $e^{(i)}, e^{(i+1)}$ share an element for each $i$ (this common element may well depend on $i$).**

What does this result mean? It means that, for any two orthogonal bases, we can always do a series of operations of the following type to transform one into the other: we fix one element of the basis, and rotate the whole space around this element, thus getting a new basis with at least one element in common with the previous one (yes, one of these common elements is just the fixed one). This reminds us of the Euler angles. It is exactly the same process. Note that this result is not true for spaces of dimension $2$ (just consider the Euclidean plane). So, the hypothesis $n \geq 3$ is important here; that is also why we treated the cases $n \leq 2$ separately in the above proof.

Note that, if $x, y \in V$ satisfy $Q(x)Q(y) - B(x, y)^2 \neq 0$ (which implies that $x, y$ are linearly independent, but not vice versa), then the plane $P$ generated by $x, y$ (which is non-degenerate due to the above inequality) and its orthogonal complement $P^{\perp}$ satisfy $V = P \hat{\oplus} P^{\perp}$. (Indeed, $P^{\perp}$ is just the kernel of the map $g: V \to P^*, v \mapsto B(v, \cdot)|_P$. Yet $g$ is a composition of $V \to V^*$ and $V^* \to P^*$, the first being an injection (and, since both have the same dimension, thus a surjection), the second being surjective since any linear form on $P$ can be extended to one on $V$; so $g$ is surjective, and the exact sequence $0 \to P^{\perp} \to V \to P^* \to 0$ shows that $\dim P^{\perp} = n - 2$. So, noting that $P$ is non-degenerate, $P \cap P^{\perp} = 0$, the orthogonal direct sum makes sense, and the subspace $P \hat{\oplus} P^{\perp}$ is just $V$ by counting dimensions.) So suppose first that $Q(e_1)Q(e'_1) - B(e_1, e'_1)^2 \neq 0$, and let $P$ be the plane generated by $e_1, e'_1$. Then we can choose an orthogonal basis for $P^{\perp}$, and completing this basis by adding either $e_1$ or $e'_1$ together with its orthogonal vector in $P$ (this is possible, since $Q(e_1) \neq 0 \neq Q(e'_1)$), we obtain two orthogonal bases of $V$ sharing the basis chosen for $P^{\perp}$, the first sharing $e_1$ with $e$ and the second sharing $e'_1$ with $e'$, thus completing the first case of the result. We have used substantially the fact that $P$ is non-degenerate, and that $P^{\perp}$ is not trivial, which is implied by the assumption that $n \geq 3$. Now suppose that for both $i = 1, 2$ we have $Q(e_1)Q(e'_i) - B(e_1, e'_i)^2 = 0$; then we try to find a vector $x = \lambda e'_1 + \mu e'_2$ such that $Q(e_1)Q(x) - B(e_1, x)^2$ is not zero, and $Q(x) \neq 0$ either. For the first inequality, we compute, using the two assumed equalities, $Q(e_1)Q(x) - B(e_1, x)^2 = -2\lambda\mu B(e_1, e'_1)B(e_1, e'_2)$. It is easy to see that $B(e_1, e'_i) \neq 0$, since $a_1a'_i = B(e_1, e'_i)^2$ and $a_1a'_i \neq 0$. So the first condition is just $\lambda\mu \neq 0$. As for the second inequality, we have $Q(x) = \lambda^2a'_1 + \mu^2a'_2$.
There are only two values of the ratio $\lambda : \mu$ (namely $0$ and $\infty$) for which $\lambda\mu = 0$, and there are at most two possible values of $\lambda : \mu$ for which $Q(x) = 0$. So, since the projective line over $k$ has $|k| + 1 \geq 5$ points whenever $|k| \geq 4$, we only have to consider the case $k = \mathbb{F}_3$, the finite field of three elements. In this case, note that the assumption implies that $a_1a'_1 = B(e_1, e'_1)^2 = 1$ (the only non-zero square in $\mathbb{F}_3$); similarly, $a_1a'_2 = 1$, so $a'_1 = a'_2$. Thus, in order that $Q(x) = a'_1(\lambda^2 + \mu^2) = 0$, we must have $\lambda = \mu = 0$, since $-1$ is not a square in $\mathbb{F}_3$. So, we can take $x = e'_1 + e'_2$, which satisfies the requirements. With this $x$, we can expand it to an orthogonal basis $(x, x')$ of the subspace generated by $e'_1, e'_2$. Hence the new basis $e'' = (x, x', e'_3, \ldots, e'_n)$ has a common element with $e'$ (for example, $e'_3$ — here $n \geq 3$ is used again), and with $e_1$ it satisfies the inequality $Q(e_1)Q(x) - B(e_1, x)^2 \neq 0$. Then we can proceed as above. So, we have proven this lemma, and also the whole theorem: $\epsilon(e)$ in fact doesn’t depend on $e$. So, we can write it as $\epsilon(Q)$.
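As a sanity check of the theorem, the form $X^2 + Y^2$ over $\mathbb{Q}$ admits, besides the standard basis, the orthogonal bases $(1, 1), (1, -1)$ and $(1, 2), (2, -1)$, which diagonalize it as $2U^2 + 2V^2$ and $5U^2 + 5V^2$ respectively. The invariant $\epsilon = (a_1, a_2)$ must then agree for all three diagonalizations at every prime. A quick check in Python (same sketch of the $p$-adic Hilbert symbol as before):

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p and a prime to p."""
    return 1 if pow(a % p, (p - 1) // 2, p) == 1 else -1

def hilbert(a, b, p):
    """Hilbert symbol (a, b)_p via the explicit formulas."""
    assert a != 0 and b != 0
    alpha, u = 0, a
    while u % p == 0:
        alpha, u = alpha + 1, u // p
    beta, v = 0, b
    while v % p == 0:
        beta, v = beta + 1, v // p
    if p != 2:
        sign = (-1) ** (alpha * beta * ((p - 1) // 2))
        return sign * legendre(u, p) ** beta * legendre(v, p) ** alpha
    eps = lambda w: ((w - 1) // 2) % 2
    omega = lambda w: ((w * w - 1) // 8) % 2
    return (-1) ** (eps(u) * eps(v) + alpha * omega(v) + beta * omega(u))

# Three diagonalizations of X^2 + Y^2: diag(1,1), diag(2,2), diag(5,5).
# Their epsilon invariants (a1, a2)_p must coincide at every prime.
for p in (2, 3, 5, 7, 13):
    assert hilbert(1, 1, p) == hilbert(2, 2, p) == hilbert(5, 5, p)
print("epsilon agrees at p = 2, 3, 5, 7, 13")
```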

Before we consider the classification theorems of quadratic spaces, we need to consider whether a quadratic form can represent some element. For any $a \in k$, we say that the quadratic form $Q$ on $V$ represents $a$ if there is a non-zero vector $x \in V$ such that $Q(x) = a$. This is a natural definition. Yet it is not so easy to tell whether a quadratic form can indeed represent some element. First, we consider the representation of $0$. That is

**If $(V, Q)$ is a non-degenerate quadratic space over $\mathbb{Q}_p$ of dimension $n$, with $d = d(Q)$ and $\epsilon = \epsilon(Q)$ defined as above, then $Q$ represents $0$ if and only if: (1) $n = 2$ and $d = -1$ (always in $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$); (2) $n = 3$ and $(-1, -d) = \epsilon$; (3) $n = 4$ and either $d \neq 1$, or $d = 1$ and $\epsilon = (-1, -1)$; (4) $n \geq 5$.**

In other words, the above conditions say when $Q$ has an isotropic vector. In order to show the relation between representing $0$ and representing other numbers, we introduce a useful notation: if $(V, Q), (V', Q')$ are two quadratic spaces, then we define another quadratic space $(V \oplus V', Q \oplus (-Q'))$ by $(Q \oplus (-Q'))(x, x') = Q(x) - Q'(x')$ (indeed it is a quadratic space). And we write it as $Q - Q'$. So, if $Q$ is non-degenerate, then for any non-zero $a$ we can form $Q - aZ^2$, where $aZ^2$ is the quadratic form on $\mathbb{Q}_p$ such that $z \mapsto az^2$. It is easy to see that $Q$ represents $a$ if and only if $Q - aZ^2$ represents $0$. (Indeed, if $Q - aZ^2$ represents $0$, there exists $(x, z) \neq 0$ such that $Q(x) = az^2$. If $z \neq 0$, then $Q(x/z) = a$, so $Q$ represents $a$. If $z = 0$, then $Q$ represents $0$. This means that $Q$ has an isotropic vector $x$. Then according to the above result, there exists another isotropic vector $y$ such that $B(x, y) = 1$. We write $u = x + ty$; then $Q(u) = Q(x) + 2tB(x, y) + t^2Q(y) = 2t$. Since $2 \neq 0$, we can take $t = a/2$; then the vector $u$ shows that $Q$ represents $a$. The other direction is obvious.) Noting also that $d(Q - aZ^2) = -a \cdot d(Q)$ and $\epsilon(Q - aZ^2) = (-a, d(Q))\epsilon(Q)$, and using the results above, we have that

**If $a \in \mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$, then $Q$ represents $a$ if and only if: (1) $n = 1$ and $a = d$; (2) $n = 2$ and $(a, -d) = \epsilon$; (3) $n = 3$ and either $a \neq -d$, or $a = -d$ and $(-1, -d) = \epsilon$; (4) $n \geq 4$.**

The only non-trivial case is condition (2). It is not hard to see. Note that $Q = a_1X^2 + a_2Y^2$ represents $a$ if and only if $aZ^2 = a_1X^2 + a_2Y^2$ has a non-trivial solution, that is, $(a_1/a, a_2/a) = (aa_1, aa_2) = 1$ (the Hilbert symbol only depends on classes modulo squares). That is, $(aa_1, aa_2) = (a, -a_1a_2)(a_1, a_2) = (a, -d)\epsilon = 1$, i.e. $(a, -d) = \epsilon$.
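The Hilbert-symbol identity $(aa_1, aa_2) = (a, -a_1a_2)(a_1, a_2)$ used here follows from bilinearity together with $(a, a) = (a, -1)$; it can also be checked numerically (same sketch of the $p$-adic Hilbert symbol as before):

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p and a prime to p."""
    return 1 if pow(a % p, (p - 1) // 2, p) == 1 else -1

def hilbert(a, b, p):
    """Hilbert symbol (a, b)_p via the explicit formulas."""
    assert a != 0 and b != 0
    alpha, u = 0, a
    while u % p == 0:
        alpha, u = alpha + 1, u // p
    beta, v = 0, b
    while v % p == 0:
        beta, v = beta + 1, v // p
    if p != 2:
        sign = (-1) ** (alpha * beta * ((p - 1) // 2))
        return sign * legendre(u, p) ** beta * legendre(v, p) ** alpha
    eps = lambda w: ((w - 1) // 2) % 2
    omega = lambda w: ((w * w - 1) // 8) % 2
    return (-1) ** (eps(u) * eps(v) + alpha * omega(v) + beta * omega(u))

# check (a*a1, a*a2)_p = (a, -a1*a2)_p * (a1, a2)_p over many inputs
for p in (2, 3, 5, 7):
    for a in (1, -1, 2, 3, 5, -6):
        for a1 in (1, -1, 2, 5):
            for a2 in (1, -1, 3, 7):
                lhs = hilbert(a * a1, a * a2, p)
                rhs = hilbert(a, -a1 * a2, p) * hilbert(a1, a2, p)
                assert lhs == rhs
print("identity verified")
```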

These two results directly give the classification of quadratic spaces over $\mathbb{Q}_p$. That is

**Two non-degenerate quadratic spaces over $\mathbb{Q}_p$ are isomorphic if and only if they have the same dimension, the same discriminant (in $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$) and the same $\epsilon$.**

At first glance, this theorem has nothing to do with the above results. Perhaps it will be clearer with the following fact: a quadratic space $(V, Q)$ represents $a \neq 0$ if and only if there exists a decomposition $Q = aZ^2 \hat{\oplus} Q'$, where $Q'$ is a quadratic form in $n - 1$ variables. The proof is not difficult. Indeed, if $Q$ represents $a$, then there is a non-zero vector $x$ such that $Q(x) = a$. Expand this into an orthogonal basis $(x, e_2, \ldots, e_n)$ of $V$; then we have $V = kx \hat{\oplus} (kx)^{\perp}$, and the restriction of $Q$ to the subspace $(kx)^{\perp}$ is indeed the form required. The other direction is obvious. With this lemma, we see that if two quadratic spaces $(V_1, Q_1), (V_2, Q_2)$ have the three identical quantities ($n$, $d$, $\epsilon$), then by the representation theorem above they of course represent the same set of numbers in $\mathbb{Q}_p^*$. Since they are non-degenerate, they represent at least one non-zero number $a$. Then they both have a decomposition $Q_i = aZ^2 \hat{\oplus} Q'_i$. So, for $Q'_1$ and $Q'_2$, they again have three identical quantities: the dimension $n - 1$, $d(Q'_i) = a \cdot d(Q_i)$, and $\epsilon(Q'_i) = \epsilon(Q_i)(a, a \cdot d(Q_i))$. So, we can use induction on the dimension of $V_1$ (the first step is easy: when $n = 1$, we just have, after choosing bases $e_1, e_2$ for $V_1$ and $V_2$, that $Q_1 = a_1X^2$, $Q_2 = a_2X^2$. Note that $a_1 = c^2a_2$ for some $c \in \mathbb{Q}_p^*$ since the discriminants are equal; we can define $f(xe_1) = cxe_2$, then $Q_2(f(xe_1)) = c^2a_2x^2 = Q_1(xe_1)$, so $Q_1$ and $Q_2$ are isomorphic). Thus we proved this classification theorem for the quadratic spaces over the $p$-adic fields.

Now it remains to show the important zero-representation theorem. We prove this result by examining case by case. There is a general fact that will be useful: any quadratic space $(V, Q)$ has an orthogonal basis. This can be shown by induction on $n$. When $n = 1$, this is trivial. If $Q$ is trivial on $V$, then any basis is automatically orthogonal. Otherwise, there is a vector $x$ such that $Q(x) \neq 0$. Then the orthogonal complement of $\mathbb{Q}_px$ is not the whole space, since at least $x \notin (\mathbb{Q}_px)^{\perp}$. By counting dimensions as above, we have that $V = \mathbb{Q}_px \hat{\oplus} (\mathbb{Q}_px)^{\perp}$. So, using induction on $(\mathbb{Q}_px)^{\perp}$, which thus has an orthogonal basis, so does $V$.

First we treat the case $n = 2$. Find an orthogonal basis for $V$, and under this basis $Q = a_1X^2 + a_2Y^2$. So, since $Q$ represents $0$, there exists a non-trivial pair $(x, y)$ such that $a_1x^2 + a_2y^2 = 0$. This means that (suppose that $y \neq 0$; then also $x \neq 0$) $a_2 = -a_1(x/y)^2$. Yet $d = a_1a_2$, so $d = -(a_1x/y)^2$, equivalent to $d = -1$ in $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$. Conversely, if $d = -1$ in $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$, this means that $a_1a_2 = -c^2$ for some $c$. So, we can choose $(x, y) = (a_2, c)$; then $a_1a_2^2 + a_2c^2 = a_2(a_1a_2 + c^2) = 0$, showing that $Q$ represents $0$.

For the case $n = 3$, we write the quadratic form as $Q = a_1X^2 + a_2Y^2 + a_3Z^2$, where $(e_1, e_2, e_3)$ is an orthogonal basis. This form represents $0$ exactly when $a_1X^2 + a_2Y^2 + a_3Z^2 = 0$ has non-trivial solutions, that is, when $Z^2 = -\frac{a_1}{a_3}X^2 - \frac{a_2}{a_3}Y^2$ does. Yet according to the definition of the Hilbert symbol, and since $-a_i/a_3 = -a_ia_3$ modulo squares, this says exactly $(-a_1a_3, -a_2a_3) = 1$. Now, expanding by bilinearity and using $(a_3, a_3) = (-1, a_3)$, we get $(-a_1a_3, -a_2a_3) = (-1, -d)(a_1, a_2)(a_1, a_3)(a_2, a_3) = (-1, -d)\epsilon$. So, we have that $(-1, -d) = \epsilon$ for the case when $Q$ represents $0$. It is obvious that each step above can be reversed, so we see that when $(-1, -d) = \epsilon$, then $Q$ represents $0$.

For the case $n = 4$, we have to utilize some tricks. We write $Q = a_1X_1^2 + a_2X_2^2 + a_3X_3^2 + a_4X_4^2$. Then $Q$ represents $0$ exactly when there is some $a \in \mathbb{Q}_p^*$ represented by both $f = a_1X_1^2 + a_2X_2^2$ and $g = -a_3X_3^2 - a_4X_4^2$ (if $Q(x) = 0$ with $x \neq 0$, then since the $a_i \neq 0$, at least two of the $x_i$ are non-zero; if the common value $f(x_1, x_2) = g(x_3, x_4)$ is zero, one of $f, g$ represents $0$ non-trivially and hence, being non-degenerate, represents every element). So, according to the case $n = 2$ (more precisely, condition (2) above), $a$ is represented by $f$ if and only if $(a, -a_1a_2) = (a_1, a_2)$, and by $g$ if and only if $(a, -a_3a_4) = (-a_3, -a_4)$. We set $A = \{a \mid (a, -a_1a_2) = (a_1, a_2)\}$, $B = \{a \mid (a, -a_3a_4) = (-a_3, -a_4)\}$, both inside $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$. Since $a_1 \in A$ and $-a_3 \in B$, they are not empty. If they are disjoint — which is the same as saying that $Q$ cannot represent $0$ — then, noting that $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$ is a vector space over $\mathbb{F}_2$, it has $2^m$ elements ($m = 2$ for $p \neq 2$, $m = 3$ for $p = 2$). And we have seen that the Hilbert symbol is in fact a non-degenerate bilinear form on this $\mathbb{F}_2$-vector space. This means that for any $b$, the linear map $a \mapsto (a, b)$ has a kernel of cardinality equal either to $2^m$ (when $b = 1$) or to $2^{m-1}$.
Now, $A$ is either the kernel or the complement of the kernel of the map $a \mapsto (a, -a_1a_2)$ (depending on whether $(a_1, a_2) = 1$ or not), and $B$ is in the same situation for the map $a \mapsto (a, -a_3a_4)$ (depending on whether $(-a_3, -a_4) = 1$ or not). Since they are not empty, and the intersection of the kernels of both maps is not empty ($1$ lies in both), it must be the case that one of $A, B$ is the kernel of the corresponding map while the other is the complement of the kernel of the other map, and these two kernels coincide; since the Hilbert symbol is non-degenerate, the two linear maps are then in fact equal. This means that $-a_1a_2 = -a_3a_4$ in $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$ and (due to $A \cap B = \emptyset$) $(a_1, a_2) = -(-a_3, -a_4)$. From the first equation, $a_1a_2 = a_3a_4$, so $d = a_1a_2a_3a_4 = (a_3a_4)^2 = 1$ in $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$. As for the second equation, a computation combining it with the first gives $\epsilon = (a_1, a_2)(a_3, a_4)(a_1a_2, a_3a_4) = -(-1, -1)$, that is to say, $\epsilon \neq (-1, -1)$. So, we have that $Q$ represents $0$ if and only if $d \neq 1$ (in $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$), or $d = 1$ with $\epsilon = (-1, -1)$.

For the case $n = 5$: we have shown that a non-degenerate quadratic space of dimension $2$ represents $a$ if and only if $(a, -d) = \epsilon$. So, we see that such a space represents at least $2^{m-1}$ classes in $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$. We have seen in the previous post that there are $4$ elements in $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$ for $p \neq 2$, and there are $8$ for the case $p = 2$. So, applying this to the subform $a_1X_1^2 + a_2X_2^2$, we can always find an $a$ represented by it such that $a \neq d$ in $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$ (this is also true for $p = 2$, since the subform then represents at least $4$ of the $8$ classes). Now we have a decomposition of $Q$, that is $Q = aZ^2 \hat{\oplus} Q'$, with $Q'$ non-degenerate, the dimension of which is $4$. This time $d(Q') = ad \neq 1$ in $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$, so according to the case $n = 4$, we have that $Q'$ represents $0$, and so does $Q$. The cases $n > 5$ are reduced easily to the case $n = 5$ (restrict $Q$ to the span of five vectors of an orthogonal basis).
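The key expansion in the case $n = 3$, namely $(-a_1a_3, -a_2a_3) = (-1, -d)\,\epsilon$, can also be checked numerically over several diagonal forms (same sketch of the $p$-adic Hilbert symbol as before):

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p and a prime to p."""
    return 1 if pow(a % p, (p - 1) // 2, p) == 1 else -1

def hilbert(a, b, p):
    """Hilbert symbol (a, b)_p via the explicit formulas."""
    assert a != 0 and b != 0
    alpha, u = 0, a
    while u % p == 0:
        alpha, u = alpha + 1, u // p
    beta, v = 0, b
    while v % p == 0:
        beta, v = beta + 1, v // p
    if p != 2:
        sign = (-1) ** (alpha * beta * ((p - 1) // 2))
        return sign * legendre(u, p) ** beta * legendre(v, p) ** alpha
    eps = lambda w: ((w - 1) // 2) % 2
    omega = lambda w: ((w * w - 1) // 8) % 2
    return (-1) ** (eps(u) * eps(v) + alpha * omega(v) + beta * omega(u))

# check (-a1*a3, -a2*a3)_p = (-1, -d)_p * eps for d = a1*a2*a3 and
# eps = (a1,a2)_p (a1,a3)_p (a2,a3)_p, over several diagonal forms
for p in (2, 3, 5, 7):
    for (a1, a2, a3) in [(1, 1, 1), (1, 2, 3), (-1, 5, 7),
                         (2, -3, 5), (-2, -5, 6)]:
        d = a1 * a2 * a3
        eps = (hilbert(a1, a2, p) * hilbert(a1, a3, p)
               * hilbert(a2, a3, p))
        assert hilbert(-a1 * a3, -a2 * a3, p) == hilbert(-1, -d, p) * eps
print("n = 3 identity verified")
```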

It is interesting to consider alongside this the quadratic spaces over the real number field $\mathbb{R}$. Things here are much simpler, and we will just state the results. Suppose that $(V, Q)$ is a non-degenerate quadratic space over $\mathbb{R}$. Then under some orthogonal basis we can write this quadratic form as $Q = X_1^2 + \cdots + X_r^2 - Y_1^2 - \cdots - Y_s^2$. And we know that the two numbers $(r, s)$ (the signature) determine this quadratic form up to isomorphism. Can we also express these quantities using invariants similar to the above? Let’s have a try. Here we have that $\mathbb{R}^*/(\mathbb{R}^*)^2 = \{\pm 1\}$, so $d(Q) = (-1)^s$, and $\epsilon(Q) = (-1)^{s(s-1)/2}$ (the Hilbert symbol over $\mathbb{R}$ is $-1$ exactly when both arguments are negative). Note that if there is another quadratic form over $\mathbb{R}$ of the same dimension, this time with signature $(r', s')$ such that $s' \equiv s \pmod 4$ but $s' \neq s$, then we have that $d(Q') = d(Q)$ and $\epsilon(Q') = \epsilon(Q)$. So, in some sense, the three quantities $(n, d, \epsilon)$ cannot determine the quadratic space over $\mathbb{R}$. It is a pity. As for the zero-representation theorem over the real numbers, we see that $Q$ represents $0$ if and only if $rs \neq 0$. With this and the formulas above, we have that for $n = 2$, $Q$ represents $0$ if and only if $r = s = 1$, if and only if $d = -1$. For $n = 3$, $Q$ represents $0$ if and only if $s \in \{1, 2\}$. We can verify that this is equivalent to saying that $(-1, -d) = \epsilon$. For $n = 4$, $Q$ represents $0$ if and only if $s \in \{1, 2, 3\}$. If $d \neq 1$, then $s$ is odd, thus $Q$ indeed represents $0$. If $d = 1$, then $s \in \{0, 2, 4\}$. Yet with the condition that $\epsilon = (-1, -1) = -1$, we see that $s = 0, 4$ are impossible. So, we must have $s = 2$, and $Q$ represents $0$. Thus we have verified that the first three conditions in the zero-representation theorem for $\mathbb{Q}_p$ also work for $\mathbb{R}$. Yet we see easily that even for $n \geq 5$, there are quadratic spaces over the real numbers that cannot represent $0$ (namely the definite ones, with $s = 0$ or $s = n$). So, the last condition doesn’t work here.
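The real case is easy to play with numerically. Here is a small Python sketch computing $d$ and $\epsilon$ of the form of signature $(r, s)$ straight from the definitions (the function names are mine):

```python
from itertools import combinations

def hilbert_real(a, b):
    """Hilbert symbol over R: (a, b) = -1 iff both a and b are negative."""
    return -1 if (a < 0 and b < 0) else 1

def invariants(r, s):
    """(d, eps) of the real form x_1^2+...+x_r^2 - y_1^2-...-y_s^2."""
    coeffs = [1] * r + [-1] * s
    d = 1
    for c in coeffs:
        d *= c
    eps = 1
    for a, b in combinations(coeffs, 2):
        eps *= hilbert_real(a, b)
    return d, eps

# d = (-1)^s and eps = (-1)^(s(s-1)/2)
for r in range(4):
    for s in range(4):
        d, eps = invariants(r, s)
        assert d == (-1) ** s and eps == (-1) ** (s * (s - 1) // 2)

# Signatures (5, 0) and (1, 4): same (n, d, eps), yet non-isomorphic over R
print(invariants(5, 0), invariants(1, 4))
```

In particular the signatures $(5, 0)$ and $(1, 4)$ both give $(d, \epsilon) = (1, 1)$ in dimension $5$, which is the counter-example mentioned above.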

We know that $\mathbb{Q}$ is a global field while the $\mathbb{Q}_v$ (we set $\mathbb{Q}_{\infty} = \mathbb{R}$) are local fields. So if there is a quadratic space $(V, Q)$ over $\mathbb{Q}$, then using $V_v = V \otimes_{\mathbb{Q}} \mathbb{Q}_v$, which is now a vector space over $\mathbb{Q}_v$, we see that we can induce a quadratic form $Q_v$ on $V_v$ from $Q$: just take $Q_v(x \otimes 1) = Q(x)$ and extend using the associated bilinear form to the whole $V_v$ (this is equivalent to defining a quadratic form). So, for each $v$, we have invariants $d(Q_v), \epsilon(Q_v)$. It is not very clear what these quantities are. But if we consider an orthogonal basis $e = (e_1, \ldots, e_n)$ of $(V, Q)$, then we see easily that $(e_1 \otimes 1, \ldots, e_n \otimes 1)$ is again an orthogonal basis of $(V_v, Q_v)$. So, we have that $d(Q_v) = d(Q)$ in $\mathbb{Q}_v^*/(\mathbb{Q}_v^*)^2$. Moreover, $\epsilon(Q_v) = \prod_{i < j}(a_i, a_j)_v$. Then we can state the classification theorem of quadratic spaces over $\mathbb{Q}$:
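The local symbols are tied together by Hilbert’s product formula $\prod_v (a, b)_v = 1$, which is what makes the passage between local and global data work. We can verify it numerically: the symbol is $+1$ at every odd prime not dividing $ab$, so the product is finite. (Same sketch of the $p$-adic Hilbert symbol as before; `product_over_places` is my own name.)

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p and a prime to p."""
    return 1 if pow(a % p, (p - 1) // 2, p) == 1 else -1

def hilbert(a, b, p):
    """Hilbert symbol (a, b)_p via the explicit formulas."""
    assert a != 0 and b != 0
    alpha, u = 0, a
    while u % p == 0:
        alpha, u = alpha + 1, u // p
    beta, v = 0, b
    while v % p == 0:
        beta, v = beta + 1, v // p
    if p != 2:
        sign = (-1) ** (alpha * beta * ((p - 1) // 2))
        return sign * legendre(u, p) ** beta * legendre(v, p) ** alpha
    eps = lambda w: ((w - 1) // 2) % 2
    omega = lambda w: ((w * w - 1) // 8) % 2
    return (-1) ** (eps(u) * eps(v) + alpha * omega(v) + beta * omega(u))

def hilbert_real(a, b):
    """Hilbert symbol over R: -1 iff both arguments are negative."""
    return -1 if (a < 0 and b < 0) else 1

def prime_factors(n):
    n, ps, q = abs(n), set(), 2
    while q * q <= n:
        while n % q == 0:
            ps.add(q)
            n //= q
        q += 1
    if n > 1:
        ps.add(n)
    return ps

def product_over_places(a, b):
    """prod_v (a, b)_v over v = infinity and the primes dividing 2ab;
    the symbol equals +1 at every other place."""
    result = hilbert_real(a, b)
    for p in {2} | prime_factors(a) | prime_factors(b):
        result *= hilbert(a, b, p)
    return result

# Hilbert's product formula: the product over all places is 1
for a, b in [(-1, -1), (2, 5), (3, -7), (30, -42)]:
    assert product_over_places(a, b) == 1
print("product formula holds")
```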

**If $(V, Q), (V', Q')$ are two non-degenerate quadratic spaces over $\mathbb{Q}$, then they are isomorphic if and only if, for each $v$, $(V_v, Q_v)$ and $(V'_v, Q'_v)$ are isomorphic.**

One direction being obvious, suppose that for each $v$ the induced quadratic spaces are isomorphic. We can use induction on $n$. For the case $n = 1$, we have that $d(Q) = d(Q')$ in all $\mathbb{Q}_v^*/(\mathbb{Q}_v^*)^2$. This means that they are equal in $\mathbb{Q}^*/(\mathbb{Q}^*)^2$ (a rational number that is a square in every $\mathbb{Q}_p$ has even exponents in its prime factorization, and being a square in $\mathbb{R}$ it is positive, so it is a square in $\mathbb{Q}$). So, it is clear that these quadratic spaces are isomorphic. In general, since these quadratic spaces are non-degenerate, there is $a \in \mathbb{Q}^*$ which is represented by $Q$. So, each $Q_v$ represents $a$, and since $Q_v$ and $Q'_v$ are isomorphic, $Q'_v$ represents $a$, too. So, if we can show that this implies that $Q'$ represents $a$ (this will be done using the following theorem), then we see that both quadratic spaces have decompositions $Q = aZ^2 \hat{\oplus} Q_1$, $Q' = aZ^2 \hat{\oplus} Q'_1$. These decompositions work for the induced forms, too: $Q_v = aZ^2 \hat{\oplus} (Q_1)_v$, and likewise for $Q'$. So, we have that (using a theorem of Witt, which says that for two isomorphic non-degenerate quadratic spaces $(V, Q), (V', Q')$, if $U \subset V$ is a subspace and $\sigma: U \to V'$ is an injective metric-preserving linear map, then $\sigma$ can be extended to an isomorphism of $V$ onto $V'$ — in particular, we may cancel the common summand $aZ^2$) $(Q_1)_v$ and $(Q'_1)_v$ are isomorphic for each $v$. Then by induction, $Q_1$ and $Q'_1$ are isomorphic, and thus so are $Q$ and $Q'$, completing the proof of the theorem. The theorem we used above is as follows:

**(Hasse-Minkowski) Suppose that $(V, Q)$ is a quadratic space over $\mathbb{Q}$; then $Q$ represents $0$ if and only if $Q_v$ represents $0$ for each $v$. Consequently (applying this to $Q - aZ^2$), $Q$ represents $a \in \mathbb{Q}^*$ if and only if each $Q_v$ represents $a$.**

This theorem is sometimes called the local-global principle of Hasse-Minkowski, which means that we can lift the local information (from these local fields) to the global field. Note that this theorem doesn’t work in higher degrees: if $P$ is a homogeneous polynomial of degree greater than $2$ with coefficients in $\mathbb{Q}$, then $P$ representing $0$ in each $\mathbb{Q}_v$ doesn’t imply that $P$ represents $0$ in $\mathbb{Q}$. One example is Selmer’s $3X^3 + 4Y^3 + 5Z^3$. We will not prove this theorem here. Next we state a simple consequence of the above classification theorem, in terms of the invariants defined above:

**Suppose that $(V, Q), (V', Q')$ are two non-degenerate quadratic spaces over $\mathbb{Q}$; then they are isomorphic if and only if $d(Q) = d(Q')$ in $\mathbb{Q}^*/(\mathbb{Q}^*)^2$, they have the same signature over $\mathbb{R}$, and $\epsilon(Q_v) = \epsilon(Q'_v)$ for each $v$.**

So, up to here, we have classified all the quadratic spaces over $\mathbb{Q}_p$, $\mathbb{R}$ and $\mathbb{Q}$.