# Quadratic forms, part 3

After five days' travel in Germany, I should continue writing my posts. This post is concerned with quadratic forms, especially those over the $p$-adic fields. It is mainly based on the book 'A Course in Arithmetic' by Jean-Pierre Serre.

Quadratic forms can be viewed from two different points of view. The first is as a map $f:V\rightarrow K$ from a vector space to its ground field satisfying some additional conditions. The other is as a homogeneous polynomial of degree $2$. The translation from one to the other is straightforward, but sometimes one point of view is easier to work with than the other. We should keep this in mind and move freely between them.

First of all, some definitions. We denote by $K$ a field of characteristic $char(K)\neq 2$, and by $V$ a vector space over $K$. We say that a map $Q:V\rightarrow K$ is a quadratic form if $Q(kv)=k^2Q(v)$ for all $k\in K,v\in V$, and $(v,u)\mapsto Q(v+u)-Q(v)-Q(u)$ is a bilinear form on $V$. We call $\langle v,u\rangle=\frac{1}{2}(Q(v+u)-Q(v)-Q(u))$ the bilinear form associated to the quadratic form $Q$ (that is why we require that the characteristic of $K$ be different from $2$). It is clear that this bilinear form is symmetric, and the corresponding matrix (which we also write as $Q$ when there is no confusion) is a symmetric matrix. So, in this way, we give the vector space $V$ an additional structure, $Q$, and we should view the pair $(V,Q)$ as one single object. We call it a quadratic space. Then, we should try to create a category. The objects of this category have been determined; next we should find the morphisms. Suppose that $(V,Q),(V',Q')$ are two quadratic spaces over the same field $K$. We call a linear map $f:V\rightarrow V'$ a morphism between these two quadratic spaces if it preserves the quadratic forms, i.e. $Q(x)=Q'(f(x))$. Clearly this makes the quadratic spaces into a category. Note that, if $V=V'$ and $Q=Q'$, then $f$ is just a transformation preserving the form. If we restrict to the case $\dim(V)<\infty$, then we see easily that the matrices satisfy $Q=F^{T}Q'F$, where $F$ is the matrix of $f$ under some chosen bases. So $Q$ and $Q'$ share many invariants as matrices. And if $f$ is a bijection, then $f$ is an isomorphism between these two quadratic spaces, and the two matrices $Q,Q'$ have even more in common. For example, $\det(Q)=\det(F)^2\det(Q')$, so their determinants differ by only a square, and thus $\det(Q)$ is well defined on quadratic spaces up to isomorphism, taking values in $K/(K^{\times 2})$. We call it the discriminant and denote it by $disc(Q)$.
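As a quick numeric sanity check (my own illustration, not from Serre's book), we can verify the change-of-basis relation and the square-class invariance of the determinant on a small example:

```python
# If f has matrix F, the Gram matrices satisfy Q = F^T Q' F, hence
# det(Q) = det(F)^2 det(Q'): the determinant is well defined up to a square.
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def det2(A):  # determinant of a 2 x 2 matrix
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

Qp = [[Fraction(1), Fraction(2)], [Fraction(2), Fraction(-3)]]  # Gram matrix of Q'
F = [[Fraction(1), Fraction(1)], [Fraction(0), Fraction(2)]]    # an invertible F
Q = matmul(transpose(F), matmul(Qp, F))                         # Gram matrix of Q
assert det2(Q) == det2(F) ** 2 * det2(Qp) == -28
```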

We have encountered many quadratic spaces before, for example the Euclidean spaces: the inner product is clearly induced by a quadratic form. In Euclidean spaces we have a notion of orthogonality, and we can talk about this concept in general as well. That is, if for $v,u\in V$ we have $\langle v,u\rangle=0$, then we say that $v$ is orthogonal to $u$ (and $u$ is orthogonal to $v$, too). And for any subspace $H\subset V$, we set $H^0=\{v\in V\mid\langle v,h\rangle=0,\forall h\in H\}$. In the Euclidean case, this is just the orthogonal complement of $H$. There we always have $V^0=0$, yet in general this is not the case. If indeed we have $V^0=0$, we say that $(V,Q)$ is non-degenerate; if not, we say that $(V,Q)$ is degenerate. So, the Euclidean spaces are all non-degenerate. Moreover, if two subspaces $V_1,V_2$ satisfy $\langle v_1,v_2\rangle=0$ for all $v_1\in V_1,v_2\in V_2$, we say that $V_1,V_2$ are orthogonal. Just as in linear algebra, where we decompose a space into a direct sum of subspaces, we have an analogous concept here. That is, if $V_1,...,V_n$ are subspaces of $V$, orthogonal to one another, with $V=V_1\bigoplus V_2\bigoplus...\bigoplus V_n$, then we say that $V$ is the orthogonal direct sum of $V_1,...,V_n$ and write $V=V_1\bigoplus'...\bigoplus' V_n$. The last important concept is that of isotropic vectors. A non-zero vector $0\neq v\in V$ is an isotropic vector if $Q(v)=0$. What does this concept mean? Suppose that $v=e_1$ extends to a basis $(e_1,...,e_n)$ of $V$, and under this basis the form writes as $\langle\sum a_ie_i,\sum b_je_j\rangle=\sum_{ij}q_{ij}a_ib_j$ with $a_i,b_j,q_{ij}\in K$. Then, since $Q(e_1)=0$, we have $q_{11}=0$. This reminds us of a particular type of homogeneous polynomial, $p(x,y)=xy$, which has no $x^2,y^2$ terms. Clearly, the equation $p(x,y)=a$ determines a hyperbola.
If instead we have $p(x,y)=xy+y^2$, then, noting that $p(x,y)=(x+y)y$, we can make the coordinate change $x'=x+y,y'=y$, and we still have $p'(x',y')=x'y'=(x+y)y=p(x,y)$. In fact, we can always do something similar, that is:

If $v\in V$ is an isotropic vector of $V$ (we suppose that $\dim(V)>1$, and that the quadratic space is non-degenerate), then there exists another isotropic vector $u\in V$, linearly independent of $v$, such that $\langle v,u\rangle\neq 0$.

It is easy to find 'counter-examples' if we don't assume that $(V,Q)$ is non-degenerate, for example when $Q(u)=0$ for all $u\in V$. So, this assumption is very important. It means that there exists some element $u\in V$ such that $\langle v,u\rangle\neq 0$ (and it is easy to see that these two vectors are then linearly independent). Yet this $u$ is not necessarily isotropic. But we can modify it a little bit: set $u'=u+kv$ with $k\in K$, then $\langle u',u'\rangle=\langle u,u\rangle+2k\langle u,v\rangle+k^2\langle v,v\rangle$. Luckily we have $\langle v,v\rangle=0$, so this is not, in fact, a quadratic polynomial in $k$, but only a linear equation, and since $\langle u,v\rangle\neq 0$, such a $k$ always exists. Note also that $\langle v,u'\rangle=\langle v,u\rangle\neq0$, so we can take such a $u'$ as a solution. So we see that if $(V,Q)$ is non-degenerate, then either it has no isotropic vectors, or it has a subspace of dimension $2$ on which the restriction of $Q$ has the form $\langle a_1e_1+a_2e_2,b_1e_1+b_2e_2\rangle=(a_1b_2+a_2b_1)/2$, or in other words $Q(a_1e_1+a_2e_2)=a_1a_2$, just as above.
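The little linear-equation argument can be checked concretely; here is a sketch with the form $Q(x,y)=x^2-y^2$ over $\mathbb{Q}$ (my own toy example, not from the book):

```python
# For Q(x, y) = x^2 - y^2 with associated bilinear form B, the vector
# v = (1, 1) is isotropic; u = (1, 0) pairs non-trivially with v, and solving
# the linear equation in k makes u' = u + k v isotropic as well.
from fractions import Fraction

def B(x, y):  # bilinear form of Q(x, y) = x^2 - y^2
    return x[0] * y[0] - x[1] * y[1]

v = (Fraction(1), Fraction(1))            # B(v, v) = 0: isotropic
u = (Fraction(1), Fraction(0))            # B(v, u) = 1 != 0, but u is not isotropic
k = -B(u, u) / (2 * B(v, u))              # solve <u',u'> = <u,u> + 2k<u,v> = 0
u2 = (u[0] + k * v[0], u[1] + k * v[1])
assert B(v, v) == 0 and B(u2, u2) == 0 and B(v, u2) != 0
```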

Another important result concerning non-degenerate quadratic forms is the following. Suppose that $(V,Q)$ is non-degenerate, $(V',Q')$ is another quadratic space, and $f:(V,Q)\rightarrow(V',Q')$ is a morphism. If $f(v)=0$, then $\langle f(v),f(u)\rangle'=0$ for all $u\in V$; but since $f$ preserves the quadratic forms, and hence (by polarization) the bilinear forms, this gives $\langle v,u\rangle=0$ for all $u\in V$, i.e. $v\in V^0$. Yet we have assumed that $(V,Q)$ is non-degenerate, so we must have $v=0$. What does this mean? It means that no non-zero vector of $V$ can be sent to $0$; in other words, $f$ is injective as a linear map from $V$ to $V'$. This, in some sense, characterizes the non-degeneracy of $(V,Q)$: if for every $(V',Q')$ and every morphism $f:(V,Q)\rightarrow(V',Q')$, $f$ is injective, then $(V,Q)$ is non-degenerate, and vice versa. This is not hard to prove. If $(V,Q)$ is degenerate, then we can choose any subspace $U$ complementary to $V^0$, so that $V=U\bigoplus' V^0$, and let $f: V\rightarrow U$ be the projection. Then for any $u+v,u'+v'\in U\bigoplus' V^0$, we have $\langle u+v,u'+v'\rangle=\langle u,u'\rangle=\langle f(u+v),f(u'+v')\rangle$. So $f$ is a morphism, yet it is not injective.
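A minimal instance of the degenerate direction (again my own example): on $V=K^2$ with $Q(x,y)=x^2$, the radical $V^0$ is spanned by $(0,1)$, and projecting it away is a morphism that is not injective:

```python
# Degenerate form on K^2: Q(x, y) = x^2, radical V^0 = {(0, y)}.
def Q(v):
    return v[0] ** 2

def f(v):           # projection V -> U along V^0, a morphism to (K, x^2)
    return v[0]

assert Q((0, 5)) == 0                      # (0, 5) lies in V^0
assert f((0, 5)) == 0 == f((0, 7))         # f is not injective
assert all(Q(v) == f(v) ** 2 for v in [(1, 2), (3, -4), (0, 5)])  # f preserves Q
```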

Now we restrict ourselves to the case $K=\mathbb{Q}_p$, where $p$ is a prime number. We suppose that $(V,Q)$ is a quadratic space over $K$, non-degenerate, of finite dimension. Note that, as we have said above, $disc(Q)$ is an element defined up to multiplication by a square, thus an element of $K/K^{\times2}$. What is more, since $(V,Q)$ is non-degenerate, the matrix associated to $Q$ is non-degenerate, thus $disc(Q)=\det(Q)\neq 0$. So, $disc(Q)\in K^{\times}/K^{\times2}$. We shall see that it is an invariant of $(V,Q)$ (we have seen that if two quadratic spaces are isomorphic, then their discriminants are equal; we will show that, together with the invariant below, the converse is true, too). If we choose an orthogonal basis $e=(e_1,...,e_n)$ for $V$ (with $n=\dim_K(V)$) and set $a_i=\langle e_i,e_i\rangle$, we define $\epsilon(e)=\prod_{i<j}(a_i,a_j)$, where $(a,b)$ for $a,b\in K^{\times}$ is the Hilbert symbol in $K$. We will show that this quantity doesn't depend on the choice of the orthogonal basis, that is:

If $e,e'$ are two orthogonal bases for $(V,Q)$, then $\epsilon(e)=\epsilon(e')$.

We start from a simple case: $e$ and $e'$ share an element, say $e_1=e_1'$. Then we have $a_1=a_1'$. What is more, it is easy to see that $disc(Q)=\prod_ia_i=\prod_ia_i'$ in $K^{\times}/K^{\times2}$. Thus $\epsilon(e)=\prod_{i>1}(a_1,a_i)\prod_{1<i<j}(a_i,a_j)$, and similarly for $\epsilon(e')$. But $\prod_{i>1}(a_1,a_i)=(a_1,\prod_{i>1}a_i)=(a_1,a_1^{-1}disc(Q))$, and for $e'$ we likewise have $\prod_{i>1}(a_1',a_i')=(a_1',\prod_{i>1}a_i')=(a_1',a_1'^{-1}disc(Q))$. Since $a_1=a_1'$, these two factors agree, and it remains to compare $\prod_{1<i<j}(a_i,a_j)$ with $\prod_{1<i<j}(a_i',a_j')$. For this we use induction on the dimension of $V$: the subspace $U$ generated by $e_2,...,e_n$ is the same as the subspace generated by $e_2',...,e_n'$ (expand $e_i$ for $i>1$ in terms of $e_1',...,e_n'$; the coefficient of $e_1'$ is $\langle e_i,e_1'\rangle/a_1'=0$, showing that $e_i\in U'$, and vice versa). As for the first steps of the induction: when $n=1$, we always have $\epsilon(e)=1=\epsilon(e')$; when $n=2$, we have $\epsilon(e)=(a_1,a_2)$, so $\epsilon(e)=1$ is equivalent to saying that $a_1x^2+a_2y^2=z^2$ has non-trivial solutions, and since $a_1=\langle e_1,e_1\rangle,a_2=\langle e_2,e_2\rangle$, this says that $Q(xe_1+ye_2)=z^2$ has non-trivial solutions, which doesn't depend on the choice of basis; so for these two cases we are done. The induction thus proves the result whenever $e$ and $e'$ share an element. So, we have to show that for $(V,Q)$ of dimension $n>2$ we can always create the situation of the proof above, namely that $e,e'$ share an element. Indeed, we can prove:

If $(V,Q)$ is a non-degenerate space of dimension $n>2$, and $e,e'$ are two orthogonal bases, then we can find a sequence of orthogonal bases $e^0=e,e^1,...,e^m=e'$ such that $e^i$ and $e^{i+1}$ share an element for each $i=0,1,...,m-1$ (this common element may well depend on $i$).

What does this result mean? It means that, for any two orthogonal bases, we can transform one into the other by a sequence of operations of the following type: we fix one element of the basis and rotate the whole space around it, obtaining a new basis with at least one element in common with the previous one (yes, one of these common elements is just the fixed one). This reminds us of the Euler angles; it is exactly the same process. Note that this result is not true for spaces of dimension $2$ (just consider the Euclidean plane), so the hypothesis $n>2$ is important here. That is also why we treated the cases $n=1,2$ separately in the above proof. Suppose first that $e_1,e_1'$ satisfy $\langle e_1,e_1\rangle\langle e_1',e_1'\rangle-\langle e_1,e_1'\rangle^2\neq 0$ (which implies that $e_1,e_1'$ are linearly independent, but not vice versa). Then the plane $P$ generated by $e_1,e_1'$ (which is non-degenerate thanks to this inequality) and its orthogonal complement $P^0$ satisfy $P\bigoplus'P^0=V$. Indeed, $P^0$ is just the kernel of the map $f: V\rightarrow P^*,v\mapsto(u\mapsto \langle v,u\rangle)$. Now $f$ is the composition of $V\rightarrow V^*$ and $V^*\rightarrow P^*$: the first is an injection (as $V$ is non-degenerate), hence a surjection since both spaces have the same dimension, and the second is surjective since any linear form on $P$ extends to one on $V$. So $f$ is surjective, and the exact sequence $0\rightarrow P^0\rightarrow V\rightarrow P^*\rightarrow0$ shows that $\dim(V)=\dim(P^0)+\dim(P^*)=\dim(P^0)+\dim(P)$. Noting that $P$ is non-degenerate, the orthogonal direct sum $P\bigoplus'P^0$ makes sense, and by counting dimensions the subspace $P\bigoplus' P^0$ is all of $V$. Then we choose an orthogonal basis for $P^0$, and complete it by an orthogonal basis of $P$ containing either $e_1$ or $e_1'$ (this is possible since $P$ is non-degenerate and $\langle e_1,e_1\rangle\neq0$, $\langle e_1',e_1'\rangle\neq0$). We obtain two orthogonal bases of $V$ sharing the basis of $P^0$, one containing $e_1$ and the other containing $e_1'$; chaining $e$, these two bases, and $e'$ settles this first case.
We have used substantially the fact that $P$ is non-degenerate, and that $P^0$ is non-trivial, which is implied by the assumption $\dim(V)>2$. Now suppose that for both $i=1,2$ we have $\langle e_1,e_1\rangle\langle e_i',e_i'\rangle-\langle e_1,e_i'\rangle^2=0$. Then we try to find a vector $v_k=e_1'+ke_2'$ ($k\in K$) such that $h(k)=\langle e_1,e_1\rangle\langle v_k,v_k\rangle-\langle e_1,v_k\rangle^2$ is non-zero and $\langle v_k,v_k\rangle\neq0$ as well. For the first condition, expanding and using the two assumed equalities, the constant and $k^2$ terms cancel and $h(k)=-2k\langle e_1,e_1'\rangle\langle e_1,e_2'\rangle=ak$. It is easy to see that $a\neq 0$, since $char(K)\neq 2$ and $\langle e_1,e_i'\rangle^2=\langle e_1,e_1\rangle\langle e_i',e_i'\rangle\neq0$. As for the second condition, we need $g(k)=\langle e_1',e_1'\rangle+k^2\langle e_2',e_2'\rangle\neq 0$. There is only one value of $k$ with $h(k)=0$, and at most two values of $k$ with $g(k)=0$, so any field with more than three elements admits a good $k$; since $char(K)\neq2$, we only have to consider the case $K=\mathbb{F}_3$, the field of three elements. In this case, note that the assumption implies $\langle e_1,e_1\rangle\langle e_1',e_1'\rangle=\langle e_1,e_1'\rangle^2=1$ (the non-zero squares in $\mathbb{F}_3$ are all $1$), and similarly $\langle e_1,e_1\rangle\langle e_2',e_2'\rangle=1$, so $\langle e_1',e_1'\rangle/\langle e_2',e_2'\rangle=1$. Thus, in order that $h(k)\neq0\neq g(k)$, we need $k\neq 0$ and $k^2\neq -1$, and $k=1$ satisfies both requirements. With this $v_k$, we extend it to an orthogonal basis $(v_k,u_k)$ of the subspace generated by $e_1',e_2'$. Hence the new orthogonal basis $(v_k,u_k,e_3',...,e_n')$ has a common element with $e'$ (for example $e_3'$; here we use $n>2$ again), and $e_1,v_k$ satisfy $\langle e_1,e_1\rangle\langle v_k,v_k\rangle-\langle e_1,v_k\rangle^2\neq0$, so we can proceed as in the first case. So, we have proven this lemma, and with it the whole theorem that $\epsilon(e)$ in fact doesn't depend on $e$. We can therefore write it as $\epsilon(Q)$.
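Since $\epsilon(e)$ is basis-independent, we can test it numerically. A minimal sketch (my own code; it assumes the standard formula for the Hilbert symbol at an odd prime, and $p=2$ would require a separate formula):

```python
# Hilbert symbol (a, b)_p at an odd prime p: writing a = p^alpha * u and
# b = p^beta * v with u, v units, one has
# (a, b)_p = (-1)^(alpha*beta*(p-1)/2) * (u/p)^beta * (v/p)^alpha,
# where (u/p) is the Legendre symbol.
def legendre(u, p):
    r = pow(u % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def hilbert(a, b, p):
    alpha = beta = 0
    while a % p == 0:
        a //= p; alpha += 1
    while b % p == 0:
        b //= p; beta += 1
    sign = (-1) ** (alpha * beta * ((p - 1) // 2))
    return sign * legendre(a, p) ** beta * legendre(b, p) ** alpha

def epsilon(diag, p):
    # epsilon(e) = prod_{i < j} (a_i, a_j)_p for a diagonalization (a_1, ..., a_n)
    e = 1
    for i in range(len(diag)):
        for j in range(i + 1, len(diag)):
            e *= hilbert(diag[i], diag[j], p)
    return e

# Q(x, y) = 3x^2 + 3y^2 over Q_3: the standard basis gives the diagonalization
# (3, 3), while the orthogonal basis (1, 1), (1, -1) gives (6, 6); epsilon agrees.
assert epsilon([3, 3], 3) == epsilon([6, 6], 3) == -1
```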

Before we consider the classification theorems for quadratic spaces, we need to ask whether a quadratic form can represent a given element. For any $k\in K$, we say that the quadratic form $Q$ on $V$ represents $k$ if there is a non-zero vector $v\in V$ such that $Q(v)=k$. This is a natural definition, yet it is not so easy to tell whether a quadratic form can indeed represent a given element. First, we consider the representation of $0$. That is:

Let $(V,Q)$ be a non-degenerate quadratic space of dimension $n$, with $d=disc(Q)$ and $\epsilon=\epsilon(Q)$ defined as above. Then $Q$ represents $0$ if and only if: (1) $n=2$ and $d=-1$ (always in $K^{\times}/K^{\times2}$); (2) $n=3$ and $(-1,-d)=\epsilon$; (3) $n=4$ and either $d\neq1$, or $d=1$ and $\epsilon=(-1,-1)$; (4) $n>4$.

In other words, the above conditions say when $(V,Q)$ has an isotropic vector. In order to relate representing $0$ to representing other numbers, we introduce a useful notation: if $(V,Q),(V',Q')$ are two quadratic spaces, then we define another quadratic space $(V\bigoplus V',S)$ by $S:V\bigoplus V'\rightarrow K,(v,v')\mapsto Q(v)+Q'(v')$ (it is indeed a quadratic space), and we write $S=Q\bigoplus Q'$. Now, if $(V,Q)$ is a non-degenerate quadratic space and $k\in K$ is non-zero, consider $(V\bigoplus K,Q\bigoplus Q')$, where $Q'$ is the quadratic form on $K$ given by $Q'(k')=-kk'^2$. Then $Q\bigoplus Q'$ represents $0$ if and only if $Q$ represents $k$. Indeed, if $S=Q\bigoplus Q'$ represents $0$, there exists $(v,k')\neq0$ such that $S(v,k')=Q(v)-kk'^2=0$. If $k'\neq 0$, then $Q(\frac{1}{k'}v)=k$, so $Q$ represents $k$. If $k'=0$, then $Q$ represents $0$, which means that $(V,Q)$ has an isotropic vector $v$; then, according to the above result, there exists another isotropic vector $u\in V$ such that $\langle v,u\rangle\neq0$. Writing $a=\langle v,u\rangle$, we get $Q(v+k'u)=Q(v)+Q(k'u)+2k'\langle v,u\rangle=2k'a$. Since $a\neq0$, we can take $k'=\frac{k}{2a}$, and then the vector $v+k'u\neq0$ represents $k$. (The converse direction is clear.) Noting also that $disc(S)=-k\,disc(Q)$ and $\epsilon(S)=(-k,disc(Q))\epsilon(Q)$, and using the results above, we have:

If $k\in K^{\times}$, then $(V,Q)$ represents $k$ if and only if: (1) $n=1$ and $k=disc(Q)$ (in $K^{\times}/K^{\times2}$); (2) $n=2$ and $(k,-disc(Q))=\epsilon(Q)$; (3) $n=3$ and either $-kd\neq1$, or $-kd=1$ and $\epsilon(Q)=(-1,-disc(Q))$; (4) $n>3$.

The only non-trivial case is condition (2), and it is not hard to check. $S$ represents $0$ with $\dim=3$ if and only if $\epsilon(S)=(-1,-disc(S))$, that is, $(-k,disc(Q))\epsilon(Q)=(-1,k\,disc(Q))$. Writing $d=disc(Q)$, this gives $\epsilon(Q)=(-1,kd)(-k,d)=(-1,kd)(-k,kd)=(k,kd)=(k,k)(k,d)=(k,-1)(k,d)=(k,-d)$, where we used $(-k,k)=1$ and $(k,k)=(k,-1)$.

These two results give directly the classification of quadratic spaces over $K=\mathbb{Q}_p$. That is

Two non-degenerate quadratic spaces $(V,Q),(V',Q')$ are isomorphic if and only if they have the same dimension, the same discriminant(in $K^{\times}/K^{\times2}$) and the same $\epsilon$(that is $\epsilon(Q)=\epsilon(Q')$).

At first glance, this theorem has nothing to do with the above results. Perhaps it becomes clearer with the following fact: a quadratic space $(V,Q)$ represents $k\in K^{\times}$ if and only if there exists a decomposition of $(V,Q)$ as $(V'\bigoplus K,Q'\bigoplus S)$, where $S:K\rightarrow K,k'\mapsto kk'^2$. The proof is not difficult. Indeed, if $(V,Q)$ represents $k$, then there is a non-zero vector $v\in V$ such that $\langle v,v\rangle=Q(v)=k$. Extend this $v$ to an orthogonal basis $(v,e_2,...,e_n)$ of $V$; then $V=Kv\bigoplus' Ke_2\bigoplus'...\bigoplus' Ke_n=K\bigoplus V'$, and the restriction of $Q$ to the subspace $Kv$ is indeed the form required. The other direction is obvious. With this lemma, we see that if two quadratic spaces have the three invariants $(n,d,\epsilon)$ in common, then they represent the same set of elements of $K$. Since they are non-degenerate, they represent at least one non-zero number $k\in K^{\times}$. Then they both have decompositions $(V,Q)=(V_1\bigoplus K,Q_1\bigoplus S_1)$, $(V',Q')=(V_1'\bigoplus K,Q_1'\bigoplus S_1')$, where $S_1,S_1':K\rightarrow K,k'\mapsto kk'^2$. Then $(V_1,Q_1)$ and $(V_1',Q_1')$ again have the same three invariants (indeed $disc(Q_1)=k\,disc(Q)$ in $K^{\times}/K^{\times2}$ and $\epsilon(Q_1)=\epsilon(Q)(k,disc(Q_1))$, and similarly for $Q_1'$). So we can use induction on the dimension of $V$ or $V'$. The first step, $n=1$, is easy: after choosing the basis $1$ for $V=K$ and $V'=K$, we have $Q:K\rightarrow K,k'\mapsto disc(Q)k'^2$ and $Q':K\rightarrow K,k'\mapsto disc(Q')k'^2$. Since $disc(Q)=a^2disc(Q')$ for some $a\in K^{\times}$, we can define $f:V=K\rightarrow V'=K,k'\mapsto ak'$; then $Q'(f(k'))=Q'(ak')=a^2Q'(k')=a^2disc(Q')k'^2=disc(Q)k'^2=Q(k')$, so $(V,Q)$ and $(V',Q')$ are isomorphic. Thus we have proved this classification theorem for quadratic spaces over the $p$-adic fields.

Now it remains to prove the important zero-representation theorem, which we do case by case. First, a general fact that will be useful: every quadratic space $(V,Q)$ has an orthogonal basis. This is proved by induction on $\dim(V)$. When $\dim(V)=1$, it is trivial. If $Q$ is identically zero on $V$, any basis is orthogonal. Otherwise, there is a vector $v\in V$ such that $Q(v)\neq0$; then the orthogonal complement $U^0$ of $U=Kv$ is a proper subspace, since at least $v\not\in U^0$, and by counting dimensions as above we have $V=U\bigoplus' U^0$. Applying the induction hypothesis to $U^0$, which thus has an orthogonal basis, we get one for $V$.

First we treat the case $n=2$. Find an orthogonal basis $(e_1,e_2)$ for $(V,Q)$; under this basis, $h(a,b)=Q(ae_1+be_2)=a^2p_1+b^2p_2$. If $Q$ represents $0$, there exists a non-trivial pair $(a,b)$ such that $h(a,b)=0$. Supposing $a\neq0$, this means $p_1=-p_2(b/a)^2$. Yet $disc(Q)=p_1p_2$, so $disc(Q)=-p_2^2(b/a)^2$, which is $-1$ in $K^{\times}/K^{\times2}$. Conversely, if $disc(Q)=-1$ in $K^{\times}/K^{\times2}$, then $p_1p_2=-c^2$ for some $c\in K^{\times}$, i.e. $p_1=-(c/p_2)^2p_2$, so $(a,b)=(1,c/p_2)\neq(0,0)$ gives $h(a,b)=p_1+(c/p_2)^2p_2=0$, showing that $Q$ represents $0$.

For the case $n=3$, we write the quadratic form as $h(x,y,z)=Q(xe_1+ye_2+ze_3)=ax^2+by^2+cz^2$, where $(e_1,e_2,e_3)$ is an orthogonal basis for $V$. The form represents $0$ if and only if $-\frac{a}{c}x^2-\frac{b}{c}y^2=z^2$ has non-trivial solutions, which, by the definition of the Hilbert symbol, says exactly that $(-a/c,-b/c)=1$. Note that $\epsilon(Q)=(a,b)(a,c)(b,c)=(ac,b)(a,c)=(db,b)(a,-ac)=(db,b)(-db,a)$, where $d=abc$ is the discriminant. Now $1=(-a/c,-b/c)=(-ac,-bc)=(-db,-da)=(-db,a)(-db,-d)=(-db,a)(-b,-d)=(-db,a)(-1,-d)(b,-d)=(-db,a)(-1,-d)(b,db)=\epsilon(Q)(-1,-d)$. So, we have $\epsilon(Q)=(-1,-d)$ in the case $n=3$ when $Q$ represents $0$.
It is obvious that each step above can be reversed, so when $\epsilon(Q)=(-1,-d)$, the form $Q$ represents $0$. For the case $n=4$, we have to use a trick. Write $h(x,y,z,w)=Q(xe_1+ye_2+ze_3+we_4)=ax^2+by^2+cz^2+dw^2$ (in this paragraph $d$ denotes a coefficient, not the discriminant). Then $Q$ represents $0$ if and only if the two binary forms $ax^2+by^2$ and $-(cz^2+dw^2)$ represent a common element $k\in K^{\times}$ (if one of these binary forms represents $0$, it is hyperbolic by the discussion of isotropic vectors above, hence represents every element of $K^{\times}$, so such a $k$ still exists). This in turn means that the quadratic forms $Q_1:K^3\rightarrow K,(x,y,s)\mapsto ax^2+by^2-ks^2$ and $Q_2:K^3\rightarrow K,(t,z,w)\mapsto kt^2+cz^2+dw^2$ both represent $0$. According to the case $n=3$, this is equivalent to $(-1,abk)=(a,b)(a,-k)(b,-k)$ and $(-1,-cdk)=(c,d)(c,k)(d,k)$; in other words, $(k,-ab)=(a,b)$ and $(k,-cd)=(-c,-d)$. We set $A=\{k\in K^{\times}/K^{\times2}\mid(k,-ab)=(a,b)\}$ and $B=\{k\in K^{\times}/K^{\times2}\mid(k,-cd)=(-c,-d)\}$. Since $a\in A$ and $-c\in B$, neither set is empty, and $Q$ represents $0$ if and only if $A\bigcap B\neq\emptyset$. Suppose then that they are disjoint. Note that $K^{\times}/K^{\times2}$ is a vector space over $\mathbb{F}_2$, say with $2q=2^r$ elements ($q,r\in\mathbb{N}$), and we have seen that the Hilbert symbol is a non-degenerate symmetric bilinear form on this $\mathbb{F}_2$-vector space. This means that for any $k'\in K^{\times}$, the linear map $f_{k'}:K^{\times}/K^{\times2}\rightarrow \mathbb{F}_2,x\mapsto (x,k')$ has a kernel of cardinality equal to either $q$ or $2q$. Now $A$ is either the kernel or the complement of the kernel of the map $f_{-ab}$ (depending on whether $(a,b)=1$ or not), and $B$ is in the same situation for the map $f_{-cd}$ (depending on whether $(-c,-d)=1$ or not).
Since $A$ and $B$ are non-empty and disjoint, they cannot both be kernels (both would then contain the class of $1$), nor can they both be complements of kernels (two such complements have $q$ elements each and avoid the class of $1$, so by counting they must meet). Hence one of $A,B$ is a kernel and the other is the complement of a kernel, and disjointness forces these two kernels to coincide. As a result, the two linear maps $f_{-ab},f_{-cd}$ are in fact equal, which means that $(x,-ab)=(x,-cd)$ for all $x\in K^{\times}$, while (since exactly one of the sets is a kernel) $(a,b)=-(-c,-d)$. Since the Hilbert symbol is non-degenerate, $-ab=-cd$ in $K^{\times}/K^{\times2}$, so $disc(Q)=abcd=1$ in $K^{\times}/K^{\times2}$. As for the relation $(a,b)=-(-c,-d)$, we compute $\epsilon(Q)=(a,b)(a,c)(a,d)(b,c)(b,d)(c,d)=(a,b)(c,d)(ab,cd)=-(-c,-d)(c,d)(-1,cd)=-(-1,-d)(-1,c)(-1,cd)=-(-1,-d)(-1,d)=-(-1,-1)$. So, if $Q$ does not represent $0$, then $disc(Q)=1$ and $\epsilon(Q)=-(-1,-1)$; conversely, if $Q$ represents $0$ while $disc(Q)=1$, the same computation with $(a,b)=(-c,-d)$ gives $\epsilon(Q)=(-1,-1)$. Hence $Q$ represents $0$ if and only if $disc(Q)\neq1$, or $disc(Q)=1$ with $\epsilon(Q)=(-1,-1)$. For the case $n=5$: we have shown that a non-degenerate quadratic space of dimension $2$ represents $k\in K^{\times}$ if and only if $(k,-disc)=\epsilon$, and since the map $k\mapsto(k,-disc)$ is either constant or takes each value on exactly half of $K^{\times}/K^{\times2}$, such a space represents at least $\frac{1}{2}\#(K^{\times}/K^{\times2})$ square classes. We have seen in the previous post that there are $4$ elements in $\mathbb{Q}_p^{\times}/\mathbb{Q}_p^{\times2}$ for $p\neq2$ and $8$ for $p=2$. Applying this to the rank-$2$ subform carried by $(e_1,e_2)$ for an orthogonal basis of $V$ (possible since $\dim(V)=5>2$), we can always find a class $a\in K^{\times}/K^{\times2}$ represented by $Q$ with $a\neq d$ in $K^{\times}/K^{\times2}$. Now we have a decomposition of $(V,Q)$, namely $(V,Q)=(V'\bigoplus K,Q'\bigoplus S)$, where $S:K\rightarrow K,k'\mapsto ak'^2$ and $Q'$ is non-degenerate on $V'$, whose dimension is $n-1=4$.
This time $disc(Q')=d/a\neq1$ in $K^{\times}/K^{\times2}$, so according to the case $n=4$, $Q'$ represents $0$, and hence so does $Q$. The cases $n>5$ reduce easily to the case $n=5$: the subform carried by $(e_1,...,e_5)$ is non-degenerate of dimension $5$, so it represents $0$, and so does $Q$.
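The inductive proof that every quadratic space has an orthogonal basis, used at the start of this argument, is effective. Here is a sketch in code (my own illustration, not from the book): symmetric row-and-column reduction of a rational Gram matrix, using the characteristic-not-$2$ trick of replacing $e_i$ by $e_i+e_j$ when the relevant diagonal entries vanish.

```python
# Congruence-diagonalization of a symmetric Gram matrix over Q, mirroring the
# induction in the text. Returns the diagonal coefficients a_i = <e_i, e_i>
# of an orthogonal basis.
from fractions import Fraction

def diagonalize(G):
    n = len(G)
    G = [[Fraction(x) for x in row] for row in G]
    for i in range(n):
        if G[i][i] == 0:
            for j in range(i + 1, n):
                if G[j][j] != 0:        # swap basis vectors e_i and e_j
                    G[i], G[j] = G[j], G[i]
                    for row in G:
                        row[i], row[j] = row[j], row[i]
                    break
            else:
                for j in range(i + 1, n):
                    if G[i][j] != 0:    # replace e_i by e_i + e_j: Q(e_i) becomes 2<e_i,e_j> != 0
                        for t in range(n):
                            G[i][t] += G[j][t]
                        for row in G:
                            row[i] += row[j]
                        break
        if G[i][i] == 0:
            continue                    # e_i is orthogonal to everything remaining
        for j in range(i + 1, n):       # clear the pivot's row and column
            c = G[i][j] / G[i][i]
            for t in range(n):
                G[j][t] -= c * G[i][t]
            for row in G:
                row[j] -= c * row[i]
    return [G[i][i] for i in range(n)]

# The form Q(x, y) = 2xy (Gram matrix [[0, 1], [1, 0]]) diagonalizes to
# (2, -1/2); the product is -1, matching the hyperbolic discriminant.
assert diagonalize([[0, 1], [1, 0]]) == [2, Fraction(-1, 2)]
assert diagonalize([[1, 2], [2, 1]]) == [1, -3]
```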

It is interesting to consider alongside this the quadratic spaces over the field of real numbers, $K=\mathbb{R}$. Things here are much simpler, and we will just state the results. Suppose that $(V,Q)$ is a non-degenerate quadratic space over $\mathbb{R}$. Then under some orthogonal basis $(e_1,...,e_n)$ we can write this quadratic form as $Q(\sum_ia_ie_i)=-a_1^2-...-a_m^2+a_{m+1}^2+...+a_n^2$, and we know that the two numbers $n,m$ determine this quadratic form up to isomorphism. Can we also express these quantities using the invariants above? Let's have a try. Here $\langle e_i,e_i\rangle=-1$ for $i\leq m$ and $1$ for $i>m$, so $disc(Q)=(-1)^{m}$, and $\epsilon(Q)=\prod_{i<j}(\langle e_i,e_i\rangle,\langle e_j,e_j\rangle)=(-1)^{\frac{m(m-1)}{2}}$ (over $\mathbb{R}$, $(a,b)=-1$ exactly when $a,b$ are both negative). Note that if there is another such quadratic form $Q'$ on $V$, this time with $m'\equiv m\pmod 4$, then $disc(Q)=disc(Q')$ and $\epsilon(Q)=\epsilon(Q')$. So the three quantities $n,disc(Q),\epsilon(Q)$ cannot determine the quadratic space $(V,Q)$ over $\mathbb{R}$; it is a pity. As for the zero-representation theorem over the real numbers, we see that $(V,Q)$ represents $0$ if and only if $0<m<n$. With this and the fact that $(-1,-1)=-1$, we have that for $n=2$, $Q$ represents $0$ iff $m=1$, iff $disc(Q)=(-1)^m=-1$. For $n=3$, $Q$ represents $0$ iff $m=1,2$; we can verify that this is equivalent to saying that $(-1,-disc(Q))=\epsilon(Q)$. For $n=4$, $Q$ represents $0$ iff $m=1,2,3$. If $disc(Q)=(-1)^m\neq1$, then $m\neq0,2,4$, thus $Q$ indeed represents $0$. If $disc(Q)=1$, then $m=0,2,4$; yet the condition $(-1,-1)=-1=\epsilon=(-1)^{m(m-1)/2}$ rules out $m=0,4$, so we must have $m=2$, and again $Q$ represents $0$. Thus we have verified that the first three conditions in the zero-representation theorem for $\mathbb{Q}_p$ also work for $\mathbb{R}$. Yet even for $n>4$ there are quadratic spaces over the real numbers that do not represent $0$, namely the definite ones ($m=0$ or $m=n$). So the last condition doesn't work here.
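The claim that $m\equiv m'\pmod 4$ forces equal invariants can be checked in a few lines (my own illustration):

```python
# Invariants of a real quadratic form of signature (m, n - m):
# disc(Q) = (-1)^m in R*/R*^2 = {±1}, and, since (a, b) = -1 over R exactly
# when a < 0 and b < 0, epsilon(Q) = (-1)^(m(m-1)/2).
def disc(m):
    return (-1) ** m

def eps(m):
    return (-1) ** (m * (m - 1) // 2)

# m = 1 and m' = 5 give the same pair (disc, eps), so for n >= 5 the triple
# (n, disc, eps) cannot distinguish these two non-isomorphic real forms.
assert (disc(1), eps(1)) == (disc(5), eps(5)) == (-1, 1)
assert disc(2) == 1 and eps(2) == -1
```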

We know that $\mathbb{Q}$ is a global field, while the $\mathbb{Q}_p$ ($p=2,3,5,...,\infty$; we set $\mathbb{Q}_{\infty}=\mathbb{R}$) are local fields. So if there is a quadratic space $(V,Q)$ over $\mathbb{Q}$, then setting $V_p=V\bigotimes_{\mathbb{Q}}\mathbb{Q}_p$, which is a vector space over $\mathbb{Q}_p$, we can induce from $Q$ a quadratic form $Q_p$ on $V_p$: just take $\langle v\otimes s,v'\otimes s'\rangle_{Q_p}=ss'\langle v,v'\rangle_{Q}$ and extend by linearity to the whole of $V_p$ (this is equivalent to defining a quadratic form). So, for each $(V_p,Q_p)$, we have $disc(Q_p)$, $\epsilon(Q_p)$ and $\dim_{\mathbb{Q}_p}V_p=\dim_{\mathbb{Q}}V=n$. It is not immediately clear what these quantities are. But if we consider $h(x_1,x_2,...,x_n)=Q(x_1e_1+...+x_ne_n)=a_1x_1^2+...+a_nx_n^2$ ($a_i,x_i\in \mathbb{Q}$) for an orthogonal basis $(e_1,...,e_n)$, then we see easily that $h_p(x_1,...,x_n)=Q_p(x_1e_1\bigotimes1+...+x_ne_n\bigotimes1)=a_1x_1^2+...+a_nx_n^2$ ($x_i\in\mathbb{Q}_p$), since $(e_1\bigotimes1,...,e_n\bigotimes1)$ is again an orthogonal basis of $V_p$. So, $disc(Q_p)=\prod_ia_i$ in $\mathbb{Q}_p^{\times}/\mathbb{Q}_p^{\times2}$, and $\epsilon(Q_p)=\prod_{i<j}(a_i,a_j)_p$, the Hilbert symbols now being computed in $\mathbb{Q}_p$. Then we can state the classification theorem for quadratic spaces over $\mathbb{Q}$:
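As a small worked example (my own choice of form, using the standard odd-$p$ Hilbert symbol formula; $p=2$ and the real place obey different formulas), we can compute these local invariants for the rational form $h(x,y,z)=x^2-2y^2+7z^2$:

```python
# Hilbert symbol (a, b)_p at an odd prime p: for a = p^alpha * u, b = p^beta * v
# with u, v units, (a,b)_p = (-1)^(alpha*beta*(p-1)/2) (u/p)^beta (v/p)^alpha.
def legendre(u, p):
    r = pow(u % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def hilbert(a, b, p):
    alpha = beta = 0
    while a % p == 0:
        a //= p; alpha += 1
    while b % p == 0:
        b //= p; beta += 1
    sign = (-1) ** (alpha * beta * ((p - 1) // 2))
    return sign * legendre(a, p) ** beta * legendre(b, p) ** alpha

def epsilon(diag, p):
    # epsilon(Q_p) = prod_{i < j} (a_i, a_j)_p for the diagonalization (a_1, ..., a_n)
    e = 1
    for i in range(len(diag)):
        for j in range(i + 1, len(diag)):
            e *= hilbert(diag[i], diag[j], p)
    return e

diag = [1, -2, 7]                 # h(x, y, z) = x^2 - 2y^2 + 7z^2
assert epsilon(diag, 3) == 1      # all coefficients are 3-adic units
assert epsilon(diag, 7) == -1     # (-2, 7)_7 = (-2 | 7) = -1
```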

If $(V,Q),(V',Q')$ are two non-degenerate quadratic spaces over $\mathbb{Q}$, then they are isomorphic if and only if $(V_p,Q_p)$ and $(V_p',Q_p')$ are isomorphic for each $p=2,3,5,...,\infty$.

Suppose that for each $p$, the induced quadratic spaces are isomorphic. We use induction on $n=\dim(V)=\dim(V')$. For the case $n=1$, we have $disc(Q)=disc(Q')$ in every $\mathbb{Q}_p^{\times}/\mathbb{Q}_p^{\times2}$, which implies that they are equal in $\mathbb{Q}^{\times}/\mathbb{Q}^{\times2}$ (a rational number that is a square in every $\mathbb{Q}_p$ is a square in $\mathbb{Q}$), so it is clear that these quadratic spaces are isomorphic. In general, since these quadratic spaces are non-degenerate, there is some $a\in \mathbb{Q}^{\times}$ represented by $Q$. So, each $Q_p$ represents $a\neq0$, and since $(V_p,Q_p)$ and $(V_p',Q_p')$ are isomorphic, $Q_p'$ represents $a$, too. So, if we can show that this implies that $Q'$ represents $a$ (this will be done using the following theorem), then both quadratic spaces have decompositions $(V,Q)=(U\bigoplus \mathbb{Q}, T\bigoplus S)$, $(V',Q')=(U'\bigoplus \mathbb{Q}, T'\bigoplus S')$ such that $S=S':\mathbb{Q}\rightarrow\mathbb{Q},x\mapsto ax^2$. These decompositions induce ones of $(V_p,Q_p),(V_p',Q_p')$, too. So, using a theorem of Witt (which says that for a non-degenerate quadratic space, any injective morphism defined on a subspace extends to an isomorphism of the whole space; in particular, isometric subspaces of isomorphic non-degenerate spaces have isomorphic orthogonal complements), we get that $(U_p,T_p)$ and $(U_p',T_p')$ are isomorphic for each $p$. Then, by induction, $(U,T)$ and $(U',T')$ are isomorphic, which completes the proof of the theorem. The theorem we used above is as follows:

(Hasse-Minkowski) Suppose that $(V,Q)$ is a quadratic space over $\mathbb{Q}$, then $Q$ represents $0$ if and only if $Q_p$ represents $0$ for each $p$.

This theorem is sometimes called the Hasse-Minkowski local-global principle, which means that we can lift local information (from the local fields) to the global field. Note that this theorem doesn't work in higher degrees: if $f$ is a homogeneous polynomial of degree greater than $2$ with coefficients in $\mathbb{Q}$, then $f$ representing $0$ in each $\mathbb{Q}_p$ doesn't imply that $f$ represents $0$ in $\mathbb{Q}$. One example is $f(x,y,z)=3x^3+4y^3+5z^3$ (due to Selmer). We will not prove this theorem here. Next we state a simple consequence of the above classification theorem in terms of the invariants defined above:
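For the record, the failure of a global zero in Selmer's example can at least be poked at numerically; a brute-force search (of course not a proof) finds no small non-trivial integer zero:

```python
# Selmer's cubic 3x^3 + 4y^3 + 5z^3 represents 0 in every Q_p and in R,
# but not in Q; a small search is consistent with that.
def selmer(x, y, z):
    return 3 * x ** 3 + 4 * y ** 3 + 5 * z ** 3

N = 20
solutions = [(x, y, z)
             for x in range(-N, N + 1)
             for y in range(-N, N + 1)
             for z in range(-N, N + 1)
             if selmer(x, y, z) == 0 and (x, y, z) != (0, 0, 0)]
assert solutions == []
```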

Suppose that $(V,Q),(V',Q')$ are two non-degenerate quadratic spaces over $\mathbb{Q}$. Then they are isomorphic if and only if $disc(Q)=disc(Q')$ (in $\mathbb{Q}^{\times}/\mathbb{Q}^{\times2}$), $n=n'$, $m=m'$ (the signatures over $\mathbb{R}$), and $\epsilon(Q_p)=\epsilon(Q_p')$ for each $p$.

So far, then, we have classified all the quadratic spaces over $\mathbb{Q}_p$, $\mathbb{R}$ and $\mathbb{Q}$.
