# positive definite functions

I first encountered positive definite functions long ago, probably during a probability course. Recently, while attending a seminar on Brownian motion, this concept kept recurring, and only then did I realize that I knew almost nothing about this kind of function except, perhaps, its definition.

The most natural way to introduce positive definite functions is perhaps to start from positive definite matrices. We all know what a positive definite matrix is; throughout, I assume that the matrices have real entries. Now consider a measure space $(\mathbb{R}^n,\mathfrak{B},d\mu)$, where $\mathfrak{B}$ is the Borel $\sigma$-algebra on $\mathbb{R}^n$ and $d\mu$ is a finite measure. Then for $i,j=1,2,\dots,n$ we can define

$a_i=\int_{\mathbb{R}^n}x_i\,d\mu,\qquad \mathrm{cov}(i,j)=\int_{\mathbb{R}^n}(x_i-a_i)(x_j-a_j)\,d\mu$

Of course, we should assume that these quantities all exist. Then the covariance matrix $Cov=(\mathrm{cov}(i,j))_{i,j}$ is positive semi-definite: for any real numbers $b_1,\dots,b_n$,

$\sum_{i,j}b_ib_j\,\mathrm{cov}(i,j)=\int_{\mathbb{R}^n}\Big(\sum_{i}b_i(x_i-a_i)\Big)^2\,d\mu\geq0.$
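This can also be checked numerically. Here is a small sketch in Python (NumPy assumed; the sampled distribution below is arbitrary, chosen just for illustration) that builds a covariance matrix from samples and verifies that its eigenvalues are all nonnegative:

```python
import numpy as np

# Draw samples from an arbitrary distribution on R^3, form the covariance
# matrix cov(i, j) = integral of (x_i - a_i)(x_j - a_j) dmu, and verify
# positive semi-definiteness by inspecting the eigenvalues.
rng = np.random.default_rng(0)
samples = rng.standard_normal((10_000, 3)) @ rng.standard_normal((3, 3))

a = samples.mean(axis=0)                      # the means a_i
centered = samples - a
cov = centered.T @ centered / len(samples)    # cov(i, j)

eigenvalues = np.linalg.eigvalsh(cov)         # real, since cov is symmetric
print(eigenvalues.min() >= -1e-12)            # True: all eigenvalues >= 0
```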

One can show that, conversely, for any positive semi-definite matrix $Q$ there is a measure $d\mu$ on $\mathbb{R}^n$ (for instance, a centered Gaussian) such that the above quantities exist and the covariance matrix is exactly $Q$.
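A minimal numerical sketch of this converse, assuming NumPy: take the centered Gaussian measure $N(0,Q)$, whose covariance matrix is exactly $Q$, and check this by sampling.

```python
import numpy as np

# Given a positive semi-definite Q, the centered Gaussian measure N(0, Q)
# has covariance matrix exactly Q. We verify this empirically by sampling.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
Q = A @ A.T                                       # any PSD matrix arises this way

samples = rng.multivariate_normal(np.zeros(3), Q, size=200_000)
empirical_cov = np.cov(samples, rowvar=False)

print(np.max(np.abs(empirical_cov - Q)) < 0.1)    # True, up to sampling error
```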

Then one can ask: what if the quantities $a_i,\mathrm{cov}(i,j)$ do not exist? This suggests a generalization. Note that the essential obstacle to their existence is that the functions $x_i$ and $(x_i-a_i)(x_j-a_j)$ are unbounded. Are there important bounded functions attached to a random variable, or simply to a finite measure? Yes, there are. One of them is the characteristic function of a random variable, in other words the Fourier transform of a finite measure:

$\hat{\mu}(x)=\int_{\mathbb{R}^n}e^{i\langle x,y\rangle}\,d\mu(y)$

where $\langle x,y\rangle$ is the inner product on $\mathbb{R}^n$.

How, then, can we translate the positive semi-definiteness of the covariance matrix into the language of characteristic functions? It is not so obvious, so I will state it directly. For any positive integer $k$, any complex numbers $b_1,b_2,\dots,b_k$ and any vectors $v_1,v_2,\dots,v_k\in\mathbb{R}^n$, we have that

$\sum_{i,j=1}^{k}b_i\bar{b_j}\int_{\mathbb{R}^n}e^{i\langle v_i-v_j,\,y\rangle}\,d\mu(y)\geq0$
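Why this holds: with the bar denoting complex conjugation, the integrand itself is a square modulus,

$\sum_{i,j=1}^{k}b_i\bar{b_j}\,e^{i\langle v_i-v_j,\,y\rangle}=\Big|\sum_{i=1}^{k}b_i\,e^{i\langle v_i,y\rangle}\Big|^2\geq0,$

and integrating a nonnegative function against the finite positive measure $d\mu$ preserves the inequality.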

This is precisely the definition of a positive definite function: a complex-valued function $f:\mathbb{R}^n\rightarrow \mathbb{C}$ is called positive semi-definite if for any positive integer $k$, any complex numbers $b_1,b_2,\dots,b_k$ and any vectors $v_1,v_2,\dots,v_k\in\mathbb{R}^n$, we have that

$\sum_{i,j}f(v_i-v_j)b_i\bar{b_j}\geq0$

(A little remark is in order: there is another route, in Bayesian analysis, which gives some more motivation for introducing positive (semi-)definite functions; cf. the Wikipedia article.) We drop the prefix ‘semi’ when equality holds only in the trivial case $b_1=b_2=\dots=b_k=0$ (the $v_i$ being distinct). So it is clear that the function $f(x)=\int_{\mathbb{R}^n} e^{i\langle x,y\rangle}\,d\mu(y)$ is a positive semi-definite function. If the measure $d\mu$ admits the quantities above, then one can show that $f$ is positive definite if and only if $Cov$ is a positive definite matrix. So, in this sense, positive definite functions are generalizations of positive definite matrices.
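As a concrete check of the definition (NumPy assumed): the Gaussian characteristic function $f(x)=e^{-|x|^2/2}$, the Fourier transform of the standard Gaussian measure, should make every matrix $(f(v_i-v_j))_{i,j}$ positive semi-definite, whatever points $v_1,\dots,v_k$ we pick.

```python
import numpy as np

# The Gaussian characteristic function f(x) = exp(-|x|^2 / 2) is positive
# semi-definite: for arbitrary points v_1, ..., v_k the matrix
# M = (f(v_i - v_j))_{i,j} has only nonnegative eigenvalues.
def f(x):
    return np.exp(-np.dot(x, x) / 2.0)

rng = np.random.default_rng(2)
v = rng.standard_normal((8, 4))               # k = 8 arbitrary points in R^4
M = np.array([[f(vi - vj) for vj in v] for vi in v])

print(np.linalg.eigvalsh(M).min() >= -1e-12)  # True: M is PSD
```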

There is a rather remarkable result which reverses the above process: Bochner’s theorem.

Theorem (Bochner). If $f$ is a continuous positive semi-definite function on $\mathbb{R}^n$, then there is a finite positive Borel measure $d\mu$ on $\mathbb{R}^n$ such that $f$ is the Fourier transform of $d\mu$.
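A worked instance of the theorem, in a case where the measure is known in advance: $f(x)=e^{-x^2/2}$ on $\mathbb{R}$ is continuous and positive semi-definite, and the corresponding measure is the standard Gaussian $N(0,1)$. The sketch below (NumPy assumed) checks numerically that the Fourier transform of $N(0,1)$ reproduces $f$.

```python
import numpy as np

# Bochner's theorem, worked example: the measure for f(x) = exp(-x^2/2)
# is the standard Gaussian N(0, 1). Approximate its Fourier transform by
# a Riemann sum and compare against f at a few points.
y = np.linspace(-10.0, 10.0, 20_001)
dy = y[1] - y[0]
density = np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)     # dmu = N(0, 1) dy

xs = np.array([0.0, 0.5, 1.0, 2.0])
fourier = np.array([np.sum(np.exp(1j * x * y) * density) * dy for x in xs])
error = np.max(np.abs(fourier - np.exp(-xs**2 / 2)))
print(error < 1e-6)                                  # True
```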

Note that Bochner’s theorem does not force $f(x)$ to tend to zero as $x$ tends to infinity: the constant function $f\equiv1$ is positive semi-definite, being the Fourier transform of the Dirac mass at the origin. Only when $d\mu$ has an integrable density does the Riemann–Lebesgue lemma give $f(x)\rightarrow0$ at infinity. Neither fact can be seen directly from the definition of positive definite functions.

Note that the definition of positive definite functions uses only the additive group structure of $\mathbb{R}^n$. This inspires us to generalize the domain of definition again, to an arbitrary group. That will be the content of the next post.