One of the most important abilities in the human learning process is perhaps pattern recognition. That is, when a picture is presented to a person, he or she can usually give it a place in the mind: this is a picture of buildings, this is a picture of a landscape, this is a picture of my friends, this is a picture of an animal, and so on. This is the very first step in the mind's information-managing process.
But this process is not at all clear to us. A digression is useful here. The reception of the information in a picture is really something microscopic: the photons, the photoreceptors, the retina, the neurons are all at the microscopic level. But the result of pattern recognition is something macroscopic. In other words, knowing what is what, and which belongs to which, happens at the mental level, a macroscopic level. We have to find a bridge between these two levels.

The most common models used in neuroscience today are perhaps still based on networks, viewing the neurons as vertices, the synapses as the connections, and so on. If these network models are to simulate something at the macroscopic level, we have to compare the network to something in physics. A certain quantity of liquid water can, in some sense, be viewed as a network too: each water molecule is a vertex, while the forces between molecules (Van der Waals) are the connections (or edges, in network terminology). To resemble the network of neurons, this quantity of water must be in a static state (macroscopically speaking), because most of the time the neurons do not move around; it is the synapses that change the states of the mind. So, loosely speaking, the network of neurons is almost always in or near a critical state (in other words, this network is always in or near a phase transition).
Phase transition, in physics, generally means that even though the whole material is in a macroscopically static state, its macroscopic physical properties change dramatically under some perturbation from the outside world. This sounds just like the behavior of the network of neurons: every time there is a stimulus from the exterior, the network of neurons gives some macroscopic result.
So, in some sense, all the difficulties in modeling the network of neurons are reduced to understanding the mechanisms of the phase transitions of neuron groups.
Now let us return to our original pattern recognition problem. This problem is a common difficulty among all the endeavors that aim to mimic the human mind, for example machine learning. In that field it is called the classification problem. There are two types of classification problem: supervised classification and unsupervised classification (more often called the clustering problem). The first means classifying things based on a set of things whose classes are already known, while the second means classifying things using only their innate properties.
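The distinction between the two settings can be made concrete with a toy sketch in plain Python. The 2-D points, the nearest-centroid rule for the supervised case, and the k-means procedure for the unsupervised case are my own illustrative choices, not something prescribed by the discussion above.

```python
import math

def dist(a, b):
    # Euclidean distance between two 2-D points.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def centroid(pts):
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))

# --- Supervised: the training points come with known class labels ---
train = {"left": [(0.0, 0.0), (1.0, 0.2)],
         "right": [(5.0, 5.0), (6.0, 4.8)]}

def classify(p):
    # Assign p to the class whose centroid is nearest.
    return min(train, key=lambda name: dist(p, centroid(train[name])))

# --- Unsupervised: only the points themselves, no labels at all ---
def kmeans(points, k=2, steps=10):
    centers = points[:k]  # naive initialization: first k points
    for _ in range(steps):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        centers = [centroid(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

print(classify((0.5, 0.0)))  # → left
```

The supervised rule can only ever answer with a label it has already seen; the k-means routine instead invents a grouping from the geometry of the data alone, which is exactly the "innate properties" idea above.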
Even though it is common practice to state the classification result of an object as a certain thing, if we think it over for some time, we should realize that the word 'certain' already involves probability (in this sense, we can say, with one hundred percent probability, that our world, the human world, is in fact probabilistic, even though this phrase is itself self-contradictory). So it seems more natural to consider statistical models rather than deterministic ones.
One such model for the classification problem is the Fisher kernel model. Like most other statistical models, it uses a particular kernel function (the Fisher kernel) to measure similarities between objects. To test whether an object belongs to a certain class, we measure the kernel values between this object and the objects in that class and take their mean; after some conversion, this mean represents the probability that the object belongs to the class.
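The "mean similarity, then convert to a probability" step can be sketched as follows. A genuine Fisher kernel is derived from the gradients of a generative model's log-likelihood; since that machinery is not developed here, a plain Gaussian (RBF) kernel stands in for it, and normalizing the per-class means so they sum to one is my own illustrative choice of conversion.

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    # Similarity between two feature vectors; a stand-in for the Fisher kernel.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def class_probabilities(x, classes, gamma=1.0):
    # Mean kernel similarity between x and each class's members,
    # normalized across classes so the values sum to one.
    scores = {name: sum(rbf_kernel(x, m, gamma) for m in members) / len(members)
              for name, members in classes.items()}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

classes = {"cats": [(0.0, 0.0), (0.2, 0.1)],
           "dogs": [(3.0, 3.0), (3.1, 2.9)]}
probs = class_probabilities((0.1, 0.0), classes)
# probs["cats"] is close to 1: the query point sits near the "cats" members.
```

Any kernel that is large for similar objects and small for dissimilar ones fits this recipe; the Fisher kernel's appeal is that it inherits its notion of similarity from an underlying probabilistic model rather than from raw feature geometry.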