On Wednesday, Professor Andrea Montanari talked about the computational reducibility of four problems in graphical probability models to each other. Since I missed the first lecture as well as the beginning of Wednesday’s, I will rely partially on his lecture notes online. First we introduce the notion of graphical models, which are probability distributions on $\mathcal{X}^V$ that display the graphical structure on the vertex set $V$. Here $\mathcal{X}$ is thought of as the range of symbols that a function on $V$ can take at each point. Examples are in order: a Bayesian network on a directed graph is represented as follows: for each vertex $v$ we are given the conditional probability $p(x_v \mid x_{\pi(v)})$, where $\pi(v)$ is the set of parents of $v$. If $v$ has no parents, we instead only specify the prior $p(x_v)$. So $V$ is divided into two sets: the Adams and Eves (the parentless vertices) and their complement. A Markov chain is the simplest kind of Bayesian network, in which the graph is a line graph with each edge going from left to right. Applying Bayes’ theorem many times, one deduces that

$$\mu(x) = \prod_{v \in V} p(x_v \mid x_{\pi(v)}).$$
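To make the factorization concrete, here is a minimal sketch (my own toy numbers, not from the lecture) evaluating the joint probability of a three-step, two-state Markov chain as a product of a prior and edge conditionals:

```python
# Toy Markov chain x1 -> x2 -> x3 on the alphabet {0, 1}.
prior = {0: 0.6, 1: 0.4}                  # p(x1): the lone "Eve" node
trans = {0: {0: 0.7, 1: 0.3},             # p(x_{i+1} | x_i)
         1: {0: 0.2, 1: 0.8}}

def chain_prob(x):
    """Joint probability of x = (x1, x2, x3) via the Bayesian-network factorization."""
    p = prior[x[0]]
    for a, b in zip(x, x[1:]):            # one conditional factor per directed edge
        p *= trans[a][b]
    return p

# Sanity check: the factorization really defines a probability distribution.
total = sum(chain_prob((a, b, c)) for a in (0, 1) for b in (0, 1) for c in (0, 1))
```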
There are three other graphical models. In pairwise graphical models, which are defined on simple graphs $G = (V, E)$, each edge $(i, j) \in E$ contributes a term $\psi_{ij}(x_i, x_j)$ to the potential, as in the formula above. A factor graph is a bipartite graph with variable nodes $V$ and function nodes $F$. The probability measure associated with a factor graph model is of the form

$$\mu(x) = \frac{1}{Z} \prod_{a \in F} \psi_a(x_{\partial a}),$$
so the factor nodes index the “factors” in the probability measure.
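A small sketch of the factor graph measure (the alphabet and potentials below are invented for illustration): each factor node has a scope and a potential $\psi_a$, and $\mu$ is the normalized product over factor nodes.

```python
import itertools

alphabet = (0, 1)
n = 3  # variable nodes 0, 1, 2

# Two factor nodes, each given as (scope, potential on that scope).
factors = [
    ((0, 1), lambda xa: 2.0 if xa[0] == xa[1] else 1.0),  # prefers agreement
    ((1, 2), lambda xa: 3.0 if xa[0] != xa[1] else 1.0),  # prefers disagreement
]

def weight(x):
    """Unnormalized weight: the product of psi_a over all factor nodes a."""
    w = 1.0
    for scope, psi in factors:
        w *= psi(tuple(x[i] for i in scope))
    return w

# Partition function by brute-force enumeration of all configurations.
Z = sum(weight(x) for x in itertools.product(alphabet, repeat=n))

def mu(x):
    return weight(x) / Z
```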

Andrea showed in the first lecture (which I missed) that the above three models are reducible to each other. The only nontrivial direction is the reduction from the factor graph to the pairwise graph model, i.e., showing that every factor graph model can be represented as a pairwise model, because it involves augmenting the symbol set $\mathcal{X}$.
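The alphabet-augmentation trick can be illustrated concretely (this is my own toy rendering, not necessarily the construction as presented): to turn a single 3-variable factor into a pairwise model, add one auxiliary variable $y$ ranging over the augmented alphabet $\mathcal{X}^3$, connect it to each $x_i$ by an edge potential forcing agreement in the $i$-th coordinate, and fold the factor's value into one of those edges. Summing out $y$ then recovers the original factor.

```python
import itertools

X = (0, 1)
psi = lambda x1, x2, x3: 1.0 + x1 + 2 * x2 * x3   # an arbitrary positive 3-variable factor

Y = list(itertools.product(X, repeat=3))          # augmented alphabet for the auxiliary y

def edge(i, y, xi):
    """Pairwise potential on the edge (y, x_i): force y's i-th coordinate to equal x_i;
    the factor value itself is absorbed into edge 0."""
    ok = 1.0 if y[i] == xi else 0.0
    return ok * psi(*y) if i == 0 else ok

def pairwise_weight(x, y):
    return edge(0, y, x[0]) * edge(1, y, x[1]) * edge(2, y, x[2])

def marginal(x):
    """Sum out the auxiliary variable; this should equal psi(x) exactly."""
    return sum(pairwise_weight(x, y) for y in Y)
```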

One last model is the so-called Markov random field, in which each clique in the graph contributes a factor to the probability measure, i.e.,

$$\mu(x) = \frac{1}{Z} \prod_{C \in \mathcal{C}} \psi_C(x_C),$$

where $\mathcal{C}$ is the set of cliques of the graph. This is clearly equivalent to the factor graph model, because each clique can serve as a factor; conversely, every edge is a clique, so pairwise models are a special case.

All four models exhibit the domain Markov property: if one conditions on the values on a set of vertices $S$ that insulates two other subsets $A, B \subseteq V$ (that is, every path from $A$ to $B$ passes through $S$), then $x_A$ and $x_B$ are conditionally independent.

Here are the four main computational problems on graphical models that people are interested in:

- Computing small marginal probabilities, such as $\mu(x_v)$ for a single vertex $v$
- Computing conditional probabilities of the form $\mu(x_A \mid x_B)$, where $A, B \subseteq V$
- Sampling from the whole distribution
- Computing the partition function $Z = \sum_{x \in \mathcal{X}^V} \prod_{a \in F} \psi_a(x_{\partial a})$.

It is not hard to show that all four are equivalent. Note that item 1 above only requires small marginal probabilities, which might seem a far cry from the sampling task in item 3. But in fact we are talking about computational reducibility up to a factor linear in the size of the graph, hence we are allowed to reduce the size of the graph and carry out the reduction by induction.
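One direction of the equivalence — sampling reduces to computing conditional marginals — can be sketched as sequential sampling: draw one coordinate at a time, conditioning on the values already drawn. The brute-force marginal below stands in for whatever oracle the reduction assumes, and the toy model is my own.

```python
import itertools, random

X = (0, 1)
n = 3
weight = lambda x: 2.0 ** sum(x)   # toy unnormalized model on {0,1}^3

def cond_marginal(prefix):
    """Unnormalized marginal of the next coordinate, given an already-fixed prefix."""
    k = n - len(prefix) - 1
    return {v: sum(weight(prefix + (v,) + rest)
                   for rest in itertools.product(X, repeat=k))
            for v in X}

def sample(rng):
    """Draw an exact sample using only (conditional) marginal computations."""
    x = ()
    for _ in range(n):
        m = cond_marginal(x)
        r = rng.random() * sum(m.values())
        for v in X:
            r -= m[v]
            if r <= 0:
                x += (v,)
                break
    return x
```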

Someone brought up another interesting problem at the end, namely

5. computing the mode of the distribution.

While one could compute the mode easily by sampling with an inverse temperature parameter that goes to infinity, in the same spirit as the Laplace principle, the converse is far from true: taking the uniform measure on a set of combinatorial objects, every element is a mode (so finding one is trivial), but sampling amounts to counting the objects, which can be NP-hard.
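A toy numerical illustration of the first half of this remark (the weights are arbitrary): tilting a distribution by an inverse temperature $\beta$, i.e. taking $\mu_\beta \propto \mu^\beta$, concentrates essentially all the mass on the mode once $\beta$ is large.

```python
weights = {"a": 1.0, "b": 2.0, "c": 3.0}   # unnormalized weights; "c" is the mode

def tilted(beta):
    """Normalized distribution proportional to weights**beta."""
    w = {k: v ** beta for k, v in weights.items()}
    z = sum(w.values())
    return {k: v / z for k, v in w.items()}

# At beta = 20 the tilted measure is already overwhelmingly concentrated on "c".
p = tilted(20.0)
```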

Finally Andrea discussed a paper applying graphical models to sonar systems. The idea is that we have a 2-dimensional function $f$, which perhaps describes the altitude of a square patch of sea floor, and we want to use sonar to recover this function. But sonar only detects $f$ at each point up to an integer multiple of the wavelength $\lambda$. So essentially we are given the data $f \bmod \lambda$, which can be viewed as a section of the torus bundle over the square. Furthermore, the detection process only occurs at discrete spatial points, say the grid $\{1, \dots, n\}^2$; thus we have a torus bundle over $\{1, \dots, n\}^2$.

Instead of asking for $f$ itself, which is impossible since with sonar one couldn’t theoretically obtain the absolute altitude, one only asks for the relative altitude variation in the square. Thus it makes sense to try to recover the functions $g_1, g_2$, which give the discrete partial derivatives of the overall phase of $f$ and which take values most frequently in a small set around zero. One certainly needs the mixed partials to match, hence $g_1, g_2$ must satisfy the loop condition

$$g_1(i, j+1) - g_1(i, j) = g_2(i+1, j) - g_2(i, j).$$
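A quick sketch of the loop condition (the grid size and phase field below are made up): if $g_1, g_2$ really are the discrete partial derivatives of some phase $h$, the discrete curl around every unit square vanishes, which is exactly the mixed-partials constraint.

```python
n = 4
h = [[i * i + 3 * j for j in range(n)] for i in range(n)]   # toy integer phase field

# Discrete partial derivatives of h in the two grid directions.
g1 = [[h[i + 1][j] - h[i][j] for j in range(n)] for i in range(n - 1)]
g2 = [[h[i][j + 1] - h[i][j] for j in range(n - 1)] for i in range(n)]

def curl(i, j):
    """Discrete curl of (g1, g2) around the unit square with lower corner (i, j)."""
    return g1[i][j] + g2[i + 1][j] - g1[i][j + 1] - g2[i][j]

# For exact gradients the loop condition holds around every square.
all_closed = all(curl(i, j) == 0 for i in range(n - 1) for j in range(n - 1))
```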

The idea is then to impose a Gaussian free field distribution on the set of such discrete two-dimensional gradient fields $(g_1, g_2)$ (after all, Gaussian fluctuations are the most universal in nature), subject to the loop constraint. Thus for each input observation $f \bmod \lambda$ on the grid, one could compute the posterior distribution of $(g_1, g_2)$ using this model.