Thursday, October 25, 2012

Convex Belief Propagation

Recently, an interesting paper on distributed convex belief propagation (BP) was published at CVPR 2011. I asked Alex Schwing (ETH Zurich) to give a brief overview of convex BP and his paper.

What is convex belief propagation?
The convex variant of belief propagation, named "convex belief propagation," is a message passing algorithm that naturally generalizes a variety of approaches (loopy belief propagation, tree-reweighted message passing) through the choice of counting numbers. For the original loopy belief propagation algorithm, the counting numbers are chosen such that the resulting distribution is exact when the graphical model is a tree; for loopy graphical models, the true distribution is commonly approximated by keeping those tree-based counting numbers. It turns out that the resulting cost function optimized by the message passing algorithm is then non-convex, and loopy belief propagation is not guaranteed to converge. Instead of approximating the true distribution with counting numbers originating from a tree-structured model, "convex belief propagation" employs counting numbers that ensure a convex cost function. Hence there is a global optimum, and the respective message passing algorithm is guaranteed to converge.
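To make this concrete, below is a minimal sketch of one member of this family: tree-reweighted sum-product, in which edge counting numbers rho (discussed in the next answer) reweight the messages. This is a didactic sketch under my own assumptions (a pairwise model with strictly positive potentials; all function and variable names are mine), not the implementation from any of the papers mentioned here. Setting every rho to 1 recovers standard loopy BP.

```python
import numpy as np

def trw_messages(unary, pair, rho, n_iters=50):
    """Tree-reweighted sum-product on a pairwise model (one convex-BP
    instance for suitable rho). unary[i]: length-K array of positive
    potentials; pair[(u, v)]: KxK array indexed [x_u, x_v];
    rho[(u, v)]: edge counting number in (0, 1]."""
    K = len(next(iter(unary.values())))
    msgs, nbrs = {}, {}
    for (u, v) in pair:
        # directed messages, initialized uniform
        msgs[(u, v)] = np.full(K, 1.0 / K)
        msgs[(v, u)] = np.full(K, 1.0 / K)
        nbrs.setdefault(u, []).append(v)
        nbrs.setdefault(v, []).append(u)

    def edge_rho(u, v):
        return rho[(u, v)] if (u, v) in rho else rho[(v, u)]

    for _ in range(n_iters):
        for (t, s) in list(msgs.keys()):
            r = edge_rho(t, s)
            # reweighted product of all incoming messages at t,
            # with the reverse message divided out once
            pre = unary[t].copy()
            for w in nbrs[t]:
                pre = pre * msgs[(w, t)] ** edge_rho(w, t)
            pre = pre / msgs[(s, t)]
            psi = pair[(t, s)] if (t, s) in pair else pair[(s, t)].T
            new = (psi ** (1.0 / r)).T @ pre   # sum over x_t
            msgs[(t, s)] = new / new.sum()

    # beliefs: unary potential times reweighted incoming messages
    beliefs = {}
    for i in unary:
        b = unary[i].copy()
        for w in nbrs[i]:
            b = b * msgs[(w, i)] ** edge_rho(w, i)
        beliefs[i] = b / b.sum()
    return beliefs
```

For example, on a triangle graph the edge appearance probabilities of the uniform distribution over spanning trees give rho = 2/3 for every edge, a standard tree-reweighted choice.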
What are counting numbers?
Counting numbers specify how often entropy terms for different subsets of variables are counted in the entropy approximation used in the energy functional. 
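In symbols (my notation, not taken from the interview): for beliefs b_alpha over factors and b_i over single variables, the approximate entropy reads

```latex
\tilde{H}(b) \;=\; \sum_{\alpha} c_{\alpha}\, H(b_{\alpha}) \;+\; \sum_{i} c_{i}\, H(b_{i})
```

The Bethe choice underlying loopy BP is c_alpha = 1 and c_i = 1 - d_i, where d_i is the number of factors containing variable i; it is exact on trees but yields a non-convex cost function on loopy graphs. Convex BP instead selects counting numbers that make the resulting free energy convex (satisfying sufficient conditions such as those given by Heskes).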
If I need to read one paper about convex BP what should I read?
One of the first papers introducing convex belief propagation is T. Hazan and A. Shashua, "Convergent message-passing algorithms for inference over general graphs with convex free energy," UAI 2008. It is also explained within the related work section of the CVPR 2011 work on distributed convex belief propagation.

What are the applications for convex BP?
Convex belief propagation is applicable to any problem that can be addressed by loopy belief propagation. Its computational and memory complexity are identical to those of the loopy variant, and recent applications that benefit from (distributed) convex belief propagation include:

  • M. Salzmann and R. Urtasun; "Beyond Feature Points: Structured Prediction for Monocular Non-rigid 3D Reconstruction"; ECCV 2012
  • K. Yamaguchi, T. Hazan, D. McAllester and R. Urtasun; "Continuous Markov Random Fields for Robust Stereo Estimation"; ECCV 2012
  • J. Yao, S. Fidler and R. Urtasun; "Describing the Scene as a Whole: Joint Object Detection, Scene Classification and Semantic Segmentation"; CVPR 2012
  • A.G. Schwing, T. Hazan, M. Pollefeys and R. Urtasun; "Efficient Structured Prediction for 3D Indoor Scene Understanding"; CVPR 2012

Efficient implementation for larger models
Distributed convex belief propagation (dcBP) is an inference algorithm that allows the user to partition the task (the graph) and solve the sub-problems independently and in parallel in distributed-memory compute environments. Efficiency is improved by spreading the computation across multiple cluster nodes/machines connected, e.g., by a standard LAN. Most importantly, the convergence guarantees of convex belief propagation are maintained by occasionally exchanging information between the machines, while the additional computational power improves efficiency. In the paper we compare the distributed computation to a non-distributed version and observe a speedup almost linear in the number of machines (as expected). We also show that it is important to reduce the communication overhead between machines, even when working on the very common 4-connected grid graph, and we compare the algorithm to loopy belief propagation and other message passing implementations.
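The partition-and-exchange structure can be sketched as follows. This is a toy, single-process illustration under my own assumptions (plain sum-product inner updates and a hypothetical `block_of` node-to-machine map), not the paper's algorithm, which runs the convex updates and communicates over a real network:

```python
import numpy as np

def distributed_bp(unary, pair, block_of, inner_iters=5, outer_iters=10):
    """Toy illustration of partition-and-exchange message passing.
    Nodes are assigned to blocks ("machines") via block_of; messages on
    within-block edges are updated locally, while messages on edges that
    cross block boundaries are held fixed and refreshed only at the
    infrequent synchronization step, mimicking network exchange."""
    K = len(next(iter(unary.values())))
    msgs, nbrs = {}, {}
    for (u, v) in pair:
        msgs[(u, v)] = np.full(K, 1.0 / K)
        msgs[(v, u)] = np.full(K, 1.0 / K)
        nbrs.setdefault(u, []).append(v)
        nbrs.setdefault(v, []).append(u)

    def update(t, s, frozen):
        # plain sum-product message t -> s; cross-block inputs come from
        # the frozen snapshot instead of the live message table
        pre = unary[t].copy()
        for w in nbrs[t]:
            if w != s:
                pre = pre * frozen.get((w, t), msgs[(w, t)])
        psi = pair[(t, s)] if (t, s) in pair else pair[(s, t)].T
        new = psi.T @ pre
        return new / new.sum()

    for _ in range(outer_iters):
        # snapshot of boundary messages: the information each "machine"
        # last received from its neighbors
        frozen = {e: m.copy() for e, m in msgs.items()
                  if block_of[e[0]] != block_of[e[1]]}
        for _ in range(inner_iters):          # independent local work
            for (t, s) in msgs:
                if block_of[t] == block_of[s]:
                    msgs[(t, s)] = update(t, s, frozen)
        for (t, s) in frozen:                 # synchronization/exchange
            msgs[(t, s)] = update(t, s, frozen)
    return msgs
```

The tradeoff discussed above is visible even in this sketch: a larger `inner_iters` means less frequent communication but staler boundary information, and the occasional exchange step plays the role of the information exchange between machines described in the paragraph above.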

1 comment:

  1. ... does this mean we should expect the implementation of dcBP in the graphlab/chi family in the near future? ;-)
