The UPGMpp library

Undirected Probabilistic Graphical Models in C++ (UPGMpp) is an open-source library for building, training, and managing probabilistic graphical models.

UPGMpp logo
If you use this software, please cite it as:

      @INPROCEEDINGS{Ruiz-Sarmiento-REACTS-2015,
           author = {Ruiz-Sarmiento, J. R. and Galindo, Cipriano and Gonz{\'{a}}lez-Jim{\'{e}}nez, Javier},
            title = {UPGMpp: a Software Library for Contextual Object Recognition},
        booktitle = {3rd. Workshop on Recognition and Action for Scene Understanding (REACTS)},
             year = {2015}
      }


To get the library and further information, please check the project webpage on GitHub. New! A wrapper for using it within the ROS environment is now available here. The main features of UPGMpp are:

  • It works with discrete random variables.
  • It handles first-order (local or unary) and second-order (pairwise) relations.
  • Nodes (random variables) of different types can appear in the same PGM.
  • If the value of a random variable is known, this evidence can be taken into account.
  • It supports PGMs with an arbitrary structure (including graphs with loops).
  • Log-linear potentials are used by default, but the user can define a different way to model potentials.
Implementation details:
  • Fully implemented in C++.
  • It relies on Eigen for efficient matrix operations, and on Boost for memory management and serialization.
  • New inference methods can be easily added through clear interfaces.
  • Currently available inference methods:
    • MAP:
      • Iterated Conditional Modes
      • Greedy Iterated Conditional Modes
      • Exact Inference
      • Loopy Belief Propagation
      • Tree Reparameterization Belief Propagation
      • Residual Belief Propagation
      • Graph Cuts
      • Alpha-expansions Graph Cuts
      • Alpha-beta Swaps Graph Cuts
    • Marginals:
      • Loopy Belief Propagation
      • Tree Reparameterization Belief Propagation
      • Residual Belief Propagation
  • Training can be performed through:
    • Objective functions:
      • Pseudo-likelihood
      • Score-matching
      • Piecewise
      • Marginal-based approximation
      • MAP-based approximation
    • Optimization methods: