Functoriality of inference on diagrams in the category of Markov kernels.
Abstract
Graphical models are widely used families of probability distributions that capture conditional independence relations among a collection of variables $X_i, i \in I$. Celebrated examples include Hidden Markov Models and Bayesian networks. Graphical models are built from directed or undirected graphs $G=(I,A)$, whose nodes $i \in I$ are identified with the variables $X_i$. Inference in graphical models ultimately reduces to inference in an undirected graphical model, carried out by Belief Propagation, a message-passing algorithm. This inference is a specific instance of variational inference, as it amounts to optimizing a free energy. Adopting a variational-inference perspective on graphical models has made it possible to extend the Belief Propagation algorithm to broader classes of probability distributions, accommodating interactions among more than $2$ variables (factor graphs), in contrast to traditional graphical models. Continuing in this direction extends variational inference to heterogeneous signals that can interact; the analysis of such data structures has recently attracted increased attention. The associated models are poset-shaped diagrams in a category whose objects are finite sets and whose morphisms are stochastic maps (Markov kernels), and inference is done through a message-passing algorithm. I will present recent work, done in collaboration with Toby St Clere Smithe, on the functoriality of Belief Propagation and its extensions.
Origin: Files produced by the author(s)