Posted on 2025-07-21, 14:43. Authored by Daniel Gonzalez Cedre.
The scale and complexity of relational data in critical domains like chemistry, neuroscience, and social media have ignited interest in graph neural networks as performant, expressive, and flexible frameworks for solving problems specified over graphs.
However, this performance comes at the cost of interpretability: the behavior of a typical neural network is, at best, a mystery.
Graph grammars, in contrast, provide a symbolic, discrete, rule-based formalism for describing transformations between graphs.
While profoundly interpretable, they are mired in the inductive biases and restrictive assumptions common to many traditional approaches to graph modeling.
This dissertation aims to narrow the gap between graph neural networks and graph grammars.
The first contribution introduces Dynamic Vertex Replacement Grammars as a way of modeling temporal graph datasets with graph grammars.
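To give a concrete feel for the underlying formalism, the sketch below applies a single vertex-replacement rule to a small graph: a nonterminal vertex is excised and replaced by a right-hand-side subgraph, and the old neighbors are reattached to designated boundary nodes. The rule, graph, and round-robin rewiring policy are illustrative assumptions, not the dissertation's Dynamic Vertex Replacement Grammar implementation.

```python
# Minimal sketch of applying one vertex-replacement rule with networkx.
# The rewiring policy here is an assumption made for illustration.
import networkx as nx

def apply_rule(graph, target, rhs, boundary):
    """Replace `target` with the subgraph `rhs`, reattaching the old
    neighbors of `target` to the designated `boundary` nodes of `rhs`."""
    neighbors = list(graph.neighbors(target))
    graph.remove_node(target)
    graph = nx.union(graph, rhs)  # assumes disjoint node labels
    for i, u in enumerate(neighbors):
        # Round-robin reattachment; a real grammar records this wiring in the rule.
        graph.add_edge(u, boundary[i % len(boundary)])
    return graph

G = nx.path_graph(4)  # 0-1-2-3
rhs = nx.relabel_nodes(nx.cycle_graph(3), {0: "a", 1: "b", 2: "c"})
H = apply_rule(G, target=1, rhs=rhs, boundary=["a", "b"])
print(sorted(H.edges()))
```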
The second contribution proposes an analytically invertible normalizing flow network that learns prototypical probability distributions as intrinsic explanations for its behavior.
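The key property of such a flow is that its inverse is available in closed form rather than by numerical approximation. The following NumPy sketch shows an affine coupling transform with its exact analytic inverse; it illustrates invertibility only and does not reproduce the dissertation's prototype-learning architecture.

```python
# Minimal sketch of an analytically invertible affine coupling flow in NumPy.
# The single linear "conditioner" is an assumption made to keep the example short.
import numpy as np

def coupling_forward(x, w, b):
    """Split x in half; scale and shift the second half conditioned on the first."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s, t = np.tanh(x1 @ w + b), x1 @ w + b  # toy scale and shift networks
    return np.concatenate([x1, x2 * np.exp(s) + t], axis=-1)

def coupling_inverse(y, w, b):
    """Exact inverse, recovered in closed form from the forward pass."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    s, t = np.tanh(y1 @ w + b), y1 @ w + b
    return np.concatenate([y1, (y2 - t) * np.exp(-s)], axis=-1)

rng = np.random.default_rng(0)
w, b = rng.normal(size=(2, 2)), rng.normal(size=2)
x = rng.normal(size=(5, 4))
assert np.allclose(x, coupling_inverse(coupling_forward(x, w, b), w, b))
```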
The third contribution shows how the attention mechanism in graph neural networks induces grammars that can act as generative post hoc explainers.
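One way to picture how attention can induce a grammar rule is to treat a node's highest-attention neighborhood as a candidate right-hand side. The toy sketch below does exactly that; the thresholding heuristic and the fabricated attention scores are assumptions for illustration, not the extraction procedure described in the dissertation.

```python
# Toy sketch: turn edge-attention scores into a candidate rewrite-rule RHS by
# keeping a node's top-k neighbors by attention. Scores here are fabricated.
import networkx as nx

def induced_rhs(graph, node, attention, k=2):
    """Return the subgraph on `node` and its top-k neighbors by attention score."""
    ranked = sorted(graph.neighbors(node),
                    key=lambda u: attention.get((node, u), 0.0),
                    reverse=True)
    return graph.subgraph([node] + ranked[:k]).copy()

G = nx.karate_club_graph()
attn = {(0, u): 1.0 / (1 + u) for u in G.neighbors(0)}  # placeholder attention
rule_rhs = induced_rhs(G, node=0, attention=attn, k=3)
print(rule_rhs.nodes(), rule_rhs.edges())
```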
This supports the thesis that discrete rules and continuous distributions are jointly critical to the future of machine learning.