Last week, Efi, Phokion, Loizos and I ran a virtual workshop (1) on deep learning & logic. The workshop was initiated many months ago by Efi; sponsored by Samsung Cambridge, it finally came to life this month.
We learnt that over a thousand people registered for the workshop, and that as many as 600 attended on a single day. This was obviously very rewarding and gratifying to see.
I won’t attempt to cover the full extent of the very exciting talks we heard. The details can be retrieved from the abstracts; besides, the slides and the recorded videos should be online soon. What I’ll go over instead are my general impressions of the ideas discussed.
There are a number of motivations for a workshop such as this. It is widely acknowledged that although deep learning continues to make impressive strides on benchmarks for vision and language, it is ultimately a device for classification and prediction. The goals of AI are much broader than that. Leslie Valiant, for example, cited Aristotle’s view that all knowledge is the result of syllogism or induction, the former accomplished in AI by (namesake) logic / knowledge representation / symbolic reasoning, and the latter by adaptive machine learning.
The significance of this dichotomy for human cognition, and how that translates to AI, is not lost on the community. Integrating learning and reasoning has been an active area of research since at least the seventies, and has led to subareas such as statistical relational learning, neuro-symbolic modelling and inductive logic programming (2, 3). See, for example, a recent survey I wrote (4); there are also many textbooks and popular science books motivating the matter (5).
At the workshop, Leslie Valiant rightly argued that when it comes to capturing the semantics of a language that allows logical rules to be integrated with “neural” (or, for that matter, any learned) predicates, the latter induced from noisy data, there is much to be gained by precisely defining a logic. His approach, the so-called robust logics, has been very influential and is especially notable because it adheres to the rigorously defined PAC (probably approximately correct) notions.
Martin Grohe had a different take on the deep learning & logic intersection: he argued that descriptive complexity has been instrumental in linking computational complexity to logical fragments, and thus to expressiveness. Perhaps, by studying the logical expressiveness of neural networks, we might achieve an analogous understanding? So he asks us: what do, for example, graph neural networks with a message-passing type training regime correspond to? He discussed some of the seminal results he has had on linking these networks to certain classes of FO2 (the two-variable fragment of first-order logic). It was interesting that FO2 features very prominently in knowledge representation: e.g., in description logics, weighted model counting, and now here (6, 7). It seems plausible that FO2 will be at the cornerstone of the expressiveness / tractability tradeoff in knowledge representation, and perhaps also in the KR-ML landscape.
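To make the object of study concrete, here is a minimal sketch of a single message-passing round of the kind these expressiveness results are about: each node updates its state from an aggregate of its neighbours’ states, and it is exactly this operation whose logical power is being characterised. The code is my own illustration in plain NumPy (the weight matrices, sum aggregation and ReLU are assumptions of the sketch), not anything presented at the workshop.

```python
# A minimal message-passing layer, sketched in NumPy purely for illustration.
# The weights w_self, w_neigh and the sum aggregator are assumptions of this
# sketch, not a specific architecture discussed at the workshop.
import numpy as np

def message_passing_layer(adj, h, w_self, w_neigh):
    """One round of message passing.

    adj     : (n, n) 0/1 adjacency matrix of the graph
    h       : (n, d) current node features ("states")
    w_self  : (d, d) weight applied to a node's own state
    w_neigh : (d, d) weight applied to the aggregated neighbour states
    """
    # Aggregate each node's neighbours by summation (adj @ h sums neighbour rows).
    neigh_sum = adj @ h
    # Combine own state with the neighbourhood aggregate, then apply a nonlinearity.
    return np.maximum(0.0, h @ w_self + neigh_sum @ w_neigh)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 5, 4
    # A 5-cycle as a toy graph.
    adj = np.array([[0, 1, 0, 0, 1],
                    [1, 0, 1, 0, 0],
                    [0, 1, 0, 1, 0],
                    [0, 0, 1, 0, 1],
                    [1, 0, 0, 1, 0]], dtype=float)
    h = rng.normal(size=(n, d))
    w_self, w_neigh = rng.normal(size=(d, d)), rng.normal(size=(d, d))
    # Stacking k rounds makes a node's state depend on its k-hop neighbourhood,
    # which is what the logical characterisations pin down.
    for _ in range(3):
        h = message_passing_layer(adj, h, w_self, w_neigh)
    print(h.shape)  # (5, 4)
```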
It is worth noting that by exploiting the symmetry of constants, there is another fragment of clausal first-order logic that is tractable (8, 9); I wonder what kind of neural model, if any, this might find relevance for.
A number of talks discussed the significance that combining neural models and logical reasoning could have for real-world applications. Jiajun Wu, Daisy Zhe Wang, Madhu, Jacob, Le and Balder talked about a wide range of techniques and insights that have led to impressive AI systems. I would not be able to do justice to this range here, so I recommend looking at their latest publications.
Efi focussed on a new logical framework for composing neural outputs and logical reasoning. Her work relates to recent exciting proposals for embedding logical constraints in deep learning, such as DeepProbLog. She discussed a number of ideas that could lead to improved scalability in such approaches.
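To give a flavour of what “composing neural outputs and logical reasoning” means in this probabilistic style, here is a schematic sketch of the classic digit-sum idea: a neural classifier outputs a distribution over symbols, and the probability of a logical query is obtained by summing over the worlds that satisfy it. This is my own toy code under those assumptions (the function names and the stand-in classifier are invented for illustration); it is not DeepProbLog’s API or Efi’s framework.

```python
# Schematic composition of neural outputs with probabilistic-logic reasoning.
# Everything here (names, the stand-in classifier) is illustrative only.
import numpy as np

def digit_distribution(image, rng):
    """Stand-in for a neural classifier: returns P(digit = 0..9 | image)."""
    logits = rng.normal(size=10)          # pretend these come from a network
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def prob_sum_equals(p1, p2, target):
    """Probability that digit1 + digit2 == target, marginalising over all
    digit pairs (the 'possible worlds' that satisfy the logical query)."""
    return sum(p1[a] * p2[b]
               for a in range(10) for b in range(10)
               if a + b == target)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    p1 = digit_distribution("img1.png", rng)   # hypothetical inputs
    p2 = digit_distribution("img2.png", rng)
    print(prob_sum_equals(p1, p2, 7))
```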
Efi’s approach of looking at a neural and probabilistic logical integration can be contrasted with the real-valued logical semantics of Ryan Riegel’s work, akin to approaches like Logic Tensor Networks. As Ryan argued, a notable property of the latter is that there is a natural neural interpretation of logical formulas and, vice versa, a natural logical interpretation of neural networks. The question of whether this is useful might depend on the application, he felt. He also noted that besides the more practical work his team was interested in, they have also done substantial work on proving the so-called strong completeness of these logics. While previously completeness was only shown for formulas with value one, they have a more general result.
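For readers unfamiliar with real-valued semantics, the sketch below shows the standard Łukasiewicz connectives over truth values in [0, 1], purely to illustrate what it means for a formula to take a graded, differentiable truth value. It is a generic textbook semantics used here as an assumption for illustration, not the specific weighted semantics presented in Ryan’s talk.

```python
# Generic real-valued ("fuzzy") connectives over truth values in [0, 1],
# shown only to convey the flavour of real-valued logical semantics.
import numpy as np

def luka_and(a, b):
    # Łukasiewicz conjunction: max(0, a + b - 1)
    return np.maximum(0.0, a + b - 1.0)

def luka_or(a, b):
    # Łukasiewicz disjunction: min(1, a + b)
    return np.minimum(1.0, a + b)

def luka_not(a):
    return 1.0 - a

def implies(a, b):
    # a -> b under the Łukasiewicz semantics: min(1, 1 - a + b)
    return np.minimum(1.0, 1.0 - a + b)

if __name__ == "__main__":
    # Truth values of two learned predicates on some object,
    # e.g. outputs of a network (the values are made up).
    smokes, cancer = 0.8, 0.3
    # Degree to which the rule "smokes -> cancer" holds for this object.
    print(implies(smokes, cancer))   # 0.5
```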
The workshop ended with a profound talk by Christos Papadimitriou on the computational processes for capturing brain activity in the context of language understanding. He described his recently published work on a calculus that underpins his model, tackling the problem of parsing language head on. As he appropriately put it, by attempting to capture language comprehension this way, we are not getting fooled by “shadows” and are instead battling the real phenomenon. I presume here he refers to purely data-intensive approaches to language understanding that exploit statistical co-occurrences.
If you didn’t manage to catch the workshop, I would recommend watching the videos and reviewing the slides, if only to note how “deep” the links between logic and neural networks (and adaptive learning more generally) are.
Links / References
(1) https://research.samsung.com/news/-When-deep-learning-meets-logic-a-three-days-virtual-workshop-on-neural-symbolic-integration-sponsored-by-Samsung-Research
(2) Statistical Relational Artificial Intelligence: Logic, Probability, and Computation (Synthesis Lectures on Artificial Intelligence and Machine Learning)
(3) Neural-Symbolic Cognitive Reasoning, Artur S. D’Avila Garcez (Springer)
(4) [2006.08480] Symbolic Logic meets Machine Learning: A Brief Survey in Infinite Domains
(5) Rebooting AI: Building Artificial Intelligence We Can Trust, Gary Marcus and Ernest Davis
(6) The Expressive Power of Graph Neural Networks as a Query Language, SIGMOD Record
(7) [1412.1505] Symmetric Weighted First-Order Model Counting
(8) http://www.cs.toronto.edu/~hector/Papers/disjunction.pdf
(9) https://proceedings.neurips.cc/paper/2019/file/09fb05dd477d4ae6479985ca56c5a12d-Paper.pdf