
Plenary Invited Speakers

Richard Evans

DeepMind

The Apperception Engine

Talk outline: Imagine a machine, equipped with various sensors, receiving a stream of sensory information. Somehow, it must make sense of that sensory stream. But what, exactly, does “making sense” involve, and how can it be implemented in a machine? I argue that making sense of a sensory stream involves constructing objects that persist over time, with properties that change over time according to intelligible universal laws (represented as logic programs). Thus, making sense is a form of program synthesis - but it is unsupervised program synthesis from raw unstructured input. I will describe our system, the Apperception Engine, that was designed to make sense of raw sensory input by constructing programs that explain that input. I will show various examples of our system in action. This talk will draw on two of our recent papers published in the Artificial Intelligence Journal: “Making Sense of Sensory Input”, and the sequel “Making Sense of Raw Input”.

Bio: Richard Evans is a Staff Research Scientist at DeepMind and an Honorary Senior Research Fellow at Imperial College. He is particularly interested in program synthesis, neuro-symbolic systems, and the philosophy of Immanuel Kant.
Noah Goodman

Stanford University

Learning to reason, from probabilistic programs to autoregressive models

Talk outline: TBA

Bio: Noah D. Goodman is Associate Professor of Psychology and Computer Science at Stanford University. He studies the computational basis of human and machine intelligence, merging behavioral experiments with formal methods from statistics, machine learning, and programming languages. His research topics include language understanding, social reasoning, concept learning, and natural pedagogy. In addition he explores related technologies such as probabilistic programming languages and deep generative models. He has released open-source software including the probabilistic programming languages Church, WebPPL, and Pyro. Professor Goodman received his Ph.D. in mathematics from the University of Texas at Austin in 2003. In 2005 he entered cognitive science, working as Postdoc and Research Scientist at MIT. In 2010 he moved to Stanford where he runs the Computation and Cognition Lab. His work has been recognized by the J. S. McDonnell Foundation Scholar Award, the Roger N. Shepard Distinguished Visiting Scholar Award, the Alfred P. Sloan Research Fellowship in Neuroscience, seven computational modeling prizes from the Cognitive Science Society, and best paper awards from AAAI, EDM, and other venues.
Alexander Gray

VP of Foundations of AI, IBM Watson Center, New York

New Theoretical Foundations for Neuro-Symbolic Fusion and Emerging Benefits for NLP and Sequential Decision Making

Talk outline: I’ll describe our efforts to establish a complete neuro-symbolic paradigm and its progress toward some open problems of AI, compared with deep learning (DL) alone.

1) I’ll introduce the Logical Neural Network (LNN) model framework, and first-of-a-kind theoretical results which address the need for provably correct and tractable full first-order logical inference with realistic (uncertain, partial and even inconsistent) knowledge while also maintaining the full learning capabilities of a modern neural network. Upon this foundation, we build effective learning of logical formulae, in ways that achieve more compactness for interpretability, and can utilize temporal relationships. More broadly, I’ll discuss our larger emerging theory around learning the compositional structure underlying a dataset and its implications for fundamentally reducing sample complexity.

2) I’ll describe our progress toward an alternative strategy for NLP which this framework enables: translating sequences of words into logic statements representing their underlying semantics, using our state-of-the-art technologies for semantic parsing, entity linking and relation linking. Though this is a difficult road, it holds the promise of “true” understanding of language, in particular when models are quizzed on examples far from what was observed in training data. The approach already reaches or exceeds the state of the art on question-answering benchmarks, and more importantly can answer questions that are not otherwise answerable without reasoning. I’ll also show how we can use our machinery to constrain text generation by large language models to stay consistent with a chosen set of reference facts or policy constraints.

3) Finally, I’ll show how we can use the framework to realize a sequential decision-making scheme which can leverage the aforementioned reasoning (for planning-like capability) and the aforementioned learning (for RL-like capability) in concert to perform more efficiently in very difficult open-ended domains with infinite actions, such as text-based interactive fiction games (like Zork). To address the need for a rich world model for bootstrapping new applications in general, I’ll introduce our Universal Logic and Knowledge Base (ULKB), a first federation/alignment of all of the major public linguistic and ontological knowledge graphs, comprising about 1 billion facts, with a logical foundation in simple type theory. Putting these together with our NLP machinery, we show the ability to solve games that typical DL approaches are unable to complete.

Bio: Alexander Gray serves as VP of Foundations of AI at IBM, where he leads a global research program in neuro-symbolic AI. He received AB degrees in Applied Mathematics and Computer Science from UC Berkeley and a PhD in Computer Science from Carnegie Mellon University. Before IBM he worked at NASA, served as a tenured Associate Professor at the Georgia Institute of Technology, and co-founded an AI startup in Silicon Valley. His work on machine learning, statistics, and algorithms for massive datasets — using ideas from computational geometry and computational physics, and predating the “big data” movement in industry — has been recognized with a number of honors, including multiple best-paper awards, the NSF CAREER Award, selection as a National Academy of Sciences Kavli Scholar, and service on the 2010 National Academy of Sciences Committee on the Analysis of Massive Data. His interests have generally revolved around exploring new or underdeveloped connections between diverse fields with the potential of breaking through long-standing bottlenecks of ML/AI.
Sumit Gulwani

Microsoft, Redmond

AI-assisted Programming

Talk outline: AI can enhance programming experiences for a diverse set of programmers: from professional developers and data scientists (proficient programmers) who need help in software engineering and data wrangling, all the way to spreadsheet users (low-code programmers) who need help in authoring formulas, and students (novice programmers) who seek hints when stuck with their programming homework. To communicate their need to AI, users can express their intent explicitly—as input-output examples or natural-language specification—or implicitly—where they encounter a bug (and expect AI to suggest a fix), or simply allow AI to observe their last few lines of code or edits (to have it suggest the next steps).

The task of synthesizing an intended program snippet from the user’s intent is both a search and a ranking problem. Search is required to discover candidate programs that correspond to the (often ambiguous) intent, and ranking is required to pick the best program from multiple plausible alternatives. This creates a fertile playground for combining symbolic-reasoning techniques, which model the semantics of programming operators, and machine-learning techniques, which can model human preferences in programming. Recent advances in large language models like Codex offer further promise to advance such neuro-symbolic techniques.

Finally, a few critical requirements in AI-assisted programming are usability, precision, and trust; and they create opportunities for innovative user experiences and interactivity paradigms. In this talk, I will explain these concepts using some existing successes, including the Flash Fill feature in Excel, Data Connectors in PowerQuery, and IntelliCode/CoPilot in Visual Studio. I will also describe several new opportunities in AI-assisted programming, which can drive the next set of foundational neuro-symbolic advances.

Bio: Sumit Gulwani is a computer scientist connecting ideas, people, and research & practice. He invented the popular Flash Fill feature in Excel, which has now also found its place in middle-school computing textbooks. He leads the PROSE research and engineering team at Microsoft that develops APIs for program synthesis and has incorporated them into various Microsoft products including Visual Studio, Office, Notebooks, PowerQuery, PowerApps, PowerAutomate, PowerShell, and SQL. He is a sponsor of storytelling training and initiatives within Microsoft. He started a novel research fellowship program in India, a remote apprenticeship model to scale up impact while nurturing globally diverse talent and growing research leaders. He has co-authored 11 award-winning papers (including 3 test-of-time awards from ICSE and POPL) among 140+ research publications across multiple computer science areas and delivered 60+ keynotes/invited talks. He was awarded the Max Planck-Humboldt Medal in 2021 and the ACM SIGPLAN Robin Milner Young Researcher Award in 2014 for his pioneering contributions to program synthesis and intelligent tutoring systems. He obtained his PhD in Computer Science from UC Berkeley, and was awarded the ACM SIGPLAN Outstanding Doctoral Dissertation Award. He obtained his BTech in Computer Science and Engineering from IIT Kanpur, and was awarded the President’s Gold Medal.
José Hernández-Orallo

Valencian Research Institute for Artificial Intelligence, Universitat Politècnica de València; Leverhulme Centre for the Future of Intelligence, University of Cambridge

Instructing prior-aligned machines: programs, examples and prompts

Talk outline: Turing considered instructing machines by programming, but also envisaged ‘child’ machines that could be educated by learning. Today, we have very sophisticated programming languages and very powerful machine learning algorithms, but can we really instruct machines in an effective way? In this talk I claim that we need better prior alignment between machines and humans for machines to do what humans really want them to do, with as little human effort as possible.

First, I’ll illustrate the reason why very few examples can be converted into programs in inductive programming and machine teaching. In particular, I’ll present a new teaching framework based on minimising the teaching size (the bits of the teaching message) rather than the classical teaching dimension (the number of examples). I’ll show the somewhat surprising result that, in Turing-complete languages, when using strongly aligned priors between teacher and learner, the size of the examples is usually smaller than the size of the concept to teach. This gives us insights into the way humans should teach machines, but also the way machines should teach humans, what is commonly referred to as explainable AI.

Second, I’ll argue that the shift from teaching dimension to teaching size reconnects the notions of compression and communication, and the primitive view of language models, as originally introduced by Shannon. Nowadays, large language models have distilled so much about human priors that they can be easily queried with natural language ‘prompts’ combining a mixture of textual hints and examples, leading to ‘continuations’ that do the trick without any program or concept representation. The expected teaching size for a distribution of concepts presents itself as a powerful instrument to understand the general instructability of language models for a diversity of tasks. With this understanding, ‘prompting’ can properly become a distinctively new paradigm for instructing machines effectively, yet deeply intertwined with programming, learning and teaching.

Luis Lamb

Federal University of Rio Grande do Sul (UFRGS), Brazil

On the Evolution and Contributions of Neurosymbolic AI

Talk outline: Although AI and machine learning have significantly impacted science, technology, and economic activities, several research questions remain challenging on the way to trustworthy AI. Researchers have shown that there is a need for AI and machine learning models that soundly integrate logical reasoning and machine learning. Neurosymbolic AI aims to bring together effective machine learning models and the logical essence of reasoning in AI. In recent years, technology companies have organized research groups toward the development of neurosymbolic AI technologies, as contemporary AI systems require sound reasoning and improved explainability. In this presentation, I address how neurosymbolic AI has evolved over the last decades. I also address recent contributions within the field to building richer AI models and technologies, and how neurosymbolic AI can contribute to improved AI explainability and trust.

Bio: Luis C. Lamb is a Full Professor at the Federal University of Rio Grande do Sul (UFRGS), Brazil. He holds a Ph.D. and Diploma in Computer Science from Imperial College London (2000), and an MSc (1995) and BSc (1992) in Computer Science from UFRGS, Brazil. Lamb’s research covers neurosymbolic AI, the integration of learning and reasoning, ethics in AI, and innovation strategies. He co-authored two research monographs in AI: Neural-Symbolic Cognitive Reasoning, with Garcez and Gabbay (Springer, 2009), and Compiled Labelled Deductive Systems (IOP, 2004). His research has led to publications in flagship journals and conferences including AAAI, IJCAI, NIPS, HCOMP, and ICSE. He co-organized two Dagstuhl Seminars on neurosymbolic AI — Neural-Symbolic Learning and Reasoning (2014) and Human-Like Neural-Symbolic Computing (2017) — as well as several workshops on neurosymbolic learning and reasoning at AAAI, IJCAI, ECAI, and IJCLR. He has been an invited speaker at IBM, Samsung, The AI Debate #2, an AAAI 2021 panel, NeurIPS and CIKM workshops, and a large number of neurosymbolic AI, innovation, and technology meetings. Lamb is the former Secretary of Innovation, Science and Technology of the State of Rio Grande do Sul (2019-2022), and served as Vice President of Research (2016-2018) and Dean of the Institute of Informatics (2011-2016) at UFRGS. He has also been a Visiting Fellow at the MIT Sloan School of Management and a Sloan Fellows MBA student at MIT.
Pierre Lévy

University of Montréal (Canada), INTLEKT Metadata

The case for a semantic protocol

Talk outline: Starting from a reflection on the evolution of computing and on the crucial role of language in human cognition, I propose the adoption of a computable language to serve as a semantic metadata protocol.

A candidate already exists to play this role: IEML (Information Economy Metalanguage) is composed of a compact dictionary of 3000 words and a fully regular grammar. This artificial language is philological, i.e. it can evoke any concept, translate natural languages, and is self-defining. It is univocal, i.e. it has no homonyms or synonyms. It promotes narrativity, i.e. it allows for the evocation of scenes and stories, including causal explanations. It is recursive, i.e. its sentences can nest within each other. The act of reference (to data) is explicit and included in the grammar. It is self-referential, which means that it can refer to and comment on its own expressions. On a formal level, IEML is an abstract algebra based on a stack of symmetry structures. At the user-interface level, it is manipulated through its translations into natural languages and visualized in the form of tables and graphs. In terms of efficiency, it allows the programming of nodes and semantic links, i.e. the semi-automatic generation of ontologies and other data structures.

The adoption of this protocol could multiply the power of artificial intelligence, bring deep learning, blockchain and the metaverse into synergy, ensure the interoperability of databases and applications of all kinds, and finally lay the technical foundations for the emergence of a reflexive collective intelligence.

Bio: Pierre Lévy is a philosopher who has devoted his professional life to understanding the cultural and cognitive implications of digital technologies, to promoting their best social uses, and to studying the phenomenon of human collective intelligence. He has written a dozen books on this subject that have been translated into more than 12 languages. His book on the technologies of intelligence, published in 1990, forecast the advent of the Web. As early as 1992, he founded in France one of the first software companies dedicated to knowledge management. His book Collective Intelligence, published in 1994 and translated into several languages, is still inspiring young researchers. His most recent book, The Semantic Sphere (2011), proposes a scientific approach to transforming the Internet into a reflexive observatory of human collective intelligence. For this purpose, he invented IEML, a language with computable semantics. Pierre Lévy is a fellow of the Royal Society of Canada and a founding associate editor of the Collective Intelligence journal, and he has received several awards and academic distinctions. He is currently an associate professor at the University of Montréal (Canada) and CEO of INTLEKT Metadata.
Leslie Pack Kaelbling

MIT

Doing for our robots what nature did for us

Talk outline: We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in “the factory” (that is, at engineering time) and in “the wild” (that is, when the robot is delivered to a customer). I will share some general thoughts about the strategies for robot design and then talk in detail about some work I have been involved in, both in the design of an overall architecture for an intelligent robot and in strategies for learning to integrate new skills into the repertoire of an already competent robot.

Bio: Leslie is a Professor at MIT. She has an undergraduate degree in Philosophy and a PhD in Computer Science from Stanford, and was previously on the faculty at Brown University. She was the founding editor-in-chief of the Journal of Machine Learning Research. Her research agenda is to make intelligent robots using methods including estimation, learning, planning, and reasoning.

She is not a robot.