Date

Jan 22, 2026

Location

Harbor Centre, Vancouver, BC

Deepdive 009 – On Meaning and Understanding

Topic

On Meaning and Understanding: Can AI Understand?

Description

A MAC Group Deepdive: an in-person, two-hour exploration of the nature of meaning and understanding, from semantics to semiotics, epistemology to ontology, and the deeper sense of “knowing.” What can these perspectives tell us about AI’s current or future capacity to do that thing we know so well: understanding?

Deepdive #009

“The meaning of life is something we answer with our own activities; there’s no general answer. We determine what the meaning of it is. Meaning in the sense of significance… is something you create.”

– Noam Chomsky

“What we call reality is just when we all agree about our hallucinations.”

– Sam Harris

Making Sense

Sense-making is the process of creating meaning from complex or ambiguous experience. Individuals interpret information and events, often retrospectively, to form plausible understandings that guide action, drawing on social context, cues, and personal identity to build a “map” of their world. It is a continuous effort to understand connections well enough to act effectively, prioritizing useful insight over absolute accuracy, especially in uncertain situations.

Meaning

Books

The Blind Spot: Why Science Cannot Ignore Human Experience (2024)

  • Author(s): Adam Frank, Marcelo Gleiser, and Evan Thompson
  • Perspective: This is the heavyweight philosophical text. It argues that the “hard problem” of consciousness and meaning arises because science (including AI research) systematically ignores the perspective of the scientist. It challenges the group to consider whether we are looking for “understanding” in the wrong place: in the objective weights of the model rather than in the subjective experience that makes those weights meaningful.

The Atomic Human: Understanding Ourselves in the Age of AI (2024)

  • Author(s): Neil D. Lawrence
  • Perspective: Lawrence (DeepMind Professor of ML at Cambridge) offers a technical yet philosophical look at what remains of “humanity” if AI can do everything else. He introduces the concept of “The Great Stagnation” of meaning—where we have more data than ever but less understanding. He argues that human understanding is constrained by our bandwidth (we communicate slowly), whereas AI has infinite bandwidth but lacks the “vulnerability” that forces us to create meaning.

Nexus: A Brief History of Information Networks from the Stone Age to AI (2024)

  • Author(s): Yuval Noah Harari
  • Perspective: Harari traces how information networks, from stories and bureaucracies to print and algorithms, have bound societies together, arguing that information’s primary function is connection rather than truth. He frames AI as the first technology capable of generating ideas and making decisions on its own, asking what happens to human meaning-making when networks process ever more information without understanding it.

Podcasts

Hidden Levels of Alien Intelligence and Biological Life (Lex Fridman #486)

  • Author(s): Michael Levin
  • Perspective: Levin argues that intelligence and mind are widespread, scalable properties of living systems, emerging wherever information is organized to pursue goals across space and time. He and Lex discuss biological intelligence from cells to organisms, how life blurs the line between living and non‑living, and how “alien” minds may already exist on Earth in unfamiliar forms such as cellular collectives, xenobots, and even algorithms. Levin develops themes of a hidden “Platonic” structure of possibilities, brains as interfaces to deeper reality, memories and ideas as quasi‑living patterns, and a spectrum of agency and persuadability that challenges standard boundaries between matter, life, and mind.

Are LLMs Thinkers? (The Human Podcast)

  • Author(s): David Papineau
  • Perspective: Functionalism vs. Consciousness – Papineau, a leading philosopher of science, engages specifically with the “next token prediction” argument. He challenges the standard dismissal that “it’s just statistics” by asking: if the statistics perfectly model the causal structure of the world, at what point does that model become “understanding”? This podcast bridges the gap between the “stochastic parrot” view and the functionalist view held by many in the technical crowd.

Francois Chollet on Deep Learning and the Meaning of Intelligence (Mindscape Podcast)

  • Author(s): Francois Chollet
  • Perspective: The distinction between skill (memorized templates) and intelligence (adapting to novelty) is explored. Chollet (creator of Keras) is one of the strongest technical voices arguing against LLM understanding. He posits that current LLMs operate on interpolation within a manifold (curve fitting) rather than extrapolation (true reasoning). This directly serves the “topological” discussion by framing understanding as the ability to move off the training manifold.

What AI Can Never Be – John Vervaeke (Lecture Podcast)

  • Author(s): John Vervaeke
  • Perspective: Embodiment and participatory knowing – Vervaeke (cognitive science) argues that “meaning” requires relevance realization, which is biologically grounded in survival and embodiment. He suggests that AI lacks “participatory knowing”: it knows about things (propositional knowing) but cannot be in the world. This podcast addresses the “embodied AI” question, providing a rigorous cognitive-science argument for why an un-embodied model, no matter how large, might never “understand.”