This AI Grows a Brain During Training (Pathway’s AI w/ Zuzanna Stamirowska)

Imagine an AI that doesn’t just output answers — it remembers, adapts, and reasons over time like a living system.

In this episode of The Neuron, Corey Noles and Grant Harvey sit down with Zuzanna Stamirowska, CEO & Cofounder of Pathway, to break down what her team is building: the world's first post-Transformer frontier model, called BDH, the Dragon Hatchling architecture.

Zuzanna explains why current language models are stuck in a "Groundhog Day" loop, waking up with no memory each time, and how Pathway's architecture introduces true temporal reasoning and continual learning.

We explore:
• Why Transformers lack real memory and time awareness
• How BDH uses brain-like neurons, synapses, and emergent structure
• How models can “get bored,” adapt, and strengthen connections
• Why Pathway sees reasoning — not language — as the core of intelligence
• How BDH enables infinite context, live learning, and interpretability
• Why gluing two trained models together actually works in BDH
• The path to AGI through generalization, not scaling
• Real-world early adopters (Formula 1, NATO, French Postal Service)
• Safety, reversibility, checkpointing, and building predictable behavior
• Why this architecture could power the next era of scientific innovation

From brain-inspired message passing to emergent neural structures that literally appear during training, this is one of the most ambitious rethinks of AI architecture since the Transformer itself.

If you want a window into what comes after LLMs, this interview is essential.

Resources:
- Read the BDH research paper: https://arxiv.org/abs/2509.26507
- Learn more about Pathway: https://pathway.com/

Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai

➤ CHAPTERS
01:12 - From Game Theory to Complexity Science
05:09 - How Intelligence Emerges from Simple Interactions
06:39 - The Transformer Breakthrough — and Its Limits
08:23 - AI’s Groundhog Day Problem
13:24 - Why Pathway Calls It "Baby Dragon Hatchling"
16:52 - Continual Learning and the Dragon Metaphor
17:20 - Learning Like a Brain: Neurons and Connections
21:27 - When a Brain Emerges Inside the Model
22:54 - Memory as Strengthened Connections
24:58 - Seeing Neural Activity Inside the Model
26:46 - Memory, Surprise, and Forgetting
27:47 - Scaling Without Brute Force
32:44 - Gluing Models Together Like Lego
34:16 - Real-World Use Cases: From Formula 1 to NATO
36:38 - Dragon Nests & Production Roadmap
38:18 - Reasoning as the Core of Intelligence
39:45 - Safety and Controllable Risk
43:13 - Unlocking True Generalization
45:54 - Long-Term Vision for AI and Humanity

Hosted by: Corey Noles and Grant Harvey
Guest: Zuzanna Stamirowska, CEO and Cofounder of Pathway AI
Published by: Manique Santos
Edited by: Adrian Vallinan
Category: Artificial Intelligence & Business
Tags: Pathway AI, Zuzanna Stamirowska, Dragon Hatchling model
