In a recent article, Prof. Dr. Robbert Dijkgraaf, theoretical physicist and former Minister of Education, Culture and Science of the Netherlands, highlights the data inefficiency of current AI learning strategies. Despite their reliance on vast amounts of data and processing power, these systems still miss the mark when it comes to capturing the complexity of the human mind. At the same time, AI companies face growing challenges around training data, including data quality, copyright concerns, and licensing costs.
This points to a fundamental question for AI development: how can we make AI systems learn more like humans do? How can we draw on the way children learn about the world and build those principles into AI training?
Prof. Dr. Dijkgraaf takes us to the playground, where children act as mini-researchers. They explore, test their theories of how things work, and try again when needed. Their brains continuously signal surprise, recognition, and expectation. As a result, after only a few encounters with something unfamiliar, children are able to categorize objects and form new concepts, building on prior knowledge and experience.

A crucial cognitive skill in this exploration process is category learning. Through implicit guidance and limited feedback, humans learn to recognize shared characteristics across diverse encounters, shaping how they classify things in the future. For example, after seeing just two different cups, you can recognize hundreds of variations, even when their shape, color, or size differs.
This is precisely where Zander Labs' work becomes relevant. We believe that passive brain-computer interfaces (pBCIs) can enable AI to use the human brain as a tutor. By capturing meaningful neural signals related to attention, cognitive workload, error detection, and other key mental states, these systems can use implicit information from the brain to guide AI learning more efficiently.
When integrated into the learning loop, these signals allow AI systems to interpret the world more like humans do, becoming more sensitive to context and more efficient in forming categories. This insight into the brain's responses enables what we call neuroadaptive category learning: AI systems that learn and act based on how the human brain responds, rather than relying solely on predefined labels or massive datasets. In this way, the human brain acts as a tutor, continuously shaping the AI learning process through subtle neural feedback.
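To make the idea concrete, here is a minimal sketch of such a learning loop. All names and numbers are hypothetical, and the implicit feedback is simulated: in a real pBCI setting it would be decoded from EEG (for instance, from an error-related brain response), not computed from a known label. The point is the structure: the system proposes a category, the brain's implicit reaction signals agreement or error, and the system updates from that signal alone.

```python
import random

random.seed(0)

def classify(prototypes, stimulus):
    """Assign the stimulus to the nearest category prototype (1-D for simplicity)."""
    return min(prototypes, key=lambda c: abs(prototypes[c] - stimulus))

def simulated_pbci_feedback(true_category, proposed_category):
    """Stand-in for decoding an implicit error signal from the brain.

    Here we simulate it from the true category; a real system would
    instead detect the observer's neural response to the proposal.
    """
    return proposed_category != true_category  # True = brain signalled "error"

def update(prototypes, category, stimulus, lr=0.3):
    """Nudge the chosen category's prototype toward the stimulus."""
    prototypes[category] += lr * (stimulus - prototypes[category])

# Two hypothetical categories with poorly initialized prototypes.
prototypes = {"cup": 0.0, "bowl": 1.0}

for _ in range(200):
    true_cat = random.choice(["cup", "bowl"])
    # Stimuli cluster around 2.0 for cups and 5.0 for bowls.
    stimulus = random.gauss(2.0 if true_cat == "cup" else 5.0, 0.5)
    proposed = classify(prototypes, stimulus)
    if simulated_pbci_feedback(true_cat, proposed):
        # Implicit error signal: move the *other* prototype toward the stimulus.
        other = "bowl" if proposed == "cup" else "cup"
        update(prototypes, other, stimulus)
    else:
        # No error signal: reinforce the proposed category.
        update(prototypes, proposed, stimulus)

print(prototypes)  # prototypes drift toward the true cluster centers
```

Note that the tutor never provides an explicit label; the only supervision is a binary "that felt wrong" signal, yet the category prototypes still converge toward the underlying clusters. That is the efficiency argument in miniature.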
Echoing Prof. Dr. Dijkgraaf’s playground metaphor, the lesson is clear. If we want AI systems that learn more like humans, we may need to rethink our reliance on scale alone and listen more closely to the human mind.