Rethinking ‘Learning’ in Machine Learning
Feb 13, 2026
As artificial intelligence (AI) advances, comparisons between human learning and machine learning become increasingly relevant. While it might be tempting to think that machines learn similarly to humans, the reality is far more complex.
In the 1940s, Donald Hebb proposed the principle now summarized as “neurons that fire together, wire together,” illustrating how experiences strengthen connections within the brain, enabling learning. This foundational understanding of human learning contrasts with the development of AI.
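Hebb’s postulate has a remarkably compact formalization: a connection weight grows in proportion to the coincidence of activity at its two ends. Here is a minimal sketch of that rule in Python (the learning rate and the toy activity values are illustrative assumptions, not Hebb’s own numbers):

```python
# Minimal sketch of a Hebbian weight update: delta_w = eta * pre * post.
# The learning rate and activity values are illustrative assumptions.
def hebbian_update(w, pre, post, eta=0.1):
    """Strengthen a connection when pre- and post-synaptic units fire together."""
    return w + eta * pre * post

w = 0.0
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:
    w = hebbian_update(w, pre, post)
print(w)  # 0.2 -- only the two coincident firings strengthened the connection
```

Notice that nothing in this rule involves goals or errors; connections strengthen purely through co-occurrence of activity.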
The perceptron, introduced in 1958 by the psychologist Frank Rosenblatt, marked a seminal advance: it simulated a basic form of learning in machines by adjusting its responses based on outcomes.
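Unlike the purely correlational Hebbian rule, the perceptron adjusts its weights only when its output disagrees with the desired outcome. A minimal sketch of that learning rule (the AND-gate training data, learning rate, and epoch count are illustrative choices, not Rosenblatt’s original setup):

```python
# Minimal perceptron sketch: weights are nudged only when the output is wrong.
# Training on an AND gate is an illustrative choice, not Rosenblatt's setup.
def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, epochs=10, eta=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)  # +1, 0, or -1
            weights = [w + eta * error * xi for w, xi in zip(weights, x)]
            bias += eta * error
    return weights, bias

and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(and_gate)
print([predict(weights, bias, x) for x, _ in and_gate])  # [0, 0, 0, 1]
```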
Around that time, the term “machine learning” was coined. In 1959, Arthur Samuel was working on programming a computer to improve at the game of checkers, and it was this work that introduced the term. He wrote: “A computer can be programmed so that it will learn to play a better game of checkers than can be played by the person who wrote the program.”
Later, in the 1960s and 1970s, neural networks and deep learning began to emerge as subfields of AI. Their developers described them as attempts to mimic how the brain works.
However, a 2024 debate at Davos between Karl Friston, a neuroscientist, and Yann LeCun, a computer scientist, underscores a critical divide in understanding AI’s capabilities.
Friston criticizes deep learning for lacking a framework to account for the uncertainties inherent in human cognition, a fundamental aspect of our learning process.
In one interview, Friston said: “So, I think Deep Learning is rubbish, largely because it doesn’t have the calculus of the machinery or the physics to have a calculus of beliefs, a calculus of inference, of planning, of situational awareness. And crucially, if you don’t have math or an engineering toolkit that can quantify what you know and what you don’t know, namely, encode the uncertainty that underwrites your sense making, you’ll never know how to act in terms of getting the right kind of information.”
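To make the complaint concrete: a “calculus of beliefs” is what Bayesian methods provide, where a model carries not only an estimate but a measure of its own uncertainty. A toy sketch of such an update (the coin-flip data and the Beta prior are illustrative choices, not drawn from Friston’s work):

```python
# A tiny "calculus of beliefs": Bayesian updating of a coin's bias with a
# Beta distribution, which tracks both an estimate and its uncertainty.
# The coin-flip data and uniform prior are illustrative assumptions.
alpha, beta = 1.0, 1.0  # uniform prior: maximal uncertainty about the bias
for flip in [1, 1, 0, 1, 1, 0, 1, 1]:  # 1 = heads, 0 = tails
    alpha += flip
    beta += 1 - flip

mean = alpha / (alpha + beta)  # current belief about the bias
variance = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
print(f"estimated bias {mean:.2f}, uncertainty (variance) {variance:.4f}")
```

Each observation narrows the distribution, so the system always “knows what it doesn’t know” in exactly the sense Friston argues deep learning lacks.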
Friston proposes, in his free-energy principle, that the human brain operates on predictive coding, constantly making hypotheses about future events to minimize surprise. This is akin to learning through trial and error, where successes and mistakes refine our predictions and responses.
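In its simplest caricature, such an agent maintains a prediction, compares it against each new observation, and shifts the prediction in proportion to the prediction error, the “surprise.” A toy sketch of that loop (the hidden value, noise level, and learning rate are illustrative assumptions; this is a caricature of predictive coding, not Friston’s full free-energy formalism):

```python
import random

# Toy predictive-coding loop: keep a running prediction of a hidden value and
# correct it by a fraction of each prediction error. The hidden value, noise
# level, and learning rate are illustrative assumptions, not Friston's model.
random.seed(0)
hidden_value = 5.0
prediction = 0.0
eta = 0.2  # how strongly each surprise updates the internal model

for step in range(20):
    observation = hidden_value + random.gauss(0, 0.5)  # noisy sensory input
    error = observation - prediction                   # the "surprise"
    prediction += eta * error                          # reduce future surprise
print(round(prediction, 2))  # approaches 5.0 after repeated corrections
```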
On the other hand, LeCun highlights the immense data processing and learning capabilities of AI, yet acknowledges its shortfall in emulating the depth of human learning, which integrates experience, observation, and interaction.
This divergence points to a core issue: the term “machine learning”, while practical, is misleading, because it suggests a cognitive engagement and understanding similar to human learning.
Machines excel at identifying patterns and making predictions from data, but they lack the ability to derive meaning, pursue purpose, or undergo experiential growth, all of which are essential components of human cognition.
Acknowledging these differences is vital: it sets realistic expectations for AI’s capabilities, guards against inflated beliefs about them, and guides the ethical development and deployment of AI technologies. The quest to create machines that emulate human learning continues, but recognizing the distinct nature of human cognition is crucial for the progress of AI.
This article was originally published online in 2024 at theyuan.com.