Why aren’t artificial neural network systems replacing people? In AI, machines should replicate what we do, not perform statistical calculations.

What can we do with a thought experiment to improve our AI?
Progress in AI and AGI has seen slow enterprise adoption because today’s generative AI doesn’t work as predicted by big-tech CEOs. We’re told fantasy stories about how all of science will soon be solved by machines that gather statistics from existing scientific papers.
But if the scientific papers included the answers to future science, what benefit do we get from AI, since the science is already written? AI today is obviously missing the key piece – how the world works in all its multisensory glory and the interactions between seen and imagined objects.
Galileo didn’t write that the planets of the solar system have moons, as Earth does, because he read it in a book, but because he saw them through a telescope he built himself. He recorded his observations from the telescope night after night until he discovered a pattern! His model came from his brain, combining the idea of the Earth and our moon with his observations of Jupiter and its circling dots. The heliocentric model of the solar system comes from the analysis of such observations, not just from reading other people’s work.
Today’s Goal – Best Model of a Brain
Today I introduce Patom theory, a brain model that also explains the consequences of brain damage. I’ll use quotations from its launch.
This sets up the analysis, next time, of my phrase “your mother’s face”, which shows our innate ability to perform most of the magic we see in human language. In science, we should focus our study on the right questions!
Brains aren’t Computers
From the 1970s, many sciences shifted from models focused on the specifics of their subject to models focused on computation. Top accolades went to leaders who named new departments like Computational Linguistics, Computational Neuroscience and even Computational Psychology. Science and engineering disciplines were even merged, with titles like “Computational Science and Engineering (CSE)”!
A better approach: keep the established sciences going until proposed alternatives are proven.
The old concept of machine learning (ML) led to deep learning, which led to Large Language Models. All of those models exploit ML. But this isn’t how human or animal brains work, as I will show you. We have much better arguments than “the brain is a computer.”
Today we can see why ML isn’t the right model for AI or AGI, and how the competitive model of pattern-atom theory or Patom theory starts to explain brain function without the metaphor of the digital computer clouding our judgement.
Animal Brains – Born to Run
Impalas in the African environment stand within about 10 minutes of birth and can keep up with their mothers within about 30 minutes. They have no training datasets. Survival requires a little luck for those early hours after birth, but the newborn quickly transforms into the jumping and sprinting animal that outruns its predators.
ML is not used. There is little opportunity to learn how their muscles and sensors work to control their bodies for the amazing feats of jumping and sprinting needed for survival. A lot of capability comes from genetics.
Similarly, human brains use genetics to enable conversation and survival. Without being taught how to recognize objects, our brains do it. Our brains also effortlessly label complex multisensory objects with names and describe situations never seen before – the combination of impressions (what we experienced) with ideas (parts of what we experienced recombined in new ways).
How brains work from the inside
Our personal experience of the world is subjective. It provides us with solid observational scientific data. Our abilities from these experiences can be used to help support a model, and then we can test the model on different problems to support or undermine it.
1990s Thought Experiment
To introduce Patom theory, a theoretical brain model, I wrote the following thought experiment for Australian ABC radio (the Ockham’s Razor program).
Let’s start with learning – or teaching our brain something. What is it?
Human Brain’s ‘networks’
We hear about neural networks being behind AI, but those are artificial neural networks (ANNs) ‘inspired’ by human brains. (ANNs are not brain-like in many important ways.)
What exactly is the brain’s network doing? One model is that it is performing the operations carried out in digital computers, but just in an unfathomable way. Somehow programs are written in our brain to control everything! But brains start, as in a newborn impala, by moving.
And let’s not forget that animal levels of movement far exceed all of today’s robotics. Here’s my analogy of learning a golf swing from Ockham’s Razor:
“If I am hitting a ball, my brain is storing the patterns needed to move my many muscles in the right way. We repeat actions to condition the patterns. We normally say we are learning to hit the ball, of course, not that we’re teaching our brain the sequences to move our muscles.”
Think about this for a moment. By swinging a club, perhaps following verbal instruction from a coach, and refining technique by rote, our brain performs an astronomical number of muscle contractions and relaxations while our body maintains balance and corrects without ever being taught explicitly how to balance.
In humans, learning can be seen as storing a huge number of patterns covering all our muscles, or some simpler model, but it cannot be some kind of brain program, because our brain is too slow to produce the range of muscle movements in real time without prior experience. Movement too fast for feedback correction is known as ballistic motion.
Now for the thought experiment itself: what would we see if we were sitting inside our brain where it connects to our spine to control and sense our body?
“Imagine that you are sitting inside your brain as your arm throws a ball. Moving your arm produces a series of patterns sent by the sensory nerves in your arm. The sensory neurons connected to the muscles in your arm will fire as your arm moves. That’s their role. If you store these patterns in your brain, you can replay the movement by triggering the same pattern from your motor neurons.”
Notice that in our thought experiment a wide range of sensors in our body provides specific feedback as the muscles move, all at the same time.
“In other words, while sitting in your brain as the sensory patterns are received, you need only record the order and timing of the signals. Later, if you want to repeat those same movements, simply ‘play-back’ the pattern using the motor neurons that control the arm muscles. This is pattern matching, not processing. The brain learns by doing. As children, we spend lots of time moving both ourselves and the objects around us. This teaches our brain key muscular, visual and auditory sequences used for the rest of our lives. I call this pattern matching, ‘learning’.”
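The record-and-replay idea in this quote can be sketched as a toy program. This is only an illustration of pattern matching as storage and playback; the class and signal names are my own invention, not part of Patom theory:

```python
from dataclasses import dataclass, field

@dataclass
class MotorPattern:
    """Toy model: store timed sensory signals, replay them as motor commands."""
    events: list = field(default_factory=list)  # (seconds_from_start, signal)

    def record(self, timed_signals):
        # Store (time, signal) pairs exactly as observed -- no computation,
        # just ordering by arrival time.
        self.events = sorted(timed_signals)

    def replay(self, motor_neuron):
        # Trigger the same signals in the same order and relative timing.
        # (A real-time version would wait between events; here we just emit.)
        for t, signal in self.events:
            motor_neuron(t, signal)

# "Learning" a throw is recording, not processing.
observed = [(0.00, "biceps contract"), (0.12, "triceps contract"), (0.20, "wrist flex")]
pattern = MotorPattern()
pattern.record(observed)

played = []
pattern.replay(lambda t, s: played.append((t, s)))
assert played == observed  # playback reproduces order and timing
```

The point of the sketch is what it leaves out: there is no model of arm dynamics and no computation of trajectories, only storage and playback of the observed pattern.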
Now this analogy holds for some of the functions of our brain cells, but perhaps not others. The concept of activating a muscle with an electrochemical signal is what we’re given to work with, and the timing of those activations forms a reasonably complex pattern in which large numbers of muscles must activate and relax with precise timing.
Even speech production follows this system as it requires sets of muscle controls in sequences, but we need to dig deeper.
Next Time
In my next article, the phrase “your mother’s face” gives an extraordinary platform to explain how our brain works, aligned with what we say and hear and how we understand it. Language doesn’t start with words, but with our brain and, like the newborn impala, a powerful wealth of capabilities helps us survive – from the five fingers on each hand that pick up tools, to our motion on two legs that frees our hands, to our language that allows incomparable communication. But that’s for next time.

