
Upcoming Book: Modern Cognitive AI

How to build AI that works like us, and how it benefits society!


Language, like our ability to navigate the world with our senses and muscles, comes from our brain. AGI needs better ‘cognitive AI.’ We choose our words carefully, because that kind of precision is exactly what our brain delivers. Photo by Brett Jordan on Unsplash

I’m writing a new book explaining how AGI works with cognitive AI.

I will publish the sections for comment before the final book is compiled. You’ll be able to follow the series on LinkedIn, Speech Genie, Substack and Medium. I invite you to debate and correct me where you disagree with the model. I want to shine the torch on much-needed science, so let’s do some science!

My goal is to improve AI for language and robotics using brain science.

Many of you probably feel that today’s AI isn’t delivering on its potential quickly enough, despite the promised leaps forward for society. We get problematic chatbots, while big tech CEOs claim super-human intelligence is coming any day now. Or perhaps soon after only a couple of once-in-a-generation breakthroughs! Yes, that’s sarcasm from me.

Today’s AI and most AI of the past rely on computers and mathematics.

Computers start with encoding: taking binary bits and using them to represent something. Given an encoding mechanism, the bits represent characters, or real numbers, or Boolean values, or just about anything. But – and this is a critical point – does anyone think our brain represents a dollar bill as a jpg file or other compressed format? Probably not, but the assumption that brains and computers are the same needs to be carefully examined.
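To make the point concrete, here is a minimal Python sketch (my own illustration, not from the book) showing that the very same 32 bits mean an integer, a character string, or a floating-point number depending entirely on which encoding we apply:

```python
# The same 32 bits mean different things depending on the encoding
# used to interpret them: bits represent nothing on their own.
import struct

bits = 0x41424344  # one 32-bit value

as_int = bits                                # interpreted as an integer
as_bytes = bits.to_bytes(4, "big")           # interpreted as raw bytes
as_text = as_bytes.decode("ascii")           # interpreted as characters
as_float = struct.unpack(">f", as_bytes)[0]  # interpreted as an IEEE 754 float

print(as_int)    # 1094861636
print(as_text)   # ABCD
print(as_float)  # ~12.14
```

The bits never change; only the interpretation does. That is the sense in which a computer’s representations are arbitrary conventions rather than intrinsic meanings.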

We need to disrupt the assumptions and start with observation. Science first, then apply theory to engineering. Looking at how to force a technology into a solution rarely works on complex problems. It’s a bit like carving a Thanksgiving turkey with only a spoon.

The solution to AI comes from the emulation of humans. Scientists and engineers can try to do anything; there are no constraints. But to seriously emulate the amazing capabilities of humans, we need a brain model that explains how our nervous system works. The lack of breakthrough robotics, or of systems that accurately do what we ask, exposes today’s shortfalls. With a better understanding of our brain in place, we could use digital systems to emulate various brain functions. That would go a long way toward rapidly transforming robotics and language understanding systems.

Patom (brain) theory (PT), which I announced on ABC Radio’s Ockham’s Razor show in the year 2000, explains how brains work by drawing out the implications of what we see in brain damage and brain scans. Those studies of brain damage show that the brain functions primarily as a pattern-matching machine, and this idea explains what we observe in damaged brains. The model of massive computing with digital bits does NOT explain what we see happening with human brains.
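To illustrate the distinction loosely (this is my own toy sketch, not Patom theory itself, and not a claim about neural implementation), recognition by pattern matching differs from computing an answer from scratch: the system simply finds the stored pattern that best agrees with a partially obscured input, much as we recognize a face despite shadows:

```python
# Toy illustration: recognition as best-match against stored patterns,
# tolerant of obscured positions (marked '?', e.g. a shadow on a face).

def overlap(stored, seen):
    """Fraction of positions where the stored pattern agrees with the
    input; '?' marks an obscured position and always counts as agreeing."""
    hits = sum(1 for s, x in zip(stored, seen) if x == "?" or s == x)
    return hits / len(stored)

memory = {
    "mother's face": "eyes-nose-mouth-smile",
    "meeting room":  "door-table-chairs-screen",
}

def recognize(seen):
    # Return the name of the stored pattern with the best overlap.
    return max(memory, key=lambda name: overlap(memory[name], seen))

print(recognize("eyes-nose-?????-smile"))   # mother's face
```

The point of the sketch is that nothing is computed about the face itself; the obscured input is matched against memory, and the best match wins.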

But what if Patom theory is wrong?

Well, that brings in the scientific method. At its core, the scientific method progresses with the winning arguments and with more detailed observational data! If your argument isn’t good enough, you lose and need to come back another day with a better argument. If your observations don’t support your argument, deeper analysis is required.

In science, you must articulate a better model that explains what we see, and show why the alternatives fall short. In the twenty-five years since I first described PT, the biggest objection I hear is that our brain isn’t built on pattern-atoms but is just like a computer. That model fails to explain how we recognize our world with vision and sound, and how we handle multisensory perception in general to enable effective motor control with our muscles. Or how we remember the setup of a meeting room, or maps, or someone’s face with shadows and other obfuscations.

Science can progress engineering more rapidly than engineering alone. The worst case is trying to use engineering without an effective model as we see with today’s LLMs. Doing things right isn’t the same as doing the right things.

Just as the scientific breakthrough that put our sun at the center of the solar system revolutionized astronomy and the engineering opportunities that followed, so, too, will debating our brain’s function so that we can replicate it.

We will post regular updates on our blog at http://speechgenie.co. That site is our go-to-market product, which uses brain science for proven language learning. As we say: learn like a child, speak like a native.

Please contribute $10 to become a VIP if you can support us.

