Artificial Intelligence · Information Technology

What can old-fashioned AI tell us about the brain?

This is a review of the book Common Sense, the Turing Test, and the Quest for Real AI by Hector Levesque, copyright 2017.

In the last few years artificial intelligence, or AI, has been progressing by leaps and bounds.  Hardly a week goes by without the announcement of some big AI breakthrough.  For example, we hear that Google transformed its translation program over a single weekend by turning it loose on a huge pile of translation data: on Friday Google’s translations were clunky and hardly understandable, but by Monday they were practically indistinguishable from human translations.  Or that Toyota just invested $1 billion in AI research.  Or that Elon Musk invested a billion in a new AI company of his own.  Or that three of the major automakers plan to produce driverless cars within a year or two.  Or that Uber is almost ready to begin producing driverless trucks.  Or that everything from steelmaking to medicine to law to fast food is on the verge of becoming completely automated, potentially destroying millions of jobs in the next ten years.

A new kind of AI called Adaptive Machine Learning is responsible for most of this sudden progress.  Adaptive Machine Learning, or AML, centers on various kinds of pattern-recognition algorithms: voice recognition, face recognition, highway pattern recognition, and so on.  What is new about this kind of AI is that it is not based on human-written code that computers simply execute, as most earlier AI was.  AML is pretty much what it says it is: you turn a computer system loose on a massive amount of data, usually called Big Data, and ask the system to find and save recurring patterns.  AML is about algorithms learning on their own.

If you want a system that recognizes cats, you don’t tell the system to look for cats.  You show it billions of pictures of cats and non-cats and ask it to find common sets of features, i.e. repeated patterns, letting the computer do the job in its own way.  The computer decides for itself which features of the images are similar and why.  And the similarities computers see, and how they describe them, are not what humans see.  Computers mostly pick out mathematical similarities: the distance between the eyes, various measurements of the size, tonality, and color of similar color patches, and so on.

Then researchers ask the system to list all the common features it has discovered, then to look for smaller sets of common features inside the first batch, and so on, several levels down.  As Andrew Ng of Stanford puts it, “You throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data.”
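The core of what the last two paragraphs describe can be sketched in a few lines of code.  This is a deliberately toy illustration, not anything from Levesque’s book: four-pixel “images” stand in for cat photos, and a single trained linear unit stands in for a full learning system.  The point is that the program is never told what the two classes *are*; it is only shown labeled examples, and the “feature” it ends up with is a purely mathematical weighting of pixels.

```python
import numpy as np

# Toy "images": 4-pixel patterns.  One class tends to light up the left
# pixels, the other the right ones.  We never tell the learner what the
# classes mean -- it only sees labeled examples, as in the cat story.
rng = np.random.default_rng(0)

def make_examples(n, left_bright):
    base = np.array([0.9, 0.9, 0.1, 0.1]) if left_bright else np.array([0.1, 0.1, 0.9, 0.9])
    return base + rng.normal(0, 0.05, size=(n, 4))

X = np.vstack([make_examples(50, True), make_examples(50, False)])
y = np.array([1] * 50 + [0] * 50)

# A single linear unit trained by gradient descent: the "feature" it
# discovers is whatever weighting of pixels separates the examples.
w, b = np.zeros(4), 0.0
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid prediction
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

# The learned weights favor the left pixels -- a similarity the system
# found on its own, expressed mathematically, not in human terms.
pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
```

Real AML systems stack many such learned layers, each finding patterns inside the previous layer’s features, but the principle is the same: the data speaks, the programmer stays silent.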

The success of AML is usually attributed to three things, says Levesque: 1) truly massive amounts of data the software can learn from (mostly online, in specialized repositories, or coming from vast arrays of sensors such as traffic cameras, YouTube accounts, and consumer photo collections); 2) powerful computational techniques for dealing with all this data; and 3) very fast computers.  None of these was available even a few years ago.

These AML algorithms can easily be applied to all kinds of jobs in business and industry: examining mammograms, picking out particular spoken words, performing various kinds of translation, screening banking transactions for fraud, helping online shoppers find things they will like, or comparing visual roadway data with accelerator, brake, and steering-wheel data.  All of this is going to change the world very rapidly, starting very soon.

AML is where AI is headed in the very near future, and it will certainly change our world tremendously, for good and for ill and in many as yet unknown ways.

However, AML is not what this book is about.  This book is about what computer engineers call Good Old-Fashioned Artificial Intelligence, or GOFAI, sometimes also called common sense AI.  This is the kind of AI computer engineers were working on in the 1950s, and which never really went anywhere.

So, what is common sense AI, and why would we want to study it?

Let’s start with common sense.  Human beings rely on all kinds of routines in real life.  Common sense is what happens when one of our routines suddenly breaks down and we have to think for ourselves.  Maybe the door at the coffee shop refuses to open, or our car refuses to start.  When this sort of thing happens, we have to ask ourselves, “What exactly is going on here, and what should I do next?”  How human beings use common sense, and why computers find it so difficult, is what Levesque’s book is about.

Common sense AI is worlds away from modern AML.  AML is about automating great swaths of the business and industrial world and making truckloads of money.  Common sense AI, on the other hand, is a lot more like philosophy, or perhaps theoretical cognitive and brain research.

The first sentence of this book is “This is a book about the mind from the standpoint of AI.”  More precisely, it is Good Old-Fashioned AI that Levesque is talking about.  The hypothesis underlying this kind of AI is that ordinary, commonsense thinking of the sort people do every day is a computational process, and that it can be studied without regard for who or what is doing the thinking: humans, computers, or whatever.  Levesque grants that AML has been incredibly successful, but he argues that we can still learn a lot about how brains work, and about what makes human beings tick, from the original 1950s version of AI.

In 1958 John McCarthy, an early AI researcher, wrote a seminal paper called “Programs with Common Sense.”  By a common sense program he basically meant a program that can automatically deduce the consequences of anything it is told, combined with what it already knows.  In other words, the goal of a common sense program was to learn from experience just as well as humans can.  McCarthy called this kind of system the Advice Taker.
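The Advice Taker idea, deducing the consequences of what you have been told, can be sketched as forward chaining over simple if-then rules.  The rules below are my own illustrative inventions, not McCarthy’s (his paper worked through a getting-to-the-airport example), but the mechanism is the one he described: the program keeps applying rules to its facts until no new consequences appear.

```python
def deduce(facts, rules):
    """Forward chaining: apply rules until no new facts can be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules: (set of premises, conclusion).
rules = [
    ({"at_desk"}, "near_phone"),
    ({"near_phone", "knows_garage_number"}, "can_call_garage"),
    ({"can_call_garage"}, "can_get_car_fixed"),
]

# Tell the system two facts; it deduces the rest for itself.
known = deduce({"at_desk", "knows_garage_number"}, rules)
```

Telling the system a new fact, or giving it a new rule as "advice," changes what it can conclude without anyone rewriting the program, which was precisely McCarthy's point.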

Common sense (GOFAI) programs are very different from AML programs, for several reasons.  In the first place, GOFAI does not rely on Big Data; it usually relies on language.  The focus is linguistic.  This reliance on language may actually be a weak point: maybe intelligent behavior doesn’t have to be something that can be put into human words.  The idea that knowledge can only be understood and communicated in words may be a human bias.  Perhaps much more powerful kinds of intelligence can be attained if we simply skip over what humans deem intelligence to be.  Maybe, and maybe not.

Still, the idea that human, word-based intelligence actually matters holds a place of honor in GOFAI.

Levesque says the big mystery that really interests him, in AI and in brain research alike, is how the mind actually works.  The mind, he says, doesn’t really exist as a separate thing: the brain exists, and mind is just what the brain does.  But how?  What does that mean?  Right now no one knows, and this is the question the book is trying to answer.

Different disciplines give different answers to the fundamental mind/brain question.  Some say language is the main thing; some say psychology; some say neuroscience; some say evolution.  Each approach has a piece of the answer, but to get a full answer we would have to somehow combine the answers from all these disciplines, and that is very difficult.

The approach this book takes is that of intelligent behavior.  Here is how Levesque puts it:

“What we need to care about is intelligent behavior, an agent making intelligent choices about what to do. These choices are made intelligent through the use of background information that is not present in the environment the agent is acting in. This background information is what we call knowledge. The application of knowledge to the behavior at hand is what we call thinking. The problem that needs to be sorted out is how, in concrete terms, this all works, how background knowledge can make a difference in an agent deciding what to do. The solution we consider is a computational one. In the same way that digital computers perform calculations on symbolic representations of numbers, we suggest that human brains perform calculations on symbolic representations of knowledge, and then use the result of those calculations to decide how to act.”
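The quoted passage has three moving parts: a situation the agent perceives, background knowledge that is not in the environment, and a calculation over symbolic representations that yields an action.  Here is a minimal sketch of that loop, using the stuck coffee-shop door from earlier in this review.  The scenario and rules are illustrative inventions, not Levesque’s own formalism.

```python
# Background knowledge: symbolic facts about how the world works,
# stored in the agent, not visible in the environment itself.
KNOWLEDGE = {
    ("door", "wont_open"): "try_pushing_instead_of_pulling",
    ("door", "locked"): "look_for_another_entrance",
    ("car", "wont_start"): "check_the_battery",
}

def choose_action(percept):
    """Apply stored knowledge to the situation at hand -- 'thinking' in
    Levesque's sense -- falling back to a default when no rule fits."""
    return KNOWLEDGE.get(percept, "stop_and_reason_from_scratch")

# The percept alone (a door that won't open) doesn't determine the
# action; the background knowledge does.
action = choose_action(("door", "wont_open"))
```

A table lookup is of course far weaker than the open-ended deduction Levesque has in mind, but it makes the shape of his claim concrete: intelligence lives in the calculation that connects symbolic knowledge to behavior.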

Levesque says there are two possible ways to study thinking.  We could study brains directly (dissect them, look at how neurons work, trace all the layers of neural connections), or we could study the process of thinking itself.

Levesque uses the analogy of studying how birds fly.  We could study the birds themselves very carefully, dissecting them and examining their feather and bone structure, and then try to build a mechanical bird, solving problems as they arise.  Or we could study aerodynamics, the science of flight itself, examining in a wind tunnel how airflow above and below an airfoil works and how it affects flight.  Studying physical birds would be a very concrete undertaking; studying aerodynamics is a more abstract one, a study of the basic principles of flight.

Levesque chooses the second option.  This book will be a study of the principles of thought: the thinking process itself, not the physical brain.  He wants to determine the general principles governing brains and anything else that needs to think.  This is what Daniel Dennett calls “taking a design stance,” and it is what Dennett does in all of his books.  Levesque plans to do the same.  He will not get caught up in the physical details of how the brain does the job, but will instead ask how thinking might get done.  He will be concerned with observable intelligent behavior and how it is produced, not with how thinking might feel to the agent doing it.  That is, he will not be concerned with how consciousness feels.

I’m still in the process of writing this article.  More will follow.

Dead Horse Point in Canyonlands National Park, Utah.  Picture by Hanselmann Photography. 
