Artificial Intelligence (AI) and the Comparison Process (COMP)

By Herb Wiggins, discoverer/creator of the Comparison Process/ COMP Theory/Model

Alan Turing’s test for Artificial Intelligence (AI) was simply that, in talking to the AI machine, a person could not determine whether he was talking to a machine or a real person. This has not yet been achieved.

Computers are now used at a very high level in automobile design. Let’s consider what they are used for and what they are capable of by describing, in a general way, what they do. The car must be designed as a massive integration of weight, chassis strength, power plant (engine), drive train, wheels and tires. It has to have electronics in it, too, for many purposes, plus the usual options. If the car weighs too much for the engine, it won’t go fast enough. If the wheels/tires are too weak and not supportive, they could collapse under the weight or wear out too fast. If too large, they add too much weight to the car, overburdening the gas mileage. We can see how the entire vehicle, from an engineering standpoint, is very complicated to build. Look under the hood of today’s car if you think it’s simple/easy to do. It’s a mass of complexity carefully fitted under the hood. Try creating that engine compartment manually, with just drawing paper, in your spare time, to see the magnitude of the design problem.

The computer can search through a huge number of possible combinations, by trial and error, much, much faster than human minds can. Well-programmed design computers do the work thousands of times faster than their human builders could alone. This allows a complex, highly integrated system, the car, to be built by following a carefully approximated, trial-and-error path to the final model, which, when tested, tweaked and slightly refined, can be made even better. This is why computer design of cars is used today: it can do the job thousands of times faster than humans can. It can search through the huge number of possible combinations of weight, tires/wheels, chassis strength, engine size/power and so forth, and create a vehicle much better designed and more efficiently integrated than humans could in the same time. The design computer has become a human prosthesis for matching/comparing the endless possibilities to create a viable, manufacturable, highly integrated machine. This is early AI, too, but 3D and very, very useful, and it is probably the most complex form of design going on on the planet on a large-scale basis.
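
To make that concrete, here is a minimal sketch of such a trial-and-error search, with invented parameter ranges and a toy scoring rule; real automotive design software is vastly more elaborate, but the compare-every-combination step is the same in spirit.

```python
from itertools import product

engine_kw   = [80, 110, 150, 200]      # candidate engine power options (kW) -- invented
chassis_kg  = [900, 1100, 1300, 1500]  # candidate chassis weights (kg) -- invented
tire_rating = [400, 500, 600]          # load rating per tire (kg) -- invented

def score(kw, kg, rating):
    """Toy figure of merit: reject infeasible combinations, prefer light, powerful cars."""
    if rating * 4 < kg:            # four tires together cannot carry the vehicle
        return float("-inf")
    power_to_weight = kw / kg      # higher is better
    fuel_penalty = kg / 1000.0     # heavier cars burn more fuel
    return power_to_weight - 0.1 * fuel_penalty

# Compare every combination and keep the best one -- the kind of exhaustive
# matching no human designer can do by hand at this speed.
best = max(product(engine_kw, chassis_kg, tire_rating),
           key=lambda combo: score(*combo))
print("best design (kW, kg, tire rating):", best)
```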

Partial AI, however, HAS been achieved, and using the Comparison Process it can be better understood in many of its subtleties. First of all, the essential aspect of AI is recognition at all levels, be it voice and speech, writing, or recognizing what images are. That’s what humans do, using the COMP. Understanding AI using the COMP can give considerably greater insights than other methods, largely because very few people can define/describe well and in detail what goes on in the human brain during recognition/language. The COMP model states that recognition (cognition) works via the comparison process. That is, a person hears a word, compares that word to the Long Term Memory (verbal), recognizes each word separately within the context of a sentence, then responds appropriately to that word string. The mind understands what the words mean by comparing them (in the LTM) to the standard usage of those words in much of their complexity.

Ask an AI-programmed computer about this series of sentences:
“I Can do it.” “The beans are in a Can.” “I had to hit the Can.” “He saw the Can-can.”
No computer can figure out what those “cans” mean, because the computer simulations do not compare the words in a sentence to each other; they can only respond, so far, in pre-programmed ways. They cannot initiate real, creative understanding of word strings, nor create them. The COMP does both.
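
To make the point concrete, here is a toy sketch of what “comparing the words in a sentence to each other” could look like. The senses of “can” and the remembered context words are invented for illustration; a real mind’s LTM is enormously richer, but the overlap-counting comparison is the idea in miniature.

```python
# Invented sense inventory: each sense of "can" remembers a few usage-context words.
SENSES = {
    "be able to":      {"i", "you", "do", "it", "could", "will"},
    "metal container": {"beans", "soup", "tin", "open", "in", "the"},
    "toilet (slang)":  {"hit", "had", "bathroom", "went"},
    "can-can dance":   {"can-can", "dance", "saw"},
}

def disambiguate(sentence: str) -> str:
    words = set(sentence.lower().replace(".", "").split())
    # Compare the sentence's other words against each sense's remembered
    # usage context and pick the sense with the largest overlap.
    return max(SENSES, key=lambda sense: len(words & SENSES[sense]))

for s in ["I Can do it.", "The beans are in a Can.",
          "I had to hit the Can.", "He saw the Can-can."]:
    print(s, "->", disambiguate(s))
```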

Because recognition requires that a word be known and stored in a working or long term memory (LTM) place in the brain, the brain can hear a word, compare it to the stored word, and recognize it by a close match. That’s what word re-cognition means. It means that when the heard word is processed, it can be referred to the same/similar word in memory, recognized along with all the details around the word describing it and giving it meaning, and then responded to. Recognition is one of the basic cognitive tasks/processes of the cortex, the basic function of the Comparison Process. It’s how the brain knows/understands words and meanings. If the brain doesn’t know the word, the word can be spoken about in terms of other, related words the mind knows, and the mind then understands the meaning of the new word. A word’s definition is explained in terms of other, related/associated words. Each new word is built upon the meanings of words learned earlier. No word is an island. No word stands alone. Definitions/explanations of words are given in terms of other words.
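
A minimal sketch of that “re-cognition by close match” idea, assuming a tiny invented lexicon and using a stock string-similarity routine as a stand-in for whatever comparison the cortex actually performs:

```python
from difflib import get_close_matches

# A tiny stand-in for the verbal long term memory described above.
long_term_memory = ["banana", "bandana", "mountain", "fountain", "can", "cane"]

def recognize(heard: str) -> str:
    # Compare the heard form against every stored word and take the closest match.
    matches = get_close_matches(heard.lower(), long_term_memory, n=1, cutoff=0.6)
    return matches[0] if matches else "<unknown word>"

print(recognize("Banan"))     # close enough to "banana" to be recognized
print(recognize("mowntain"))  # recognized as "mountain"
print(recognize("zzzz"))      # no stored word is similar enough
```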

It’s the relationships among the many words, based upon the Comparison Process, which create language and understanding and knowing and comprehension. The COMP clearly shows the relationships/associations among words. That’s how we know words are related and can be used together. Once we understand that the COMP does this, though we cannot YET understand the complex neurophysiology of the six layers of the usual cortex which do this work, and how this neurophysiology creates the COMP, we can still use the COMP as a model of the mind/brain interface. The COMP is a critical-to-understand complex-system shortcut. And language is a complex system. The COMP is what verbal thinking does and is. The COMP has other tasks too, but verbal processing is part of what it does every day, all day long.

Thinking and the COMP are often the same. This gives essential and useful insights into what we mean by “thinking”. If the COMP is being used, that is one kind of important, fairly common thinking process. We can see this when we go to find a word in a dictionary, or any other list or index. We are looking for the word, comparing and matching it against the other words until we find the entry we are looking for. This is one kind of Comparison Process among a multiplicity of types.
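
The dictionary example can be written out directly: each probe of a sorted word list is a comparison that narrows the search until the entry is found. The entries below are invented examples.

```python
from bisect import bisect_left

# Invented sample entries; a real dictionary simply has many more of them.
dictionary = sorted([
    ("apple",  "a round fruit"),
    ("banana", "a long yellow fruit"),
    ("canyon", "a deep gorge"),
    ("zebra",  "a striped animal"),
])
words = [w for w, _ in dictionary]

def look_up(word: str) -> str:
    i = bisect_left(words, word)             # each probe is a word-to-word comparison
    if i < len(words) and words[i] == word:  # final compare: exact match?
        return dictionary[i][1]
    return "not found"

print(look_up("canyon"))       # -> a deep gorge
print(look_up("caterpillar"))  # -> not found
```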

So far we have early AI on Google. We put a word, and perhaps some of its modifiers, into the box, hit the enter button, and the massive Google search engine compares/matches that word to those in its huge database (machine LTM) and spits out all of the matches. There is very little thinking going on. In a very primitive way, Google matches the word (or closely related spellings), finds all the references to it and puts them up on the screen. That’s simple recognition, or matching, both COMP.
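
A rough sketch of that matching step, assuming a toy “inverted index” that maps each word to the documents containing it; real search engines add ranking, spelling correction and much more on top of this bare comparison.

```python
# Three invented "documents" standing in for the web.
documents = {
    1: "half dome rises above the valley of the yosemite",
    2: "the engine compartment of a modern car is a mass of complexity",
    3: "linda eder sings on the concert stage",
}

# Build the index: every word is mapped to the set of documents containing it.
index = {}
for doc_id, text in documents.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

def search(query):
    # Each query word is compared/matched against the index; only documents
    # containing all of them come back.
    hits = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*hits) if hits else set()

print(search("yosemite valley"))  # -> {1}
print(search("linda eder"))       # -> {3}
```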

But to ask the machine to UNDERSTAND what the word means is something entirely different. So far, though, if the machine can TALK/WRITE about the word using coherent, meaningful, sensible sentences, and those can be pre-programmed into it, then it will give a superficial sense of meaning. However, if it were asked a question that was not pre-programmed, such as to describe what it had to eat today, it’d stumble. Or if it were asked to describe the meaning of a phrase in more than two or three ways, it’d also stumble. Either that, or it would talk the way a dictionary reads, which would be fairly obvious to most human listeners. That much has been achieved.

But to be able to talk and learn new words requires the ability to frame sentences, that is, to create a string of meaningful, interrelated/associated words comparing to each other, which makes those relationships real and effective. And that requires creativity, a lot of processing power, and a considerable memory, too. If such a question were asked, all machines today would fail. They cannot create language which is flexible enough to sound real. Neither can they describe an idea new to them. Because in order to do that, the machine would have to know meaning and be able to compare the words in a sentence to each other, to create meaning. And no computer can do that, yet.

Further, no machine can describe a simple image, either, let alone a mountain scene, and identify where it’s from, such as the Valley of the Yosemite. NO machine can see Half Dome and pick it out of a picture from one of its many seen angles and views. It can be pre-programmed to do so from a stock set of images, but it’d fail the task on something it didn’t know, when observed by a normal person. No computer system yet can turn a complex image such as Half Dome around in its circuitry and recognize it as Half Dome from different directions. But humans do this, and analogous processes, all the time by using the Comparison Process.
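
For what a comparison-based approach to this could look like, here is a purely hypothetical sketch: several stored “views” of a landmark reduced to small feature vectors, with a new view identified by finding the closest stored one. The feature numbers are invented; deriving them from actual pixels is the hard part the paragraph above says machines still fail at.

```python
import math

# Hypothetical stored "views", each boiled down to a tiny invented feature vector.
stored_views = {
    ("Half Dome", "from the valley floor"): [0.9, 0.2, 0.7],
    ("Half Dome", "from Glacier Point"):    [0.8, 0.3, 0.6],
    ("El Capitan", "from the meadow"):      [0.2, 0.9, 0.4],
}

def identify(new_view):
    # The comparison step: find the stored view closest to the new one.
    (name, angle), _ = min(stored_views.items(),
                           key=lambda kv: math.dist(kv[1], new_view))
    return f"{name} (closest stored view: {angle})"

print(identify([0.85, 0.25, 0.65]))  # nearest to one of the Half Dome views
```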

In order to create real AI, the programmer must first learn how to create language from the comparison process, which is how the human brain creates, learns, discovers and teaches/explains language. Until that time, by trial and error, there will be simulated and superficial AI, but not true AI, the kind which cannot easily be distinguished from a human source.

The AI computer will have to be able to recognize words and to talk about words. It will have to comprehend what those words mean in real, contextual terms, not just via the automatonic, pre-programmed responses which are what passes for voice recognition today. We can see those work. We simply speak “Linda Eder” at the YouTube entry box and it transliterates the spoken words (hopefully) into “Linda Eder”, then matches those with what it has, and up pop all the Eder tunes/images associated with her. I’ve done this. It works, and it was not fooled by the “eider duck” form, either; it grasped that Linda Eder was pronounced the same as the duck’s name, and got the spelling right, too. So some progress has been made, but not enough. Basically, it transliterates the spoken words into letters and then does the primitive matching work usually seen in the Google box, just as it does when we type “Linda Eder” into the box on the Google starting page. Sometimes you have to type it in, anyway!!

The ability to string words into a sentence which makes sense is what humans do. No machine can create that string of words, yet, because no machine is yet creative. True human creativity comes from the comparison process in our cortex, and that’s how we make new sentences. That’s how we discover and learn. The programmer must create an electronic analogue to the brain’s comparison process in order for the machine to speak coherently. Otherwise it’s just automatonic words coming out. That’s hard enough.

True speech and meaningful language require that AI understand most of the essential relationships among words and the categories/classes of each word: whether the word refers to something living or not, where it may be on a map, how the object is used, whether it’s soft, hard, wet, dry, and much else. Until the AI computer can get/discover/learn these essential, meaningful relationships/associations, which are created by the Comparison Process at work in our cortices, AI will not truly be able to satisfy Turing’s test.
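
A tiny sketch of the kind of relational knowledge this asks for, with invented entries: each word carries attributes (living or not, texture, where it might be found) that can be queried and compared against the attributes of other words.

```python
# Invented entries; a real lexicon would hold many thousands of such attribute sets.
WORD_FACTS = {
    "stone": {"living": False, "texture": "hard", "where": "on the ground"},
    "dog":   {"living": True,  "texture": "soft", "where": "in the home"},
    "river": {"living": False, "texture": "wet",  "where": "on a map"},
}

def describe(word):
    facts = WORD_FACTS.get(word)
    if facts is None:
        return f"I don't know the word '{word}'."
    living = "a living thing" if facts["living"] else "not a living thing"
    return f"A {word} is {living}, {facts['texture']} to the touch, found {facts['where']}."

def share_texture(a, b):
    # Comparing the attributes of two words -- relations built by comparison.
    return WORD_FACTS[a]["texture"] == WORD_FACTS[b]["texture"]

print(describe("dog"))
print(share_texture("stone", "river"))  # False: hard vs. wet
```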

In the “Le Chanson Sans Fin” articles are posted many examples of the comparison process which creates creativity, from Darwin/Wallace to Einstein through Edison. Here’s another: “Peleset”, from Ramesses 3’s mortuary pylon at Deir el Bahari, on the West Bank at El Luxor, shows the Peles-et reference, translating as “Land of the Pelesi”, just as the ancient Kheman name for their capital, Uas-et, translates as “the Place of Power”. Ramesses 3 stated he settled those people there, on Egyptian Mediterranean territories tributary to him.

Trude and Moshe (Moses) Dothan (People of the Sea), noted archeologists of the Philistine area who excavated Ashkelon, among other Philistine cities, found the Achean/Mikunan stirrup jars, art motifs and Grey Minyan ware pottery at the lowest, foundational levels there. Philistine is an anglicized version of the same word, Peleset. The area is also known from Roman times as Palestina, and in our time as Palestine, anglicized from Palestina. The Comparison Process shows that both the geographical areas and the names, from the 11th century BC to the present, reflect the same word. This verbal/geographical/archeological comparison shows the creativity process at work, seeing that the words, places and peoples, when carefully compared, are of the same origins. It’s the comparison process at work.

But images? Recognition of images is what our brains do pretty well. And we can talk about them, and recognize what we know and don’t know, too. AI and machines cannot yet do that. A picture is worth 1000 words; it requires enormous processing power to analyze images, even by our own visual cortex.

But what about a simple drawing of a banana, in color? Some AI programs might be able to call it a banana, but just try asking them what they’d do with one, or how it tastes. The ability to discover, learn, and then talk about a subject is uniquely human. Until, by trial and error and much creative work, AI computers are able to do these tasks, using a system which imitates/models the comparison process of creativity, creating new sentences using the COMP to build the relationships among the words which make sense, and learning, we won’t see true, real AI. Because that’s what language can be shown to be: repeated use of the Comparison Process. That’s the key to true AI, which until recently has not been understood, or recognized, or discovered.

Now that the goal of AI can be defined using the COMP model, AI researchers know where to go. And having some sense of direction toward the goal is often half the task done.

Most animals can recognize territories, prey/food, predators, and handle other simple recognition tasks. The primates have cortical cell columns very similar to humans’. Those of other mammals, porpoises, dogs, birds and reptiles probably use different sorts of Comparison Process neural structures to create recognition by comparison to learned/discovered LTM analogues of the same. Biochemical recognition by the lower forms of life, such as protozoans, bacteria, fungi, etc., probably qualifies as an analogous recognition structure. It’s been done before. It can be done by humans using AI. But as so often seen, when the problem can be more exactly defined, it’s easier to do. Otherwise, there’s a lot more trial and error to work through, and that can take a long time.

Now consider the value of a computer which processes at about 10 GHz, compared to the human brain’s parallel processing at about 10 Hz: about a 1-billion-fold speed difference. If a computer can be made to model human COMP creativity, it could potentially find more useful, creative findings using the COMP by a factor of at least 1000 times. This could potentially result in a speed-up of creativity, that is, inventiveness, by the same factor. What would happen to the progress and output of the arts and sciences when that happened? What could happen to programming when a computer was used to create possible approaches to solving real, significant computer programming problems? Imagine the possibilities. The computer as a programming problem-solving prosthesis for humans. Speeding up problem solving by thousands of times. Or far more. Awesome potential for solving programming problems. And for solving our own very human, very serious problems of chemical and radiation pollution.
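
Taking the essay’s round numbers at face value, the arithmetic behind the “billion fold” figure is simply:

```python
serial_clock_hz = 10e9   # the essay's ~10 GHz processor
neural_rate_hz  = 10     # the essay's ~10 Hz parallel brain cycles
print(serial_clock_hz / neural_rate_hz)  # 1e9 -- the "about 1 billion fold" figure
```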
