Complex Systems, Boundary Events, and Hierarchies

By Herb Wiggins, M.D.; Clinical Neurosciences; Discoverer/Creator of the Comparison Process/COMP Theory/Model; 14 Mar. 2014

“Mathematics must make great advances in order to understand complexity.” —Stanislaw Ulam

I have frequently mentioned and discussed the Tree of Life, the immense taxonomy of the millions of living species and viruses; the plate tectonics model in geology; the Hertzsprung-Russell diagram; and the biochemical receptor site concepts; and how mathematics cannot describe very much of verbal statements, nor poems, nor much else which is complex systems. This article will create a better understanding of why this occurs and the place the Comparison Process takes in these events and ideas. Have also discussed these events in “Depths within Depths” as well as the “Continua and the Dualities” articles. It’s now time to show how more of it fits together.

https://jochesh00.wordpress.com/2014/04/14/depths-within-depths-the-nested-great-mysteries/

https://jochesh00.wordpress.com/2014/04/21/the-continua-yinyang-dualities-creativity-and-prediction/
This takes a sort of biological field trip format again, beginning with something odd in north Phoenix which puzzled me at the time. From this simple cameo can come a basic new model of how the universe is organized, as is our brain cortex, and how that works to understand complexity of all sorts.

On north 51st Ave. in Phoenix there is an odd event. The road jogs to the west, without much warning. Just why that should be was interesting enough, but the deeper meaning of it becomes clear once we begin to think about boundaries of the macroscopic world, geometries and related subjects. Our generation is the first to have enough information to try to figure out how things work on a deeper level than has come before. The sciences have become so specialized that it’s hard for many in those fields to begin to understand each other. Or as has been so comically said, “We are now learning more and more about less and less until we will soon know almost everything about almost nothing.”

The hugely expensive search for the Higgs boson, supposedly found for about $15 billion in new equipment and running costs, cannot be confirmed by a different team because it’s so outrageously expensive using current technologies to do so. These are the Exponential Barriers discussed before. This is a crisis in physics, too. We cannot afford to do science this way, either. These are boundary events, just like those found and so well discussed in Kuhn’s “The Structure of Scientific Revolutions.” They show the structure/function limits and capabilities of current methods, systems and understanding.

See sections 6, 10, 11, the Exponential barrier.
https://jochesh00.wordpress.com/2014/04/21/the-continua-yinyang-dualities-creativity-and-prediction/

This is how those unexpected findings create a world change in our views of most of what is around us; in short, a massive epistemological change which is ongoing currently. It’s what Kuhn so brilliantly wrote about, and about which we are still mining and learning more to this day, some 50 years later.

Thinking about 2 dimensional flat planes versus the surfaces of spheres gets us to an understanding. The surveying methods used to make maps and lay out our grid patterns of roads are based upon plane geometry, that is, flat, 2 dimensional lines and the surfaces connecting them. So if we try to impose a flat grid on a round surface large enough that the roundness matters, what do we get? We get a mismatch, a problem.
Try taking a flat piece of paper and wrapping it around an orange. We can’t do it. It makes folds in the paper. The two surfaces are NOT commensurable. This is the problem we get when trying to impose a flat surface grid pattern on a round surface. And that’s why 51st Ave. had to jog, to make up for this imposition, and it does so all along Bell Road for miles. The longitudinal lines approach each other slowly as they run north, and if the grid is not carefully adjusted every few miles for that fact, the road will need to jog over to make up for it. This is a mismatch. The systems do NOT compare.
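The size of that jog can be estimated with a few lines of spherical geometry. Here is a minimal sketch in Python; the Earth radius, the latitude, and the road spacing are round illustrative numbers, not surveyed values. It computes how much two north-south grid roads, one mile apart, converge after running several miles north at roughly Phoenix’s latitude. On a flat plane the answer would be exactly zero; on a sphere it comes out to several feet, which is just what the correction jogs absorb.

```python
import math

R_MILES = 3958.8  # mean Earth radius in miles (assumed round figure)

def convergence(lat_deg: float, spacing_miles: float, north_miles: float) -> float:
    """How much two north-south grid lines, `spacing_miles` apart at
    latitude `lat_deg`, narrow after running `north_miles` further north.
    On a sphere, east-west spacing scales with cos(latitude)."""
    lat1 = math.radians(lat_deg)
    lat2 = lat1 + north_miles / R_MILES       # latitude after heading north
    new_spacing = spacing_miles * math.cos(lat2) / math.cos(lat1)
    return spacing_miles - new_spacing        # shrinkage, in miles

# Phoenix sits near 33.5 N; section-line roads are laid out 1 mile apart.
shrink = convergence(33.5, 1.0, 6.0)
print(f"{shrink * 5280:.1f} feet of convergence over 6 miles north")
```

The shrinkage grows with latitude (it follows the tangent of the latitude), which is why surveyed grids need correction lines more often the farther from the equator they run.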

If we think about that carefully, then we realize that it could have been proven easily centuries ago that the world was not flat. If the world were flat, there would have been no mismatch. Because it’s round, as every surveyor of large areas knows, the baseline MUST be adjusted. For centuries that’s been done without realizing what was going on. They just ignored it, and adjusted. Because map making of considerable accuracy has been going on since 2800 BC, esp. in Egypt, they no doubt saw this in their very long northwards going baselines and also corrected for it. It meant the world was round, just like the sun and moon, but it took Western Europe until the time of Cristoforo Colon to figure it out again. Careful, accurate surveying using long lines of sight in clear air could have established this fact in flatter regions 1000’s of years ago. It’s as much a cultural and human mental state observation as it is a scientific fact.

In the same way, just these mismatches arose between the Ptolemaic model of the solar system, which put the planets in perfectly circular orbits, and the observations, and had to be corrected by adding epicycles, as Kuhn so well points out. Because ALL planetary orbits HAD to be perfect circles, astronomers could accept little circles to correct the measurements. As Kuhn pointed out, the epicycles approximated the ellipse, too.

Moving onward to Kepler, he saw much the same mismatch. Using Tycho Brahe’s extremely accurate measurements of the orbit of Mars, he found a significant discrepancy from circularity of 8 arc minutes. That discrepancy justified and created the Keplerian elliptical orbits model and a much better and easier way of calculating planetary orbits. His creativity was a Least Energy solution.

With better measurements, it was discovered that the orbits were ALSO in 3 dimensions, and the same mismatches with planar ellipses were found with the modern figures. In fact, Mercury circles the sun in a complicated rosetting orbit. So “elements of orbit” are now used, which are better matches, but in the long run the measured orbits of the planets are such that they cannot be predicted more than about 50K years into the future without real loss of accuracy. Prediction fails due to variations of the orbit.

In fact, the solar system is a complex system, not a series of easily computed orbits. If Jupiter were suddenly taken out, the inner planets would steadily spiral into the sun. The problem is that of the N-body problem, which neither mathematics nor computers can solve exactly, because it’s simply too complex to do so. Gleick points this out in his book, “Chaos”.

This greater overview is the problem of the sciences today. They cannot understand complex systems using linear mathematics, algebras and logics. There is a huge mismatch between predictions and outcomes. For an N-body problem with N of 3 or more, there is no closed-form solution possible. For interactions of the 8 major planets in our solar system, plus the asteroids, the solution becomes of astronomical complexity (pun intended), and unsolvable.
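The practical face of that unsolvability is sensitive dependence on initial conditions. The toy sketch below (a 2-D three-body leapfrog integrator in Python; the masses, positions, units and softening term are arbitrary illustrative choices, modeling no real system) runs the same configuration twice, nudging one body by a billionth of a unit the second time, and then measures how far the two runs have drifted apart.

```python
import math

SOFT = 1e-3  # softening term: keeps accelerations finite at close passes

def accel(pos, masses, G=1.0):
    """Pairwise (softened) gravitational accelerations for 2-D point masses."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy + SOFT) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def leapfrog(pos, vel, masses, dt, steps):
    """Kick-drift-kick leapfrog integration; returns final positions."""
    pos = [p[:] for p in pos]
    vel = [v[:] for v in vel]
    a = accel(pos, masses)
    for _ in range(steps):
        for i in range(len(pos)):
            vel[i][0] += 0.5 * dt * a[i][0]
            vel[i][1] += 0.5 * dt * a[i][1]
            pos[i][0] += dt * vel[i][0]
            pos[i][1] += dt * vel[i][1]
        a = accel(pos, masses)
        for i in range(len(pos)):
            vel[i][0] += 0.5 * dt * a[i][0]
            vel[i][1] += 0.5 * dt * a[i][1]
    return pos

masses = [1.0, 1.0, 1.0]
p0 = [[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]]
v0 = [[0.0, -0.5], [0.0, 0.5], [0.5, 0.0]]
run_a = leapfrog(p0, v0, masses, 0.01, 5000)
p0[2][0] += 1e-9                    # a nudge far below any real measurement
run_b = leapfrog(p0, v0, masses, 0.01, 5000)
gap = math.hypot(run_a[2][0] - run_b[2][0], run_a[2][1] - run_b[2][1])
print(f"third body drift between runs: {gap:.3e}")
```

In a chaotic configuration that drift typically grows by many orders of magnitude over the nudge, which is exactly why long-range orbital prediction loses accuracy no matter how precise the starting measurements are.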

Now consider the problems of the interactions of the some 25K genes in the human body. It’s impossible to solve, and adding the many thousands of other compounds, such as foods and gene products, makes it more intractable still.

Consider the complexities of the single eukaryotic cell. It contains many thousands of compounds, all interacting, and many organelles, such as the nucleus, ribosomes, cell membranes, mitochondria (plus chloroplasts in plants), etc., so that the N zooms up to the number of molecules interacting, which is also a very huge number. Essentially, modern mathematics cannot practically, or intrinsically, DEAL with those complexities. Effectively the way is closed. The doors to understanding are shut. What are we to do?

The traveling salesman problem was a deep problem for mathematics for years. As the number of places to visit increases, the complexity of the problem grows explosively. No computer can solve it exactly for large numbers of stops, even by the brute force approach. It was very expensive for any kind of large delivery system to just guess at which routes to use for their multiple deliveries. They could not come within about 20% of what they figured was the most efficient route, nor could they compute it.

But how was this solved? “Observe the wisdom of the ant.”

Ants and bees had solved that problem 100 million years ago, or before. And they did it with a very simple algorithm and process. The bees went to the nearest flower, took nectar, then went to the next nearest flower, took nectar, and after doing that several times their memories, in the neural networks we call their minute brains, collections of about a million neurons, achieved a 75% solution of the problem, consistently. Ants did it by laying down a pheromone trace which they could follow, and sequentially the most efficient (least energy principle) route was created by following the pheromone trace that was the most traveled, because it was also the fastest. Then the line got straighter and straighter to the nest, as well.
Using computer simulation of these two strategies, even better solutions to the problem were eventually found. The UPS and FedEx deliveries are now about 80% of the theoretical maximum, using these methods first found by the ants, bees and termites, perhaps as much as 100-200 million years ago!!
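The bee’s rule is what computer science calls the nearest-neighbor heuristic: always fly to the closest unvisited flower. A minimal sketch in Python (the flower coordinates are random illustrative points, not data from any study):

```python
import math
import random

def nearest_neighbor_tour(points):
    """The bee's rule: repeatedly visit the closest unvisited point.
    A greedy heuristic; typically lands within a modest factor of optimal."""
    unvisited = set(range(1, len(points)))
    tour = [0]                       # start at an arbitrary first flower
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    """Total length of the closed tour (returns to the start)."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

random.seed(1)
flowers = [(random.random(), random.random()) for _ in range(50)]
tour = nearest_neighbor_tour(flowers)
print(f"greedy tour length: {tour_length(flowers, tour):.2f}")
```

No global planning is needed; each step is just a comparison of distances, which is presumably all a million-neuron brain has to do.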

And just how was this done? Complex systems solved it, those found in the bee and ant brains, plus trial and error. Complex systems of millions of interacting neurons out-computed and out-performed any computer we’d ever had. This was humbling to say the least, but it shows the immense power of complex systems to do what humans cannot do, given time and understanding. This insect solution has in the meantime saved the national and international delivery systems many 1000’s of extra miles/day and 1000’s of hours of travel time per city, too. It’s a least energy principle in action. Again, the COMP: comparing the outcomes of the traveled routes to find the best one, the least energy solution.
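The ants’ pheromone strategy became, in simulation, what is now called ant colony optimization: many candidate tours are built and compared, shorter tours deposit more pheromone, and evaporation lets the colony forget the slower routes. A minimal sketch, with illustrative (untuned) parameter values:

```python
import math
import random

def aco_tour(points, ants=20, iters=30, evap=0.5, alpha=1.0, beta=2.0):
    """Minimal ant colony optimization for a closed tour of `points`.
    Each ant builds a tour biased by pheromone (tau) and nearness; shorter
    tours reinforce their edges more, and evaporation erodes old trails."""
    n = len(points)
    dist = [[math.dist(points[i], points[j]) or 1e-12 for j in range(n)]
            for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]          # pheromone on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(ants):
            tour = [random.randrange(n)]
            todo = set(range(n)) - {tour[0]}
            while todo:
                i = tour[-1]
                weights = [(j, (tau[i][j] ** alpha) *
                               (1.0 / dist[i][j]) ** beta) for j in todo]
                r = random.random() * sum(w for _, w in weights)
                for j, w in weights:             # roulette-wheel choice
                    r -= w
                    if r <= 0:
                        break
                tour.append(j)
                todo.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        for i in range(n):                       # evaporation: forgetting
            for j in range(n):
                tau[i][j] *= (1.0 - evap)
        for length, tour in tours:               # shorter tours lay more trail
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len
```

Note that nothing in the loop “solves” anything; the least energy route simply emerges from comparing tour outcomes, which is the point of the paragraph above.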

And how did the observing scientists find this out? The same way Von Frisch figured out how the bees told the other workers where to find the flowers yielding nectar. HE COMPARED what they were waggling, pointing and buzzing, etc., to where the bees were going, and decoded the system they were using. The comparison process, in fact. The same method was used to understand the ant pheromone system. The same method Champollion and Young used to decode the Rosetta stone. The same method used to translate and decode ANYTHING written, or any event. The comparison process.

That is the key point. And it can be easily proven to be the case, again and again. Let’s take plate tectonics. This is a complex system, which cannot be solved by mathematics. Nor can it be created by same. The description of it is almost all verbal and visual. Cartooning has been used to show how it works and what the land world looked like 100’s of millions of years ago. We can look backwards in time to recreate this because of the remnant geological observations which allow it to be pieced back together in many cases, although it’s largely approximate. All of the some 15 plates (16 if the East African plate is considered one, as it’s splitting off from the African plate) are interacting with the others. This creates an impossible to solve 3-D, roughly spherical, N-body problem.

Essentially, the parts of the tectonic theory are the upwelling, sea floor spreading zones, the same on land from the Dead Sea south down the center of the Red Sea, inland at the Afar triangle and southwards into the Rift Valley zone of East Africa, too. East Africa is splitting apart. Arabia split off from Africa about 20 M years ago. There are a great number of plates. There are upwellings, subductions, and faultings in complex patterns, and so forth. The subduction zones often create volcanoes where the plate edge goes down into the magma layers, is melted, and the lighter rock rises up again to create volcanoes. There is no center of the plate movements. There is no privileged space. Everything is pushing against everything else. It’s impossible to predict with any certainty how things will look in millions of years, because the computations can’t deal with all of these complexities. The plate tectonics model is a complex system.

The same is true of the weather. Of the stock markets. Of the economy, of social systems. The same is true of trying to understand the human brain and the complexities of each of the millions of known species, let alone all of the extinct ones. As Stanislaw Ulam stated, “Calling the universe non-linear is like calling biology the study of all non-elephants.” The problem is everywhere. 99.9+% of the universe’s events are alinear, complex systems.
So how could we have even developed the plate tectonics model?

By studying, looking, thinking creatively, describing/measuring to find the patterns of sea floor spreading, of subduction, melting, etc., leading to volcanoes’ creations and eruptions, of the fault line movements, and so forth. How can we predict when a volcano will erupt? Or a fault line will rupture causing a great quake? We can’t. It’s a complex system.

And this is the key: clearly we have developed a coherent description of earth surface movements. It was done largely by using the comparison process, which can handle alinearity. It is the same comparison process in bird brains and other brains which can handle the complicated movement problems of flight and the highly skilled activities and movements of human beings, viz. alinear systems, totally without mathematics.

Even more complex systems are found in the human body. We have a high incentive to understand these complex systems. Those are the very serious issues and problems of living and survival, of reducing suffering and increasing function. And we have it, the huge corpus of medical conditions and information built up over the last centuries of observation and study. It’s called anatomy, physiology, the medical pathologies and the study of those, the differential diagnosis and the treatments. It’s almost all verbal. It describes almost entirely verbally what is going on, with some assistance from math and measurements. It uses observations, history, physical examinations, and then careful reasoning to figure out what’s going on, and then bases the treatment on the outcomes of scientific studies (the method of comparison) to find the best possible current treatment methods, as well as to continue to explore and find better ones. And it works. It’s verbal, visually descriptive, and the examination, history, and differential diagnoses CANNOT be mathematized at all. Perhaps in the future, but perhaps not at all for most applications. Mathematics is an assistant, but NOT even a major player in the ways in which we diagnose and treat the most serious medical conditions. This fact has been overlooked for some time and it’s time to set the record straight. Mathematics alone is incapable of making the diagnoses and treatment protocols. Almost all of it is verbal description. We use verbal descriptions from comparison process methods throughout.
This is the key to understanding. Comparison processes CAN handle and deal with complex systems. Logic, math, and linear methods cannot do that task very well. How can this be? And the way to understanding it is again by using the comparison process to see what is going on.

Hierarchies have been discussed here a number of times. There are the massive taxonomies of all known species, living and extinct. There is the physical reductionist method, which starts with particles, then goes to atoms, then to molecules/compounds, then to chains of carbon atoms, the organic and biochemistries. The simple to the complex method. Then there is a discontinuity, and we go from single cells to multicellular living species and the larger multicellular systems with organs in them. Finally we get to neural networks, then larger collections of same in the birds and mammals, then to primates, and then to the higher apes, of which we are a close relative, the latter with 100,000’s of cortical cell columns, previously addressed and described, which underlie how our brains work.

Boundary events are those events which occur unexpectedly. These are the surprises we find when we carefully investigate, examine, observe, (that is, COMPARE) events around us. Flight is just such an event. We see this massively in our biological world, but not elsewhere. We see spiders using their silk to create flight in the winds. We see the maple and other seeds which spin and take off in the wind. We observe seeds surrounded by a great deal of fluff in huge numbers of species of plants, from dandelion, to cottonwood seed, to Kapok, and even the fluff surrounded seeds of the Chorisia tree, with its incredible flowers. Those can fly in the wind. This is a boundary event.

It’s seen with sugar gliders and flying squirrels. And in its most developed form with millions of insect species, 1000’s of birds and 100’s of bats, not to exclude the ancient birds and flying reptiles, the Pteranodons and the incredible colored, downy species. But how do these species actually fly? The dynamics of gliding were well worked out by the Wright Brothers and many since, but true, flexible wing gliding and powered wing flapping is a boundary event, one which created the most successful and widely varied life on the earth, the insects. No one knows how this works, except that it does. Again, a complex system. How can any computer simulate a changing wing dynamic? It’s beyond math. Using complicated 3 dimensional photography it can be studied, and perhaps some understanding can be created by those observations and study. But so far, little else has helped. There are many theories, of course, but none of them works on most species which fly. It’s simply too complex to figure.

Boundary events are events which mark the transition from one hierarchy to another. They may be small, or major and hardly trivial. In the tree of life, the taxonomies, they are those which mark the creation of legs, which allowed animals to live on land. This transition is also marked by lungs, and skin changes which can conserve water. Another boundary event would be the creation of sex, that is, male/female in a single species, in all the myriad ways in which this can be done.

Binocular vision, hands, and a complicated cortex are what marked the change from primate to higher apes. A boundary event which created humans was upright posture, which freed the hands and better enabled long distance sight. Another would be opposable thumbs. Another would be complex speech and vocal cords. The last, of course, is the creation of human consciousness, similar to that of the great apes, but vastly amplified by the greatly enlarged human cortex, those 100,000’s of cortical cell columns which give us the capacity to process large amounts of information. Color vision would also be included. Each of these was unexpected until found.

In cultures, esp. in the sciences, a boundary event was the discovery of radioactivity, as an aspect of the elements’ isotopes. This created an enormous change in physics and everything it has touched, including nuclear power, nuclear medicine, dating methods and so forth. If we review “Depths within Depths”, cited above, we again see those boundary events which marked major changes in the sciences and in our understanding of the unlimited complexity of the universe of events, within biological systems and outside of them.
Events change; the rules change at the boundaries of the hierarchies. When the first hydrogen atoms were formed early in the universe’s history, that made possible the creation of compounds and molecules from these new kinds of elements and atoms. When the first supernovae exploded, they created 1000’s of new isotopes among the elements heavier than carbon, too, and these were flung out into space. From those eventually came living systems. Again, boundary events.

When the massive carbon chains were created using hydrogen and hydroxyl groups, etc., this again marked a boundary event, which we call organic chemistry, which then became biochemistry and living cells. From there it became multicellularity with complicated, specialized cells, and so forth. The creation of the worms, with their many, repeated, segmented bodies, each segment of which could create legs and was capable of massive specialization, eventually resulted in the notochord and then the backbone, still segmented, resulting eventually in fish, then amphibians, marking the transition from marine to land life, then reptiles, birds, and then mammals. Humans are still segmented animals. We need only look at our backbones, those repeated segments derived from adding on, again and again, simple parts. Even the dermatomes of the skin and nerves show these segmentations throughout.

In cultures, boundary events are marked by new religions, new belief and political systems, and the sciences. Each of these subtypes was unexpected from the previous forms, just as feathers were used at first for warmth and then made flight possible in the birds. These are transitions from one form to another, and the characteristics which mark those transitions can be very complex.

Plate tectonics is just such a boundary event, created when enough lighter rock, viz. granite and quartz, differentiated out from the more common, more dense basaltic rock. Just piling up a great deal of isotopes of the lighter and heavier elements would not necessarily have given any idea that such a thing as continental drift was possible. And yet it exists and is real.

Taking the hierarchy of the change from atoms to molecules shows this very clearly. Bonding of atoms is real and is due to the electron level characteristics. Simply knowing the structure of atoms does NOT give the necessary knowledge that atoms can bond with each other and create vast, almost unlimited numbers of chemical compounds. Nor does it show, without actually testing and trying it out, that carbon atoms can create very long, relatively stable chains of atoms, which are the basis of organic chemistry, biochemistry and the complex polypeptide/protein chains so necessary for life. The rules change at the boundaries, and these rule changes are the boundary events which mark those edges.

For instance, polypeptide chains can be used as biochemical signaling devices. This is not clearly known from the study of molecules alone. It takes a very large, very advanced cell organization to see this. A simple polypeptide chain can produce very important hormonal and regulating effects, such as insulin and gastrin, for example. There are 100’s of such examples, such as the pituitary hormones, GIP, VIP, endorphins and so forth.

In addition, certain organizations of complex protein chains of amino acids can catalyze chemical reactions at lower temperatures than would be expected. Humans create nitrogen fertilizer, essentially ammonia and ammonium, by the Haber-Bosch process, which requires 100’s of degrees of heat and 100’s of atmospheres of pressure. Then we look at the Rhizobium bacteria, one of my favorites, which live in the roots of legumes, the bean and pea family. They can fix atmospheric nitrogen at soil temperature. And why? Because they use protein chain chemistry, the enzymes. These are the boundary events from complex biochemistry to living systems, those new, immanent, unexpected surprises which await us when atoms are connected into larger and larger groups. No one could possibly have imagined that enzymes could have created complex chemical transformations by looking at bonding.
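The temperature point can be made quantitative with the Arrhenius law, k = A·exp(-Ea/RT): a catalyst that lowers the activation energy Ea multiplies the rate enormously at the same temperature. The sketch below uses illustrative activation energies, not measured values for nitrogenase or Haber-Bosch, just to show the scale of the effect.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_ratio(ea_uncat: float, ea_cat: float, temp_k: float) -> float:
    """Rate enhancement from lowering the activation energy Ea at a fixed
    temperature T, assuming the same Arrhenius prefactor A for both paths:
    k_cat / k_uncat = exp((Ea_uncat - Ea_cat) / (R * T))."""
    return math.exp((ea_uncat - ea_cat) / (R * temp_k))

# Illustrative numbers: a 60 kJ/mol drop in Ea at soil temperature (~25 C).
boost = arrhenius_ratio(1.2e5, 6.0e4, 298.0)
print(f"roughly {boost:.1e}-fold rate enhancement")
```

A drop of that size yields a speedup of around ten orders of magnitude, which is how an enzyme can do at soil temperature what an industrial reactor needs heat and pressure to force.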

Indeed, the problem is this. We cannot build up the biochemistry of enzymes from physics. That’s a boundary event. Physics can’t build up a system of understanding of living systems, simply because of those boundary events, such as the complex interactions of the amino acid chains which can create hormonal effects, and the enzymes which literally create all metabolism. The quantum mechanical equations do NOT include those boundary events, which mark the next hierarchy from biochemistry to living cells, let alone the complex division and reproduction which gave rise to each cell. Quantum mechanics is incomplete. ALL scientific models are incomplete.

This is why mathematics cannot follow. It’s simply too simple to do it. We cannot model nor describe anything past basic biochemistry, because the rules have changed at each boundary event of the hierarchies. It’s why complex systems are beyond the ken of even the most advanced mathematics. That’s why it cannot deal with plate tectonics, or solve the N-body problem. Nor can mathematics use its symbology to translate much of anything spoken or written into entirely mathematical symbols which are meaningful. It’s too limited. And those are the facts.

So we have the elementary stable particles, which are then assembled into the next hierarchy, the atoms and elements and isotopes, and then the next hierarchy created by chemical bonding. Then comes the organic chemistry of the carbon bond, and then the biochemistry built upon that. And then the complex protein/polypeptide chemistries at the next hierarchical level. And finally comes the cell, then the multicellular hierarchies we call multicellular organisms, which contain the next hierarchies of the organ systems.

And finally come the neurons, the neural nets, the ganglia and finally the brain. And at the top of this, some FIVE levels above the biochemistry/protein boundary, sits the human mind, based upon the complicated neuronal networks of some 50K-60K interacting cortical cell columns, yet another hierarchy. From the brain interface of those interactive CCC’s comes the mind itself, which no math can possibly understand, nor describe in very much detail.

At the top is the comparison process, which creates signal detection, recognition, pattern recognition, language, visual imagery, sensory integrations, thinking, analysis, long term memory, the emotions and much else.
If mathematics cannot even tell us how and why enzymes work, and how to understand those processes at the biochemical level, allowing us to design new enzymes and so forth, from the immanency, the boundary events, of chemical transformations, then how can it possibly, 4 more levels up, describe and understand the mind? Let alone plate tectonics, a complete and accurate solution of the complex orbits in the solar system, or the living cell? That is the boundary problem of the hierarchies. The simple biochemistry, which at first is more understandable, stops dead cold at the protein chains, and then come the even more complex organelle and unlimited N-body problems posed by the complex cell. These are the limits of mathematics. We need a much more advanced kind of mathematics which can first describe and give us measurable numericity to understand those systems. Such methods do not exist.

See:
https://jochesh00.wordpress.com/2014/04/09/languagemath-descriptionmeasurement-least-energy-principle-and-ai/

But time and again, from the immense taxonomies of the millions of species, to the millions of chemical compounds, to music, to language and speech, to creativity and beyond, the brain/mind interface can understand, comprehend, build and grow. This progression then leads us inevitably to the comparison process, and the next installment in these articles.

What is the COMP? How does it work outside of normal logic? How does it give rise to logics? How does it avoid the pitfalls of Gödel’s proof and normal logic? Is it exempt from the limits of verbal and mathematical logic? Very likely. And that’s why it works where verbal logic and mathematical logic, the very bases of our linear maths, cannot follow. The following article will discuss the major Uber characteristics, the boundary event characteristics of the comparison process, and why and how it works and creates the mind.

Language/Math, Description/Measurement, Least Energy Principle and AI


By Herb Wiggins, Discoverer/Creator of the Comparison Process/COMP Theory/Model; 14 Mar. 2014

“Man is the measure of all things.”
–Protagoras 5th C. BC

Table of contents
1. Inability of mathematics to describe language; inability to describe biological taxonomies and medical language and processes.
2. The universe is NOT mathematical, but partly describable with math.
3. Flexibility of language in descriptions markedly superior to math; useful biological/medical examples
3a. The comparative forms of adjectives as incontrovertible PROOF of the presence of the COMP in all language/descriptions.
4. Measuring is ALL a Comparison Process (COMP): distance, weights, time, etc.
5. Descriptions mostly cannot be measured. They lack the numericity used in the sciences.
6. Visual tracking as a predictive COMP; Butterfly chaotic flight and tracking; missile control by math/geometry versus avian tracking systems; human tracking while driving is much the same.
7. Predicting the future and the Least Energy Principle (LEP); value of the rule of 72; collapse of the USSR and the LEP;
8. Stock market collapse of 2000 and predictions/prophecies.
9. Understanding the structure/function relationship of the comparison process in the cortex of brain; why it’s very hard to understand complex systems esp. of the cortex;
10. Can mathematics, if it cannot describe language much at all, describe human cortical cell functions which arise from the cortex?
11. Can present day math learn how to speak language, or write creatively?
12. A COMP possible solution to the problem of re-creating by machines, human cortical creativity; increasing speed of human creativity by computer modeling.
13. How do programmers create new programs, new operation processes, etc.?
A new form of relational mathematics is needed. Math needs to grow a new form, more descriptive as are languages.
14. The COMP which creates language is more important than mere grammar.
15. The use of empirical introspection to analyze and model programmer creativity processes, as it has that of scientific creativity. Creating creativity on computers by studying how programmers do their work.
16. Empirical introspective study of programmers’ skills and how their cortex’s output creates new programming. Successfully understanding programmers’ creativity can lead to a creative computer and substantially speed up programming progress. Creating creativity by computers will then be directly applicable to understanding language, emotions, and so forth, and creating true AI.

1. The real problem has been for years that language and mathematics are not consonant. We can say everything in language, even complex mathematics, and we can write a great deal in language which we are NOT able to translate into math. For instance, the entire taxonomy of living and extinct species of all life, all the kingdoms and phyla, cannot be translated into mathematics. A bit of the description can, but very little of it. Images of the living species cannot, either. This is a real problem. The math does not exist which can describe a living species, except in trivial measurements.

In the same way, the entire compendium of medicine, the texts of each specialty, the physical exam, physical findings, differential diagnosis, the complex system of steps of testing to a reasonably secure diagnosis, and the treatment protocols cannot be mathematized. We cannot describe the intricacies of psychology and psychiatry, let alone the anatomy, physiology and structure/function relationships derived from neurology, in math either. It’s impossible with math as it exists at present.

In the same way we cannot translate a dictionary into mathematics, nor a novel, nor a play, nor a movie. Yet we can say and speak about all of mathematics. Teachers do this every day all over the world. The descriptions using words can describe math, but math cannot describe very much which is verbal.

2. I recall a professor at university many years ago claiming that the universe was mathematics. I just looked at him and asked: then mathematize anatomy, the differential diagnosis, and the entire DSM-III!! He got very quiet and muttered something rude, and also logically irrelevant to the obvious point. The universe is no more mathematical than it is English, French, or Latin, and those languages, esp. in the biological world, describe it far, far better than math ever can. In the arts and religions of the world, we can defy anyone to translate the Bible, Old and New Testaments, into math, or for that matter the Koran, the Bhagavad Gita, or the Buddhist texts. It can't be done. Or to translate an entire movie into mathematical terms, or an opera or symphony? Clearly impossible!!

There is an extreme limit to the ability of math to take on physical descriptions, esp. images. A picture is worth a thousand words; describing an image would take hugely more, perhaps tens of thousands more, using math!! And the math could not identify what the objects were, either, not even famous places.

3. Verbal descriptions, on the other hand, are very, very flexible and useful, as anyone in the biological fields, including medicine, knows from working every day. Let's describe a beetle, for example. We can tell about size, tho we can use measurement to describe it in more precise terms. But we use colors, and patterns of colors, for the overall description. There are 2 antennae and 6 legs, often swept back in the Scarabaeidae family. There is a hard, protective, chitinous covering over the wings called the elytra. There are the cephalon (head), the thorax and the abdomen. Each of these in many beetle families has its own shape, such as in the Coccinellidae, the ladybug family, where all conform to the rounded shape, tho the 3 major body divisions are still there. We describe these often with a drawing or image, so when we see them we can recognize them. The entire taxonomy of all beetles, and indeed all species known, has been described using words. Measurement is useful, but incidental to it. These descriptions are in fact sorts of measurements, tho they are qualitative, not quantitative. Yet they are highly useful in the description of almost all living forms.

3a. The most convincing demonstration of the ubiquity of the Comparison Process, and that it is at the core of language and its descriptions, is the comparative adjectives and forms. Endless and unlimited, just like the COMP. Here is the proof. Good and bad; good, better, best, the trinary forms of the dualities, the comparative adjectives. Nice, nicer, nicest. Lowest, lower, low, high, higher, highest. Here is a continuum built of two continua!! Two together. Comparing, combining, ever additive, endless. Very nice; somewhat nice; very, very nice. Endless comparatives. Take each letter of the alphabet and start listing the easiest to think of. Above, almost, below; before, a bit before, just before. After, nearly after, just after. Cool, cooler, coolest; close, closer, closest. Dull, duller, dullest; very dull, most dull. Happy, happier, happiest, very happy, much happiness, more happiness, most happiness. Etc., etc., etc., right to the end of the alphabet, and in any modern language you find the same. Universal, real, existing, and solidly evidenced AND confirmed by unlimited examples, which anyone can create, any time.

Again, the COMP, endless, unlimited, undeniable, incontrovertible, essential, ever present, at the very core of description and language. The Comparison Process creates language and is the engine of language creation and usage.
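The claim that comparison sits at the core of ordering can even be sketched in code: a sort routine never needs to know what "nicest" means in the absolute, only how any two items compare. A minimal Python sketch (the word list and its "niceness" scores are invented purely for illustration):

```python
from functools import cmp_to_key

# Invented illustrative scale: positions on a "niceness" continuum.
niceness = {"nice": 1, "nicer": 2, "nicest": 3}

def compare(a, b):
    # A pure pairwise comparison: negative if a ranks below b,
    # zero if they tie, positive if a ranks above b.
    return niceness[a] - niceness[b]

ordered = sorted(["nicest", "nice", "nicer"], key=cmp_to_key(compare))
print(ordered)  # ['nice', 'nicer', 'nicest']
```

The whole continuum emerges from nothing but repeated pairwise comparisons.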

4. In measuring, we use the Comparison Process overtly and completely. If we are to measure distance, we compare it to a known standard, be it a ruler, a tape measure, or in surveying the theodolite, which exploits the known phenomenon that the further away something is, the smaller its apparent size, in proportion to the distance. This can be very precisely measured and then compared to the known standards to establish quickly and easily the sizes and dimensions of large areas of land without dragging around the long ropes, chains and other formerly used methods. Each of these cases shows that measuring COMPARES to a fixed, graduated standard to arrive at a unit measure.
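The surveying idea can be sketched numerically: comparing a known standard (here, the true height of a survey rod, an invented example) against the angle it subtends yields the distance, no ropes or chains needed:

```python
import math

def distance_from_angle(true_height_m, subtended_angle_rad):
    # Comparison to a known standard: the rod's true height is the
    # fixed reference; the measured angle is compared against it.
    return true_height_m / math.tan(subtended_angle_rad)

# A 2 m survey rod subtending 0.02 radians (about 1.15 degrees):
d = distance_from_angle(2.0, 0.02)
print(round(d, 1))  # roughly 100.0 m, since tan(x) ~ x for small angles
```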

When we measure weights, they are measured most accurately using a balance scale, which weighs against (compares to) a known standard weight. When we step on a scale, it's comparing our weight to what has already been standardized. All weights are measured using comparison.

When we measure time, we do so by comparing to UTC (Coordinated Universal Time), maintained at Greenwich, UK, where the time is known and broadcast around the world by radio, so the actual time can be known in each of the time zones. For more precise measurement, we compare the second to the vibrations of a quartz piece in a watch, whose rate is precisely known; constantly counting the vibrations per second creates an accurate timepiece. For more precision still, we use the microwave radiation emitted when electrons rise to a higher level on absorbing radiation, then release it on falling back to the lower level. This occurs at a very precise rate, and the earliest effective atomic clocks were accurate to 1 part in 10 billion. So time is measured by comparison to the electron transition between 2 energy levels in a suitable atom, usually cesium. Time, again, is measured by this comparison.
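The quartz-counting idea is simple arithmetic: the watch compares a running count of crystal vibrations against the crystal's known frequency. A minimal sketch (32,768 Hz is the common watch-crystal frequency):

```python
QUARTZ_HZ = 32_768  # the standard watch-crystal frequency, cycles per second

def elapsed_seconds(cycle_count):
    # Timekeeping as comparison: a running count of vibrations is
    # compared against the crystal's known cycles-per-second standard.
    return cycle_count / QUARTZ_HZ

print(elapsed_seconds(32_768 * 60))  # 60.0 -> one minute of counted cycles
```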

When we synchronize, we use the Comparison Process twice. First the clock is standardized to noon, when the sun is at its highest point. Noon is used because the day was always measured from noon to noon. For obvious reasons, such as overcast days, this method had to be modified. Thus we standardize to noon, even today, worldwide, where the center of each time zone is offset 1 hour for every 15 degrees of longitude east or west of Greenwich. Then we look at the clock, usually with a second hand or digital readout, compare our watch to the moment the second hand reaches 12, for instance, and set the watch to compare precisely to the standard clock time. Comparison all the way through.

When we read time, we do so by comparing events to a standard timekeeper. When we measure speed, we measure the distance divided by the time, using two comparison processes to establish distance/time, giving speed in meters/second, or whatever units are to be used. In every case, we compare the event being measured against fixed standards. Measuring is clearly, plainly a pure comparison process.
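Speed as two comparisons fits in a single line of code: a distance compared against a length standard, divided by a time compared against a time standard. A trivial sketch:

```python
def speed_m_per_s(distance_m, elapsed_s):
    # Two comparison processes: distance against a length standard,
    # time against a time standard; their ratio is the speed.
    return distance_m / elapsed_s

# 100 m covered in 9.58 s (Usain Bolt's world-record run):
print(round(speed_m_per_s(100.0, 9.58), 2))  # 10.44 m/s
```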

Now look: how is description any different from measuring, except that it lacks the valuable numbering system? It's no different, in fact. So when we state something, we are actually using a qualitative description to measure a known quantity without numbers. Whether it's a color which corresponds (Comparison Process) to known frequencies, or brightness, using a photometer to measure the number of incident photons, it's all the same thing.

Descriptions can be compared to some measurements. But some descriptions cannot BE measured, for instance when we call something a leg, or tail, or head, or wing. Those do not carry numericity, which is so valuable in terms of measuring events in the sciences. And indeed, it's very hard to introduce numericity into normal language. It's been the great breakthru which has created the sciences, where ingenious, creative (COMP) methods have been used to measure where we could not measure before. It's number use, numericity, which has made science so successful, because it permits more precise description than is possible with verbal descriptions alone. It also creates more predictive capability.

6. Consider the tracking of an object. Our visual systems are esp. good at this and devote a good deal of the nervous system to yoking each eye precisely in line with the other, so we don't get double vision. This double input allows us to estimate distance using a parallax method (depth perception). It also allows us to determine which direction flying or moving objects/events are going, so we can estimate where the object will be in a few seconds (bird flight), minutes (cloud movement, wind speed), or hours (movements of the sun, moon, or fixed stars thru the sky; calendars). Our visual systems can then predict where they are going.

Now this is an interesting thing when we think about butterflies, because they fly in such a very irregular, almost chaotic way. Most people have seen this but not figured out WHY the butterfly does it. It's very easy. Birds also have visual tracking systems, and they can predict which way an object is going to go, because of the tracking system in their brains. So they can intercept an insect flying in a more or less straight line. Yum!! Butterflies are very much larger and cannot easily outfly a bird, esp. with their colored wings. But if they fly chaotically, how can the bird brain track them? There is no regularity for the bird to recognize and then target the insect. They escape very easily. It's a survival mechanism based upon the bird's tracking system, which has a very hard time following a butterfly's chaotic, irregular flight.

Missile interception and fire control work in much the same way. Essentially it can all be understood as a series of comparison processes. First, there must be detection, usually by radar. The targeting mechanism figures out, by comparing successive radar pulses, where the target is moving in space, and how fast, by timing the pulses that bounce off the target. If it's moving to the left or right, the directional system figures this out by comparing the times and locations in an internal system set up for that, usually a mathematical program which relates to geometry, that is, a comparison to a 3D coordinate system. Then it compares the differences between a series of carefully timed pulses to determine the speed, and when those are done, it has a "lock". The system gives a series of beeps or light blinks and the operator fires the missile. The missile homes in on the target using constant radar updates to figure its position, and if the target's evasive maneuvers are not fast or large enough, it is hit and damaged or destroyed.
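The compare-successive-fixes idea can be sketched as a constant-velocity extrapolation, the simplest possible tracker. The positions and times are invented; a real fire-control system is far more elaborate:

```python
def predict_position(fix1, fix2, dt_between, lead_time):
    # Compare two successive radar fixes to estimate velocity,
    # then extrapolate that velocity ahead by the lead time.
    vx = (fix2[0] - fix1[0]) / dt_between
    vy = (fix2[1] - fix1[1]) / dt_between
    return (fix2[0] + vx * lead_time, fix2[1] + vy * lead_time)

# Two fixes one second apart: the target moved 300 m east and 40 m north.
aim_point = predict_position((0.0, 0.0), (300.0, 40.0), 1.0, 2.0)
print(aim_point)  # (900.0, 120.0): where a non-maneuvering target will be
```

Note that the prediction collapses exactly when the target maneuvers chaotically, which is the butterfly's trick.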

The bird does this, but we don't know how. Clearly it has to have some kind of internal representation, using a neural system which can model the changes over a few dozen milliseconds, arrive at an approximation of where the insect will be in a few more seconds and dive towards it, comparing each position to the previous ones to keep updating the approximate spot the bug will be in the near future, connect this to the wing beats for diving and speeds of approach, and then grab the bug in its beak within some range of movement depending on how long the neck is and how fast the beak can shut. We've all seen them do it. And their capacities for intercepting flying insects are remarkable. We cannot duplicate this system, because the positions and directions are constantly changing, and tho the bird might not always capture the bug, it often does. Yum!! Comparison processing throughout. And the bird does NOT use mathematics to do it, but neural networks, whatever those are.

Once we understand that this sort of thing, i.e., the comparison process, is going on to measure, move towards and intercept, then we can more easily figure out how to duplicate this process in some way. In driving cars we do the same thing. We know that if we speed up too much we will get to the light before it changes to green, and so we learn an internal algorithm relating how fast we are going to how long it'll take to reach the light at a certain speed after it changes to green. So we do the same thing as a bird. I knew a student who was so good at this he could pick a buzzing fly out of the air. His Comparison Processes were working very quickly indeed. No math involved, just the superb tracking and predicting system his brain had, using the Comparison Process.
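The driver's internal algorithm amounts to comparing a predicted travel time against the light's timer. A toy sketch, with invented distances and timings:

```python
def arrive_before_green(distance_m, speed_ms, seconds_until_green):
    # Compare the predicted travel time against the light's timer:
    # two measurements, one comparison.
    travel_time = distance_m / speed_ms
    return travel_time < seconds_until_green

# 200 m from the light at 20 m/s, and it turns green in 12 s:
print(arrive_before_green(200.0, 20.0, 12.0))  # True -> ease off the gas
```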

The Comparison Process can not only predict speeds and directional velocities, but it can also predict to some extent what people are going to say or do in set circumstances. It can also sight down time lines, extrapolate from current data, and make a prediction that some event is going to occur. This is in fact a kind of prophecy. For instance, many of us knew that the USSR/communism/state socialism was doomed. In Nixon's autobiography he talked about meeting the Dalai Lama, who stated that the USSR was not acting according to the rules governing human greed and incentive and so must eventually fail for this reason alone.

7. The real reason the USSR failed was the Least Energy Principle. This is a comparison process method, purely. It measures outcomes, compares them and finds the one which uses the least cost, time and distance in accomplishing a certain goal. It’s the basis of efficient production, work and all known tasks. The entire universe uses it in terms of a photon’s paths which are the most direct and the least energy, even thru a gravitational field. The orbits of the planets are least energy. The paths which cows take back from the fields to the milking barns in the afternoons are also least energy. The conformations of series of soap bubbles are also. From the trivial to the mighty galactic clusters, all is Least Energy. The windings and bends of river courses are also least energy for flowing waters.

This principle is seen everywhere, and is a basic tool anyone using the Comparison Process must know about and utilize. Because if a method uses less time, fewer resources, and less distance of travel to get a single, set goal done, for instance mining coal and getting it to the customers, or creating electricity with the least waste in production and transportation, that advantage will build up. Using the Rule of 72, which estimates doubling time by dividing the growth rate into 72, if one factory's manufacturing process is 10% more efficient than its competitor's, given similar marketing conditions (yet another comparison process), then in about 7 years its advantage will be double that of its competitor. In 14 years, 4-fold, and in 20 years it will dominate if not own the market.
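The rule-of-72 arithmetic is easy to check in code: the shortcut says a 10% edge doubles in about 72/10 = 7.2 years, and exact compounding agrees closely:

```python
def years_to_double_rule72(rate_pct):
    # The rule-of-72 shortcut: doubling time ~ 72 / growth rate in percent.
    return 72 / rate_pct

def compounded_advantage(rate_pct, years):
    # Exact compound growth, to check the shortcut against.
    return (1 + rate_pct / 100) ** years

print(years_to_double_rule72(10))               # 7.2 years by the shortcut
print(round(compounded_advantage(10, 7.2), 2))  # ~1.99 -> effectively doubled
```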

This was largely what went on in the USSR. There was terrific waste in food production at all levels: planting, quality of seed (which determined the % of sprouting), cultivation, plants, lack of harvesting machines and tractors, and so forth. And they could not get the food to market because of bad roads, bad trucks, inefficient storage and labor problems. They had to fight the "Battle of the Harvest" every year, where even students and factory workers had to turn out to get the food harvested, stored and shipped. This took away from efficient education and production and affected the entire USSR during harvest times. While US farmers were only 2% of the population, Soviet farmers were 30-40% of theirs, to grow about the same amount of food. A telling comparison, as it's an outcome statistic.

This problem occurred all over the USSR in all areas. It got so they could not drill oil wells much deeper than about 10K feet, and that took several days, where US firms would drill down to 25K feet in only a day, giving the US a huge comparative advantage in efficiency: more drilling, and deeper, too. As a result Soviet oil production began peaking out in May 1984. The Soviets knew this would happen and built many large, cheap, simple, graphite-moderated nuclear power reactors, the RBMK-1000 models, in groups of 2 to 4. This was Chernobyl, and Sosnovy Bor as well, among others. Everyone knows what happened there, esp. when an estimated 30% of Soviet workers were drunk most days, which continues to the present.

Upon this basis of inefficiency it was predicted the USSR would collapse if we held strong. Reagan increased pressure on them using direct embargoes of computer and other strategic goods and then forced them into a massively costly arms build up they could not afford. In 1991, the USSR collapsed due to its inefficiencies, many of which have not been reduced even today. This was no surprise to most of us.

8. Before the stock market crash of 2000, the "Economist" of London ran 2 front-page cover articles about the USA's stock market bubble, where prices were WAY in excess of reasonable, some with price/earnings ratios of 50-60 to one and some mathematically infinite because of no earnings at all. I can still recall those two cover images. Further, sitting at a dinner meeting with some associates in March 2000, I told them that our stock markets would collapse and to be ready to get out quickly to cut losses. There were two responses. One wife said, "The stock market can't collapse. All of our pension money is in stocks." I looked at her and said, "How can I be overdrawn? I've still got checks!!" And another rather overconfident person said, "No one can predict the future at all." "The London Economist believes they can, and that's good enough for me," I said.

In April it collapsed, the NASDAQ eventually falling from over 5000 to about 1100, and the Dow from over 11K to about 7K. Many suffered serious losses. None of those persons EVER acknowledged to me what had happened to the market. Another acquaintance of mine made $50K on the fall in prices. Further, "A prophet is without honor even in his own land." The gift of prophecy of Cassandra, pious daughter of Priam of Troy, was well recognized, but the gods had cursed her: no one would believe her. This is the hidden power of the ancient Greek myths. Do you see how all of these things fit together, creatively, using the Comparison Process?

Future predictions ARE possible using the Comparison Process. It's the gift of prophecy. If you know enough and can wrap your concepts creatively around events tightly enough to discern the velocity and direction, you know where the thing will land. LBJ had this gift. So did the seer Winston Churchill, as reported in C. P. Snow's Variety of Men. This is another gift of the Comparison Process.

Understanding this, recall that Bayesian mathematical methods can create predictive values and are used widely in machine recognition programs for voice or image recognition. In this way, these programs are doing a simple comparison process. It's how we recognize voices and persons, the same way many other animals do, too. Recognition is a very important part of the COMP, as has been repeatedly shown before: recognition of words, landmarks, the creation of maps, and so forth. The Comparison Process is Bayesian-plus and resides in all human cortices, performing recognition, creating creativity, creating and understanding language, math and many, many other tasks, constantly while we are awake, and often in dreams, too. But I have digressed in order to make more important points about the COMP.
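The Bayesian recognition idea can be sketched as a tiny naive Bayes classifier: each hypothesis is scored by how probable the observed words are under it, and the hypotheses are then compared. The two-speaker corpus below is invented purely for illustration:

```python
from collections import Counter
import math

# Invented toy corpus: utterances labeled by speaker, for illustration only.
corpus = {
    "alice": "the comparison process creates language".split(),
    "bob":   "the missile tracks the target by radar".split(),
}

def log_posterior(words, speaker):
    # Bayesian comparison: score a hypothesis by how probable the observed
    # words are under it (uniform prior, add-one smoothing).
    counts = Counter(corpus[speaker])
    total = sum(counts.values())
    vocab = {w for ws in corpus.values() for w in ws}
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in words)

def recognize(utterance):
    # Recognition as comparison: pick the hypothesis with the best score.
    words = utterance.split()
    return max(corpus, key=lambda s: log_posterior(words, s))

print(recognize("comparison creates language"))  # alice
print(recognize("radar target"))                 # bob
```

Scaled up to acoustic features instead of words, this is roughly the shape of classic voice-recognition scoring.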

9. A further problem is understanding the major functions of the cortical cell columns of the human brain devoted to the Comparison Process. The next question is: how does the neurophysiology of the 6 layers of the cell columns create the processes which result in the COMP? That question is an unsolvable one at present. Using the structure/function relationship and an analogy with E = mc², it can be understood better. When Einstein wrote his famous equation relating matter and energy, it was 1905. Nuclear fission did not come along until the 1940s, and fusion about 6-7 years later. Now at last at Cadarache, France, the International Thermonuclear Experimental Reactor (ITER) is intended to come on line well over breakeven. That's on the order of 100 years of lag time before the Structure side of the S/F relationship caught up with the Function side.

Now, the Structure of the cortical cell columns creates the COMP. We know what that Function is. But what is the structure which does it, neurophysiologically? We don't know, and there is an impenetrable block here, too. It's the N-body problem. Using current math and computational power, we cannot find general solutions for N equal to or greater than 3; the equations go to such complexity/chaos that not even the best computers can easily solve them. Consider that the number of interacting neurons PLUS neurochemicals is in the 10K's at least in the cortical columns. The number of genes interacting to create the human body is in the range of 25K, interacting with probably more than 25K MORE chemicals/biochemicals. When we cannot solve for N>3, how can that be done, when in fact each of those neurons might well be interacting via synapses with 1000's of others?
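The scale of the problem is easy to quantify: with n interacting units there are n(n-1)/2 pairwise interactions alone, before any higher-order combinations are counted:

```python
def pairwise_interactions(n):
    # Each of n units can interact with every other: n*(n-1)/2 pairs.
    return n * (n - 1) // 2

for n in (3, 25_000):
    print(n, pairwise_interactions(n))
# 3 bodies give just 3 pairings; 25,000 interacting genes give
# over 312 million pairs, before any higher-order combinations.
```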

So it will take a while to figure that out. The difficult we do today; the impossible takes a bit longer, to paraphrase the wag. It took us thousands of years to figure out what the cortex does in a basic, fundamental way, and it will take us a lot longer to work out the unlimited workings of the Comparison Process in the cortex.
But the point is, we have the mathematical Bayesian predictive values which can create basic machine recognition of voice, fingerprints, and even some simple images. These statistical methods don't give us language, but we have the functional origin of language, which has not heretofore been known. It's the COMP, clearly, that repeating simple process which is the right side of the S/F equation. It will take a while before we can generate all the details of how the COMP creates a real, existing language, altho we have the E = mc² of it, the COMP. The same for the personality disorders, let alone the emotional system, tho some headway has been made recently. Now we need to solve the N-body problem for the neurophysiology/genetics/embryology of the brain's cortical cell columns.

10. Let us treat description in the same way. Math cannot give us much in the way of translating all but the simplest language into numbers/equations. Describe the colors and sky of a beautiful Western sunset. Language can give us some idea. With certain known landmarks, it can even tell us where that sunset took place, at Point Loma in San Diego, or looking at the alpenglow in the Colorado Rockies in the Fraser River area. Math can create a digital encoding of the image, as we use JPEGs all the time. But it cannot give a meaningful description of what is being seen using those numbers in the JPEG. That's qualitatively and quantitatively a wholly different task.

Consider this, and it’s the critical one. Can mathematics give us the way to create language? No. Can it at present give us a way to re-create creativity, modelling in some way how Einstein, Darwin and Wallace used the Comparison Process to create Relativity and evolutionary theory? No. Until it can, then true, complete AI cannot come about. Until math can create relational methods, comparison processes which re-create the complexities of language, which the Comparison Process does every day in our cortices, then math will not be able to handle meanings and much else. This limit to mathematics must limit its use in creating AI which can model realistically, human cortical functions.

Now consider measurement. We use rulers, tapes, etc. to measure lengths and distances, clearly comparing the gradations on those tools to arrive at the values we get. When we measure colors of light, we can analyze the brightness with photometers, and the saturation and amplitudes/frequencies of the colors. But that would take a very long time to do. Our visual cortices do it all the time with just a few hundred milliseconds of work. When we compare images in our minds, while we are thinking about events in existence, such as Darwin's finches, can the mathematics recognize what is going on? Would it have the judgement and sense to realize what this means, as did Darwin, and then Wallace in his own way in Indonesia?

11. For the same reason that mathematics, even of the Bayesian kind cannot figure out language, tho it can detect targets, compute their trajectories and hit those targets, it’s a long way from that simpler task to understanding language enough to speak it, except in a stereotyped, pre-programmed, limited way. In order for that to happen there must be, very likely, a number of very important breakthroughs in pure and applied mathematics to make this happen.

12. Let's consider an easier course of action. The Comparison Process in our brains is a massive, simple process which creates creativity. It does this by means unknown to us. We CAN, however, describe what is happening by looking at the process using the COMP. We know there is a stream of consciousness going on, where a lot of processors are doing the work, by associations and by finding the relationships among events/words via the COMP. So if we can speed up this creativity, then we will be closer to creating a system which can model the Comparison Processes going on in the cortex.

How do we do this? The COMP is a self-organizing, ordering system. It's a big problem-solving process operating in our cortex. It can look at a lot of comparisons, possibly using parallel processing, and come up with answers by this same creativity. The COMP can create a lot of ideas, and as one scientist said, to have good ideas you have to have a lot of ideas, because ideas are a dime a dozen. This implies some natural filtering is going on. The Least Energy Principle is a major one. The structure/function relationship is another. When progress is made in understanding language better, that can be translated into creativity, that is, finding out what words/ideas/events have in common and finding bridging concepts, too. These creative recombinings of words/ideas/images are the clue we need. When we create sentences which describe clearly what we have never seen before, that is creativity, purely. The COMP does this all the time, tho. It's uniquely creative.
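Generate-and-filter creativity can be sketched in two steps: produce many combinations, then apply a least-energy-style filter that compares each candidate's cost to a budget. The concepts and their costs below are invented purely for illustration:

```python
from itertools import combinations

# Invented toy example: candidate "ideas" are pairings of concepts,
# each concept carrying an invented cost.
concepts = {"steam": 4, "sail": 1, "engine": 3, "wheel": 2}

def generate_ideas():
    # The generative step: produce every pairwise combination.
    return list(combinations(concepts, 2))

def least_energy_filter(ideas, budget):
    # The filtering step: compare each idea's total cost to a budget,
    # a crude stand-in for the Least Energy Principle.
    return [pair for pair in ideas if sum(concepts[c] for c in pair) <= budget]

ideas = generate_ideas()
kept = least_energy_filter(ideas, budget=4)
print(len(ideas), "generated ->", len(kept), "kept:", kept)
```

Lots of ideas, cheap filtering: the generate-then-compare shape the text describes.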

What kinds of methods are used in programming to create recognition? What are the structure/function relationships of how those many programs compare to each other? This should give a common basis in which they are acting to create, roughly, the same process which can ID fingerprints, faces, or voice commands.

The vocoder can convert an amplitude/frequency pattern of the voice into an electrical signal, which can then be converted to a microwave signal, received by antennae, reverse-translated into more electrical signals, and shipped by fiber optics to the site nearest the receiving cell phone, which then receives the microwave signal, turning it back into sound by electromechanical vibrations which re-create the voice of the sender.

But this gives nothing useful about language, because the device does not understand at all the meanings and use of language. It does a great job of transmitting the signal with good clarity and of translating microwave signals to sound and vice versa, but it's empty of meaning.

What is the difference between this and the voice recognition systems now being used? Such a computer can transliterate voice commands into written words, then act upon those words, producing the search which gives the answers on the person's mobile phone. He can then free his hands to do other things. However, meaning is still not there.

13. What would it take to create that meaning, that understanding of language? That's why true AI has not yet been found. Math doesn't yet capture the complex relationships among words: why we put words in the order they must be in to create meaning. The words must relate to meaning. There must be relationships among those words to create meaning, and the computer MUST understand/know what those are. When THAT can be done, and it can be done using analyses via the COMP, then true AI using language can be created. That is the missing step, the step which the Comparison Process can give us. It's the relationships among the words that give us the meaning of the word "can". It's NOT grammar of itself. The word string must make sense; it must have internal consistencies among the meanings of the words.

14. Take the sentence, "The beans were in the can." This makes sense. "The can was in the beans" doesn't make the same kind of sense. So it's the sequence which determines meaning, too. This is the aspect of grammar, which is necessary but NOT sufficient to create language, as has been shown before using the COMP. Knowing whether a word is a noun or an adverb is not the point. It's the complex relationship among the words which creates verbal logic as well. And this is why AI has failed so far either to create meaningful sentences or to understand/interpret them: because the COMP was not invoked, which DOES give meaning by comparing words to each other. This gives meaning by context. Until contextual sense can be performed by computers, by careful analysis modeling the COMP, AI will not easily speak good English, or sensibly, either. AI CANnot understand the word "can".
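The word-order point can be sketched crudely in code: compare each adjacent word pair of a sentence against pairs seen in prior experience (a tiny invented corpus here, nothing like real language modeling). The sensible ordering matches familiar pairs; the scrambled one doesn't:

```python
# Invented toy corpus standing in for prior language experience.
corpus = "the beans were in the can . he opened the can of beans".split()
seen_pairs = set(zip(corpus, corpus[1:]))

def familiarity(sentence):
    # Compare each adjacent word pair against pairs seen before:
    # a crude stand-in for contextual sense-making.
    words = sentence.split()
    pairs = list(zip(words, words[1:]))
    return sum(p in seen_pairs for p in pairs) / len(pairs)

print(familiarity("the beans were in the can"))  # 1.0: every pair is familiar
print(familiarity("the can was in the beans"))   # 0.6: odd pairings stand out
```

The score compares sequences, not parts of speech, which is the text's point: order and context, not grammar labels, carry the sense.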

15. Let us analyze, using the COMP, how the programmer thinks creatively. When that is done, using the COMP's empirical introspection, then we will see how new programming methods are created. Programming is NOT science. It's an art, like creating a musical composition or creative verbal writing. It's creative. There is no mathematical description of it possible at this time, because we do NOT have a mathematics of verbal relationships which creates meaning. However, we CAN study a number of creative programmers and learn what steps they take when doing that kind of creative work, and it will be a form of the Comparison Process. We can compare their work to each other's and learn from those comparisons. Once that is known, computers can be programmed to create new, creative computer programming methods, by trial and error, the usual way creativity proceeds. And once that can be done, progress in the field of programming will expand exponentially, because each new method which is created can be compared to the others, and this will create more and more new programming methods to solve the problem of AI. We create the tools to create the tools which create the solutions.

This is what the Comparison Process model can give us: an empirical introspective approach to the living art of the cortical processes of creativity in the programmers. The computers don't create programming progress; human brains' cortical processors do. And if we understand that, then creating better and more methods to solve any problem will be much easier and go much faster. Otherwise it's all trial and error and it's unlikely to go anywhere very soon, unless someone gets extraordinarily lucky. But winning the lottery is exceedingly unlikely in this case, altho many are willing to bet on it. As is said, playing the lottery is likely a voluntary tax on those who don't know the laws of probability. We MUST play when the odds are for us, not against us.

As Einstein so wisely wrote, "An epistemological advance always precedes progress in physics." The same is true of AI. The Comparison Process is that epistemological advance in understanding the massive comparison processors in the human brain/mind interface. We don't have to understand the complex, N-body problem of the brain's neurophysiology, which creates the COMP. We HAVE the needed function already identified, and it's the COMP.

16. We only have to understand the Function of how programmers are creative and what skills they use, as any professional does when he works so much better and faster than others without those skills. We must learn more about programmers' creativity; it's a good bet a good many of those skills are already known, the series of programming skills which make a programmer better, faster, more accurate, and creative besides. That is the key here. Understand the creativity of the programmers, and design that into a computer to create more computer-driven, computer-originating creativity. That by itself will give us AI far faster. There is no theoretical reason, given the success of basic voice/image recognition programming methods, to believe anything but that it can be done with work, trial and error, and an understanding of the basics. Otherwise, without the COMP, it could take 100 years. And there would be little or no understanding of how it worked, either.

We know how Einstein, Wallace/Darwin, Edison, Archimedes and others created their new understandings. We need only compare THOSE examples to the methods used by programmers to find new solutions. When the COMP enhances creative progress by direct use by programmers, then progress in creating computer/mathematical models of human language will proceed very quickly, because the same processes which create new creativity for computer programmers will create computer use and understanding of human language. It’s recursive, self-reflexive, and likely to work. One method of creating a COMP model on machines can create all the rest of AI. Create the tools which create the tools, and the rest follows.

I wrote all of these creative articles simply by using the COMP, in less than 3 months’ time, as a man in his mid-60s, who should not have much creativity at all. The COMP and its reinforcing dopamine high can give that to many persons. And what could it do to boost creativity in a 30-year-old, already creative computer programmer and analyst? The mind fairly boggles.

This is what the COMP portends. If we understand understanding, if we can create creativity, then there are far more and better answers.

Artificial Intelligence (AI) and the Comparison Process (COMP)

By Herb Wiggins, discoverer/creator of the Comparison Process/ COMP Theory/Model

Alan Turing’s test for Artificial Intelligence (AI) was simply this: in talking to the AI machine, a person could not determine whether he was talking to a machine or a real person. This has not yet been achieved.

Computers are now used at a very high level in automobile design. Let’s consider what they are used for and what they are capable of by describing, in a general way, what they do. The car must be designed as a massive integration of weight, chassis strength, power plant (engine), drive train, wheels and tires. It has to have electronics in it, too, for many purposes, plus the usual options. If the car weighs too much for the engine, it won’t go fast enough. If the wheels/tires are too weak and not supportive, they could collapse under the weight or wear out too fast. If too large, they add too much weight to the car, hurting the gas mileage. We can see how the entire vehicle, from an engineering standpoint, is very complicated to build. Look under the hood of today’s car if you think it’s simple or easy to do: it’s a mass of complexity carefully fitted under the hood. Try creating that engine compartment manually, with just drawing paper, in your spare time, to see the magnitude of the design problem.

The computer can search through a huge number of possible combinations, by trial and error, much, much faster than human minds can. Well-programmed design computers do the work thousands of times faster than their human builders could alone. This allows a complex, highly integrated system, the car, to be created along a carefully approximated, trial-and-error path toward the final model, which, when tested, tweaked and slightly refined, can be even better. This is why computer design of cars is used today: it can search through the huge number of possible combinations of weights, tires/wheels, chassis strength, engine size/power and so forth, and create a vehicle better designed and more efficiently integrated than humans could in the same time. The design computer has become a human prosthesis for matching/comparing the endless possibilities to create a viable, manufacturable, highly integrated machine. This is early AI, too, and very, very useful. It is probably the most complex form of design going on on the planet on a large scale, as well.
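The trial-and-error search over design combinations described above can be sketched in miniature. Everything here is an invented toy: the parameter ranges, the feasibility rule, and the scoring function are illustrative assumptions, not real automotive engineering; the point is only the shape of the exhaustive compare-and-select loop.

```python
from itertools import product

# Hypothetical design parameters -- invented for illustration only.
engine_power = [100, 150, 200, 250]   # horsepower
chassis_weight = [1200, 1400, 1600]   # kg
tire_rating = [400, 500, 600]         # max load per tire, kg

def feasible(power, weight, tire):
    """Toy feasibility test: tires must carry the car, engine must move it."""
    return 4 * tire >= weight and power / weight >= 0.08

def score(power, weight, tire):
    """Toy figure of merit: favor power-to-weight, penalize heavy tires."""
    return power / weight - 0.0001 * tire

# Exhaustive trial-and-error over every combination, comparing each
# candidate against the rest to select the best integrated design.
candidates = [(p, w, t)
              for p, w, t in product(engine_power, chassis_weight, tire_rating)
              if feasible(p, w, t)]
best = max(candidates, key=lambda c: score(*c))
print(best)  # -> (250, 1200, 400)
```

Real design software explores vastly larger spaces with smarter search than brute force, but the principle (generate, compare, select) is the same.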

Partial AI, however, HAS been achieved, and using the Comparison Process it can be better understood in many of its subtleties. First of all, the essential aspect of AI is recognition at all levels, be it voice and speech, writing, or recognizing what images are. That’s what humans do, using the COMP. Understanding AI through the COMP can give considerably greater insight than other methods, largely because very few people can define/describe well and in detail what goes on in the human brain during recognition/language. The COMP model states that recognition (re-cognition) works via the comparison process. That is, a person hears a word, compares that word to the Long Term Memory (verbal), recognizes each word separately within the context of a sentence, then responds appropriately to that word string. The mind understands what the words mean by comparing them (in the LTM) to the standard usage of those words in much of their complexity.

Ask an AI-programmed computer about this series of sentences:
“I Can do it.” “The beans are in a Can.” “I had to hit the Can.” “He saw the Can-can.”
No computer can figure out what those “cans” mean. Because the computer simulations do not compare words to each other in a sentence, they can only respond so far in pre-programmed ways. They cannot initiate real, creative understanding of word strings, nor create them. The COMP does both.

Because recognition requires that a word be known and stored in working or long-term memory (LTM) in the brain, the brain can hear a word, compare it to the stored words, and recognize it by a close match. That’s what word re-cognition means: when the heard word is processed, it is referred to the same/similar word in memory, recognized along with all the details around the word which describe it and give it meaning, and then responded to. Recognition is one of the basic cognitive tasks/processes of the cortex, the basic function of the Comparison Process. It’s how the brain knows/understands words and meanings. If the brain doesn’t know a word, the word can be spoken about in terms of other, related words the mind knows, and the mind then understands the meaning of the new word. A word’s definition is explained in terms of other, related/associated words. Each new word is built upon the meanings of words learned earlier. No word is an island. No word stands alone. Definitions/explanations of words are given in terms of other words.
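Recognition-by-close-match can be sketched directly: a heard (and possibly garbled) word form is compared against a stored lexicon standing in for long-term memory, and the nearest match above a similarity threshold is returned. The lexicon and threshold are assumptions for illustration; the standard library’s `difflib` does the fuzzy comparison.

```python
import difflib

# A stand-in for the verbal long-term memory -- invented word list.
lexicon = ["recognize", "comparison", "cortex", "memory", "language"]

def recognize(heard):
    """Compare the heard form against every stored word; return the
    closest match above a similarity cutoff, else None (not recognized)."""
    matches = difflib.get_close_matches(heard, lexicon, n=1, cutoff=0.7)
    return matches[0] if matches else None

print(recognize("recogniz"))  # close match -> "recognize"
print(recognize("zzzz"))      # no stored analogue -> None
```

When no stored analogue is close enough, the function returns nothing, which loosely parallels the paragraph’s point: an unknown word can only be handled by relating it to words already in memory.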

It’s the relationships among the many words, based upon the Comparison Process, which create language and understanding and knowing and comprehension. The COMP clearly shows the relationships/associations among words. That’s how we know words are related and can be used together. Once we understand that the COMP does this, though we cannot YET understand the complex neurophysiology of the usual cortex’s 6 layers which do this work, and how this neurophysiology creates the COMP, we can still use the COMP as a model of the mind/brain interface. The COMP is a critical-to-understand shortcut through a complex system. And language is a complex system. The COMP is what verbal thinking does and is. The COMP has other tasks too, but verbal processing is part of what it does every day, all day long.

Thinking and the COMP are often the same. This gives essential and useful insights into what we mean by “thinking”. If the COMP is being used, that is one kind of important, fairly common thinking process. We can see this when we go to find a word in a dictionary, or any other list or index: we are looking for the word, comparing and matching the other words until we find the entry we are looking for. This is one kind of Comparison Process among a multiplicity of types.

So far we have early AI on Google. We put a word, and perhaps some of its modifiers, into the box and hit the enter button; the massive Google search engine compares/matches that word to those in its huge database (machine LTM) and spits out all of the matches. There is very little thinking going on. In a very primitive way, Google matches the word (or closely related spellings), finds all the references to it, and puts them up on the screen. That’s simple recognition, or matching; both are COMP.
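The match-and-retrieve step just described is, at its core, an inverted index: each word points to the documents containing it, and a query is answered by pure set matching with no understanding involved. The documents below are invented for illustration; real search engines add ranking, stemming, and much else on top of this skeleton.

```python
# Invented mini-corpus standing in for the search engine's database.
docs = {
    1: "the comparison process and the cortex",
    2: "plate tectonics in geology",
    3: "the comparison process in language",
}

# Build the inverted index: word -> set of document ids containing it.
index = {}
for doc_id, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

def search(query):
    """Intersect the posting sets of all query words: pure matching,
    no 'understanding' of what any word means."""
    sets = [index.get(w, set()) for w in query.lower().split()]
    return sorted(set.intersection(*sets)) if sets else []

print(search("comparison process"))  # -> [1, 3]
print(search("geology"))             # -> [2]
```

Everything the function “knows” is which strings co-occur with which document ids, which is the article’s point: matching alone is recognition without comprehension.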

But to ask the machine to UNDERSTAND what the word means is something entirely different. If the machine can TALK/WRITE about the word using coherent, meaningful, sensible sentences, and those can be pre-programmed into it, then it would give a superficial sense of meaning. However, if it were asked a question that was not pre-programmed, such as to describe what it had to eat today, it’d stumble. Or if it were asked to describe the meaning of a phrase in more than 2–3 ways, it’d also stumble. Either that, or it would talk the way a dictionary reads, which would be fairly obvious to most human listeners. This much has been achieved.

But being able to talk and learn new words requires the ability to frame sentences, that is, to create a string of meaningful, interrelated/associated words comparing to each other, which makes those relationships real and effective. And that requires creativity, a lot of processing power, and considerable memory, too. If such a question were asked, all machines today would fail. They cannot create language which is flexible enough to sound real. Neither can they describe an idea new to them. Because in order to do that, the machine would have to know meaning and be able to compare the words in a sentence to each other, to create meaning. And no computer can do that, yet.

Further, no machine can describe a simple image, let alone a mountain scene, and identify where it’s from, such as the Valley of the Yosemite. No machine can see Half Dome and pick it out of a picture from one of its many angles and views. It can be pre-programmed to do so from a stock set of images, but it would fail the task on something it didn’t know, as observed by a normal person. No computer system yet can turn a complex image such as Half Dome around in its circuitry and recognize it as Half Dome from different directions. But humans do this, and analogous processes, all the time by using the Comparison Process.

In order to create real AI, the programmer must first learn how to create language from the comparison process, which is how the human brain creates, learns, discovers, and teaches/explains language. Until that time, there will be, by trial and error, simulated and superficial AI, but not true AI, the kind which cannot easily be distinguished from a human source.

The AI computer will have to be able to recognize words and to talk about words. It will have to comprehend what those words mean in real, contextual terms, not just the automatonic, pre-programmed responses which pass for voice recognition today. We can see those work. We simply speak “Linda Eder” into the YouTube search and it transliterates the spoken words (hopefully) into “Linda Eder”, then matches those with what it has, and up pop all the Eder tunes/images associated with her. I’ve done this. It works, and it was not fooled by the “eider duck” form, either. It grasped that Linda Eder was pronounced the same as the duck’s name, despite the spelling. So some progress has been made, but not enough. Basically, it transliterates the spoken word into letters and then does the primitive matching work usually seen in the Google box, just as it does when we write “Linda Eder” into the box on the Google starting page. Sometimes you have to type it in, anyway!

The ability to string words into a sentence which makes sense is what humans do. No machine can create that string of words yet, because no machine is yet creative. True human creativity comes from the comparison process in our cortex, and that’s how we make new sentences. That’s how we discover and learn. The programmer must create an electronic analogue to the brain’s comparison process in order for the machine to speak coherently. Otherwise it’s just automatonic words coming out. That’s hard enough.

True speech and meaningful language require that AI understand most of the essential relationships among the words/categories/classes of each word: whether the word refers to something living or not, where it may be on a map, how the object is used, whether it’s soft, hard, wet, dry, and much else. Until the AI computer can get/discover/learn these essential, meaningful relationships/associations, which are created by the Comparison Process at work in our cortices, AI will not truly be able to satisfy Turing’s test.
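Those essential relationships can be caricatured as attribute sets that are compared word against word. The words and features below are invented examples, and real lexical semantics is far richer than a handful of booleans, but the sketch shows how relatedness can fall out of pure comparison of stored attributes.

```python
# Invented attribute sets: living or not, soft/hard, wet/dry.
FEATURES = {
    "dog":  {"living": True,  "soft": True,  "wet": False},
    "cat":  {"living": True,  "soft": True,  "wet": False},
    "rock": {"living": False, "soft": False, "wet": False},
}

def similarity(a, b):
    """Count the attributes on which two words agree -- a crude
    comparison-based measure of how related their meanings are."""
    fa, fb = FEATURES[a], FEATURES[b]
    return sum(fa[k] == fb[k] for k in fa)

print(similarity("dog", "cat"))   # agree on all three -> 3
print(similarity("dog", "rock"))  # agree only on "wet" -> 1
```

A machine holding such attribute tables still only matches features; whether stacked-up comparisons of this sort ever amount to understanding is exactly the open question the article raises.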

In the “Le Chanson Sans Fin” articles, there are posted many examples of the comparison process which creates creativity, from Darwin/Wallace to Einstein through Edison. Here’s another: “Peleset”, from Ramesses 3’s mortuary pylon at Deir el Bahari, on the West Bank across from El Luxor, shows the Peles-et reference, translating as “Land of the Pelesi”. Just as the ancient Kheman name for their capital, Uas-et, translates as “the Place of Power”. Ramesses 3 stated he settled those people there, on Egyptian Mediterranean territories tributary to him.

Trude and Moshe (Moses) Dothan (People of the Sea), noted Philistine-area archeologists who excavated Ashkelon, among other Philistine cities, found the Achean/Mikunan stirrup jars, art motifs and Grey Minyan ware pottery at the lowest, foundational levels there. Philistine is an anglicized version of the same word, Peleset. The area is also known from Roman times as Palestina, and in our time as Palestine, anglicized from Palestina. The Comparison Process shows that both the geographical areas and the names, from the 11th century BC to the present, reflect the same word. This verbal/geographical/archeological comparison shows the creativity process at work, seeing that the words, places and peoples, when carefully compared, are of the same origins. It’s the comparison process at work.

But images? Recognition of images is what our brains do pretty well, and we can talk about them, recognizing what we know and don’t know, too. AI and machines cannot yet do that. A picture is worth 1,000 words; it requires enormous processing power to analyze images, even by our own visual cortex.

But what about a simple drawing of a banana, in color? Some AI programs might be able to call it a banana, but just try to ask them what they’d do with one, or how it tastes. The ability to discover, learn, and then talk about a subject is uniquely human. Until, by trial and error and much creative work, AI computers are able to do these tasks using a system which imitates/models the comparison process of creativity, creating new sentences which make sense by using the COMP to create the relationships among the words, and learning, we won’t see true, real AI. Because that’s what language can be shown to be: repeated use of the Comparison Process. That’s the key to true AI, which until recently has not been understood, or recognized, or discovered.

Now that the goal of AI can be defined using the COMP model, researchers know where to go. And having some sense of direction toward the goal is often half the task done.

Most animals can recognize territories, prey/food, predators, and handle other simple recognition tasks. The primates have cortical cell columns very similar to humans’. Other mammals, porpoises, dogs, birds and reptiles probably use different sorts of Comparison Process neural structures to create recognition by comparison to learned/discovered LTM analogues of the same. Biochemical recognition by the lower forms of life, such as protozoans, bacteria, fungi, etc., probably qualifies as an analogous recognition structure. It’s been done before; it can be done by humans using AI. But as so often seen, when the problem can be more exactly defined, it’s easier to do. Otherwise, there’s a lot more trial and error to work through, and that can take a long time.

Now consider the value of a computer which processes at about 10 GHz, compared to the human brain’s parallel processing at about 10 Hz: roughly a billion-fold speed difference. If a computer can be made to model human COMP creativity, it could potentially find more useful, creative findings using the COMP faster by a factor of at least 1,000. This could potentially speed up creativity, that is, inventiveness, by the same factor. What would happen to the progress and output of the arts and sciences when that happened? What could happen to programming when a computer was used to create possible approaches to solving real, significant computer programming problems? Imagine the possibilities: the computer as a programming problem-solving prosthesis for humans, speeding up problem solving thousands of times, or far more. Awesome potential for solving all programming problems, and for solving our own very human, very serious problems of chemical and radiation pollution.