# Origins of Information & Understanding

Origins of Information & Understanding; and the Wellsprings of Creativity

By Herb Wiggins, M.D.; Clinical Neurosciences; Discoverer/Creator of the Comparison Process/CP Theory/Model; 14 Mar. 2014

Working with the comparison process, comparison methods, least energy, complex systems, and structure/function methods, the complex-systems origins of information have become clear.

From the article on “Descriptions and Measurement”:

https://jochesh00.wordpress.com/2014/04/09/languagemathdescriptionmeasurement-least-energy-principle-and-ai/

Essentially, this strong origin and deep equivalency of the outputs of language (verbal description) and measurement (mathematical description, or numericity) can be easily established. It’s best exemplified, and most easily shown to be the case, using measurement and the Einsteinian relativity epistemology this implies. First, take a simple saw-cut wooden stick of sorts. Lay it down next to a meter stick, compare the length of the stick against the scale, and read off the length in centimeters. By comparing against the relatively fixed, stable measuring standard of the metric system, this act shows about 19.5 cm in length. Measuring further, we see about 10.2 cm in width, and about 2.1 cm in height. We have just created information about the stick, haven’t we? It’s numerical information, but the comparison against the centimeter scale is what creates it. Using a relatively fixed, standard mercury thermometer, we then compare the temperature of water against its scale and measure it at, for instance, 72 degrees C. This creates information/data about the water temperature. Hanging the same thermometer in the air, out of sunlight, we can read the ambient air temperature as well. This also creates numerical data, description, and information, does it not? Likewise, comparing the mass of a simple metal block on a balance scale, we place set gram standards on the opposite pan and find the balance at, for instance, 85.5 gm. This creates data, info.

Thus it is with each measuring scale we use, regardless of the quantity: speed or velocity (km/hr, for instance), hardness (the Mohs scale or kilopascal measurements), even density, which is the ratio, the comparison, of mass to volume. In each case we get a measure, a numerical description, of each event we measure by comparison. Thus most all measurement is a comparison against a relatively fixed, standard measuring device. And that creates data which was not there before, by comparison processing of the event. This is very clear: measuring of most all kinds creates information.
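This measurement-as-comparison idea can be sketched in a few lines of code. This is a minimal illustration, not any real instrument’s API; the `measure` function, the `CM` standard, and the stick’s dimensions are assumptions chosen only to match the figures above.

```python
# A minimal sketch of measurement as comparison: an event's extent is
# read off against a relatively fixed standard scale. The 'information'
# created is simply the ratio of the event to the fixed standard -- a
# comparison, not a property the event carried by itself.

def measure(extent_mm: float, standard_mm: float) -> float:
    """Compare an event's extent against a standard unit."""
    return extent_mm / standard_mm

CM = 10.0  # one centimeter, expressed in millimeters

# The saw-cut stick from the text: length, width, height in mm.
stick = {"length": 195.0, "width": 102.0, "height": 21.0}

# Comparing each dimension to the centimeter standard yields the
# numerical descriptions 19.5, 10.2, and 2.1 cm.
readings = {dim: measure(mm, CM) for dim, mm in stick.items()}
print(readings)  # {'length': 19.5, 'width': 10.2, 'height': 2.1}
```

The point the code makes is the same as the text’s: the numbers exist only relative to the chosen standard; change the standard and the data changes with it.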

Now, to extend this same model further: verbal description is measurement using language. We have a standard scale for color, which is ROY G BIV: basically red, orange, yellow, green, etc., which we use in comparison to describe colors. We know what green is, as it’s largely the color of plant leaves. We know the reds of sunrise and sunset, and the blues of the sky. We know the white of clouds, the black of night, and the unlimited greys in between, the white-to-black color scales, which we use to describe the colors of objects. So we sight a cloud and we call it grey, comparing it to our long-term memory (LTM) recognition of what that color means, do we not? The comparison process drives the recognition, that is, the “re-knowing,” the comparison to our memory of any and all colors. Thus color descriptions which are verbally created also create color information, using standard words which reflect our standardized conventions for each color. That processing of sensory input creates information/data about colors, which we can then write down and record, just as we do with lengths, temperatures, hardnesses, etc. Sensory input is thus processed by comparison to create information and data. The processing of internal information occurs in the same way, as when we recognize pain, pleasure, and the areas of our bodies which are moving and hurting, too.
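The color-naming process just described, comparing a sensory sample against LTM standards and reading off the nearest word, can be sketched as a toy nearest-standard classifier. The named RGB values here are illustrative assumptions, not a perceptual standard.

```python
# A toy sketch of color recognition as comparison: an incoming sensory
# sample is compared against relatively fixed LTM standards (a handful
# of named RGB values), and the nearest standard supplies the word.

LTM_COLORS = {
    "red":    (255, 0, 0),
    "orange": (255, 165, 0),
    "yellow": (255, 255, 0),
    "green":  (0, 128, 0),
    "blue":   (0, 0, 255),
    "grey":   (128, 128, 128),
    "white":  (255, 255, 255),
    "black":  (0, 0, 0),
}

def recognize_color(sample):
    """Return the LTM standard nearest the sample (squared distance)."""
    def dist(std):
        return sum((s - c) ** 2 for s, c in zip(sample, std))
    return min(LTM_COLORS, key=lambda name: dist(LTM_COLORS[name]))

# Sighting a cloud: a desaturated sample compares closest to "grey".
print(recognize_color((120, 125, 130)))  # grey
```

The “re-knowing” here is literal: the sample carries no name of its own; the word is created by the comparison against the stored standards.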

This data is then processed into the larger concepts which create our knowledge and understandings. We know that grass is green, though of many shades; within the category of “green,” we know this to be the case. Thus our descriptions are kinds of data generation based upon our LTM standards/references.

Take adjectives, for instance. We have the base adjectival form, high; then higher; and last, the highest. The scale of low, lower, lowest works the same way, strictly analogous to a number line, which is linear as well. The last form, the superlative, is often marked by the ending -est or -st. But the middle form is the “comparative,” and there it is again, hiding in our language: all the high, higher, highest forms, in all their myriad ways, are the same comparison processing. Bigger than a bread box; smaller than a pea or marble. Big as a house. Fast as a falcon; faster than a speeding bullet. Most all are simply comparing new events to our LTM and creating data regarding the event, are they not? Thus our verbal descriptions do give meaning by these unlimited forms of comparison standards, which we call words.
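The adjectival scale as a number line can be sketched the same way: grading a new value against remembered references. The function name `degree`, the reference values, and the grading rule are illustrative assumptions, not a claim about grammar.

```python
# A sketch of the base / comparative / superlative scale as positions
# on a number line, graded against LTM references.

def degree(value: float, reference: float, known_max: float) -> str:
    """Grade a value against an LTM reference, as the comparative and
    superlative forms grade events against remembered standards."""
    if value >= known_max:
        return "highest"
    if value > reference:
        return "higher"
    return "high"

# A hill compared against a remembered typical hill (reference) and
# the tallest hill in memory (known_max), heights in meters:
print(degree(300.0, 250.0, 900.0))  # higher
print(degree(900.0, 250.0, 900.0))  # highest
```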

Please peruse the central third of the above article, beginning “But there are deeper depths within depths hidden in our language and here….”

This is where meaning comes from: data/info from the comparison standards built into each word. And this is why language is so complex. Each word acts as a comparison standard, just like our more limited measuring scales, does it not? And it is derived by the same comparison processing of sensory and internal data as well. Thus, as stated in the above article, verbal description is the equivalent of measuring numerically. I will not go into why we use math at all, except to say it’s least energy.

Thus we have the basis of most verbal language and how it describes most all sensory events, AND the relationships/associations of such events. Our understanding is very simple, given by that not widely recognized but keen neuroscientist, Albert Einstein, who wrote in his 1936 essay “Physics and Reality” that, essentially, understanding is derived from the relationships of events to each other. This deep insight readily provides a basic standard for understanding how we know how events are related, and how things work. Structure/function relationships are a widely used method of this type.

We derive relationships by comparing events in existence to words, and then comparing those ideas/words to each other, in the same way we explain words in terms of other, related words, do we not? Thus this complex network of words, each acting as a standard, relatively fixed meaning, compares and measures most all the others in some way.

This formally states the previously unstated means by which our cortices do this work.
This kind of complexity is essentially why AI doesn’t “understand” how language works. Comparison processes, which create recognition of all sorts, work within this method. Bayesian math is used, with massive number crunching, to find the words to describe events in existence; that is, to recognize, that is, to comparison-process data to give names to images, faces, and so forth. Those early AI systems also give meanings to sounds, which are standardized versions of words, which then in turn relate generally to the categories of descriptions which we call ideas.
This simple system largely describes most all languages, and how they relate to each other. And why:

Ich bin Hier.
Je suis ici.
Estoy aquí.
Hic sum.  &
I am here.

Each uniquely translates the others, by a close identity among these expressions.

In the same way, the words within a single language can translate each word into others which explain, identify, and describe what each word means. In the same way, words are used to describe all parts of mathematics. But note this comparison: very few words can be efficiently expressed mathematically! And that shows the problem with using math alone to describe verbal descriptions. While words are used to teach math, math cannot be used to teach most all words. This shows exactly why numericity cannot efficiently express most all of Shakespeare, for example. Or as Ulam stated, most presciently, so many years ago: mathematics must advance substantially before it can describe complex systems (viz., language). This is a VIP point, and it strikes to the heart of the AI problem.

But there is a way around this, and it’s Bayesian methodologies. And that’s why the above article on AI is also relevant here. AI cannot figure out meaning, and it cannot because meaning is NOT inherent in mathematics, universally, as it is in ideas/words. Meaning can, however, be given to words by using the same kinds of standardization of word meanings, by comparison processing of words. That simple model shows how to create general AI using languages. Words are complex: they have many denotations and connotations, and many contextual meanings as well. All of this is driven by comparison processing, which is WHY the contextual meanings of words can be derived by comparing the words around an unknown word. Context, verbal, social, and implied, is everything many times in language. And this again shows HOW to use the recognition potential of Bayesian math to create valid language and meanings; how to get an AI system to “understand” words and their meanings. By a comprehensive understanding of what each word means, in comparison to the social and verbal contexts of the other words around it, meaning is derived by our human brains, which have general intelligence.
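Deriving the meaning of an unknown word by comparing the words around it can be sketched with a crude distributional comparison: count the context words of the unknown word and compare those counts to the contexts of known words. The toy corpus, the made-up word `tezzle`, and the simple overlap measure are all assumptions for illustration, not a real NLP pipeline.

```python
# A minimal sketch of "meaning from context": the unknown word is
# characterized by its surrounding words, then compared to the
# contexts of known words; the best match suggests its meaning.
from collections import Counter

corpus = [
    "the falcon flew over the cliff",
    "the sparrow flew into the barn",
    "the tezzle flew over the barn",   # 'tezzle' is the unknown word
    "the tractor rolled into the barn",
]

def context_vector(word, sentences, window=2):
    """Count words co-occurring with `word` within a small window."""
    counts = Counter()
    for sent in sentences:
        toks = sent.split()
        for i, tok in enumerate(toks):
            if tok == word:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t for t in toks[lo:hi] if t != word)
    return counts

def similarity(a, b):
    """Raw overlap of two context vectors (a crude cosine stand-in)."""
    return sum(min(a[w], b[w]) for w in a)

unknown = context_vector("tezzle", corpus)
candidates = ["falcon", "sparrow", "tractor"]
best = max(candidates, key=lambda w: similarity(unknown, context_vector(w, corpus)))
print(best)  # falcon -- 'tezzle' flies, so its context matches the bird's
```

Real systems replace the overlap count with probabilistic (Bayesian) or vector-space measures over huge corpora, but the operation is the same comparison of contexts against stored standards.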

And that’s the point here. Our words create descriptions of many kinds of events, from pain, to the emotions, to feelings, to specific forms of loves, and our brains’ language centers, augmented by the various visual, motor and spatial and auditory centers, all work together, to create meanings.

Thus we end this simplified version of what’s going on to create information in the brain by looking at the wellsprings of creativity. And it’s simple. Recognition creates a “re-knowing” of events: comparing events to LTM creates knowledge against a standard which is relatively fixed (but, to extend Einstein’s epistemology with a thermodynamic term, efficiently so). And that’s how it goes. The system is efficient, too.

So thus we have the wellsprings, the roots, the origins of creativity. Each time we ID a new event via LTM comparison, that is, recognition, we are creating new knowledge, facts, information, data. When we understand that blue-green is a mix of blue and green, we have done this. When we understand the relationship of pi as the comparison, the ratio, the proportion of the circumference to the diameter of a circle, then we have new knowledge. This creates a NEW standard, pi, and with its comparison process, that is, algebra, we can describe it verbally as well as mathematically, because those related terms all translate efficiently and fairly exactly into each other. We have found a way to express our words in terms of numbers, and given them those numericities. This is, however, but a simple cameo part of understanding and creativity. This new ratio, pi, can be used as a comparative standard to describe the numerical relationships between a circle and a piece of a circle, the volume of a sphere and the area of a circle, and arcs of known degrees, the lengths of such arcs, and so forth. The complex relationships of pi to spherical geometries are well worked out.
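Once created, the new standard pi does exactly this descriptive work: a piece of a circle is described by comparing it to the whole. A short worked example, using only Python’s standard `math` module; the radius and arc are arbitrary illustrative values.

```python
# Pi as a comparison standard: arcs, sectors, and spheres are all
# described by comparing a part against the full circle.
import math

radius = 3.0
circumference = 2 * math.pi * radius  # pi compares C to d: C = pi * d

# An arc of known degrees is described by comparing it to 360 degrees.
arc_deg = 90.0
arc_len = circumference * (arc_deg / 360.0)

sector_area = math.pi * radius**2 * (arc_deg / 360.0)
sphere_vol = (4.0 / 3.0) * math.pi * radius**3

print(round(arc_len, 3), round(sector_area, 3), round(sphere_vol, 3))
# 4.712 7.069 113.097
```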

This is how creativity works with words, at first. We see a new relationship. We see a new kind of beetle, and we find the elytra, the abdomen, the six jointed legs, the cephalon (head), thorax, and the wings as well. Thus we ID and create the data to fit it into the category of beetle. We do the same kind of creative work via the history in the medical work-up, and create a diagnosis by comparing known diagnostic cases this way.

In radiology, an even clearer way to show this: we “read” the chest X-ray, PA and left lateral, and “compare” it to the known, clear-cut, descriptive, NOT mathematical, standards of what we know the heart should look like, and how the bones should appear in the ribs, spine (cervical, thoracic, etc.), scapulae, clavicles, and so on. Those reading X-rays know this intuitively, but it’s not been formally stated before. We have set, efficient, standardized words and X-ray reading methods which describe what’s normal, and we compare those set normal standards, those complex issues of “normalcy” regarding an image, be it CT, MRI, angiographic, ultrasound, etc., to what we see. And then, with a known set of NOT normals, we ID the condition and “make the creative diagnosis,” by comparison processing of recognitions. This is how it’s done. This tells precisely how to create AI diagnoses of all types of radiological images.

I have written about this before, in the discussion of the styles, methods, and skills used by professionals, and how we know professionals from amateurs. It’s all the same thing. Professionals use highly efficient, standardized comparison methods which let them do their work. Compared to amateurs, they do the work faster, better, with fewer problems, and with greater completion and outcomes. Thus they are efficient, that is, least energy, in all those fields.

Thus, with radiologists, we take the best 12 radiologists we can find in each radiographic procedure, ID the methods they use for the “standard of normal” against which they compare each image, and also find the “standard of abnormal” identifications/recognitions they use to find the variations of normal versus not normal. Once those are ID’d and then fed into an AI system, not only can we teach medical students, nurses, and technicians of all kinds how to read each of those studies much faster, but we can actually teach AI systems how to do this as well.
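That program, stored standards of normal compared against each study’s measured features, can be sketched as follows. Every feature name, range, and value here is an illustrative assumption only, not a clinical standard; a real system would learn these from the experts’ identified methods.

```python
# A toy sketch of "compare to the standard of normal": each study is
# reduced to a few descriptive features, compared against a stored
# normal range, and flagged where it deviates.

NORMAL_STANDARD = {
    "cardiothoracic_ratio": (0.40, 0.50),    # heart width / chest width
    "costophrenic_angle_deg": (20.0, 40.0),  # hypothetical normal range
}

def read_study(features):
    """Compare each measured feature to the normal range; return findings."""
    findings = []
    for name, value in features.items():
        lo, hi = NORMAL_STANDARD[name]
        if not lo <= value <= hi:
            findings.append(f"{name} outside normal range: {value}")
    return findings or ["within normal limits"]

# An enlarged heart shadow is IDed by comparison, not by magic:
print(read_study({"cardiothoracic_ratio": 0.58,
                  "costophrenic_angle_deg": 30.0}))
```

The diagnosis step in the text is the same loop with a second table: a known set of NOT-normal patterns compared against the findings.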

But there is a deeper understanding here. Because each of those standards of description and recognition of normal versus not normal is clearly identifiable and can be written down, we can examine, study, and work on each method used, to make it MORE efficient, more streamlined, more least-energy, and improve each comparison method/device/skill without limit, until it grows better and better. Because each method/skill is NOT perfect, but necessarily incomplete, thus thermodynamic, it has a limited efficiency, and most all can be improved on this scale. This can create improvement without limit, up the exponential scale toward the unapproachable “perfect thermodynamic efficiency.” This is what’s offered by the comparison process model of what creates knowledge and information/data and understanding: unlimited growth in efficiency.

And that, of course, is the bottom line of professionalism: how we create information and knowledge, how we understand, and how we mathematize verbal descriptions of all kinds. This is the Promised Land of the Undiscovered Country of complex systems understanding. & there it is.