By Herb Wiggins, M.D.; Clinical Neurosciences; Discoverer/Creator of the Comparison Process/CP Theory/Model; 14 Mar. 2014

.

It’s now possible to observe, dissect, explain, and study problem solving using the basic tools previously explained and shown in depth: comparison processing, the basic function of the cortical columns; least energy, part of the 2nd Law; the unlimited apps/methods which can be generated from both; complex systems thinking; and structure/function relationships.

.

Essentially, what does the brain/mind do at a high level in the cortex when it’s trying to make sense of, understand, and comprehend events? It’s problem solving. When faced with a person we know, as he begins speaking we try to understand what’s being said. This is problem solving we do every day, without limits, in fact. We compare the words with the surrounding words they interact with in order to understand. Essentially, we compare the sounds which we hear, in sequence, to our LTM tracings of the words we know, to get the context of a word such as “can”. I have written before about the 5-6 listed meanings of the word “can”, all of which are distinguished by comparing the other words around it, to give the correct meaning. This, in almost all cases, is the source of context: the comparison processing of words.
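
.

The comparison of a word's neighbors against stored standards can be sketched in code. This is a toy illustration only, not the author's method: the senses and cue-word lists for "can" are invented assumptions, standing in for LTM tracings.

```python
# Toy sketch: disambiguate "can" by comparing surrounding words
# against cue words stored per sense. Senses and cue lists are
# illustrative assumptions, standing in for LTM tracings.

SENSES_OF_CAN = {
    "ability":   {"he", "she", "we", "you", "swim", "run", "speak"},
    "container": {"tin", "soup", "soda", "open", "empty"},
    "preserve":  {"tomatoes", "fruit", "jars", "harvest"},
}

def guess_sense(sentence: str) -> str:
    """Compare the words around 'can' to each sense's cues;
    the sense sharing the most words wins."""
    words = set(sentence.lower().replace(",", " ").split())
    scores = {sense: len(words & cues) for sense, cues in SENSES_OF_CAN.items()}
    return max(scores, key=scores.get)

print(guess_sense("She can swim across the lake"))   # ability
print(guess_sense("Open the can of soup"))           # container
```

Each call is a comparison against fixed standards in memory, exactly the process the paragraph describes verbally.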

.

Peruse down to the 5th paragraph:

.

This is a highly facilitated, skilled sequence of methods and acts, which we call understanding speech and speaking, and it's very efficient: what we call fluency. It’s done very quickly, and we are likely unaware of most of what we are doing. But the CP and LE methods allow us to empirically introspect what we and others are doing when we understand & process words.

.

In the same way that we create data from a measuring standard, by comparing the length of a piece of wood to a meter stick, we create an output of data in meters/cms. We do this for distance, height, length, width, altitude, shortness, etc., the multiple synonyms again, which show the many complex system descriptions and apps possible.

.

In the same way, our adjective trios repeat endlessly, as in La Chanson: high, higher, highest; short, shorter, shortest; big, bigger, biggest; etc. The middle one is the comparative form, but in fact almost ALL of them are comparisons. Bigger than a house, big as an elephant, small as a pea or bee-bee: it’s all comparison. Hot, hotter, hottest, and cool, cooler, coolest essentially show the linear nature of the concepts, which we model by using that linear nature to create a linear meter scale (a standard, a convention, relatively fixed and stable, a la Einstein’s measuring standards); a linear temp scale based upon the 0 deg. C freezing point of water at STP, as water is ubiquitous and thus can efficiently standardize those scales by exact comparison. And the boiling point of water gives us the 0 to 100 scale which is commonly used. The two points act as the scale by which we can compare most all temps to some extent. The same with speeds, being distance (length) compared to (ratio, proportion, viz. algebra) time, as in kilometers/hour, miles per hour, or per second, etc. Comparisons all.
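
.

The two fixed points of the 0-100 scale can be shown in a few lines. This is a minimal sketch of the idea: the raw sensor readings below are assumed numbers, and any reading is placed on the scale by proportional comparison against the two standards.

```python
# A linear scale built by comparison against two fixed, reproducible
# standards: water's freezing and boiling points. The raw sensor
# readings are hypothetical.

FREEZE_READING = 120.0   # raw instrument value at 0 deg C (assumed)
BOIL_READING   = 920.0   # raw instrument value at 100 deg C (assumed)

def to_celsius(raw: float) -> float:
    """Place a raw reading on the 0-100 scale by comparing it,
    proportionally, to the two standard points."""
    return 100.0 * (raw - FREEZE_READING) / (BOIL_READING - FREEZE_READING)

print(to_celsius(120.0))  # 0.0
print(to_celsius(920.0))  # 100.0
print(to_celsius(520.0))  # 50.0
```

Every output datum is a ratio, a comparison of the unknown against the interval between the standards.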

.

The scales are also efficient, because they are based on the gram, the mass of one cubic cm. of water at STP (standard temp & sea-level pressure). Easy to recreate using common water. We take the balance scale, put the unknown on the right pan, and then place the exact, standardized gram weights on the other pan; where it balances exactly, i.e. compares, we have created the datum of its weight. & we can do this repeatedly.
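
.

The balance-pan procedure is itself a simple algorithm: keep adding the largest standard weight that does not overshoot, until the pans compare equal. A sketch, with a typical (assumed) weight set:

```python
# Balance-scale weighing by repeated comparison: add the largest
# standard weight that doesn't tip past the unknown, until the pans
# balance. The weight set is a typical assumption.

STANDARD_WEIGHTS = [100, 50, 20, 10, 5, 2, 1]  # grams

def weigh(unknown_grams: int) -> list:
    """Return the standard weights that balance the unknown."""
    chosen, remaining = [], unknown_grams
    for w in STANDARD_WEIGHTS:
        while w <= remaining:        # pan still lighter: add this weight
            chosen.append(w)
            remaining -= w
    return chosen

print(weigh(87))   # [50, 20, 10, 5, 2]
```

The datum, 87 grams, is created entirely out of comparisons against fixed standards.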

.

In the same way that measuring & maths order/create data, language/speech arises neuroanatomically in, and interdigitates with, the left hemisphere speech centers, and we do the same there. Math arises from the left posterior speech centers, which interdigitate with Wernicke’s area. When the speech here is damaged, so is math, and proportionately to the damage. And it’s also repaired in the same way as it was developed, by recruitment of surrounding cortical columns.

.

Thus, the relatively fixed idea/word system, which is self-reinforcing in our memories, also provides this same fixed landmark, the standard and convention of a word, which we compare to all else. The more words we have which are understood and meaningful, the more comparison standards we have, verbally, to describe events in existence. Thus the simplicity of mathematics/measurements shows us that we build up relatively fixed, stable, efficient ideas/words, the many 100K’s of words we use to describe the complexity of events in existence.

.

For instance, ROY G BIV provides the comparison standards for the colors, most exactly, and even children can tell us what those words mean, and can point them out, because they’ve created a comparison standard in their LTM by constant reinforcement, seeing those repeatedly via their senses. This is how language creates meaning: by comparison processing, at first against events in existence, and then, by creating the hierarchies of meanings, creating more and more useful, widely applicable, abstract ideas/words, which do the work efficiently, that is, by least energy (LE).

.

Thus we have a more complete, more widely applicable model to understand understanding, which is consistent, also due to the consistency of comparison processing. We can model a model, and then model that. The same with analyzing analysis, and so forth. Each of the mental processing words has these characteristics: the comparison process word cluster. As does adding adding, and adding that, etc., throughout arithmetic. We can compare a comparison and then again and again, without limit, inputting the output, and then again and again.

.

In addition, the Comparison Process is NOT globally exclusive, but specifically and very rarely excludes more than the specific cases. This has been shown before. This is where consistencies come from, and how it works.

.

Now, with this solid basis, we move to: how do we solve problems? Using this model it becomes astonishingly easy to see, describe, and observe it working in brain/mind. We create recognition by comparing events to our LTM of the same, efficiently, of all types. Therefore, HOW do we figure out if a chemical, or medication, or drug, has an effect? How do we thus relate an event, i.e. a treatment, drug, etc., to the outcomes of using those for specific tasks?

.

Dr. Paul Stark has shown this extensively in “The Method of Comparison” Ch. 28:

.

& we will not ignore his deepest, specific insights. We compare the outcomes of each chemical to the desired goal, do we not? Does one form of penicillin work better, and with fewer systemic untoward effects (side effects), than the others? Is it cheaper to manufacture? Is it long enough acting to reduce dosing, and how many micro-organisms does it kill? Each of these is easily seen to be a least energy outcome driving the T&E process of comparison.

.

The best ones are chosen, when compared by Trial and Error (T&E).

.

And that is the key. Trial and error, or as it’s sometimes more specifically referred to in AI, back propagation, as it models the T&E methods used by our brains to compare outputs against a specific or more general goal. That is, does the treatment work, and how well?
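
.

The bare bones of such a T&E loop fit in a few lines: propose a small random change, compare the outcome against the goal, and keep the change only if it compares better. This is a generic illustration of the loop, not the author's algorithm or actual back propagation.

```python
# A minimal trial-and-error loop: trial, comparison against the goal,
# keep whatever compares better. Purely illustrative.

import random

def solve_by_trial_and_error(goal: float, trials: int = 10_000) -> float:
    random.seed(0)                                      # reproducible runs
    guess = 0.0
    for _ in range(trials):
        candidate = guess + random.uniform(-1.0, 1.0)   # trial
        if abs(candidate - goal) < abs(guess - goal):   # comparison
            guess = candidate                           # keep improvement
    return guess

print(solve_by_trial_and_error(3.14159))  # converges close to 3.14159
```

Every iteration is a comparison of two outcomes; the least-error one survives, which is the whole method in miniature.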

.

Thus pharmacology is essentially massive trial and error, & I have gone into the high inefficiencies addressed by the top-of-the-line method called combinatorial chemistry, which can create 100K’s of drugs in a single day in a properly equipped and managed lab, as a good way to deal with bacterial resistances. Because how can any bacterium be immune to 100K’s of new antibiotics? Thus the problem is solved by an efficient use of combinatorial chemistry, if properly recognized as the solution to drug resistances. It’s very clearly problem solving by T&E, massively so.

.

The next problem solving section will show in detail how Edison created the electric light, and how the methodology can be extended to literally EVERY creative act, in almost every field as well. Thus the invariant, stable nature of this method of problem solving, and how and why it’s used. And not just by us, but in nest building, hive creation, prairie dog tunneling, and so forth.

.

How did it arise in Edison’s brain/mind that the electric light was possible? Using the CP and LE templates we can read his mind, & explicate, describe, AND model the steps he used to do this, & each of his creative acts, which changed our civilizations and bettered our lives. He was running current through a copper wire and noted that it glowed, did he not? That was light, he recognized. Thus, as he turned up the current, it glowed more and more until the copper melted, and did a bit of oxidizing too, please note. See here now the empirical introspections that are possible with this model of comparison processing, viz. the heart and core of observing cognitive neurosciences on a practical level.

.

So, he had to find a substance which would not burn up, and would not break down/melt from running a current through it. He did this by trial and error testing. And he soon learned that oxygen was a burning problem with materials as he tried one after another. So he put the material to be tested in a roundish globe, the glass bulb, which is more stable than any other shape due to its roundness, and also uses the least amount of material, far, far better than a cube, to enclose a stable vacuum. Thus least energy again, we see!!!

.

So he carefully placed that material into the glass globe, created efficiently by generations of professional glass blowers, via LE and CP developments of their skills. (The difference between an amateur and a professional is the latter’s highly developed, least energy, efficient skills, and those are ALSO developed by T&E! Multiple CP’s again.) So the material to be tested was placed in the globe, the air was greatly evacuated, and then it was sealed.

.

Then it was attached to a rheostat-like device and he steadily turned up the juice until it began to glow, slowly, so as not to burn it out too quickly, or otherwise waste all that time. He could also control the current and keep it below the limit of too much. He tried, be it noted, 900 materials before he found the carbon filament which glowed for 80 hours, and then duplicated it to confirm, by repeating methods, that it was workable. He had the electric light, tho he, the Tinker, as Tesla called him, did it by brute force. And thus the electric light was born.
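
.

Edison's brute-force search, as described above, has an obvious code shape: run a trial per material, compare each outcome to the best so far, keep the winner. The materials and burn times below are hypothetical stand-ins, not Edison's actual lab data.

```python
# A sketch of brute-force T&E over candidate filament materials.
# The materials and hours are hypothetical stand-ins.

def burn_test(material: str) -> float:
    """Hypothetical trial: hours a filament of this material glows."""
    fake_lab_results = {"platinum": 3.0, "paper carbon": 10.0,
                        "bamboo carbon": 80.0, "cotton carbon": 40.0}
    return fake_lab_results.get(material, 0.0)

def trial_and_error(candidates: list) -> tuple:
    best, best_hours = None, 0.0
    for material in candidates:                 # one trial per material
        hours = burn_test(material)
        if hours > best_hours:                  # comparison of outcomes
            best, best_hours = material, hours
    return best, best_hours

print(trial_and_error(["platinum", "paper carbon", "bamboo carbon"]))
# ('bamboo carbon', 80.0)
```

Nine hundred trials is just this loop with a longer candidate list; the comparison step never changes.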

.

Now we have the real guts and core of how inventions are created. But do we? And this is where the recognition systems cut through to the heart and core of creativity: the higher level abstractions which solve the problems faster than 900 trials. That route was closed to Edison because he was NOT educated in physics and such, but it would have at once occurred to Tesla, because he was in fact educated, that is, he had more information, more complete facts about how materials work and their characteristics. He could have cut a year off Edison's efforts, and more efficiently, by LE, solved the problem by testing only a few proper materials.

.

The solution to this problem is clear: refractory materials do NOT break down with high heat, either. Carbon was such a one. Tungsten and molybdenum and some Pt group metals are also this way, but those latter are WAY too expensive & rare to use, and violate the least cost, least energy rule. He needed a commonly found material, and that was carbon; carbon is still used today in carbon arc lamp systems to create high-intensity, long-lasting searchlights and so forth.

.

We use tungsten, which is commonly available, to create incandescent lamps today. More durable, with higher refractory traits, less delicate than carbon, and far, far more long-lasting due to its innate characteristic that it will not easily melt at high temperatures within a vacuum. & that’s the cutting of the Gordian knot of complexity by least energy trials of such substances, which we use today. It saves time, expense, and development cost. Thus, once we understand how creativity is created by the brain/mind, we can more easily solve our problems, can we not? And here is the wellspring of creativity described, characterized, and shown how it comes about. If we know where to go, we get there faster. It’s least energy. Thus this model.

.

Then we have the P and NP problems, or how to solve these problems. The P problems are those which are solved; we simply, as with the equation for miles per gallon, fit the figures into the equation and figure it out. Or the broader one: how many gallons will it take to get to city X, and will I have to fill up the tank to get there?
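
.

The P-type example above can be worked directly: fit the figures into the equation. The numbers here are illustrative assumptions.

```python
# The solved, P-type problem: plug the figures into the equation.
# The distance, mpg, and tank figures are illustrative.

def gallons_needed(distance_miles: float, mpg: float) -> float:
    return distance_miles / mpg

def must_fill_up(distance_miles: float, mpg: float,
                 gallons_in_tank: float) -> bool:
    return gallons_needed(distance_miles, mpg) > gallons_in_tank

print(gallons_needed(300, 25))     # 12.0
print(must_fill_up(300, 25, 10))   # True: 12 gallons needed, 10 on hand
```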

.

But the NP problems are NOT solved, and most believe, quite rightly as can be shown below, that P is NOT = NP. And this is very easy to show. Simply stated, in order to convert an NP to a P, we must add information. Specifically, the above point, that incompleteness is a thermodynamic problem, is the case. The information content of a solution/method to a problem is higher than in NP, not-solved situations. Thus incompleteness is equivalent, and related, to entropy. And as solutions are least energy, they are by necessity more efficient than NP. Thus the TD argument shows, very clearly, that P is more information rich, technically more complete, than the NP from which it arose and was solved. Information was added (usually, as above, by T&E methods), and NP became P. This is the proof which has so far eluded professionals in the area, because they did NOT have the understanding, that is, the least energy, comparison process method, to understand more completely what goes on in most problem solving.

.

Thus, we have the proof here. But how do we find answers to those NP problems? We must use trial and error outcomes until we do so. But, as shown above, there are often ways to solve brute force problems by cutting through the complexities of the Gordian knot, by higher concepts and abstractions, which more generally provide the answers. This is why P is NOT equal to NP. Info theory and thermodynamics show us why, because the higher concept of incompleteness as a thermodynamic value has been shown to be the case, by least energy considerations. Solutions do a LOT more with less, and permit solutions of problems by T&E methods. This is the general verbal solution to the problem, and will let the mathematicians find the corresponding experimental math methods to do so.
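
.

The asymmetry under discussion has a standard textbook illustration, subset sum: checking a proposed answer is quick, while finding one by brute-force trial and error can take exponentially many trials. This is the conventional example of the P-vs-NP gap, offered as a sketch, not as the author's proof.

```python
# Subset sum: verifying a proposed solution is fast; finding one by
# brute-force T&E may require trying every subset.

from itertools import combinations

def verify(numbers: list, subset: list, target: int) -> bool:
    """Fast check: does the offered subset really hit the target?"""
    return sum(subset) == target and all(x in numbers for x in subset)

def search(numbers: list, target: int):
    """Brute-force T&E: try every subset until one compares equal."""
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = search(nums, 9)
print(answer, verify(nums, answer, 9))   # [4, 5] True
```

`search` is the trial-and-error side; `verify` is the added-information side, where the answer is already in hand and only a single comparison remains.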

.

But it will also show HOW it’s done, likely by the below truths of how we mathematize verbal description, or how Einstein and Minkowski mathematized relativity using 4D space-time mathematics. I.e., the experimental maths. Those are what Fermi/Ulam used to begin to describe complex systems of all kinds: finding the landmarks, the stabilities of events, which are least energy forms, in those complex system events, which showed the solutions to it. Tho Fermi and Ulam had NOT realized those were least energy forms, which created the stabilities which repeated themselves, & then pass by natural reinforcements into our Long Term Memories. Thus neatly combining Behaviorism into the cognitive neurosciences, by adding more information to behaviorism, you see?

.


I have already mentioned deliberately the linear, verbal method of the unlimited high, higher, highest approaches, by which we converted lengths into measuring scales. The same with temps, and speeds. And by extension the Mohs scale of hardness, using the relative hardnesses of the minerals to create the linear 1-10 scale. And that was mathematized using the same creative method: comparing the relatively fixed standard of pure quartz, at 7, to a scale which created the pascal measure of hardness, hardness more precisely defined by math, which is its least energy value, and why we use math, BTW. This created the current mathematical, linear scale of hardness, did it not? The creativity can be seen by studying the history of how the Mohs scale was created, by comparing relative mineral hardness, from talc and gypsum, through calcite, up through quartz (7), to corundum (sapphire) at 9, and diamond at 10: the same method.
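
.

The Mohs ordering can be rebuilt from pure pairwise comparison: mineral A outranks B if A scratches B. In the sketch below the scratch test is simulated from the known Mohs values, so the code only illustrates the comparison logic, not a real field measurement.

```python
# Rebuilding an ordinal hardness ranking from pairwise scratch
# comparisons. The scratch test is simulated from known Mohs values.

import functools

KNOWN_MOHS = {"talc": 1, "gypsum": 2, "calcite": 3, "quartz": 7,
              "corundum": 9, "diamond": 10}

def scratches(a: str, b: str) -> bool:
    """Simulated field test: does mineral a scratch mineral b?"""
    return KNOWN_MOHS[a] > KNOWN_MOHS[b]

def rank_by_scratch(minerals: list) -> list:
    """Sort minerals using nothing but the pairwise comparison."""
    cmp = lambda a, b: 1 if scratches(a, b) else (-1 if scratches(b, a) else 0)
    return sorted(minerals, key=functools.cmp_to_key(cmp))

print(rank_by_scratch(["quartz", "talc", "diamond", "calcite"]))
# ['talc', 'calcite', 'quartz', 'diamond']
```

The scale emerges from comparisons alone; assigning the numbers 1-10 afterwards is the mathematization step the paragraph describes.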

.

So, with a final example, how DO we mathematize growth? By understanding its basic natures. If the growth is arithmetic, it’s simple, & we use simple addition to do so, and multiplication, yet another, the 3rd hierarchy of arithmetic, thus counting by least energy methods.

.

But if it’s exponential, then we must find guidance from a great mathematician, Whitehead, and from his words find, by T&E, a math system which corresponds very exactly, and can be widely adjusted for the kinds of growth found, too.

.

Here is how it’s done. And if I can do this, then anyone who’s educated can!!!

The elements of understanding, a good education, and speed of processing are also essential to solving problems. Recognitions are the key, and that’s how Ramanujan worked, & the source of his remarkable creativity. Feynman’s magic used the same in creating his diagrams, which within a few minutes could duplicate the results of weeks of computations in his time. So he was lazy, AKA efficient, and found the solution by using horizontal, hierarchical diagrams to symbolize, visually, the processes occurring. No accident there, as hierarchies are very efficient and near universal problem solvers, too, by showing deep, existing, real relationships among events. & visual thinking takes in huge amounts of events (a picture being worth 1000 words), thus handling masses of data efficiently.

.

But how from Whitehead? Again, the subtleties of Einstein: “Raffiniert ist der Herrgott, aber boshaft ist er nicht.” Subtle is the Lord, but malicious He is not; and the universe of events also shows us the ways to do it. And here is yet another way to describe, mathematically, growth of all sorts, based upon the verbal description, the process thinking ideas of Whitehead.

.

He stated, “Almost anything which jogs us out of our current abstractions is a good thing.” In other words, to get us to grow, learn, and develop. This was also stated, in his own way, by Thomas Jefferson: “I hold that a little rebellion now and then is a good thing.” All the myriad ways, we see.

.

But Whitehead was subtle, and he saw the truest form of most all growth. “A society which cannot break out of its current abstractions, after a limited period of growth, is doomed to stagnation.”

.

We know that growth of this kind is mostly exponential. But it starts out slowly, then gets going, goes exponential; but forever? Not likely. Most all methods, tools, & techniques have their capabilities, which in a least energy way create compound interest growth, that is, exponential growth. But they have their limits, which also reduce that growth at the top of the curve. And what is the shape of the curve which Whitehead’s statement describes verbally? The mathematical S-curve, very likely. And how was that found? By translating the words into a series of descriptive, measuring, complex mathematical curves. By using T&E, these can be used to model the growth, simply by measuring it, then tweaking & adjusting that curve to model the verbal description more completely. Just as the linear hot, hotter, hottest gets modeled by a temperature line. It’s found by trial and error, and by the same trial and error tweaking we can create a very good model of each kind of observed, real, existing growth. We find the S-curve at the bottom of every business cycle, and at the top when the growth stops, too. Or why economies grow so well for a while, and then peak out. Or corporations in the same ways.
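
.

The S-curve the paragraph describes is the standard logistic function: slow start, near-exponential middle, a plateau at the limit. A sketch, with illustrative parameter values:

```python
# The S-curve as the standard logistic function. Parameters (limit,
# rate, midpoint) are illustrative.

import math

def s_curve(t: float, limit: float = 100.0,
            rate: float = 1.0, midpoint: float = 5.0) -> float:
    """Logistic growth: nearly exponential well below the limit,
    then flattening as the limit is approached."""
    return limit / (1.0 + math.exp(-rate * (t - midpoint)))

for t in [0, 2, 5, 8, 10]:
    print(t, round(s_curve(t), 1))
# slow start near 0, steep middle around t=5, plateau near 100
```

Tweaking `limit`, `rate`, and `midpoint` by T&E against measured data is exactly the curve-fitting the paragraph describes verbally.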

.

Let us advance to physics and show how this model can create a more universal, unifying model of thermodynamics and gravity. We see the avalanches, which are created by a least energy growth phase where innumerable tons of snow, rocks, etc., slide down the mountain. Just a bit more weight from some snowflakes can precipitate this effect. & this also enlightens us as to how a butterfly flapping its wings can, in a complex system, create a tropical storm over time. Least energy growth combined with a system ready with energy to grow. Even weather and winds.

.

The avalanche at first grows and grows in size, exponentially, until part of the mountainside slides down the gravitational, least energy gradient. And then? Reaching the bottom, it slows down & eventually stops, mostly. Although for weeks, even months after, a bit more settles, slides downhill, and so forth. The top of the S-curve is seen. Inverted, no doubt, but still real and existing. Most all landslides can be modeled by this method. And this is how, by comparison, we can model growth, and landslides, and avalanches, by comparison processing. Creating creativity.

.

Gravity is a least energy function. Things tend to fall from higher orbits, positions, or levels to lower ones. Least energy is the case. This can begin to combine TD and gravitational systems.

.

That’s essentially how we create solutions to problems. Using comparison processing, we find relationships among events, and then by further T&E comparing, find a math, or create a math, which can model the events. That’s how Newton did it for gravity. How Archimedes got his Eureka moment as he created the mass/volume concept of density, which gave him such a dopamine boost, and likely saved his life, too.

.

This model can be extended, with time, T&E, & work, to most all forms of the unlimited professional outputs, also without limit. For each method is not final, but a LE way station to the next, more efficient methods. Unlimited growth and improvements from this understanding of how events work.

.

And if we want to understand the complex systems of flow, then we find the stabilities in it, and report and organize those, to build up an increasingly efficient model of how to describe turbulent flow.

.

And that is how a Viagra 50 mg. tab can be made to work for 3 days in those who respond to sildenafil, & with a 20-minute onset of activity, too.
