NP not = to P: Further Considerations, Proofs, and Evidence.

.

Comparison function, language creation, creativity, mind/brain interface; AI; epistemology


By Herb Wiggins, M.D.; Clinical Neurosciences; Discoverer/Creator of the Comparison Process/CP Theory/Model; 14 Mar. 2014.

Copyright © July 23, 2019

.

For P ~= NP, there are three big, logical, empirical proofs that this is the case.

.

First of all, consider NP examples such as Sudoku, crosswords (X-words), and picture-puzzle solutions. In order to make them solvable we MUST add information. The hardest Sudoku puzzles have the fewest given numbers, usually 26 or fewer of the 81 total cells. Adding numbers, that is, information, makes them more easily solvable. These are cases where the problems are known to have specific solutions. Not ALL NP problems are like that, however.
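The point that more givens make an easier sort can be sketched in a few lines: counting the digits still legal for one empty cell. The grid fragment below is invented purely for illustration, not a real published puzzle.

```python
def cell_candidates(grid, r, c):
    """Digits 1-9 still legal for empty cell (r, c); grid uses 0 for empty."""
    seen = set(grid[r])                                  # row
    seen |= {grid[i][c] for i in range(9)}               # column
    br, bc = 3 * (r // 3), 3 * (c // 3)                  # 3x3 box corner
    seen |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return set(range(1, 10)) - seen

empty = [[0] * 9 for _ in range(9)]
print(len(cell_candidates(empty, 0, 0)))    # 9 candidates: no information yet

partial = [row[:] for row in empty]
partial[0][1:4] = [2, 3, 4]    # three givens in the top row (invented)
partial[1][0] = 5              # one given in the first column
print(len(cell_candidates(partial, 0, 0)))  # 5 candidates: added info narrows the sort
```

Every given number removes candidates from every cell it shares a row, column, or box with, which is exactly the "adding information makes it more easily solvable" effect.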

.

Now if we enlarge the Sudoku grid to 10 by 10, and upwards of 100 by 100, it cannot be solved at all, even with our best computers, until those get faster; at which time they will likely solve those, too. Enlarging the grid increases the sorting difficulty, which always makes the NP problem harder.

.

The same is true of X-words. The most specific clues make them easiest. A more general clue, which creates a much larger sorting problem, makes them hard. IOW, if the clue states only that the word is the name of a state capital, the sorting problem becomes very hard, especially if the state capital is NOT in the US but could be in any other nation with states. But if it's IDAHO's capital, it's Boise, easily; the sorting problem is easier. If the clue is for a foreign word, then it's even harder. If the word is not stated to be in one language or another, it becomes practically impossible to solve.

.

And THAT also shows us why NP can be hard to solve. It's a matter of the neuroscience of problem solving. The harder the sorting problem, the longer it takes. A computer can sort things out faster, in many cases. But most computers cannot solve problems creatively. They cannot create music a la Tchaikovsky or Mancini, but a good composer can create "in the style of" them. Nor can AI duplicate the findings of Einstein.

.

The same is true of the visual sorting out of solutions to picture puzzles. In those we use a series of strategies. To start, we find the straight edges and put those together. This creates a structure which simplifies our sorting. As we continue to add more pieces, the solving gets faster and faster, thus showing that the pictures with the fewest pieces are the easiest to solve, because we are not checking against all the others.

.

Next we find the patterns of the colours, shapes, persons' faces, etc., to combine around the edges, and then emplace those. But note, if we don't have an image of the solved puzzle, it's very hard. If we do, information is added yet again, and that facilitates the solution. In every case, adding information, by steadily solving the puzzle and reducing the number of remaining pieces, increases the solution speed as well, just as in Sudoku and X-word puzzles. This is a generally known fact. But these are puzzles which we KNOW can be solved.

.

What of the other category of hard NP problems, where the solutions are NOT known, and which must be sorted out and solved by other methods? That's very much a problem, too.

.

However, understanding that professional skill sets are more efficient than amateurs', WE can solve problems more easily simply by doing the work of finding the least-energy skill sets of the professionals, and then improving those also, without limits. Or consider how to solve the NP problem of creating a self-driving car by expert-systems methods.

.

.

Or why Uber grew and did so well: a growth S-curve of more efficient, least-energy methods.

.

.

In more specific cases, how did Einstein solve the problems of physics? First of all, he never left many clues or references which specifically stated it one way or the other. So we must guess, and that's not helpful here.

.

However, Einstein did state some general rules. If we simplify the problem, then the solutions come much, much faster. And indeed, he writes that simplifications ARE the hallmarks of solutions to major problems. He also stated that almost every advance in physics is preceded by an epistemological advance, which is just another way of saying the same thing. IOW, it's part of the puzzle-solving tools in the professional toolbox between our ears.

.

How did Edison create the electric light? That's easier. First, he saw that when a high current was run through copper, it glowed. And then he realized that electricity could create light. So he had to solve a series of problems, the heating and melting of the filament among them, and he did that by heavy-duty sorting: trial and error. It's goal-oriented: match the outcomes of the tested substances to the goals; AGAIN, CP. Did the material work or not? I have written before of Tesla's likely method of using the refractoriness of materials, i.e., adding a big concept capable of reaching the solution in a few weeks, compared to Edison's "brute force" approach. Which, please note, ALSO simplified the solution down from two years to only a few weeks, and very likely cut the Gordian knot of complexity. It vastly simplified the sorting task down to Mo, Rh, tungsten, carbon, and a few others.

.

Scroll down about halfway to the detailed discussion of the creation of the electric light by Edison and HOW, very simply, Tesla could have solved it in a few weeks; or how tungsten was found years later as a successor to Edison's carbon filaments.

.

.

And we see the huge sorting problems ongoing yet again in the practical solution fields. Those are analogous to creativity problems in Information Theory (IT). We must FIRST figure out how the brain solves problems in order to solve NP vs. P. And this has been largely done. We must add information from OUTSIDE the problem conditions in order to solve it in an Einsteinian way, which is essentially what he stated: problem solutions usually lie OUTSIDE the mindset which finds the problem. IOW, we must go outside the box, see the forest for the trees, see the wider problem, etc. And THAT, here, is also the case. We MUST see the wider picture of how problems are solved in IT. And that has been done here to some extent, but not completely.

.

Thus NP is hard because it's a very hard sorting problem, not easily amenable to polynomial solutions. Adding information makes it more solvable because, as in Sudoku and X-words, those are made easier by adding numbers, letters, and words. It cannot be solved easily by logic, words, or math, but by visualizing what's going on, much like how we walk about using least-energy, optimized routes for distances and times.

.

.

This is the general problem of problem solving, in short. In IT terms, a statement with more information has less entropy and is a higher-energy state. It has more information because it circumscribes the possibilities more. Therefore, the NP vs. P question is addressed this way: while an NP problem is unsolved, it's hard to solve; when solved, it has more information. We can see this by comparing most NP problems of the past with their solutions today. Information has been added to solve them. And needing to add information means that NP is NOT equal to P.
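The "circumscribing the possibilities" claim can be made concrete with Shannon's formula, under the simplifying assumption of a uniform guess over the remaining candidate solutions (the candidate counts below are invented for illustration):

```python
import math

def bits_to_solve(n_candidates):
    """Shannon uncertainty, in bits, of a uniform guess over n candidates."""
    return math.log2(n_candidates)

# A vague clue ("a state capital, any country") leaves many candidates;
# each added specific detail removes bits of uncertainty, i.e., adds information.
print(bits_to_solve(256))  # 8.0 bits still to sort out
print(bits_to_solve(2))    # 1.0 bit left
print(bits_to_solve(1))    # 0.0 bits: solved
```

Halving the candidate set removes exactly one bit of uncertainty, which is one way to read the essay's point that each solved piece of a puzzle speeds up the rest.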

.

Some problems are intrinsically insolvable, with this proviso: by using Current methods!!!

.

The solution to those is likely the quantum computer, when that can be sorted out to a good solution.

.

However, and this is the case, almost all time-consuming sortings CAN be sped up, as in the Edison light-bulb case above. IOW, by, as so many have stated, including Einstein, H. D. Thoreau, and others such as Feynman creating his diagrams, "cutting the Gordian knot of complexity" by simplification.

.

Here is yet another method to address the prime-number sorting problem underlying RSA. If we take a fast and vast simplification of the number line, the repeating prime patterns reduce to a quartets method, which repeats every 30 numbers in the decimal system, and we get rid of all but 24 candidates per 90 numbers. Then, by a simple repeating algorithm, we sort out all of those that are not prime, the prime-multiple (PM) numbers, in the quartets ending in -1, -3, -7, -9. That can be done at ANY point in the number line, high or low, by sorting out the KNOWN primes; the remaining number line can be reduced by thousands, leaving mostly primes. By noting that the majority of primes most often fall in the Rem1 quartet, it can be sped up even more. There will be other ways found of doing that, too. In addition, the Rem0 quartet of the trio is more likely to have primes than the Rem2 quartet, and other such findings.
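A minimal sketch, assuming the "quartets" pattern corresponds to what is conventionally called a mod-30 wheel: the 8 residues per block of 30 that are coprime to 2, 3, and 5, all of which end in 1, 3, 7, or 9, giving 24 candidates per 90 numbers as stated above. (The Rem0/Rem1/Rem2 refinements are not modeled here.)

```python
import math

# The 8 residues mod 30 coprime to 2, 3, 5; their last digits are 1, 3, 7, 9.
WHEEL = [r for r in range(1, 30) if math.gcd(r, 30) == 1]
# WHEEL == [1, 7, 11, 13, 17, 19, 23, 29]

def wheel_candidates(lo, hi):
    """Numbers in [lo, hi) that survive the mod-30 wheel."""
    return [30 * q + r
            for q in range(lo // 30, hi // 30 + 1)
            for r in WHEEL
            if lo <= 30 * q + r < hi]

def primes_between(lo, hi):
    """Trial-divide only the wheel survivors; 2, 3, 5 handled separately."""
    out = [p for p in (2, 3, 5) if lo <= p < hi]
    for n in wheel_candidates(max(lo, 7), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            out.append(n)
    return out
```

This works "at ANY point in the number line": `primes_between(90, 100)` tests only two wheel survivors (91 and 97) instead of ten numbers, and larger wheels prune even more.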

.

And then, by using that method to rid the number line of PM numbers in the hundreds of digits, we can use another, compound method which is efficient at finding the primes very quickly.

.

.

This is how it's done. Therefore, given the likely unlimited number of ways of simplifying down the sorting problems, the polynomial limit isn't likely the case, either. Thus NP is NOT equal to P, because any new, effective creative system is also improving without limits.

.

Some NP problems are more solvable, such as how far it is to Houston from St. Louis; but that depends on HOW we go there, and all of the complexity involved. There is NO exact solution to that distance, either, unless we specify more completely the route and means we are taking. And even then the math applies only as an approximation. There is NO absolute, exact, or perfect solution to that practical problem. But we can nearly always add information to make it solvable.

.

We need only look at the developments of autos and our flying methods over the last 100-200 years to see that. Improvement is possible without limits, very likely.

.

In making diagnoses of medical conditions, we solve these NP problems all the time. We must FIRST localize a medical diagnosis to the region of the organ affected. That narrows the field, and it adds information. Then we test/CP by history and physical and get as much information to solve the problem as possible. If it's still not clear, we get more information by doing the tests which we know are likely to help us the most; that is efficiency. In every instance we are adding information, using efficient methods to acquire it, and then using it to make the diagnoses.
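The narrowing-by-testing process described here can be sketched as iterative filtering over a candidate set. The condition/finding table below is an invented toy, purely illustrative and not medical advice.

```python
# Invented toy table: candidate diagnosis -> findings it is consistent with.
CANDIDATES = {
    "dx_A": {"tremor", "rigidity", "responds_to_ldopa"},
    "dx_B": {"tremor", "rigidity", "gaze_palsy"},
    "dx_C": {"weakness", "numbness"},
}

def narrow(candidates, finding):
    """Keep only the diagnoses consistent with a newly observed finding."""
    return {dx: f for dx, f in candidates.items() if finding in f}

step1 = narrow(CANDIDATES, "tremor")   # history/physical: two candidates left
step2 = narrow(step1, "gaze_palsy")    # a later, more specific finding: one left
print(sorted(step1), "->", sorted(step2))
```

Each observed finding is added information, and the ordering matters for efficiency: the test expected to eliminate the most candidates is the one worth doing first.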

.

We create the treatment protocols, medications, and methods in the same ways, in each and every field of medicine.

.

This is part and parcel of the problem solving of which NP Not = to P is a part. So if we want solutions to a problem, in a practical sense we MUST show how, creatively, we take NP to P. And in every case, that means adding information until the solution, if possible, is reached. Not all problems can be solved. Not all conditions CAN be diagnosed. Even in the best diagnostic centers, for instance, we cannot figure out the cause, or more likely causes, of about 50% of peripheral neuropathies. It's NOT a true-or-false, yes-or-no question; it's "this or not this." It's NOT logical, mathematical, or verbal in the first forms. No empirical false dichotomies allowed!! Solutions can be partial. In addition, better solutions over time are very likely possible in most every area, too. Sometimes it takes time to diagnose a problem. We do that by successive exams and testings. Occasionally it takes a long time for a test to come back. And this is the case.

.

Sometimes the diagnosis is NOT fully clear until the Parkinson's-like syndrome matures and the double vision starts, as in Progressive Supranuclear Palsy, which in early stages often acts like Parkinson's but rarely responds to L-dopa therapies. But when, and this has happened to many, the eye movements start to vary from normal, by CP testing, then the DX is made. We ADD information to take NP to P. And there it is yet again: the solution to NP not = to P.

.

NP has less information than P. NP is not complete, either. And in order to solve it, we add information. It's NOT a question of whether NP = P or not; it's a matter of the probabilities of NP being equal to P. Thus logic fails in these cases. It's not all-or-none, white or black, A or Not-A. Those are false dichotomies exposed by empiricism. It's the multiplicity of complex systems at work here. Shannon's rules on the information content of NP compared to P show this consistently. The better the description of an event, the more information content it has. Thus NP is not equal to P.

.

That's yet another proof. And the last is that "There is no absolute information." The epistemology of Einstein, altogether true, states that this is the case because of the nature of measuring: no absolute space or time. And when that is converted to describing events, by CP methods, we see that the same injunction against absolute descriptions holds. There is NO final, complete description likely or possible. The point being that, just as there is NO perfect heat engine, there is also NO Information Theory perfect description, either. Our information is highly likely to be almost all incomplete. And thus there is unlikely to be absolute information.

.

That's the last proof of NP ~= P. As long as they are not both tautologies, that is.

.

Simple, elegant, easy to prove in at least 4 ways!!!

.

& info almost always decays in time as well, thus making it more incomplete than ever!!!! We cannot solve some problems of past history because the info is gone. That is a huge problem in genealogy, law, forensics, paleontology AND history. But not quite always.

.

That's why the Tanis site in North Dakota was so incredibly useful to us regarding the KT boundary event. Robert DePalma worked on it, ADDED information, and solved the big "3-meter gap" problem of the geological layers, where before no dinosaurs had been found. They found dino remains!!!! It showed largely what happened ON That Day!!! But only to probabilities. It mixed the terrestrial and marine layers in a lovely way, too, showing the asteroid impact's effects on both land and coastal marine systems.

.

.

Thus NP is not = to P.


The Addenda to the Walkabout article.

By Herb Wiggins, M.D.; Clinical Neurosciences; Discoverer/Creator of the Comparison Process/CP Theory/Model; 14 Mar. 2014.

Copyright © July 2019

.

Some extra thoughts to expand upon the practicalities of how we walk from place to place.

.

.

The first rule is very simple: what we do the most will save us the most LE points. IF we do something all the time, the most time- and distance-efficient routes will save us the most. If we do it rarely, the pressure is off, because of diminishing returns.

So what we do the most, we save the most on, and that's a higher-level, least-energy growth-efficiency skill as well.

.

Next, this applies very clearly to growing beards. We save lots of time not shaving, not getting the materials, not buying them at cost, and so forth. The cheapest way to shave, with the least blood loss/injury, is the best. An electric razor is the answer for those with lighter beards. It's fast, and as most sites are electrified, not a problem. Going camping or on trips is a bit tougher, because many nations don't use the same outlets or currents as the others. Or we just let it grow.

.

But the other major point is whether we shave/trim every day or not. The more we do not shave, the more time we save, and we cannot buy time. And that will build up VERY quickly. Over the last nearly 40 years, I have saved scores of 40-hour weeks by not shaving. How much more time for work, for the necessities, or for relaxing is that? As we get older, our time becomes less and less, as disability and death come nearer, and we make the most of that time by not shaving and by growing a full beard. Too much trimming "isn't" least energy. (Apostrophes are least energy, too.) This is often why men grow beards after retiring.

.

And then there's the joke. Hair has many uses, at least 20. So, why do older men grow beards?

.

Well, most older men have a memory problem sooner or later. So if we want to remember what we last had to eat, we lick the beard or mustache and are reminded. It's a comparison-process mnemonic method. Grin. Ahh, was it spaghetti or pizza? (Lick, taste.) OK, spaghetti; the sauce comes through.

.

First of all, this VIPoint: if we don't minimize time, we are missing the boat in most cases. We cannot buy or get more time. We can save time by doing tasks more efficiently. If we go a least-energy route, or a least-distance route, that MIGHT take us longer to get there. So, primum, we save time, unless we are low on gas, etc. We can buy more gas, we can run the bike or car longer, but we cannot get more time.

.

And the least-distance route usually SAVES time, but not always. This was told to me by a professional driver, a salesperson who had to get to places on time. He said: take the freeways, unless it's rush hour; then take the side routes to the destination. So when I was going up from Valley Springs to get to CA99, I was taking a right at Clements, going north up the side road over the river, and then left on Jahant Road. That got me to 99 by the least distance, rather than driving up the hypotenuse past Liberty Rd. But Jahant was slow and had many stops and such. The point was, Liberty Rd. was a bit to the east of the way up to it, and longer to get to CA99 by about half a mile or so, compared to the Jahant intersection with 99. But it was WAY, way faster, too. So using a bit more energy, fuel, and wear and tear on Liberty Rd. saved TIME, which is not replaceable. And got me there faster, too.
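The Jahant vs. Liberty Road trade-off is, in effect, a shortest-path problem run twice with different edge weights. Here is a toy sketch; the road network, mileages, and minutes below are invented for illustration, not real measurements.

```python
import heapq

def best_route(graph, src, dst, weight):
    """Dijkstra search; edges are (next_node, miles, minutes), weight picks the cost."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, miles, minutes in graph.get(node, []):
            if nxt not in seen:
                w = miles if weight == "miles" else minutes
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return None

# Invented toy network: the shorter side road is slow, the longer road is fast.
roads = {
    "home":    [("jahant", 4.0, 12), ("liberty", 4.5, 7)],
    "jahant":  [("CA99", 1.0, 4)],
    "liberty": [("CA99", 1.0, 2)],
}
print(best_route(roads, "home", "CA99", "miles"))    # least distance: via jahant
print(best_route(roads, "home", "CA99", "minutes"))  # least time: via liberty
```

The same graph, minimized under two different costs, yields two different "best" routes, which is exactly the essay's point that least distance and least time do not always coincide.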

.

The point being, time is the limiting factor in many cases. And if we drive the least distance, it's also the least time in most, but not all, cases. The Uber drivers and other professional drivers know this as well, and use it. Which is why Uber had such a huge growth rate. It was well marketed and got the customer faster and sooner to their goals and destinations, and so those happy customers saved time and aggravation compared to the usual cabs, and Uber grew very fast, because it was efficient by its best-routes tactics. For a while; then the S-curve set in, and the growth slowed. But their efficient route-management systems, which they had created, WERE the savings which created the S-curve growth, which then tapers off after a while, although it CAN be improved without limit, but at a cost going up the exponential.
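The S-curve growth pattern invoked here, fast early growth followed by a taper toward saturation, is conventionally modeled with the logistic function. A minimal sketch, with invented parameters (capacity K, growth rate r, midpoint t0):

```python
import math

def logistic(t, K=100.0, r=1.0, t0=0.0):
    """Logistic S-curve: steep growth near the midpoint, flattening near capacity K."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Growth is slow at first, steepest at the midpoint, then tapers off.
for t in (-4, -2, 0, 2, 4):
    print(t, round(logistic(t), 1))
```

The curve never exceeds K, which captures the "growth slowed" phase; raising K or r models the claim that the system can still be improved, at increasing cost.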

.

The next point is that we preferentially take the least distance most of the time. We take the "short cuts," which are LE, too. But the ways in which we choose ARE complex systems (CxSys). There are many factors we use to choose our routes, and for N >= 3 it becomes CxSys. And then we must use outcomes, by trial-and-error searching, to find which is best for our needs. Those routes cannot be found easily by thinking, math, or computers, but must rely upon outcomes: T&E, empirical trials to find out which is best most often. That sorts through the complexities, significantly. It's a probability, NOT a certainty. Anyone who drives knows that. The best routes can be beset by accidents, road construction, flooding, bad weather, and so forth. Rail crossings and all the rich CxSys events which create blocks and advantages to travel, we see/solve using T&E methods to find our ways, and cut the Gordian knots of complexities to do our work.

.

The various ways we use the multiple hypotenuses to walk are much the same. They are more complex, and if too much so, unworkable. So we take the simplest routes to the flowers, as do the bees. But it's easier, simpler, and least energy to THINK about planning the way we will walk/drive than to actually drive it. We create a plan and then TEST it for value. Experience teaches us the best ways. Our LTMs are energy, time, and distance savers, because the best ways are recalled at once, and thus create growth and benefits. Experience counts!!!

.

We can also sequence the multiple hypotenuses, too, to reach the faster routes. Or avoid the hills in favor of the ridges, as in orienteering, another sport which is keyed to this article about walking as well. One man told me it was often easier (it saves energy) to run along a longer ridge than to do all the ups and downs of a route which might be shorter. And from that simple theorem, we realize at ONCE that mountain passes are, all of them, least-energy routes, because we do not have to climb as much, taking the most direct, shortest, and lowest-altitude route to get through. Again, mountain passes ARE used because they are least energy. Least energy rules the mountain passes, from the Khyber to Berthoud Pass and the Eisenhower Tunnel (least energy) and so forth. The Mont Blanc tunnel is much the same. The pass saves time, energy, and distance, but the tunnel saves a LOT more, especially when snows block the higher passes.

.

And that is more of what's going on, too. The tracks, trails, and such are more of that. It's easier to travel on a flat, firm surface than over grass, bush, and rocks. It's easier to go on a well-established trail than through all that brush off-trail. The cows come home over the hills by least-energy tracks, which they build up gradually and steadily into the best routes to the barn for milking, and they do this naturally. Again, all of the migrating beasts do that!! They learn the landmarks to travel by, be it bird migration, wildebeest and elephant migrations, and more.

.

I once saw three big cranes flying in circles over the downtown towers. They began a few hundred feet up, and then, as I watched, they got higher and higher and higher. The birds were flying the uprising thermal currents!! They knew about those, as do most glider pilots. Eventually they got very high and took off. Now why not earlier? Because the thermals lose power, and then the birds lose time. So it was time to move onwards, having built up energy savings for more travel, too.

.

Also, we see the fledglings flapping their wings all the time, but the mature birds flap a few times, then glide, then flap and then glide, saving energy, is it not? Masters of flying. Least-energy routes and travel means are almost all least energy, too.

.

The routes of the bees to the flowers are all least energy, time, etc. If a wind is blowing away from the flowers, they choose other flowers to fly to, do they not? Least energy rules the honey bees' routes to the nectar, does it not? Even the great whales, like the bees and others, use LE to migrate and navigate. There it is again, unifying our observations among the humans, bees, whales, and many others. Least energy rules!!! When they save time, they save energy for metabolism, do they not? Again, another form of LE, universally applying.

.

LE drives the bees, which Von Frisch never addressed and did not know. Now we do, and their solutions to traveling-salesman problems become ever more explainable; our knowledge becomes more complete and descriptive. Least energy guides us without limits, we see. And if we save distance, and time, and energy with the same routes, then the savings and growth are maximized, is it not? And it would easily guide Uber as well. LE underlies almost all major efficiency-expert work. It's universal, and deeply applying without limits.

.

Those are the keys to understanding almost ALL routes to creativity, performance, and much else. The cameo of the walkabout routes thus applies to most everything; not quite, but nearly so. Thus the walkabout guides us in most all we do, too. Least energy, least time, least distance, least food, least heartbeats, least metabolism, in all the rich Kategoria of Aristoteles, building up a good description of CxSys workings, does it not?

Towards a Model of Everything (MOE)

.

By Herb Wiggins, M.D.; Clinical Neurosciences; Discoverer/Creator of the Comparison Process/CP Theory/Model; 14 Mar. 2014.

Copyright © 2019

.

“You can see a lot by just watching.” –Yogi Berra, Greatest NY Yankees Player/Manager

.

“Look and See. Test and check everything.” –Ms. Schutt

.

“Calling the Universe non-linear is like calling biology the study of all non-elephants.” (The negative does NOT tell you what it IS!!) –Stan Ulam, paraphrased for simplicity of his key insight.

.

“Almost anything which jogs us out of our current abstractions is a good thing.” –Alfred Whitehead, noted mathematician of “Principia Mathematica” and great philosopher of the modern sciences.

.

“I hold that a little rebellion now and then is a good thing.” –Pres. Thomas Jefferson

.

“Any society (or group), which cannot break out of its current abstractions, after a limited period of growth (the S-curve!), is doomed to stagnation.” –Alfred Whitehead

.

“Understanding comes from finding the relationships among events.” –Albert Einstein, “Physics and Reality,” 1936

.

“The Big pot doesn’t go into the Little Pot.” –Abe Lincoln

.

“Quantum Mechanics is not necessarily wrong, but it is Incomplete.” –Albert Einstein

.

Addenda

.

Usually these are at the end, but they are so interesting and so deep that I will affix them here, and the astute readers can see how they are created and generated by following the advanced sorting and creativity-creating methods (the virtually unlimited “Wellsprings of the Creativities”) in most all fields shown below. They can be used every day, in endless practical forms, for LE guidance without limits.

.

Information/Data & its organization into the “Hierarchies of knowledge” are created using Comparison Processes. This is simply how Knowledge is created by our minds/brains.

.

First of all, laziness and taking “the easy way out” are least energy, LE. But the best ways accomplish the goals of trial & error (T&E) by choosing the more efficient methods and better outcomes. Laziness IS LE, but a decadent form of it.

.

The Analogy Kategoria of Hofstadter IS the key to the many methods of understanding, by explaining how we can “think outside the box.” Which is why Hofstadter fixed upon it: because it works by comparison processing (CP), which he intuitively knew was important. But he missed the CP which drives each of these forms.

.

Analogy category: analogy, metaphor, story, fable, allusion, parable, simile (the elephant's parts are like a snake; a smooth, curved stick; a column; a surface like camel's skin; and a rope), koan, and the synonymic phrases as extensions of the basic, related, connected ideas, help us see, visualize, understand, etc. what's going on. These are ALL applied for understanding/explanation by the “Comparison Process,” which is why and how (S/F) the analogy works.

.

QM is not complete because it cannot generate biology (Complex systems, CxSys).

— Dr. R. Feynman.

.

“The Grand Design” by Dr. Hawking missed the CxSys of most all events & biologies, as well.

.

Epigrams are shortcuts, LE summations of wisdom which can guide us. Taken from “Poor Richard's Almanack” by Dr. B. Franklin. Note the Poor/Rich contrast in the name, which showed, if the reader could get it, that “depths within depths” were to follow: the subtleties of Franklin's insights.

.

Most All Information/Data are created using “Comparison Processes”.

.

Long-term memory (with CP ID of events) is a least-energy, metastable form in most all instances. (See the grid-cell model of the Nobel Prize winners, the Mosers of Norway, and Dr. John O'Keefe, UCL, UK.)

.

Visualizations are NOT verbal, mathematical, or logical (they can only be “partly described by” those), but are the greatest, highest-performance functions of the human mind/brain. Einstein's methods, his “thought experiments,” showed that clearly. This can lead, with work, to general AI.

.

Most all shopping is massive CP of cost, value, quantity, quality, freshness, weight/volume, etc., to reach our LE goals and needs. Each is also, uniformly, a thermodynamic value of the 2nd Law.

.

Outcomes processing using sorting among possibilities (T&E) is the origin of most information, like structure/function (S/F) methods universally.

.

For me, personally, this has been a journey and search for the last 50 years: wanting to understand most of what's going on inside and outside of us, using a simple yet very powerful system, which is applicable (comparable) to most everything and combines our sciences and religions, humanities, sports and musics and the rest of the arts, and much else, into a rather unified whole. Realizing that logic has its limits (Gödel's incompleteness theorem), and that the empirically seen false dichotomies of logic (limits to dualities), trichotomies, even polychotomies are more likely the case, I then found a kind of “uber brain logic,” which creates most all the logics.

.

It's mainly the work of the last 55 years, and I am pleased to have made some substantial progress in that direction.

.

This MOE requires a Pentad, beginning with the comparison process (CP). This creates the necessary complexity to create a MOE, and it's composed of nearly universal processors which can thus unify most all knowledge. The CP creates the detections of LE and shows the least-energy outcomes of various methods. The CP also CREATES information and data, with these data creating the new standards upon which more information can be created. The 3rd part is the structure/function (S/F) relationship, which details and creates information about how the brain works; the S/F of atoms and molecules and much else: yet another nearly universal processor, using the endless comparison of the structure with the function and the function with the structure, without limits. The same is used with the genes, which are the structure, and their often multiple functions which arise from those, thus showing many of the nearly universal natures of this. And it can be extended to the enzymes and most other structures in the cells, and organs, too.

.

Much of this is contained within the overall, fourth fundamental: the complex-system (CxSys) high-level framework.

.

And the last of the Pentad is the nearly unlimited set of methods, approaches, ways of doing things, models and techniques, technologies, devices, tools, and instruments without limit, as a way of creating information, understanding it, organizing it, and thus applying it without limits, too.

.

Thus this kind of complexity is needed to create and generate a system which can be applied to nearly every event, because of the nearly universal methods which process most all of that which we can perceive.

.

Each of these very practical, down-to-earth recommendations, methods, and basic processors can be used most all the time, every day, by nearly anyone who understands the basic Second Law of Thermodynamics in its least free energy, Least Energy (LE) forms. It's very simple to understand and apply. It's a way of doing most everything (thinking, working, performing) which creates, finds, and mostly sorts out the most efficient way(s) of doing most everything all day long. It's almost universally applicable in nearly every case.

.

.

The fundamental start of understanding is, to quote Anil Seth, Hinton, and Karl Friston, that there is a repeating function in the brain which creates predictions and outputs.

.

.

As written 5 years ago, that basic process is the comparison process (CP), which very likely does this throughout our cortices, in the individual cortical columns (CCs) of Mountcastle. First, it creates LTM recognition; at first, the reinforcements of repeating events in existence naturally reinforce and create those stable LTMs. Next, comparing events in existence to LTM clearly shows the CP is the basis of most all recognition. We re-cognize, that is, “know it again,” because it's in the LTM. This simple mechanism is what creates most all recognition and the larger pattern-recognition series which drive predictability.

.

The neuroanatomy of the cortical columns creating the CP is very simple. Subtle are the ways, but the confirming clue is in the nature of CO poisoning of the brain. Very often with this poisoning, the cortex alone is affected, due to the hypoxemia. Cortex is a structure with high O2 requirements and consumption. And when that structure is selectively damaged, all of the higher processing functions are lost. No language, no visual system (cortical blindness), no social interactions. The lights are on, but no one is home.

.

The same is seen with the Autism Spectrum. When the MRI and CEP’s show selective damage to the cortex, we find the same losses of language function, social interaction impairments, and so forth. Where cortex is functional, there are fewer known impairments of the higher functions. This again, confirms the higher cortical functions with yet another S/F relationship application.

.

And the clinical implication for Autism is that EVERY suspected Autism diagnosis must be studied carefully using a high-resolution MRI to show what Cortex is normal and what's not. And confirmed by Cortical Evoked Potentials, to further strengthen the more objective S/F mapping of the normal areas and the not-normal ones. That will give a very good idea of how much damage is done, and how much improvement upon training can be worthwhile as well. Thus it's prognostic as well.

.

But there has been a major omission in our understanding: the CP as well as the Least Energy rules of the 2nd Law of Thermodynamics. & those omissions block a more complete understanding of how understanding, as Einstein showed in "Physics and Reality", results from seeing the relationships (connections) among events. That's largely what creates understanding.

.

And seeing those relationships is what CP does. And here's how, specifically. When we use a relatively fixed, set standard, which is efficient, to measure events, we are creating data by comparison of events to that measuring system/method. When we take a piece of cut wood and lay it up against a tape measure, we read off its length, its width, & thickness, which CREATES data. And in fact, very likely any kind of measuring standard can create those lengths, distances, heights, altitudes, etc., & measures of how far away events are, by using that SAME, repeating comparison process against the Einsteinian standards, which are relatively fixed, stable & efficient (LE, Thermodynamics). Most all measurements, of speed, weight/mass, volume, temps, etc., do that, i.e., create numerical information, which can be called Data.
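
.

A minimal sketch of that measuring-as-comparison idea (illustrative numbers; the inch-to-metre constant is the standard 0.0254 m): the datum is simply the ratio created by comparing the event to the chosen, arbitrary-but-fixed standard.

```python
# Illustration: a measurement is a comparison of an event against a
# fixed, arbitrary standard; the ratio it yields IS the datum.
def measure(length_m, unit_m):
    """Compare a length (in metres) to a chosen unit standard."""
    return length_m / unit_m

METRE = 1.0
INCH = 0.0254  # the unit is an arbitrary but fixed, stable standard

board = 2.44  # a piece of cut wood, in metres
print(measure(board, METRE))        # 2.44
print(round(measure(board, INCH)))  # 96
```

Changing the standard changes the number but not the event, which is the Einsteinian arbitrariness of the measuring units noted above.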

.

And so does description create information. By comparison to fixed verbal standards (words, ideas) such as ROY G BIV, we can create detailed, almost unlimited descriptions of the colours of events in existence. As our vast hierarchies of the paint store colour chips Also show!!! This is clearly the case. And HOW do Physicians create information? By using mostly verbal description against the fixed standards of what we know to be NORMAL, versus not normal, simply. There are always the gray areas, but that's how it's done.

.

How do the radiologists READ images? By comparing the PA and Left Lateral Chest X-ray to the fixed, stable, efficient standards of what they know to be normal. The details of checking the bones, the spine, the heart image, the mediastinum, the ribs, and the inverted cups of the diaphragms, etc. Thus we compare massively, using those stable, efficient standards specifically, to create the information that the CXR is normal, a variant of normal, or clearly abnormal with respect to the lung & bony, etc., lesions seen. Thus the reading and interpretation of most all images, in their billions, is driven by efficient CP. The applications of this system to directing the development of AI are obvious.

.

To read the functional MRI, fMRI, we use the SAME Comparison process standards. We do a BASE scan of the brain in the area we are interested in, and then we do the activated scan, creating a structure/function (S/F) system. By "comparing" the not-active to the activated areas, information is generated and created. By comparing S/F we create information about what is going on Where in the brain. And so has the entire knowledge of the brain's function been elucidated, by this simple S/F relationship, which is CP, LE driven!!! Formerly the laborious neuropathology of brain was used. But the MRI, CT and fMRI, plus comparison with MEG, are used now, because they're incredibly faster than pathology. And that is, once again, the LE efficiency of the 2nd Law at work, and deja vu all over again. Play it Again, Sam, IOW!!! La Chanson Sans Fin.
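
.

The base-vs-activated comparison can be sketched in a few lines (toy signal values and an invented threshold; real fMRI analysis is of course statistical): subtract the base scan from the activated scan and keep the voxels where the difference is large.

```python
# Toy sketch (invented values): compare the base scan to the activated
# scan, voxel by voxel; big differences mark the activated areas.
def activated_regions(base, active, threshold=0.5):
    return [i for i, (b, a) in enumerate(zip(base, active))
            if a - b > threshold]

base_scan   = [1.0, 1.1, 0.90, 1.0]   # resting signal per "voxel"
active_scan = [1.0, 1.9, 0.95, 1.8]   # signal during the task

print(activated_regions(base_scan, active_scan))  # [1, 3]
```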

.

To wit, The Praxis

.

.

That is the case going on here. & then modifying each of our skills by comparing their outputs to LE rules, we make them efficient and better, and this can be done without limits.

.

Therefore, we see how this model of CP, LE, S/F and complex system rules creates the methods, skill sets, approaches, techniques & the technologies, devices, tools, etc., those create. The Pentad of understanding Cx Sys. We use CP & LE rules to find (often by Trial & Error, T&E) the "best methods". And that again marks, very simply and in detail, the differences between the Professionals & the Amateurs, where the former find the best answers quickly, create the outputs of products and services more efficiently, with less time, more detail, better quality, and above all, ALL mediated by CP and LE rules.

.

That’s how it’s done in a nutshell. That such simple rule/processes CP and LE can describe and create most all information, descriptive, AND data, measuring and math is a vast simplification and thus extension of our understanding of How the brain works. I.E., it establishes the rules by which the brain creates information & then by seeing the same patterns again and again, can Predict what’s going to happen based upon solid, efficiently set, confirmed, stable standards. That is the basis of most all predictions, prophecy, seeing ahead, visualization process and so forth.

.

And THAT is what’s missing in AI, because if we know where we are going, that’s 1/2 of getting there. If you knew Where you’re goin’, that’s the most of getting there!! & the above model fits the facts very widely, well, and all but universally. IOW, if we want a model of everything, we need nearly universal processors, that is CP, LE, S/F and complex system understanding to do so. To bring the fields of TD, Newton/Einstein, mass/gravity & space/time, and QM together. & that has now been well and truly begun.

.

But it’s NOT absolute, but efficient, and thus more stable and useful. It’s not absolute knowledge, but very fine and working. And it can be, by study and work using CP & LE and all of the techniques and tools created from same, to get better and better. & to create the working, “self correcting” processes by comparing our ideas to events in existence, which the sciences are known for.

.

Using a simple methodology we have created a vast, new, very widely applicable (CP) understanding of what's likely going on not only Inside of our bodies, but about events Outside as well. & this goes a very long way toward creating AI, once these facts are known. & toward creating the category of prediction, prophecy, figuring out what illnesses will have what outcomes, thus survival, and the methods to optimize those survivals.

.

This is the beginning of the "Grand Design" of Hawking. The Unified model of Einstein, the Model of Everything (MOE), but not quite, which has now been found and is being put to use.

.

CP, which finds LE solutions, creates LTM, recognition & pattern recognition, and the visual thinking which we humans do and which AI cannot do without huge, slow, lumbering number crunching.

.

What a child does in seconds, that is speak language, recognize patterns, and repeating events in existence, AI does slowly & with great difficulty because the makers of AI have NOT understood what’s going on in mind/brain, and so they by trial and error and brute force attempt to simulate what they do not know and how it’s working!!

.

In short, we do visualizing of events, that is, process thinking, which is NOT verbal, Nor math, & Not logical, either. That's what Einstein stated, and is why understanding him is necessary to understanding the "Relativity of Our Cortex." As he stated:

.

.

Thus, these new methods cut through the Gordian Knot of complexities, as has so often been written about before, and give vast new methodologies and unlimited skill sets to better understand virtually everything. But not quite, as that would be thermodynamic, perfect efficiency, and is NOT possible. That's what's going on, largely.

.

It’s simple but vastly powerful as it applies to virtual everything. More below.

——

.

Next let us develop this more. Given the above Pentad of near universal processors allowing the creation of the MOE, we must state how and why that "Likely" works. There is NO complete description/measurement possible, very likely, & due to TD, no perfect heat engine. This is also consistent with Einstein's measuring epistemology of NO absolute space & time being likely. The incompleteness is a TD measure of that, as well. This is a fundamental truth in the IT of Shannon. And that's where it comes from with this deeper CP, LE analysis. The incompleteness of QM, the lack of absolute space and time, and the inability to perfectly measure OR describe events, all combine. They are deeply connected. They all confirm and reinforce these findings, do they not? That's the elegance of the CP and LE. Do you see it? No absolutes relate to TD inefficiencies AND Incompletenesses. We have combined, and more unified, our deeper understanding, have we not?

.

Using this model we can create creativity without limits. These are the Wellsprings of the Creativities, very likely, be it in microbial, pre-life forms, and up thru the anemones, the worms, the fish, the amphibians, the reptiles and birds, and the mammals. Each of them creates information using CP, LE systems of processors analogous to our own.

.

Measurement creates information. Description ALSO creates information, as shown by the ROY G BIV colour terms. And by that creation, creates creativities, without limits. That is, simply, the Wellspring of our creativities, very likely. Because those Descriptions/measurements create new standards, and those new standards are applied in a LE way, and become the rules, laws, standards, morals, etc. As has so often been described by these. & those systems, being LE and efficient, Grow!!

.

Now, these rules and standards are NOT complete, and so cannot be absolute, very likely. It’s the probabilities which create the consistencies. CP creates the consistencies and when we add the probabilities, which our brains use and create as well (but NOT with math!), then we have the Likely/most likely rules. There is NO exact correspondence, comparison between our concepts and ideas, and the events to which they refer. The word is NOT the event it refers to (Korzybski, founder of general Semantics, “Science and Sanity”), and that’s highly likely to be the case.

.

That disparity is the fallacy of the idealisms, logics and maths. Where the math claims certainty, it's not the case. As Einstein wrote. There is NO exact comparison between our ideas, logics, maths, beliefs and events in existence. And it's BOTH proven and shown to be confirmed by Einstein's epistemology AND thermodynamics, which come to the same likely conclusion.

.

This is why the QM equations MUST be probabilities & not certainties. Why Heisenberg's UP is the likely case. The probabilities create the way out of the logical fallacies of idealisms. And this is the model, in a nutshell, of the Model of Everything. No neologisms, no radical new concepts, and it gets built upon the knowledge we already have, re-assembled in a very new, very growth-capable, efficient means. Thus the MOE.

.

Here is how it works in a nutshell. A Mother's Wisdom and the Walkabout article show exactly how those trial and error (T&E) corrections and methods arise & work, almost universally applicable because the Pentad is that, and without limits, very likely. & they work for each of us all day long, and without limits, too. Immensely practical and applicable to our daily tasks most all of the time. Universal processors. But not quite.

.

.

This details more of Einstein’s very supportive thinking on the above:

.

.

This is a very much more practical, working cameo/example of HOW to apply the above methods. And it also shows HOW the travelling salesman problem can be solved more and more completely within each specific app.

.

.

As a field biologist first, and a physician in the Clinical Neurosciences next, we find that most of Biology AND Medicine is very clearly detailed and well worked out, empirically, as well as tested, to deal with Complex Systems, which are about 99+% of events in existence. And sadly, the Math/Physical Sciences have so far largely ignored those and not dealt with them much at all. Thus the biologists and the vast fields of the medical specialties are far, far more ABLE and experienced to understand almost everything using those unlimited, Universal Processors than most all other fields.

.

The medical history is mostly verbal descriptions. The examinations, general and Neurological, are also the same. The Diagnostic protocols, again hierarchies of listings of possible differential diagnoses, for confusion/dementia, Chest pains, Headaches, etc., are huge categories of Aristoteles, which are hierarchically organized by Comparison Processing (CP), vastly. And the treatment protocols are like those of the tree diagrams, and work from the Ebers Medical Papyrus of 3500 years ago, too!! That is ancient knowledge.

.

And whenever we see those categories, we know we are dealing with Cx Sys. Because as Aristoteles first realized in his "Kategoria", his Method was a much more complete, detailed, and fuller description of such complex entities, nearly universally. He knew it worked, just like we knew at first, for 40 years, that penicillin worked, but not very much about why. Here're the WHY's of it.

.

The very existence of those hierarchies and categories in the life sciences globally, and in medicine, acts as huge backup confirmation, & shows this. And this has NOT been seen before, nor appreciated, nor developed to its fuller implications and outcomes of value. In biology, as opposed to the other "hard sciences" of physics, chemistries, etc., we do NOT rely upon mathematics very much. Our methods instead are vastly, verbally, visually descriptive. But these have the much greater flexibility and vast descriptive powers of Cx Sys thinking, Driven by CP, LE, S/F relationships, and all of the 5th part of it, the endless and limitless, synonymic Plus (categories of Aristoteles again): the methods, skills and skill sets, ways of doing things, and the techniques, technologies, devices, tools, etc., which come from those methods. Again, the Kategoria of Aristoteles in modern forms.

.

& this leads us inevitably, as will be shown below, to empirical investigations into most everything we know about. & to converting the current sciences into a far more applicable form. It means it can deal with the arts, cultures, history, and much else heretofore not understood empirically, either. & without limits, including sports!!! & nearly without limits of outcomes and developments of all fields, most likely.

.

We must understand that wherever there are Cx Sys there are probabilities, & NOT very many certainties. & a very great deal of incompleteness in understandings. Weather, the whole of living systems, the outcomes of treatments, the solar "system", down to the Cx sys and structures of the atoms: from the complex system nuclei (Proton/neutron interactions, N ≥ 3), NOT easily calculable due to those complexities, to the Cx system of electronic levels, which by S/F rules creates what we know of physics & chemistry. Thus we biologists must take the lead in this new path of human development of knowledge, because we were there first, with the best techniques and technologies to understand what's going on with the massive Cx Sys which are the biologies. & 99+% of events in the universe, very likely.

.

And it must be realized that the probabilities of QM are the first steps in understanding the complex systems of atomic, chemical, and physical outcomes.

.

Yet how do we do this? Ulam, the great mathematical physicist, has shown us the way. In most every Cx Sys studied, surprisingly, there are found "repeating stabilities" which arise in those. And in each of those, which we recognize and then reinforce into our Long Term Memories, naturally, we gain solid, stable knowledge, often probabilistic, which shows us HOW events in our universe work. Not linearly, not logically, but by the higher logics of CP, LE, S/F relationships within the overall structures of Cx sys, process thinking (Whitehead), the visualizations of Einstein's methods (thought experiments). And our brains were developed, evolved and grew larger, quantitatively and qualitatively, from the earlier primates, to do just that. The efficiency of those brain S/F methods & outcomes, even 500K years ago, was so life-giving and powerful that it led to us. & that biological brain growth continues to this day. As genetics shows.

.

And it’s so hugely confirmed by events that we are surprised we did not see it 100 years ago, either, when systems theory and such was just beginning to give way to Cx sys modern methods.

.

This is in a nutshell the very basic Least Energy of most all efficiency work and fields, simplified, refined and workable. If any master these methods, they will be Efficiency Experts in any field(s) they choose to work in at any time. It’s that universal, powerful and applicable. So let’s launch into the basic processors and ways of thinking we need to in “almost every way, every day, get better.”

.

Essentially, we have the concepts of Einstein, those epistemologies which show that Measuring is NOT absolute, and that there is no absolute space and time, due to most all events being measured against fixed, stable, relatively set but arbitrary, efficient standards. That has become a founding basis of the modern physics, and is called Relativity, or as Einstein called it, Invariance Theory.

.

We see at once, as great new insights, that linear systems are NOT synergistic: there the two linear interactions are merely additive. But we see that Cx sys are innately Synergistic, because the interactions create more than the sum of the parts. The unique, highly characteristic mark of Cx Sys!!! It explains synergy in an entirely new way. & more deeply, too.

.

We see at once, comparing simple chemistries, the burning of H2 and O2: the linear stoichiometric output of H2O. Those outcomes are simple & easy to understand, being highly linear. Now let us compare what happens when we use organic compounds with their known reagents. We do NOT get linear outputs, but Complex System effects. The compounds interact with each other, and with the reagents, again & again, and produce not the simple inorganic events of A yields B, but A + reagent yields B, C, D, E, F′, Alpha, Beta/Gamma′, and so forth. Organic chemistries are most all Cx sys, are they not? And initial conditions of temps, pressures and surrounding gases can also influence the outcome products. Organic chemistries, and their biochemistries and physiologies, ARE Cx Sys!!! & Medical Pharmacology? Cx sys, all.

.

We see at once that Probabilities & the uncertainties (incompletenesses) are very often the Hallmarks of Cx Sys as well, do we not? & there is this corollary, that the rich unlimited probabilities of QM become the possibilities without limits for the vast details and unimaginable complexities seen in our real, empirical universe, too. Life itself likely comes about because of that unlimited richness of endless possibilities of living systems. As does the structural creativity of living systems, which are NOT often seen in simple linear, logical systems.

.

These are the hearts & cores of Cx Sys and how we deal with those, very likely.

.

However, some years ago it was noted that the moral Laws, the customary Laws, & the legal Laws are ALL constructed and created according to the same basic rules systems. This is a great unifying insight. They are structurally and functionally very, very close, simply variations on a simple theme, the Comparison Process (CP). IOW, we compare our behaviors (& those of others) to the moral Laws, and by finding ourselves in compliance with them, or essentially not following those, so we are more or less moral. Now, in comparing our behaviors and those of others to the legal Laws, we find this same essential system. And if we are crossing the street with the lights, or while driving not "running" a red light, but moving with the green lights, in those ways, observably, provably, and confirmably, we are obeying such specific Laws. And that can be extended to most all legal laws, too. Altho we have judges to decide the gray areas, however, as those rules are not complete. Thus we can tell by comparison processes, using such rules, standards and laws, as the case may be, whether we are in compliance or not, most of the time. The set standards of the laws and rules are what guides our actions.
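
.

That compare-behavior-to-standard idea is simple enough to sketch directly (a toy rule set with invented names): compliance is nothing more than a comparison of the observed action against what the fixed standard prescribes.

```python
# Toy rule set (hypothetical): compliance is a comparison of observed
# behaviour against the fixed standard, nothing more.
TRAFFIC_LAW = {"green": "go", "red": "stop"}

def compliant(light, action):
    """Compare the action taken to what the law prescribes for that light."""
    return TRAFFIC_LAW.get(light) == action

print(compliant("red", "stop"))  # True: behaviour matches the standard
print(compliant("red", "go"))    # False: "running" the red light
```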

.

The insights of Hinton & others that there is a simple, repeating principle (more correctly, Process) operating in brain, which creates predictive control, has now been realized. It’s CP, with LE guidance.

.

Now, let’s look for more of the same, structure/function (S/F) rules. We find the scientific Laws doing just about the same thing, functioning through their structures in our models/theories, which tell us what’s supposed to be happening when those laws are compared, yet again, to events in existence. Thus the CP is likely universally driving the same kinds of moral, customary rules in most all human organizations (schools, military, governments, corporation, churches, etc.) using the same basic methods, tho widely variable. Yet this fits neatly into the CP of rules/standards and the behaviors of events within those applicable, empirical events, situations. Again The Kategoria of Aristoteles!!! The many kinds of rules, laws, standards, customs, etc., but all simplified & connected by the CP and LE insights

.

This is largely La Chanson Sans Fin, the variations on the endless themes created largely by the CP, in all its 100's of transitive words which show how this is done. And it confirms, very likely, with unlimited examples, the reality and ubiquity of the CP operating in our cortices. We can think about thinking, and think about that. Viz. Introspection, and there is a frontal area in the brain which does that. We can analyze analyzing, and then again. We can model modeling, and model that. We can read about reading, and write about writing, again and again. We can add, subtract, multiply and divide in the same way. Again, the transitive verbs are NOW seen in a new way. They are, whenever they have this transitivity, most all CP words. Because we can process processes again, then again. And check & test our testing, and test that yet again.

.

But we cannot ski skiing, or work working. The Motor words do NOT work in that way. This has not been pointed out before.

.

And the root word in all of this is hiding in the dictionary's vast number of com- words. Compare the comparison and compare it again, which creates the vast hierarchies of our biological nomenclatures, does it not?

.

This is entirely consistent with Leon Festinger's highly fruitful Social Comparison Theory.

.

We find exactly the same with the rules (laws, standards, or grammar) of nearly ALL languages, BTW, which we all start to learn by rote and Trial & Error when 8-9 months old. & those have been to some extent codified, yet not without some huge variations (dialects), etc. But standard English in the US (& Canada), or Received Pronunciation & usages in England, or indeed that of any other language, including Mandarin, German, French and so forth, ALL have their individual, set-point, basic rules/behaviors. The same relatively fixed standards, with yet that arbitrariness (Einstein) of the set points for sounds connected with each word meaning (S/F), of each major language group. The dialects are, again, sub-Variations on the Themes, La Chanson. S/F relationships found by comparison with the common, day-to-day speech using such rules. Thus COMparative Linguistics is the case when studying most all languages. The Romance languages, the Teutonic, the Slavic and the Indic, most all within the Indo-European hierarchies, and the Semitic and others in theirs. There those are again!!!

.

And we find the Comparative anatomies, physiologies, Ethologies, and much else using the CP to create new insights, data and understandings.

.

The quest for the Universal Grammar of languages is very likely, in fact, the same S/F rules of CP in our cortices. There is unlikely to be a "thing" (fallacy of reification) such as Universal Grammar, but instead, very likely, repeating, nearly universal sets of "Processes" in Brain which create and use the grammars as standards to understand. That is why children can learn any language. Because the speech and cortical structures in normal children are nearly exactly alike, using CP and LE!!! So very early, by speaking languages, we become acquainted with this basic, general, nearly universal human trait. Which is also the basis for customs, Morals, and the Laws. It was not a grammar but a series of analogizing, related cortical brain processes!!! Following the rules by comparison processes, repeatedly. And universally. We drop the Universal Grammar and substitute the Universal Comparison Processes and Least Energy rules, instead.

.

CF Karl Friston:

.

.

& that is the case, very confirmably, without limits. And we have the Kategoria here of Aristoteles, which has many, many members, the Laws: 1) Moral; 2) Customary, as in families; 3) Legal, which vary from state to state, nation to nation, and culture to culture, as well; 4) Scientific Laws of the many subcategories of the sciences; 5) Organizational, those of the schools, the churches, the military, the companies, clubs, unions, etc., endless in their scope and varieties; 6) Professional ethics, which are created to maximize the outputs and outcomes (& occ. incomes and successes) of each professional's behaviors; 7) Grammatical rules, which are the set points of each language with respect to pronunciation, words/sounds (often arbitrary designations, as per Einstein), which name events, etc. This provides a very deep description & understanding of the same system across most behaviors and conditions of peoples, worldwide. It's a species-wide commonality, too. It's a vastly uniting set of processes/principles.

.

And many known animals also use similar methods, as the CP and Least Energy (LE, 2nd Law of Thermodynamics) guide the most useful and valuable Laws as outputs. Animals can recognize events by comparing events to their analogous Long Term Memory (LTM) neuronal structures for recognition, just as do we. We do the same, comparing events to our LTM to create recognitions. & then create the recognition, pattern recognition, and pattern recognition trains as well, most all of it by CP. Thus we unite recognitions for most all species under this simple, repeating pattern.

.

Tying this into a larger model, we look for a unifying model of most everything, using this same series of methods/tools and rules. These are also structure/function (S/F) methods in a nutshell, a nearly universal method. We compare S to F and then F to S, back and forth, to create information. And repeat that again and again to learn. Just like LE & CP. It’s Complex System thinking, and the limitless methods, skill sets, ways of doing things, PLUS approaches, techniques AND the technologies, devices, tools, instruments, etc., which are created from those. Categories of Aristoteles, we see.

.

The set point structures of our grammars and pronunciations, and usages of words and their combination, are the key to understanding AND translating most all languages into most all the others, and also in generating them, as well. We use those known set points of accents and words to ID which languages are which, as well. Often quickly with hearing only a few words!!! Those are the nearly Universal processors, which can help create very much more unified models, which are necessarily more complete, too.

.

With respect further to languages, this has been gone into at depth in previous articles:

.

Je suis ici.

Ich bin hier.

Estoy Aqui.

Hic Sum.

.

All of which are a very exacting comparison to

.

“I am Here.”

.

In English, Inglés, Anglais, Englisch, etc. & upon that simple basis we learn other languages, because their structures, tho inherently varying and arbitrary with respect to the meanings attached to sounds (the arbitrariness of our measuring systems a la Einstein's arbitrary measuring standards: inches, feet, meters, stadia, cubits, etc.), rely upon the same CP in most all normal, language-capable humans. It's not a common grammar but a common set of "processings", meaning by common Processors (the Cortical Columns of Mountcastle), which create those variations on the themes of meaning we call languages. Again, within Comparative Linguistics, the drivers are CP & LE.
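
.

A toy sketch of that language identification by comparison (tiny, invented vocabulary lists standing in for each language's "set points"): the few heard words are compared against each language's standard, and the best overlap wins.

```python
# Toy vocabulary "set points" (tiny invented lists); identification is
# a comparison of the heard words against each language's standard.
VOCAB = {
    "English": {"i", "am", "here"},
    "French":  {"je", "suis", "ici"},
    "Spanish": {"estoy", "aqui"},
    "German":  {"ich", "bin", "hier"},
}

def identify(sentence):
    words = set(sentence.lower().replace(".", "").split())
    # best overlap with a language's set points wins the comparison
    return max(VOCAB, key=lambda lang: len(words & VOCAB[lang]))

print(identify("Je suis ici."))   # French
print(identify("Ich bin hier."))  # German
```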

.

Those who are looking for a “universal grammar” are looking in short, for a black cat in a very large dark space, and the cat’s not there!!! CP and LE create the grammars!! Processes rule the Cortices!

.

Thus, La Chanson Sans Fin. The variations on the themes in music as the vast, unlimited analogizers of the comparison processes of brain functions.

.

And that’s the point. If we are searching for, as Einstein did, a “unifying model”, (or “Grand Design” as Hawking wrote about, sought, but did not find), then we must very likely at least mostly create/use a series of efficient, universal processing methods to build the same. This now has been largely done.

.

For instance, the CP which drives our laws' S/F relationships (& the creation of the hierarchies of our organized knowledge and the listings of living systems, the so-called binomial nomenclature) most likely creates most all of the grammars, customs, morals, legal laws, organizational rules, and the scientific laws. Thus it's a universal processor for much of what we do and use, all the time, every day. And so widely used that we have, in fact, ignored it most of the time, due to habituations and related subconscious methods. We did not see it; we ignored it because it was so vastly omnipresent & thus was missed, very likely!!! This is why the system is a universally applicable, practical, day-to-day guide to most all we do all the time. IOW, it's very hard to see what we are doing inside of a system. & for the Very reason that we cannot See our Eyes in the front of our faces without a reflection of some sort!!! The Eyes of themselves cannot see the eyes!!! That metaphor shows this essential truth.

.

See Einstein’s quote on this likely deep truth.

.

& the same is true of another basic part of the MOE: that of the Least Energy or least free energy rules of Thermodynamics. Dr. Karl Friston, Dept. Chair, Neuroimaging, UCL, UK, has clearly shown this, confirmably and without much limit, working in our brains/minds, nearly universally. & also in our brain structures/outputs, or functions. It unites the same within, a nearly universal processor of events in existence: least energy. And Inside as well as Outside of the brain/body. My work found much the same over the last 40 years of using/applying Least Energy principles and rules, first learned from the ca. 1830 Least Action ideas of William Rowan Hamilton.

.

Almost all processes/events in our universe, visibly, repeatedly, observably and confirmably require energy to operate, functionally. & they do so by least energy rules. Thus our laws whether they be moral, grammatical, legal, scientific or any others, MUST embody this least energy rule, to be efficient, effective and stable, which are yet other characteristics of the complex system of the Second Law.

.

Now, we have to detect least energy, and we do so by comparison rules. We compare the outcomes: how much a method costs, how much material it uses, how much time it takes, how much distance, all of these being least energy measures. And by THAT comparison we learn the outcomes of each of those methods, and choose from our Trial and Error methods those which do the most with the least, that is, by least energy rules. The fundamental rule of creativity is thus driven, much to our surprise, by universal thermodynamics working in our brains, very likely. If we can make it work, that is, learn to do so. CF here:

.

.

We should add that shopping is composed of most all CP, and most of us do it nearly daily when we look for good buys. We compare price, value, volume, freshness, needs/goals, and so forth. Each of those is a comparison. And it’s applied by Trial & Error (yet another highly efficient processor for goals), from which we build up the skill sets which we use to efficiently find the best bargains and goods.

.

& that can even be extended to a simple analysis of the differences among the methods/skills, tools, and vocabulary of most any professional, compared to the outcomes of an amateur. It applies to nearly all of it. What makes a Professional? His methods and skill sets are very, very efficient in a complex system, TD sense, compared to an amateur’s. It’s that simple.

.

This nearly universal process, Comparison, has been noted as ubiquitous by Dr. Philip B. Stark, Dept. Chair, Statistics, UCB, California, in his video and detailed confirming work on “The Method(s) of Comparison”.

CF here:

.

It’s confirmable without limits, it’s all but universal, ubiquitous in events, and further, it’s markedly Multi-disciplinary, which, if we think about it, means it’s nearly universally applicable. Thus it tests & confirms once again the nature of the CP’s we use most all the time. & Cx Sys are, as well.

.

And the evidences are massive. We create information by CP, by measuring events in existence against the set, stable measuring methods we use. By comparing a stick of cut timber to the tape measure, we find out its length, its width and thickness. And that CREATES, in nearly every case, data, numerical information. & indeed ANY kind of measuring creates, from those set, stable, efficient standards, repeating measurably the same, confirmable data as well. It creates numerical information. Altho the units used are quite, quite arbitrary, in most cases.
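Measuring-as-comparison is almost a one-liner. A minimal sketch; the function name and the 2440 mm timber figure are hypothetical, and the point is only that a measurement is the ratio of the event against a set, stable standard unit.

```python
def measure(event_mm, standard_mm):
    # Measurement as comparison: the ratio of the event against a set,
    # stable standard unit. The resulting number IS the created information.
    return event_mm / standard_mm

# A hypothetical 2440 mm stick of cut timber, compared to a 25.4 mm (1 inch)
# standard, yields about 96.06 inches; compared to a 1000 mm (1 metre)
# standard, the very same stick yields 2.44 m.
inches = measure(2440, 25.4)
meters = measure(2440, 1000)
```

The same event gives different numbers against different standards, which is exactly the "units are quite arbitrary, the comparison is not" point above.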

.

Where does information come from? By comparing events to a series of relatively set, stable, efficient standards: Einstein’s epistemology.

.

But the huge inference we can conclude, and also confirm and prove without limits by S/F, is that the mathematics of measuring comes out of the Left posterior language areas. This is clear by S/F proofs and evidences, AND the language does the same kind of function, in all the myriads of ways, and is closely related to math. It has been known, or strongly suspected, for some time that language and Math are very closely related in both form and function (S/F), as verbal description and mathematical modeling.

.

Language uses relatively set, stable VERBAL standards, and geographic landmarks (the grid field, Long Term Memory system, analogy) to create verbal information.

.

.

How physicians create new information shows this convincingly as well: ROY G BIV (the colour names), plus blacks and grays to whites, etc.

.

.

& that creates the set, stable, relatively fixed standards, AKA words, by which we “describe verbally” most all the colours, the neuroscientific equivalent of measuring, highly likely. Thus we can extend relativity and its epistemology to the brain’s language and communication systems as well, and to our verbal, descriptive standards, by this same, simple analogy and confirming and proof systems. The relativity of the brain, which creates information by using such vast numbers of comparison standards and words, which we call conceptual tools, as well. It’s virtually all the same, simple, repeating comparison process, variations on unlimited themes.

.

.

That’s the beauty of the system of CP and Least Energy (LE) rules. They are simple but powerfully applicable to most all events. They efficiently create the rules by which we speak, describe and create more information, both verbally and mathematically, as well. This describes well & simply what’s going on, largely. & we recognize LE by comparison; it detects least energy methods efficiently. We compare antibiotics to see which kills the most bacteria, the fastest, at the least cost, and with the fewest untoward systemic effects (viz. complex system, & NOT “side effects”). Then we see Which treatments work the best, as per Dr. Stark. How do we know treatments are effective? By the Methods (plural) of Comparison. That creates the data which sort out the best matches to our goals. It finds the best answers within each framework & search.

.

Now, having drawn upon Drs. Einstein, Friston, Festinger (https://en.wikipedia.org/wiki/Leon_Festinger), and Stark, we continue onwards. We have complex systems as the next part of the 5-part model elements. Cx Sys do a lot with a little; they are LE efficient. The dopamine (DA) molecule has about 20 receptor sites, as it’s the oldest, and we can use that as the DA standard to describe most of the rest of the neurochemicals. These broad, all-kinds-of-receptor-sites methods are, basically, Complex systems!!! The “systemic effects” of our DA and its catecholamine daughters, Norepinephrine and epinephrine (Adrenalin), are essentially dopaminergic, as well. Alpha and beta Adrenergic receptors are DA, too. So are nicotinic receptors, BTW. This is the key: there is NO nicotine in brain, normally. What activates those? Catecholamines, largely!!! We see by comparison the extreme similarities among the catecholamines & adopt the DA standard to understand the rest of the neurochemicals, with which they all must interact in many ways. This vastly simplifies understanding the lot of the ~125 endogenous neurochemicals, and the 1000’s (likely unlimited) of synthetics, as well. It follows closely the use of the morphine sulfate method to standardize and understand the major pain killers, too. It’s nothing new. It’s a well established Method of Comparison, thus driven by the ubiquitous CP.

.

.

Thus we have shown that the “modular, complex system of the brain” works in those ways. Gazzaniga’s “Cognitive Neuroscience” states this in a lovely way, and that chapter is worth the price of his whole, huge text.

.

In order to neatly mesh, IOW Unify, Behaviorism with Neuroscience, it’s done in this way, yet again vastly simplifying the whole. This shows yet another way in which we begin to create unifications of the models, as well.

.

.

But there is a LOT more, in fact. Cx sys thinking is very multidisciplinary, as we see in Dr. Stark’s work on the Methods of Comparison. Thus it’s another key, and yet another nearly universal processor, the 4th one of the five.

.

Thus we have the CP, which finds the outputs, viz. the least energy rules, which are nearly universal as well. And our visually driven CxSys thinking, which is nearly universal, too. And the comparison of the structure/function (S/F) relationships (the relationships among events create understanding; Einstein’s “Physics and Reality”, 1936). & thus we have a widely applicable, vast system which applies to almost everything, because our processors are almost universal. But not quite!!!

.

At this point we must introduce the hierarchies, both natural and those of our understandings, to show all of this at work again. The hierarchies of the riverine systems, which are LE, topologically driven. Water seeks the lowest, LE levels (& that unites gravity in part with thermodynamics, does it not?), and that drives the motions of ALL of the rivers, worldwide. The hierarchies of the trees, from the trunks up to the large branches, then the smaller branchings, then to the sticks and then the twigs, & then the amazing, unlimited hierarchies in all the kinds of leaves. Once more, La Chanson, Toujours La Chanson. Then down into the largest roots, the smaller roots, and finally the tiny rootlets which bring up the minerals & water, among other things. The vast delta regions can be seen as the roots of the rivers, in their multiple branches off the main trunk rivers. Least Energy, all. The tiny rivulets and springs coalesce into the streams, then the smaller rivers, then the vast trunk rivers of the Amazon, the Orinoco, and the Mississippi from the Ohio, Missouri and many others. Again, least energy, hierarchical, topological structures, repeating endlessly.

.

Then the trunks of our bodies: the large upper arms, then the forearms, then the wrist and hand bones, and finally the 5 fingers each. Exactly comparable to the legs: the big thighs, then the smaller calves with the tibias and fibulas, then the bones of the foot, ending in the metatarsals and toes, 5 on each foot. Then the hierarchies of the spinal vertebrae, all hierarchical: cervical, thoracic, lumbar and sacral.

.

And the fundamental neurovascular bundles, which start out centrally with the spine and the aorta, and return centrally as the superior and inferior venae cavae; all ending peripherally in the tiny capillaries, which the arterioles supply, and then the larger & larger venules, until they reach the big trunk veins, etc. All hierarchically arranged, and thus very efficient. The nerves closely imitate that same variation on the Hierarchical themes. Nothing gets missed. It all connects to all else. & it’s VERY LE efficient, without limits, and applicable to most all higher animals, as well. And the higher, vascular plants, too!!

.

The same hierarchies are seen in our water systems, telephone, gas, cable, etc. lines. The same in the dictionaries, the encyclopedias, and so forth. All of our streets, roads and interstate systems. Even the Bible is dominantly hierarchical, beginning with the OT, the Apocrypha, and the NT, then divided into the Books, roughly organized along a hierarchical time line. Then the chapters of each book, the verses, the sentences, the words, and then the letters. And at the bottom of each page, the pagination, which is ones, 10’s, 100’s, and in some cases thousands. Most ALL of this hierarchical universality is NOT an accident, but is the consequence of LE efficient organizations.

.

It’s efficiently organized, & LE (often in a metastable, but not absolute, way) throughout. And ubiquitous, found globally without limits, too, in both Space And Time; observably the case in our universe, everyWhere and everyWhen. Yet another LE efficiency set of events, which are there in all their multiplicities, myriads of ways, of CxSys. These cannot be ignored, and they are basic to understanding what’s going on.

.

We take the natural hierarchies & create the binomial nomenclatures, put together by comparing vast numbers of creatures & plants & organisms of all sorts, each to the others: another Comparative Structure system, Driven by the CP and organized by LE, most always. Universally the case, as well. Millions of species, and yet unlimitedly capable of expansion to millions more, past and present. These endless, vast confirmations of the LE and CP cannot be ignored.

.

The chemical IUPAC has such a hierarchy of chemicals, some 34 millions of them, all related to each other by comparison to the categorical standards of chemical naming. Every one a confirmation of the CP, LE nature of hierarchies and of how our brains work to create such. Then there are the lists of the pharmaceuticals & biologicals, vast as well. Not to ignore all the polypeptides and proteins, without limits.

.

Then we must discuss the Social Comparison Model of the great Leon Festinger and his cognitive dissonances, which are much more the case, finally ending by applying his ever-present cognitive dissonances to Kuhn’s “The Structure of Scientific Revolutions.”

.

Essentially, the CP is a very large part of Festinger’s very successful, work-in-progress social model. That is because it’s how the cortex processes most all higher level information, including, but not limited to, Social information. So, at least in these ways, it not only explains the successes of his model, but confirms that CP is very important socially, too. IOW, the cortex which creates social interactions is doing the CP, just like all the rest of the brain surfaces.

.

When we look at his cognitive dissonances, those are basically, most ALL, created by comparing our expectations to the actual, empirical outcomes. We expect a birthday present, but we don’t get one. We expect that the day will be clear of rainy weather, so we can go about our duties, and then it rains. We expect the car to run, and it suddenly loses power, and we are at a loss to explain why. All of those ideal expectations are exactly like what happens when a scientific law/model is shown to be in error, in either major or minor ways. We compare what we expect to what empirically, testably occurs instead. When they do not match: the Cognitive Dissonances of Festinger, without LIMITS!!! Siempre Lo Mismo!!!

.

Thus we entrain into the scientific observations so well described in Thomas Kuhn’s “The Structure of Scientific Revolutions”. What happens there is a cognitive dissonance between what the science expects and what is actually found. And Kuhn relates several examples of this: from the circular planetary orbits of Ptolemaios & the earth-centered planetary system, to the more accurate, sun-centered solar system of Copernicus (tho still with circular orbits). Then to Kepler, in which the orbits of the planets were ellipses. Then he showed how Newton refined that model more, by his famous F equal to the product of the 2 masses, m1 and m2, times the constant G, divided by the square of the distance R between them.

.

The interesting point is that the equation for the ellipse (an oval type) involves the product of the two axes, and Newton modified the orbital model to fit the empirical findings: M1 times M2, using G, a constant, which is ALSO a comparison process, being a constant, such as Pi; and the relative distance between the two interacting bodies. Almost all constants are ratios, proportions, & are likely THUS created by CP. That is unlikely to be an accident. The problem was that the Newtonian/Keplerian models were for an ideal, planar ellipse, and such are not real (cognitive dissonance again); & in fact the planets more or less rosette, in an altering plane above & below the ecliptic. So that led to the more exact 3D “elements of orbit”, which are now used.
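Newton's law, F = G·m1·m2 / R², can be checked numerically. A minimal sketch, using the standard gravitational constant and round published figures for the Sun, the Earth, and their mean separation:

```python
# Newton's gravitation: F = G * m1 * m2 / R**2, i.e. the product of the two
# masses, times the constant G, divided by the SQUARE of the distance R.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
m_sun = 1.989e30       # mass of the Sun, kg
m_earth = 5.972e24     # mass of the Earth, kg
R = 1.496e11           # mean Earth-Sun distance, m

F = G * m_sun * m_earth / R**2
print(f"{F:.3e} N")    # on the order of 3.5e22 newtons
```

Note that G itself is a constant, a fixed ratio, which is the "constants are comparisons" point made above.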

.

Note that at each stage: the earth NOT being the center of the solar system, and the orbits NOT being circular, as shown by Kepler’s famous quote, “But for that 8′ of arc error for Mars, I would have accepted circular orbits.” His very own Cognitive dissonance!!! And then his work showing how the speeds and times of orbiting were related, when Compared to each other. And the greater equation of Newton, which was eventually shown to create a cognitive dissonance, because the elliptical orbits were NOT exactly what was going on, either. And each of those problems led to cognitive dissonance, & then, by Kuhn’s observation, to a fundamental change in our paradigms. & So Festinger applies to most all scientific progress, as well. Most all Scientific progress moves largely by “cognitive dissonance”. Another universal!!!

.

Newton’s equation solves a least energy equation for N = 2, but not for N ≥ 3, & that created the many-body problems of the solar system calculations. And the orbits of Uranus and Neptune not being quite right by observation showed, respectively, the existence of Neptune & Pluto. Again, cognitive dissonances (CD’s) which created Kuhnian revolutions in our understanding of the solar system. That goes on without limits in most all fields, today, & thus shows the ubiquity once again of the CP in our thinking, models & so forth. Once again, confirmation without limits of the use of the CP by our cortices. Virtually EVERY CogDis is a CP. Unlimited confirmations, once more. Do we need any more proofs? Well, likely only for the brain hardwired and the stubbornly bloody minded!!!

.

Notably, the missing 2/3 of the solar neutrinos recently also had huge effects on our particle physics, being the “absent neutrinos” of the solar fusion cognitive dissonance. Thus the Comparison of our expectation to what we find creates scientific progress of all sorts. This must include the entire Cx sys of plate tectonics (Tectonics is Cx sys? What a shock!!), which showed most all plates in “relative” motion to each other, not absolute, and which are CxSys, not absolute structures, either. & not very predictable into the futures, either, as most CxSys work & act.

.

Thus, we can at present create a very nearly, but not quite (due to clear, measurable, confirmable TD reasons), Model of Everything, MOE, using those. The good answers to Dr. Einstein’s quest for a “Unified theory” are to use those universal processors named above to create that unity!! & we have those in the S/F relationships, as well. Now, can most all see this?

.

It takes the vast complexities, the unlimited details of events in our universe, & THEN simplifies them down to repeating events, standards & methods. Just like our some 500K cortical columns repeat their structures and instructions, sans limits, to process the sensory data, does it not?

.

Further evidence of the value of the CP is the work by Leon Festinger on the Social comparison model, which has been shown to be very, very useful & fruitful in the social sciences and modeling, and his “Cognitive Dissonance” is essentially CP moderated. What we expect to find we compare to what we actually find, which creates the cognitive dissonance. & how we solve that is part of T&E seeking and working out.

.

But this also applies to Kuhn’s “The Structure…..”, because in most every instance there was found, as with Ptolemy’s model of the solar system, a serious objection, which created a major change. But for that 8′ of arc difference, Kepler would have accepted the circular orbits. And upon Kepler, Newton built his new model of orbits as well, which were, as Maupertuis first pointed out in the mid-1700’s, LE orbits. There those are, again.

.

La Chanson, Toujours La Chanson!!!

.

But there is much, much more. Cognitive dissonance drives the sciences, as Kuhn pointed out & as Festinger also applies, and it still does. When the number of solar neutrinos detected was found to be only 1/3 of the number expected, the model had to change. Neutrinos had to have mass for this to be the case, and in addition, there were 3 kinds of them, each of which could convert into one of the others. This forced a major revolution in our understanding of particle theory: neutrinos, being of the Lepton class (Kategoria), do NOT have quarks or gluons, yet they still have mass, albeit very tiny. And that cognitive dissonance, so often found in Kuhn’s “The Structure…”, drove the paradigm shifts from Ptolemy, to Kepler, to Newton, and to the elements of orbit, the current orbital modeling system.

.

And it also drove the solution of the solar neutrino problem, by the cognitive dissonance between the Model, the expectation, as Festinger states it, and the resolution of that disparity by yet another addition to particle theory and physics.

.

Information was added and the problem was solved. See below.

.

Where all of this new neutrino business will end is not in sight, but part of dark matter and energy could well be cool, collected, vast masses of gassy neutrinos, which are also very, very hard to find, as neutrinos are. As per LE and Ockham’s Razor, we do not need MORE particles!!! But there is NO logical nor empirical necessity that dark matter/energy is due to ONLY 1 source; it can be due to many, many events, as the system is a complex system, which permits this. Indeed, expects it!!!

.

So, what is the last crucial element, beyond those four complex system elements, CP, LE, S/F realizations and Cx Sys? Very simply, it’s those methods, skills, techniques/technologies, devices, tools, instruments, etc., which can create the outputs of information which we need to understand most everything. All of those ideas, concepts (including maths), and the basic constructs of our modeling do that, and efficiently.

.

And that creativity is driven largely not only because it works, but mostly by creating growth, development, evolution, & self organization, etc., by those means. The driver of creativities is LE efficiencies. We can also include the DA driving systems of the emotions at this time. But if the outputs of those are not efficient, then they will not be retained & are relegated to fads and fashions, instead. Compare the outcomes of the AC power system, the ICEngine, and the vast numbers of machines & tools and such, which those CP, LE, S/F rules work and create via processes.

.

We see at once that Snap-On tools succeeded because of LE. Instead of having vast numbers and huge weights of individual wrenches, for 1/8, 1/4, 1/2, 5/8, etc., and screwdrivers of many types, as well as hex nuts, etc., all of that mass of metal was replaced with reversible ratchets of at least 3 sized types, to which could be affixed the proper heads, thus replacing 100’s of pounds of tools with a simple set which can fit into a modestly sized case. It does a LOT with a little. It’s Least Energy efficient, is it not? That’s what created the growth of that kind of tool system. Does a lot, with less cost, far less weight, less space, etc., and does it well, durably and reliably so. Missed that, didn’t we? It’s universally applicable!!!

.

.

The outputs of the professionals are better, work better, are done in less time, at less cost, with less material, etc., which is the case. Again, LE shows us the professionals, how and WHY they ARE the professionals, when compared to amateurs, and to most all of the unlimited gradations in between. They do lots more, their methods grow and expand their outputs, and thus the system is growth and development driven.

.

The very important subjects of how we solve problems, and how we create creativity, are essential aspects of this new model. It neatly shows how problem solving occurs: by using T&E methods which compare the outcomes of each possibility to what we want, and then, upon finding the most efficient solution(s), we compare those by least energy and make the choice. It’s not an absolute solution set by any means, but often an efficiently, metastably better one, until we find another, more LE solution set.

.

How this is done is basically explained in The Neuroscience of Problem Solving (essentially sorting methods), and how this creates creativity is simply more of the same series of methods we use. Most all is done by the least, simplest of rules.

.

.

And there are two more articles, about creating efficient, capable, self-driving cars AND the P ~= NP problem. Add information to solve. Simply that.

.

It’s largely recognition, by comparing an external or internal event to our LTM, and then we do the pattern recognition, again and again, until the job is done. And thus the sorting thru the possible solutions is also done by the same means. And when we find the pattern, the match, the good solution which matches our sorting, T&E matching processes, we know we have arrived. It’s very simple, that. And it’s used in almost the same way by most all, from those who become professionals, to those solving any kind of major problems.
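The compare-against-LTM step can be sketched as a nearest-match search. Everything here is hypothetical: the labels, the made-up feature tuples, and a simple mismatch count standing in for the brain's comparison; only the compare-then-pick-best-match shape is the point.

```python
def mismatches(a, b):
    # A crude comparison standard: the count of feature positions that differ.
    return sum(x != y for x, y in zip(a, b))

def recognize(event, long_term_memory):
    # Compare the incoming event against every stored pattern in LTM;
    # the stored pattern with the fewest mismatches is the recognition.
    return min(long_term_memory, key=lambda item: mismatches(event, item[1]))

# Hypothetical LTM: labels paired with made-up 4-feature patterns.
ltm = [("cat", (1, 0, 1, 1)), ("dog", (0, 1, 1, 0)), ("bird", (1, 1, 0, 0))]
label, pattern = recognize((1, 0, 1, 0), ltm)
print(label)  # the closest stored pattern wins
```

Sorting through candidate solutions in T&E problem solving has the same shape: compare each candidate outcome to the goal, and keep the best match found so far.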

.

Further, it also addresses the hows, whys and drives that create the specific specializations of EACH professional vocabulary. Those were developed and driven by finding better & better ways of doing the job. IN EVERY case, the legal vocabularies in Latin, the vast medical terms and words/concepts, the chemical vocabulary, the sports activities, and most all the REST of those verbal sets are specialized to operate because they are, much more than common vocabularies, suited best to do the job most efficiently, in the least time, and with the least effort, are they not? Thus we see yet another confirmation of the ubiquity, vast applicability and near universality of the system.

.

Let’s use some examples to show how math creativity comes about, largely. The deeply useful “S-curves”, predictive of and applicable universally to growth, development, emergence, evolution, etc., will be developed in a future article. 1 simple sentence by Whitehead can develop/generate a new, huge, empirical mathematical field without limits. Incroyable!!!

.

For instance, in creating a very new and original means to sort out, as well as generate, the primes, it became at once clear that, for numbers greater than 10, the primes all end in -1, -3, -7, or -9. We could easily skip the even numbers & those ending in -5 and -0, too. This cut through the complexity of the Gordian knot, and, by using a simple pattern, reduced the number line by about 60%. That leaves us with the primes plus the PM’s (Prime Multiples, highly useful, VIP numbers created ONLY by the products of numbers ending in -1, -3, -7, -9), including the PM’s of 3. And then we note yet another major pattern: the sequence of the 4 odd endings, 1, 3, 7, 9, repeats itself every decade, and, with the 3’s cast out, every 3 decades. Thus the 11, 13, 17, and 19, with a cast-out 3, remainder one, repeat themselves as 41, 43, 47, and 49, the latter of which is not prime, but 7 squared.

.

.

Then, examining the next quartet, also by casting out 3’s, we see the repeating PM3’s in the -1, -3, -7, & -9 ending numbers: 21 (PM3), 23, 27 (PM3) and 29; 23 and 29 are primes. We move up to the next cast-out-3’s, Re(mainder)2 quartet, and we get 51 (PM3), 53, 57 (PM3), & 59!!! Yet again, the 51 and 57 are PM3’s, and the other two, 53 and 59, are potential, & in fact actual, primes. And then again and again. And lastly, we note the Third major pattern of PM3’s, the Rem0 quartet, as in 31, 33, 37, and 39: the numbers ending in -3 and -9 are always PM3 and thus can be eliminated as primes, leaving the 31 and 37, which can be primes, and in this early part of the number line, are.

.

Thus we can create a new kind of number line, removing the PM3’s very quickly, by rewriting the number line as 11, 13, 17, 19; 23, 29, 31, 37; then the next set of 30’s: 41, 43, 47 (the 49, a PM7, removed); then 53 and 59; then 61 and 67, both primes, wherein the Rem1 quartet recurs, as it does truly without limit, continuing the number line as 71, 73, 77 (PM7 & PM11!), and 79.

.

This then allows us to remove the 77, by noting it’s 49 (7 squared) plus 28, which is 4p (4 × 7). Then we add 2p, or 14, to that and get 91, which is yet another PM7, 7 × 13. Then we add the next 4p and get 119, which is 7 × 17. And then another 2p, or 14, which gives 133 (7 × 19). Noting that 11 squared is 121, we can skip that one, too, leaving the primes sorted out up to 133. The next PM11 is 121 (the start of the 11-squared prime series) plus 22 (2p), which gives 143 as the first PM11 after 121. That gives the primes up to 143, very efficiently. & then the process is continued with each prime squared, plus the appropriate 4p or 2p added sequentially to it. When those PM’s are removed from the repeating -1, -3, -7, -9 quartets, this reduces the numbers to be sorted in every 90 to merely 24 (3 times the repeating Rem1, 2, 0 quartets), and essentially reduces the number line to only the primes, once the PM’s are removed. That leaves, uniquely and always, the primes.
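The quartet method described above (keep only the numbers ending in -1, -3, -7, -9; cast out the PM3's; then strike the remaining PM's, PM7, PM11, PM13, etc., starting from each prime's square) can be sketched as a small sieve. A minimal sketch: the removal here steps through each prime's multiples directly, rather than by the 4p/2p additions of the text, but the candidates kept and the survivors produced are the same.

```python
def quartet_primes(limit):
    # Candidates: numbers ending in -1, -3, -7, -9 (evens and -0/-5 endings
    # skipped, ~60% of the line gone at once) with the PM3's cast out.
    candidates = [n for n in range(7, limit + 1)
                  if n % 10 in (1, 3, 7, 9) and n % 3 != 0]
    removed = set()
    for n in candidates:
        if n in removed:
            continue
        # n survived every earlier removal, so it is prime; strike its
        # prime multiples (PM's), starting at n squared, as in the text.
        for pm in range(n * n, limit + 1, n):
            removed.add(pm)
    # Re-attach the small primes the quartets skip, then keep the survivors.
    return [2, 3, 5] + [n for n in candidates if n not in removed]

print(quartet_primes(100))
```

Running it to 100, the 49 is struck as a PM7 and the 77 as a PM7 (and PM11), leaving exactly the primes, as the quartet description predicts.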

.

This is given in the series of the above article, but the previous 4 articles add to the details needed.

.

Essentially, then, we can clear the number line using PM7, plus PM11, plus PM13, etc., by a simple repeating equation, until we can remove about 99.9% of the number line using this method. That removal/simplification of the number line can start anywhere, too, no matter how large the numbers. And that vastly simplifies even further what we have to do to find the primes. There are other interesting effects, such as a built-in PM5 check system to eliminate all errors, plus the effect of generating the PM’s as the products of primes times primes, too. This Quartet method largely eliminates the PM’s up to the prime 113, at which point the system begins to generate the first big prime gap, 113 to 127, of 14, wherein we must remove the already-found PM’s which are then being included in the odd number line of primes.

.

It’s a very simple additive process which sorts AND generates the primes, as a doubled back-up system, AND it has a check sum system of PM5’s built into it, to be sure of accurate accounting when using computerization, too. There are other details, but that’s the major means by which the Primes can be investigated and understood better, and why the number 30 keeps recurring in the primes.

.

.

Essentially, and very likely, this system can be improved without limits, and can thus find primes of huge numbers of digits, by a compound method.

.

The Lyapunov numbers, which allow us to find the least energy solutions to problems, are treated in Friston’s lovely “Consciousness” article in Aeon.com Essays, & show another example of how empirical maths can be created and applied to LE physics.

.

The last great example of math creativity is that of the simple phrase written by the great Philosopher of science, Alfred North Whitehead.

.

“Any society (or group in a society), which cannot break out of its current abstractions, after a limited period of growth, is doomed to stagnate.”

.

This verbal description can be shown to correspond very closely to the S-curve of growth, etc. This will be treated more extensively, showing its nearly universal applicability to growth, the prediction of growth, and emergence, thus markedly dealing with the biggest problems in CxSys and in Predicting those: the growing, emergent phenomena which make forecasting hard in CxSys. In fact, and very likely, it can create a math of the Psychohistory of Asimov’s “Foundation.” Human beings can prophesy. The S-curves can empirically confirm and predict the occurrences of such events, reliably, and can even show which products will, if marketed well, have the most growth, that is, profits. It’s all very simple, from Whitehead’s dictum: “after a limited period of growth, stagnates.” Deep, which is why he was the best science philosopher of our time. Except, of course, for Einstein, whose methods he studied & knew, too. Thus, Process Thinking.
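Whitehead's dictum, growth followed by stagnation after a limited period, is exactly the shape of the logistic S-curve. A minimal sketch, with made-up parameters (ceiling K, growth rate r, midpoint t0); any real application would have to fit those to data.

```python
import math

def s_curve(t, K=100.0, r=1.0, t0=0.0):
    # Logistic S-curve: near-exponential growth at first, then, after a
    # limited period of growth, stagnation as the ceiling K is approached.
    return K / (1.0 + math.exp(-r * (t - t0)))

# Early on, growth has barely begun; at the midpoint, half the ceiling;
# late, the curve stagnates just under the ceiling:
early, mid, late = s_curve(-4), s_curve(0), s_curve(8)
```

The "break out of its current abstractions" move then corresponds to starting a new S-curve with a higher ceiling, stacked on the old one.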

.

And we can predict, using maths, what Emergent events are MOST likely to occur in most all cases. A study of avalanches shows how this occurs, which is simply an inverted S-curve, physically. & this solves Arnold Toynbee’s “A Study of History” questions quite, quite well. History is a series of metastable, LE forms & events!!!

.

At the last, let’s take a much wider and more inclusive new tack, in order to more completely show & better understand what’s going on. This will touch upon the various deep and wide relationships of all parts of this new model of everything, and show the connections of most everything to everything else. But not quite completely, as that’s thermodynamically unlikely. No perfect heat engines, no perfect descriptions, and no perfect measuring systems. Thus TD shows us WHY there are unlikely to be any absolute anythings, because it’s mostly all relative. And that acts as yet another confirmation of Einstein’s epistemology, does it not? The lack of absolutes, and the lack of complete description/measurement, is part of, and intimately tied into, TD outcomes, is it not? Thus Einstein’s no absolutes is a thermodynamic consequence!!! And that’s the elegance and beauty, in part, of this new Model of Everything, but not quite!!!

.

Now we will, in a peripatetic way, touch upon all those major areas which are combined into the basic model. How we detect, model and understand complex systems is the key here, and why this model is so very deep, nearly universal and applicable. Because Cx sys are nearly 99+% of the events/processing going on in our universe, is why.

.

And how do we deal with, model and understand those in the medical and biological sciences?

.

By using the CP, LE, and the S/F relationships created by comparing S with F and F with S, unlimited in both directions. The S/F of the genes. That of the enzymes. Of the atoms, elements and isotopes. Of the S/F of the brain; of the structures of the molecules which lead to function, biologically, as well. Of the medical pharmacologies, which are clearly S/F comparison processes in most all cases, & thus nearly everything!!!

.

This is shown clearly with Penicillin and the whole hierarchical class of the PCN’s. We knew that they worked; that is, we had the function (F), THAT those worked, but we did not know the S, the structure by which they worked. But we do NOT need to know that to KNOW that it works!!! It’s an amazing observation. But once we found out that the PCN basic molecular structure, the beta lactam ring, was what disrupted the bacterial cell wall, then we knew how it worked, did we not? And for the cephalosporins, this SAME beta lactam ring was also operative. All the hierarchies of the PCN’s & Cephalosporins, the Variations on the themes, NOW. And there it is again!! La Chanson.

.

But in finding that many bacteria grew resistant over time to PCN, we made the methicillin group, only to find that pesky resistance arising again. & then came the cephalosporins, more effective than the PCN’s, but then the same repeating pattern of events, more instances of resistance among the ESKAPE bacteria, as well. Variations on the themes. So we created clavulanate, whose S/F blocks the beta lactamases of the bacteria which can be resistant to both Ceph’s and PCN’s. And with Augmentin, Amoxicillin & Clavulanate, it got rid of the resistances. For a while… though not quite effective, nor absolute effects again. The bugs developed resistance again!! But how do those bacteria sort out the solutions? It’s easy to see and understand, comprehend, apprehend, visualize, figure out, put it together, connect the dots, solve the problems, etc. The whole rich CxSys panoply/multiplicity of synonyms and word phrases of synonymic value.

.

And we now have a method to get rid of about 99% of bacterial resistances (& those of the other microbes as well), that is, not quite all of it, for most all time, using triple therapies. These are used empirically, and they work because each creates an N-body, N >/= 3, problem, because it’s CxSys. IOW it creates a complexity which the bacteria & other organisms cannot easily sort thru or solve. It creates, as it does for us in most NP problems, a nearly insolvable, time consuming Sorting problem!!! And because we can create 100K antibiotics within each family of antibiotics (of some 20 families), we can thus combine those into triple therapies by T&E sorting methods, & thus create endless sorting problems for most all bacteria, but NOT quite!!! & as soon as they sort that out, we hit them with endlessly more kinds of antibiotic triplets!!! That is the solution to bacterial, viral and microbial resistances of all types. And for the insecticidal resistances of the insects, as well as the herbicidal resistances!!!
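
A rough count shows how fast this sorting problem explodes. A minimal sketch, taking the essay’s own figures at face value (about 20 antibiotic families, roughly 100K creatable variants per family — both are the text’s estimates, not measured data):

```python
from math import comb

# Sketch under the essay's stated assumptions: ~20 antibiotic families,
# ~100,000 creatable variants within each family.
families = 20
variants_per_family = 100_000

# Ways to pick 3 distinct families for one triple therapy:
family_triples = comb(families, 3)
print(family_triples)  # 1140

# Each family in a triple then allows an independent variant choice:
total_triples = family_triples * variants_per_family ** 3
print(f"{total_triples:.2e}")  # 1.14e+18 distinct triple therapies
```

On these assumptions, over a billion billion distinct triplets, which is the endless supply of new sorting problems the paragraph describes.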

.

And for the cancers as well. It presages cures for most cancers. Hit the wild growth of cancers with THREE agents, and those wild cells cannot sort through them, but die back, perhaps permanently, if the immune system is working to eliminate them, as it so often does normally when younger.

.

Universally applicable standards of the MOE, but not quite, we see. Again, the model is so much more complete, it can handle almost everything, as well as explain it, too.

.

Let’s continue on this theme of dealing with complex systems using the CP, LE, S/F method and all of the unlimited Techniques (etc.) and Technologies (etc.) which those can create!!!

.

Taking up the 100’s of comparison process words begins, in a synonymic way which we then extend, our understanding of HOW ubiquitously the CP operates. Testing and checking are CP. Indeed, many such words are much the same.

.

IN short, as written above, transitive verbs are most all CP words. It explains transitive verbs!!!! If we can think about thinking, and then think about that (introspection, and a brain area exists for it, as well), we have a CP being done. & there is a brain area which does that, the introspection area of the frontal lobe!!! If we analyze an analysis and then that again, we have another CP word. If we add to an addition and then again, we consistently are doing CP words. If we write about writing and then read about reading and then observe our observation, again, it’s CP, because it’s transitive. Any word which can do that is a comparison process word, because we can compare a comparison, and then compare that again & again. No limits. We can check, test and so forth. It’s endlessly done again and again. Input/output, S/F, cause and effect, etc., are much the same. Most all CP words are of that form, and they are endlessly variations on La Chanson, do you see?

.

Empirical introspection is yet another capability of the CP and LE.

.

Trial and error we use again and again; it’s comparing the outcomes of our method to see if it reaches our goals. Yet more CP.

.

Synonymics are the same, too. The words have relationships to each other, which are CP detected & then organized into a Category. We have categories of same or very similar elements, without end. The millions of species in the binomial nomenclature; the IUPAC’s 34 millions of compounds; the phone/city directories; the indices — all are the same. The categories of Aristoteles are the same as well. All of them CP created, read, and extended as needed, or changed around, as well. ALL CP.
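
The categorizing step described here — compare each item against the bins built so far and file it with its likes — can be sketched with a toy comparison function. The word list and the sense tags below are hypothetical illustrations of the idea, not data from the text:

```python
# Toy sketch of CP-style categorizing: compare each word against the
# categories built so far and bin it with its likes.
words = {
    "compare": "CP", "check": "CP", "test": "CP", "match": "CP",
    "run": "action", "walk": "action", "jump": "action",
}

def same_category(a, b):
    """The comparison step: do two words share a sense tag?"""
    return words[a] == words[b]

categories = []  # each bin is a list of mutually similar words
for w in words:  # dicts iterate in insertion order (Python 3.7+)
    for bin_ in categories:
        if same_category(w, bin_[0]):
            bin_.append(w)
            break
    else:
        categories.append([w])

print(categories)
# [['compare', 'check', 'test', 'match'], ['run', 'walk', 'jump']]
```

The same compare-and-bin loop works whatever the comparison function is, which is the point: the categories fall out of the comparisons.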

.

Thus the CP created the hierarchies from the categories, and organizes the materials. We see the same with the 1’s, 10’s, 100’s, 1000’s, 10K, and millions, tens of millions, & then 100 millions & the billions. ALL created by the CP, is it not? Ubiquitously used.

.

And because the categories tell a great many things about what’s going on, those are very efficient ways to deal with complex systems, as well. Categories and hierarchies give us a much more complete understanding of CxSys, & those the maths cannot follow at this time, either. The ability to create AI thus requires the ability to understand the categories, and how those are created, & then to arrange them into hierarchies of our understanding.

.

Therefore for AI to be general AI, it must satisfy the Hierarchical Turing Test for AI. The same is true for languages, which are yet more of the same: CP hierarchically used and organized methods. & this shows us how we organize and arrange our understanding of most everything, too.

.

.

Now we launch into a closely related method, that of visual processing of information. That is not logical, and as a picture or image is often worth 1000’s of words, words do NOT show what’s going on in the brain. In the cartooning, the visual system of the brain, it’s not verbal. It can be described verbally, but NOT completely.

.

These are the vast visual, process thinking of Einstein, Whitehead’s process thinking and so forth. It’s a nearly unique human trait, but the birds can do it too. As I have so many times written before, tracking is a visual process, comparing consecutive segments of the track to each other and creating the illusion of motion. But it works. & by imaging events in the brain, we are doing comparison processing at a VERY high level, and likely one of the highest level functions that we do. & AI cannot do it very well, if at all, because it’s a huge number crunching method, and that’s very, very clumsily done with math, too.

.

But by using those methods, we can understand vastly better, and describe, CxSys, which is why those are used. Physics and math do not deal very well with CxSys. Indeed, those were ignored by Newton, who was mostly N=2; and N >/= 3 is complex systems. & that’s what’s going on. Hawking MISSED biologies & CxSys in his “The Grand Design”.

.

Feynman & John Bell talked about the incompleteness of QM, but in fact ignored the probabilities, which are the approximate maths we use to describe most CxSys. The case is NOT either/or, A and not-A, black or white, which is what logic and most math do; but it’s A yields B, C, D, E, F, H’ and then alpha, beta, gamma, Eta/Theta & Iota prime and kappa/lambda double prime, which is the empirical case.

.

The many side effects of drugs are NOT A or B, and not side effects at all, but CxSys effects. That slight shift allows us to understand much better and deal with events in pharmacology using CxSys thinking and apps. The many receptor sites of dopamine are not just 9 or 10, but include the pain relieving effects, and the alpha/beta adrenergic receptors and their various types, as well. CxSys effects are the new understanding of what “drug side effects” more likely are!!! In all their rich, categorical natures, we see. Receptor sites are yet more forms of synonyms and categories, describing CxSys effects!! Do you see the connections? There that is, too.

.

This system richly connects almost everything to everything else, if we do the work. And these are but a few of the richnesses of the methods being developed & used there. Math won’t do it. Nor will logic. But the rich, unlimited new logics of the CP, the LE, the S/F & CxSys WILL.

.

IN medicine and biology we are blessed with the tough chores and tasks of dealing with CxSys. Thus we have the hierarchies & diagrams of the differential diagnoses, which are hierarchical, diagrammatic forms, are they not? & I have discussed the very efficient forms of same, before.

.

Now compare this to action words. Those are not transitive at all, and thus they cannot work a working, run a running, walk about walking, or do a doing, either. They can be repeated like the CP, endlessly, but they are NOT transitive. & thus we find out even more about motor systems: the S/F of the motor CC’s lacks a layer 4 and has big pyramidal neurons as well. Thus does the motor cortex reflect the lack of transitivity function as well. It has not before been seen why and how the S/F of the motor cortex does NOT have the transitive function of CP, altho it DOES have the input/output processing endlessly about it, too.

.

So to sum up, and we are ignoring a huge amount of the methods which we DO use to solve and understand CxSys, suffice it to say, this can be applied over time to physical events, as well. & that’s why math and logic don’t work so well, either, but are related, limited subsets of the processors in the brain. & highly valued, because the precisions/accuracies of the maths tell us more and do it more efficiently than descriptions ever can. Maths are useful because those can, in a limited way, describe more precisely than “it’s that way and you drive about an hour to get there”; “it’s 225 miles away, at 65 m/h you get there in about 3.5 hours” shows that clearly. Yet it’s not completely accurate, either, for TD reasons.
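
The driving example is plain arithmetic, and a two-line check of the essay’s own figures shows both the measurement’s extra precision and its residual approximateness:

```python
# The essay's own figures: 225 miles at 65 miles per hour.
distance_miles = 225
speed_mph = 65

hours = distance_miles / speed_mph
print(round(hours, 2))  # 3.46 -- i.e. "about 3.5 hours", never exact
```

Even the "precise" 3.46 is itself rounded; the measurement is more efficient than the verbal description, but still not complete or absolute, which is the paragraph’s point.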

.

These then are a good part, but not the complete bases, of our new, nearly universal, unifying systems, which are just now being created. That all of these are so clearly, confirmably connected in this way is not a coincidence, but lies in the basic structures of our brains, which are LE driven.

.

& what is the structure of our brain cortex and the single, simple unifying concept which explains, shows and develops its CxSys structure? The CP operating.

.

Our visual system shows this clearly, & it has been shown not just here, but by a profoundly insightful British neuroanatomist of the early 20th C., whose findings show it, have been described in my work, & have also been ignored.

.

Again, this is how we unite more parts of most everything once again, including most all mammalian brains, and very likely those of the birds, reptiles, amphibians and even some fishes:

.

A final summing up and listing of more events and possibilities, as well. The area is large, and hard to discuss completely. But a peripatetic walk through much of this material is possible, to give a good idea of how, to a field biologist, the newly found Promised Land of the Undiscovered Country can appear to the many pioneers of the field to come & work it.

.

All of these statements are interconnected and show how the model works.

.

1. The rejection of Idealisms: that our brain outputs do NOT necessarily show us what’s going on in the universe is very likely the case.

2. Our knowledge is NOT complete, and no model can be, due to thermodynamic inefficiencies. There is NO perfect heat engine possible; likewise no perfect model is likely, nor possible. Thus our descriptions/models/theories can’t be complete, either. They can be good, and workable, and increasingly efficient, but very unlikely to be complete, final, or absolute.

.

3. Thus this gives a huge hope: because we can always improve our methods, descriptions & models, without limit, there is not any real limit to our ability to find new and important ideas and techniques and Technologies. We can always find more & better.

4. The universe is that large, and our brains are that small (Big Pot vs. Little Pot), that we can always do much better, regardless. That is, there likely exist capabilities of improvement without any limits for us.

.

5. The probabilities of complex systems NOT being totally complete, nor final, indicate that the possibilities for us are likewise endless and unlimited, as far as our tiny brains are concerned.

.

6. IN addition, the probabilities of QM create nearly unlimited “possibilities”, and CxSys possibilities connected to the probabilistic, unlimited possibilities of QM ALSO reinforce this outlook of endless possibilities for exploration, growth, development and creativities, as well.

.

7. Most all events are interconnected at the deepest possible levels in our universe. Depths within depths; and this is confirmed by Whitehead’s “process thinking”, where he stated that the major “interconnections of our universe of events” are likely the case, too.

.

8. That it takes creativity to appreciate creativity, and how limitless it is, is very likely the case. If we cannot create, then we cannot understand very well how to create creativity, which is part of the “wellsprings of creativity” & underlies our brain operations, processes and how it works. Largely without limits for us.

.

For instance, take ATP creation. The odd fact is that ATP requires ATP to create it. This was noted years ago in biochemistry. So ATP creates itself!! And that’s impossible linearly. So this is what it means: an earlier system created ATP from the basic biochemicals, using enzymes. Then as ATP became more and more omnipresent in cells, ATP-using ATP-creation could be bootstrapped, and a newer, more efficient ATP-creating system was developed and evolved by the cells.

.

And that implies earlier systems which can create the newer systems — once again the hallmarks of CxSys, i.e., Emergence — and which then largely disappear. Thus we have the potential, using this subtle, small comparison process finding, that we need to look further in the evolutionary biology of metabolism for OTHER such systems, which have been made invisible by obsolescence of the earlier, less efficient ways of doing the metabolic work. The remains of those systems might be present in the unused and turned off genes we find in the “Junk DNA” parts of our genomes. That is, the least energy efficient methods in metabolism are creating evolution by obsolescences, due to efficiency gains in the newer methods. That’s highly significant.

.

Recall that for about 20% of the known gene structures, we cannot figure out their functions (S/F again!), at present. Incompleteness, again.

.

These subtleties at work here are remarkable. But as Einstein still guides us in this, subtle are the ways of events in our universe. “Raffiniert ist der Herrgott, aber boshaft ist er nicht!!” He’s subtle, but not mean, and always gives us ways (pl.) to see/find these events. Our escape hatch, our way out of the problem solving dilemmas.

.

As in the “impossible” ATP-creating-ATP methods, too.

.

Thus do we see better HOW events work, by using nearly universal methods, but not quite.

.

9. Lastly, this is why the sciences are self correcting. The Kuhnian cognitive dissonances of Leon Festinger create the means by which we can see our ways to solutions of the problems. Our brain models are NOT complete, but the Kuhnian paradigm shifts are Driven by Leon Festinger’s nearly universal “cognitive dissonances”, which are created by Comparing (CP) what we expect to what’s very likely going on.

.

There is No absolute space, time, or knowledge, likely, either. Einstein’s relativity applies to our verbal descriptions as well as to our measurements and maths. And thus there are possible improvements without limits using this model.

.

The solutions to the cognitive dissonances, the paradigm shifts of Kuhn, are intimately related to all of the above. And this shows once again clear cut ways of solving our problems, without limits.

.

Everything, most all events in the universe, are very likely connected to all else, very likely at the deepest levels. And those are the major confirmations of this Model of Everything, but not quite complete, either. grin

.

“Depths within Depths”

.

Created by the greatest driver in the universe of events, also likely unlimited and universally applicable by CP, the Least energy aspects of the 2nd Law.

.

It’s likely that simple. Simplify, simplify, simplify; efficiencies, efficiencies, efficiencies (Adam Smith, “The Wealth of Nations”).

AND,

.

Least energy, least energy, least energy Rules.

.

Simple, elegant, explains vastly much about most, and it is fruitful beyond limits, too. The very requirements of a good, nearly universal Model of Everything, a MOE.

.

And we have barely begun in applying (comparison processing) it as well, as my voluminous works and articles show, without limits.

There are coming about 250K to 300K more words in articles, already written, plus extensive notes about unlimited details and directions further to look into, expanding this new knowledge and skills sets of universal, nearly, processes.

9 July 2019

Updated 17 Nov. 2018

1. The Comparison Process, Introduction, Pt. 1

https://jochesh00.wordpress.com/2014/02/14/le-chanson-sans-fin-the-comparison-process-introduction/?relatedposts_hit=1&relatedposts_origin=22&relatedposts_position=0

2. The Comparison Process, Introduction, Pt. 2

https://jochesh00.wordpress.com/2014/02/14/le-chanson-sans-fin-the-comparison-process-pt-2/?relatedposts_hit=1&relatedposts_origin=3&relatedposts_position=1

3. The Comparison Process, Introduction, Pt. 3

https://jochesh00.wordpress.com/2014/02/15/le-chanson-sans-fin-the-comparison-process-pt-3/?relatedposts_hit=1&relatedposts_origin=7&relatedposts_position=0

3A. Extensions & Applications, parts 1 & 2.

https://jochesh00.wordpress.com/2016/05/17/extensions-applications-pts-1-2/

4. The Comparison Process, The Explananda 1

https://jochesh00.wordpress.com/2014/02/28/the-comparison-process-explananda-pt-1/

5. The Comparison Process, The Explananda 2

https://jochesh00.wordpress.com/2014/02/28/the-comparison-process-explananda-pt-2/

6. The Comparison Process, The Explananda 3

https://jochesh00.wordpress.com/2014/03/04/comparison-process-explananda-pt-3/?relatedposts_hit=1&relatedposts_origin=17&relatedposts_position=1

7. The Comparison Process, The Explananda 4

https://jochesh00.wordpress.com/2014/03/15/the-comparison-process-comp-explananda-4/?relatedposts_hit=1&relatedposts_origin=38&relatedposts_position=0

8. The Comparison Process, The Explananda 5: Cosmology

https://jochesh00.wordpress.com/2014/03/15/cosmology-and-the-comparison-process-comp-explananda-5/

9. AI and the Comparison Process

https://jochesh00.wordpress.com/2014/03/20/artificial-intelligence-ai-and-the-comparison-process-comp/

10. Optical and Sensory Illusions, Creativity and the Comparison Process (COMP)

https://jochesh00.wordpress.com/2014/03/06/opticalsensory-illusions-creativity-the-comp/

11. The Emotional Continuum: Exploring Emotions with the Comparison Process

https://jochesh00.wordpress.com/2014/04/02/the-emotional-continuum-exploring-emotions/

12. Depths within Depths: the Nested Great Mysteries

https://jochesh00.wordpress.com/2014/04/14/depths-within-depths-the-nested-great-mysteries/

13. Language/Math, Description/Measurement, Least Energy Principle and AI

https://jochesh00.wordpress.com/2014/04/09/languagemath-descriptionmeasurement-least-energy-principle-and-ai/

14. The Continua, Yin/Yang, Dualities; Creativity and Prediction

https://jochesh00.wordpress.com/2014/04/21/the-continua-yinyang-dualities-creativity-and-prediction/

15. Empirical Introspection and the Comparison Process

https://jochesh00.wordpress.com/2014/04/24/81/

16. The Spark of Life and the Soul of Wit

https://jochesh00.wordpress.com/2014/04/30/the-spark-of-life-and-the-soul-of-wit/

17. The Praxis: Use of Cortical Evoked Responses (CER), functional MRI (fMRI), Magnetic Electroencephalography (MEG), and Magnetic Stimulation of brain (MagStim) to investigate recognition, creativity and the Comparison Process

https://jochesh00.wordpress.com/2014/05/16/the-praxis/

18. A Field Trip into the Mind

https://jochesh00.wordpress.com/2014/05/21/106/

19. Complex Systems, Boundary Events and Hierarchies

https://jochesh00.wordpress.com/2014/06/11/complex-systems-boundary-events-and-hierarchies/

20. The Relativity of the Cortex: The Mind/Brain Interface

https://jochesh00.wordpress.com/2014/07/02/the-relativity-of-the-cortex-the-mindbrain-interface/

21. How to Cure Diabetes (AODM type 2)

https://jochesh00.wordpress.com/2014/07/18/how-to-cure-diabetes-aodm-2/

22. Dealing with Sociopaths, Terrorists and Riots

https://jochesh00.wordpress.com/2014/08/12/dealing-with-sociopaths-terrorists-and-riots/

23. Beyond the Absolute: The Limits to Knowledge

https://jochesh00.wordpress.com/2014/09/03/beyond-the-absolute-limits-to-knowledge/

24 Imaging the Conscience.

https://jochesh00.wordpress.com/2014/10/20/imaging-the-conscience/

25. The Comparison Process: Creativity, and Linguistics. Analyzing a Movie

26. A Mother’s Wisdom

https://jochesh00.wordpress.com/2015/06/03/a-mothers-wisdom/

27. The Fox and the Hedgehog

https://jochesh00.wordpress.com/2015/06/19/the-fox-the-hedgehog/

28. Sequoias, Parkinson’s and Space Sickness.

https://jochesh00.wordpress.com/2015/07/17/sequoias-parkinsons-and-space-sickness/

29. Evolution, growth, & Development: A Deeper Understanding.

https://jochesh00.wordpress.com/2015/09/01/evolution-growth-development-a-deeper-understanding/

30. Explanandum 6: Understanding Complex Systems

https://jochesh00.wordpress.com/2015/09/08/explandum-6-understanding-complex-systems/

31. The Promised Land of the Undiscovered Country: Towards Universal Understanding

32. The Power of Proliferation

https://jochesh00.wordpress.com/2015/10/02/the-power-of-proliferation/

33. A Field Trip into our Understanding

https://jochesh00.wordpress.com/2015/11/03/a-field-trip-into-our-understanding/

34. Extensions & applications: Pts. 1 & 2.

https://jochesh00.wordpress.com/2016/05/17/extensions-applications-pts-1-2/

(35. A Hierarchical Turing Test for General AI, this was deleted after being posted, and it’s not known how it occurred.)

35. The Structure of Color Vision

https://jochesh00.wordpress.com/2016/06/11/the-structure-of-color-vision/

36. La Chanson Sans Fin: Table of Contents

https://jochesh00.wordpress.com/2015/09/28/le-chanson-sans-fin-table-of-contents-2/

37. The Structure of Color Vision

https://jochesh00.wordpress.com/2016/06/16/the-structure-of-color-vision-2/

38. Stabilities, Repetitions, and Confirmability

https://jochesh00.wordpress.com/2016/06/30/stabilities-repetitions-confirmability/

39. The Balanced Brain

https://jochesh00.wordpress.com/2016/07/08/the-balanced-brain/

40. The Limits to Linear Thinking & Methods

https://jochesh00.wordpress.com/2016/07/10/the-limits-to-linear-thinking-methods/

41. Melding Cognitive Neuroscience & Behaviorism

https://jochesh00.wordpress.com/2016/11/19/melding-cognitive-neuroscience-behaviorism/

42. An Hierarchical Turing Test for AI

https://jochesh00.wordpress.com/2016/12/02/an-hierarchical-turing-test-for-ai/

43. Do Neutron Stars develop into White Dwarfs by Mass Loss?

https://jochesh00.wordpress.com/2017/02/08/do-neutron-stars-develop-into-white-dwarfs-by-mass-loss/

44. An Infinity of Flavors?

https://jochesh00.wordpress.com/2017/02/16/an-infinity-of-flavors/

45. The Origin of Information & Understanding; and the Wellsprings of Creativity

https://jochesh00.wordpress.com/2017/04/01/origins-of-information-understanding/

46. The Complex System of the Second Law of Thermodynamics

https://jochesh00.wordpress.com/2017/04/22/the-complex-system-of-the-second-law-of-thermodynamics/

47. How Physicians Create New Information

https://jochesh00.wordpress.com/2017/05/01/how-physicians-create-new-information/

48. An Hierarchical Turing Test for AI

https://jochesh00.wordpress.com/2017/05/20/an-hierarchical-turing-test-for-ai-2/

49. The Neuroscience of Problem Solving

https://jochesh00.wordpress.com/2017/05/27/the-neuroscience-of-problem-solving/

50. A Standard Method to Understand Neurochemistry’s Complexities

51. Problem Solving for Self Driving Cars: a Model.

https://jochesh00.wordpress.com/2017/06/10/problem-solving-for-self-driving-cars-a-model/

52. A Trio of Relationships and Connections

https://jochesh00.wordpress.com/2017/08/04/a-trio-of-relationships-connections/

53: Einstein’s Great Subtleties: Einstein’s Edge

https://wordpress.com/post/jochesh00.wordpress.com/583

54. The Problem of Solving P not Equal to NP

https://jochesh00.wordpress.com/2018/04/28/the-problem-of-solving-p-not-equal-to-np/

55. How to Create a Blue Rose

https://jochesh00.wordpress.com/2018/06/02/how-to-create-a-blue-rose/

56. The Etymologies of Creativity

https://jochesh00.wordpress.com/2018/06/14/the-etymologies-creativity/

57. A Basic Model of a Unifying System of Most All Knowledge

https://jochesh00.wordpress.com/2018/07/06/a-basic-model-of-a-unifying-system-of-most-all-knowledge/

58. Understanding Psych with S/F Brain Methods

https://jochesh00.wordpress.com/2018/07/11/understanding-psychology-with-s-f-methods/

59. The Wiggins Prime Sieve

https://jochesh00.wordpress.com/2018/08/02/the-wiggins-prime-sieve/

60. The Complex System of Love

https://jochesh00.wordpress.com/2018/08/22/the-complex-system-of-love/

61. The Limits of the Comparison Process

https://jochesh00.wordpress.com/2018/08/27/the-limits-of-comparison-processing/

62. The Bees, Cortical Brain Structure, Einstein’s Brain, etc.

63. The Wiggins Prime Sieve, Version 3.

https://jochesh00.wordpress.com/2018/09/15/the-wiggins-prime-sieve-version-3/

64. The Prime Quartets Method

https://jochesh00.wordpress.com/2018/10/04/prime-quartets-method-capabilities-insights-sans-limits/

65. Is Goldbach’s Conjecture True And/or False, Conditionally?

https://jochesh00.wordpress.com/2018/11/17/is-goldbachs-conjecture-true-and-or-false-conditionally/

66. The Magic of the Prime Multiples and Goldbach’s….

67 The Wiggins Primes Sieve: Cycles of 30’s in the Primes

https://jochesh00.wordpress.com/2018/12/17/the-wiggins-prime-sieve-cycles-of-30s-in-the-primes/

68. Winning at Solitaire, Basic Strategies

jochesh00.wordpress.com/2019/02/04/winning-at-solitaire-basic-strategies/

69. The Failures of Idealisms & Brain Hardwiring in the Sciences

jochesh00.wordpress.com/2019/04/04/the-failures-of-idealisms-brain-hardwiring-in-the-sciences/

70. The Break Outs: The roots of Growth & Unlimited Creativities

https://jochesh00.wordpress.com/2019/06/06/the-break-outs-roots-of-growth-unlimited-creativities/

71. How to Find the MH370 Crash Site

72. Walking Shortcuts, a Cameo

73. Einstein’s Quotes & Neuroscientific Insights on Creativity & Understanding

74. Towards a Model of Everything 14 Jul. 2019

https://jochesh00.wordpress.com/2019/07/13/towards-a-model-of-everything-moe/

75. Addenda: The Walkabout Article 22 Jul 2019

https://jochesh00.wordpress.com/2019/07/22/the-walkabout-article-addenda/

76. NP not = P, Second considerations

https://jochesh00.wordpress.com/2019/07/23/np-not-equal-to-p-2nd-considerations/

Einstein’s Quotes and Neuroscientific Insights on Creativity & Understanding

.

Copyright © 2019

.

Einstein’s amazingly deep neuroscientific understanding of creativity, creating understanding and visual, process thinking (thought experiments) are shown by some of his quotes:

.

“My scientific work is motivated by an irresistible longing to understand the secrets of nature and by no other feeling.”

.

“I want to understand the Mind of God.”

.

“Since the mathematicians have invaded the theory of relativity, I do not understand it myself anymore. “

.

His methods of understanding were NOT verbal, logical nor mathematical, but Visualization of events & information processing by process thinking (Whitehead, as an important creator of same).

.

“I have to understand you see.” —-Richard Feynman, Nobelist in Physics for QED.

His 1st wife said he was always doing his calculus, and it drove her crazy. IN fact he was testing, sorting, searching for his answers by T&Error means! She did not understand that; all good scientists do.

.

He also stated that, “If I want to understand events, then I have to be able to generate those from principles.” Exactly what Einstein found.

.

Einstein, again, in confirmation of Feynman:

“The supreme task of the physicist is to arrive at those universal elementary laws from which the cosmos can be built up by pure deduction. There is no logical path to these laws; only intuition, resting on sympathetic understanding of experience, can reach them.”

.

That was, in short, how Feynman created the Magic of his “Feynman diagrams”, which revolutionized the particle physics field. They could summarize most particle processes by a simple set of diagrammatic drawings, the visual rules (process thinking), taking a few minutes, instead of what at the time took, & was solved by, 2 weeks of laborious calculations. Even after the advent of very efficient computer solutions, his diagrams are still used, they are so very efficient and time saving. They sorted out what was likely and not likely in a few minutes, thus saving literally years of work for physicists. Shortcuts, efficiencies, LE, indeed.

.

Least energy (LE) rules in a nutshell. That was Einstein’s rule: simplification is the goal in physics. Which Feynman did so well.

.

When asked if Bell’s supposition was correct, that QM was incomplete, he thought for a moment and said, “Yes. We cannot generate biology from QM.” IOW, complex systems.

.

And Einstein agreed: Einstein believed that quantum mechanics, though it may not be wrong, was at least incomplete.

.

“Through purely logical thinking we can attain no knowledge whatsoever of the empirical world.”

&

.

“I never made one of my discoveries through the process of rational thinking.”

.

“Any fool can know. The point is to understand.” Exactly what generates the understanding.

.

“Understanding arises from finding the relationships among Events.” Paraphrased from “Physics and Reality” 1936.

.

“You can never solve a problem on the level on which it was created.”

Exactly right: you have to ADD information (!!!!) to solve problems. NP is NOT = to P!

.

“The only thing that interferes with my learning is my education.”

.

Exactly. If you learn too much, it can act to prevent more learning in at least 3 ways. 1. The physical amount of learning fills up the memory access units in the hippocampi. Or as one very fine pediatrician told us more than once, “You want me to learn something new. I have ONLY so many pots on the shelf now. If I try to learn one more, then it knocks a pot off the shelf of something I need.”

.

2. When playing the piano for the first time after years of only typing on a keyboard, I played a number of tunes I knew, and recalled from my music books, for about 45 minutes. When I tried later that day to type on the keyboard, I could NOT!!! The act of playing over-rode my typing programs, and it was nearly 3 days before I got back to the usual typing fluency again. Am sure others have also experienced these “don’t add too many more pots to the shelf” memory problems, when we get into our 50’s or so. Why this occurs is yet a mystery.

.

3. Next, we can learn too much which is wrong, or not even complete, and then we cannot easily change it, due to brain hardwiring, and learn more. The facts as we believe them can inhibit finding the further ideas/concepts which often would extend our understanding and thus knowledge. We can know too much, and can't learn much more!!!

.

If a written language is so complicated that it takes up a huge amount of long-term memory space, then the person can't learn other languages very easily. It inhibits learning. Compare that to Western children learning to write & read at ages 3-4. It takes about 5 years longer for Chinese children to learn reading/writing in Chinese, usually starting about 8 years old. That is a huge educational disadvantage from not using alphabetic writing.

.

If, however, an alphabetic method is used, many languages can be learned using it instead. The visual/spatial demands of hieroglyphic and character writing are thus not efficient. The inefficiencies of many languages have been addressed before. But the fact is that English does NOT use gender for any names/nouns other than those with a biological basis. Thus we don't need all of those la's, las', le's, les', & die's and der's, etc. Only "a", "an", "the" & "the(e)". That is an enormous simplification, allowing us to speak English without knowing the genders for 100's of K's of words, is it not? And likely a main reason English is learned a LOT faster than languages with such ancient forms as genders. This is a clear-cut observation, eminently provable and true, coming from the MOE being developed.

.

Deng Xiao Ping knew this about alphabets compared to character writing, as did Mao, but Mao failed using brute force, and the strong traditional system persisted. Deng initiated & promoted the Pin Yin means of alphabetizing characters, which meant they could THEN use keyboards, which they could not before. AND the printing of all books and news became much, much easier and faster, because they did not need the 5,000 character forms which had to be hand-inserted into the printing machines. Such that now about 30% of new readers using Pin Yin cannot use nor read the characters any more. And printing is much, much faster. When they finally use ONLY Pin Yin, then the modernization and huge efficiency advantages of Western alphabets will be theirs, at last. That will create much growth, too.

.

The same problem was known in ancient Egypt, where the huge number of pictorial hieroglyphics impaired learning other languages, and created a sort of xenophobic, culturo-centric problem for the ancient Khemetans. & they used a demotic form, which partly got around this.

.

In addition, the ancient Maya used hieroglyphic methods to do the same, and they collapsed as well. Once the writing was gone, it could not be recreated by their illiterate descendants, either. Least energy almost always, in the long run, wins!!! Alphabets are very much easier to remember and use, compared to 1000's of glyphs.

.

Simplicity is the goal, stated Einstein, and it's a universal process that applies EVEN to most all languages in how they are structured & used. That was his genius, partly and in short.

By Herb Wiggins, M.D.; Clinical Neurosciences; Discoverer/Creator of the Comparison Process/CP Theory/Model; 14 Mar. 2014

Copyright © 2019

.

What do all of the above have in common? They are all complex system (Cx Sys) manifestations of Least Energy (LE) rules, in all of their enormous, unlimited applicability.

.

How did Einstein do what he did? That's a deep question. But it's mainly answerable, now. By using CP (the Comparison Process, a detector of LE), LE, S/F (Structure/Function) & Cx Sys processes and methods, we can cut thru the complexity and reach a better understanding of understanding, and then of events.

.

Einstein was a good visual thinker. Sagan's "Cosmos" points that out with Einstein's thought experiments. First of all, he asked what it would be like to ride on a photon. And he realized that at light speed, all time stops for the events around you, yet the photon is still in process, which is time. Thus most everything around that photon is either instantly processing, or not processing at all. That is QM instantaneity, which is the case. All processes appear, from the photon's frame, to stop outside of the photon, or they occur so quickly that it's not detectable. This gave rise to a basic part of his Relativity.

.

The second visualization was: how is being in a gravitational field like being in a system accelerating at 9.8 m/sec²? The answer he gave is that we cannot detect the difference. & he used that comparison process, visualizing those two very similar states, to make another part of his Relativity. But that assumes the acceleration is regular/smooth, and in a gravitational field it's VERY regular. And that was the point.

.

However, they are NOT exactly the same. They are not quite alike, and thus the regularities of acceleration do NOT match what's going on in real events with gravity fields. That is a disparity, and from the "cognitive dissonance" ideas of Festinger, which are comparison process detections/mediated, we see this. And those disparities create scientific progress when they are seen, recognized and then largely solved. Thomas Kuhn's "The Structure of Scientific Revolutions" clearly shows that, successively. Much like the 1/3 of the expected neutrinos seen, compared to the predicted 3 times as many. And that led to a major advance in particle physics. CP mediated, most all of it.

.

Relativity is CP mediated. Einstein stated in his "Physics and Reality" (1936) that our understanding is based upon finding the relationships among events in existence. And that's exactly the case. IOW, the cognitive neuroscience of Comparison Processing (CP) is the finding of the relationships among events, by CP of those events. Much as he did with riding on a photon compared to events around it, comparing acceleration to gravitational fields, AND the relationship between mass and energy, the famous E = mc². So we see relativity everywhere in these cases of understanding what's going on in events within and around us, do we not?

.

Now, when we want to understand how arithmetic and most of math are created, we look for the relationships among the numbers. We count, and create information along the number line, starting in base 10 at 1, 2, and up the hierarchical organization of 1's, 10's, 100's, 1000's, 10K's, 100K's, 1 millions, 10 millions, etc. And the hierarchies are created by CP, as well.

.

So we take 2 and 8, for example, & from those create the entire sets of basic arithmetic. When we count from 2 to 8, laboriously, we count up 6 numbers. Thus the relationship of 8 to 2 is counting up 6!!! And when we count back down, we get 6 also. So the relationship between 2 and 8 is 6.

.

And then we abandon counting for the addition and subtraction tables, which efficiently, LE-wise, summarize the relationships among all of the numbers on the number line. It's the first hierarchy after counting, viz. addition and subtraction, is it not? And thus we have short-cut the counting, by using long-term memory of the addition and subtraction tables to SAVE a lot of time over counting. Least energy, yet again, is it not? But we are not done yet. We have counting, which created information, and we have the addition/subtraction tables, which efficiently short-cut that counting & save us lots of time, too. But what of the relationship between 2 and 8 as a function of 2? We move from 2 to 8 by addition, adding 4 twos to get to 8, do we not? Thus we have, more efficiently than counting alone, found that 4 twos are 8!! Yet another relationship, Einsteinian in form, which gives us the multiplication function, is it not? And from there we count down, multiply down, & realize that 8 divided by 2 is four!!!
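The hierarchy described above, each operation built only from the one below it, can be sketched in code. This is a minimal illustration of mine, not the author's method; it just makes the counting-to-division ladder concrete.

```python
# Building arithmetic hierarchically from counting, as the text describes:
# subtraction as counting up, addition as repeated counting,
# multiplication as repeated addition, division as repeated subtraction.

def count_up(a, b):
    """The relationship of b to a, found by laborious counting."""
    steps = 0
    while a < b:
        a += 1
        steps += 1
    return steps  # this IS subtraction, discovered by counting

def add_by_counting(a, b):
    """Addition as repeated counting up by 1."""
    for _ in range(b):
        a += 1
    return a

def multiply_by_adding(a, times):
    """The next hierarchy: multiplication as repeated addition."""
    total = 0
    for _ in range(times):
        total = add_by_counting(total, a)
    return total

def divide_by_subtracting(b, a):
    """Division as repeated subtraction (counting down)."""
    times = 0
    while b >= a:
        b -= a
        times += 1
    return times

print(count_up(2, 8))               # 6: counting from 2 up to 8
print(multiply_by_adding(2, 4))     # 8: four 2s
print(divide_by_subtracting(8, 2))  # 4: how many 2s fit in 8
```

Each function uses only the layer beneath it, which is the least-energy shortcut structure the paragraph argues for.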

.

As Feynman once said, what he could not generate, he could not understand. The above model generates basic mathematics, where tested so far.

.

Thus we have, by this simple means, created the "Einsteinian relationships" which give us an understanding of adding/subtracting, then the next hierarchy of multiplying/dividing. And with the latter we can find, more quickly than counting OR adding, that 4 times 12 gives us 48, and that 48 divided by 4 (or vice versa) gives us twelve. We shortcut the adding process and the counting processes ever more, do we not? It's all consistent. And that consistency ALSO is comparison process.

.

And what of the NEXT hierarchy? We see, by comparison processing, that we create the relationships of the numbers in our cortices next to the anatomy of words. Math grows right out of word processing, in the left posterior temporal speech/language centers. & if we damage those centers by anything, the structure/function (S/F) relationship shows us that language is damaged, as well as the math. But the verbal functions came first and are extended by math in that area, as well. This is the case. Thus we have used the S/F relationships, created and driven by CP, to understand how math works, as well as language.

.

And we can state almost everything in words, including all of math, but the converse is not true. We easily use words to express virtually all of math, such as 4 X 2 = 8, and 8/2 is 4. But math cannot speak or process language very well. "How sharper than a serpent's tooth it is to have a thankless child!" shows this one-way function. And it's humbling, too. The origins of descriptions and language/speech came first, and out of those came maths. Very clearly.

.

So we have this finding. Then what are the origins of language/verbal functions?

.

And using Einstein once more, we come to this huge epistemological implication, which works in physics and is likely true. When we take a piece of cut lumber, how do we measure it? We take a relatively fixed, stable scale, with arbitrary units, to measure it, and we come up with, say, 16 inches, or about 40+ cms. The relationship in the conversion from inches to cms. is slightly over 2.54: the ratio of 40+ cms. over 16 is 2.54. Comparing those two is a ratio, a proportion, is it not? Comparing the circumference to the diameter gives pi, does it not? Comparing distance/time gives speed/velocity, and so on for all the rest of the constants, such as E = h (Planck's constant) times nu, the frequency. For THAT relationship & the quanta, Planck got the Nobel prize. Finding once again how the energy of a photon is very precisely related to its frequency. And by simple math we can determine the wavelength of that light photon. & this also works for sound, as well. It's most all about relationships, and those are detected and created by CP.
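The ratios above can be worked through numerically. A small sketch of mine, with illustrative numbers: each "constant" falls out of comparing two measurements.

```python
import math

# Each physical "constant" in the paragraph is a ratio, i.e. a comparison
# of two measurements. The lumber and frequency values are examples only.

inches = 16.0
cm = inches * 2.54           # the same lumber measured in the other unit
print(cm / inches)           # 2.54: the cm-per-inch conversion ratio

diameter = 2.0
circumference = math.pi * diameter
print(circumference / diameter)  # pi: circumference compared to diameter

# Planck's relation E = h * nu: photon energy compared to its frequency.
h = 6.62607015e-34           # Planck's constant, J*s
nu = 5.0e14                  # an illustrative visible-light frequency, Hz
E = h * nu                   # energy of one photon, joules
wavelength = 299792458.0 / nu    # c / nu: wavelength in metres, ~600 nm
print(E, wavelength)
```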

.

All of those relationships are found by CP of information. Density is the same, mass over volume, from the famous "Eureka" of Archimedes, which solved his problem. F is G times m1 X m2, over r², the square of the distance. Again, more ratios, proportions and their relatedness to each other. And 1 more constant, too. IOW, comparison process writ large. It's a universal processor.

.

So, if the epistemology of Einstein is based upon finding a standard, efficient measuring scale, with its methodological limits, and its creation of data, numerical information, by those comparisons, then what of verbal descriptions? And we know they are related, but exactly how was not previously known, or figured out. But it is now. When we, for instance, describe colours, we use the famous mnemonic acronym, ROY G. BIV. And by applying the relatively set, stable, fixed method of arbitrary, verbal standards of what the colours are, we can describe the color of most everything around us using those standards. We create information by using VERBAL set standards & comparing those to all around us, & within us, too. Roses are red. So is blood. Much tree bark is brown, & so can skin be. Efficient, and widely useful, too.

.

The hugest example of this, which is confirming without limits, is that of the verbal adjectives and their characteristics. We have the base form of the adjective, such as high. And the extreme form, at the other end of the linear line, highest: the superlative, and we can also tell that by its -st "signal detection" ending, as in moST, nearly universally. And the middle form, higher? The comparative, and there it is again. It's most all CP, we see. The 1000's of words which can be used in this way are almost all comparison processed. High, higher, highest; or some tall, more tall, most tall, in all the myriads of ways. All of it comparing how big it is: is it as big as a bee, as big as a house, an elephant, or the size of a pinhead? All relative to our descriptive standard, by which we describe, and not only count and measure, which CREATES information. By the Einsteinian methods, the verbal descriptions apply those same set, VERBAL standards, altho still arbitrary in which sounds/spellings we use. But they are efficient and stable, & thus LE events.

.

This then shows the common origins of math/measurement and verbal descriptions, and sets up a clear-cut relationship between the two, which is Einsteinian. Math measures, words describe. Both create information, each of its own type, by comparison to those measuring or verbal description standards. Both act in the same ways, as the math arises from set standards just as the words we use are set description standards. Thus we have it that words/ideas/descriptions are related exactly to how numbers/maths/measurements work to create information, & thus equivalent, both working by set, fixed, stable standards, the one with words, the other with numbers. That is the Relativity of the Cortex, is it not?

.

And the CP which creates those relationships AND generates the information is one and the same for both words/descriptions & math/measurements. Is it not?

.

So if we can create basic arithmetic by relations among the numbers, using more and more efficient, hierarchically arranged counting, adding, and multiplying, then in the same way we can generate all languages & words, by using the arbitrary designations (set points) & the relationships among the words, or the meanings of words by sound. That is how to generate most all languages. And translation is the same.

.

.

About 1/2 the distance down, see the rigorous comparison of word meanings, which leads, as nearly exactly as possible, to most all of the translations.

.

There is NOT likely a universal grammar, but rather this nearly universal processor in the language cortex, which generates the arbitrary sounds and gives them meanings, which are fixed, efficient and thus stable, and which creates our languages, is it not? It's simply the repeated use of the repeating cortical columns, which encode the standards of words, which creates language. AND verbal information. And numerical information, is it not?

.

Simple, complete, elegant, and highly efficient, & thus an LE explanation of the origins of language from Einstein's epistemology, which is altogether true in physics & thus empirically the case, is it not?

.

That's what we see by looking at Einstein's brain/mind, the structure/function of an engineer's mind & how it worked, is it not? Incredible abilities to visualize events. His father was an electrical engineer, as was his uncle. He sat for hours at a time in the Swiss patent office, examining designs & models, and figured out how those worked, visually, to see if they were original enough to get a patent. By a massive comparison of what he was testing for originality against the hierarchically arranged classifications of the patented designs/mechanisms. The Relativity of the Cortex, indeed!!!

.

Now we can move on with the higher hierarchies of math. The exponents, which take massive numbers and express them most easily & efficiently: 10 exp. X, or another form of it, "e". And then the scientific notations, which arise as well, and which, far more efficiently than counting, adding, or multiplying, allow us to handle large numbers in the most LE way, do they not? LE rules!!!

.

And the hierarchies of the exponents? 10 exp. X, and then 10 exp. 2, 3, 4, etc., up thru those hierarchies; and then 10 exp. X to the exp. Y, to the exp. Z, until we get a googolplex. All hierarchies, all efficient, all generation of information with measurement. And that is numerical modeling and description.

.

Now with the adjectives we get a linear system, do we not? High, higher, highest, and low, lower, lowest. It's easily convertible to a number line, is it not? & that, dear readers, is HOW math is created. We create relationships among the number line, & then use that linear scale to measure high, higher, highest and low, lower, lowest. Or smallest, smaller, small, to big, bigger, biggest, and so forth. Then we assign the numbers by measurement, & thus we see what's going on. That's the mathematics of the adjectives, is it not? The exact correspondence between verbal, linear descriptions and the more precise, more efficient numbers, which allows us, using math, to more accurately describe events. Thus the utility and value of math. It does many jobs better, more exactly, and thus with increased information. & thus, due to Shannon's foundational IT concept, entropy declines, as the description holds MORE information with math than with the verbal. Thus yet another hierarchy, is it not?
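The adjective-to-number-line correspondence can be sketched directly. This is a toy of mine; the height values assigned to each verbal grade are purely illustrative standards, not anything from the text.

```python
# Toy mapping of the linear adjective scale onto a number line: each verbal
# grade is a set standard (illustrative heights, in metres), and describing
# a measurement is just a comparison against those standards.

heights_m = {"lowest": 0.1, "lower": 0.5, "low": 1.0,
             "high": 10.0, "higher": 100.0, "highest": 1000.0}

def describe(height_m):
    """Pick the verbal grade whose standard is nearest the measurement."""
    return min(heights_m, key=lambda word: abs(heights_m[word] - height_m))

print(describe(0.4))    # 'lower'
print(describe(12.0))   # 'high'
print(describe(800.0))  # 'highest'
```

The numbers carry more information than the six words, which is the entropy-reduction point the paragraph makes.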

.

It’s that simple. It’s nearly universal and it applies to most all information be it verbal or numerical, be it description or measurement.

.

How physicians create information shows this very, very clearly once more: how we read most all radiological images by standard, set rules of what is normal versus what is not. It's a universal visual processor, visualizing the standards & then applying, comparing them in each case, to chest X-rays, angiograms and venograms, CT and MRI images, and so forth.

.

And how do we figure out the S/F relationships in the functional MRI (fMRI)? We set up the baseline resting image first, and then activate part of the brain, and see what lights up BY Comparison Processing!! And every single image read is CP, at least twice. We see the resting image, and we know it's normal; then we do the activation of the brain and see by CP where the function is located, in all the myriad ways, without limits. Simple, efficient, highly descriptive and yet highly likely to be the case. And if we also then compare the fMRI to the magnetoencephalogram (MEG)? We gain ever more information creation as well. Triply so!!! Thus does CP create information, visual and verbal, in radiology. & the reading of most all images is largely CP against set, fixed, stable (efficient) standards, too.
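The baseline-versus-activation comparison at the heart of this can be shown with a toy example. The 3x3 "images", values, and threshold below are invented for illustration, not real scanner data.

```python
# Subtract a resting baseline image from an activation image and see what
# "lights up": the core comparison process of fMRI reading, in miniature.

baseline = [
    [10, 10, 10],
    [10, 10, 10],
    [10, 10, 10],
]
activation = [
    [10, 10, 10],
    [10, 18, 10],
    [10, 10, 11],
]

threshold = 5  # assumed minimum signal change to count as activation

lit_up = [
    (row, col)
    for row, values in enumerate(activation)
    for col, value in enumerate(values)
    if value - baseline[row][col] > threshold
]
print(lit_up)  # [(1, 1)]: the one region active only during the task
```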

.

And this realization shows us how to create AI to read images. We use the expert skill sets of radiologists to show us what standards they use as normal, and how, with each set of skills, they interpret (compare) the images. And once we have those standards, we program the computer by recognition methods (Bayesian statistics) to abide by & use those exacting standards. That necessarily creates AI for image reading. & saves the radiologists a lot of time on CXR's, too!!! More income!!!
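A minimal sketch of the Bayesian-statistics idea mentioned above: a naive Bayes classifier learning expert standards from labelled findings. The feature names, labels, and counts are entirely hypothetical.

```python
from collections import defaultdict
import math

# Naive Bayes over binary findings: learn "normal" vs "abnormal" standards
# from expert-labelled examples, then classify a new image's findings.
# Features and labels here are invented for illustration.

training = [
    ({"clear_lungs": 1, "mass": 0}, "normal"),
    ({"clear_lungs": 1, "mass": 0}, "normal"),
    ({"clear_lungs": 0, "mass": 1}, "abnormal"),
    ({"clear_lungs": 0, "mass": 0}, "abnormal"),
]

counts = defaultdict(lambda: defaultdict(lambda: 1))  # start at 1: smoothing
labels = defaultdict(int)
for features, label in training:
    labels[label] += 1
    for name, value in features.items():
        counts[label][(name, value)] += 1

def classify(features):
    """Return the label with the highest (log) posterior probability."""
    best, best_score = None, -math.inf
    total = sum(labels.values())
    for label, n in labels.items():
        score = math.log(n / total)  # prior
        for name, value in features.items():
            score += math.log(counts[label][(name, value)] / (n + 2))
        if score > best_score:
            best, best_score = label, score
    return best

print(classify({"clear_lungs": 0, "mass": 1}))  # 'abnormal'
```

In practice such systems are trained on thousands of expertly labelled images, but the comparison-against-learned-standards structure is the same.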

.

Thank You, Albert!!

.

.

Now, how do we get mountain passes from Einstein? That's simple. We use the LE, CP standard. Why do we preferentially use mountain passes, rather than other routes? Because those are least energy, & often very time saving. We climb the least, and expend the least amount of energy. The pass is near to areas which are convenient and easy (LE) to get to. And we know, by crossing them without measurements, that it takes lots less time and energy to climb up them and then walk or travel down them.

.

This is what opened up the far Kentucky & Tennessee areas: the Cumberland Gap, known by the Indians, which went along the Cumberland River, which had cut a deep channel through the Appalachian Mountains into the interior. And it's least energy, too. We knew, by verbal descriptions, the energy taken crossing into those areas: the reduced cost in time, food, and materials, & wearing out of the wagons and horses. & when we measured the other ways to get over the mountains? By math we found it was comparatively the lowest way, as well. Thus the math precisely confirmed the routes taken, because the altitude numbers were lower. The energy was lower; the cost in terms of wagons and time was less. It's all been there for 1000's of years, from the Khyber Pass to the Berthoud Pass, from the South Pass at the end of the Wind River Range to the passes through Nevada and the Sierras. From Echo Lake Pass to the lower I-80 pass via Donner Lake and Summit. All the same, all the myriads of duplicating, equivalent ways, worldwide. The nearly universal processor: least energy, least energy, least energy rules. Detected and used by CP, nearly universally, we see. Verbally described, and more accurately described then by mathematics, which confirms in most all cases that those passes ARE the short cuts, the shortest, easiest, most efficient routes, is it not?

.

And it’s Einsteinian, too.

.

Now we look at weather radars, for more. What's the use of radar to detect storms of all sorts? The basic concept is that of visual tracking, which is a series of comparison processes of what happens to events over time. When we see a bird flying, we see it move, as our visual systems sample each image every 100 msec. or so, and then begin to compare them automatically, because it's been built into animals' visual systems for at least 300 megayears to do so. When we see a walker coming down the sidewalk or trail, we use the same CP to detect what direction they are moving, and whether they are getting larger and larger (CP), or smaller & smaller, and thus moving towards or away from us. It's most all CP. It's done by most all the animals, as well. It's a universal method for understanding events. It's how the birds get from place to place and catch bugs in flight, too. It's how the insects do the same. It's tracking: from the clouds moving against the side of a building, or relative to a telephone pole, we detect the movements of the clouds by entirely arbitrary, but fixed and stable standards, do we not? And the illusions are created by such arbitrary standards, as well.

.

So this relates to radar, because we can see the weather images coming in, updated from time to time, and how THOSE compare gives us the direction the clouds and storms are going. We detect, and have learned by T&E, what constitutes rain and other phenomena. Then we use those visual standards to SEE what's going on. Is the storm moving towards us or not? If we mark the edge of the image at a certain town, then watch as the storm moves to the edge of another town, we simply do a distance/time, and we get the speed of the approach. Then we do a simple measurement of how far that set standard, Einsteinian point is from where we are, using D = Rate X Time, a ratio, a proportion, an algebraic expression, a Comparison Process, all of it, and that gives us how much time we have before the storm gets to us, is it not?
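The D = Rate x Time arithmetic above can be worked through with numbers. A sketch of mine; the distances and frame interval are illustrative only.

```python
# Two radar frames give the storm's speed (distance/time), then
# D = Rate x Time, rearranged, gives how long before it reaches us.

miles_between_towns = 10.0       # how far the storm edge moved between frames
minutes_between_frames = 20.0
speed_mph = miles_between_towns / (minutes_between_frames / 60.0)

miles_to_us = 25.0               # from the second town to our location
hours_until_arrival = miles_to_us / speed_mph
print(speed_mph, hours_until_arrival * 60.0)  # about 30 mph and 50 minutes
```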

.

So we see the visual image, mathematize the verbal description, and thus get more precise information about it reaching us. That's how it's done. Visual tracking becomes a mathematical expression, by the creative means of seeing the comparison (the Einsteinian relationship) between distance, time and speed. Again, CP; again the description & measurement equivalences. And that's how math is created, in a practical sense, from verbal descriptions. & it gives us predictive control over storms coming in, tornadoes being seen, hurricanes' speeds, and so forth. All of it Einsteinian: relatively fixed, set standards being used. And we can warn of such dangers & prevent a lot of damage, deaths, and costs, too. It's efficient for survival. And thus moral, too.

.

In this statement we see the most profound insight into math creativity as well: "Any society (or group in a society) which cannot break out of its current abstractions, after a limited period of growth, is doomed to stagnate." It's the "S-curve of growth" case, converted into mathematics from an easy visualization.

.

This shows how practical, empirical maths are created. Note in conclusion that Einstein's Relativity was largely verbal, with some math in it. And it took Einstein and Minkowski to cast the earliest, largely descriptive version into the Space/Time, 4-dimensional math models. Those could then be tested more precisely by measurements. The ideas, words, descriptions came first, and THEN the math followed, by comparison process of the outcomes of the math against the verbal description, which anatomically and evolutionarily PRECEDED the math.

.

We can make do without math. We cannot make do without languages. We do MUCH better, though, with the efficiencies of math than without them. As the mountain-pass confirmations by altitude measurements show.

.

& that, dear readers, is how verbal descriptions create information, & how measurement creates information, & how BOTH, in an Einsteinian way, are related very exactly, ontologically and neuroscientifically, to each other.

.

With simplicity. CP creates the LE outputs. Then the S/F relationship further ties them together, followed by the higher, nearly universal applications of complex system descriptions, which are far, far better than the linear, as well.

.

And as Ulam stated, math must greatly advance if it's to describe complex systems. But we can do that, too, NOW that we know how math creativity is, in part, created. Now that we know how to create creativity & understand understanding, and then think about thinking, and then think about that, too.

.

And it's most all from the great, intuitive neuroscience visualizer, Dr. Albert Einstein, who showed us very likely how it works: from words to description, from the creation of maths in the verbal centers of the brain, to the CP creating our empirical mathematics. Simple, elegant, nearly universal; and because we can see the relationships of the languages to each other, & those relate to math, a higher degree of understanding, as well.

.

.

Simple, elegant, explains a LOT with a little, CP, LE rules with S/F, & complex systems.

Walking Shortcuts, Or How to Create Unlimited Professional Growth Methods

.

By Herb Wiggins, M.D.; Clinical Neurosciences; Discoverer/Creator of the Comparison Process/CP Theory/Model; 14 Mar. 2014

Copyright © 2019

.

Using simple walking routes, peripatetics and the unlimited ways of efficiently getting from place to place, this can be shown to be significantly useful as the cameo for solutions of most all professional skill sets, work, methods & techniques.

.

I have written about these insights from the first in this blog, and the approach works and is nearly universal in its applications.

.

So, we will use the complex system of the 2nd law as a good guide, from S/F relationships and so forth, to how we can create efficient ways of getting from place to place, significantly assisting us in building efficient roads, phone & telecom cable lines, power lines, water and gas lines, etc., as well. This corresponds nearly exactly to what Uber is using to direct their drivers via the most efficient, time-saving methods in delivering passengers. And WHY that works, as well. It's all least energy manifestations, which create growth, as they have done very clearly with Uber. And this underlies, mostly unrecognized, most ways and methods/techniques. Understanding that efficiencies are LE, and thermodynamic, creates manifestly better, and virtually unlimited, ways of improving most all methods/techniques. & using the sorting methods of least energy applies to S-curves estimating savings, yielding growth events, as well.

.

As has been stated before, the differences between professionals and amateurs are thermodynamic qualities. Professionals do their tasks faster, more efficiently, with less cost, time, movement, and materials, etc., compared to amateurs, and each of these is a mathematically expressible quantity. Sadly, mathematics cannot write all of those in terms of a single expression, such as heights, weights, costs, times, etc., but deals with each piecemeal, because as yet (a la Ulam's "Math must greatly advance before it can describe complex systems.") there is not any really effective mathematical way of expressing complex systems in all of their vast, unlimited kinds and processes, too. This shows how that can be created. & it is directly related, without limits, to the solutions of the Traveling Salesman Problem, too.

.

Essentially, let's take a simple case of how to get from place to place by walking, using the shortest times/distances. Most persons who drive or walk know that there are the shortest distances, and the shortest times, which are often very close, but not often the same. A driver once told me not to go the shortest route, down Jahant Rd., but instead up the highway to Liberty Rd., which was fastest, as there were fewer stops and other such slowdowns. Time, he said, you can't buy more of. Gas and distance traveled, you can. Go the fastest route, not the shortest, if needed. This is Uber's insight, too. And professional drivers know that quite, quite well.
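The shortest-versus-fastest distinction can be sketched as a graph search. This is a toy of mine: the road names echo the anecdote above, but the graph, mileages, and minutes are invented for illustration.

```python
import heapq
from collections import defaultdict

# The same small road graph searched twice with Dijkstra's algorithm:
# once weighted by miles (shortest), once by minutes (fastest).

# (from, to): (miles, minutes)
roads = {
    ("home", "jahant"): (2.0, 8.0),    # short but slow surface road
    ("jahant", "store"): (2.0, 8.0),
    ("home", "highway"): (3.0, 4.0),   # longer but fast
    ("highway", "store"): (3.0, 4.0),
}

def best_route(start, goal, weight):
    """weight = 0 minimizes miles; weight = 1 minimizes minutes."""
    graph = defaultdict(list)
    for (a, b), costs in roads.items():
        graph[a].append((b, costs[weight]))
    heap = [(0.0, start, [start])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step in graph[node]:
            heapq.heappush(heap, (cost + step, nxt, path + [nxt]))

print(best_route("home", "store", 0))  # 4.0 miles via jahant: shortest
print(best_route("home", "store", 1))  # 8.0 minutes via highway: fastest
```

The two searches return different routes from the same map, which is exactly the driver's point: the least-energy optimum depends on which quantity you are minimizing.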

.

This is highly important for Uber's specially designed, efficient, valuable & empirically tested specific routes. There are time problems with rush hours. There are slowdowns due to accidents, road work, and many other complex system events. Bees do this all the time with their need to find the best routes to the highest-yield flowers, and to get there against winds and many, many other factors. But they can do it, and so can we. Clearly the least energy solutions are growth creators, and have, over time, a huge payoff for those using them. AKA growth and survival. That is what Uber has found, and why they have grown so very fast.

.

.

We use much the same to solve this "travelling salesman problem" (TSP) when we are walking. When we go from A to B we must follow footpaths which are allowable. We can't walk over houses, nor swim streams, but must walk around obstacles & use bridges. This is part of the complex system of the TSP, and how we best solve those, by sorting, T&E and such, is much to the point.

.

First of all, we develop rules, meaning the first choice is the least distance. We try that way out, and find it takes about 25 minutes to get there. Then we find more least energy ways, a "shortcut", which is yet another LE, 2nd law form, & save, say, 150 feet or so. So we use that way. However, it can be fraught with mud, and weather, and other problems from time to time. So we must be flexible in our routes, chosen according to conditions.

.

Thus we find the best shortcuts. I did this today, coming from the store to the library. Starting out, I knew I could go several ways, but the 2 major shortcuts were of unknown comparative lengths, and it had to be done by Trial & Error (T&E). So, instead of going down Gaston, then cutting across a parking lot and then the grounds of Baylor, I went down the street by the church and shortcut over behind a building on the one-way street. Then I saw at once that if I shortcut across TWO adjacent properties I could save even more steps. I had missed that before, which is why empirical testing is always best. Thus I saved not only 1 hypotenuse, but 3, with the 2nd one becoming two combined. A very large savings, alone.

.

The rule being: moving in a straight line, the longest hypotenuse getting from A to B on the 90-degree square grid is the best solution, if the routes are passable. & thus I could, doing it the next time, save another 25-30 steps, steps being the basic way of measuring the distances. So that passed into my LTM (memory), to try the next weekend; and for testing, I could also pace off the distances, then compare them. Again showing that we detect least energy by using comparisons. This mathematizes complex systems, does it not?
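The hypotenuse rule is Pythagorean, and the savings can be put in numbers. A sketch of mine with illustrative block sizes:

```python
import math

# Cutting the hypotenuse across a right-angle grid block saves steps
# versus walking the two legs. Block dimensions (in steps) are examples.

leg_a, leg_b = 80.0, 60.0            # steps along the two sides of a block
around = leg_a + leg_b               # the 90-degree grid route: 140 steps
diagonal = math.hypot(leg_a, leg_b)  # the shortcut: sqrt(80^2 + 60^2) = 100
print(around - diagonal)             # 40.0 steps saved on one block
```

Chaining several such diagonals, as in the walk described, compounds the savings, which is why the longest passable hypotenuse wins.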

.

Then, getting to the parking lot by that means, I found I had to cross over the parking lot and then the street, but that in fact got me further south than going by the Gaston St. way. AND it was quieter, less traffic, & more shade, too. Less dirty as well. So I crossed over the train tracks and then into the shade, saving more time than ever, too. What I thought was longer, altho more complicated, was shorter than first realized. Again, empirical testing correcting a visualized route plan. And I could steadily improve it more and more to save more and more, too. We can’t walk in straight lines, but we can approximate those by sighting a landmark which delineates the hypotenuse and walking as directly to that as possible. Yet one more method to improve times and distances.

.

Reaching the train tracks & crossing over on the street, I walked another hypotenuse over the road, when traffic allowed, and saved more time. The right turn on the grass saved another 6 feet, and then across the street to the parking lot, and then to the nearest building edge on the alley. Then down to the next main street, using the longest possible series of hypotenuses as shortcuts to get there.

.

There I had a pair of possibilities: to go south and cut over on a large parking area, or to go down and cut across another parking area, and then over a large grassy median strip and then over another hypotenuse to the main street, again.

.

Looking over to the other parking lot, I saw at once that the distance from the cut-through was shorter than the distance down between buildings. The alternate path was a shorter hypotenuse than the way I went, judging by comparing building lengths. So the route I’d chosen was shorter. AND then there was the 2nd parking lot I could shortcut through, PLUS the lengths over the road via the wide grassy median. So I’d picked the right way, crossing on a hypotenuse a number of times, instead of a longer one but once. If, however, I wanted to pick up an X-word (crossword), then I would go that other route, because the goal was different: getting an X-word AND getting to the library most efficiently. I’d lose a bit of time/distance, but get a crossword for vocab building. The goals we have clearly affect the routes used. That’s Trial and Error.

.

Then, finally walking up the road going to the library, I found it was hot in the sun, and the right side of the street was far, far shadier and the same distance. With no other needs, I could skip that stop over on the road’s other side, too. Again, goals change the routes. T&E is always goal-directed, it should be stated.

.

But then I ran into street construction problems, and solved those using the above methods, efficiently. & then handled that route ever better the next time, too. Just like the bees get better and better at finding the best routes to the flowers. Successive approximations, as the tool, there.

.

So, the shortest route was indeed the route I’d decided on, avoiding obvious confirmation bias, too. Again, efficiencies!!!

.

So I shortcutted over the rocks, saving distance but risking injury; still, it worked. Then the short hypotenuse avoiding the water, & then the longer one over the parking lot, past a building on the sidewalk, and then over the street’s hypotenuse to the end of the fencing, where I could enter the parking lot & save more than ever. & did that.

.

Then on the street, the shadier north side as above, & then down to where the shade ended; the N side had at least 200′ more shade, less heating, and then on to the crossover. The route was complex, often off sidewalks, but easier to walk to the end of it and then over. & then again over a built-up grassy median, and to the crossover, saving more time, as I didn’t need to hit the usual store on Main.

That saved more time.

.

All of this can be measured by comparison standards, mathematized, and then calculated out as the nearly ideal route to go. Then the route taken approximates that ideal as best possible, with significant time and distance savings.
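
.

The “mathematize and calculate the ideal route” step is, in standard terms, a shortest-path computation over a network of corners and cut-throughs. A minimal sketch using Dijkstra’s algorithm; the graph and its edge weights (in feet) are hypothetical:

```python
import heapq

# Sketch: the walk's network as a graph of waypoints, with Dijkstra's
# algorithm finding the least-distance route.  All distances are invented.

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: least-total-distance route through the network."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (dist + w, nxt, path + [nxt]))
    return float("inf"), []

graph = {
    "store":   {"corner": 400, "lot_cut": 350},
    "corner":  {"store": 400, "library": 500},
    "lot_cut": {"store": 350, "library": 480},
    "library": {"corner": 500, "lot_cut": 480},
}
dist, path = shortest_path(graph, "store", "library")
print(dist, path)  # → 830 ['store', 'lot_cut', 'library']
```

Here the parking-lot cut-through beats the street route by 70 feet, which is the kind of comparison the walk above made empirically.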

.

And what’s more, the more often we take the same highly time/distance-saving routes, the more we save. So if we go a route back and forth every day, then the work spent to mini-max the distance/time measures yields ever more savings of distance/time. We do MORE with less, the more we use a very efficient route. That’s true as well of most all tasks. Those we do all the time, we make the most efficient. The ones done less often or rarely won’t save as much time. So we FIRST mini-max the most used routes, methods, & ways of doing things!!!
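
.

That prioritization rule is simple arithmetic: the payoff of optimizing a route scales with how often it is walked. A sketch with hypothetical numbers:

```python
# Sketch of "mini-max the most used routes first": the same per-trip saving
# compounds far more on a frequent route.  All figures are hypothetical.

def yearly_saving_minutes(minutes_saved_per_trip, trips_per_week):
    """Total minutes saved per year from one optimized route."""
    return minutes_saved_per_trip * trips_per_week * 52

daily_route = yearly_saving_minutes(5, 14)  # twice a day, every day
rare_route  = yearly_saving_minutes(5, 1)   # once a week

print(daily_route, rare_route)  # → 3640 260
```

The identical 5-minute improvement is worth 14 times more on the daily route, so that is the one to optimize first.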

.

Then, with 10′ to kill, I found a shady area, with good winds blowing to cool down, sat there, did some writing, and then went up and down the street, staying in the S-side shady area. I could have gone over the parking lot, a bit shorter, but a climb, and it was HOT, so that was not on.

.

So down the street in the shade mostly, to the left in the shade, and then a hypotenuse through the fencing, saving more distance & time. & I was there at the goal. Showing how going from hypotenuse to hypotenuse saves lots of time.

.

All of this shows how walking can find a solution to a TSP with complex system factors going on, such as shade, less traffic, dirt, risk of being hit, and so forth. I saw a cop, waited till she’d moved on in her car, & then walked over the road on my hypotenuse and did the work there, too. & it went well.

.

A 2nd route out was shown to me by a friend, who went over to the E side many times. If I went to Canton, then I would have to hypotenuse, shortcutting over one parking lot, then another, and jump a low fence. And a bit over the grassy area on the street corner, too.

.

However, and this is the case, going east on Hickory I came to a big field where a huge hypotenuse was possible, which then led to another 400′ tract, to walk the hypotenuse. And then yet another large grassy field going to Expo Park, where on the northern side another paved hypotenuse was possible. & then over the median strip to yet another by a building, where yet another hypotenuse could be done. Instead of saving on only two shorter hypotenuses, I saved lots more on multiple ones.

.

The existence of those multiple, long hypotenuses saved about 0.4 mile, and THAT is significant, esp. if one is a bit impaired in walking.

.

So instead of two hypotenuses, one over a field and a parking area, there were THREE fields of large size, more than the other two, and that was shorter: once over the RR tracks, over another hypotenuse to the street, then over the corner of a property (which, if wet, was passable by walking by the building on concrete), and then another shorter hypotenuse, and then up the street. Walking on the W. side, which had fewer breaks, fewer problems.

.

But there was another choice there, too. Going to the Aldi’s required going through Baylor’s 3 parking lots, which saved time. But walking up to the McD’s was on the left side, with few side roads, and overall better, too.

.

So if going to the Aldi’s, then I would use those crossovers, as from there, too. & to the Dollar Store, a 3rd route became possible.

.

This LE, least-distance method allows one to save about 0.3 to 0.4 mile each way.

.

Walking from the library back to the place, the route could likewise be shortcutted at many sites, too.

.

And staying out of the heat as well means walking on the south sides going E, and the north sides going W, out of the sun.

.

These show the ways of going places, too, and how the unlimited shortcuts and least energy & time routes do the job, too. Less traveled, less traffic means less time used up. No lights, or fewer lights, as well. & that works, too.

.

& this, in cameo, is how each and every LE-efficient way of doing tasks is done by the professional. A simple, peripatetic way which gets the job done in the shortest time, effort, work, lifting, cost, and materials, with the best quality, indeed.

.

“I took the one less traveled by, And that has made all the difference.” — Robert Frost, “The Road Not Taken”.

.

Once, on my walkabout, I had a cool, breezy spot by a door. When I got there, a dairy truck drove up and the driver got out to make a delivery. Did my method of efficiencies apply to that person, and was he a professional? The answer was that the LE methods above ALSO applied to him. As I watched, he got out of the truck, went to the back, and put down the stairs (simpler than jumping up and down to get inside the truck, thus the device was LE!), and opened up the back doors. He walked inside, pulled out the dolly, and set it down on the curbside of the road. Then he pulled out 8 large plastic cartons of milk & products & set them on the wide platform at the back of the truck.

.

Down the stairs he went, took the cartons over to the sidewalk, and then shoved the dolly under the 1st 4, opened the door, and went inside with that, and then back out to get the other 4, after previously closing & locking the back door of his truck, which kept all 8 safe from theft, presumably by me, who was sitting there watching.

.

Inside the building he went, coming back once inside the locked door to get the other 4 cartons. Then, about 10′ later, he came out with EIGHT empty crates, carrying all 8 at once, as they were empty. Saving a trip, we see. Out the door and to the side of the truck with the dolly. Then he loaded up the cartons four at a time onto the platform after opening up the doors again, and put the dolly up against the bumper. Then he put the first 4, and then the other 4, inside the back, and pulled up the dolly, which he’d placed against the bumper so he didn’t have to walk down again. He locked the door and went off.

.

I noted he’d also turned off the truck engine, too, to save fuel and wear & tear.

.

Now, except for not putting the 1st four full cartons on the dolly, which could have been moved onto the sidewalk, saving a step, he’d taken the least energy shortcuts through the whole delivery task, yes? & he was a professional, that was clear. The newbie amateur would have missed most of this, but likely over time would have figured much of it out, to make more deliveries in less time and thus save his job.

.

When a vendor came into the building to restock the soda and snacks, I could see the same time- and effort-saving methods he was using to shortcut and do the job without any real problems. When I mentioned that he was efficient, that others would likely not be so, and that he was a professional, he smiled and said that was likely the case. My methods of least energy savings also applied to his work. And that, again, marks a professional, does it not? LE rules!!!!

.

That’s the empirical introspection of the method being used, which has been talked about many times before.

.

& that is how we know that our patients have real diseases & not imagined ones, too. This is esp. important in the clinical neurosciences, when dealing with real pain, weakness, & numbness, reports of which can be faked. LE and CP guide empirical introspection and avoid problems too often seen, by setting up solid CP, LE standards, which the CP creates by T&E and sorting using LE, etc., to do the work.

.

I have gone into these methods in detail in “How Physicians Create Information”, which details those processes as well, & in my article about “Empirical Introspection”, on how we know that people have real conditions and are not just fooling us.

.

The whole point of this article is to show HOW we can efficiently and accurately compare and contrast professionals in each field and how they do their work. Then, by comparing 8 or so, find out what methods they use, and then delineate, measure, and test them to improve them without limits. Adding new devices, such as online maps for driving, also shows what’s going on.

.

Thus, “How to Create a Self-Driving Car”. Or indeed ANY vehicle.

.

When we teach how to be a professional in ANY field, those specific tasks, methods, and techniques which are used by professionals, compared to amateurs, show us how to improve without limits what we are doing. It also shows us HOW and WHAT to teach students, to give them the best scientific methods, empirically tested, to teach them faster & better, & then to perform at very high levels in most all fields, without limits to improvements.

.

Would you rather be trained by a professional carpenter just showing how to do a job, or by superior methods, his and others’, which have been delineated in detail and then combined into the best ways of doing things? The effects upon our education systems could be profound. Because NOW, using these methods, we can scientifically study, improve, and get better without limits in almost all fields, be they the arts, the construction fields, police work, & indeed EVERY field; this LE method is applicable in ALL the myriad methods/techniques, devices, & technologies.

.

Without limits!!! The speed of improvement in all fields, esp. in the life-saving medical areas, would be impressive, but just a beginning. Because inevitably we create the breakout. Any society, or person in that society or group, who cannot break out of their current abstractions after a limited period of growth is doomed to stagnate. That is the S-curve, mathematically. And from here we launch into the unlimited S-curves of growth, and how those can work phenomenally to do all sorts of useful, efficient, highly growth-creating methods and technologies.

.

& this is how it’s specifically done. When a new method is created, we compare it to the older methods used: how much time, cost, materials, behaviors, actions, etc. are saved. Then we graph that on a created S-curve, comparing it to similar savings in, say, time, distance, and costs. If it’s significantly better, say at least 15%, then by the rule of 72 it will double the advantage over the others, about 2-fold, if used about 5 times (72/15 ≈ 5). That measures the slope of the S-curve, & shows the savings which can be made, mathematically, by ANY new method or technology.
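
.

The rule-of-72 claim above can be checked directly: a 15% advantage per use compounds to roughly a 2x advantage after about 72/15 ≈ 5 uses. A minimal sketch:

```python
# Sketch of the rule-of-72 arithmetic from the text: a method ~15% better
# per use doubles its cumulative advantage after about 72/15 uses.

def compounded_advantage(rate, uses):
    """Cumulative advantage after repeated compounding uses."""
    return (1 + rate) ** uses

uses_to_double = 72 / 15              # rule-of-72 estimate
print(round(uses_to_double, 1))       # → 4.8 uses
print(round(compounded_advantage(0.15, 5), 2))  # → 2.01, i.e. ~2-fold after 5 uses
```

The exact compounding (1.15^5 ≈ 2.01) agrees closely with the rule-of-72 shortcut, confirming the text’s “about 2-fold in 5 uses” figure.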

.

Thus we have created an empirical, mathematically measurable quantity to decide whether to use the method/tech and how much return to expect on our investment. It’s a relative term, not absolute, but it’s real, too. Thus for ANY new product or service, the S-curve of growth can be found and figured out. If it does the job significantly, provably, scientifically, empirically better by these measured means, then we know it’s going to go and grow. Because efficiencies create growth and drive the markets. Or, said another way, least energy methods create growth.
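
.

The S-curve of growth itself is commonly modeled as a logistic curve: slow start, rapid middle, and a plateau at the top. A minimal sketch, with hypothetical parameters rather than fitted data:

```python
import math

# Sketch of an S-curve: a logistic function whose slope reflects the
# measured efficiency advantage.  Parameters are illustrative only.

def s_curve(t, ceiling=1.0, rate=0.8, midpoint=5.0):
    """Logistic growth: slow start, rapid middle, plateau at the ceiling."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

for t in range(0, 11, 2):
    print(t, round(s_curve(t), 3))
# Adoption is near zero early, 0.5 at the midpoint, and flattens near the
# ceiling: the "peaking out at the top of the S-curve" described above.
```

A steeper `rate` (a bigger measured efficiency advantage) makes the middle of the curve climb faster, which is the graphical form of the comparison described in the text.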

.

The further implications of this, regarding the emergence of new phenomena and making the future more predictable, are innate in this S-curve concept as well.

.

Thus we can, if we study carefully, likely know what’s more likely to succeed and grow than not.

.

We must add that Steve Jobs had an intuitive sense of this, which was why he was the most successful administrator and product creator in history. Essentially, he knew both how to create an efficient product AND how to make it fun to use; thus marketing and utility combined to make the world’s most widely used and successfully marketed product, the iPhone.

.

That’s how it’s done. As far as Uber is concerned, once they realize the cost/efficiency of their methods, and that those derive directly from least energy, thermodynamic (TD) physics, then they can increase their efficiencies vastly, up to the point of diminishing returns. & then move on up again by improving and “Breaking OUT” of their older methods, which, like any method or tool, have their values and capabilities, which drive their growth, but also their limits, which cause the growth to peak out at the top of the S-curve.

.

That is what these methods portend for virtually EVERY field today, without limits and without any real end to improvements, either.

.

Understand what’s behind growth, and then use that, using the empirical and scientific methods we now have, to create huge growth in all fields, without limits.

.

That’s what these new neuroscience models promise. & the effects on education & training will be limitless: revolutionary & wealth-generating without limits, too.