By Herb Wiggins, M.D.; Clinical Neurosciences; Discoverer/Creator of the Comparison Process/CP Theory/Model; 14 Mar. 2014

Copyright © 2019

.

What do all of the above have in common? They are all complex system (Cx Sys) manifestations of Least Energy (LE) rules, in all their enormous, unlimited applicability.

.

How did Einstein do what he did? That's a deep question, but it's mainly answerable now. By using CP (the Comparison Process, a detector of LE), LE, S/F (Structure/Function) & Cx Sys processes and methods, we can cut thru the complexity and reach a better understanding of understanding, and then of events.

.

Einstein was a good visual thinker. Sagan's “Cosmos” points that out with Einstein's thought experiments. First of all, he asked what it would be like to ride on a photon. And he realized that at light speed, all time stops for the events around you, but the photon is still in process, which is time. Thus nearly everything around that photon is either processing instantly, or not processing at all. That is QM instantaneity. From the photon's frame, all outside processes appear to stop, or they occur so quickly that they're not detectable. This gave rise to a basic part of his Relativity.

.

The second visualization was: how is being in a gravitational field like being in a system accelerating at 9.8 m/sec²? The answer he gave is that we cannot detect the difference. And he used that comparison process, visualizing those two very similar states, to make another part of his Relativity. But that assumes the acceleration is regular/smooth, and in a gravitational field it's VERY regular. And that was the point.

.

However, they are NOT exactly the same. They are not quite alike, and thus the regularity of acceleration does NOT match what's going on in real events with gravity fields. That is a disparity, and from the “cognitive dissonance” ideas of Festinger, which are comparison process detections/mediated, we see this. And those disparities create scientific progress when they are seen, recognized and then largely solved. Thomas Kuhn's “The Structure of Scientific Revolutions” clearly shows that, successively. Much like the 1/3 of the expected neutrinos seen, compared to the predicted number, 3 times as many. And that led to a major advance in particle physics. CP mediated, most all of it.

.

Relativity is CP mediated. Einstein stated in his “Physics and Reality” (1936) that our understanding is based upon finding the relationships among events in existence. And that's exactly the case. IOW, the cognitive neuroscience of Comparison Processing (CP) is the finding of relationships among events, by CP of those events. Much as he did with riding on a photon compared to the events around it, comparing acceleration to gravitational fields, AND the relationship between mass and energy, the famous E = mc². So we see relativity everywhere in these cases of understanding what's going on in events within and around us, do we not?

.

Now, when we want to understand how arithmetic and most of math is created, we look for the relationships among the numbers. We count, and create information along the number line, starting in base 10 at 1, 2, and up the hierarchical organization of 1's, 10's, 100's, 1000's, 10K, 100K, 1 million, 10 million, etc. And the hierarchies are created by CP, as well.

.

So we take 2 and 8, for example, and from that create the entire set of basic arithmetic. When we count from 2 to 8, laboriously, we take 6 steps. Thus the relationship of 8 to 2 is counting up 6! And when we count back down, we get 6 also. So the relationship between 2 and 8 is 6.

.

And then we abandon counting for the addition and subtraction tables, which efficiently, LE-wise, summarize the relationships among all of the numbers on the number line. It's the first hierarchy after counting, viz. addition and subtraction, is it not? And thus we have shortcut the counting, by using Long Term Memory of the addition and subtraction tables to SAVE a lot of time over counting. Least energy, yet again, is it not? But we are not done yet: we have counting, which created information, and we have the addition/subtraction tables, which efficiently shortcut that counting and save us lots of time, too. But what of the relationship between 2 and 8 as a function of 2? We move from 2 to 8 by addition, giving 4 twos to get to 8, do we not? Thus we have more efficiently added, and counted, that 4 twos are 8! Yet another relationship, Einsteinian in form, which gives us the multiplication function, is it not? And from there we count down, multiply down, and realize that 8 divided by 2 is four!
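
The hierarchy above, counting then repeated addition then division, can be sketched in a few lines of code. This is my own illustration of the 2-and-8 example, not anything from the original text:

```python
def count_up(start, end):
    """Find the relationship between two numbers by laborious counting."""
    steps = 0
    while start < end:
        start += 1
        steps += 1
    return steps

def add_up(step, target, start=0):
    """Repeated addition: how many 'step's does it take to reach target?"""
    times = 0
    while start < target:
        start += step
        times += 1
    return times

print(count_up(2, 8))   # 6 -- counting from 2 up to 8 takes 6 steps
print(add_up(2, 8))     # 4 -- four 2's make 8, the multiplication relation
print(8 // 2)           # 4 -- division recovers the same relationship
```

Each level does the same job with fewer operations: division answers in one step what counting answered in many. That is the least energy hierarchy in miniature.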

.

As Feynman once said, in effect: if I cannot generate an outcome, then I cannot understand it. The above model generates basic mathematics, where tested so far.

.

Thus we have, by this simple means, created the “Einsteinian Relationships” which give us an understanding of adding/subtracting, then the next hierarchy of multiplying/dividing. And with the latter we can find, more quickly than by counting OR adding, that 4 times 12 gives us 48, and that 48 divided by 4 (or vice versa) gives us twelve. We shortcut the adding process and the counting processes ever more, do we not? It's all consistent. And that consistency ALSO is comparison process.

.

And what of the NEXT hierarchy? We see that by comparison processing we create the relationships of the numbers in our cortices, next to the anatomy of words. Math grows right out of word processing, in the left posterior temporal speech/language centers. And if we damage those centers by anything, the structure/function (S/F) relationship shows us that language is damaged, as well as the math. But the verbal functions came first, and are extended by math in that area, as well. This is the case. Thus we have used the S/F relationships, created and driven by CP, to understand how math works, as well as language.

.

And we can state almost everything in words, including all of math, but the converse is not true. Take 4 × 2 = 8, and 8/2 = 4: we easily use words to express virtually all of math. But math cannot speak or process language very well. “How sharper than a serpent's tooth it is to have a thankless child!” shows this one-way function. And it's humbling, too. The origins of descriptions and language/speech came first, and out of those came maths. Very clearly.

.

So we have this finding. Then what are the origins of language/verbal functions?

.

And using Einstein once more, we come to this huge epistemological implication, which works in physics and is likely true. When we take a piece of cut lumber, how do we measure it? We take a relatively fixed, stable, but arbitrary set of units to measure it, and we come up with, say, 16 inches, or about 40.6 cms. The relationship in the conversion from inches to cms. is 2.54. The ratio of 40.64 cms. over 16 inches is 2.54. Comparing those two is a ratio, a proportion, is it not? Comparing the circumference to the diameter gives pi, does it not? Comparing distance/time gives speed/velocity, and all the rest of the constants, such as E = h (Planck's constant) times nu, the frequency. For THAT relationship, and the quanta, Planck got the Nobel prize: finding once again how the energy of a photon is very precisely related to its frequency. And by simple math we can determine the wavelength of that light photon. And this works for sound, as well. It's most all about relationships, and those are detected and created by CP.
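
The point that constants are ratios, i.e. comparisons of two measurements, can be checked numerically. A small sketch of my own (the frequency value is an invented green-light example):

```python
import math

# Inches vs. centimeters: comparing the two scales recovers the constant 2.54
inches = 16.0
cms = inches * 2.54            # 40.64 cm
print(cms / inches)            # 2.54 -- the conversion constant, as a ratio

# Circumference compared to diameter gives pi
d = 3.0
circumference = math.pi * d
print(circumference / d)       # 3.14159... = pi

# Planck: E = h * nu. Comparing photon energy to frequency recovers h.
h = 6.626e-34                  # J*s, Planck's constant
nu = 5.0e14                    # Hz, roughly green light (illustrative value)
E = h * nu
print(E / nu)                  # 6.626e-34 -- Planck's constant again
```

In every case the constant falls out of a comparison of two quantities, which is the essay's claim in executable form.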

.

All of those relationships are found by CP of information. Density is the same: mass over volume, from the famous “Eureka” of Archimedes, which solved his problem and spared his life. F = G × m1 × m2 / r², the square of the distance. Again, more ratios, proportions and their relatedness to each other. And one more constant, too. IOW, comparison process writ large. It's a universal processor.

.

So, if the epistemology of Einstein is based upon finding a standard, efficient measuring scale, and its methodological limits, and its creation of data, numerical information, by those comparisons, then what of verbal descriptions? We know they are related, but exactly how was not previously known, or figured out. But it is now. When we, for instance, describe colours, we use the famous mnemonic acronym, ROY G. BIV. And by applying the relatively set, stable, fixed method of arbitrary, verbal standards of what the colours are, we can describe the colour of most everything around us using those standards. We create information by using VERBAL set standards and comparing those to all around us, and within us, too. Roses are red. So is blood. Much tree bark is brown, and so can skin be. Efficient, and widely useful, too.

.

The hugest example of this, which is confirming without limits, is that of the verbal adjectives and their characteristics. We have the base form of the adjective, such as “high”. And the extreme form, at the other end of the linear scale, “highest”: the superlative, which we can also tell because its moST “signal detection” ending in -st is nearly universal. And the middle form, “higher”? The comparative, and there it is again. It's most all CP, we see. The 1000's of words which can be used in this way are almost all comparison processed. High, higher, highest; or tall, more tall, most tall; in all the myriads of ways. All of it comparing to how big it is: is it as big as a bee, as big as a house, an elephant, or the size of a pinhead? All relative to our descriptive standard, by which we describe, and not only count and measure, which CREATES information. By using the Einsteinian methods, the verbal descriptions apply those same set, VERBAL standards, altho still arbitrary in which sounds/spellings we use. But they are efficient and stable, and thus LE events.

.

This then shows the common origins of math/measurement and verbal descriptions, and sets up a clear-cut relationship between the two, which is Einsteinian. Math measures, words describe. Both create information of each type by comparison to those measuring, or verbal, standards. Both act in the same ways, as the math arises from set standards based upon the set description standards of the words we use. Thus we have it that words/ideas/descriptions relate exactly to how numbers/maths/measurements work to create information, and are thus equivalent by the same set, fixed, stable standards, the one with words, the other with numbers. That is the Relativity of the Cortex, is it not?

.

And the CP which creates those relationships AND generates the information is one and the same for both words/descriptions and math/measurements. Is it not?

.

So if we can create basic arithmetic by relations among the numbers, using more and more efficient, hierarchically arranged counting, adding, and multiplying, then in the same way we can generate all languages and words by using arbitrary designations (set points) and the relationships among the words, or the meanings of words by sound. That is how to generate most all languages. And translation works the same way.

.


About halfway down, see this rigorous comparison of word meanings, which leads, as nearly exactly as possible, to most all of the translations.

.

There is NOT likely a universal grammar, but rather this nearly universal processor in the language cortex, which generates the arbitrary sounds and gives them meanings which are fixed, efficient and thus stable, and which creates our languages, is it not? It's simply the repeated use of the repeating cortical columns, which encode the standards of words, which creates language. AND verbal information. And numerical information, is it not?

.

Simple, complete, elegant, and highly efficient, and thus an LE explanation of the origins of language from Einstein's epistemology, which is altogether true in physics and thus empirically the case, is it not?

.

That's what we see by looking at Einstein's brain/mind, the structure/function of an engineer's mind and how it worked, is it not? Incredible abilities to visualize events. His father was an electrical engineer, as was his uncle. He sat for hours at a time in the Swiss patent office, examining designs and models, and figured out how those worked, visually, to see if they were original enough to get a patent. By a massive comparison of what he was testing for originality against the hierarchically arranged classifications of the patented designs/mechanisms. The Relativity of the Cortex, indeed!

.

Now we can move on to the higher hierarchies of math. The exponents, which take massive numbers and express them most easily and efficiently: 10^x, or another form of it, “e”. And then the scientific notations, which likewise arise, and which, far more efficiently than counting, adding, or multiplying, allow us to handle large numbers in an LE way, is it not? LE rules!

.

And the hierarchies of the exponents? 10^x, and then 10^2, 10^3, 10^4, etc., thru those hierarchies; and then 10^x raised to the power y, to the power z, until we get a googolplex. All hierarchies, all efficient, all generation of information with measurement, and that is numerical modeling and description.
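
The least energy claim about exponents is easy to make concrete: the cost of writing a number out in full grows with the number, while the exponent form stays tiny. A quick sketch of my own:

```python
# A googol written in full vs. in exponent form
n = 10 ** 100
plain = str(n)                     # "1" followed by 100 zeros
compact = "10**100"                # the exponent form of the same number

print(len(plain), len(compact))    # 101 characters vs. 7
```

Seven symbols do the work of a hundred and one, which is the efficiency hierarchy the paragraph describes.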

.

Now, with the adjectives, we get a linear system, do we not? High, higher, highest, and low, lower, lowest. It's easily convertible to a number line, is it not? And that, dear readers, is HOW math is created. We create relationships along the number line, and then use that linear scale to measure high, higher, highest and low, lower, lowest. Or smallest, smaller, small, up to big, bigger, biggest, and so forth. Then we assign the numbers by measurement, and thus we see what's going on. That is the mathematics of the adjectives, is it not? The exact correspondence between verbal, linear descriptions and the more precise, more efficient numbers is what allows us, using math, to more accurately describe events. Thus the utility and value of math. It does many jobs better, more exactly, and thus with increased information. And thus, due to Shannon's foundational IT concept, entropy declines, as the description holds MORE information with math than with the verbal. Thus yet another hierarchy, is it not?
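
The two-way correspondence between the adjective scale and the number line can be sketched directly. This is a toy illustration of my own; the objects, heights, and cutoffs are invented:

```python
# Measured heights in meters (invented examples)
heights_m = {"shed": 3.0, "house": 8.0, "tower": 55.0}

def describe(h, low=5.0, high=30.0):
    """Convert a precise measurement back into the coarse verbal scale."""
    if h < low:
        return "low"
    if h > high:
        return "highest"   # the superlative, the extreme end of the line
    return "higher"        # the comparative, the middle form

for name, h in sorted(heights_m.items(), key=lambda kv: kv[1]):
    print(name, h, describe(h))
```

The numbers carry more information than the three-word scale (8.0 m distinguishes the house from a 25 m building; "higher" does not), which is the Shannon point in the paragraph.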

.

It's that simple. It's nearly universal, and it applies to most all information, be it verbal or numerical, be it description or measurement.

.

How physicians create information shows this very, very clearly once more: how we read most all radiological images by standard, set rules of what is normal versus what is not. It's a universal visual processor, visualizing the standards and then applying them, comparing them in each case, to chest X-rays, angiograms and venograms, CT and MRI images, and so forth.

.

And how do we figure out the S/F relationships in functional MRI (fMRI)? We set up the baseline resting image first, and then activate part of the brain, and see what lights up, BY Comparison Processing! Every single image read is CP at least twice. We see the resting image, and we know it's normal; then we do the activation of the brain and see by CP where it's located, in all the myriad ways, without limits. Simple, efficient, highly descriptive, and yet highly likely to be the case. And if we also then compare the fMRI to the magnetoencephalogram (MEG)? We gain ever more information creation as well. Triply so! Thus does CP create information, visual and verbal, in radiology. And the reading of most all images is largely CP against set, fixed, stable (efficient) standards, too.
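
The baseline-versus-activation comparison can be shown in a toy form. This is my own minimal sketch, not a real imaging pipeline: a 3×3 "image" of invented values, where whatever differs from rest beyond a threshold "lights up":

```python
# Toy resting and activated "images" (invented values, not real fMRI data)
resting   = [[10, 10, 11],
             [10, 12, 10],
             [11, 10, 10]]
activated = [[10, 10, 11],
             [10, 25, 10],
             [11, 10, 22]]

THRESHOLD = 5   # arbitrary cutoff for "lights up"

# Compare the two images voxel by voxel; keep coordinates that changed
lit = [(r, c)
       for r in range(3) for c in range(3)
       if activated[r][c] - resting[r][c] > THRESHOLD]

print(lit)   # [(1, 1), (2, 2)] -- the activated voxels, found purely by comparison
```

Nothing in the activated image means anything by itself; the information comes entirely from the comparison against the resting standard, which is the point of the paragraph.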

.

And this realization shows us how to create AI to read images. We use the expert skill sets of radiologists to show us what standards they use as normal, and how, in each set of skills, they interpret (compare) the images. And once we have those standards, we program the computer by recognition methods (Bayesian statistics) to abide by and use those exacting standards. That necessarily creates AI for image reading. And it saves the radiologists a lot of time on CXR's, too! More income!

.

Thank You, Albert!!

.


Now, how do we get mountain passes from Einstein? That's simple. We use the LE, CP standard. Why do we preferentially use mountain passes, rather than other routes? Because those are least energy, and often very time saving. We climb the least, and expend the least amount of energy. The pass is near areas which are convenient and easy (LE) to get to. And we know, by crossing them even without measurements, that it takes a lot less time and energy to climb up them and then walk or travel down them.

.

This is what opened up the far Kentucky and Tennessee areas: the Cumberland Gap, known to the Indians, which went along the Cumberland River, which had cut a deep channel through the Appalachian Mountains into the interior. And it's least energy, too. We know by verbal descriptions the energy taken crossing into those areas: the reduced cost in time, food, and materials, and the wearing out of the wagons and horses. And when we measured the other ways to get over the mountains? By math we found it was, comparatively, the lowest way, as well. Thus the math precisely confirmed the routes taken, because the altitude numbers were lower. The energy was lower; the cost in terms of wagons and time was less. It's all been there for 1000's of years, from the Khyber Pass to Berthoud Pass, from the South Pass at the end of the Wind River Range to the passes through Nevada and the Sierras. From Echo Lake Pass to the lower I-80 pass via Donner Lake and Summit. All the same, in all the myriads of duplicating, equivalent ways, worldwide. The nearly universal processor: least energy, least energy, least energy rules. Detected and used by CP, nearly universally, we see. First verbally described, and then more accurately described by mathematics, which confirms in most all cases that those passes ARE the shortcuts, the shortest, easiest, most efficient routes, is it not?
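
The pass-finding comparison, verbal first, then confirmed by altitude numbers, can be sketched as a comparison of total climb. The elevation profiles below are invented for illustration only:

```python
# Two candidate routes as elevation profiles in meters (invented numbers)
routes = {
    "over the ridge": [300, 900, 1600, 900, 300],
    "through the gap": [300, 450, 600, 450, 300],
}

def total_climb(profile):
    """Sum only the uphill segments -- the energy actually spent climbing."""
    return sum(max(b - a, 0) for a, b in zip(profile, profile[1:]))

for name, profile in routes.items():
    print(name, total_climb(profile), "m of climb")
# the gap wins: 300 m of climb vs. 1300 m -- the least-energy route
```

The traveler's verbal judgment ("the gap is easier") and the measured comparison pick the same route, which is the confirmation the paragraph describes.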

.

And it’s Einsteinian, too.

.

Now we look at weather radars, for more. What's the use of radar in detecting storms of all sorts? The basic concept is that of visual tracking, which is a series of comparison processes of what happens to events over time. When we see a bird flying, we see it move, as our visual systems sample each image every 100 msec. or so, and then begin to compare them automatically, because doing so has been built into animals' visual systems for at least 300 million years. When we see a walker coming down the sidewalk or trail, we use the same CP to detect what direction they are moving, and whether they are getting larger and larger, CP, or smaller and smaller, and thus moving towards or away from us. It's most all CP. It's done by most all the animals, as well. It's a universal method for understanding events. It's how the birds get from place to place and catch bugs in flight, too. It's how the insects do the same. It's tracking: from the clouds moving against the side of a building, or relative to a telephone pole, we detect the movements of the clouds by entirely arbitrary, but fixed and stable, standards, do we not? And the illusions are created by such arbitrary standards, as well.

.

So this relates to radar, because we can see the weather images coming in, updated from time to time, and how THOSE compare gives us the direction the clouds and storms are going. We detect, and have learned by T&E (trial and error), what constitutes rain and other phenomena. Then we use those visual standards to SEE what's going on. Is the storm moving towards us or not? If we mark the edge of the image at a certain town, then watch as the storm moves to the edge of another town, we simply do a distance/time and we get the speed of the approach. Then we do a simple measurement of how far that set standard, Einsteinian point is from where we are, using D = Rate × Time, a ratio, a proportion, an algebraic expression, a Comparison Process all of it, and that gives us how much time we have before the storm gets to us, is it not?
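
That radar arithmetic is just two divisions. A minimal sketch of my own, with invented distances and times:

```python
# Speed from two timed radar positions (all numbers are invented examples)
miles_moved = 15.0     # storm edge moved from town A to town B between images
hours_taken = 0.5      # time between the two radar images
speed_mph = miles_moved / hours_taken        # 30 mph, by comparing two images

# D = Rate x Time, solved for time until the storm reaches us
distance_to_us = 45.0                        # miles from the storm edge to us
hours_until_arrival = distance_to_us / speed_mph
print(speed_mph, hours_until_arrival)        # 30.0 mph, 1.5 hours of warning
```

Two compared images give the rate; one more comparison against our own position gives the warning time. That is the predictive control the text claims.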

.

So we see the visual image, mathematize the verbal description, and thus get more precise information about it reaching us. That's how it's done. Visual tracking becomes a mathematical expression by the creative means of seeing the comparison (the Einsteinian relationship) between distance, time and speed. Again, CP; again, description and measurement equivalences. And that's how math is created, in a practical sense, from verbal descriptions. And it gives us predictive control over storms coming in, tornadoes being seen, hurricanes' speeds, and so forth. All of it Einsteinian: relatively fixed, set standards being used. And we can warn of such dangers and prevent a lot of damage, deaths, and costs, too. It's efficient for survival. And thus moral, too.

.

In this statement we see a most profound insight into math creativity as well: “Any society (or group in a society) which cannot break out of its current abstractions, after a limited period of growth (LE growth), is doomed to stagnate.” It's the “S-curve of growth” case, converted into mathematics from an easy visualization.

.

This shows how practical, empirical maths are created. Note, in conclusion, that Einstein's Relativity was largely verbal, with some math in it. And it took Einstein and Minkowski to cast that earliest, largely descriptive version into 4-dimensional Space/Time math models. Those could then be tested more precisely by measurements. The ideas, words, and descriptions came first, and THEN the math followed, by comparison process of the outcomes of the math, to follow more accurately the verbal description, which anatomically and evolutionarily PRECEDED the math.

.

We can make do without math. We cannot make do without languages. We do MUCH better, though, with the efficiencies of math than without them, as the mountain pass confirmations by altitude measurements show.

.

And that, dear readers, is how verbal descriptions create information, and how measurement creates information, and how BOTH, in an Einsteinian way, are related very exactly, ontologically and neuroscientifically, to each other.

.

With simplicity. CP creates the LE outputs. Then the S/F relationship ties them further together, followed by the higher, nearly universal applications of complex system descriptions, which are far, far better than the linear, as well.

.

And as Ulam stated, math must greatly advance if it's to describe complex systems. But we can do that, too, NOW that we know how math creativity is, in part, created. Now that we know how to create creativity, and understand understanding, and then think about thinking, and then think about that, too.

.

And it's most all from the great, intuitive neuroscience visualizer, Dr. Albert Einstein, who showed us very likely how it works: from words to description, from the creation of maths in the verbal centers of the brain, to the CP creating our empirical mathematics. Simple, elegant, nearly universal; and because we can see the relationships of the languages to each other, and how those relate to math, a higher degree of understanding, as well.

.


Simple, elegant, explains a LOT with a little: CP, LE rules with S/F, and complex systems.

Walking Shortcuts, Or How to Create Unlimited Professional Growth Methods

.

By Herb Wiggins, M.D.; Clinical Neurosciences; Discoverer/Creator of the Comparison Process/CP Theory/Model; 14 Mar. 2014

Copyright © 2019

.

Using simple walking routes, peripatetics, and the unlimited ways of efficiently getting from place to place, this can be shown, significantly and usefully, to be the cameo case for solutions in most all professional skill sets, work, methods, and techniques.

.

I have written about these insights from the first in this blog, and they work and are nearly universal in application.

.

So we will use the complex system of the 2nd law as a good guide, via S/F relationships and so forth, to how we can create efficient ways of getting from place to place, significantly assisting us in building efficient roads, phone and telecom cable lines, power lines, water and gas lines, etc., as well. This corresponds nearly exactly to what Uber is using to direct its drivers via the most efficient, time-saving methods in delivering passengers. And WHY that works, as well. It's all least energy manifestations, which create growth, as they have done very clearly with Uber. And this underlies, mostly unrecognized, most ways and methods/techniques. Understanding that efficiencies are LE, and thermodynamic, creates manifestly better and virtually unlimited ways of improving most all methods/techniques. And using the sorting methods of least energy applies to S-curves estimating savings, yielding growth events, as well.

.

As has been stated before, the differences between professionals and amateurs are thermodynamic qualities. Professionals do their tasks faster, more efficiently, with less cost, time, movement, and material, etc., compared to amateurs, and each of these is a mathematically expressible quantity. Sadly, the mathematics cannot write all of those in terms of a single expression, covering heights, weights, costs, times, etc., but deals with each piecemeal, because as yet (a la Ulam's “Math must greatly advance before it can describe complex systems.”) there is not any really effective mathematical way of expressing complex systems in all of their vast, unlimited kinds and processes. This shows how that can be created. And it is directly related, without limits, to the solutions of the Traveling Salesman Problem, too.

.

Essentially, let's take a simple case: how to get from place to place, walking, using the shortest times/distances. Most persons who drive or walk know that there are shortest distances and shortest times, which are often very close, but not often the same. A driver once told me not to go the shortest route, down Jahant Rd., but up the highway to Liberty Rd., which was fastest, as there were fewer stops and other such slowdowns. Time, he said, you can't buy more of. Gas and distance traveled, you can. Go the fastest route, not the shortest, if needed. This is Uber's insight, too. And professional drivers know that quite, quite well.
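
The shortest-versus-fastest distinction is exactly a change of edge weights on the same road graph. A toy sketch of my own (the road names echo the anecdote above, but all the mileage and minute figures are invented):

```python
import heapq

# Each edge: (next node, miles, minutes) -- all numbers invented for illustration
edges = {
    "home":    [("jahant", 4.0, 12.0), ("highway", 6.0, 7.0)],
    "jahant":  [("liberty", 3.0, 10.0)],
    "highway": [("liberty", 4.0, 5.0)],
    "liberty": [],
}

def best(weight_index):
    """Dijkstra from 'home' to 'liberty', minimizing miles (0) or minutes (1)."""
    heap = [(0.0, "home", ["home"])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == "liberty":
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, miles, mins in edges[node]:
            w = (miles, mins)[weight_index]
            heapq.heappush(heap, (cost + w, nxt, path + [nxt]))

print(best(0))  # (7.0, ['home', 'jahant', 'liberty'])  -- shortest in miles
print(best(1))  # (12.0, ['home', 'highway', 'liberty']) -- fastest in minutes
```

Same search, same graph, different comparison standard, different best route, which is the driver's (and Uber's) point.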

.

This is highly important for Uber's specially designed, efficient, valuable, and empirically tested specific routes. There are time problems with rush hours. There are slowdowns due to accidents, road work, and many other complex system events. Bees deal with this all the time in their need to find the best routes to the highest-yield flowers, and how to get there against winds and many, many other factors. But they can do it, and so can we. Clearly the least energy solutions are growth creators, and over time have a huge payoff for those using them. AKA growth and survival. That is what Uber has found, and why they have grown so very fast.

.


We use much the same to solve the “travelling salesman problem” (TSP) when we are walking. When we go from A to B we must follow footpaths which are allowable. We can't walk over houses, nor swim streams, but must walk round obstacles and use bridges. This is part of the complex system of the TSP, and how we best solve those, by sorting, T&E, and such, is much to the point.

.

First of all, we develop rules, meaning the first choice is the least distance. We try that way out, and find it takes about 25 min. to get there. Then we find more least energy ways, a “shortcut”, which is yet another LE, 2nd law form, and save, say, 150 ft. or so. So we use that way. However, it can be fraught with mud, and weather, and other problems from time to time. So we must be flexible in our routes, chosen according to conditions.

.

Thus we find the best shortcuts. I did this today, coming from the store to the library. Starting out, I knew I could go several ways, but the 2 major shortcuts were of unknown comparative lengths, and it had to be done by Trial & Error (T&E). So, instead of going down Gaston, then cutting across a parking lot and then the grounds of Baylor, I went down the street by the church and shortcut over behind a building on the one-way street. Then I saw at once that if I shortcut across TWO adjacent properties I could save even more steps. I had missed that before, which is why empirical testing is always best. Thus I saved not only 1 hypotenuse, but 3, with the 2nd one becoming two combined. A very large savings, alone.

.

The rule being: moving in a straight line, the longest hypotenuse getting from A to B on the 90 deg. square grid is the best solution, if the routes are passable. And thus I could, doing it the next time, save another 25-30 steps, those being the basic way of measuring distances. So that passed into my LTM (long term memory), to try the next weekend; and for testing I could also pace off the distances, then compare them. Again showing that we detect least energy by using comparisons. This mathematizes complex systems, does it not?
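
The hypotenuse rule is just the Pythagorean comparison in walking form. A minimal sketch (my own illustration; the block dimensions are invented, chosen to make a 3-4-5 triangle):

```python
import math

# Walking around a city block vs. cutting the diagonal (invented dimensions)
legs = (120.0, 90.0)                 # feet along the two sides of the grid
grid_walk = legs[0] + legs[1]        # 210 ft around the corner
hypotenuse = math.hypot(*legs)       # 150 ft straight across the lot
saving = grid_walk - hypotenuse

print(hypotenuse, saving)            # 150.0 ft across, 60.0 ft saved
```

The triangle inequality guarantees the diagonal always wins when it's passable, which is why chaining the longest possible hypotenuses is the least energy rule.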

.

Then, getting to the parking lot by that means, I found I had to cross over the parking lot and then the street, but that in fact got me further south than going by the Gaston St. way. AND it was quieter, with less traffic and more shade, too. Less dirty as well. So I crossed over the train tracks and then into the shade, saving more time than ever, too. What I thought was longer, altho more complicated, was shorter than first realized. Again, empirical testing, correcting a visualized route plan. And I could steadily improve it more and more, to save more and more, too. We can't walk in straight lines, but we can approximate them by sighting a landmark which delineates the hypotenuse, and walking as directly to it as possible. Yet one more method to improve times and distances.

.

Reaching the train tracks and crossing over on the street, I walked another hypotenuse over the road, when traffic allowed, and saved more time. The right turn on the grass saved another 6 feet, and then across the street to the parking lot, and then to the nearest building edge on the alley. Then down to the next main street, using the longest possible series of hypotenuses as shortcuts to get there.

.

There I had a pair of possibilities: to go south and cut over on a large parking area, or to go down and cut across another parking area, and then over a large grassy median strip, and then over another hypotenuse to the main street again.

.

Looking over to the other parking lot, I saw at once that the distance from the cut-thru was shorter than the distance down between the buildings. The alternate path was a shorter hypotenuse than the way I went, judging by comparing building lengths. So the route I'd chosen was shorter. AND then there was the 2nd parking lot I could shortcut thru, PLUS the lengths over the road via the wide grassy median. So I'd picked the right way, crossing on hypotenuses a number of times, instead of on a longer one but once. If, however, I'd wanted to pick up a crossword (X-word), then I would go that route, because the goal was different: getting a crossword AND getting to the library most efficiently. I'd lose a bit of time/distance, but get a crossword for vocab building. The goals we have clearly affect the routes used. That's trial and error.

.

Then, finally walking up the road going to the library, I found it was hot in the sun, and the right side of the street was far, far shadier and the same distance. Having no other needs, I could skip that spot over on the road's other side, too. Again, goals change the routes. T&E is always goal directed, it should be stated.

.

But then I ran into a street construction problem, and solved that using the above methods, efficiently. And then, the next time on that route, ever better, too. Just like the bees get better and better at finding the best routes to the flowers. Successive approximations, as the tool, there.

.

So, the shortest route was indeed the route I'd decided on, avoiding obvious confirmation bias, too. Again, efficiencies!

.

So I shortcut over the rocks, saving distance but risking injury; still, it worked. Then the short hypotenuse avoiding the water, and then the longer one over the parking lot, past a building on the sidewalk, and then over the street's hypotenuse to the end of the fencing, where I could enter the parking lot and save more than ever. And I did that.

.

Then on the street, the shadier north side as above. Then down to where the shade ended: for at least 200 feet more, the N side had more shade and less heating, and then on to the crossover. The route was complex, often off the sidewalks, but easier to walk to the end of it and then cross. Then again over a built-up grassy median, and to the crossover, saving more time, as I didn't need to stop at the usual store on Main.

That saved more time.

.

All of this can be measured by comparison standards, mathematized, and then calculated out as the nearly ideal route. The route actually taken then approximates that ideal as closely as possible, with significant time and distance savings.
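The core arithmetic of those hypotenuse shortcuts can be sketched directly. A minimal illustration (the lot dimensions here are invented for the example, not taken from the walk above): cutting the diagonal of a rectangular lot saves the difference between the two legs and the hypotenuse.

```python
import math

def hypotenuse_saving(leg_a: float, leg_b: float) -> float:
    """Distance saved by walking the diagonal of a rectangular
    lot instead of its two sides (same units in, same units out)."""
    return (leg_a + leg_b) - math.hypot(leg_a, leg_b)

# Hypothetical 300 ft x 400 ft parking lot:
# two sides = 700 ft, diagonal = 500 ft, saving = 200 ft.
print(hypotenuse_saving(300, 400))  # → 200.0
```

Several such diagonals chained along a route, as described above, are what add up to the 0.3 to 0.4 mile savings mentioned later.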

.

And what's more, the more often we take the same highly time/distance saving routes, the more we save. If we go a route back and forth every day, then the work spent to mini-max the distance/time measures pays back ever more savings of distance and time. We do MORE with less, the more we use a very efficient route. That's true of most all tasks as well. Those we do all the time, we make the most efficient. The ones done less often, or rarely, won't save us much time. So we FIRST mini-max the most used routes, methods, and ways of doing things!!!
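That amortization argument can be put in one line of arithmetic. A hedged sketch (all the numbers below are invented for illustration): a fixed, one-time optimization effort pays back in proportion to how often the route is actually walked, which is exactly why the most-used routes should be mini-maxed first.

```python
def net_saving(per_trip_saving_min: float,
               trips_per_week: int,
               weeks: int,
               optimization_cost_min: float) -> float:
    """Total time saved (minutes) after subtracting the one-time
    trial-and-error effort spent finding the better route."""
    return per_trip_saving_min * trips_per_week * weeks - optimization_cost_min

# Saving 5 min/trip, 10 trips/week, over 4 weeks, after
# spending 30 min of trial and error finding the shortcut:
print(net_saving(5, 10, 4, 30))  # → 170.0
```

For a route walked only once, the same 30 minutes of searching would net a loss; frequency of use decides where the optimization effort goes.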

.

Then I had 10 minutes to kill, found a shady area with good winds blowing to cool down, sat there, did some writing, and then went up and down the street, staying in the shady area on the S side. I could have gone over the parking lot, a bit shorter, but it involved a climb, and it was HOT, so that was not on.

.

So down the street, mostly in the shade, to the left in the shade, and then a hypotenuse through the fencing, saving more distance and time. And I was there at the goal. Showing how going from hypotenuse to hypotenuse saves lots of time.

.

All of this shows how walking can find a solution to a TSP with complex system operators in play, such as shade, traffic, dirt, risk of being hit, and so forth. I saw a cop, waited till she'd moved on in her car, then walked over the road on my hypotenuse and did the work there, too. It went well.
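Those "complex system operators" can be folded into a route model as edge weights. A minimal, hypothetical sketch (the graph, distances, and penalty factors are all invented, not measured from the walk): each segment's cost is its raw distance scaled by sun-exposure and traffic-risk multipliers, and Dijkstra's algorithm then picks the least-cost path, just as the walker does by trial and error.

```python
import heapq

def best_route(graph, start, goal):
    """Dijkstra's least-cost path. graph: {node: [(neighbor, cost), ...]}"""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, c in graph.get(node, []):
            nd = d + c
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(pq, (nd, nbr))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

def cost(distance_ft, sun=1.0, traffic=1.0):
    """Effective cost: raw distance scaled by sun and traffic penalties (>= 1.0)."""
    return distance_ft * sun * traffic

# Hypothetical corner: two sunny sidewalk legs vs. a shady lot diagonal.
graph = {
    "A": [("B", cost(700, sun=1.3)),   # sunny sidewalk legs, penalized
          ("B", cost(500, sun=1.0))],  # shady diagonal hypotenuse
    "B": [("Library", cost(200, sun=1.0))],
}
print(best_route(graph, "A", "Library"))  # → (['A', 'B', 'Library'], 700.0)
```

Changing the goal (say, adding a crossword stop) or the weather simply changes the weights, and the least-energy route changes with them, which is the point made above.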

.

A 2nd route out was shown me by a friend, who went over to the E side many times. If I went to Canton, then I would have to hypotenuse, shortcutting over a single parking lot, and then another, and jump a low fence. And cut a bit over the grassy area on the street corner, too.

.

However, and this is the case, going east on Hickory led to a big field where a huge hypotenuse was possible, which then led to another 400-foot tract where I could walk the hypotenuse. And then yet another large grassy field going to Expo Park, where on the northern side another paved hypotenuse was possible. Then over the median strip to yet another, by a building, where yet another hypotenuse could be done. Instead of saving only on two shorter hypotenuses, I saved lots more on multiple ones.

.

The existence of those multiple, long hypotenuses saved about 0.4 mile, and THAT is significant, esp. for someone a bit impaired in walking.

.

So instead of two hypotenuses, one over a field and one over a parking area, there were THREE fields of large size, more than the other two, and that route was shorter: once over the RR tracks, over another hypotenuse to the street, then over the corner of a property (which, if wet, was still passable by walking by the building on concrete), then another, shorter hypotenuse, and then up the street, walking on the W side, which had fewer breaks and fewer problems.

.

But there was another choice there, too. Going to the Aldi's required going through Baylor's 3 parking lots, which saved time. But walking up to the McD's, the left side, with few side roads, was overall better, too.

.

So if going to the Aldi's, I would use those crossovers, coming from there, too. And to the Dollar Store, a 3rd route became possible.

.

This method of LE, least distance, allows one to save about 0.3 to 0.4 mile each way.

.

Walking from the library back to the place could likewise be shortcut at many sites, too.

.

And staying out of the heat as well means walking on the south sides going E and the north sides going W, out of the sun.

.

These show the ways of going places, too, and how the unlimited shortcuts and least energy and time routes do the job. Less traveled and less traffic means less time used up. No lights, or fewer lights, as well. And that works, too.

.

And this, in cameo, is how each and every LE-efficient way of doing tasks is done by the professional: a simple, peripatetic way which gets the job done in the shortest time, effort, work, lifting, and cost, with the fewest materials and best quality, indeed.

.

“I took the one less traveled by, And that has made all the difference.” — Robert Frost.

.

Once, on my walkabout, I had a cool, breezy spot by a door. When I got there, a dairy truck drove up and the driver got out to make a delivery. Did my method of efficiencies apply to that person, and was he a professional? The answer was that the LE methods above ALSO applied to him. As I watched, he got out of the truck, went to the back, put down the stairs (simpler than jumping up and down to get inside the truck, thus the device was LE!), and opened up the back doors. He walked inside, pulled out the dolly, and set it down on the curbside of the road. Then he pulled out 8 large plastic cartons of milk and products and set them on the wide platform at the back of the truck.

.

Down the stairs he went, took the cartons over to the sidewalk, shoved the dolly under the 1st 4, opened the door, and went inside with those, and then came back out to get the other 4, after previously closing and locking the back door of his truck. That made 8 in all, and prevented thefts, presumably by me, who was sitting there watching.

.

Inside the building he went, coming back once inside the locked door to get the other 4 cartons. Then about 10 minutes later, he came out with EIGHT empty crates, carrying all 8 at once, as they were empty. Saving a trip, we see. Out the door and to the side of the truck with the dolly. Then he loaded the crates four at a time onto the platform after opening the doors again, and leaned the dolly against the bumper. He put the first 4, then the other 4, inside the back, and pulled up the dolly, which he'd placed against the bumper so he didn't have to walk down again. He locked the door and went off.

.

I noted he'd also turned off the truck engine, too, to save fuel, wear, and tear.

.

Now, except for not putting the 1st four full cartons on the dolly, which could have been moved onto the sidewalk, saving a step, he'd taken the least energy shortcuts through the whole delivery task, yes? And he was a professional, that was clear. The newbie amateur would have missed most of this, but would likely figure much of it out over time, to make more deliveries in less time and thus save his job.

.

When a vendor came into the building to restock the soda and snacks, I could see the same time- and effort-saving methods he was using to shortcut and do the job without any real problems. When I mentioned that he was efficient, that others would likely not be so, and that he was a professional, he smiled and said that was likely the case. My methods of least energy savings also applied to his work. And that, again, marks a professional, does it not? LE rules!!!!

.

That's the empirical introspection of the method being used, which has been talked about many times before.

.

And it's how we know that our patients have real diseases, not imagined ones, too. This is esp. important in the clinical neurosciences, when dealing with real pain, weakness, and numbness, reports of which can be faked. LE and CP guide empirical introspection and avoid problems too often seen, by setting up solid CP/LE standards, which the CP creates by T&E and sorting using LE, etc., to do the work.

.

I have gone into these methods in detail in "How Physicians Create Information", which details those processes as well, and in my article on "Empirical Introspection", about how we know that people have real conditions and are not just fooling us.

.

.

.

The whole point of this article is to show HOW we can efficiently and accurately compare and contrast professionals in each field, and how they do their work. Then, by comparing 8 or so of them, find out what methods they use, and then delineate, measure, and test those methods to improve them without limits. Adding new devices, such as online maps for driving, also shows what's going on.

.

Thus, "How to Create a Self Driving Car". Or indeed ANY vehicle.

.

.

When we teach how to be a professional in ANY field, those specific tasks, methods, and techniques which are used by professionals, compared to amateurs, show us how to improve what we are doing without limits. It also shows us HOW and WHAT to teach students, to give them the best scientific methods, empirically tested, to teach them faster and better, and then to perform at very high levels in most all fields, without limits to improvements.

.

Would you rather be trained by a professional carpenter just showing how to do a job, or by superior methods, his and others', which have been delimited, delineated in detail, then combined into the best ways of doing things? The effects upon our education systems could be profound. Because NOW, using these methods, we can scientifically study, improve, and get better without limits in almost all fields, be they the arts, the construction fields, or police work; in EVERY field, this LE method is applicable, in ALL the myriad ways of methods/techniques, devices, and technologies.

.

Without limits!!! The speed of improvement in all fields, esp. in the life-saving medical areas, would be impressive, but just. Because inevitably we create the break out. Any society, or person or group in that society, who cannot break out of their current abstractions after a limited period of growth is doomed to stagnate. That is the S-curve, mathematically. And from here we launch into the unlimited S-curves of growth, and how those can work phenomenally to drive all sorts of useful, efficient, highly growth-creating methods and technologies.

.

And this is how it's specifically done. When a new method is created, we compare it to the older methods used: how much time, cost, materials, behaviors, actions, etc. are saved. Then we graph that on a created S-curve, comparing it to similar savings in, say, time, distance, and costs. If it's significantly better, say at least 15%, then by the rule of 72 it will roughly double its advantage over the others if used about 5 times. That measures the slope of the S-curve, and shows the savings which can be made, mathematically, by ANY new method or technology.
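The rule-of-72 claim above is easy to check numerically. A short sketch (the 15% figure is just the example used in the text): 72/15 gives roughly 5 periods to double, and direct compounding of a 15% per-use advantage agrees.

```python
def periods_to_double_rule72(rate_pct: float) -> float:
    """Rule-of-72 estimate of how many periods are needed to
    double at a given compounding rate (percent per period)."""
    return 72.0 / rate_pct

def advantage_after(rate_pct: float, periods: int) -> float:
    """Exact compounded advantage factor after n periods."""
    return (1.0 + rate_pct / 100.0) ** periods

print(periods_to_double_rule72(15))      # → 4.8 periods
print(round(advantage_after(15, 5), 2))  # → 2.01, i.e. about 2x after 5 uses
```

So a method that is 15% more efficient, applied repeatedly, compounds to roughly a 2-fold advantage after 5 uses, which is the slope measurement described above.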

.

Thus we have created an empirical, mathematically measurable quantity by which to decide whether to use the method/tech, and how much return to expect on our investment. It's a relative measure, not absolute, but it's real, too. Thus for ANY new product or service, the S-curve of growth can be found and figured out. If it does the job significantly, provably, scientifically, empirically better by these measured means, then we know it's going to go and grow. Because efficiencies create growth and drive the markets. Or, put another way, least energy methods create growth.

.

The further implications of this, regarding the emergence of new phenomena and making the future more predictable, are innate in this S-curve concept as well.

.

Thus we can, if we study carefully, likely know what's more likely to succeed and grow than not.

.

We must add that Steve Jobs had an intuitive sense of this, which was why he was the most successful administrator and product creator in history. Essentially he knew both how to create an efficient product AND how to make it fun to use; thus marketing and utility combined to make the world's most widely used and successfully marketed device, the iPhone.

.

That's how it's done. As far as Uber is concerned, once they realize the cost/efficiency of their methods, and that these derive directly from least energy, TD physics, then they can increase their efficiencies vastly, up to the point of diminishing returns. Then they can move up again by improving and "Breaking OUT" of their older methods, which, like any method or tool, have their values and capabilities, which drive their growth, but also their limits, which cause the growth to peak out at the top of the S-curve.

.

That is what these methods portend for virtually EVERY field today: no limits, and no real end to improvements, either.

.

Understand what's behind growth, and then use that, using the empirical and scientific methods we now have, to create huge growth in all fields, without limits.

.

That's what these new neuroscience models here promise. The effects on education and training will be limitless as well: revolutionary, and wealth-generating without limits, too.


The problem with the Inmarsat data is that the debris finds on Reunion, Mauritius, and Madagascar were simply TOO close together to have come from the southeast Indian Ocean. The intersect point is likely real, but the pilot may have interfered with the outputs to the Inmarsat unit, which could account for these obvious facts of the process. A debris field will necessarily fan out over time; it's a TD process of diffusion. Thus, had the crash site been too far from Mauritius, Reunion, NE Madagascar, and Tanzania, the debris would NEVER have been seen there in such concentration: it would have spread out very greatly, and very little if anything would have been found on Reunion, Madagascar, and Mauritius alone.

Rather, if we take the projected lengths of the flight paths, swing them from MH370's last known position NW of Sumatra in the Andaman Sea, and find a best fit to where they intersect the South Equatorial Current (SoEqC), then we find a probable drift/flow path which leads, via the SoEqC, right to the places where the ONLY proven and possible debris pieces were found.

With a very accurate series of current maps, we see this very clearly: the sites on Reunion, Mauritius, the north and south beaches of Madagascar, Pemba Island in Tanzania, and the S. African area are exactly ON the SoEqC and map the debris sites very well. Even the splitting of the SoEqC into N and S branches as it approaches Madagascar is exactly mapped by this. Thus this comparison process, between the highly accurate, detailed SoEqC current maps and the Western Indian Ocean debris maps, shows a very close match.

Thus jet MH370 went down just south of the Equator, very likely well south of Ceylon, and crashed and smashed into pieces. The normal flow of the SoEqC took those pieces to Reunion, Mauritius, the beaches of Madagascar, and the African coasts where the SoEqC normally flows.

And it can be tested: launch very buoyant floating objects with IF transponders, in quantities of 100-120, at the probable site in the SoEqC, and watch where they end up. So this model is testable. From where they end up in the months afterward, the model can be adjusted westward or eastward from that probable crash site, toward a more exact estimate of where MH370 hit the ocean.

This uses KNOWN data, not estimates, calculations, and other indirect methods that are dubious due to their many assumptions. And it's surprising how exactly the proven and probable debris sites in the Western Indian Ocean map onto, and correspond to, the streams and branches of the SoEqC itself when detailed maps of same are used. A bit on the Tanzanian coast, but more on the S. African coast, apparently. It even shows the split of the SoEqC east of Madagascar into north and south branches, and the tight correspondence of that split to the debris found there.

Thus the crash of MH370 likely occurred right on the SoEqC, and much closer to Mauritius than believed. This explains the tight clustering of the findings: had the crash been further east, the debris would have been scattered by normal diffusion patterns all over the East African coasts and north, even to the Arabian Peninsula and places east, as well.

Comparing the debris site map to the detailed current map shows this clearly, and it's not likely a coincidence.

The Break Outs: Roots of Growth & Unlimited Creativities

.

Copyright © 2019

.

“Almost anything which jogs us out of our current abstractions is a good thing.” –Alfred Whitehead

.

“Any society (or group) which cannot break out of its current abstractions, after a limited period of growth, is doomed to stagnation.” (A verbal description of the S-curve of growth and development.) –Alfred Whitehead

.

“I hold it that a little rebellion now and then is a good thing.” —Thomas Jefferson

.

“If we are to do good physics, we must put the Personal (viz., emotions), aside.” –Dr. Albert Einstein

.

Essentially, it all comes down to the mind traps which humans fall into. Those are created by the stabilities of brain hardwiring and egosyntonic dopamine rewards, PLUS the stabilities of efficient ways of doing things.

.

.

What happens is best described by the implicit mathematics and physics of the S-curve of growth seen in Whitehead's wise insight above. Innovation, creativity, and related events are driven by thermodynamic processes of growth and development. They then become stable, as more efficient methods ARE stable. This is both a blessing and a curse. Because a method is more efficient, it promotes stability and better ways of doing things, by savings of time, costs, and materials: the whole rich panoply of the "Complex System of the 2nd Law", QV. But on the other hand, it can block in many ways the development of better and better techniques, skill sets, approaches, methods, etc.

.

We see this routinely in economic performance as well. The Bessemer steel furnace was a very large improvement (as was Watt's steam engine over Newcomen's), which gave more for less: less cost, less time, and higher quality steel. It drove the markets in England, and then Andrew Carnegie brought it over to the US, where it transformed steelmaking. And he continued to improve it as well, please note. There was substantial growth, and it took over the iron and steel industry. It grew quickly because it WAS more efficient in many ways than any of the previous methods. It was creative. And thus, it worked.

.

But the downside is as Whitehead so astutely stated. Every method, tool, technique, and technology has its capabilities AND its limits. The downside is the limits, which slow growth at the point of diminishing returns on the S-curve, until finally it runs up against the exponential/asymptotic limits of the growth S-curve. Then the growth slows down, and even stops.

.

We see vast, unlimited numbers of examples of this. Apple had enormous growth with their mouse, computer, touch screens, and other innovations, due mainly to the Steves, Jobs and Wozniak. But inevitably the growth slowed, others caught up, and IBM's PCs overwhelmed them, because the PC was "open architecture" and could adapt faster and better: anyone could write programs for it or add new cards to it.

.

Jobs then left Apple and worked on the NeXT, the next major innovation, but it didn't sell. It had vaster capacities, but the ease of use and the expense likely limited it. But Jobs learned. See the outstanding bio of Steve Jobs published as an insert in Forbes magazine a bit after his death in Oct. 2011.

.

Then he returned and brought out newer, more efficient technologies, which resulted in vast growth at Apple. IBM hit the limits of their growth curve and began to decline, so much so that "how the mighty have fallen" applies. They even sold off their PC division!!!

.

Then Jobs brought out the newer i-series of technologies, which created the largest growth S-curve ever seen, with the iPhone at the last dominating the entire world market, and Apple becoming the biggest computer company, and indeed the largest and richest of them all. Finally, in the last stages, after putting over 1 billion devices into operation, the diminishing returns hit (the top of the S-curve). Apple's growth slowed, and their profits and sales began to decline, along that very clear, predicted, predestined S-curve of growth of Whitehead's.

.

The same occurred with the Japanese economy, which very quickly grew to the second largest in the world and then hit the S-curve limits. The same, more recently, with the Chinese economy, which became the world's 2nd largest economy (at least), and which is now at the top of its growth S-curve, with growth rates way down from the heady 20-25%/year days of the 1980s-'90s, now down to 6% and still falling, with serious banking, structural, and other problems. Again: capabilities drive the exponential S-curve growth, and at the last come the diminishing returns, as the curve flattens out at the top into an exponential barrier.

.

This is nearly universally seen in technologies and growth systems of all kinds, and can easily be fitted onto a mathematical form of growth, the S-curve. Indeed, it's so universally seen that, properly developed, it can be used to predict how much a system will grow, when it will slow down, and thus how much each new idea is worth. This is a deep and important enough idea to be developed into an entire article, coming later.
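The standard mathematical form behind all of these examples is the logistic function. A minimal sketch (the ceiling, rate, and midpoint values are invented for illustration): growth is near-exponential early on, fastest at the midpoint, and then flattens against its ceiling, the "exponential barrier" at the top of the S-curve.

```python
import math

def s_curve(t: float, ceiling: float = 100.0,
            rate: float = 1.0, midpoint: float = 0.0) -> float:
    """Logistic S-curve: near-exponential growth early,
    diminishing returns as it approaches the ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Early on, growth is fast; late, it stalls near the ceiling:
print(round(s_curve(-4), 1))  # → 1.8   (early: tiny, but doubling quickly)
print(round(s_curve(0), 1))   # → 50.0  (inflection point: fastest growth)
print(round(s_curve(4), 1))   # → 98.2  (diminishing returns near the top)
```

Fitting these three parameters to early adoption data is what would let one estimate, as the text suggests, how far a new method can grow and when its growth will stall.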

.

As Steve Jobs so well knew, a new growth curve then needs to be created and sold. Whitehead's S-curve is a nearly universally applicable process description for most all growing complex systems, seen and confirmed almost universally in growth of most all types. The value of Whitehead's insight becomes remarkably apt in these days of techno innovation and changing markets.

.

But that is a piece of it, not all. When we consider personally how the methods we use every day ALSO do the same, then the real day-to-day effects on our very lives, for each of us, become much more clearly seen. Each of us has a set of skills developed over the years by the universal methods of Trial and Error and sorting. This is especially true of professionals. Their methods are highly efficient in least energy terms: costs, energy, time savings, better outcomes of goods and services, and so forth. And this nearly universally ALSO explains the differences, in most every field, between the high-efficiency methods and work of professionals and, by comparison processing, the outputs of amateurs. Thus the differences can be cast in terms of least energy efficiencies: a unique set of skills, differing from professional to professional, yet using many similar methods within each distinct field, AND least energy vocabularies. (That's WHY, by least energy efficiencies, professionals develop, esp. in medicine, their highly professional words, phrases, and descriptive methods.) And this can be studied, developed, and shown to be the case in nearly EVERY field. Least energy rules the professions. That is expert systems, in other applications, too.

.

.

But we need to look at the far, far larger picture which Whitehead stated: the complex system of events. Every skill set of every professional has its growth potential, but ALSO the exponential, asymptotic limits of its S-curves. Their very success makes them good at what they do. But each of those methods reaches yet another S-curve of diminishing returns. And so the fields, although more efficient than those previous, ALSO reach stagnation and stability. This stability is a thermodynamic quality as well: the more efficient, stable, durable, and useful a method/technique or technology/device, the more it's least energy.

.

This is what's going on with our personalities, too. Each of us has skill sets for the boundless ways of doing things each day. If those are driven by least energy recognitions, then they grow and develop without limits. If not, they stagnate. Dopamine and the emotions are also good drivers, when the emotions are used properly, as Einstein, Jobs, and most all of our creative people do. But those can create fads and fashions, which fail if not least energy efficient. This effectively and succinctly models real and valued methods, and shows the impermanence and transient nature of many social movements. Festinger, in his "social comparison" model, one of the best social models ever found and used practically, has discussed this as well.

.

BUT, and this is the caveat here: in time the growth becomes a facilitated habituation, cortically stabilized by long term memory (LTM) proteins and synaptic lay-downs. So we do the same things over and over again. The memory traces reach stability. THAT is the mind trap, in many cases. In order to advance and progress, even survive, we MUST break OUT of our current abstractions, as persons, as societies, and as groups, and sort out, find, and explore MORE and better ways to meet our needs.

.

Whitehead shows us how to do this. His kind of process thinking, introduced and then ignored nearly 100 years ago, of visualizing solutions, is THE key to how our brains work best and most comprehensively, to find and plan ahead practical, creative solutions to problems of most all sorts. The "thought experiments" which Sagan's "Cosmos" so brilliantly touches upon with Einstein are yet other examples of the mechanical engineer's, and indeed most creative persons', abilities to "see the answers." AI can't do that very well, for obvious reasons. When it passes that kind of "Turing test" of visual, process thinking, which requires huge amounts of processing power, THEN it will become, in part, more human.

.

.

We can tie this in very well with Thomas Kuhn's "The Structure...", where he showed how the solar system model was successively improved over time, using creative additions of information, until we have a very, very good model now. But not quite. Where the ancients used planar circles, Kepler and Newton used ellipses of orbits, which were also planar, 2D. It was next realized that the orbits were NOT 2D but THREE-D, and that led to the current "elements of orbits", which are more accurate in prediction, but which are not complete, either. In time, the complex system orbits of each planetary system become less and less predictable.

.

This was all driven by least energy methods, which described more with less. And in a very real sense, these also became complex system (Cx Sys) beliefs, as we see with the probabilities of QM. Wherever we have a Cx Sys, most likely we find the uncertainties of those probabilistic methods, be it a simple Cx Sys like dice throwing, or more complex systems like the weather and genetics. When we see probabilities, we deal more and more with the uncertainties of the incompleteness of Cx Sys understanding. And thus our knowledge also has the very same huge problem of "incompleteness".

.

Most all of our models are NOT complete, for one simple reason. The thermodynamic basis, Shannon's model, is that there is NO completely efficient heat engine; we see at once the consonance with the fact that our ideas do NOT completely compare to events in existence. What we expect, even in our best theories, is not usually what we find with more and more observations. Those are Leon Festinger's cognitive dissonances: the observed, comparison-process-created disparity between what we expect, personally, professionally, and scientifically, and what instead very likely happens. From those come the new models, which attempt to solve that paradigm problem of "it doesn't work that well", or as Kepler stated, "except for that 8 minutes of arc error in circular orbits, I'd have accepted Copernicus' model". We see, broadly, that Kuhn had it right. Deeply, we see: incompleteness is inefficiency, and thermodynamically a quality of the same ignorance.

.

What we expect is NOT what we find. And that, be it the neutrino problem of the absent 2/3 of solar neutrinos, which revolutionized the physics model of neutrino mass and types, or the failures of growth in all other respects, is what drives growth and development.

.

"It doesn't work very well there," we too often see. The comparison process drives Festinger's cognitive dissonance, and that, in turn, is almost universally the case.

.

So, when we want to figure out what's going on, we MUST rise above, break out, get jogged out of our "current abstractions", and find newer and better ones, which do more with less, efficiently, and describe, predict, and model more with less, too.

.

This is what's going on personally, as well. Too often we fall into the "ruts" of the mind traps of our brain hardwiring. It's a facilitated, habituated, LTM situation. It's easy to do things the same more or less efficient ways over and over again. But if we try to change, it's hard. And almost anything which breaks us out of our current abstractions is thus a good thing, and a good way to more growth.

.

This, then, is the key to growth without limit, because the universe is so huge, with complex systems totally beyond our control, prediction, or understanding. And yet our brains are set up for, and can be trained up further to penetrate, those mysteries and cognitive failures, and to do better, without limits, too, as our brains are so tiny in comparison to the events in existence of our universe.

.

This has largely been written about in "La Chanson": all the myriads of Cx Sys ways we use, and why we use them. The field of medicine, like biology, uses Cx Sys methods all the time, descriptively, but we haven't generally realized that.

.

.

The S-curves of growth without limits show us HOW we succeed, and then why we fail, reaching the "limits of growth", a la Herman Kahn, we should add.

.

Thus we see the much larger picture: the next, likely and largely confirmed paradigm of our understanding. Perfection and the absolutes are mostly artifacts of our brains and are likely unreal. The exponential barriers at the tops of the S-curves show this. To escape those, we must go OUTSIDE of the current abstractions to create more growth.

.

.

It's very, very hard to understand logic, religions, maths, and many other systems from WITHIN those systems. We cannot usually understand our own home town unless we visit OTHER cities, see how those work, and then compare those facts and information to what we see in our own. In order to be linguists, we must speak and understand a number of languages. We must escape the mind traps of our habituated, stable, facilitated, hardwired beliefs and ways of doing things. This is why travel and peripatetics work so well to teach us new things, too. This is why "never stop learning" is also critical to personal, social, and scientific growth and development.

.

Thus, we find, use, and develop more ways, the means to break OUT of our current abstractions, do we not?

.

And it's very clear as well that our beliefs of all sorts give us benefits, creatively, but not absolutely, either. There is NO absolute space and time. Our measurements cannot be final nor complete, as Einstein and physics have shown. And similarly, our descriptive, verbal methods are also epistemologically restrained by the same Einsteinian rules. Those also arise by comparison processing against verbal, relatively fixed, stable standards, as in "How Physicians...". Description by creating efficient methods is the case. But even those have their serious limits, too: the S-curves of growth AND their very serious limits.

.

.

The nearly universal processes of Comparison Processing (CP), which can create and discover the Least Energy (LE) solutions to problems by T&E and other sorting means, create the necessary creativity to understand understanding, and use those means to find structure/function (S/F) relationships within this universe of events. Casting that within a Cx Sys model can create a nearly universal Model of Everything (MOE). This is what Einstein was searching for, to simplify our understandings of most everything. But while he had the thermodynamics right, he didn't realize that it was a big part of that nearly universal processor, LE. And we have found yet another example, very unexpectedly, within the Hubble/Humason data: Einstein's work, the bending of photon paths by large gravitational fields, universally in all spaces and times, those Einsteinian arcs and crosses, witnessed and observed by the scores in each image, without limits, and thus confirmed by the Hubble Telescope.

.

That was what the great Stephen Hawking attempted in his lovely "The Grand Design". But by adding Cx Sys thinking, LE, CP, and S/F relationships, we can get there, into the undiscovered country of a unified model. That was what he missed. And it's simple, easy to correct, use, and apply without limits.

.

"Depths Within Depths" shows this, and it can be developed far, far more, without limits, in terms of confirming events/relationships.

.

.

And then this gives us the "Promised Land of the Undiscovered Country" of the Cx Sys model, which can describe very much better not only that which is going on within us, but that outside of us, too. A nearly universal Model of Everything, created by those found, nearly universal processors of most all events, very likely.

.

.

The MOE themes and discoveries will soon be published here, to show how that can be done, confirmably.

Rev. 9 Jul. 2019.