Problem Solving for Self Driving Cars: a Model

By Herb Wiggins, M.D.; Clinical Neurosciences; Discoverer/Creator of the Comparison Process/CP Theory/Model; 14 Mar. 2014

.

As predicted, the problems created by the interactions between self driving cars and their environments are clear: fog, snow and ice, intense rain, and other weather conditions (tornadoes and hail), plus the complexities of road construction sites, where the landmarks and other stable signs marking safe driving lanes are lost. But far clearer are the problems being caused by humans, who are complex systems and NOT rational drivers in the usual sense. This has created at least one commonly seen pattern: the rear-ending of human driven vehicles by self driving cars. There is also the teenage propensity to “play” in traffic, approaching self driving cars and forcing them to stop unexpectedly by triggering their auto-braking in various ways. Such persons will find many other ways of confounding self driving cars.
.
The ability of self driving cars to deal with these complex system inputs is a real problem, because humans do not necessarily drive according to traffic laws, good driving rules, or least energy rules. This means that such human/self driving car interactions are going to be a real problem, in addition to the environmental complex systems which those cars’ decision-making systems will have to face, too.
.
Simply stated, humans will stop “for the heck of it” too often, and this results in self driving cars crashing into them from the rear. Now, how does the usual driver deal with this very real problem? How do professional drivers do so? And what are the causes of chain-reaction pileups in fog and snowy weather, which are likely to show us the way out of these complex system problems?
Taking a case by case series of rules: the usual driver will leave a gap to the car in front according to a general rule of 10-20 feet per 10 miles per hour of speed, adjusted for traffic conditions. Thus, driving at 40 mph, there would have to be about 40-80 feet between the car and the one in front.
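This rule of thumb is simple enough to state as a calculation. A minimal sketch (the function name and the 10-20 feet per 10 mph figures come from the rule above; this is an illustration, not a traffic-engineering standard):

```python
def following_distance_ft(speed_mph, ft_per_10mph=(10, 20)):
    """Rule-of-thumb gap to the car ahead: 10-20 feet per 10 mph.

    Returns a (minimum, maximum) range in feet, per the informal rule
    described in the text; real stopping distances also depend on
    vehicle weight, road surface, and reaction time."""
    lo, hi = ft_per_10mph
    tens_of_mph = speed_mph / 10.0
    return (lo * tens_of_mph, hi * tens_of_mph)

# At 40 mph the rule gives a gap of roughly 40 to 80 feet.
print(following_distance_ft(40))  # (40.0, 80.0)
```

As the next paragraphs note, vehicle weight and sudden stops confound any such linear rule.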
.
But this is a probability confounded by the weights of the vehicles. Large trucks need a longer stopping distance than shorter, lighter cars, whether driving in front of other vehicles or behind other trucks; lighter cars can often stop safely at shorter distances, as well. There are many other factors, such as the front driver stopping unexpectedly due to a mistake. The human driver behind will often plow into him, especially in heavy traffic conditions. In addition, if a driver strikes a heavy pole, abutment, or tree near the roadside, the car may well angle out into the lane and block it, requiring a fast, complete stop by a following vehicle, especially in heavy oncoming traffic. That sudden stop is NOT covered by the speed/stopping-distance rule, and will often result in a rear-ending.
.
Also, if an oncoming car, bike, motorcycle, or pedestrian suddenly pulls in front for whatever reason, the front car will slam on the brakes, and the car behind is likely to hit it. The complexities of these issues are very, very great, and depend largely upon traffic conditions (heavy rush hour or lighter) and upon lighting conditions, especially at night, when visual information is often substantially lessened. Medical conditions and mind-altering chemicals add further complexity. So, how to solve these complex system problems?
.
We go back to a rule. Professional drivers are usually better than average, amateur drivers, just as professionals in most fields are much better than amateurs. They have encountered many, many types of situations which most drivers have not, and thus have seen the higher, more complex patterns of stabilities repeating. Because of this they have developed more methods to drive safely, and are far, far more able to adjust quickly and avoid accidents. But what are the features of professional drivers? They are a series of skills, driving styles, and ways of processing the complex events encountered daily. Driving in heavy traffic, professionals become more alert, their brains work faster, and they tend to slow down according to conditions. Faster driving shortens reaction times, in other words. That principle needs to be built into driverless cars, one way or the other. In other words, the rule is: when conditions get critical, recognize those many types, and Slow down!!
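The “recognize critical conditions and slow down” rule could be encoded very crudely as a speed-reduction table. The condition names and reduction factors below are invented for illustration; a real system would derive them from the professional-driver data this article proposes collecting:

```python
# Hypothetical speed-reduction factors per detected condition.
# These numbers are illustrative assumptions, not measured values.
CONDITION_SPEED_FACTORS = {
    "clear": 1.00,
    "rain": 0.80,
    "fog": 0.60,
    "snow_ice": 0.50,
    "construction_zone": 0.70,
    "heavy_traffic": 0.75,
}

def target_speed(posted_limit_mph, conditions):
    """Apply each detected condition's factor multiplicatively,
    so compounding hazards compound the slow-down."""
    factor = 1.0
    for c in conditions:
        factor *= CONDITION_SPEED_FACTORS.get(c, 1.0)
    return posted_limit_mph * factor

# Fog plus heavy traffic: target well below the posted limit,
# roughly 27 mph against a 60 mph limit under these toy factors.
print(target_speed(60, ["fog", "heavy_traffic"]))
```

The multiplicative design is one choice among many; its point is only that multiple simultaneous hazards should slow the car more than any one alone.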
.
And this is the point: studying professional drivers and how they operate in such conditions will provide the data, especially with 10-12 or more of those experienced drivers, about how they respond, in real, repeating, efficient ways, to such conditions. Those stabilities of method will show up, and that information can be extracted by comparison processing. It will be far more efficient, and result in faster progress, than other methods.
.
First of all, what kind of professional drivers will be needed? Not taxi or delivery-car drivers, but professionals who work out of their cars, and related auto industry persons who operate vehicles, such as salespersons. They have to get to places quickly, safely, and effectively, and they are practiced at doing so.
.
Regarding trucks of all kinds, the smaller personal delivery companies would be best to study, rather than UPS, etc. In this way the specific skill sets of such drivers can be delimited and then compared. Each driver will have his own set of skills, that is, his style of driving, and comparing those many kinds of methods will give us the best series of skills needed to drive most safely and avoid most accidents. Clearly, those with the best record of no accidents/incidents will be the gold standard of whom to study. For obvious reasons, those without drivers’ licences are out. The standard needs to be an established, professional driving history plus the fewest accidents. Such studies can also be used by insurance companies to predict who is most likely to have and keep a safe driving record, because, clearly, those drivers haven’t had accidents. And that can be easily correlated and studied to figure out HOW, that is, the structure/function relationships of good driving.
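Selecting the “gold standard” drivers described above amounts to filtering for an established professional history and then ranking by incidents per mile. A toy sketch with invented records (names, mileages, and the year threshold are all illustrative assumptions):

```python
# Invented example records; a real study would use verified professional
# driving histories, as the text proposes.
drivers = [
    {"name": "A", "years": 12, "miles": 480_000, "incidents": 1},
    {"name": "B", "years": 8,  "miles": 300_000, "incidents": 4},
    {"name": "C", "years": 15, "miles": 600_000, "incidents": 0},
    {"name": "D", "years": 2,  "miles": 40_000,  "incidents": 0},
]

MIN_YEARS = 5  # require an established professional history first

def incident_rate(d):
    """Incidents per mile driven: lower is safer."""
    return d["incidents"] / d["miles"]

# Filter out short histories (driver D), then rank by incident rate.
gold_standard = sorted(
    (d for d in drivers if d["years"] >= MIN_YEARS),
    key=incident_rate,
)
print([d["name"] for d in gold_standard])  # ['C', 'A', 'B']
```

Driver D has a clean record but too short a history to rank, which is the point of requiring an established history before comparing.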
.
Oddly enough, the personality traits of good drivers are also open to analysis here; the traits of sociopaths are not likely to result in good driving records. The records of pros with a good history of safety show not only how they avoid accidents themselves, but also how they avoid the bad drivers, and the signs such drivers give off, when the professionals are driving about. It’s a complex system, but doable. It will take work, but the data about which safe driving skills are most effective will be a valuable product of such studies, of benefit both to insurance companies and even to traffic pattern studies. It can be incorporated into drivers’ education classes found in most secondary schools across the US, and into the legal re-training classes for those who have had accidents, broken traffic laws, and so forth, too.
Thus these studies will be of widespread benefit, not only for creating safer self driving car strategies, but much more widely as well. What’s the value of making driving safer in a nation with 100K+ injured in vehicular accidents each year and tens of thousands of deaths? Incalculable, due to the likely reductions of such problems which these studies and their applications will create.
.
And setting up sudden, unexpected conditions for them to drive in will also show how they deal with those. But this must be done in the real-time world, NOT on an isolated track somewhere, where the situation does not correspond well to reality and introduces too much bias into the conditions for really good, practical outcomes research.
.
Thus we use professional drivers, whose methods and skills can be observed and delimited quite well, to guide by human supervision HOW to avoid the problems of weather, construction areas, heavy traffic, and the other situations so often seen in driving which most likely result in accidents; and then compare those to the average driver, for significance. Because most persons have a set of multiple skills, often unique to them, a number of professionals will need to be studied to capture as many as possible of the skills used to drive more safely, efficiently, and effectively.
That will take some work, but if properly and creatively approached it will very likely result in methods which improve human driving as well as that of self driving vehicles. Human supervision of AI is well established and, by long experience, provides far higher performance in AI, as has been shown many times. Thus professional drivers will provide an incentive to improve not only self driving cars but human drivers as well, because the findings can be used practically and will create a virtual new science of driving methods and strategies. It will make humans better drivers too, and who could object to that? Get that dopamine boost operating in humans to make the investigations grow faster, and to get more financial support for such essential, critical studies of driving, as well.
.
In addition, each of the methods used by professional drivers which works can be detailed and then tested to see which are the best, most efficient ways of avoiding accidents in the higher, more likely, observed stabilities of accident events. Also, which driving skills result in getting most efficiently to the places that need to be visited? Least energy and travelling salesman solutions can be addressed as well, including more efficient methods to get to places in series, or alone.
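The travelling-salesman aspect, getting to a series of places in least-energy order, can be brute-forced for a salesperson’s handful of stops. A toy sketch with an invented distance table (not a production router):

```python
from itertools import permutations

# Invented symmetric distance table between a home base and three stops.
DIST = {
    ("home", "a"): 4, ("home", "b"): 2, ("home", "c"): 7,
    ("a", "b"): 3, ("a", "c"): 5, ("b", "c"): 6,
}

def dist(x, y):
    """Look up the symmetric distance between two points."""
    return DIST[(x, y)] if (x, y) in DIST else DIST[(y, x)]

def best_route(start, stops):
    """Try every visiting order and keep the shortest round trip.
    Fine for a handful of stops; factorial blow-up beyond that."""
    best_total, best_path = None, None
    for order in permutations(stops):
        path = (start,) + order + (start,)
        total = sum(dist(path[i], path[i + 1]) for i in range(len(path) - 1))
        if best_total is None or total < best_total:
            best_total, best_path = total, path
    return best_total, best_path

total, path = best_route("home", ["a", "b", "c"])
print(total, path)  # 17 ('home', 'a', 'c', 'b', 'home')
```

Real routing would use road-network travel times rather than a fixed table, but the least-energy principle is the same.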
.
Rear-ending; avoiding potential collisions from approaching cars; sudden turns into traffic from the right and left; drivers who cut off other drivers and suddenly veer in front of professionals. The data are already there with those good drivers, and there is no need to re-invent the wheel. Furthermore, as the methods are found, they can be experimentally improved upon without limit, in each and every case. Each new method can also be combined with detection devices and other AI to improve personal, human driving as well. This will exponentially grow both human and driverless driving strategies, methods, and devices, very likely, if done properly.
This will create rules for driving which will improve not only the strategies of self driving cars, but human drivers as well. Given the success of AI and expert systems in applying such rules, this will substantially speed up the rate at which self driving cars can develop and better deal with unexpected events while driving. This will solve the problems of self driving cars and speed the application of this new capability. Further, it will also improve most driving overall, once those principles are elucidated and worked on.
.
This is how it’s done naturally, anyway: learn from those who know best, in a scientific way, and then go from there, by applying good driving principles to humans and self driving cars. And “knowing where to go” gets us there a LOT faster, by cutting out the slow, trial-and-error learning meanderings, does it not?
.
Further, teens can be asked how they have been able to interfere with self driving cars, and the more creative of them can thus be used effectively to find and deal with potential problems, if those are repeatable enough, to avoid more problems. Recall that many early chess programs had peculiar quirks which humans did not have, and those quirks could be exploited to beat them almost every time. Be assured that self driving programs and devices will have the same sorts of quirks that Sargon had in chess. And the more complicated they are, the more likely such quirks will be created by the programs, too.
.
This is an application of the comparison and least energy processes, plus complex system thinking, in a workable, realistic, practical way: it provides an incentive for humans, as well as self driving car makers, to solve some of the real problems of driving, across the whole spectrum of the complexities of driving more safely and better.

Table of Contents

1. The Comparison Process, Introduction, Pt. 1
https://jochesh00.wordpress.com/2014/02/14/le-chanson-sans-fin-the-comparison-process-introduction/?relatedposts_hit=1&relatedposts_origin=22&relatedposts_position=0

2. The Comparison Process, Introduction, Pt. 2
https://jochesh00.wordpress.com/2014/02/14/le-chanson-sans-fin-the-comparison-process-pt-2/?relatedposts_hit=1&relatedposts_origin=3&relatedposts_position=1

3. The Comparison Process, Introduction, Pt. 3
https://jochesh00.wordpress.com/2014/02/15/le-chanson-sans-fin-the-comparison-process-pt-3/?relatedposts_hit=1&relatedposts_origin=7&relatedposts_position=0

4. The Comparison Process, The Explananda 1
https://jochesh00.wordpress.com/2014/02/28/the-comparison-process-explananda-pt-1/

5. The Comparison Process, The Explananda 2
https://jochesh00.wordpress.com/2014/02/28/the-comparison-process-explananda-pt-2/

6. The Comparison Process, The Explananda 3
https://jochesh00.wordpress.com/2014/03/04/comparison-process-explananda-pt-3/?relatedposts_hit=1&relatedposts_origin=17&relatedposts_position=1

7. The Comparison Process, The Explananda 4
https://jochesh00.wordpress.com/2014/03/15/the-comparison-process-comp-explananda-4/?relatedposts_hit=1&relatedposts_origin=38&relatedposts_position=0

8. The Comparison Process, The Explananda 5: Cosmology
https://jochesh00.wordpress.com/2014/03/15/cosmology-and-the-comparison-process-comp-explananda-5/

9. AI and the Comparison Process
https://jochesh00.wordpress.com/2014/03/20/artificial-intelligence-ai-and-the-comparison-process-comp/

10. Optical and Sensory Illusions, Creativity and the Comparison Process (COMP)
https://jochesh00.wordpress.com/2014/03/06/opticalsensory-illusions-creativity-the-comp/

11. The Emotional Continuum: Exploring Emotions with the Comparison Process
https://jochesh00.wordpress.com/2014/04/02/the-emotional-continuum-exploring-emotions/

12. Depths within Depths: the Nested Great Mysteries
https://jochesh00.wordpress.com/2014/04/14/depths-within-depths-the-nested-great-mysteries/

13. Language/Math, Description/Measurement, Least Energy Principle and AI
https://jochesh00.wordpress.com/2014/04/09/languagemath-descriptionmeasurement-least-energy-principle-and-ai/

14. The Continua, Yin/Yang, Dualities; Creativity and Prediction
https://jochesh00.wordpress.com/2014/04/21/the-continua-yinyang-dualities-creativity-and-prediction/

15. Empirical Introspection and the Comparison Process
https://jochesh00.wordpress.com/2014/04/24/81/

16. The Spark of Life and the Soul of Wit
https://jochesh00.wordpress.com/2014/04/30/the-spark-of-life-and-the-soul-of-wit/

17. The Praxis: Use of Cortical Evoked Responses (CER), functional MRI (fMRI), Magnetic Electroencephalography (MEG), and Magnetic Stimulation of brain (MagStim) to investigate recognition, creativity and the Comparison Process

https://jochesh00.wordpress.com/2014/05/16/the-praxis/

18. A Field Trip into the Mind

https://jochesh00.wordpress.com/2014/05/21/106/

19. Complex Systems, Boundary Events and Hierarchies

https://jochesh00.wordpress.com/2014/06/11/complex-systems-boundary-events-and-hierarchies/

20. The Relativity of the Cortex: The Mind/Brain Interface

https://jochesh00.wordpress.com/2014/07/02/the-relativity-of-the-cortex-the-mindbrain-interface/

21. How to Cure Diabetes (AODM type 2)
https://jochesh00.wordpress.com/2014/07/18/how-to-cure-diabetes-aodm-2/

22. Dealing with Sociopaths, Terrorists and Riots

https://jochesh00.wordpress.com/2014/08/12/dealing-with-sociopaths-terrorists-and-riots/

23. Beyond the Absolute: The Limits to Knowledge

https://jochesh00.wordpress.com/2014/09/03/beyond-the-absolute-limits-to-knowledge/

24. Imaging the Conscience

https://jochesh00.wordpress.com/2014/10/20/imaging-the-conscience/

25. The Comparison Process: Creativity, and Linguistics. Analyzing a Movie

https://jochesh00.wordpress.com/2015/03/24/comparison-process-creativity-and-linguistics-analyzing-a-movie/

26. A Mother’s Wisdom

https://jochesh00.wordpress.com/2015/06/03/a-mothers-wisdom/

27. The Fox and the Hedgehog

https://jochesh00.wordpress.com/2015/06/19/the-fox-the-hedgehog/

28. Sequoias, Parkinson’s and Space Sickness.

https://jochesh00.wordpress.com/2015/07/17/sequoias-parkinsons-and-space-sickness/

29. Evolution, growth, & Development: A Deeper Understanding.

https://jochesh00.wordpress.com/2015/09/01/evolution-growth-development-a-deeper-understanding/

30. Explanandum 6: Understanding Complex Systems

https://jochesh00.wordpress.com/2015/09/08/explandum-6-understanding-complex-systems/

31. The Promised Land of the Undiscovered Country: Towards Universal Understanding

https://jochesh00.wordpress.com/2015/09/28/the-promised-land-of-the-undiscovered-country-towards-universal-understanding-2/

32. The Power of Proliferation

https://jochesh00.wordpress.com/2015/10/02/the-power-of-proliferation/

33. A Field Trip into our Understanding

https://jochesh00.wordpress.com/2015/11/03/a-field-trip-into-our-understanding/

34.  Extensions & applications: Pts. 1 & 2.

https://jochesh00.wordpress.com/2016/05/17/extensions-applications-pts-1-2/

(35. A Hierarchical Turing Test for General AI: this post was deleted after being posted, and it’s not known how that occurred.)

35. The Structure of Color Vision

https://jochesh00.wordpress.com/2016/06/11/the-structure-of-color-vision/

36. La Chanson Sans Fin:   Table of Contents

https://jochesh00.wordpress.com/2015/09/28/le-chanson-sans-fin-table-of-contents-2/

37. The Structure of Color Vision

https://jochesh00.wordpress.com/2016/06/16/the-structure-of-color-vision-2/

38. Stabilities, Repetitions, and Confirmability

https://jochesh00.wordpress.com/2016/06/30/stabilities-repetitions-confirmability/

39. The Balanced Brain

https://jochesh00.wordpress.com/2016/07/08/the-balanced-brain/

40. The Limits to Linear Thinking & Methods

https://jochesh00.wordpress.com/2016/07/10/the-limits-to-linear-thinking-methods/

.

41. Melding Cognitive Neuroscience & Behaviorism

https://jochesh00.wordpress.com/2016/11/19/melding-cognitive-neuroscience-behaviorism/

42. An Hierarchical Turing Test for AI

https://jochesh00.wordpress.com/2016/12/02/an-hierarchical-turing-test-for-ai/

43. Do Neutron Stars develop into White Dwarfs by Mass Loss?

https://jochesh00.wordpress.com/2017/02/08/do-neutron-stars-develop-into-white-dwarfs-by-mass-loss/

44. An Infinity of Flavors?

https://jochesh00.wordpress.com/2017/02/16/an-infinity-of-flavors/

45. The Origin of Information & Understanding; and the Wellsprings of Creativity

https://jochesh00.wordpress.com/2017/04/01/origins-of-information-understanding/

46. The Complex System of the Second Law of Thermodynamics

https://jochesh00.wordpress.com/2017/04/22/the-complex-system-of-the-second-law-of-thermodynamics/

47. How Physicians Create New Information

https://jochesh00.wordpress.com/2017/05/01/how-physicians-create-new-information/

48. An Hierarchical Turing Test for AI

https://jochesh00.wordpress.com/2017/05/20/an-hierarchical-turing-test-for-ai-2/

49. The Neuroscience of Problem Solving

https://jochesh00.wordpress.com/2017/05/27/the-neuroscience-of-problem-solving/

50. A Standard Method to Understand Neurochemistry’s Complexities

https://jochesh00.wordpress.com/2017/05/30/a-standard-method-to-understand-neurochemistrys-complexities/

51. Problem Solving for Self Driving Cars: a Model.

https://jochesh00.wordpress.com/2017/06/10/problem-solving-for-self-driving-cars-a-model/

A Standard Method to Understand Neurochemistry’s Complexities

By Herb Wiggins, M.D.; Clinical Neurosciences; Discoverer/Creator of the Comparison Process/CP Theory/Model; 14 Mar. 2014
.
In the field of neurochemistry, it has been estimated that there are about 100 neurochemicals operating in the human brain. How are we to understand all of these complex system neurochemicals and how they work? The method is now understood, and is therefore easily applicable in all the myriad ways.
.
First of all, these descriptive methods use comparison processes and methods, the Least Energy rule, structure/function relationships, and complex system thinking. Essentially, description is the application of relatively efficient concepts/words which act as standards against which we can describe events in existence. The close correspondence between those ideas and the events they relate to, describe, and detail is the critical insight here. Once this descriptive model is largely developed into a working model, it can be partly mathematized, to further define by measurement what’s going on, and can be more precisely tested and thus described.
.
But first comes the model, as we learn from Einstein. Relativity began as mostly descriptive, visual, process thinking, just as Newton’s first inklings of gravity were; it was developed and only lastly mathematized by Einstein and Minkowski. Note, please, this sequence: creating the model from comparison methods and real, existing least energy principles, to give it the empirical nature it must have to be real and useful, and then mathematizing it where possible.
.
This is also seen in the way our linear measuring scales take our sensory systems’ high, higher, highest and low, lower, lowest and make them into standards. Taking the synonym/word cluster of height, length, width, altitude, size, etc., we create a standardized, relatively fixed, efficient method, that is, a convention (standard), which we can use to describe events. Thus we created first the span, the foot, etc., as measuring tools; then the more formal English system, also using “hands” for measuring horse heights at the shoulder; and finally the metric system, which embodied the decimal system for efficiency and easily replaced all the others, for least energy reasons. Thus the development from cubit, to foot, to mile, to meter, etc., all least energy driven.
.
For temperature we use the efficient, easily duplicable freezing and boiling points of water to describe warm, hot, hotter, hottest and cool, cold, colder, coldest, all linear, verbal scales. The corresponding degrees Celsius and Fahrenheit scales do this.
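The Fahrenheit/Celsius correspondence mentioned above is itself a simple comparison mapping between two conventions anchored on the same standards, water’s freezing and boiling points:

```python
def celsius_to_fahrenheit(c):
    """Map one temperature convention onto the other. Both scales are
    anchored on water: 0 C / 32 F freezing, 100 C / 212 F boiling."""
    return c * 9.0 / 5.0 + 32.0

print(celsius_to_fahrenheit(0), celsius_to_fahrenheit(100))  # 32.0 212.0
```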
.
For speed and velocity, rate of expansion, etc., we use the comparison ratio of length/time. For time we use the second, minute, hour, day, week, month, year hierarchies, as well. And on and on, the myriad ways of La Chanson.
.
Just so, we can use comparison against an efficient descriptive standard to understand much better how the neurochemicals work. Correspondingly and consistently, we use the best, oldest, most widespread neurochemical as the standard against which to compare, describe, and measure the other neurochemicals, just as we use the pain killing power of morphine as the efficient standard against which we compare pain killing analgesics of ALL forms, using a direct ratio of pain killing power.
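The morphine standard works as a simple mg-for-mg potency ratio, with the standard itself fixed at 1.0. A sketch using the article’s own multiples for the nicotinic agonists discussed later (an illustration of the comparison-standard idea only, not dosing guidance):

```python
# Potency relative to morphine (the comparison standard = 1.0), using
# the mg-for-mg multiples cited in the text. Illustration only.
POTENCY_VS_MORPHINE = {
    "morphine": 1.0,
    "epibatidine": 200.0,
    "tebanicline": 50.0,
}

def equianalgesic_dose_mg(drug, morphine_mg):
    """Dose of `drug` giving roughly the analgesia of `morphine_mg` mg
    of morphine, under this simple ratio model."""
    return morphine_mg / POTENCY_VS_MORPHINE[drug]

# Epibatidine at 200x potency: 10 mg morphine-equivalent is 0.05 mg.
print(equianalgesic_dose_mg("epibatidine", 10))
```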
.
Thus we describe in detail the characteristics of the complex systems of dopamine (DA) as our standard, and use that to develop and compare against the characteristics of serotonin (5HT), GABA, and the neurokinins. We describe the sedatives which act via GABA and serotonin pathways, and the nicotinics, the dimethylxanthines (caffeine, theobromine, etc.), and strychnine, as well. These act in excitatory ways, as analogues of DA and the other two family members of the catecholamines, adrenaline and noradrenaline, derived from DA, the older, parent chemical.
.
In the same way, the other synthetic and artificial stimulants can also be compared to the catecholamine standards to understand their variations on this theme. This is how it’s done.
.
How do we do this? Dopamine is the oldest neurochemical of import. We know this by the existence, by the usual means, of some 10 receptor sites, D1 being largely motion and D2 E-motion, generally, though there is some overlap. In addition, it takes many thousands of years of evolution to create new receptors, which are usually protein-mediated sites on cell membranes. We also know that DA is the parent neurochemical of both norepinephrine and epinephrine (adrenaline), the latter being very powerful. But both NE and Epi have DA effects, as well. We see this because we raise BP in patients by DA infusion, which also increases heart rate; these are alpha adrenergic actions. Beta adrenergic effects are smooth muscle relaxations, to some extent, also mediated by DA. Therefore the 5-6 adrenergic receptors are ALSO DA-mediated subsets. In addition, Epi, being a complex system with a number of effects, creates immune system effects and pain blocking, as well as increasing blood sugar. There are others, such as the immune system effects of being anti-histaminic and treating anaphylaxis successfully, on a very quickly responding, emergency basis; and nausea and emptying of the bowels, too. Anorexia is also a trait associated with the catecholamines. This gives an idea of how central to the CNS the whole DA system and its daughters, NE and Epi, are; how the central standard/convention of DA can create a better understanding of the catecholamine family of relationships, from the start; and how this can be extended easily, by comparison, to each of the other major neurochemicals and their synthetic, pharmacological analogues.
.
In addition, nicotinic actions and receptors are known. But here is the cognitive dissonance: there is NO nicotine normally in the brain. So what acts at those receptors? And how do we explain the nicotinic agonists epibatidine and tebanicline, which block pain at 200 and 50 times, respectively, the mg-for-mg potency of morphine?
.
As written about before, what of the motor nicotinic ACh receptor sites? That cannot be the answer, because unlike in insects, there is no wide use of nicotinic sites in us, which is why insecticides are very often ACh inhibitors, and nicotine itself is an insecticide. So, likely, is chocolate: theobromine and theophylline can poison many animals, and human babies too, if one is not careful. And again, there is NO nicotine in the brain. Thus we begin to see the overlap, the commonalities, of the normally occurring excitatory receptor sites of DA and the excitatory neurochemicals. Using this method, we see that the nicotinic sites are very likely most closely related to the DA and catecholamine systems.
.
In addition, in pain killing power, DA ranks very high, as does adrenaline, as anyone who’s been seriously wounded knows. The fight-or-flight outpouring of adrenaline, DA, cortisol, and anandamide kills pain wonderfully for about 20 minutes, allowing us to get away; and THEN we start feeling the pain. DA and related neurochemicals kill pain, and do it convincingly. Block the DA, and the pain recurs. Use naloxone, and the pain killer is blocked. Use epibatidine and tebanicline (ABT-594), and the pain is blocked, as those are nicotinic agonists. But recall, there are NO nicotine-like chemicals known in the brain, only DA!!
.
Thus DA and related chemicals are largely pain blockers, and we can measure other chemicals against the power of DA, or adrenaline if we choose, just as we currently use the morphine standard of pain control per milligram when comparing the other pain drugs.
.
DA is a stimulant. It wakes us up, and activates the awakening of the nervous system. It has the D1 receptors, largely motion, and D2, largely E-motion (no coincidence there, but a widely missed point in neurophysiology). When DA is given for a BP drop, to support perfusion, we see that not only does the BP rise along with the heart rate (adrenergic alpha receptor stimulation), but also that the person sometimes wakes up from the coma!!
.
In “The Balanced Brain” we have the interplay between serotonin/5HT, GABA, and the neurokinins, in terms of the sleep/wake cycles of the circadian rhythm, mediated by DA and 5HT. And the pain system, mediated by the stimulant neurokinin, which will NOT rest and unfailingly reminds us until we feel and deal with the pain effectively. It’s a stimulant, like nicotine and the catecholamines, whereas serotonin is inhibitory, with all the shades of gray. This neatly accounts for the kinds of circadian rhythm problems, the migraine headaches which are DA and 5HT mediated, and the pain complex system, as well.
.
GABA is an inhibitory neurochemical, and its sedating effects, like those of chlorpromazine (a standard DA inhibitor), are like serotonin’s, but with differences. GABA agonists block seizures as well. Serotonin enhancers do NOT create seizures, but the low 5HT activity/levels seen in migraines explain the 10% incidence of seizures with migraines, as well. And the seizures created by adrenaline and nicotine are well described. These complex interactions and their confusing mass of effects are largely all standardized and organized by comparison with the DA effects and functions.
.
Thus we begin to see how we can understand the other neurochemicals from the complex system, multiple receptor sites of DA; those receptor site multiplicities, in all the myriad ways, show us the complex system nature of receptor sites. (Just as the Lorentz/FitzGerald equations were NOT how the universe looked without the putative ether, but were instead measures of how fermions were affected by speeds near that of light, as Einstein showed, so the receptor sites are clear-cut, obvious insights into the complex system nature of physiology, including neurophysiology and the neurochemicals.) That relationship is at once evident when the new epistemology, the comparison and least energy insights, is applied.
.
In addition, we often speak of the “side effects” of drugs, in other words those effects which are not what we’re looking for, and which render some drugs useless because of those “untoward” effects. Very likely, just as the receptor sites show, the so-called “side effects” are but linear simplifications; they are NOT side effects but, more accurately, complex system effects. This is how we use more advanced comparison standards to extend, improve, and make more efficient our understanding. Each of those complex system effects is NOT to be ignored by linear methods; they show us the many capabilities of such drugs, which we can use to design more effective medications, doing more with less.
.
Consider the method which can make a sildenafil 50 mg tab last for 3 days in responders to the usual dosages under the linear “take the pill” model. Just as l-dopa with carbidopa, or amoxicillin with clavulanate (a beta-lactamase inhibitor), are far, far more effective, with fewer “side effects”, than without. This shows the way, frankly, to a new pharmacology of complex systems, toward which, unbeknownst to most of pharma, the field is already moving with the methods used to treat depressions. Many new depression meds have multiple effects built into their molecules. Thus, in this complex system way, much is done with less.
.
We find, as in “The Balanced Brain”, that our early medications did not treat all of the disease processes, such as hypertension, depressions, and migraines. As we found out, there are many aspects to migraines, being complex systems, and careful trial-and-error testing showed that a complex system approach, using meds to block the DA, 5HT, and neurokinin pain effects of migraines, AND their inflammatory effects, was necessary to control migraines better, and often to prevent them.
.
This brings out the DA spectrum of effects in the motions and emotions: the production of dyskinesias, parkinsonisms, and related movement disorders, and the related emotional disorders at the high end of DA activities, mania/psychosis; then down through the more normal range of the sociabilities of the species: love, charity, altruism, hope, faith, optimism, and the qualities of mercy, in the largely D2 category, although those do overlap with D1. However, by creating DA blockers which lessen the D1 effects and amplify the treatment of D2, with fewer “side effects” (ahem!), we find better emotional control of manias and psychoses, and now we know more clearly why: complex system effect models again. The same holds for the addicting effects of DA and related neurochemicals, such as the endorphins, used for pain control as innate, DA-related pain blocking analogues.
.
And this is how we do it, as the "Balanced Brain" shows: comparing DA and 5HT in the very ancient circadian rhythms, and also in the 5HT complex of migraine, plus the additional benefit of understanding HOW pain comes about (the neurokinins, excitatory, similar to DA) and pain control, too. It is why we use Anacin for pain (ASA PLUS caffeine), as in Cafergot, as well. Thus much more is explained by this model than can otherwise be efficiently and simply understood.
.
So we use almost all the DA effects to understand, as well, morphine, the endorphins, and the anandamides (THC/marijuana analogues). Thus the entire model is quite a bit simpler and unifying, and explains a very great deal with a little. A good model, in fact.
.
There is much, much more, such as understanding seizures as hyperexcitabilities, and why the post-ictal period is a sleeping state. And how the alpha, beta, and theta rhythms measure what's going on: alpha as the brain synchronizer, and so forth; beta as a measure of the inhibitory circuits and serotonin-type drugs; and GABA as a sedative and DA inhibitor, relatively. The gabapentins being used as seizure medications also fits quite well into this larger model, too.
Then each of the neurokinins and their effects: inflammation, long-lasting, enduring pain (get the migraine treated ASAP to avoid a long, drawn-out headache), and why many sedatives which induce sleep will break up a migraine HA, too. It all fits into the larger neurochemical picture. Simply by adopting a good, general, basic neurochemical standard to start with; and then, as the major characteristics of each, 5HT, GABA, neurokinin, anandamide, the endorphins, etc., compared to DA, are found, establishing each of those new ideas/concepts/names as better standards by which we can explain, extend & continue to build on the great numbers and complexities of the neurochemicals.
.
That’s how it’s done. Creating an Einsteinian, relatively fixed, stable standard, against which we describe and measure the neurochemicals, en masse, AKA the DA standard. Thus does the comparison model create progress towards a unified understanding of all those scores of neurochemicals. A method which creates understanding by showing the relationships of neurochemicals to each other, using the DA standard as the convention/standard, measure of the others. Creating new words/standards by which our understanding of neurochemistry is advanced, as well.
.
Then we use the 5HT, GABA, and other effects of each of the neurochemicals, compared and described next to DA, as MORE standards, against which we can compare, say, 5HT to similar inhibitory and excitatory neurochemicals.
.
And all of this sits quite within, and consistent with, the comparison process and methods, least energy, structure/function relationships, and how the complex system models work.
.
It is simplifying and comprehensive, without limits, in terms of understanding depression as low DA, and so forth. And WHY beta blockers create depression in some, and why having small pets around is an anti-depressant, among other methods.

The Neuroscience of Problem Solving

By Herb Wiggins, M.D.; Clinical Neurosciences; Discoverer/Creator of the Comparison Process/CP Theory/Model; 14 Mar. 2014
.
It’s now possible to observe, dissect, explain, and study problem solving using the basic tools previously deeply explained and shown: Comparison processing, the basic function of the cortical columns; Least energy, part of the 2nd Law,  and the unlimited apps/methods which can be generated from both; complex systems thinking, and structure/function relationships.
.
Essentially, what does the brain/mind do at a high level in the cortex when it's trying to make sense of, understand, and comprehend events? It's problem solving. When faced with a person we know, as he begins speaking we try to understand what's being said. This is problem solving we do every day, without limits, in fact. We essentially compare the sounds which we hear to our LTM traces of the words we know, in a sequential way, to get the context of each word, such as "can". Have written about the 5-6 listed meanings of the word "can", all of which are distinguished by comparing the other words around it, to give the correct meaning. This is largely, in most all cases, the source of context: the comparison processing of words.
.
Peruse down to the 5th paragraph:
.
This is a very facilitated, skilled sequence of methods, acts, which we call understanding speech and speaking, which is very efficient, what we call fluency. It’s done very quickly and we are likely unaware of most of what we are doing. But the CP and LE methods allow us to empirically introspect what we and others are doing when we understand & process words.
.
In the same way that we create data from a measuring standard, by comparing the length of a piece of wood to a meter stick, we create a meter/cm output of data. We do this for distance, height, length, width, altitude, shortness, etc., the multiplicity of synonyms again, which show the many complex system descriptions and apps possible.
.
In the same way we have our adjective trios, repeated endlessly as in La Chanson: the high, higher, highest; short, shorter, shortest; big, bigger, biggest forms. The middle one is called the comparative form, but in fact almost ALL of them are comparisons. Bigger than a house, big as an elephant, small as a pea or bee-bee: it's all comparison. The hot, hotter, hottest and cool, cooler, coolest essentially show the linear nature of the concepts, which we model by using that linear nature to create a linear meter scale (a standard, a convention, relatively fixed and stable, a la Einstein's measuring standards); and a linear temp scale based upon the 0 deg. C freezing point of water at standard pressure, as water is ubiquitous and thus can efficiently standardize those scales by exact comparison. And the boiling point of water gives us the 0 to 100 scale which is commonly used. The two fixed points act as a scale against which we can compare most all temps, to some extent. The same with speeds, being distance (length) compared (ratio, proportion, viz. algebra) to time, as in kilometers/hour, miles per hour, or per second, etc. Comparisons all.
.
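The two fixed points of water can even be put into code. A minimal Python sketch, assuming hypothetical raw readings from some uncalibrated instrument: the whole scale is created purely by comparison against the two standards.

```python
def make_linear_scale(freeze_reading, boil_reading):
    """Create a temperature scale from two fixed comparison standards:
    water's freezing point (defined as 0) and boiling point (defined as 100)."""
    span = boil_reading - freeze_reading

    def to_celsius(reading):
        # Compare the raw reading against the two fixed points.
        return (reading - freeze_reading) / span * 100.0

    return to_celsius

# Hypothetical raw instrument readings at the two standards:
celsius = make_linear_scale(freeze_reading=512, boil_reading=1712)
print(celsius(1112))  # midway between the standards -> 50.0
```

Any linear measuring scale, length, speed, or temperature, has this same shape: two (or more) fixed standards, and comparison in between.
.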
The scales are also efficient because they are based on the STP (standard temp & pressure) volume of a cubic cm of water being one gram. Easy to recreate using common water. We find the balance scale, put the unknown on the right pan, and then place the exact, standardized gram weights on the other pan; and where it balances exactly, i.e. compares, we have created the datum of its weight. & we can do this repeatedly.
.
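The balance scale procedure can be sketched the same way: the unknown is expressed entirely as a comparison against standardized gram weights. A toy Python sketch (integer grams, and a hypothetical weight set):

```python
def weigh(unknown_g, standards=(100, 50, 20, 10, 5, 2, 1)):
    """Greedily stack standard gram weights on the other pan until
    the scale balances; the chosen weights ARE the datum."""
    chosen, remaining = [], unknown_g
    for w in standards:          # largest weights first
        while remaining >= w:
            chosen.append(w)
            remaining -= w
    return chosen

print(weigh(87))  # [50, 20, 10, 5, 2] -> 87 grams, by comparison alone
```
.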
In the same way that measuring & maths order/create data, so does language/speech, which arises neuroanatomically within, and interdigitates with, the left hemisphere speech centers. Math arises from the left posterior speech centers, which interdigitate with Wernicke's area. When the speech there is damaged, so is math, and proportionately to the damage. And it's also repaired in the same way it was developed: by recruitment of surrounding cortical columns.
.
Thus, the relatively fixed idea/word system, which is self-reinforcing in our memories, also provides this same fixed landmark, standard, convention of a word, which we compare to all else. The more words we have which are understood and meaningful, the more comparison standards we have, verbally, to describe events in existence. Thus the simplicity of mathematics/measurements shows us that we build up relatively fixed, stable, efficient ideas/words, the many 100K's of which we use to describe the complexity of events in existence.
.
For instance, ROY G BIV provides the comparison standards for the colors, most exactly, and even children can tell us what those words mean, and can point them out, because they've created a comparison standard in their LTM by constant reinforcement, seeing those colors repeatedly via their senses. This is how language creates meaning: by comparison processing, at first against events in existence, and then, by creating the hierarchies of meanings, creating more and more useful, widely applicable, abstract ideas/words, which do the work efficiently, that is, least energy (LE).
.
Thus we have a more complete, more widely applicable model to understand understanding, which is consistent, also, due to the consistency of comparison processing. We can model a model, and then model that. The same with analyzing analysis, and so forth. Each of the mental processing words has these characteristics: the comparison process word cluster. As does adding adding, and adding that, etc., throughout arithmetic. We can compare a comparison, and then again and again, without limit, inputting the output, and then again and again.
.
In addition, the Comparison Process is NOT globally exclusive; it excludes, specifically and very rarely, no more than the specific cases at hand. This has been shown before. This is where consistencies come from, and how it works.
.
Now, with this solid basis, we move to: how do we solve problems? Using this model, it becomes astonishingly easy to see, describe, and observe problem solving working in brain/mind. We create recognition by comparing events to our LTM of same, efficiently, of all types. Therefore, HOW do we figure out if a chemical, or medication, or drug has an effect? How do we thus relate an event, i.e. a treatment, drug, etc., to the outcomes of using those for specific tasks?
.
Dr. Paul Stark has shown this extensively in "The Method of Comparison," Ch. 28:
.
& will not ignore his deepest, specific insights. We compare outcomes of each chemical to the desired goal, do we not? Does one form of penicillin work better, and with fewer systemic untoward effects (side effects), than the others? Is it cheaper to manufacture? Is it long enough acting to reduce dosing, and how many micro-organisms does it kill? Each of these is easily seen to be a least energy outcome driving the T&E process of comparison.
.
The best ones are chosen, when compared by Trial and Error (T&E).
.
And that is the key: trial and error, or, as it's sometimes referred to in the more specialized context of AI, back propagation, as that models the T&E methods used by our brains, comparing outputs against a specific or more general goal. That is, does the treatment work, and how well?
.
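This trial-and-error-with-comparison loop is simple enough to put into code. A minimal Python sketch (the goal function and step size here are hypothetical, not any particular AI system): propose a variation, compare its outcome with the best so far, and keep the winner.

```python
import random

def trial_and_error(score, start, steps=2000, seed=42):
    """Generic T&E: vary the candidate, compare outcomes, keep the better."""
    rng = random.Random(seed)
    best, best_score = start, score(start)
    for _ in range(steps):
        trial = best + rng.uniform(-1.0, 1.0)  # propose a random variation
        trial_score = score(trial)
        if trial_score < best_score:           # the comparison step
            best, best_score = trial, trial_score
    return best

# Toy goal: find the x that minimizes (x - 3)^2, i.e. x near 3.
x = trial_and_error(lambda v: (v - 3.0) ** 2, start=0.0)
```

Massive repetition of this cheap compare-and-keep cycle is what brute-force methods, Edison's included, come down to.
.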
Thus pharmacology is essentially massive trial and error, & have elsewhere gone into combinatorial chemistry, the top-of-the-line method for creating 100K's of drugs in a single day in a properly equipped and managed lab, as a good way to deal with bacterial resistances. Because how can any bacterium be immune to 100K's of new antibiotics? Thus the problem is solved by an efficient use of combinatorial chemistry, if properly recognized as the solution to drug resistances. It's very clearly problem solving by T&E, massively so.
.
The next problem solving section will show in detail how Edison created the electric light, and how the methodology can be extended to literally EVERY creative act in almost every field, as well. Thus the invariant, stable nature of this method of problem solving, and how and why it's used. And not just by us, but in nest building, hive creation, prairie dog tunneling, and so forth.
.
How did it arise in Edison's brain/mind that the electric light was possible? Using the CP and LE templates we can read his mind, & explicate, describe, AND model the steps he used to do this, & each of his creative acts, which changed our civilizations and bettered our lives. He was running current through a copper wire and noted that it glowed, did he not? That was light, he recognized. Thus, as he turned up the current, it glowed more and more until the copper melted, and did a bit of oxidizing too, please note. See here now the empirical introspections that are possible with this model of comparison processing, viz. the heart and core of observing cognitive neurosciences on a practical level.
.
So he had to find a substance which would not burn up, and would not break down/melt from running a current through it. He did this by trial and error testing. And he soon learned that oxygen was a burning problem with materials, as he tried one after another. So he put the material to be tested in a roundish globe, the glass bulb, which is stabler than any other shape due to its roundness, and also uses the least amount of material (far, far better than a cube) to enclose a stable vacuum. Thus least energy again, we see!!!
.
So he carefully placed that material into the glass globe, created efficiently by generations of professional glass blowers, via LE and CP developments of their skills. (The difference between an amateur and a professional is the latter's highly developed, least energy, efficient skills, and how those are ALSO developed by T&E! Multiple CP's again.) So the material to be tested was put in the globe, the air greatly evacuated, and then sealed.
.
Then it was attached to a rheostat-like device, and he steadily turned up the juice until it began to glow, slowly, so as not to burn it out too quickly or otherwise waste all that time. He could also control the current so as not to exceed a limit. He tried, be it noted, 900 materials before he found the carbon filament which glowed for 80 hours, and then duplicated it to confirm, by repeating the methods, that it was workable. He had the electric light, tho he, the Tinker, as Tesla called him, did it by brute force. And thus the electric light was born.
.
Now we have the real guts and core of how inventions are created. But do we? This is where the recognition systems cut through to the heart and core of creativity: the higher level abstractions which solve the problems faster than 900 trials. That route was closed to Edison, because he was NOT educated in physics and such. But it would have at once occurred to Tesla, because he was in fact educated, that is, had more information, more complete facts about how materials work and their characteristics. Edison could have cut a year off his efforts, and more efficiently (LE) solved the problem by testing only a few proper materials.
.
The solution to this problem is clear: refractory materials do NOT break down with high heats, either. Carbon was such a one. Tungsten and molybdenum and some Pt-group metals are also this way, but those latter are WAY too expensive & rare to use, and violate the least cost, least energy rule. He needed a commonly found material, and that was carbon; carbon is still used today in the carbon arc lamp systems which create high intensity, long-lasting search lights and so forth.
.
We use tungsten, which is commonly available, to create incandescent lamps today. It is more durable, with higher refractory traits, less delicate than carbon, and far, far longer lasting, due to its innate characteristic that it will not easily melt at high temperatures within a vacuum. & that's the cutting of the Gordian knot of complexity: least energy trials of such substances, which we use today. It saves time, expense, and development cost. Thus, once we understand how creativity is created by the brain/mind, we can more easily solve our problems, can we not? And here is the wellspring of creativity described, characterized, and shown how it comes about. If we know where to go, we get there faster. It's least energy. Thus this model.
.
Then we have the P and NP problems, and how to solve them. The P problems are those which are readily solved: as with the equation for miles per gallon, we simply fit the figures into the equation and work it out. Or the broader one: how many gallons will it take to get to city X, and will I have to fill up the tank to get there?
.
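The "solved," P-type problem above is literally a line or two of arithmetic. A trivial Python sketch with made-up numbers:

```python
def gallons_needed(distance_miles, miles_per_gallon):
    """Fuel required is a simple ratio: distance compared to economy."""
    return distance_miles / miles_per_gallon

def must_refill(distance_miles, miles_per_gallon, tank_gallons):
    # Compare the fuel required against what the tank holds.
    return gallons_needed(distance_miles, miles_per_gallon) > tank_gallons

print(gallons_needed(300, 30.0))   # 10.0 gallons to reach city X
print(must_refill(300, 30.0, 12))  # False: a 12-gallon tank suffices
```
.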
But the NP problems are NOT yet solvable, and most believe, quite rightly, as can be shown below, that P is NOT = NP. And this is very easy to show. Simply stated, in order to convert an NP problem to P, we must add information. Specifically, the above argument that incompleteness is a thermodynamic problem applies here. The information content of a solution/method to a problem is higher than in the NP, unsolved situation. Thus incompleteness is equivalent, and related, to entropy; and as solutions are least energy, they are more efficient than NP by necessity. Thus the TD argument shows, very clearly, that P is more information rich, technically more complete, than the NP from which it arose and was solved. Information was added (usually, as above, by T&E methods), and NP became P. This is the proof which has so far eluded professionals in the area, because they did NOT have the understanding, that is, the least energy, comparison process method, to understand more completely what goes on in most problem solving.
.
Thus, we have the proof here. But how do we find answers to those NP problems? We must use trial and error outcomes until we do so. But, as shown above, there are often ways to shortcut brute force, cutting through the complexities of the Gordian knot by higher concepts and abstractions, which more generally provide the answers. This is why P is NOT equal to NP. Info theory and thermodynamics show us why, because the higher concept of incompleteness as a thermodynamic value has been shown to be the case by least energy considerations. Solutions do a LOT more with less, and permit the solution of problems by T&E methods. This is the general, verbal solution to the problem, and will let the mathematicians find the corresponding experimental math methods to do so.
.
But it will also show HOW it's done, likely by the below truths of how we mathematize verbal descriptions, or how Einstein and Minkowski mathematized relativity using 4D space-time mathematics. I.e., the experimental maths. Those are what Fermi/Ulam used to begin to describe complex systems of all kinds: finding the landmarks, the stabilities of events, which are least energy forms, in those complex system events, which showed the solutions to them. Tho Fermi and Ulam had NOT realized those were least energy forms, which created the stabilities which repeated themselves, & then pass by natural reinforcements into our Long Term Memories. Thus neatly combining Behaviorism into the cognitive neurosciences, by adding more information to behaviorism, you see?
.
Have already mentioned deliberately the way the linear, verbal method of the unlimited high, higher, highest approaches converted lengths into measuring scales. The same with temps, and speeds. And by extension the Mohs scale of hardness, using the relative hardnesses of the minerals to create the linear 1-10 scale. And that was mathematized using the same creative method: comparing the relatively fixed standard of pure quartz, at 7, to a scale of measured hardness expressed in pascals. Thus hardness became more precisely defined by math, which is its least energy value, and why we use math, BTW. This created the current mathematical, linear scale of hardness, did it not? Creativity is seen by studying the history of how the Mohs scale was created: comparing relative mineral hardness, from talc, gypsum, and calcite (limestone), up through quartz (7) and corundum (sapphire, 9), to diamond at 10, was the same method.
.
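The Mohs scale is a purely ordinal comparison standard, so it sketches directly into code; the scratch test IS the comparison.

```python
# The standard Mohs comparison series, softest to hardest.
MOHS = ["talc", "gypsum", "calcite", "fluorite", "apatite",
        "orthoclase", "quartz", "topaz", "corundum", "diamond"]

def hardness(mineral):
    """Mohs number = 1-based position in the comparison series."""
    return MOHS.index(mineral) + 1

def scratches(a, b):
    # The harder mineral scratches the softer: a direct comparison test.
    return hardness(a) > hardness(b)

print(hardness("quartz"))             # 7
print(scratches("quartz", "gypsum"))  # True
```
.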
So, with a final example: how DO we mathematize growth? By understanding its basic nature. If the growth is arithmetic, it's simple, & we use simple addition to do so, and multiplication, yet another, the 3rd hierarchy of arithmetic; thus counting by least energy methods.
.
But if it’s exponential, then we must find guidance from a great mathematician, Whitehead. and from his words find, by T&E, a math system which corresponds, very exactly and can be widely adjusted for the kinds of growths found, too.
.
Here is how it’s done. and if I can do this, then anyone who’s educated, can!!!
The elements of understanding, a good education and speed of processing also essential to solving problems, too. Recognitions are the key, and that’s how Ramanjuam, worked & was the source of his remarkable creativity.  Feynman’s magic used the same in creating the diagrams, which within a few minutes could duplicate the results of weeks of computations in his time. So he was lazy, AKA efficient, and found the solution by using horizontal, hierarchical, diagrams to symbolize, visually, the processes occurring. No accident there, as hierarchies are very efficient and near universal problem solvers, too, by showing deep, existing real relationships among events. & visual thinking takes huge amounts of events (a picture being worth 1000 words), thus handling efficiently masses of data.
.
But how from Whitehead? Again, the subtleties of Einstein: "Raffiniert ist der Herrgott, aber boshaft ist er nicht." Subtle is the Lord, but malicious He is not; and the universe of events also shows us the ways to do it. And here is yet another way to describe, mathematically, growth of all sorts, based upon the verbal descriptions and process thinking ideas of Whitehead.
.
He stated, "Almost anything which jogs us out of our current abstractions is a good thing." In other words, it gets us to grow, learn, and develop. This was also stated, in his own way, by Thomas Jefferson: "I hold it that a little rebellion now and then is a good thing." All the myriad ways, we see.
.
But Whitehead was subtle, and he saw the truest form of most all growth. “A society which cannot break out of its current abstractions, after a limited period of growth, is doomed to stagnation.”
.
We know that growth of this kind is mostly exponential. But it starts out slowly, then gets going, goes exponential; but forever? Not likely. Most all methods, tools, & techniques have their capabilities, which in a least energy way create compound interest growth, that is, exponential growth. But they have their limits, which also reduce that growth at the top of the curve. And what is the shape of the curve which Whitehead's statement describes verbally? The mathematical S-curve, very likely. And how was that found? By translating the words into a series of descriptive, measured mathematical curves. By using T&E, these can be fit to the observed growth, simply by measuring it, then tweaking & adjusting the curve to model the verbal description more completely. Just as the linear hot, hotter, hottest gets modeled by a temperature line, the curve is found by trial and error, and by the same trial and error tweaking we can create a very good model of each kind of observed, real, existing growth. We find the S-curve at the bottom of every business cycle, and at the top when the growth stops, too. Or why economies grow so well for a while, and then peak out. Or corporations, in the same ways.
.
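Whitehead's verbal description, slow start, exponential middle, stagnating top, is captured by the logistic function. A small Python sketch (the parameters here are arbitrary, chosen only to show the three phases):

```python
import math

def s_curve(t, ceiling=1.0, rate=1.0, midpoint=0.0):
    """Logistic (S-) curve: near-exponential growth that saturates
    as it approaches its ceiling, the 'limited period of growth'."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

print(round(s_curve(-4), 3))  # 0.018: the slow start
print(round(s_curve(0), 3))   # 0.5:   the steep, exponential-looking middle
print(round(s_curve(4), 3))   # 0.982: the stagnating top
```

In practice the ceiling, rate, and midpoint would be tweaked by T&E until the curve matches the measured growth, exactly the fitting process described above.
.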
Let us advance to physics and show how this model can create a more universal, unifying model of thermodynamics and gravity. We see the avalanches, which are created by having a least energy growth phase, where innumerable tons of snow, rocks, etc., slide down the mountain. Just a bit more weight from some snowflakes can precipitate this effect. & this also enlightens us as to how a butterfly flapping its wings can, in a complex system, create a tropical storm over time: least energy growth combined with a system holding energy, ready to grow. Even weather and winds.
.
The avalanche at first grows and grows in size, exponentially, until part of the mountainside slides down the gravitational, least energy gradient. And then? Reaching the bottom, it slows down & eventually stops, mostly. Although for weeks, even months after, a bit more settles and slides downhill, and so forth. The top of the S-curve is seen. Inverted, no doubt, but still real and existing. Most all landslides can be modeled by this method. And this is how, by comparison, we can model growth, and landslides and avalanches, by comparison processing. Creating creativity.
.
Gravity is a least energy function. Things tend to fall from higher orbits, positions, or levels to lower ones. Least energy is the case. This can begin to combine TD and gravitational systems.
.
That’s essentially how we create solutions to problems. Using comparison processing relationships which we find, and then by comparing the events, find a relationship. And then by further T&E comparing, find a math, or create a math which can model the events. That’s how Newton did it for gravity. How Archimedes got his Eureka moment as he created the mass/volume concept of density, which gave him such a dopamine boost, and likely saved his life, too.
.
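Archimedes' insight is itself a double comparison: mass compared to displaced volume, and the result compared against the standard for pure gold. A toy sketch with illustrative numbers (the crown's figures are invented; only gold's density is real):

```python
GOLD_DENSITY = 19.3  # g/cm^3, the comparison standard for pure gold

def density(mass_g, volume_cm3):
    # Volume is measured by the water the object displaces.
    return mass_g / volume_cm3

crown = density(mass_g=1000, volume_cm3=60)  # displaces 60 cm^3 of water
print(round(crown, 1))        # 16.7 g/cm^3
print(crown < GOLD_DENSITY)   # True -> the crown is not pure gold
```
.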
This model can be extended, with time, T&E, & work, to most all forms of the unlimited professional outputs, also without limit. For each method is not final, but an LE way station to the next, more efficient methods. Unlimited growth and improvements flow from this understanding of how events work.
.
And if we want to understand the complex systems of flow, then we find the stabilities in them, and record and organize those, to build up an increasingly efficient model of how to describe turbulent flow.
.
And how a Viagra 50 mg tab can be made to work for 3 days in those who respond to sildenafil, & with a 20-minute onset of activity, too.

An Hierarchical Turing Test for AI

.
By Herb Wiggins, M.D.; Clinical Neurosciences; Discoverer/Creator of the Comparison Process/COMP Theory/Model; 14 Mar. 2014
 .
Have been following Jeff Hawkins' work with some interest, and note his work on hierarchical temporal memory models. It's likely that Hawkins has the closest working model to creating a good simulation of AI.
 .
These insights, which may help facilitate creating AI, can be explained in detail using the concepts of the comparison process & methods, structure/function relationships, least energy & methods, & complex systems thinking.
 .
Using structure/function relationships (clinico-pathological correlation) traditionally, we can generate & create unlimited information about how the brain works, and its corresponding outputs: vision, language, motor/sensory, thinking, memory, and so forth. With MRI, fMRI, and MEG, and combining the two latter, even more information is now extractable from the living brain using those methods. These tools, which have seemed to read higher brain functions right off the cortex, the mind/brain interface (the cortical columns of Mountcastle), are S/F relationships, found by the millions in clinical comparisons between lesions in brain and functional deficits, & are self-evident to us all. However, using a comparison process model, which largely operates in the CC's, a very much better understanding can be found.
 .
The model is elegant and simplifying, as well as highly fruitful. Because a comparison can be made between or among most every event in existence, both inside brain and outside (in the various permutations), it's part and parcel of the structure/function method, and appears to be a universal tool. Least energy is also a comparison process, comparing the energy, resources, time, and efficiencies of different methods to find new insights, & is likely universally applicable, too.
.
 It’s necessary to ask this deep question. How does it happen that the brain is organized upside down & reversed right for left in humans, and very likely most normal primates, mammals, and likely reptiles and birds, as well?
 .
The right hemisphere controls the left body and vice versa, in both the motor and sensory homunculi. This is well known from the time of Wilder Penfield and numerous published cases. The toes are at the top, in the interhemispheric fissure. Then, moving down the cortex of the motor strip, come the instep, the arch, the ankle; and the ankle is connected to the leg bones, & those to the knee, then the thighs, etc. The face is at the bottom, and the trunk and arms fit in between, in a smooth transition. Why is this so? In addition, the visual system is also reversed right for left and upside down, with the superior visual field functions processed in the inferior occipital lobes, and the inferior fields, from 3 to 6 to 9 o'clock, in the top parts of the visual cortex. & again, the left field, from 6 to 9 to 12 o'clock, on the right visual cortex, & vice versa for the right visual fields.
 .
And there is the optic chiasm, which takes the right visual field to the left hemisphere, and vice versa, and the inferior fields to the superior cortex, in a smooth transition of reversed, upside down & right for left. & the decussations of the pyramids in the inferior brain stem take right sensori-motor afferents from the limbs to the left brain, the same way as the cortical efferents are taken from right hemisphere to left body, also. So wherever we find in animals the decussations of the pyramids, we know the brain is most likely upside down and right for left oriented, and not just in humans & primates, but likely in most mammals, including the ancient marsupials, and the platypus and echidna.
 .
Why & how does this arrangement come about? We hear of the top/down models which were fashionable a few years back, but those were somehow not clearly satisfying, either. Somehow incomplete.
 .
The best, most efficient answer, & the key to understanding human and primate/mammalian brain neuroanatomy, is quite simple. When we take a double convex magnifying glass and hold it at arm's length, comparing the images of the sights around us, the image thru that double convex lens is reversed right for left and upside down. The trees are upside down, the sky is on the bottom and the grass is on the top. The house sitting on the right of the scene is on the left, and the shrubs are on the right. This is the visual comparison which shows us what's going on. The eyes have it! The brain is organized visually, and never in my years of study and work has this simple correlation/comparison been described or discussed. It's a deep and universal observation, spanning most higher animals. The ability to create new insights using visualization methods is highly human, and species specific. Although some primates can copy our actions, the meaning of many actions, as in cargo cults, ofttimes eludes them.
 .
Quite simply, we are visual creatures, & our brains are organized largely upon the images of events upon our retinas, and very likely by least energy, as well. We have stereoscopic vision, different from other species whose visual fields rarely overlap, except for our cousins, the primates. The evidence is also in the internal capsule's organization, that of the massive connections of the white matter to the cortices, and the internally mirroring deep structures of the globi pallidi and the thalami as well. A truly universal structure/function comparison.
 .
But not only does this occur in humans and land mammals, but also in the dolphins, where the same decussation of the pyramids is also seen. We don't have to dissect these creatures to see it, but simply image their nervous systems to find it. & thus we know their hypertrophied sound cortices are also organized in the same way, to coordinate highly and efficiently with the older visual systems, and the motor/sensory functions. This is basic neuroanatomy; and although the top/down view, old at the bottom and younger at the top, is interesting, it doesn't have the depth, profundity & universality of the right for left, upside down features. Even our cerebellum is organized the same way. The deepest structures are in the brainstem and base of the brain; the newest, on the roundish outside, the cortex.
 .
The comparison process, when generally applied to data both within and outside of us, gives us the ability to see events in new ways and relationships which we have missed. For instance, color vision. How does this come about, this model of events in existence? We can compare our brains' color system with the EM spectrum for new knowledge. We know that orange is a frequency of light in the spectrum. But not in the visual cortex: it can be generated by mixing yellow & red pigments, and still we perceive orange! Blue-green, by mixing blue & green, where the real and existing blue-green frequencies are there, but not a mix. And then the purples, by mixing blue and red. And the browns, a highly significant color for some color blind persons, are red/green in combined EM frequencies. Show us, by comparison, the "brown" frequency in the spectrum!! It's not there. Thus our visual pigments, interacting with the photons' black/white continuum, create the colors simply and elegantly by mixing. But they cannot convey to us the existence of the spectrum based upon frequency, even tho, in fact, the eye may use the energies of colors to detect those.
.
.
Another insight is the rainbow, by which Newton was able to see the frequencies of light by separating them through refraction. The rainbow is rare, or is it? In fact, it’s a daily phenomenon. Part of a Kuhnian revolution is “new seeing”: new comprehensions created by a new model. As Einstein once stated, “Every advance in physics is preceded by an epistemological advance.” This might also be true of the neurosciences. Where IS the daily rainbow seen? Sunrise and sunset. First it’s black, hardly any color; then IR with black/red; then red, then orange, yellow, a bit of green, and finally blue sky. We see the greens from the plants, and the browns as well, seeing bark and branches. & with the black, and then the shades of greys with clouds, the color palette is thus re-set for us every day. And again at night with the sunset rainbow, too.
 .
Consider that our eyes see colors best at the brightest and most common frequencies which the sun creates, yellow/green, the center of our visual sensitivities. If an eye needed to detect light best, it would have its highest color sensitivity at yellow/green. Which eyes do. Those frequencies, which the sun emits at the highest numbers of photons, and which are thus brightest, give the greatest amount of information. It’s efficient, least energy. If we lived on a planet whose sun were yellow to slightly orange, our photopigments would be most sensitive at those frequencies, would they not? So our eyes are clearly attuned, evolutionarily & structurally, to the sun. So is the rest of our nervous system attuned closely to other common events in existence, as will be shown shortly. These are but a few of the many kinds of insights which comparison process thinking can give.
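The match between the solar spectrum and our visual sensitivity can be checked with a back-of-envelope sketch. This is a minimal illustration, assuming the standard Wien displacement constant, an approximate solar surface temperature of 5778 K, and the usual ~555 nm photopic peak; the 100 nm tolerance is an arbitrary choice for the comparison.

```python
# Back-of-envelope check: the solar spectrum peaks near blue-green,
# close to where human daylight (photopic) vision is most sensitive.
WIEN_B = 2.898e-3          # Wien's displacement constant, m*K
T_SUN = 5778               # approximate solar surface temperature, K
PHOTOPIC_PEAK_NM = 555     # approximate peak of human photopic sensitivity, nm

peak_nm = WIEN_B / T_SUN * 1e9   # wavelength of peak solar emission, in nm
print(round(peak_nm))            # prints 502 -- blue-green
print(abs(peak_nm - PHOTOPIC_PEAK_NM) < 100)  # True: within ~50 nm of the eye's peak
```

The two peaks land within about 50 nm of one another, which is the quantitative form of the "eyes attuned to the sun" observation above.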
 .
These insights have been missed, for want of the comparison process and least energy tools. & perhaps we can still see a lot by just watching, to paraphrase Yogi Berra. Observation still trumps most all models, and creates, corrects, and extends them, too. & so the sciences are self-correcting, by carefully observing, retesting, and confirming repeating events in existence.
 .
Thus do we find this comparison process, which creates many methods, to be useful. At the UC Berkeley campus, Dr. Paul J. Stark, PhD, Statistics chair, has a lovely video in which he discusses, “How do we know treatments have effects? The Methods of Comparison.”
 .
 .
“The effect is ubiquitous.” That is, universal in application. And he’s worked it out rather well, though he’s likely talking about numbers of comparison methods, plural, rather than one. The comparison methods create information, and as Dr. Karl Friston states regarding least energy (LE), it’s “consilient”: crossing many fields in application, as Dr. Stark also writes. Pieces of this model are all over, but not yet fully integrated and developed. This is yet again more evidence of near universality.
 .
Dr. Friston’s work on connecting least energy to how the brain functions is pretty well spot on, and well worked out over the last 25 years. My work on evolution from the comparison, LE approach pretty well confirms his, and that of others who’ve also seen those relationships when applying LE.
 .
 .
This is cutting edge evolutionary biology, but few have realized it. & at the heart of it lies LE and comparison processing.
.
.
When the comparison process and least energy are added in, we get a very great deal of information back out. The inputs and outputs of our CC’s can do this repeatedly, and even feed outputs back in as inputs, to create the hierarchies of our understandings. Essentially, our language and modeling systems are built upon this. Cognitive psychology taught us a very great deal, but hasn’t progressed enough yet, tho intuitively it realized that recognitions are basic & the case. But what is behind recognition? Clearly, it’s the comparison process. When we perceive events in existence, we at once compare those to our long term memories, and if we find a match, we recognize it, that is, we “re-know” it. The language, the etymology, reflect the structure which gave rise to it. And this is another key to understanding understanding, and thinking about thinking.
 .
The constant calls to LTM for recognition and the learning which establishes those memories are very clearly related. Events in our universe repeat themselves multiply. Those which are important to us reinforce themselves into LTM. These are the CC’s modeling the repeating events by reinforcement. & then we create our landmarks & so forth upon those perceived events. This is essentially the behaviorist model, but it’s missing big pieces, neurochemical and neurophysiological. It’s incomplete, as has been shown, but can be neatly integrated into a larger cognitive neuroscientific model. The highly repeating events in existence are yet another basic clue to AI and how it can be created. Plus, they weigh in heavily with respect to confirmation of events in existence by the sciences.
.
.
But when we want to understand description, which is largely verbal, we also find the quantitative version of description, measuring, most useful. When we take a ruler or tape to measure length, we compare the set, stable length standard to the event to be measured. This creates data, or information, of length. & when we compare to a word standard, we describe relative to that standard, say a colour among the ROY G BIV standards. Is it red, or yellow, or green, or blue, etc.? So it’s likely that our descriptive measures of events, using words, which are primary, are intimately related by comparison processing to measuring standards as well. They are both comparison methods. Virtually all we do to create information & understanding is thus rooted in comparison processing in our cortices. Again, essential to creating AI.
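The parallel between word-standards and measuring-standards can be sketched in code. This is an illustration only: the ROY G BIV band edges below are rough, hypothetical values chosen for the example, not authoritative colorimetric boundaries.

```python
# A sketch: naming a color is comparing a measured wavelength (nm) against
# fixed word-standards, exactly as a ruler compares length to a standard.
# Band edges here are rough, illustrative values.
BANDS = [("violet", 380, 450), ("blue", 450, 495), ("green", 495, 570),
         ("yellow", 570, 590), ("orange", 590, 620), ("red", 620, 750)]

def describe(wavelength_nm):
    """Compare the measurement to each word-standard; return the matching name."""
    for name, lo, hi in BANDS:
        if lo <= wavelength_nm < hi:
            return name
    return "outside the visible standards"

print(describe(532))  # green (a green laser pointer's wavelength)
print(describe(650))  # red (a red laser pointer's wavelength)
```

The word emitted is pure comparison output: the number was the measurement, the name is the description, and one lookup table ties them together.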
 .
 .
Most words are likely descriptive standards which we compare to events in existence to understand those events. That is, we relate them to events. And this is the point: it’s the comparison which creates the information and knowledge, just as the measurement creates the data and information also. Ideas/words are relatively arbitrary standards by which we measure events and describe them, qualitatively.
 .
For instance, “son”. We know how he is related to his parents, mother and father, to his siblings, and to his parents’ siblings as his aunts and uncles. The “relatives” of the son are clear. This builds up the hierarchies of genealogies and how we are related to others in our families. The son has a father, as his father was a son, and can have a grandfather, too, or grandsons and so forth. Each comparison creates a hierarchy. Each relative has set, specific relationships created and read by comparison processing. In order for AI to be able to function, it MUST be able to understand, create, and navigate the hierarchies of our understandings with simple facility. This could be termed a “hierarchical Turing test” for AI.
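The kinship hierarchy above can be made concrete with a minimal sketch. The relation table is invented for illustration; a real genealogy system would of course carry many relations, not just one parent link per person.

```python
# A minimal sketch of kinship as repeated comparison/lookup over one relation.
# The child -> parent table below is a toy example.
PARENT = {"son": "father", "father": "grandfather"}

def ancestor(person, generations):
    """Walk up the hierarchy, one comparison (lookup) per generation."""
    for _ in range(generations):
        person = PARENT[person]
    return person

print(ancestor("son", 1))  # father
print(ancestor("son", 2))  # grandfather
```

Each hop is one comparison against the stored relation, and chaining the hops is exactly the hierarchy navigation the paragraph describes.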
.
And we see the hierarchies all around us. The comparison process both creates the hierarchies and navigates among them. The hierarchies of the dictionaries in alphabetic order, for instance. Every word is ordered by alphabetic comparison processes using trial and error, and dictionaries are read by the same process. It BOTH creates, writes, and reads, navigating well throughout the hierarchies, & is thus LE. It does lots with a little. The taxonomies of the species, in the millions; the taxonomies of the languages, both current and extinct, each finding its relationship to the others by hierarchic relationships among comparisons, without limit, of the words in each language (&/or dialects), and how those are related by “comparative linguistics” to the others. The Teutonic languages are clearly related by comparison, massively; the next hierarchy up is the Indo-European. The Uralic-Altaic family relates Suomi and Magyar as well, each massively compared by words to the others. The Semitic and related Afro-Asiatic languages, ancient Egyptian, Arabic, Amharic, Coptic, Aramaic, etc., are all seen by massive comparison of words to be closely related. Each comparison combines to create a huge amount of data supporting the ubiquity and limitless use of the comparison process, innate to our cortices. 34 millions of elements & compounds are ordered in the IUPAC hierarchical listing alone; all of them comparison process created, placed, and read.
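The dictionary example above is directly executable: alphabetic ordering is built by pairwise comparisons, and the ordered list is then read back by the same comparisons. A minimal sketch, using Python's built-in sorting and binary search:

```python
import bisect

# Building the "dictionary": sorting is nothing but repeated pairwise comparison.
words = ["taxonomy", "comparison", "hierarchy", "alphabet"]
ordered = sorted(words)
print(ordered)  # ['alphabet', 'comparison', 'hierarchy', 'taxonomy']

# Reading it: binary search locates an entry by the very same comparisons,
# in O(log n) steps -- doing lots with a little, i.e. least energy.
idx = bisect.bisect_left(ordered, "hierarchy")
print(ordered[idx])  # hierarchy
```

The same comparison operator both writes the hierarchy and navigates it, which is the point of the paragraph.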
.
This is Einsteinian epistemology as well, because there are NO absolute measures of times or spaces, or much else. Most all that we measure is “relative”, that is, measured against a comparison standard, which Einstein believed was arbitrary, but in fact is not likely that arbitrary. Thus does the comparison process tie in neatly with the established facts of relativity, and explain it as well.
.
For instance, he stated that one could use any planet, or any position in our solar system, as a fixed point to which we could relate everything else in the universe. But he missed this crucial point: Newtonian physics puts the sun at the center because of least energy. Newtonian methods are also least energy solutions. Orbits about the sun are least energy; orbits around the earth are not. Least energy rules, yet again. And this is the point: our linguistic standards, and the means by which our brains operate, are NOT unlimited listings of complexities, but are related to least energy rules as the standards. Thus our measuring systems, hands for horse heights at the shoulder, and feet for length, are least energy, & based upon comparison to human anatomies, at first. Least energy rules are often the means by which we cut the Gordian knots of complexity.
.
For our senses, we feel hot and cold. Hot, hotter, hottest, and cold, colder, coldest. Hot, more hot, most hot, and cool, more cool, most cool. Notably, the central “hotter” words, etc., ARE the comparative forms!!! Again, language shows us the way, but we did not see those clear clues. Those scales are most all comparisons!! These are so many instances of linear temperature scales.
.
But put a warm object on a cold hand, and it will feel hot. Put a cold object on a cold hand and it will seem normal. Or a cold object on a very warm hand? An extreme of cold, again. Sensation is thus largely comparison process, too.
.
This is most readily and famously proven by using 3 bowls: the left with ice cubes in the water, the middle at 70 deg. F, and the right bowl at 105 deg. F. Right hand in the hot bowl, left in the cold, and equilibrate skin temps for a few minutes. Placing the left, chilled hand in the 70 deg. water, it feels quite, quite warm. And then putting the right hand, from the hot water bowl, into the 70 deg. water, it feels quite cool. This cognitive dissonance arises from the fact that the temperature nerves work by comparison only, against ambient skin temps, and have no clear-cut absolute temperatures as standards. They use what they have, and speaking generally, the entire rest of the sensory system works this same way: relative to the ambient sensations, and thus comparison processing, most all.
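The three-bowls demonstration can be sketched as a toy model. The 10-degree sensation threshold below is an invented parameter for illustration, not a physiological constant; the point is only that the output depends on a difference, never on the absolute water temperature.

```python
# Toy model of the three-bowls experiment: the skin reports only the
# DIFFERENCE between water and ambient skin temperature (deg. F).
def felt(water_f, skin_f, threshold=10):
    """Return the sensation produced by comparison alone."""
    diff = water_f - skin_f
    if diff > threshold:
        return "warm"
    if diff < -threshold:
        return "cool"
    return "neutral"

# Left hand equilibrated in ice water (~40 F), right hand in hot water (~105 F),
# then both plunged into the SAME 70-degree bowl:
print(felt(70, 40))    # warm
print(felt(70, 105))   # cool
```

One bowl, two opposite sensations: the model reproduces the dissonance because there is no absolute standard anywhere in it, only a comparison.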
.
And we compare against the relatively fixed standard of our temperature scales, based upon the freezing and boiling points of water at standard pressure, do we not? Simple, ubiquitous water becomes the basis of human temperature scales. An efficient, stable standard is thus created by simplicity and the commonality of water.
.
Hardness is the Mohs relative scale: talc, which is soft; limestone, harder; corundum (sapphire), harder still; and diamond, hardest. Thus hardness description is also comparison process. Now we use GPa, but that is relative to a more standardized, and thus more efficient, hardness standard, is it not?
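The Mohs scale is purely ordinal, so it reduces to a single comparison: a mineral scratches anything ranked below it. A minimal sketch (limestone is represented by its main mineral, calcite, at Mohs 3):

```python
# The Mohs scale is ordinal: hardness "measurement" is one comparison.
MOHS = {"talc": 1, "calcite": 3, "corundum": 9, "diamond": 10}

def scratches(a, b):
    """Mineral A scratches mineral B iff A is the harder of the two."""
    return MOHS[a] > MOHS[b]

print(scratches("corundum", "calcite"))  # True
print(scratches("talc", "diamond"))      # False
```

No units, no absolute quantity: the whole scale is nothing but ranked comparisons, which is exactly the paragraph's claim.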
.
And for visual images? Why should we think that the visual system uses our geometries for modeling the universe of events? Non-Euclidean geometry models the real universe, and neither do we see perfect circles, right angles, squares, and triangles very much in the natural world, either. Instead, more fractal types of features, which are clearly NOT Euclidean, even as our space/time maths use non-Euclidean geometries. And so for shapes the visual system uses curves and roundness, because those are the commonest features the occipital lobes detect. The mother’s face is marked by roundnesses and curves. That likely sets the standard for shape processing in our visual cortex.
.
Thus we see optical illusions, some of the commonest of which involve seeing straight lines as curved. Esp. the two straight lines drawn through a series of nested circles. This optical illusion is illustrative of all the others. And how to show those lines are NOT curved, which our visual system insists they are? Take a clear plastic straight-edge ruler and lay it down next to the straight line, and the illusion disappears at once. & generalizing, for nearly every optical illusion there is a comparison correction, or more than one, which will expose the illusion. Again, recalling the colour generation of our visual systems, a comparison reveals the illusion, and the comparison correction(s) fixes it. THAT is significant evidence, again, of how the comparison process creates sensations of all sorts through central processing.
.
.
This article was referred to the European Radiological Society, and I am STILL getting hits after 2 years. & this is how we can further investigate how the visual system creates images: by the misfires of comparison processing in creating the whole, unlimited panoplies of optical illusions. Among those illusions lie the structure/function relationships of how our visual systems work. And not just ours, but most all animals with eyes, too.
 .
Radiology is simply reading images and reporting the findings. But how is this “reading” done? Simple: just as description and measurement are done against fixed standards to create meaning, insights, and new data/information. Each medical specialist, & the more so radiologists, has relatively fixed, efficient standards by which “normal” is judged visually. By comparing against those standards, by which they have learned efficiently to read images, they know, using a set routine, whether AP and lateral chest X-rays are, by massive comparisons, normal or not. The same for all MRI, fMRI, EEG’s, MEG’s, ultrasounds, arteriograms, etc.: massive, standardized comparison processing. The same for lab data of patients, which are compared to normal standards. How many more instances of comparison processing do we need to realize it’s ubiquitous & universal, and thus proved to exist? It’s self evident.
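The lab-data case is the easiest to sketch, since it is literally a comparison to a reference range. The ranges below are illustrative placeholders only, not clinical reference values.

```python
# A sketch of "reading" lab data: each result is compared against a fixed
# reference range. The ranges below are illustrative, not clinical values.
REFERENCE = {"sodium": (135, 145), "potassium": (3.5, 5.0)}

def flag(test, value):
    """Compare a result to its reference standard; report the deviation."""
    lo, hi = REFERENCE[test]
    if value < lo:
        return "low"
    if value > hi:
        return "high"
    return "normal"

print(flag("sodium", 140))     # normal
print(flag("potassium", 5.8))  # high
```

The radiologist's visual "normal standards" work the same way in principle, just over images rather than numbers.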
.
More on this in far deeper detail and the practical consequences of same in:
.
.
What of the plate tectonics model in geology? It’s complex system, largely visual thinking, too: setting up the standards of upwelling zones, or mid-oceanic rift zones, subduction zones, etc. If AI can learn to understand tectonics, and its hierarchies, and how those all interact and work, as well as recognize its features WITHOUT previous specific training on new examples it hasn’t seen, then general AI is upon us. My work has drawn some of this water. But it’s likely there are many more wellsprings of our humanity’s creativity, understanding, and much else.
.
& there it is: the mind/brain interface of the cortex, in which the CC’s of Mountcastle, a simple, single, repeating process, create the outputs of the mind, including languages, sensations, creativities, the many, multiplicit functions of consciousness, as well as the moral conscience. The latter of which can be imaged at this time.
.
.
Thus language is easy to generate, from the simple repeating “dada, mama” to the complex. And math as well. Take the linear number line of counting: 1 + 1, 2 + 1, etc. We can count up and then down. From 2 to 8 there are six counts, and vice versa. From that we know there are four twos in 8, and vice versa. By subtraction the same, and by division the same. Then we get the exponentials, both base 10 and natural, and the logarithmic methods.
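The number-line claims above can be run directly. A minimal sketch, where "from 2 to 8 is six counts" is counting up one step at a time, and "four twos in 8" is division recovered as repeated subtraction:

```python
# Counting as repeated comparison on the number line.
def steps(a, b):
    """Count up from a to b, one comparison at a time."""
    count = 0
    while a < b:
        a += 1
        count += 1
    return count

def how_many(part, whole):
    """Division by counting: repeated subtraction of the part."""
    count = 0
    while whole >= part:
        whole -= part
        count += 1
    return count

print(steps(2, 8))      # 6
print(how_many(2, 8))   # 4
```

Each arithmetic level is built by iterating the one below it, which is the hierarchy the next paragraph lays out.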
.
And the hierarchies are again there: the 1’s place, the 10’s, the 100’s, the 1000’s, the 100’s of 1000’s, the millions, and so forth. And counting, the first hierarchy; addition/subtraction, the second; multiplication and division, the third; exponentials, the fourth, etc.
 .
For geometries and algebra, we see that when we compare the circumference to the diameter, we derive Pi, as a ratio, a proportion. We get constants, and speed, in the same ways. Thus do we get algebra, which is essentially ratios and comparisons, too. And we get trigonometry, which is the set of precise ratios of the sides and angles of idealized right triangles, comparing two of them to derive the others. Most all comparison processing, is it not?
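Both claims are checkable in a few lines: Pi recovered as the circumference-to-diameter comparison, and a trig function agreeing with the raw side-to-side ratio of a 3-4-5 right triangle.

```python
import math

# Pi as a pure comparison: circumference measured against diameter.
diameter = 2.0
circumference = math.pi * diameter        # what a tape measure would report
print(round(circumference / diameter, 5))  # 3.14159

# Trigonometry as ratios of sides in an idealized right triangle:
opposite, adjacent = 3.0, 4.0
hypotenuse = math.hypot(opposite, adjacent)   # 5.0
angle = math.atan2(opposite, adjacent)
print(math.isclose(opposite / hypotenuse, math.sin(angle)))  # True
```

The sine "function" and the side-ratio are one and the same comparison, just packaged differently.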
 .
And yet it’s far, far deeper than that. When we value items against each other by barter, a bushel of grain is worth some silver, or some copper. Or we create money, which measures the values of items against a relatively stable, set constant called, for instance, a dollar or pound. Each cost and value, in each currency, can be compared to all the others by “conversion factors”, or constants, derived by, you guessed it, comparison processing. And when we shop for the best items, at the best costs, we use both massive comparison processing as well as least energy to get the best for our bucks, too.
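Conversion factors can be sketched directly. The rates below are invented placeholders, not real market rates; the structural point is that any two currencies compare through one common standard, so N currencies need only N factors, not N squared, a least energy arrangement.

```python
# Value comparison via conversion factors; rates are invented placeholders.
RATE_TO_USD = {"USD": 1.0, "GBP": 1.25, "EUR": 1.10}

def convert(amount, src, dst):
    """Compare any two currencies through the common USD standard."""
    return amount * RATE_TO_USD[src] / RATE_TO_USD[dst]

print(convert(100, "GBP", "USD"))            # 125.0
print(round(convert(100, "USD", "EUR"), 2))  # 90.91
```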
 .
Most all of our higher cortical outputs are comparison processes. Again, as some AI experts have stated, there is “a single repeating principle in the brain which creates predictive control.”
 .
This is how predictive control is obtained:
.
.
Comparison processing creates classifications, indices, and social classes among humans, as well as the flocks, herds, schools, and so forth. “Like knows like”, through universal recognitions created by comparison processing, that is, thinking. IQ also becomes easy to define and measure, and not only in humans, either.
.
Communication and translation become a lot easier, because of comparisons like these:
.
Ich bin hier.
I am here.
Je suis ici.
Estoy aqui.
Sum hic.
.
Most translation is comparison processing, tho context is also CP, and is not easily recognizable to a computer. It cannot recognize, as easily as we can and with high facility, that we talk differently to bosses, employees, lovers, parents, and so forth. Thus context is comparison process, easily seen once we begin to apply CP.
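The simple, context-free part of translation can be sketched as pure comparison: recognize the sentence against a stored lexicon, then emit its equivalent. The mini-lexicon mirrors the five examples above; everything else about real translation (context, register, idiom) is exactly what this sketch leaves out.

```python
# Translation as lookup: each sentence compared against stored equivalents.
I_AM_HERE = {"German": "Ich bin hier.", "English": "I am here.",
             "French": "Je suis ici.", "Spanish": "Estoy aqui.",
             "Latin": "Sum hic."}

def translate(sentence, target):
    """Recognize the sentence by comparison, then emit the target form."""
    if sentence in I_AM_HERE.values():
        return I_AM_HERE[target]
    raise KeyError("sentence not in the lexicon")

print(translate("I am here.", "French"))  # Je suis ici.
print(translate("Sum hic.", "German"))    # Ich bin hier.
```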
.
“It’s beautiful.” he said to her later in the day.
“It surely is.” she replied.
.
Computers will have a hard time recognizing the sunset context here, let alone the others, which are social. Context is multiplicit in type, and very hard for computers to detect and comprehend. But we humans do it quickly, and naturally.
.
& how to communicate with dolphins? What do we recognize in common with them? By the rule of commonality: fish, sharks, bubbles, colours, and so forth. Recently it’s been claimed that dolphins have names for each other, because researchers observed that specific whistles in a pod often attracted the attention of single members. & when those whistles were played back, only the one whose “name” was broadcast showed interest. Trial and error, comparison process.
.
And when we go into space and meet other species space faring or not? How did our trading ancestors do it?
 .
This is wood, this is water, this is meat, and what’s your name for those? Those events in existence which we have in common, those events in existence which create recognitions by continuing reinforcement, because similar events repeat themselves. We show the aliens ice, liquid water, and water vapour, give them our names for those, and at once we have the 3 phases of matter and water, which is the commonest life-giving compound, & must be ubiquitous as well. The commonest standards are the commonest used, even for weight, as the gram is the mass of 1 cc. of water, too. And densities, yet another comparison, of mass over volume. The rest flows easily from there. Universal understanding, at a stroke. Simple, easy, elegant, highly fruitful.
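Water as the common standard can be run as a two-line comparison: density is mass over volume, and water's ~1 g/cc becomes the referent everything else is measured against. The gold and ice figures are approximate textbook values used for illustration.

```python
# Water as the common standard: ~1 gram per cubic centimetre, so density
# falls out as a comparison of mass to volume, measured against water.
def density(mass_g, volume_cc):
    return mass_g / volume_cc

WATER = density(1.0, 1.0)            # the standard itself: 1.0 g/cc
print(density(19.3, 1.0) > WATER)    # gold (~19.3 g/cc) sinks: True
print(density(0.92, 1.0) > WATER)    # ice (~0.92 g/cc) floats: False
```

This is also Archimedes' move, discussed later in the piece: one ubiquitous substance turned into a fixed, stable measuring standard.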
.
.
This has gone on long enough, but the model is highly applicable to most everything, without limits. The universality of the CP and LE models is notable. It can even address what Hawking’s “The Grand Design” lamented: that we cannot integrate the classical models of thermodynamics, relativity, and QM. And he missed biology, which also isn’t consistent with two of them, either. The commonalities of understanding can likely create a unified model of most everything, defragmenting the sciences as well.
.
ER = EPR. Some of our physics colleagues will see this relationship as part of a method to unify the physics of relativity and entanglement, as this model predicts. There are many, many other bridging concepts which can be used, of which LE is key, too. Especially in solving the complexities of quantum wave equations; there are approaches possible to achieve even that, well past renormalization, which is a specific app of a more general kind of solution, oft mentioned here.
 .
The implications for AI, of where the hierarchies of our understanding come from, and how to create, read, and extend those using creativity, are thus almost at once apparent. The system can even model neurochemistry, using dopamine and its ancient and central receptor subtypes, which both create motion and Emotion (D1 and D2, largely). DA can be compared against most all the other neurochemicals, such as serotonin, to create sleep/wake cycles, explain why we dream, & even show how to better treat migraine headaches, using complex systems thinking. Or how to make Viagra 50 mg. last about 3 days, where its normal duration is 6-8 hours. Then there’s Cialis…..
 .
Bacterial resistances to antibiotics are very easily overcome as well, using this new paradigm & epistemology of comparison process and least energy, structure/function, and complex systems thinking. Thus we do more with lots less: the secret of growth of all kinds.
.
Hope you found this interesting. It can answer a LOT of questions, including how and why events go viral, where fashions/fads & humor come from, and how to create professional vs. amateur skills for improving education across the board, without limit. It’s behaviorism on super charge, largely extended, because the internal brain source of the classical & operant conditioning model has been found, too, as referenced above.
.
These methods can answer many questions about AI, and how to get to the Promised Land of general AI. Half the problem of getting there is solved if we know WHERE we are going. And if some go by trial and error, in all its combinatorial complexity, compared to those of us who know WHERE to find our goals, that is, who know better & more completely what we are trying to simulate, the win goes to the best model, because it’s swiftest & most efficient.

 

How Physicians Create New Information

.
By Herb Wiggins, M.D.; Clinical Neurosciences; Discoverer/Creator of the Comparison Process/CP Theory/Model; 14 Mar. 2014
.
This is an extension of the “Creating Information and Understanding” article:
 .
 .
and shows, specifically, the formal, testable details of how this is done in medicine.
 .
We start with Duchenne, who first described some cases of what’s now known as Duchenne Muscular Dystrophy (DMD) in the 1850’s in Paris. Dr. Duchenne noted a first case, then more, of young boys who were getting weaker, unlike any ever noted or described before. In each case they were variations on a theme (the myriads of ways of complex systems), consisting of a progressive weakness of muscles, occasionally of the heart, and hypertrophy of muscles despite the weakness of the enlarged muscle mass; the symptoms typically appeared before age 5. It was not seen in girls, however, which he did not understand, but which our improving models showed was due to its being an X-linked recessive, like some forms of hemophilia, also only affecting boys, of which the notable cases carried by the daughters of Queen Victoria spread to the royal houses of Europe. And which probably brought down the Romanov dynasty, or at least materially contributed to it, as the Czar’s son and heir was so afflicted.
.
Over the last several years the genetics of the disorder were found: mutations in the very large membrane protein called dystrophin. This largely has shown how it came about, because being a very large gene, it was more likely to mutate. Treating the condition has been difficult, but therapies and other work are still extending lifespans, and some trophic factors, such as follistatin, altho with untoward effects, can effectively restore 80% of muscle strength. Work is ongoing to limit the untoward effects by trial and error modification of follistatin’s structure. Thus, again, the structure/function model is being used.
 .
DMD has many other characteristics. But the point is this in medicine: when we see an unusual syndrome, we remember it. If we see it again, we write it down and look for more cases of it. And if we see 3 or more, it’s then formally written up and submitted to medical journals. The important observation being: if it’s seen again and again, this acts as confirmation of such a disease entity being discrete, recurring, and repeating. This, as an extension of the behaviorist model, is recalled by the naturally recurring reinforcements of events in existence, by which we learn, lay down Long Term Memories, & then recall events. And in most each case, if rare enough, there is the dopamine boost, which also naturally reinforces that perception and promotes the lay-down of LTM via dendritic proliferation and synapse formation, to facilitate recall.
.
 .
That’s the basic neurophysiology of how we lay down LTM, in which the DA acts as an additional reinforcer to double down on recalling it, structurally. This structure/function observation can be seen again & again, esp. in the historical example of Archimedes stepping into his bath and seeing that his body displaced water, from which his revolutionary concept of the comparison of mass to volume, density, now grams/cc., created the hugely efficient and widely applicable method of density, using water as the general, fixed, stable measuring standard.
 .
In short, Duchenne had a “eureka” moment when he saw the first case, which was reinforced by the 2nd and 3rd, until the LTM began to be facilitated. He looked for more and found those. & so did many others, who’d missed them before. We cannot diagnose conditions which we do not know and recognize, and thus cannot see. And the world of muscle diseases was changed, that being the first recognized form of the larger class now known as the muscular dystrophies, and the first shown to be genetically caused. Thus the finding of the first of the class created a hierarchy of muscular dystrophy, which was then filled up with many other forms of the same. It created the means for more and more growth in understanding. Typically, a good finding will do this, esp. if it’s widely applicable. Also a typical hierarchy: the taxonomic method of creating, organizing, and understanding disease states. The model is universally extended to most all medical conditions, often classified by the major organ affected, the next, & third, levels in the hierarchy of our understanding.
 .
The point is this: this model of repeating comparison processes and methods without limit, with its fertile daughter without limits, least energy, derived from the Second Law of Thermodynamics,
.
.
complex system modeling and thinking, and structure/function relationships, works in our brains, more or less universally, to create new information. We create ideas/words, both of which reinforce and stabilize each other, and which act as comparison standards we can apply efficiently to sensory events from the external world, or to our internal phenomena and realities. And thus we describe most all events using those standards, be they verbal or numerical descriptions. The same method applies to both. Least energy identities simplify the system of understanding, where a single unifying concept, comparison process plus least energy, creates knowledge/data and understanding. Information is created from other information in this way, generally extending our knowledge, too.
.
In addition, the least energy, comparison processing method is highly realistic, tending to true, efficient means to describe events in existence, which are stable, useful, and practical. Ockham’s Razor is least energy for this reason, as well. It’s a real, existing solution finder!! How this applies to finding solutions to complicated systems is thus at once recognizable and usable. How it applies to the solutions of QM equations, hitherto unsolvable, is also at once apparent.
 .
Duchenne recognized the syndrome was something new. He then looked for more cases of it, thus creating scientific confirmation of what he’d found, and therefore presented it in writing. And others began to see it as well, formerly missed. This matches very exactly with the Kuhnian paradigm shift of “The Structure of Scientific Revolutions”, known for 60 years. And Einstein’s dictum that “Every major advance in physics is preceded by an epistemological advance.”, of which his was the epistemological discovery that measurement is relative to fixed, stable standards. And thus there was no real absolute space, nor time, as has been shown, with unlimited repeating cases confirming that fact, scientifically.
 .
Thus we take Einstein’s relativity, shown to be comparison process, and extend it massively by this further application: how the scientific discovery method, that is, finding new knowledge, works, coming out of and generated by the cortical brain/mind interface. And how such discoveries are converted, by good, descriptive, efficient means, into new words, reflecting a good usage of words, which stabilizes and passes into LTM so they can be used again and again. & because they are efficient, using them again and again creates growth, amplifying with each usage their practical value.
 .
This is the article “The Relativity of the Cortex”, by which comparison processes are shown to be not only the source of relativity epistemology, but also the epistemological, paradigmatic extension of the behaviorist model of reinforcement, as well.
.
.
And, by becoming a more generally applicable model/paradigm, it is nearly universally applicable to our understanding. The comparison process can be used and applied to most all events within & outside of our brains. Least energy, its fertile and fruitful daughter, is just as widely applicable to nearly everything. This universality can create universal, unifying models, very likely. Both efficiently describe, from the neurological level to mind and behaviors, what’s going on, explicitly and in detail, in how innovation, understanding, and creativity come about. It formalizes and details what we are doing, in a way which has not been seen before. And why not seen before? Because we habituate, ignore, and then facilitate those ignorings. This makes clearer how we are likely to be thinking, cortically, and processing external sensory and internal information.
 .
The point which Einstein missed was this: though he stated that there was NO privileged position in the universe, as all positions can be used as the referent point for time and space, he missed the characteristic thermodynamics of least energy, which, being efficient, yields a stable (fixed) relative standard, a measuring standard which he implied but never specifically understood nor stated. And our words are largely such standards, thus uniting verbal description as the parent form of measuring standards. Thus this model extends, once again, Einstein’s revolutionary epistemology. Least energy creates a far more efficient standard, such as the sun at the center of the solar system; and likewise, universally, much smaller bodies orbit much larger bodies. Thus the sun, by least energy rules, is the center of our solar system, and by extension the same holds in most all other similar cases.
 .
And thus the MD’s were able to understand, apply, and use Duchenne’s new finding of DMD and its characteristics. Which, it should be noted, the genetics model has vastly extended our understanding of. The use of the EMG needle exam has shown characteristic and diagnostic changes of DMD; it, too, creates a more complete model of the functional, diagnostic findings. The pathology of muscle biopsy and post mortem studies has also been extended by using clear-cut structure/function methods (the heretofore complicated, hard to apply and understand “clinico-pathological correlation”, which is a sesquipedalian form of S/F, and harder to apply because it’s not a simple enough term; thus, not efficient as S/F is!!). Good, efficient creations create more creativity, building up the hierarchies of our organized understandings.
 .
Thus we have a much deeper understanding, a far more detailed, explicit & formal description of how our brains work to create diagnoses from our observations. We have a new model of understanding understanding; of how we think by processing information, largely cortically.
.
Understand that EACH image the radiologist "reads", be it a CT scan, PA and left lateral chest X-rays, or MRI of body parts, esp. brain and spinal images, is a massive comparison to a well-established, efficient set of "normal standards" which he intuitively learns & applies. This model formalizes those methods, explicitly, for the first time. Those standards enable any radiologist to interpret and read any image he is trained to see. This is almost all done by massive visual comparison methods, which have not before been formally described or understood, and whose structures and processing can now be seen.
 .
In the same ways, in our examinations, guided by experience and history, we as physicians, nurses, and other medical professionals compare what we KNOW, by constant confirmation and by the same laying down of LTM events, against what is normal and what is not, and against the implications of the latter patterns of findings used for diagnoses. This is all organized by least energy efficiencies and methods. And these comparisons of the descriptions we daily make, first in diagnosing, then in repeated exams to see and detect any characteristic changes over time compared to our first, also written-down exams, are how we create new knowledge. We compare to our efficient verbal descriptions using a specialized language which uniquely and efficiently describes meaningful, useful observations which have significance. This is how we do it, in detail, within the massively interconnected cortical columns of Mountcastle, which are trained up & learned by unlimited repetitions. Those are worked into efficient comparison methods by this creative method. This model explicitly shows how we work. And the protocols we use to make the diagnoses are also hierarchically arranged, esp. according to anatomy, physiology &, in each specialty, its characteristic methods, forms and words.
 .
We use lab tests as well. Each is a comparison standard structure, whose normal standards we compare to the blood counts, the chemistries, the blood gases, etc., to determine the nature and patterns of the normals as well as of the abnormals, viz. disease processes. Simply speaking, the comparison of patient data created by the lab studies makes the diagnoses &/or confirms them. Used throughout all of this are the words/measures which act as the standards against which, by comparison processing, we find AND create the new data. These are the methods by which we create new information, data, and knowledge, simply, efficiently and practically, most likely.
 .
This is how we create new knowledge about each new patient, and how we organize it specifically in each case, using the cameo of radiological reading of images, and Duchenne's method, as specific examples of how it is done. In addition, it is complex systems thinking underlain by TD least energy efficiencies, thus defining professional characteristics compared to the far less efficient amateur & early student ways of doing the same tasks. Biology is complex systems. Understanding biological complex systems by pattern recognition creates new knowledge, in other words.
 .
How this applies universally is very, very clear. We can take each & every field and ask the very same Big Question: how do the outputs of a professional compare to an amateur's? How do the skills of the med student compare to those of his trained, experienced, learned teachers? Because professionals have universally built up a series of definable, clear-cut methods by which they create information by comparison to known standards, from mostly visual data, but also other sensory data of touch, hardness, smoothness, sounds, and the other unlimited numbers of sensory descriptions, often seen and reinforced by visual means. & those methods are highly efficient in most all respects compared to those of amateurs and young students.
 .
Take, for instance, a set of 12 fine professionals, to create confirmability and good sampling: cabinet-making carpenters. We study each one for his definable comparison process methods, and show from each of those how they select the wood, cut the wood, make the shelves, and then the frames of which those shelves are parts; the surface preparation and painting; the affixing of the hardware. And in each case, how the job is done by the professional cabinet maker (& indeed ANY known professional) gives a faster job, a job better done, with less waste of materials and better outcomes, all methods of which are underlain by the creative use of least energy methods, trial and error discoveries and rules. THAT is what creates and defines a professional, largely. But we have much yet to learn, and can by this means, as we journey & explore into the Undiscovered Country.
 .
And when we do the work which delimits the very many detailed, specific methods/skills that each professional uses in ANY field, we find an overlapping commonality of similar methods. And each of those is not final, nor absolute, but pretty good. We find we can improve each of them without limit, using the applications of the above, now formally described and enumerated, to advance and progress to work outputs which are better, faster, and more efficient, without limit. Thus setting in motion, where this method is widely adopted and properly developed, growth without limit: an efflorescence of progress and vastly improved techniques across most all of the professions. Not to ignore that teaching these known methods, from the study of professionals, will significantly improve and speed up the production of more professionals, also without limit. It creates a science of technologies and of the professionalism derived from those. And this probably universal general method can likely be extended to most all of the arts, sciences, and heretofore unscientific professions known. Without limit.
 .
And further it is enough work to occupy the times of millions of researchers, who can also teach such methods in their own fields, as well. Thus more fully uniting practice and education.
 .
The consequences of these methods are therefore clear. Comparison processing and methods (developed by trial and error) using the least energy guiding method, plus complex systems understanding and thinking, and structure/function methods, make all of this substantial paradigm change possible, indeed inevitable, along least energy and thus growing lines, driven by the epistemological changes first shown by Einstein, plus least energy as the major driver of growth and development.
.
How this can be applied to general AI is clear. IF there is a clear-cut, detailed working model of how the brain/mind creates knowledge, uses it and extends it, viz. the wellsprings of creativity, then a model of this can be built using current and developing methods, which the above model can ALSO help create using AI. If we know where to go, we can get there much faster and more efficiently, rather than by brute force trial and error working through a huge, very time-consuming combinatorial complexity, which is currently the case.
 .
These methods and insights can very efficiently create general AI along the lines of Gazzaniga: that the brain is a modular, complex system. This is at once likely and apparent from the above model. Creating the creativity, the tools to create the tools, is also implicit within the comparison process and methods system.
.
Taking the AI of image recognition, this can be used to create automated image interpretation, at first of PA and left lateral chest images. Once several radiologists are "debriefed" about all the known and often subconscious methods they use, which are efficient, then those are collected and studied for overlapping similarities by the same comparison system, using the Bayesian, more efficient methods by which machines recognize, to some extent, and simulate our cortical comparison processing methods. Guided by explicit use of least energy efficient methods.
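The Bayesian recognition step mentioned here can be sketched in miniature. The code below is a toy naive-Bayes classifier over two invented binary findings ("opacity", "cardiomegaly"); the feature names and tiny training set are hypothetical illustrations, not an actual radiology dataset or method.

```python
import math
from collections import defaultdict

def train(examples):
    """examples: list of (feature_dict, label) pairs."""
    class_counts = defaultdict(int)
    feature_counts = defaultdict(lambda: defaultdict(int))
    for features, label in examples:
        class_counts[label] += 1
        for name, value in features.items():
            feature_counts[label][(name, value)] += 1
    total = sum(class_counts.values())
    priors = {c: n / total for c, n in class_counts.items()}
    return priors, feature_counts, class_counts

def classify(features, priors, feature_counts, class_counts):
    """Pick the class maximizing log P(class) + sum of log P(feature | class)."""
    best, best_score = None, float("-inf")
    for c in priors:
        score = math.log(priors[c])
        for name, value in features.items():
            # Laplace smoothing keeps unseen feature values from zeroing a class out.
            count = feature_counts[c][(name, value)]
            score += math.log((count + 1) / (class_counts[c] + 2))
        if score > best_score:
            best, best_score = c, score
    return best

# Hypothetical training data: 1 = finding present, 0 = absent.
training = [
    ({"opacity": 1, "cardiomegaly": 0}, "abnormal"),
    ({"opacity": 1, "cardiomegaly": 1}, "abnormal"),
    ({"opacity": 1, "cardiomegaly": 0}, "abnormal"),
    ({"opacity": 0, "cardiomegaly": 0}, "normal"),
    ({"opacity": 0, "cardiomegaly": 0}, "normal"),
    ({"opacity": 0, "cardiomegaly": 0}, "normal"),
]
model = train(training)
print(classify({"opacity": 1, "cardiomegaly": 0}, *model))  # abnormal
```

Real systems would use far richer features extracted from the pixels themselves, but the comparison-to-learned-standards structure is the same.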
.
Then each and every imaging kind, be it ultrasound of the heart, carotids, or abdominal or uterine imaging methods, can be so studied from those who are best qualified to use them. And in each case, this knowledge is then built into the AI image interpreter. It will also create a database for teaching more radiologists more easily, faster and more efficiently, because what has been subconsciously learned and never specified becomes formal, explicit and thus a far, far more useful heuristic than known before. Combining the talents & knowledge of fine radiologists can be very good.
.
It will also speed up interps of basic bone imaging studies, and free radiologists from this kind of repetitive work, where they can at once review such films, adding to or subtracting from them as they supervise the interps by AI. And then create better, more efficient methods over time via AI plus supervised guidance by expert, highly trained and experienced radiologists in each subfield.
.
Thus the field of medicine will greatly advance in this area, and then these methods can be more widely applied, from radiological imaging interps to the rest of the medical fields. And in fact, to most all other fields, without limit, too.
.
No doubt we have much to learn. But these methods, because they are more efficient and effective, will over time grow without limit into a finer practice of medicine, using our AI assistants.
.
But remember to keep the "genie of AI" in the bottle: we take AI's suggestions and do the work with them, and do not surrender decision making to AI, wherein it can take over and create problems for us, as genies are quite well known to do!!! We never surrender our supervision, nor the final decision-making actions which affect patient outcomes.
.
Knowledge is good because from knowledge can come wisdom. And from wisdom can come many good things. But, use the knowledge WISELY.  –Proverbial saying

The Complex System of the Second Law of Thermodynamics

By Herb Wiggins, M.D.; Clinical Neurosciences; Discoverer/Creator of the Comparison Process/CP Theory/Model; 14 Mar. 2014
 .

It is largely now apparent that complex system approaches can be extended to thermodynamics, in order to create a unifying method which combines the dominant models of the sciences: complex systems as the umbrella and unifying method, via the comparison process; and thermodynamics, relativity & quantum mechanics as the probabilistic models which at present best describe the complexities of such universal systems.

 .
The Second Law of Thermodynamics can be seen to have many characteristics which are also seen in complex systems. Initially, it was called the entropy/disorder rule, wherein energy release maximizes, thus reducing order and maximizing entropy. But there is another aspect to the 2nd law, and that is least action, least free energy, least energy, and the maximizing of energy diffusion, as well as the diffusion of mass concentrations. Also called the minimalist principle, it has been known in one form or another since the late 1700s. William Rowan Hamilton called it least action. Lagrange used it to find the least energy L1, L2, L3, and the nearly completely stable L4 & L5 points in the Earth-Moon system.
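The distance of the Earth-Moon L1/L2 points can be estimated with the standard Hill-sphere approximation, a rough back-of-envelope sketch rather than a full three-body solution; the constants are rounded textbook values.

```python
# Approximate distance of the L1/L2 points from the Moon, using the
# Hill-sphere formula r = R * (m / (3*M))**(1/3), valid when the
# smaller mass m is much less than the larger mass M.

EARTH_MASS = 5.972e24       # kg
MOON_MASS = 7.342e22        # kg
EARTH_MOON_DIST = 384_400   # km, mean center-to-center distance

def hill_radius(R, m, M):
    """Approximate distance from the smaller body to its L1/L2 points."""
    return R * (m / (3 * M)) ** (1 / 3)

r_km = hill_radius(EARTH_MOON_DIST, MOON_MASS, EARTH_MASS)
print(f"L1/L2 lie roughly {r_km:,.0f} km from the Moon")
```

The result, around 61,500 km, matches the commonly quoted figure; L1 and L2 are slightly asymmetric in reality, which this approximation ignores.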
 .
Least energy, as it can most simply be called & applied, means that systems strongly tend to reach least energy configurations. Calling it least energy also allows it to be applied more simply to the many aspects of least energy which we see in real-world examples. Least energy in manufacturing, as W. Edwards Deming began to develop his "efficiency" methods, means least time to manufacture, least costs, least amount of materials used, least activity in moving materials and products around during manufacturing; in short, all the myriad ways least energy can be applied & used to create efficiency increases & thus the advantages of surplus energy, time, materials, money, and so forth. That is, growth potential.
 .
Apple Computer, as the largest corporation on earth, developed products which were not only highly efficient, easy and fun to use, but so well marketed that their least energy production and marketing methods gave them the highest profits, both by percentage and in total, ever seen, due to Steve Jobs' outstanding talents. He united telecommunications neatly, efficiently and usably with computers, thus doing a very great deal with a product which could be held in one hand. Least energy efficiencies, indeed!!
 .
The multiplicity of the applications of the least energy rules is its chief feature, and having these characteristics shows it is very likely a complex system & approach. It is not just A or not-A, but a rich panoply, a plethora of possible outcomes, which creates the time, energy and cost savings in manufacturing and the service industries.
 .
Thus least energy underlies the efficiencies of Adam Smith, the "invisible hand of the markets", as a deeper analysis and a more fruitful, and thus applicable, means to understand the growth created by those efficiencies.
 .
See:
 .
There is yet another point in all of this. Synonyms, and the previously related extensions of those by "word clusters", show this multiplicity of methods and forms as existing and real. The analogy synonyms are a good example of this. Hofstadter initially discussed this as a wide application of his cognitive work in "Gödel, Escher, Bach…", and his conceptualization can be seen as a basis for the deeper and wider comparison process and thus complex system applications.
 .
Thus the many ways in which we describe the Least Energy rule also show its synonymic and complex system nature.
 .
Analogy, metaphor, simile, anecdote, parable, fable, story, koan, many myths, etc. are all the many types of comparison processing which can be used to explain, demonstrate, and teach new concepts and ideas.  But there is the complex system aspect here, too.
 .
Of the multiplicity of comparison process methods, present and real, too; of the many phrases of words which ALSO describe such methods. Take the synonym group of "understanding": comprehension, insight, apprehension, know-how. Also: making sense, seeing the connections, seeing/visualizing, connecting the dots, working it out; the entire synonym/word-cluster grouping, which shows the many ways in which we use the "understanding" words to "figure out" what we are trying to show, explain and do. Thus the word clusters/synonyms show us the multiplicity of ways we do things: the methods, techniques, skills, styles, technologies, etc.
 .
The multiplicities of the synonym & word clusters are there BECAUSE of the complex system nature of events in our universe. We create & use language to show the myriads of ways of complex systems. This then deepens our understandings of complex systems by showing more of the vast ways that things are done. More of the myriad aspects of words which reflect the many ways in which events can be understood, classified and described. Synonyms/word clusters are part and parcel of the complex system descriptions of our universe of events. That’s where they come from and which the word cluster extension also develops and explicates as well.
 .
This then describes the multiplicity of the ways in which we apply, understand and use the least energy rules. It's a complex system!!! And it begins to unite the models of our universe by a deeper understanding of these multiplicities: the many dopamine and catecholamine receptor sites, showing a complex system at work; & all the other myriads of kinds of receptor sites doing much the same; the many "side effects" of drugs, which are in fact complex system effects; and the many receptor sites for insulin, neurochemicals and many other instances. Each is formed by the same deeper method at work, comparison processing, creating standards of word categories and measuring standards to create information as well as comprehend events.
 .
There are MANY other key aspects of the complex system nature and characteristics of least energy, a major form of which is stability. The least energy molecules CO2 and H2O are molecules from which no further chemical energy can be obtained. It requires a great deal of activation energy to convert those two into the sugars & starches which plants can make. Thus stability is a major aspect of least energy. The orbits of the planets are least energy. And the Newtonian N=2 gravitational laws are ALSO least energy solutions to orbits. This largely ignores, however, the complex system of the solar system, N=9 and more, but I have touched upon that topic before, too.
The ancient Egyptian triad, or trinity, of Uas (power, force), Djed (the backbone of Osiris, stability) and Ankh (life) shows this. They elevated stability to a great part of how they viewed events, thus embodying efficiency and least energy implicitly in their understandings of events. As they strove towards stability, so they also achieved efficiency and, by implication, thermodynamic least energy outcomes.
 .
Even as the efficiencies of the market, which Adam Smith called the "invisible hand", guide market growth and development.
 .
And this growth and development, as referred to above, is yet another aspect of least energy. It can describe events from avalanches, to market forces, to embryological growth. Most all growth in the natural world is least energy driven, from our basic innovations which succeed because they develop surpluses which can be fed back into the system to create growth, such as, most obviously, profits. So the 2nd Law has Many aspects to it which are complex system. And in this way, we extend & deepen our understanding by melding and integrating complex system thinking into thermodynamics as well.
 .
In describing the methods by which professionals approach their jobs, we find the fox and the hedgehog analogy. The foxes know a very great many things, and use this complicated but not coherent set of lists and methods to do their work. But the hedgehog, he knows one big thing! And that is what is going on here. Complex systems require a very great deal of information to handle, use and understand. & the foxes use the "splitter" method of knowing lots of details and methods to handle it. But the lumper, the hedgehog, knows one big concept, model, theory to handle it all, simplifying the mass of data into a single, unified model.
 .
In the same way, Newtonian orbital equations simplify most all N=2 orbiting bodies into a single, least energy equation which describes ALL of the orbits, rather uniquely. As compared to the foxes, who must know the details and descriptions of all of the orbits of all of the planets in all of the 200+ billion star systems in our galaxy alone. The least energy Newtonian method wins, by least energy!!
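One compact consequence of the N=2 solution is Kepler's third law: a single formula gives the period of any such orbit from its semi-major axis and the central mass. A minimal sketch, with rounded textbook constants:

```python
import math

# Kepler's third law, which follows from Newton's two-body (N=2) solution:
# T = 2*pi*sqrt(a^3 / (G*M)). One equation covers every such orbit.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
AU = 1.496e11        # m, mean Earth-Sun distance

def orbital_period_seconds(a_meters, central_mass_kg):
    return 2 * math.pi * math.sqrt(a_meters ** 3 / (G * central_mass_kg))

earth_year_days = orbital_period_seconds(AU, M_SUN) / 86_400
print(f"Earth's period: {earth_year_days:.1f} days")  # ~365.2
```

The same function, fed Mars's semi-major axis or a moon orbiting a planet, returns the right period; that is the "one big thing" the hedgehog knows.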
 .
In Shannon's information theory, order, disorder and information have deep implications and relationships. The more order, and the more precise, detailed description of events there is, the more information there is and the less entropy. Again, least energy extends I.T. And in addition, because incompleteness can be shown to be less order and more entropy, in comparing higher- to lower-information events, it becomes clear: incompleteness is very likely a least energy relationship as well.
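Shannon's entropy measure can be computed directly. The sketch below shows the order/entropy relationship in miniature: a highly ordered, repetitive symbol stream carries less entropy per symbol than a maximally varied one.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """H = sum of p_i * log2(1/p_i), in bits per symbol."""
    counts = Counter(symbols)
    n = len(symbols)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# A fully ordered stream has zero entropy per symbol; eight distinct,
# equally likely symbols carry the maximum of 3 bits each.
print(shannon_entropy("aaaaaaaa"))  # 0.0
print(shannon_entropy("abcdefgh"))  # 3.0
```

Note that this measures statistical surprise in the symbol stream, not semantic "information"; the essay's looser usage and Shannon's formal one should not be conflated too far.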
 .
Further, the more complete a model is, the less entropy: the more information and more descriptions it holds. The caveat is that the method must also be more efficient, not just a better, more complete description. This doubles the creative, innovative new model's value. And because we know from the Second Law that perfect efficiency is very unlikely, if not impossible, we know very likely that complete descriptions of events are also unlikely; that most all our models are necessarily NOT complete. Thus there is almost always much room for improvement in our models.
 .
Besides the subtleties of incompleteness, there is that of simplicity. A more complete model simplifies all of the details, and links greater relationships to this increase in simplicity. This is yet another aspect of least energy. Or to quote H. D. Thoreau, “simplify, simplify, simplify.”
 .
The major traits of new, effective models are that they are elegant, explaining much with little (simplicity), and fruitful, giving many important new insights and finding many new and often unexpected results. And they are, by Shannon's rules, more complete, describing more with less, as well.
 .
Let's take the periodic chart of the elements and examine it from these new epistemologies and paradigms. How did Mendeleev find this organization of the elements? Let's use these methods to look empirically into how his mind worked. He knew that elements intrinsically could not be broken down into smaller parts by the normal chemical means available at the time. Thus they were stable, being least energy atomically. Further, he began to see patterns, the before-mentioned pattern recognition which follows basic recognitions when the recognitions are fed back into the comparison processing systems. That created the next category, as he knew lithium, sodium, potassium and cesium were all chemically very similar. His brain recognized this high similarity. So he grouped them in a linear, vertical line just under hydrogen, also with a 1+ charge, we now know. Bonding concepts flowed directly from this grouping of the alkali metals, the first hierarchy of the elements, showing the hierarchical nature of the periodic chart and grouping it with the many other taxonomies of our understanding by those uniting relationships.
 .
Then he saw that NaCl, common salt, linked the alkali metal Na to chlorine. And likewise iodine, fluorine, bromine, and so forth. Using the alkali metals as a basic standard, he created the halogen linear standard.
 .
And at once, given their weights, he realized that there were elements to the right and left of Li and F, as well as of the next series, Na and Cl. He had created the outlines of the periodic chart, and all that was left was filling it in. His model was therefore HIGHLY fruitful, not only in bonding characteristics, but in relating all of the elements into a chart reflecting their fundamental relationships of weights, atomic numbers, etc. This also, over time, gave rise to nuclear models of the atoms, as well as to the isotopes. AND he realized not only that there were gaps in the chart, but that those missing elements could be found, as well. Thus its massive fruitfulness; again, the power of a good model.
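Mendeleev's two moves, ordering by weight and binning by chemical similarity, can be mimicked in miniature. The sketch below uses typical valence as a crude stand-in for "chemically very similar"; the element table is a small hand-picked sample, not a full chart.

```python
from collections import defaultdict

# (symbol, approximate atomic weight, typical valence)
elements = [
    ("Li", 6.9, 1), ("Be", 9.0, 2), ("F", 19.0, -1),
    ("Na", 23.0, 1), ("Mg", 24.3, 2), ("Cl", 35.5, -1),
    ("K", 39.1, 1), ("Ca", 40.1, 2), ("Br", 79.9, -1),
]

# Order by weight, then group by shared valence: the columns emerge.
columns = defaultdict(list)
for symbol, weight, valence in sorted(elements, key=lambda e: e[1]):
    columns[valence].append(symbol)

print(columns[1])   # ['Li', 'Na', 'K']  (alkali metals)
print(columns[-1])  # ['F', 'Cl', 'Br']  (halogens)
```

The gaps Mendeleev predicted correspond, in this toy version, to weights where a column has no entry: the ordering makes the missing cells visible.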
 .
He knew of neon, argon and krypton, but realized there was another element missing above neon, one first found in the sun, and only much later on the earth. Helium-4 was put into its place. The chart was predicting outcomes which were unexpected!!! And so rubidium was placed above cesium, francium below it, and so forth.
 .
Finding beryllium, he put it next to lithium, and then identified those elements which combined with 2 halogens as well; and on and on he went, with the heavier elements following each of them in turn. He had found the grand design of the elements. And note how this comparison process model very accurately models his modeling, as it does the models of Darwin/Wallace for evolution, and how Edison found, by trial & error, his amazing discoveries and creations; as it does indeed Einstein and most all other creative acts.
 .
And this is how most all of our anatomies, taxonomies, hierarchies & classifications are built up. Not ignoring the alphabetic hierarchies of the dictionaries, thesauri, indices, telephone and city directories, and maps, which show how all of those classifications are related to most everything else there, too. Alphabetic systems are used, with numbers, to rank together words whose relationships are not known, but which need to be organized so they can all be listed, organized & understood.
 .
And the dictionary's classifications, using ordering by alphabet, are also least energy. Within the dictionaries we can find THESE least energy classifications, which correspond to groupings unsuspected in linguistics, largely least energy. We find the repeating words beginning with "re-", 1000's of them in fact: reiteration, reflect, re-organize, remind, reconsider, remember and, most importantly, recognize, etc. And those similar phrases, such as "again and again", "forever and ever", etc., etc., etc., which reflect the repeating events in existence and map and describe those very, very well.
 .
In addition, and this has been missed, we find the "com" words, in all their nearly unlimited forms: again, the "cum" words, from the parent Latin preposition, which go together under the "com" sections, and many others as well. Those are the "go together" words, the koine ("common") related words: cooperate, company, commiserate, and so forth. & yet each of those names groups of events which can be connected and collected together. Hiding all the time, disguising and camouflaging the parent of the entire organization: "comparison"! These are the depths within depths, and yet there are far, far more in the dictionary, without limit, too. & anyone can open up the dictionary and find many of those, and many other examples too numerous to relate here.
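This dictionary search is easy to automate. The sketch below counts "re-" and "com/con/co-" words in a small built-in sample; pointing WORDS at a full word list (e.g. /usr/share/dict/words, where one exists) turns up the thousands mentioned here. Note that assimilated forms like "collect" and "corroborate" are caught only by the bare "co-" prefix, which is exactly the hiding the text describes.

```python
# A tiny illustrative word list; substitute a full dictionary file to
# search at scale, e.g.:
#   WORDS = open("/usr/share/dict/words").read().split()
WORDS = """
reiterate reflect reorganize remind reconsider remember recognize
cooperate company commiserate collect corroborate correspond correlation
apple banana stability
""".split()

def with_prefix(words, prefixes):
    """Return the words starting with any of the given prefixes."""
    return [w for w in words if w.startswith(prefixes)]

re_words = with_prefix(WORDS, ("re",))
# The bare "co" prefix catches assimilated forms such as "collect"
# (com-lection) and "corroborate" (com-roboration).
com_words = with_prefix(WORDS, ("com", "con", "co"))
print(len(re_words), len(com_words))  # 7 7
```

A fuller treatment would strip false positives (e.g. "real" starts with "re" but is not a re-word), which is itself a nice exercise in refining a classification standard.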
 .
We find the hierarchies of the "high, higher, highest" grouping of the comparative adjectives, and the "low, lower, lowest" forms, or the "some, more, most" and "some, lesser, least", which show how many kinds there are; and although NOT grouped together in the dictionary, they ARE grouped together as the 3 forms of linear adjectives which describe, and can be used to create, linear numerical groupings as well.
 .
There are the vast numbers of least energy contractions (can't, weren't), acronyms (NASA, FBI), and abbreviations (abbrev., etc.), which are ALSO least energy forms, too. Seen often in slang, esp. the Aussie "-o" ending, e.g. "funno" for the funnel web spider, and so forth. Again, least energy forms in all the myriad ways.
 .
Returning to the "com" words, we see depths within depths again, just as Mendeleev did, in the commons, commiseration, company forms. But hidden there is "collection": com-lection changed to collection to save a consonant and thus time. Corroboration instead of com-roboration. Again, hiding the "com" words in all their myriads of ways, missed by linguistics through not understanding the least energy rules and applying them deeply, widely and without limit to the words themselves. And the lovely "correspond", being a "re-" word as well as a "com" word. And the same with com-re-lation: correlation. Without limits. Least energy forms!!! All of this missed, too.
 .
And yet there are MORE depths within depths of least energy rules.
 .
Knowing Latin, we know of the "ad" and "ex" prepositions. So for ad-fection we get affection; and likewise for ex-fection we get effection (effect), saving yet another letter/consonant and time/energy in saying the words. And those are also without limit, too: arrogate, instead of ad-rogate; affliction, shortening the older ad-fliction. & on and on, as we can find literally unlimited numbers of those all over the dictionaries. Thus we have applied the comparison processes which created the periodic chart of the elements to the dictionaries, to find again those relationships and categories of many, many kinds of words which heretofore have been missed.
 .
There are many, many other examples of many kinds of word groups, which there is neither time nor space to relate. See what you can find which is original, new and not expected; the fruitfulness of this approach is at once apparent, again!!!
 .
These show the vast efficiencies and applications of the comparison processing and least energy methods. Without limits.
 .
But furthermore, as we know that most all our models are incomplete, they can become MORE complete by the work of trial and error and related processes. Thus we have the means, and most all of our methods can be "improved" virtually without limit, because no matter how efficient they might be compared to earlier methods, they still have room for improvement.
 .
This necessarily contemplates and implies that there likely exist unlimited improvements in our ways of doing things. Our models, our technologies, skills, devices, tools and methods can be improved without limit, up to a point of diminishing returns. And it means that, once we realize this, we can always find ways to get better from there.
 .
Further still, the skills of professionals, compared with those of amateurs, can now be understood as efficient methods developed over time by trial and error. We can apply least energy rules here as well, without limit, and the applications of these can change and improve education in ALL fields. Comparison processing and least energy are universally applicable.
 .
For instance, we can take a group of 12 highly skilled professionals & study their styles and ways of doing things, for the specific methods each uses. Then we compare those dozen persons & their myriads of similar and different methods for efficiency. & then we teach those specific methods, currently unknown but findable with careful observation and work, which are very efficient. Teaching those advanced methods to students, specifically, will speed up education substantially. And because it is a universal application, it likewise likely applies to ALL fields, to improve and upgrade our professionals without limits. This is the Promised Land.
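The comparison across such a panel can be sketched as simple set operations over each person's observed methods. The panel and method names below are invented for illustration; a real study would catalog dozens of methods per professional.

```python
from collections import Counter

# Hypothetical panel: each professional's observed working methods as a set.
professionals = {
    "cabinetmaker_1": {"measure twice", "pre-drill", "dry-fit", "sharpen daily"},
    "cabinetmaker_2": {"measure twice", "dry-fit", "label parts"},
    "cabinetmaker_3": {"measure twice", "dry-fit", "pre-drill"},
}

# Methods every panel member shares: the candidate "core skills" to teach first.
core = set.intersection(*professionals.values())

# How widely each method is used across the panel.
usage = Counter(m for methods in professionals.values() for m in methods)

print(sorted(core))        # ['dry-fit', 'measure twice']
print(usage["pre-drill"])  # 2
```

The intersection surfaces the overlapping commonality the text describes; the usage counts rank the remaining methods by how widely the panel has converged on them.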
 .
These are but a few of the multiplicities of implications, developments and possibilities which can be created by the complex system approach to the 2nd Law. Unlimited growth and an efflorescence of creativity and outputs in the sciences & arts, and indeed in all fields, unparalleled in human history!! All because of the applications of the complex systems, comparison process and unlimited methods, and least energy rules. This is the promised land of the Undiscovered Country: unlimited cultural and scientific developments, potentially dwarfing most anything which has ever been seen before.
 .
Least Energy Rules…….