One day, way back in 1989, I knew this was Intuition.
Abraham Thomas

KNOW YOURSELF PODCAST Listen each week, to one podcast. Based on practical self improvement principles. From the insight of an engineer, back in 1989, about the data processing structure of the human mind, recognizing, filtering, storing patterns, without stopping.  Patterns of guilt, shame, fear.  How to silence painful subconscious patterns and become self aware.

The Modular Biological Neural Network

I feel that one biological neural network has not received the attention it deserves from mainstream science. This network is modular and is repeatedly utilized throughout the brain. The module has a pattern recognition framework, which facilitates the flow of information between networks, using a common language of internal representation. Each module has massive memories and powerful intelligence. While ignoring the mechanisms which facilitate the operations of this network, science has traced the paths along which these modules progressively integrate information in the brain.

In the primary areas of the cortex, these modules convert all sensory information into a common identifiable language of neural impulses. The recognition messages proceed to networks in secondary areas, which coordinate binocular vision and stereophonic sound. This integrated information travels to modules in the association regions, which recognize events. Recognition of events triggers complex emotionally supervised motor controls, which finally define all human activity. All these modules follow exactly the same intelligent processes, making them identifiable as the pivotal biological neural network of the brain.

  • Wikipedia differentiates a neural network from a neural circuit.
  • The olfactory system uses combinatorial coding to intelligently remember, identify and differentiate between subtle smells.
  • Combinatorial coding can store a virtual infinity of memories.
  • In the visual field, neurons fire to identify a line or an edge.
  • During evolution, neural firing patterns took over communication from chemical messages.
  • Science assumes that neurons use a mathematical process.
  • Brain scans reveal that specific regions perform specific functions.
  • Hebbian learning cannot explain trillions of visual memories.
  • Perceptrons are simplified models of neurons.
  • LTP deals only with urgent "speed dial" neural messages.
  • Combinatorial coding provides the nervous system with a language of internal representation.

This hypothesis is unique in accounting for the striking speed of human intuition; in offering simple new routines to control the mind; and in revealing hope for the future of Artificial Intelligence (AI).  Does the mind contain a treasure trove of knowledge?  How does it retrieve solutions to topical problems from such a store?  These proposed explanations have been gathering millions of page views from around the world.  The 1989 beginning of this exciting mission was a revealing insight from a Prolog AI Expert System.  The Expert System could diagnose one out of 8 diseases, based on the user's answers to a long string of questions.  In contrast, a doctor could identify a disease out of 8000, without questions, with just a glance.  The ideas in this unconventional hypothesis stem from an "Aha!" moment, when the Expert System revealed a singular algorithm, which could be enabling the mind to identify and act on perceived patterns in milliseconds.

The Prolog Expert System could diagnose 8 diseases, which shared 13 symptoms. It used an algorithm, a step-by-step procedure, for the diagnosis. Out of curiosity, I began testing an alternate algorithm in a spreadsheet.  Its first step was to SELECT all diseases WITH a particular symptom. Contrary to my plan, the algorithm would DELETE all diseases WITHOUT the symptom. That reversal was caused by a chance double twist in its "if/then" logic.

So, when I clicked "Yes" for one particular symptom to test the first step, the spreadsheet DELETED 7 out of the 8 diseases, leaving behind just one disease.  Surprise!  That disease was indicated by that symptom.  In just one leap, it had proffered the correct diagnosis. As with the doctor, it was a split second verdict!  The algorithm had ELIMINATED all diseases without the symptom.  Was selective elimination from a known list the trick used by nature for its intuitions?
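The elimination step lends itself to a few lines of code. The Python sketch below is only an illustration of the logic described above; the disease names and symptom links are invented for the example and do not reproduce the original Prolog data.

    # Toy table of diseases and their symptoms (invented for illustration).
    DISEASES = {
        "Disease A": {"fever", "rash"},
        "Disease B": {"fever", "cough"},
        "Disease C": {"headache", "cough"},
        # ... the rest of the toy list would follow the same pattern
    }

    def eliminate(candidates, observed_symptom):
        """Delete every disease that does NOT show the observed symptom."""
        return {name: symptoms
                for name, symptoms in candidates.items()
                if observed_symptom in symptoms}

    # A single observed symptom can cut the candidate list in one step.
    remaining = eliminate(DISEASES, "rash")
    print(list(remaining))   # only the diseases consistent with "rash" survive

In the toy data above, only "Disease A" survives a report of "rash", which mirrors the one-click diagnosis described here.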

Could elimination provide a faster search strategy?  Since elimination shortened the steps, a programmer coded for me a new, more ambitious Expert System.  Instead of 8 diseases, it dealt with 225 eye diseases.  Its algorithm eliminated both irrelevant diseases and their connected questions for each answer.  The Expert System was presented to a panel of doctors. "It identified Angular Conjunctivitis, without asking a single stupid question," said a doctor. The Expert System was satisfactorily diagnosing all the eye diseases in the textbook!  The algorithm was an impressive AI tool!  In 1989, I catalogued the premises, set out in these pages, explaining how the algorithm could be enabling the mind of a doctor to achieve split-second diagnosis.

Can An Algorithm Be Controlling The Mind?
I am not a physician, but an engineer. Way back in 1989, I catalogued how the ELIMINATION approach of an AI Expert System could reveal a way by which the nervous system could store and retrieve astronomically large memories.  That insight is central to the six unique new premises presented in this website. 

These new premises could explain an enigma.  A physician is aware of thousands of diseases and their related symptoms.  How does he note a symptom and focus on a single disease in less than half a second?  How could he identify Disease X out of 8000 diseases with just a glance?  

First, the total inborn and learned knowledge available to the doctor could not exist anywhere other than as the stored/retrieved data within the 100 billion neurons in his brain.  The perceptions, sensations, feelings and physical activities of the doctor could only be enabled by the electrical impulses flowing through the axons of those neurons.  The data enabling that process could be stored as digital combinations.

Second, combinatorial decisions of neurons cannot be made by any entity other than the axon hillock, which decides the axonal output of each neuron.  The hillock receives hundreds of inputs from other neurons.  Each hillock makes the pivotal neuronal decision about received inputs within 5 milliseconds.  Axon hillocks could be storing digital combinations.  Each hillock could be adding every new incoming digital combination to its memory store.  The hillock could fire impulses, if the input matched a stored combination. If not, it could inhibit further impulses.  Using stored digital data to make decisions about incoming messages could make the axon hillocks intelligent.
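Taken literally, this premise describes the hillock as a store of input combinations that fires only when an incoming combination matches a remembered one. The Python sketch below is a toy model of that premise, not a description of real neuronal biochemistry; the class name and the input labels are invented for illustration.

    # Toy model of the premise above: a hillock that memorizes input
    # combinations and fires only when the current combination is recognized.
    class ToyHillock:
        def __init__(self):
            self.memory = set()            # stored combinations of active inputs

        def learn(self, active_inputs):
            """Add a new combination of active input lines to the memory store."""
            self.memory.add(frozenset(active_inputs))

        def decide(self, active_inputs):
            """Fire (True) if the combination is recognized, else inhibit (False)."""
            return frozenset(active_inputs) in self.memory

    hillock = ToyHillock()
    hillock.learn({"input_3", "input_17", "input_42"})
    print(hillock.decide({"input_3", "input_17", "input_42"}))   # True  -> fire
    print(hillock.decide({"input_3", "input_17"}))               # False -> inhibit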

Third, combinations are reported to enable a powerful coding mode for axon hillocks.  Olfactory combinatorial data is known (Nobel Prize 2004) to store memories for millions of smells.  Each one of the 100 billion axon hillocks has around a thousand links to other neurons.  The hillocks can mathematically store more combinations than there are stars in the sky.  Each new digital combination could be adding a new relationship link.  In this virtually infinite store, specific axon hillocks could be storing all the symptom = disease (S=D) links known to the doctor as digital combinations.
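As a rough check on the "more combinations than stars" claim: a hillock with about a thousand input links could in principle see any subset of those inputs active, and the number of subsets grows as 2 to the power of the number of links. The short calculation below only illustrates that arithmetic; it says nothing about how many combinations a real neuron could usefully store.

    # Number of possible input subsets for roughly 1000 links per hillock.
    links = 1000
    possible_combinations = 2 ** links
    print(len(str(possible_combinations)))   # 302 digits, i.e. about 10**301
    # Commonly cited estimates put the stars in the observable universe
    # at around 10**22 to 10**24, hundreds of orders of magnitude smaller.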

Fourth, instant communication is possible in the nervous system.  Within five steps, information in one hillock can reach all other relevant neurons.  Just 20 ms for global awareness.  Within the instant the doctor observes a symptom, feedback and feedforward links could inform every S=D link of the presence of the symptom. Only the S=D link of Disease X could be recalling the combination and recognizing the symptom.

Fifth, on not recognizing the symptom, all other S=D hillocks could be instantly inhibiting their impulses. The S=D links of Disease X could be continuing to fire. Those firing S=D links would be recalling past complaints, treatments and signs of Disease X, confirming the diagnosis.  This could be enabling axon hillocks to identify Disease X out of 8000 in milliseconds.
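Premises four and five can be combined in a small simulation: every stored S=D link receives the broadcast symptom, links that do not recognize it fall silent, and whatever keeps firing names the disease. The sketch below is a toy parallel of that description; the links are invented, and the "broadcast" is an ordinary loop rather than a model of real feedback pathways.

    # Toy broadcast: every stored S=D link checks the observed symptom;
    # links that fail to recognize it "inhibit" (drop out of the result).
    SD_LINKS = {
        "Disease X": {"drooping eyelid", "double vision"},
        "Disease Y": {"double vision", "headache"},
        "Disease Z": {"red eye"},
    }   # invented links, for illustration only

    def broadcast(symptom):
        return [disease for disease, symptoms in SD_LINKS.items()
                if symptom in symptoms]

    print(broadcast("drooping eyelid"))   # ['Disease X'] keeps firing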

Worldwide interest in this website acknowledges its rationale. Not metaphysical theories, but the processing of digital memories in axon hillocks could be explaining innumerable mysteries of the mind.  Over three decades, this website has been assembling more and more evidence of the manipulation of emotional and physical behaviors by narrowly focused digital pattern recognition.  It has also received over 2 million page views from over 150 countries.

The Modular Biological Neural Network –
Why Is the Neuroscience Definition Misleading?

Neuroscience does not link the term “pattern recognition” to the term “network.” Wikipedia defines a biological neural network as a “series of interconnected neurons whose activation defines a recognizable linear pathway.” If the cells fire together, it is a biological neural network. But, if it is a “functional entity of interconnected neurons, which intelligently regulates its own activity using a feedback loop,” Wikipedia terms it a “neural circuit.” If it is intelligent, it is a circuit and not a network. The “circuit” label highlights the failure of neuroscience to identify the pattern recognition role of innumerable functioning biological neural networks in the nervous system.

The Modular Biological Neural Network –
When Did Science Acknowledge Pattern Recognition?
While its official definition suggests that a biological neural network lacks intelligence, a few scientists have already discovered the mechanism which grants it a massive intelligence. A 2004 Nobel Prize was awarded for this very discovery. The olfactory system intelligently remembers, identifies and differentiates between subtle smells. The researchers had used calcium imaging to identify individual mouse receptor neurons, which fired on recognition of specific odors. They exposed the neurons to a range of smells. They found that a single receptor could identify several odors. At the same time, each odor was identified by several receptors.

In the experiment, scientists reported that even slight changes in chemical structure activated different combinations of receptors. Different combinations of receptors fired to identify different odors. These neural firing combinations formed the internal representation of smells: the scientists had discovered an intelligent biological neural network, which used combinatorial coding as its language of internal representation. The researchers believed that the taste network also followed the same internal representation system.
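The combinatorial code reported for the olfactory system can be illustrated with a toy decoder: each odor is represented by the set of receptors it activates, a single receptor takes part in several codes, and an odor is identified by matching the whole firing combination. The receptor numbers and odor names below are invented for the example.

    # Toy combinatorial code: each odor maps to a combination of firing receptors.
    ODOR_CODES = {
        frozenset({1, 4, 7}): "almond",
        frozenset({1, 4, 9}): "vanilla",   # shares receptors 1 and 4 with almond
        frozenset({2, 7, 8}): "lemon",     # receptor 7 also serves almond
    }   # invented receptor/odor pairs for illustration

    def identify(firing_receptors):
        """Return the odor whose full receptor combination matches, if any."""
        return ODOR_CODES.get(frozenset(firing_receptors), "unknown odor")

    print(identify({1, 4, 7}))   # 'almond'
    print(identify({1, 4, 9}))   # 'vanilla' - a slightly different combination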

The Modular Biological Neural Network –
Why Is Combinatorial Coding Powerful?

Imagine the memory potential of a combinatorial processor.  The olfactory system contains over 10,000 receptors. Just 100 receptors could represent 100 x 99 x 98 x 97 x .... x 2 x 1 (100 factorial) unique possible combinations. That is more than 10^157 possible combinations! The recognition of combinations of firing at their dendrites can enable a single neuron to fire to identify trillions of unique combinations. Combinatorial coding has been discovered. It is a feasible language of internal representation. Imagine how similar biological neural networks can store vast memories and recall them in milliseconds. Imagine such networks as the modules, which operate in all regions of the brain to empower the breadth and sweep of human and animal intelligence!
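The figure quoted above is simply 100 factorial, the number of distinct orderings of 100 receptors. The snippet below reproduces that arithmetic; whether a real olfactory network could exploit anything close to that space is a separate question.

    import math

    # 100 x 99 x 98 x ... x 2 x 1 = 100! distinct orderings of 100 receptors.
    orderings = math.factorial(100)
    print(len(str(orderings)))   # 158 digits, i.e. roughly 9.3 x 10**157

    # Even counting only unordered subsets of the 100 receptors:
    subsets = 2 ** 100
    print(subsets)               # about 1.27 x 10**30 combinations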

The Modular Biological Neural Network –
What Can Your Visual System Do?

A hierarchy of neural networks recognizes objects by analyzing the pixels of light arriving in the visual field. Individual visual receptor neurons fire in response to a small subset of stimuli within their receptive fields. The neural firing combinations of their axonal outputs become the dendritic inputs of a neuron in V1. This neuron fires on recognition of a combination, which indicates a vertical line. Combinatorial firing from myriad neuronal network modules is identified at higher levels to indicate location, brightness, color, edges, lines and curves. Imagine a system, where each neuron can store a virtually infinite number of combinations of such aspects within milliseconds!
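The hierarchical idea, in which a V1 neuron fires on a particular combination of receptor outputs, can be sketched in the same toy style. The receptor layout and the stored "vertical line" combination below are invented purely to show how one level's firing combination becomes the next level's input.

    # Toy two-level hierarchy: receptor outputs feed a combination detector.
    # A 3x3 patch of receptors indexed (row, column); invented for illustration.
    VERTICAL_LINE = frozenset({(0, 1), (1, 1), (2, 1)})   # middle column lit

    def v1_vertical_detector(active_receptors):
        """Fire if the active receptors contain the stored vertical-line combination."""
        return VERTICAL_LINE <= frozenset(active_receptors)

    patch = {(0, 1), (1, 1), (2, 1), (2, 2)}   # a vertical line plus one stray pixel
    print(v1_vertical_detector(patch))         # True: the combination is present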

The human mind can recollect any one of 10,000 images displayed at 1-second intervals. The higher levels recognize and remember innumerable aspects of the millions of pixels of a single photograph. Neurons fire, when they recognize subtle combinations. At the highest levels, researchers discovered the “Bill Clinton neuron,” which fired on recognition of just one special face. The cell fired on recognizing three very different images of the former President: a line drawing of a laughing Clinton; a formal painting depicting him; and a photograph of him in a crowd. The cell remained mute when the patient viewed images of other politicians and celebrities.

The Modular Biological Neural Network –
What Is The History of Pattern Recognition?
Before the arrival of nerve cells, the earliest multicellular forms moved about and swallowed or expelled food by expanding and contracting their cells. The contraction was effected through chemical signals, the forerunners of hormones, which diffused through the system. But diffusion of chemicals was slow over longer distances, and the messages could not be specifically targeted. Nature developed neurons to transmit specific information faster.

The entire process of neuronal interaction has been based on the pattern recognition model. Networks in the early reptilian nose brains recognized smells to decide whether they were safe or dangerous. The 2004 Nobel Prize-winning research describes the mechanism of those networks. The fine distinctions in the environment which they could make can hardly be explained through mathematical network models. Dogs can quickly sniff a few footprints of a person and determine accurately which way the person is walking. The animal's nose can detect the relative odor strength difference between footprints only a few feet apart, to determine the direction of a trail.

The Modular Biological Neural Network –
Why Is Arithmetic the Wrong Answer?

The reason why science still ignores the pattern recognition model is a single erroneous perception. That view prevents an understanding of the mechanisms of the biological neural network. The root perception of mainstream science is that neurons compute. The standard explanation is that the axon hillock of a neuron triggers an action potential when the arithmetic total of input signals received by its dendrites reaches a specific threshold. Neurons are presumed to use some form of computation.


Science errs fundamentally with its assumption that mathematics initiates the action potential. Imagine the understanding possible, if we assume that neurons do not compute, but recognize combinatorial patterns. The axon hillock of each neuron in the network recognizes specific patterns of the input signals at the synapses of its dendrites. On recognizing a pattern, an action potential flows down the axon of the neuron. In the end, science has acknowledged that, further downstream, the omnipresent modules of the biological neural network do perform clear pattern recognition functions.
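The difference between the two views can be made concrete. In the standard account a neuron fires when a weighted sum of its inputs crosses a threshold; in the account proposed here it fires when the pattern of inputs matches a stored combination. Both toy functions below are illustrative only, with invented weights, thresholds and stored patterns.

    # Standard account: fire when the weighted sum of inputs reaches a threshold.
    def summing_neuron(inputs, weights, threshold=1.0):
        return sum(x * w for x, w in zip(inputs, weights)) >= threshold

    # Pattern recognition account: fire when the input combination is remembered.
    def matching_neuron(active_inputs, stored_patterns):
        return frozenset(active_inputs) in stored_patterns

    print(summing_neuron([1, 0, 1], [0.6, 0.9, 0.5]))              # True (1.1 >= 1.0)
    print(matching_neuron({"a", "c"}, {frozenset({"a", "c"})}))    # True (match found)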

The Modular Biological Neural Network –  
What does Neuroimaging Reveal?

There is overwhelming evidence to show that specific biological neural networks perform clearly defined functions in specific regions of the brain. The activation of particular brain areas, when people perform particular tasks can be identified through functional neuroimaging. fMRI (functional magnetic resonance imaging), PET (positron emission tomography) and CAT (computed axial tomography) have been extensively used to identify functional structures, or to assess brain injury through high resolution pictures.

Researchers have identified dysfunctional neurotransmitters such as dopamine in the basal ganglia of Parkinson's patients to yield insights into the networks, which cause specific cognitive deficits. Predictions of these deficits enable pharmacological manipulations, which deal with specific networks. There is clear evidence that the modular biological neural networks perform intelligent functions within the system. Yet, such intelligence is attributed to mathematical models, which fail to explain the powerful memories or the subtle intelligence of these networks.

The Modular Biological Neural Network –
Can Hebbian Learning Work?

The Hebbian learning theory suggests that the strengthening of active synaptic junctions could store network memory. Donald Hebb theorized that "Cells that fire together, wire together." He suggested that simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells, producing "associative learning," and proposed synaptic plasticity as a mechanism which can store memories in networks. But visual memories imply changing combinations of neural firing at the same synapses. Each image is an arrangement of millions of visual pixels, arranged in marginally different combinations.
A movie has wide-screen images with millions of pixels changing at 25 frames per second for 90 minutes!  So, the process of watching a movie would strengthen ALL synapses in the visual system!
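The arithmetic behind that objection is easy to make explicit; the pixel count used below is a loose assumption for a wide-screen image, not a measured figure.

    # Rough scale of the "watching a movie" argument.
    pixels_per_frame = 2_000_000        # assumed order of a wide-screen image
    frames = 25 * 60 * 90               # 25 fps for 90 minutes
    print(frames)                       # 135000 frames
    print(pixels_per_frame * frames)    # 270,000,000,000 changing pixel values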

The Modular Biological Neural Network –
Can Perceptrons Explain Human Memory?
Instead of acknowledging the role of pattern recognition in biological neural networks, science offers several explanations of their neural mechanisms. All these explanations attribute various types of computation as the basis for interactions between neurons. McCulloch and Pitts showed theoretically that networks of artificial neurons could implement logical, arithmetic, and symbolic functions. Perceptrons, or artificial neurons, are simplified models of biological neurons. While such models do carry out mathematical and logical computations, they cannot explain the phenomenal memory or the broad sweep of the human intellect.
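A McCulloch-Pitts unit, the simplest perceptron-style model, shows the kind of computation these artificial neurons perform: a thresholded sum that can reproduce logic gates, but that keeps no memory of past input combinations. The weights and thresholds below are the textbook choices for AND and OR, shown only to make the contrast with combinatorial storage concrete.

    # McCulloch-Pitts style unit: binary inputs, fixed weights, hard threshold.
    def mp_unit(inputs, weights, threshold):
        return int(sum(x * w for x, w in zip(inputs, weights)) >= threshold)

    # Logical AND and OR as threshold units (textbook parameter choices).
    AND = lambda a, b: mp_unit([a, b], [1, 1], threshold=2)
    OR = lambda a, b: mp_unit([a, b], [1, 1], threshold=1)

    print(AND(1, 1), AND(1, 0))   # 1 0
    print(OR(1, 0), OR(0, 0))     # 1 0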

The Modular Biological Neural Network –
What are the Limitations of PKMzeta?

Researchers suggest long term potentiation (LTP) as forming the basis for memories of neuronal networks. Dr. Sacktor discovered a substance called PKMzeta, which was present and activated in neighboring cells with LTP links. The PKMzeta molecules formed into precise fingerlike connections among brain cells that were strengthened. The molecules remained in place to sustain the "speed dial" links, which enabled heightened responses to danger. However, when a drug which interfered with PKMzeta was injected directly into the brain, the animals forgot their fear. The animals even forgot a strong disgust they had developed for a taste after the administration of the drug. It was hoped that by disabling LTP, the drug could blunt painful memories and addictive urges. But the ability of LTP to handle urgent messages does not explain how the system remembers last night's dinner menu.


The Modular Biological Neural Network –
What Is An Internal Representation?

All scientists agree that biological neuronal networks require a language of internal representation. The vast extent of animal and human knowledge requires the storage of memories, coded in this language. The language needs to translate external perceptions of the world into complex philosophical concepts. The system is known to have the ability to access any portion of this knowledge within milliseconds. Imagine seeing a reasonable explanation of all these capabilities through the combinatorial interactions of myriad biological neuronal network modules.

This page was last updated on 27-Jan-2016.






