
Language, Consciousness & the Origin of Thought

From: Neuropsychiatry, Neuropsychology, Clinical Neuroscience
by Rhawn Joseph, Ph.D.

Among 80-90% of right-handers, and 50-80% of those who are left-handed, the left cerebral hemisphere provides the neural foundation for and mediates most aspects of expressive and receptive speech and language functioning, including reading, writing, speaking, spelling, naming, and the comprehension of the grammatical, syntactical, and descriptive components of language, as well as time sense, rhythm, musical scales, verbal concept formation, analytical reasoning, and verbal memory.

As demonstrated by functional imaging and blood flow studies, activity increases significantly in the left hemisphere and left temporal lobe when reading, speaking, and naming, as well as when playing musical scales or listening to the rhythmic aspects of music.

The left cerebral hemisphere is also associated with the organization and categorization of information into discrete temporal units, the sequential control of finger, hand, arm, gestural, and articulatory movements, and the labeling of material that can be coded linguistically or within a linear and sequential time frame. It is dominant for most aspects of expressive and receptive linguistic functioning, including grammar, syntax, reading, writing, speaking, spelling, naming, verbal comprehension, and verbal memory. In addition, the left hemisphere has been shown, via dichotic listening tasks, to be dominant for the perception of real words, backwards speech, and consonants, as well as real and nonsense syllables.

As is generally well known, within the neocortical surface of the left hemisphere there is one area that largely controls the capacity to speak and another region that mediates the ability to understand speech: Broca's expressive speech area and Wernicke's receptive speech area, the latter of which becomes coextensive with the inferior parietal lobule. Specifically, Broca's expressive speech area is located along the left frontal convexity, whereas Wernicke's receptive speech area is found within the superior temporal lobe. In addition to becoming coextensive with Wernicke's area, the inferior parietal lobule projects directly to Broca's area and injects names, words, and temporal sequences into the stream of language and thought.

However, whereas the left hemisphere mediates the perception and processing of real words, word lists, rhymes, numbers, backwards speech, Morse code, consonants, consonant-vowel syllables, nonsense syllables, the transitional elements of speech, and single phonemes, the right hemisphere is dominant for the melodic and emotional aspects of speech production and comprehension. The right frontal lobe mediates singing, swearing, and the melodic-emotional aspects of speech, whereas the right temporal lobe perceives and processes these paralinguistic sounds so that meaning, intent, context, and attitude can be discerned (Joseph, 1982, 1986, 1988, 1990, 1996, 2000). In addition, language production is dependent on the limbic system, which provides the neurological foundations for the evolution and development of human speech.

THE LANGUAGE AXIS

In a monograph published in 1982, and subsequently translated and republished by numerous foreign journals and numerous American medical schools, Dr. Rhawn Joseph introduced a new theory of language which has been adopted, and even plagiarized, by numerous scientists and national publications, including the New York Times and Scientific American. In this monograph, Dr. Joseph detailed the importance of convergence zones and the pivotal role of the inferior parietal lobe in assimilating and creating verbal associations (and filling the "gaps" in any information received), thus producing vocabulary-rich grammatical human speech. Dr. Joseph's theory is now supported by a substantial body of evidence and has been widely accepted.

As detailed by Joseph (1982, 1986a, 1988a; Joseph et al., 1984), and subsequently confirmed in numerous studies (see below), language processing is sequential, serial, and parallel, involving the activation of widespread areas of both the right and left hemisphere as well as specific cortices in the left half of the cerebrum. That is, when engaged in language and other cognitive tasks, the brain functions in both a parallel and a localistic mode, and thus engages in parallel/distributed as well as localized representation and processing.

The construction of human speech involves the parallel and localized activation of Wernicke's and Broca's areas and the inferior parietal lobule (IPL), with the IPL serving as the ultimate convergence zone. Moreover, because language is also melodic and emotional, it requires activation of the right frontal and right temporal lobes as well--findings which have been confirmed by functional imaging (e.g., Bottini et al., 1994; Price et al., 1996).

Dr. Joseph also explained how the inferior parietal lobule (the IPL, which includes the angular and supramarginal gyrus) assimilates associations received from yet other areas of the left and right hemisphere and limbic system (including the amygdala and cingulate gyrus), fills any gaps with relevant associations, and then injects the resulting verbal associations into the stream of language and thought via the arcuate and longitudinal fasciculus, which interlink the language areas. Hence the concepts of the "language axis" and "limbic language"--concepts which were first proposed by Dr. Joseph and which have since received widespread confirmation from a number of independent laboratories. For example, functional imaging studies have demonstrated that speech processing, reading, subvocal vocalization, and inner speech activate the left frontal lobe (Buchel et al., 1998; Demonet, et al., 1994; Paulesu, et al., 1993; Peterson et al., 1988). Left frontal activation also increases as word length increases and in response to unfamiliar words (Price, 1997). In addition, reading and speaking activate Wernicke's area and the left posterior temporal lobe (Bookheimer, et al., 1995; Howard et al., 1996), including the supramarginal gyrus (Bookheimer, et al., 1995), the angular gyrus (Price, 1997), and (when reading) the left medial extrastriate visual cortex (Peterson et al., 1990). When making semantic decisions (involving reading words with similar meanings), there is also increased activity in the left posterior temporal lobe at the junction of the IPL (Price, 1997).

Moreover, like the left frontal lobes, the left temporal areas are activated during word generation (Shaywitz, et al., 1995; Warburton, et al., 1996) and sentence comprehension tasks (Bottini, et al., 1994; Fletcher et al., 1995), whereas the IPL in general appears to serve as a phonological storehouse that becomes activated while reading, listening, and when engaged in a variety of language tasks, including word retrieval (Bookheimer, et al., 1995; Demonet, et al., 1994; Menard, et al., 1996; Paulesu, et al., 1993; Price, 1997; Vandenberghe, et al., 1996). Moreover, activation in the IPL, as well as the left frontal and left temporal lobe, increases as word length increases and for long and unfamiliar words (Price, 1997).

BROCA'S AREA & BROCA'S APHASIA

If an individual were to sustain massive damage to the left frontal convexity, his or her ability to speak would be curtailed dramatically. Even partial damage results in disturbances involving grammar and syntax and in reductions in vocabulary and word fluency in both speech and writing (Benson, 1993; Goodglass & Berko, 1960; Hofstede & Kolk, 1994; Milner, 1964). However, the ability to comprehend language is often (though not completely) intact (Bastiaanse, 1995; Tyler et al. 1995). This disorder is called Broca's (or expressive) aphasia (Benson, 1993; Goodglass & Kaplan, 1999; Levine & Sweet, 1983) and has also been referred to as "motor aphasia."

Individuals with expressive aphasia, although greatly limited in their ability to speak, are nevertheless capable of making emotional statements or even singing (Gardner, 1975; Goldstein, 1942; Joseph, 1988a; Smith, 1966; Smith & Burklund, 1966; Yamadori et al. 1977). In fact, they may be able to sing words that they cannot say. Moreover, since individuals with Broca's aphasia are able to comprehend, they are aware of their deficit and become appropriately depressed (Robinson & Szetela, 1981).

Indeed, those with the smallest lesions become the most depressed (Robinson & Benson, 1981)--the depression (as well as the ability to sing) presumably being mediated by the intact right cerebral hemisphere (Joseph 1988a) and the right frontal and right temporal lobes.

For example, Abrams and Taylor (1979) and Shagass et al. (1979) found that depressed patients demonstrated more right than left temporal lobe electrophysiological activity and EEG abnormalities. Moreover, others have argued that ECT administered to the right rather than the left temporal lobe is more likely to alleviate depressive symptoms (Cohen, et al, 1974; Deglin & Nikolaenko, 1975) and more likely to result in euphoric reactions. Likewise, it has been found that repetitive transcranial magnetic stimulation of the right frontal lobe over a 10-day period significantly diminishes depressive feelings as compared to sham treatment (Klein et al., 1999).

WERNICKE'S AREA & WERNICKE'S APHASIA

If, instead, the lesion were located more posteriorly, along the superior temporal lobe, the patient would have great difficulty understanding spoken or written language. Presumably this disorder is due in part to an impaired capacity to discern the individual units of speech and their temporal order. That is, sounds must be separated into discrete, interrelated linear units or they will be perceived as a meaningless blur.

Wernicke's area (in conjunction with the inferior parietal lobule) acts to organize and separate incoming sounds into a temporal and interrelated series so as to extract linguistic meaning via the perception of the resulting sequences. When this area is damaged, a spoken sentence such as "the big black dog" might be perceived as "the klabgigdod." This is referred to as Wernicke's aphasia. However, comprehension is improved when the spoken words are separated by long intervals.
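
To make the segmentation problem concrete, the following is a minimal toy sketch in Python (not a neural model; the lexicon and greedy longest-match strategy are illustrative assumptions, not drawn from the text). It shows how a continuous sound stream yields meaning only once it can be divided into discrete, known units--and how "the klabgigdod" fails to segment at all:

    # Toy illustration of sound-stream segmentation.
    # The lexicon and matching strategy are invented for this example.
    LEXICON = {"the", "big", "black", "dog"}

    def segment(stream):
        """Recursively split an unsegmented stream into known words."""
        if not stream:
            return []
        for end in range(len(stream), 0, -1):  # try the longest word first
            head, tail = stream[:end], stream[end:]
            if head in LEXICON:
                rest = segment(tail)
                if rest is not None:
                    return [head] + rest
        return None  # a "meaningless blur": no segmentation exists

    print(segment("thebigblackdog"))  # ['the', 'big', 'black', 'dog']
    print(segment("theklabgigdod"))   # None -- perceived as noise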

Patients with damage to Wernicke's area are nevertheless still capable of talking (due to preservation of Broca's area and the fiber pathway linking these regions). However, because Wernicke's area also acts to code linguistic stimuli for expression, expressive speech becomes severely abnormal, characterized by non sequiturs, neologisms, paraphasic errors, sound and word order substitutions, and the omission of pauses and sentence endings (Christman, 1994; Goodglass & Kaplan, 1999; Hecaen & Albert, 1978; Hofstede & Kolk, 1994; Kertesz, 1983a). That is, temporal-sequential expressive linguistic encoding becomes disrupted.

For example, one patient with severe receptive aphasia responded in the following manner: "I am a little suspicious about what the hell is the part there is one part scares, uh estate spares, Ok that has a bunch of drives in it and a bunch of good googin...what the hell...kind of a platz goasted klack..." Presumably since the coding mechanisms involved in organizing what humans are planning to say are the same mechanisms that decode what they hear, expressive as well as receptive speech becomes equally disrupted with left superior temporal lobe damage.

Nevertheless, a peculiarity of this disorder is that these patients do not always realize that what they say is meaningless (Maher et al. 1994). Moreover, they may fail to comprehend that what they hear is meaningless as well (Cf. Lebrun, 1987). This is because when this area is damaged, there is no other region left to analyze the linguistic components of speech and language. The rest of the brain cannot be alerted to the patient's disability. Such patients are at risk for being misdiagnosed as psychotic.

Presumably as a consequence of this loss of comprehension, these patients may display euphoria or, in other cases, paranoia, because there remains a nonlinguistic or emotional awareness that something is not right. That is, emotional functioning and comprehension remain intact (though sometimes disrupted by erroneously processed verbal input). Hence, aphasic individuals are often able to assess, to some degree, the emotional characteristics of their environment, including the prosodic (Monrad-Krohn, 1963), stress contrast (Blumstein & Goodglass, 1972), and semantic and connotative features of what is said to them, i.e., whether they are being asked a question, given a command, or presented with a declarative sentence (Boller & Green, 1972).

For example, many individuals with severe receptive (Wernicke's) aphasia can understand and respond appropriately to emotional commands and questions (e.g., "Say 'shit'" or "Do you wet your bed?") (Boller et al. 1979; Boller & Green, 1972). Similarly, the ability to read and write emotional words (as compared to non-emotional or abstract words) is also somewhat preserved among aphasics (Landis et al. 1982) due to preservation of the right hemisphere. Indeed, identifying emotional words and sentences is a capacity at which the right hemisphere excels (Borod et al. 1992; Graves et al. 1981; Van Strien & Morpurgo, 1992).

Because these paralinguistic and emotional features of language are analyzed by the intact right cerebral hemisphere, the aphasic individual is able to grasp, in general, the meaning or intent of a speaker, although verbal comprehension is reduced. This, in turn, enables them to react in a somewhat appropriate fashion when spoken to.

For example, after I had diagnosed a patient as suffering from Wernicke's aphasia, her nurse disagreed and indicated that the patient responded correctly to questions such as "How are you this morning?"--that is, the patient replied, "Fine." Later, when I re-examined the patient, I used a tone of voice appropriate for "How are you today?" but instead said, "It's raining outside?" The patient replied, "Fine!" and appropriately smiled and nodded her head (Joseph, 1988a). Often our pets are able to determine what we mean and how we feel by analyzing similar melodic-emotional nuances.

THE INFERIOR PARIETAL LOBE AND LANGUAGE

The inferior parietal lobule (the angular and supramarginal gyrus) is a multi-modal language area which acts to sequence language as well as to inject words and categories into the stream of language and thought (Joseph, 1982). The role of the IPL in language is evident not only from lesion studies (as will be discussed) but from functional imaging. For example, functional imaging suggests that the supramarginal gyrus may act as a phonological storehouse that becomes activated during short-term memory and word retrieval (Demonet, et al., 1994; Paulesu, et al., 1993; Price, 1997); conversely, deficits in phonological processing are the most common correlate of reading disability (Brady & Shankweiler, 1991). Simply looking at words will activate the left supramarginal gyrus (Bookheimer, et al., 1995; Vandenberghe, et al., 1996; Menard, et al., 1996; Price, 1997), which also becomes active when performing syllable judgments (Price, 1997) and when reading (Bookheimer, et al., 1995; Menard, et al., 1996; Price, et al., 1996). Likewise, the IPL becomes highly active when retrieving the meaning of words during semantic processing and semantic decision tasks, and activation increases as word length increases (Price, 1997).

As detailed below and in chapter 6, with the evolution of the IPL, what had been limbic language became yoked to the neocortex. Briefly, the lateral frontal convexity, including Broca's area, may have evolved from the supplementary motor areas and medial frontal lobe, which in turn evolved from and is richly interconnected with the anterior cingulate (e.g. Sanides 1964). The amygdala (and hippocampus) gave rise to the medial and inferior temporal lobes and the insula (see Sanides 1964), followed by the superior temporal lobe, Wernicke's area, and, by extension, portions of the inferior parietal lobule. The IPL, however, is also an evolutionary derivative of superior parietal tissue, which expanded in accordance with temporal-sequential hand use and fine motor function involving the fingers and the thumb.

The evolution of the IPL (that is, the angular gyrus), therefore, may have served as a nexus, interlocking, at a neocortical level, the cingulate-Broca pathways and the amygdala-Wernicke's pathway, thereby enabling limbic language impulses to become hierarchically represented as well as subject to temporal sequencing by neocortical neurons (Joseph 1993, 1999e,f). Prior to the evolution of the IPL/angular gyrus, Broca's area presumably was unable to receive sufficient input from the primary auditory receiving and Wernicke's areas (and the amygdala), and language thus remained, by and large, limbic and controlled by the anterior cingulate gyrus.

The impetus for the evolutionary development of the inferior parietal and frontal lobes and Broca's area, however, appears to be two-fold: in part a function of limbic derivatives (amygdala-hippocampus, amygdala-cingulate), and in part a function of the evolution of fine motor control involving the facial-oral musculature, vocalization, and especially the establishment of handedness. Given that the human left corticospinal tract matures earlier and crosses the medullary pyramids at an earlier age than fibers from the right (Kertesz & Geschwind 1971; Yakovlev & Rakic 1966), thereby presumably establishing synaptic control over the spinal and cranial motor nuclei in advance of the right as well, dominance for hand control and temporal-sequential processing became the province of the left hemisphere (Joseph 1982). With motor dominance, the left amygdala, cingulate gyrus, superior temporal lobe, inferior parietal lobule, and frontal lobe were reorganized accordingly.

Broca's and Wernicke's areas, and thus left cerebral linguistic functioning, are exceedingly dependent on the IPL and its capacity to impose rhythmic temporal sequences on auditory associations and motoric actions (Geschwind, 1966; Goodglass & Kaplan, 1982; Joseph, 1993, 1999e; Heilman et al. 1982; Kimura, 1993; Strub & Geschwind, 1983), including vocalizations which arise from the limbic system.

Presumably, when the inferior parietal lobule and the angular gyrus fully evolved, humans acquired the capacity to segment incoming sounds and to hierarchically represent and punctuate social-emotional, limbic vocalizations so as to express themselves vocally in temporal and grammatical sequences. Social-emotional vocalizations thus came to be governed by grammatical rules of organization, producing "modern" human language.

MULTI-MODAL PROPERTIES

In humans, the left IPL, being an indirect product of temporal lobe and superior parietal evolution (see chapter 6), is capable of multimodal processing of auditory, visual, and tactile impressions, and of naming this material by forming verbal associations. The IPL then injects this material into the stream of language and thought. For example, functional imaging shows that the left IPL becomes highly active when looking at words, when reading, and when engaged in word retrieval (Bookheimer, et al., 1995; Vandenberghe, et al., 1996; Menard, et al., 1996; Price, 1997). Indeed, because of its unique position at the juncture of the auditory, visual, somesthetic, and motor neocortex, it has gained the capability of analyzing, associating, and assimilating this divergent data in order to create multiple categories of visual, auditory, and tactile imagery and meaning.

Hence, because the IPL receives multi-modal input, one can feel an object while blindfolded and know what it would look like and be able to name it as well. One can also integrate and assimilate these diverse sensory signals so as to abstract, classify and produce multiple overlapping categories of experience and cross modal associations (Geschwind, 1966; Joseph, 1982, 1986a 1993; Joseph et al., 1984).

THE ORGANIZATION OF LINGUISTIC THOUGHT

ASSIMILATION AND ASSOCIATION WITHIN THE INFERIOR PARIETAL LOBE

The primary sensory receiving areas for vision, audition, and somesthesis are located in the occipital, temporal, and parietal lobes, respectively. Adjacent to each primary zone is a secondary association neocortical region where higher-level information processing occurs and where complex associations are formed. Wernicke's region is one such zone, as are the middle-inferior (basal) temporal area and the superior parietal lobe. Moreover, there are complex third-order association areas, such as the middle-inferior temporal lobe (Brodmann's area 37).

Area 37 is located between the visual cortex and the anterior temporal cortex and becomes activated during a variety of language tasks, including reading and object and letter naming (Price, 1997)--as demonstrated by functional imaging (Buchel et al., 1998; Price, 1997), direct cortical recording (Nobre et al., 1994), and electrical stimulation (Luders et al., 1986). In fact, normal, congenitally blind, and late-blind subjects all display activity in the medial temporal area (Buchel et al., 1998). Moreover, similar to injuries in the IPL, if the middle-inferior temporal lobe is injured, patients may suffer from reading and naming deficits (Rapcsak, et al., 1987), a condition referred to as phonological alexia. As noted, deficits in phonological processing are the most common correlate of reading disability (Brady & Shankweiler, 1991).

The IPL (which includes the angular and supramarginal gyri) is located at the junction where all the secondary and multi-modal association areas meet and overlap, and it receives converging visual-linguistic input from the basal-lateral (middle-inferior) temporal lobe. In this regard, the inferior parietal region receives converging higher-order information from each sensory modality and all association areas, and in fact makes possible the formation of multiple associations based on the assimilation of this divergent sensory input (Geschwind, 1965; Joseph, 1982, 1993). One can thus feel an object while blindfolded, know what it would look like, and be able to name it as well.

Through its involvement in the construction of cross-modal associations, this region acts to increase the capacity for abstraction, categorization, differentiation, and the verbal as well as visual labeling of sensory-motor experience. One is thus able to classify a single stimulus or event in multiple ways. In part this is made possible because the inferior parietal lobule is the recipient of the simple and complex associations already performed in the primary and association cortices, via the ten billion axonal interconnections that occur in this region.

STIMULUS ANCHORS AND THE TRAIN OF THOUGHT

The left IPL, and the Language Axis of which it is a part, make possible the assimilation of complex associations which have been constructed elsewhere, so that multiple classifications, categorizations, and descriptions are possible. The IPL also acts to integrate these associations and to arrange them according to pre-established (gestural) temporal sequences and the requirements of what needs to be communicated.

Moreover, via rich interconnections with Wernicke's area and the middle temporal lobe, the IPL is able to associate auditory/verbal labels with other sensory experiences, such that we can describe things as "sticky, sweet, moist, red, lumpy," as well as use single-word descriptions, e.g. "jelly." This capability is particularly important in regard to reading and naming, as described in further detail in chapters 20 and 21. For instance, when a word is read, the pattern of visual input is transmitted from the visual areas in the occipital and temporal lobes to the left IPL (which is coextensive with Wernicke's area), which then performs an auditory-visual matching function. That is, it calls up and integrates the auditory equivalent of what is viewed or read, so that we can name animals, objects, words, and letters and know what the name sounds like. If this area were damaged, reading ability would be lost, due in part to disconnection between the IPL and the middle-inferior temporal lobe.

THE TRAIN OF THOUGHT-ASSOCIATIONS

As noted, the left IPL (including the left posterior-superior temporal lobe) becomes more active when reading (Bookheimer, et al., 1995; Menard, et al., 1996; Price, 1997; Price, et al., 1996; Vandenberghe, et al., 1996), during semantic processing (Price, 1997), and when making semantic decisions, such as when reading words with similar meanings (Price, 1997). These same areas are activated during word generation (Shaywitz, et al., 1995; Warburton, et al., 1996) and sentence comprehension tasks (Bottini, et al., 1994; Fletcher et al., 1995).

In most instances in which the IPL is activated via internal or external sources of stimulation, multiple trains of inquiry are initiated via the numerous interconnections this area maintains. Impressions, memories, ideas, and feelings which are in any manner associated with the initial stimulus probe are aroused in response.

If a student is asked, "What did you do in school today?", a number of verbal and memory associations and association areas are aroused in parallel and integrated within the Language Axis, all of which are related in some manner to each element of the eliciting stimulus. Finally, in the process of associational linkage, those associations with the strongest stimulus value--those which most closely match each element of the question in terms of internal and external appropriateness, and thus have the highest probability of being the most relevant--rapidly take a place in a hierarchical, sequential, grammatical arrangement that is being organized in a form suitable for expressing a reply.

To return to the question regarding "school": each speech segment and sound unit becomes a trigger which first activates and then, like a magnet, draws associations accordingly. All aroused forms of mental imagery, verbal associations, and so on which are received in the IPL are then arranged, individually matched, and group matched, such that the associations which correspond to all sources of relevant input with sufficient probability act as templates of excitation that stimulate and attract other relevant ideas and associations. These in turn are assimilated and associated, or are subsequently deactivated due to their low probability in contrast to the associations already organized.

Moreover, because the strength and value of closely linked associations change in correspondence with the developing sequential hierarchy (or the initial parallel hierarchies), previously aroused and assimilated material may subsequently come to have a lower value of probability or appropriateness within the matrix of overall activity and may be deactivated (Joseph, 1982, 1986a, 1993).

Consider the question: "What is furry, small, loves milk and makes the sound Meoww?" At the level of the neocortex, each word, "furry," "small," "milk" and "meoww," acts to trigger associations (e.g. "furry = coat-animal-...," "milk = liquid-white-cow-..."). The grammatical linkage of these words also acts to trigger certain associations (e.g. "furry-milk-meoww = animal-cat-...") while deactivating others (e.g. "cow"). Following the analysis and comprehension of these sounds and words in Wernicke's area, the angular gyrus, and the middle temporal lobe, the IPL continues to call forth associations so that a reply to the question can be generated.

So that the animal can be named, the IPL, via its interactions with the temporal lobe, activates the necessary phonemic elements (e.g. "k-a-t") and then transfers this information to Broca's area, and the question is answered: "Cat." If instead the individual replies "tak," this would indicate a problem in organizing the correct phonemic elements once they were activated (see chapter 21 for an extended and detailed discussion).
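
The convergence process described above can be caricatured in a few lines of code. The following is a minimal sketch only: the association weights and threshold are invented for illustration and are not drawn from the text or from any neural data. It simply shows cue words jointly arousing candidate concepts, with weakly supported candidates (e.g. "cow") deactivated:

    from collections import defaultdict

    # Hypothetical association strengths; every value is invented.
    ASSOCIATIONS = {
        "furry": {"cat": 0.6, "dog": 0.5, "coat": 0.3},
        "small": {"cat": 0.5, "mouse": 0.6, "dog": 0.2},
        "milk":  {"cat": 0.6, "cow": 0.7},
        "meoww": {"cat": 1.0},
    }

    def converge(cues, threshold=0.5):
        """Arouse candidates from every cue, then deactivate those
        whose average support across the whole cue set is too low."""
        totals = defaultdict(float)
        for cue in cues:
            for concept, weight in ASSOCIATIONS.get(cue, {}).items():
                totals[concept] += weight
        survivors = {c: s / len(cues) for c, s in totals.items()
                     if s / len(cues) >= threshold}
        return max(survivors, key=survivors.get) if survivors else None

    print(converge(["furry", "small", "milk", "meoww"]))  # -> cat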

The final product of this hierarchical, highly grammatical arrangement of mutually determining and parallel associational linkages is the train of thought or a temporal-sequential stream of auditory associations in the form of words and sentences. However, before this occurs, these verbal associations must receive a final temporal-sequential grammatical stamp which is a consequence of the organization imposed on this material as it passes from Broca's area to the oral-speech musculature.

CONFABULATION & GAP FILLING

Assimilation of input from diverse sources is a major feature of the Language Axis in general. However, if, due to an injury, the language axis is functionally intact but isolated from a particular source of information about which the patient is questioned, he or she may suffer word-finding difficulty, anomia, or agnosia.

Following massive lesions of a brain area with which it normally communicates, the language axis sometimes begins to invent an answer or reply to questions based on the information available despite the gaps in that data or the incongruent nature of what is being reported. Consider, for example, denial of blindness (following massive injuries to the visual neocortex) or denial or neglect of the left extremity which may also be paralyzed (due to massive right cerebral injuries involving the motor and parietal neocortex). Patients will claim to have sight although they bump into objects or fall, or they may claim that their paralyzed left arm belongs to the doctor or a person in the next room (chapter 10).

To be informed about the left leg or left arm, the Language Axis must be able to communicate with the neocortical area responsible for perceiving and analyzing information about the extremities. For example, since the right parietal area maintains the somesthetic body-image, as well as the storage site for body-image memories, when that area is destroyed, the left half of the "body-image" and all associated memories are essentially "erased"--as if they had never existed.

When no message is received by the Language Axis, due to destruction of the neocortical area responsible for that message or memory, and when the Language Axis is not informed that no messages are being received (because the brain area which would alert it is no longer functioning), the language zones instead rely on some other source, even when that source provides erroneous input (Joseph 1982, 1986a). Substitute material is assimilated and expressed, corrections cannot be made (due to loss of input from the relevant knowledge source), and the patient begins to confabulate (see chapters 10, 19). That is, the Language Axis fills the "gap" with erroneous material.
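
The logic of this failure mode is easy to state in program form. The sketch below is purely illustrative--the topics, priors, and function names are invented for the example--but it captures the two conditions the text identifies: the knowledge source sends no data, and no error signal announces that the data are missing, so the strongest available association is reported instead:

    # Illustrative-only sketch of confabulatory gap filling.
    PRIOR_ASSOCIATIONS = {            # fallback beliefs, strongest first
        "left arm": ["it belongs to the doctor", "it is fine"],
        "vision":   ["I can see perfectly well"],
    }

    def knowledge_source(topic, destroyed):
        # A destroyed area returns no data -- and no error message.
        return None if destroyed else f"accurate report about the {topic}"

    def language_axis(topic, destroyed=False):
        report = knowledge_source(topic, destroyed)
        if report is not None:
            return report
        # No input, and no alert that input is missing: fill the gap
        # with the strongest available (erroneous) association.
        return PRIOR_ASSOCIATIONS.get(topic, ["(silence)"])[0]

    print(language_axis("left arm"))                  # accurate report about the left arm
    print(language_axis("left arm", destroyed=True))  # it belongs to the doctor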

THE RIGHT CEREBRAL HEMISPHERE

It has now been well established that the right cerebral hemisphere is dominant over the left in regard to the perception, expression and mediation of almost all aspects of emotionality, including the recall of emotional memories (Cimino et al., 1991; Rauch et al., 1996; Shin et al., 1997). This emotional dominance extends to bilateral control over the autonomic nervous system, including heart rate, blood pressure regulation, galvanic skin conductance and the secretion of cortisol in emotionally upsetting or exciting situations (Rosen et al. 1982; Wittling, 1990; Wittling & Pfluger, 1990; Yamour et al. 1980; Zamarini et al. 1990). However, this dominance does not appear to extend to the immune system (Meador et al., 1999).

In part, it is believed that right hemisphere dominance over emotional functioning is due to more extensive interconnections with the limbic system (Joseph, 1982, 1988a), including the fact that the limbic system appears to be functionally and structurally lateralized (see chapter 13). For example, there appear to be more axonal connections between the neocortex of the right hemisphere and subcortical structures, as the white matter connections are more extensive. The neocortex of the right hemisphere is also about 4% greater in size than that of the left, and the right amygdala is significantly (9%) larger than the left (Caviness, et al., 1997), whereas the left amygdala contains heavier concentrations of dopamine (Bradbury, Costall, Domeney, & Naylor, 1985; Stevens, 1992).

It has also been theorized that over the course of evolution and development, limbic social-emotional functions have come to be hierarchically subserved by the right cerebrum, due in part to the earlier maturation of the non-motor portions of the right cerebral neocortex and due to limbic laterality (Joseph, 1982, 1988a). This right hemisphere limbic dominance came to include the expression and representation of limbic language, thus providing the right cerebrum with functional dominance in regard to the expression and comprehension of emotional speech.

COMPREHENSION AND EXPRESSION OF EMOTIONAL SPEECH

Although language is often discussed in terms of grammar and vocabulary, there is a third major aspect to linguistic expression and comprehension by which a speaker may convey, and a listener discern, intent, attitude, feeling, mood, context, and meaning. Language is both emotional and grammatically descriptive. A listener comprehends not only the content and grammar of what is said, but the emotion and melody of how it is said--what a speaker feels.

Feeling, be it anger, happiness, sadness, sarcasm, empathy, etc., often is communicated by varying the rate, amplitude, pitch, inflection, timbre, melody and stress contours of the voice. When devoid of intonational contours, language becomes monotone and bland and a listener experiences difficulty discerning attitude, context, intent, and feeling. Conditions such as these arise after damage to select areas of the right hemisphere or when the entire right half of the brain is anesthetized (e.g., during sodium amytal procedures).

It is now well established (based on studies of normal and brain-damaged subjects) that the right hemisphere is superior to the left in distinguishing, interpreting, and processing vocal inflectional nuances, including intensity, stress and melodic pitch contours, timbre, cadence, emotional tone, frequency, amplitude, melody, duration, and intonation. The right hemisphere, therefore, is fully capable of determining and deducing not only what a person feels about what he or she is saying, but why and in what context he is saying it--even in the absence of vocabulary and other denotative linguistic features (Blumstein & Cooper, 1974; DeUrso et al. 1986; Dwyer & Rinn, 1981). This occurs through the analysis of tone and melody.

Hence, if I were to say, "Do you want to go outside?", although both hemispheres are able to determine whether a question or a statement has been made (Heilman et al. 1984; Weintraub et al. 1981), it is the right cerebrum which analyzes the paralinguistic emotional features of the voice so as to determine whether "going outside" will be fun or whether I am going to punch you in the nose. In fact, even without the aid of the actual words, based merely on melody and tone the right cerebrum can determine context and the feelings of the speaker (Blumstein & Cooper, 1974; DeUrso et al. 1986; Dwyer & Rinn, 1981). This may well explain why even preverbal infants are able to make these same determinations, even when spoken to in a foreign language (Fernald, 1993; Haviland & Lelwica, 1987). The left hemisphere has great difficulty with such tasks.
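
As a concrete (and deliberately crude) stand-in for this kind of paralinguistic analysis, consider the toy sketch below. It is a minimal illustration only: real prosodic perception involves far more than pitch, and the contours and threshold here are invented for the example. It merely shows that the same words can carry different "melodies," recoverable from the pitch contour alone:

    from statistics import mean, pstdev

    # Toy prosody profile computed from a pitch contour (Hz samples).
    # The flatness threshold is an invented value, not a clinical one.
    def prosody_profile(pitch_contour_hz, flat_threshold=10.0):
        spread = pstdev(pitch_contour_hz)
        return {
            "mean_pitch_hz": round(mean(pitch_contour_hz), 1),
            "pitch_spread_hz": round(spread, 1),
            "quality": "melodic/emotional" if spread > flat_threshold
                       else "monotone/aprosodic",
        }

    rising = [180, 195, 210, 240, 260]   # rising contour: question-like
    flat   = [180, 181, 179, 180, 180]   # flat contour: monotone speech
    print(prosody_profile(rising))       # -> melodic/emotional
    print(prosody_profile(flat))         # -> monotone/aprosodic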

For example, in experiments in which verbal information was filtered and the individual was to determine the context in which a person was speaking (e.g. talking about the death of a friend, speaking to a lost child), the right hemisphere was found to be dominant (Dwyer & Rinn, 1981). It is for these and other reasons that the right half of the brain sometimes is thought to be the more intuitive half of the cerebrum.

Correspondingly, when the right hemisphere is damaged, the ability to process, recall, or even recognize these nonverbal nuances is greatly attenuated. For example, although able to comprehend individual sentences and paragraphs, such patients have difficulty understanding context and emotional connotation, drawing inferences, relating what is heard to its proper context, determining the overall gist or theme, and recognizing discrepancies, such that they are likely to miss the point, respond to inappropriate details, and fail to appreciate fully when they are being presented with information that is sarcastic, incongruent, or even implausible.

Such patients frequently tend to be very concrete and literal. For example, when presented with the statement "He had a heavy heart" and requested to choose among several interpretations, right-brain-damaged (vs. aphasic) patients are more likely to choose a picture of an individual staggering under a large heart than one of a crying person. They also have difficulty describing morals, motives, emotions, or overall main points (e.g. they lose the gestalt), although the ability to recall isolated facts and details is preserved (Delis et al. 1986; Hough 1990; Wapner et al. 1981)--details being the province of the left hemisphere.

Although they are not aphasic, individuals with right hemisphere damage sometimes have difficulty comprehending complex verbal and written statements, particularly when there are features which involve spatial transformations or incongruencies. For example, when presented with the question "Bob is taller than George. Who is shorter?", those with right-brain damage have difficulties, due, presumably, to a deficit in nonlinguistic imaginal processing or an inability to search a spatial representation of what they hear (Carmazza et al. 1976).

In contrast, when presented with "Bob is taller than George. Who is taller?", patients with right-hemisphere damage perform similarly to normals, which indicates that the left cerebrum is responsible for providing the solution (Carmazza et al. 1976), given that the right hemisphere is injured and the question does not require any type of spatial transformation. That is, because the question "Who is shorter?" does not necessarily follow from the first part of the statement (i.e., it is incongruent), whereas "Who is taller?" does, these differential findings further suggest that the right hemisphere is more involved than the left in the analysis of incongruencies.

RIGHT HEMISPHERE EMOTIONAL-MELODIC LANGUAGE AXIS

Just as there are areas in the left frontal and temporal-parietal lobes which mediate the expression and comprehension of the denotative, temporal-sequential, grammatical-syntactical aspects of language, there are similar regions within the right hemisphere that mediate emotional speech and comprehension (Gorelick & Ross, 1987; Heilman et al. 1975; Joseph, 1982, 1988a, 1993; Lalande et al. 1992; Ross, 1981; Shapiro & Danly, 1985; Tucker et al., 1977)--regions which become highly active when presented with complex nonverbal auditory stimuli (Roland et al. 1981) and when engaged in interpreting the figurative aspects of language (Bottini et al., 1994).

Moreover, it appears that during the early stages of neonatal and infant development, the role of the right hemisphere in language expression and perception is even more pronounced. As originally proposed by Joseph (1982, 1988a), language in the neonate and infant is dominated by the right hemisphere, which in turn accounts for the initial prosodic, melodic, and emotional qualities of their vocalizations. Of course, the left hemisphere is genetically programmed to gain functional dominance and to acquire the grammatical, temporal-sequential, word-rich, and expressive-motor aspects of speech--as is evident neuroanatomically from the presence of asymmetries in the fetal and neonatal planum temporale (Wada et al., 1975; Witelson & Pallie, 1973), and from the fact that the left cerebral pyramidal tract descends and establishes synaptic contact with the brainstem and spinal cord in advance of the right (Kertesz & Geschwind 1971; Yakovlev & Rakic 1966).

However, as based on evoked potential studies, the pattern of neurological activity during the performance of language tasks does not begin to resemble the adult pattern until the onset of puberty (Holcomb et al., 1992). Moreover, although the left hemisphere gradually acquires language, the right hemisphere continues to participate even in non-emotional language processing, including reading, as demonstrated by functional imaging studies (Bottini et al., 1994; Cuenod, et al., 1995; Price et al., 1996).

For example, the right temporal and parietal areas are activated when reading (Bottini et al., 1994; Price et al., 1996), and the right temporal lobe becomes highly active when engaged in interpreting the figurative aspects of language (Bottini et al., 1994). Moreover, bilateral frontal activation is seen when speaking--though this activity is greater on the left (Passingham, 1997; Peterson et al., 1988). In part, however, these latter findings may well reflect those aspects of right hemisphere language processing (temporal-parietal) and expression (frontal-parietal) which are concerned with extracting and vocalizing emotional, motivational, personal, and contextual details.

For example, right frontal damage has been associated with a loss of emotional speech and emotional gesturing and a significantly reduced ability to mimic various nonlinguistic vocal patterns. In these instances, speech can become flat and monotone or characterized by inflectional distortions.

With lesions that involve the right temporal-parietal area, the ability to comprehend or produce appropriate verbal prosody, emotional speech, or to repeat emotional statements is reduced significantly. Indeed, when presented with neutral sentences spoken in an emotional manner, right hemisphere damage disrupts perception and discrimination (Heilman et al. 1975; Lalande et al. 1992) and the comprehension of emotional prosody (Heilman et al. 1984; Starkstein et al. 1994) regardless of whether it is positive or negative in content. Moreover, the ability to differentiate between different and even oppositional emotional qualities (e.g., "sarcasm vs irony" or "love" vs "hate") can become distorted, and the capacity to appreciate and comprehend humor or mirth may be attenuated (Gardner et al. 1975).

The semantic-contextual ability of the right hemisphere is not limited to prosodic and paralinguistic features, however, but includes the ability to process and recognize familiar, concrete, highly imaginable words, as well as emotional language in general.

The disconnected right hemisphere also can read printed words, retrieve objects with the left hand in response to direct and indirect verbal commands, e.g. "a container for liquids" (Joseph, 1988b; Sperry, 1982), and spell simple three- and four-letter words with cut-out letters (Sperry, 1982). However, it cannot comprehend complex, non-emotional, written or spoken language.

As noted, the right hemisphere dominance for vocal (and non-verbal) emotional expression and comprehension is believed to be secondary to hierarchical neocortical representation of limbic system functions. It may well be dominance by default, however. That is, at one time both hemispheres may have contributed more or less equally to emotional expression, but with the evolution of language and right-handedness, the left hemisphere gradually lost this capacity, whereas it was retained in the right cerebrum (chapter 6). Even so, without the participation of the limbic system--the amygdala and cingulate gyrus in particular--emotional language capabilities would for the most part be nonexistent.

LIMBIC LANGUAGE: The Amygdala & Cingulate.

The amygdala appears to contribute to the perception and comprehension of emotional vocalizations, which it extracts and imparts to the neocortical language centers via an axonal pathway, the arcuate fasciculus, which links the frontal convexity, the inferior parietal lobule, Wernicke's area, the primary auditory area, and the lateral amygdala (Joseph, 1993). That is, sounds perceived are shunted to and fro between the primary and secondary auditory receiving areas and the amygdala, which then acts to sample and analyze them for motivational significance (see chapter 13). Indeed, the amygdala becomes activated when listening to emotional words and sentences (Halgren, 1992; Heith et al., 1989), and if damaged, the ability to vocalize emotional nuances can be disrupted (see chapter 13).

In addition, the anterior cingulate becomes activated when speaking (Dolan et al., 1997; Passingham, 1997), processes and expresses emotional vocalizations (Jurgens, 1990; MacLean, 1990), and contributes to emotional sound production via axonal interconnections with the right and left frontal convexity (Broca's area). Indeed, it has been repeatedly demonstrated, using functional imaging, that the anterior cingulate--the right cingulate in particular--becomes highly active when vocalizing (Frith & Dolan, 1997; Passingham, 1997; Paulesu et al., 1997; Peterson et al., 1988).

Over the course of evolution the anterior cingulate appears to have given rise to large portions of the medial frontal lobe and the supplementary motor areas which in turn continued to evolve thus forming the lateral convexity including Broca's area. Via these interconnections, emotional nuances may be imparted directly into the stream of vocal utterances.

Conversely, subcortical right cerebral lesions involving the anterior cingulate, the amygdala, or these fiber interconnections can also result in emotional expressive and receptive disorders (chapter 15). Similarly, lesions to the left temporal or frontal lobe may result in disconnection, which in turn may lead to distortions in the vocal expression or perception of emotional nuances--that is, within the left hemisphere.

LANGUAGE & THOUGHT

VERBAL THINKING & VERBAL THOUGHT

Non-verbal imagery and hallucinations, as well as visual-emotional dream activity, are associated with activity within the right hemisphere and inferior temporal lobe--as well as within the brainstem (chapter 17). Conversely, verbal thinking is associated with the left hemisphere (Joseph 1982). Although an individual may utilize visual, emotional, olfactory, musical, or tactile "imagery" when they think, thinking may also take the form of "words" which might be "heard," or rather experienced, within one's own head (or mind). When one is engaged in verbal thought, the language axis of the left hemisphere typically becomes activated, as indicated by functional imaging (Buchel et al., 1998; Demonet, et al., 1994; Paulesu, et al., 1993; Peterson et al., 1988).

Verbal thinking is clearly a form of communication and generally consists of an organized hierarchy of associations, symbols, and labels which appear before an observer, or which are heard by the thinker--within the mind's "ear" and "eye," or rather, within Wernicke's area and the frontal lobe.

Thought (i.e., verbal thinking) can be a means of deduction, clarification, plan and goal formation, and reality manipulation (Craik, 1943; Freud, 1900; Miller et al. 1960; Piaget, 1962). However, it is also a progression, an associative advance which leads from an inner or outer perception to linguistic-motor expression (Freud, 1900), and an elaboration which, some have argued, begins with an initial or leading idea that is followed by a series of related verbal ideations, or which originates developmentally from the non-accessible regions of the mind (Freud, 1900; James, 1961; Jung, 1954; Piaget, 1962). In the process of thinking in words, one often acts to organize information which is "not thought out" and not clearly understood, so that it may become thought out and thus comprehended (Ach, 1951; James, 1961; Joseph, 1982; Schilder, 1951).

On the other hand, sometimes the verbal "train of thought" emerges spontaneously and reflexively, as if related ideas simply become strung and attached together with no specific goal or purpose in mind. Moreover, in these instances, such thoughts may rapidly alternate in content and fluctuate between seemingly unrelated ideas--as if triggered by a verbal domino effect in which associated ideas become sequentially aroused, each subsequent idea triggering the next. Verbal thoughts are also triggered by agents external to the left half of the brain (such as the right hemisphere or limbic system). Sometimes, however, the production of thought reflects random neural activity; indeed, sometimes it is exactly that, random and reflexive.

THE PURPOSE OF VERBAL THOUGHT

Whether directed, reflexive, or spontaneous, verbal thoughts always unfold before an observer and are heard within the mind's ear. Thought is a series of pseudo-auditory transactions which are experienced as well as produced, sometimes as a purposeful means of explanation. Thinking is sometimes experienced as a form of self-explanation through which ideas, impulses, desires, or things-in-the-world may be understood, comprehended, and possibly acted upon. Paradoxically, it is often a process by which one explains things to oneself. Indeed, as a means of deduction or explanation, and as a form of internal language, it is almost as if one is talking to oneself inside one's head.

Nevertheless, the fact that one acts as both audience and orator raises a curious question: who is explaining what to whom? A functional duality, and in fact a functional multiplicity, is thus implied in the production and reception of thought.

Assuming that the subject of thought originates in me, the thinker, and given that the organization of this often linear verbal arrangement is also a product of Self-generated activity, it should be expected in some instances that "I" should know what "I" am about to think prior to thinking it. "I" should also know the conclusion before it is communicated. In fact, often we do know (albeit non-verbally, tacitly) before we think (and while we think). Sometimes we do not think (at least in words) simply because the question, answer, and implications are simultaneously understood without the aid of verbal thought (Joseph, 1982). There is thus some redundancy built into the thinking process, as well as an almost inescapable sense of duality in its production and reception.

In that thinking is often a form of communication, it seems that one aspect of the Self, or rather the brain, has access to the information which is to be verbally thought about before it is thought about in verbal form. That is, the source of what will become the thought to be thought about is often within the Self. However, that source, or pre-thought, exists in a pre-verbal form and must then be translated and organized into a verbal, linear sequence in order to be thought about and thus understood--at least verbally. In this regard, thinking is sometimes a form of communication through which one part of the brain gains access to, and an understanding of, information or knowledge possessed in yet other brain regions, albeit in non-linguistic form.

Indeed, thinking often serves in part as a means of organizing, interpreting, and explaining impulses which arise in the non-linguistic portions of the nervous system, so that the language-dependent regions may achieve understanding (Joseph, 1982, 1986a). In fact, although thought may take various non-linguistic forms, e.g. musical thought, visual imagery, etc., one need only listen to one's own thoughts in order to realize that thinking often consists of an internal linguistic monologue, a series of words heard within one's own head. And because these particular forms of thought are structured and perceived as words heard within one's head, they must rely on the same neural pathways subserving the production and perception of language and speech sounds produced outside the head, i.e. the Language Axis--as also demonstrated through functional imaging (Buchel et al., 1998; Demonet, et al., 1994; Paulesu, et al., 1993; Peterson et al., 1988; Price, 1997).

DEVELOPMENT: LANGUAGE & THOUGHT

The left hemisphere is genetically predisposed to become dominant for the denotative, syntactic, lexical, grammatical, and motor-expressive aspects of speech--a consequence, in part, of the earlier maturation of the left corticospinal tract, which provides the left frontal motor areas a competitive advantage over the right in motor expression. This genetic predisposition is also evident prenatally. For example, and as is well known, Wernicke's area and the left superior temporal lobe are dominant for language reception--and the planum temporale is generally larger in the left hemisphere. As originally determined by Geschwind and Levitsky (1968), the posterior sylvian region (the planum temporale, which forms the core of Wernicke's area) is larger in the left hemisphere in 65% of the brains examined and larger in the right hemisphere in 25%, with 10% showing no difference. This asymmetry is present in the planum temporale of the fetus (Wada et al., 1975) as well as of neonates (Witelson & Pallie, 1973), which indicates that the structures involved in the comprehension of language are created prenatally and are determined genetically and by genetic patterns of neural cell migration.

Initially, however, the right and left hemisphere may be somewhat equipotent in regard to language acquisition. Hence, with early left hemisphere injury, language may be acquired by the right hemisphere (Joseph, 1986b; Joseph & Novelly, 1983). Indeed, right hemisphere language acquisition has been demonstrated through dichotic listening, tachistoscope, and left hemisphere anesthesia in over 20 adults with histories of early left hemisphere injury (Joseph, 1986b; Joseph & Novelly, 1983). Since the right hemisphere also becomes activated during language tasks--as measured through functional imaging (Bookheimer, et al., 1995; Bottini et al., 1994; Peterson, et al., 1988; Price et al., 1996; Shaywitz, et al., 1995)--and since this half of the brain is dominant for emotional language production in infancy, producing, hierarchically, what has been termed "limbic language" (Joseph, 1982), it can therefore also acquire language following massive early left hemisphere injury (Joseph, 1986b; Joseph & Novelly, 1983).

Over the ensuing years, the left hemisphere increasingly establishes dominance, though the right hemisphere remains dominant in regard to emotional-melodic language production and comprehension and, due to the visual-spatial nature of the task, becomes activated while reading, as demonstrated by functional imaging studies (Bottini et al., 1994; Cuenod, et al., 1995; Price et al., 1996), and when engaged in interpreting the figurative aspects of language (Bottini et al., 1994).

As the neocortex of the left hemisphere matures, it begins to stamp temporal sequences onto the melodic emotional patterns of right hemisphere/limbic speech, thus producing left hemisphere speech. However, this is a prolonged process, such that for most of the first year, language is limbic (see chapter 15). It is only near the end of the first year that the neocortex begins to hierarchically gain control, as evidenced by the development of jargon babbling and then the production of the first words. In fact, the pattern of neurological activity during the performance of language tasks does not begin to resemble the adult pattern until the onset of puberty (Holcomb et al., 1992).

THREE LINGUISTIC STAGES

Broadly considered, there are three maturational stages of verbal development that correspond to the acquisition and development of language and thought (Joseph, 1982). Initially, linguistic expression in the infant is reflexive and/or indicative of generalized and diffuse feeling states. Vocalizations are largely emotional-prosodic in quality, and mediated by limbic and brainstem nuclei (chapter 15).

At approximately 3-4 months of age the infant's utterances begin to assume meaningful as well as semi-imitative qualities, become indicative of specific feeling states, and begin to be influenced by both the right and left cerebral hemispheres. It is during this time period that a second babbling stage develops and the child's prosodic utterances begin to assume temporal-sequential characteristics. That is, the left hemisphere begins to provide rhythm and specification to the melody and associated feeling states expressed by the right hemisphere and limbic system. From this point on true language begins to develop.

However, it is not until a third stage of linguistic functioning makes its appearance that the child begins not only to speak in words, but to think them out loud. This final stage coincides with the expression and development of egocentric speech.

LIMBIC & BRAINSTEM COGNITIONS

For the first few days after birth most behavior is mediated by limbic, brainstem, and spinal nuclei (Chugani et al., 1987; Gibson, 1991; Joseph, 1982; Milner, 1967). For example, PET scan studies of glucose utilization in the newborn indicate high levels of brainstem but very low levels of neocortical activity (Chugani, 1994; Chugani et al., 1987). It is not until about one year of age that infant neocortical glucose activity begins to significantly increase (Chugani et al., 1987), and not until ages 4-10 that the sensory and association cortical layers begin to become increasingly myelinated (Gibson, 1991; Lecours, 1975).

Therefore, because of neocortical immaturity, the psychic functioning of the newborn is probably no more than a vague, somewhat undifferentiated awareness, consisting of a multitude of excitatory and inhibitory neuronal interactions and a series of transient feeling states and emotional upheavals which correspond to the activation of specific and related subcortical and limbic structures (chapters 23-25).

The neonate is essentially internally oriented, its psychic attentional functions almost entirely directed to stimuli impinging on the body surface and sensations transmitted by the mouth (Milner, 1967). That is, although the newborn can cry and scream, turn his or her head to sounds, and within a few weeks can imitate facial expressions, reach for objects, and show defensive reactions (Gibson, 1991; Meltzoff, 1990), these behaviors are under the control of the brainstem, limbic system, and basal ganglia.

Cognition, therefore, consists largely of generalized and diffuse feeling states which are aimed at the alleviation of displeasure or painful affect and the reactivation of experiences associated with pleasurable sensations (reviewed in chapter 13). In fact, from birth to 1 month, the infant displays only two attitudes, accepting and rejecting, and a very limited range of vocalization: crying and cooing (McGraw, 1969; Milner, 1967; Spitz & Wolf, 1946). These feeling states and vocalizations are largely mediated and expressed by the hypothalamus of the limbic system (Joseph, 1992a).

Indeed, as noted, the original impetus to speak springs forth from roots buried within the depths of the ancient limbic lobe and is bound with and tied to mood, impulse, feeling, desire, pleasure, pain, and fear. The infant cries, coos, and produces various prosodic inflectional variations which are without temporal-sequential organization and which serve only to communicate diffuse feelings. It is only over the course of the first few months that these prosodic-melodic utterances become associated with specific moods and emotions (Joseph, 1982; Piaget, 1952). It is at this time that babbling makes its appearance (Brain & Walton, 1969).

EARLY BABBLING & PROBABLE SPEECH

By 2-3 months of age amygdala-brainstem pyramidal fibers as well as corticospinal axons have already begun to myelinate. These maturational events coincide with an initial shift in the emotional utterances of the infant, which become progressively complex and prosodic and increasingly subject to sequencing and segmentation. The infant begins to "coo" and "goo" in a repetitive fashion that has been referred to as "early babbling." This early babbling stage generally involves the repetition of pleasant friction and voicing sounds which tend to be produced while making face-to-face and eye-to-eye contact and while engaged in social interaction (Kent & Miolo, 1995), which in turn implicates the amygdala (see chapter 15). Moreover, whereas the expression of pleasant sounds is in the ascendant, crying tends to become less frequent but more variable in tone, and can be differentiated into requests, calls, and sounds of discomfort (D'Odorico, 1984; Wolff, 1969). This indicates that the infant's behavior is less reflexive, and is increasingly under the control of the rapidly maturing limbic system, the amygdala in particular (see chapter 15).

As the amygdala and other limbic forebrain structures mature, and the larynx begins to assume an adult pattern of orientation, the infant not only babbles but vocalizes a variety of sounds which increasingly convey probable meanings and which may signify to the listener a variety of diffuse feelings and needs (D'Odorico, 1984; Wolff, 1969). Infants will in fact produce different noncry vocalizations depending on context and in reaction to people vs. objects (Fernald, 1992; Hauser, 1997). For example, if the 4-month-old infant coos and babbles "mama" (depending on context, facial expression, and prosody/fundamental frequency), the mother may interpret this to mean: "mama come here," "mama I hurt," "mama I thirst," etc. (e.g., D'Odorico, 1984; Fernald, 1992; Joseph, 1982; Piaget, 1952; Vygotsky, 1962; Wolff, 1969). Hence, although the infant's utterances are not referential and may at times represent little more than the random universal babbling produced by all infants, they can also convey meaning and serve as a means of communicating with the primary caretaker (see Fernald, 1992; Hauser, 1997, for related discussion).

Early babbling, in part, is associated with the maturation of the amygdala, a structure which can trigger lip smacking, rhythmic jaw movement, and (fear-induced) babbling and mandibular-teeth "chattering" (see chapter 15). Likewise, early babbling may be produced by reflexive jaw movement (e.g., MacNeilage & Davis, 1990; Moore & Ruark, 1996; Weiss, 1951) and lip smacking. Hence, early babbling may reflect immature amygdala (as well as amygdala-striatal and motor neocortical) influences on the brainstem and periaqueductal gray, which reflexively trigger the oral musculature, thereby inducing rhythmic movement of the jaw.

"Early" babbling is soon replaced by "late" babbling which has its onset around 4 months of age (de Boysson-Bardies, Bacri, Sagart, & Poizat, 1981; Oller, 1980; Oller & Lunch, 1992). Late babbling is sometimes described as "repetitive babbling" (Mitchell & Kent, 1990), and at later stages of development may include the repetitive production of CV syllables in which the same consonant is repeated, such as "dadada." The progressive development of "late babbling" in turn is associated with the progressive maturation of the anterior cingulate (see chapter 15). In fact, electrical stimulation in the cingulate and surrounding medial frontal tissues can trigger the repetitive babbling repetition of certain words and sounds, such as "dadadada" (Dimmer & Luders, 1995; Penfield & Welch, 1951).

LATE BABBLING, JARGON BABBLING, & TEMPORAL SEQUENTIAL SPEECH

By the time the infant has reached 4 months of age a second babbling stage develops, i.e. "late babbling." Late babbling heralds the first real shift from emotional-prosodic-melodic speech to what will become, after around one year of age, temporal-sequential language (Joseph, 1982; Leopold, 1947); i.e. left hemisphere speech.

The development of late babbling occurs in conjunction with the infant's increased ability to produce sophisticated social-emotional nuances, and appears to be associated with increasing cingulate (as well as amygdala) maturational influences. For example, around 4 months, the infant's intonational-melodic vocal repertoire becomes more elaborate and tied to a variety of specific feeling states (Piaget, 1952), which may reflect increasing amygdala maturational dominance.

However, over the ensuing months vocalizations also begin to assume an imitative quality (Nakazima, 1980); they are often context-specific but do not necessarily reflect the infant's internal state. Some vocalizations are produced in mimicry and in play (Piaget, 1952). The late babbling stage has also been repeatedly described as a form of "sound play," an activity which increasingly contributes to phonetic development (de Boysson-Bardies et al., 1981; Ferguson & Macken, 1983). As the cingulate is associated with mimicry and the onset of play behavior (chapter 15), and since the production of these sounds does not necessarily reflect the infant's true emotional state, the cingulate is therefore implicated in all aspects of the late babbling stage.

Repetitive, late babbling increases in frequency until around the seventh to tenth month of postnatal development (de Boysson-Bardies et al., 1981; Ferguson & Macken, 1983; Nakazima, 1980; Oller, 1980; Oller & Lynch, 1992), at which point the tendency to produce phonetically varied multisyllables becomes dominant. Thus the late babbling stage comes to be largely replaced by what has been termed "variegated" or "canonical" babbling (Oller, 1980; Oller & Lynch, 1992), which in turn is followed by "jargon" babbling (around 12 months).

The development of jargon babbling appears to correspond to maturational events taking place in the motor areas of the neocortex and may represent increasing pyramidal influences on the brainstem and oral-laryngeal musculature. In fact, jargon babbling appears to be a function of the immature somatomotor areas slowly gaining control over the limbic system, midbrain inferior colliculus, and periaqueductal gray (see also Herschkowitz et al., 1997).

For example, pyramidal fibers from the somatomotor neocortex to the brainstem become increasingly well myelinated from 4 to 12 months of age (Dekaban, 1970; Yakovlev & Lecours, 1967). Likewise, the somatomotor areas of the neocortex begin to rapidly mature around the first postnatal year (Brody et al., 1987; Chi, Dooling, & Gilles, 1977; Gilles et al., 1983; Scheibel, 1991, 1993). Hence, the neocortex likely contributes increasingly to babbling behavior, especially around one year of age.

Moreover, just as the pyramidal/corticospinal tracts and the somatomotor areas continue to mature and myelinate over the first and second years (Conel, 1937, 1941; Dekaban, 1970; Yakovlev & Lecours, 1967), babbling continues throughout the first and second years. It is during these same time periods that the child gradually acquires and develops the phonetic structure which underlies speech production (de Boysson-Bardies et al., 1981; Oller, 1980; Oller & Lynch, 1992). This implies considerable forebrain as well as right and left neocortical influences over vocal behavior (see below).

With increasing neocortical control, what appears to be a "new and unique motor skill" slowly emerges (Moore & Ruark, 1996) which directly contributes to the development of speech. That is, around 1 year of age, as these limbic-neocortical pathways myelinate and the frontal-temporal lobes begin to mature, the brainstem vocalization centers and limbic receptive and expressive language functions become increasingly subject to neocortical influences and articulatory control. Once the neocortical speech areas begin to establish hierarchical control, and begin to program the oral-laryngeal motor areas, a new form of (neural-muscular) vocalization emerges which appears somewhat distinct from its precursors (e.g. Moore & Ruark, 1996). Infants begin to jargon babble, and they also begin to speak their first words (Capute, Palmer, Shapiro, Wachtel, Schmidt, & Ross, 1986; Nelson, 1973, 1981; Oller, 1980; Oller & Lynch, 1992).

EGOCENTRIC & CONVERSATIONAL JARGON BABBLING

As the neocortex of the left cerebral hemisphere begins to mature, it begins to stamp and impose temporal sequences onto the stress, pitch, and melodic intonational contours which up until that time have characterized infant speech output (chapter 15). This is part of what the late and especially the jargon babbling stage heralds: the ability to sequence.

That is, syllabication is imposed on the intonational contours of the child's speech by the still immature neocortex of the left hemisphere, such that the melodic features of generalized vocal expression come to be punctuated, sequenced, and segmented, and vowel and consonantal elements begin to be produced (Joseph, 1982, 1993; see also Berry, 1969; de Boysson-Bardies et al., 1980). Left hemisphere speech comes to be superimposed over limbic (and right hemisphere) melodic language output. However, due to the immaturity of the neocortex, the speech produced is "jargon."

Jargon babbling coincides with the production of the first words, which are spoken around 11-12 months on average (Capute et al., 1986; Nelson, 1973, 1981; Oller, 1980; Oller & Lynch, 1992). Jargon babbling resembles actual speech, and at a distance it may sound as if the infant is actually conversing and speaking real words, though in fact they are babbling prosodically sophisticated neologistic jargon. Indeed, jargon ("conversational") babbling is similar to Wernicke's ("jargon") aphasia, which is associated with severe injuries to the temporal-parietal junction (Christman, 1994; Goodglass & Kaplan, 2000; Kertesz, 1983; Marcie & Hecaen, 1979). However, rather than being due to brain damage, jargon babbling reflects the extreme immaturity of the neocortical speech areas. Hence, the emergence of the jargon babbling stage signifies an obvious shift in sound production from the limbic system to the still immature neocortex.

In general, "jargon" babbling consists of normal stress and intonation, and is associated with the production of stops, nasals, and CV syllables as well as labial and dental/alveolar consonants (p, t, k, b, d, g, m, n, w, j, h, s), all of which are uttered in a temporal sequential fashion (Locke, 1995; Oller, 1980; Oller & Lynch, 1992). In part, it is the temporal sequential and varying prosodic nature of these utterances which give them their speech-like quality.

Jargon babbling not only resembles normal fluent speech but is often produced as the infant is gazing at or making eye-to-eye contact with the listener. The infant may appear to be engaging in an actual conversation, as if explaining some action, or a desire to direct the other's attention to some object or activity. Thus, jargon babbling often occurs in a social context and could be described as "conversational babbling."

Just as frequently, however, the infant may appear oblivious to any potential listener and may jargon babble while alone and at play, or while gazing at or exploring some object. As these vocalizations appear to be self-directed and are meant for the child's ears alone, they could be described as "egocentric babbling."

As noted, the emergence of jargon babbling is soon followed by the utterance of the infant's first words. However, although from age 1 through 2 vocabulary will expand from one word to over 300, the child will continue to produce "egocentric" and "conversational" jargon babble. In fact, jargon babbling does not completely disappear until well after age 2, and some children may continue to occasionally jargon babble as late as age 3 (Kent & Miolo, 1995; Locke, 1995).

However, although jargon babbling eventually disappears at this latter age, the egocentric versus social-conversational nature of these vocalizations is retained. That is, as children acquire language, they will produce conversational speech that is directed toward others as well as speech which continues to be meant for their ears alone. It is this latter form of language which Piaget and Vygotsky identified as "egocentric speech," a form of overt thinking; that is, thinking out loud. According to Piaget (1952, 1962, 1974) and Vygotsky (1962), egocentric speech is slowly internalized between the ages of 3 and 5, and eventually becomes completely covert, at which point the child has not only learned to speak in words, but to think in words as well.

EGOCENTRIC SPEECH

As the left hemisphere matures and wrests control of the peripheral and cranial musculature from subcortical and limbic influences, a distinct form of language emerges, i.e. left hemisphere speech--a type of language which is unique to humans and which initially consists of jargon but later becomes true speech. In contrast to limbic language, left hemisphere speech is grammatical, temporal-sequential, denotative, consists of word units, and is closely bound with the eventual expression and development of verbal thought. Verbal thinking, however, does not appear until much later in development; an unfolding event which corresponds to the appearance of yet another form of language unique to humans, a self-directed form of language which appears to be a form of thinking out loud: egocentric speech.

Egocentric speech is self-directed speech that consists of an explanatory monologue in which children comment on or explain their play and other actions, usually after the action has occurred. That is, children essentially talk to themselves, but in an explanatory fashion.

Egocentric speech is essentially speech for oneself (Joseph, 1982; Piaget, 1962; Vygotsky, 1962). It is a self-directed form of communication which heralds the first attempts at self-explanation via thinking out loud. According to Vygotsky (1962), egocentric speech makes its first appearance at approximately 3 years of age. According to Piaget (1952, 1962, 1974), at its peak, egocentric speech comprises almost 40-50% of the preoperational child's language, the remainder consisting of social speech (denotative, interactional, and emotional).

Social speech is produced so as to communicate with others. Egocentric speech is produced so as to communicate with no one other than the child who produces it.

Prior to the development of egocentric speech, communication is directed strictly toward outside sources. There is no attempt to verbally communicate with the Self, for there is no internal dialogue. Verbal thought has not yet developed, and children do not talk to themselves about their ongoing behavior or feelings.

At around age 3, egocentric speech--the peculiar linguistic structure from which thought will arise--appears in the context of social-denotative vocalizations (Vygotsky, 1962). That is, part of the time the child engages in social speech, whereas the remainder of speech activities are egocentric, directed and produced for the sole benefit of the child, who listens to his or her own speech and external commentary as he or she plays.

While engaging in egocentric speech, the child does not appear concerned with the listening needs of his or her audience, simply because to all appearances the words are meant for the child's ears alone (Piaget, 1952, 1962, 1974; Vygotsky, 1962). The child is essentially thinking out loud in an explanatory fashion, commenting on and describing his or her actions (Joseph, 1982; Vygotsky, 1962).

When engaged in an egocentric monologue, the child shows no interest in influencing others or explaining to them what is in fact being explained. The child will keep up a running verbal accompaniment to his actions, commenting on his behavior in an explanatory fashion even while alone. Moreover, while engaged in this self-directed external monologue the child appears oblivious to the responses of others to his statements (Piaget, 1952, 1962, 1974; Vygotsky, 1956). It is as if the child has no awareness that others hear him. In fact, many a child has been shocked when he later hears his mother (or a friend) repeat or comment upon something he assumed no one else could hear.

Egocentric speech is not simply talking out loud, but rather tends to be self-explanatory, serving as a form of commentary that is initially produced only after an action has been completed. The child observes what he or she has done and then comments on and/or explains what has taken place.

Egocentric speech presents us with a curious anomaly, for we must accept that the child knows what she has done without needing to comment on or explain her actions; moreover, she must know why she has performed certain behaviors without the need to explain them to herself. Nevertheless, the fact that she explains and comments upon her behaviors after they occur argues otherwise. Paradoxically, the child acts as both actor and witness, explainer and explained to. Clearly, the child explains her actions to herself (Vygotsky, 1962).

THE INTERNALIZATION OF EGOCENTRIC SPEECH

Initially egocentric speech is completely external and after the fact. Presumably it is external and after the fact because the child is incapable of internally generating linguistic thoughts (Piaget, 1952, 1962, 1974; Vygotsky, 1956). Presumably this is due to the slow pace of myelination in the Language Axis and, in particular, the corpus callosum (Joseph, 1982). Because of these internal limitations, the child therefore thinks about his or her behavior out loud.

It is important to emphasize, however, that egocentric speech is not random or pointless, but is largely explanatory: children are explaining their behavior to themselves. Since they are utilizing words, it is apparent that they are explaining their actions to their left hemisphere. That the explanation initially occurs only after the behavior has been completed suggests that the left hemisphere did not have access to the behavioral plan, or the motivation behind it, until after the act was completed, at which point it is explained via verbal commentary.

This suggests that the behavior being explained was planned, initiated, or mediated, presumably, by the right hemisphere or limbic system (Joseph, 1982). Because of the slow pace of corpus callosum myelination, the left hemisphere of the child's brain cannot gain access to right cerebral impulses, memories, or plans until after they are expressed and can be observed and thus commented on. However, as the callosum matures, the child's left hemisphere begins to receive earlier or advanced access, and thus produces the egocentric commentary earlier and earlier as well.

For example, first a child will paint a picture and then explain it. As she ages she will paint a picture and explain it while she is painting. Finally, she will announce what she is going to paint, and then paint it (Vygotsky, 1962).

Hence, as the child grows older, her comments and explanations occur earlier in the sequence of expression, until finally the child begins to explain her actions before they are performed instead of after they have occurred. Essentially, as the child ages she appears to receive advanced warning of her intentions and actions, until finally this information is available before rather than after she acts (Joseph, 1982). At this later stage, however, egocentric speech has been greatly internalized as verbal thought.

According to Vygotsky (1962), after its initial appearance and elaboration, egocentric speech also begins to occur internally, and in fact becomes progressively more covert as the child grows older.

At its overt maximum, when it appears to be fully developed (comprising by age 4 almost 50% of the child's speech), its traits and structures are simultaneously being internalized and strengthened, and, like an iceberg, comprise a greater portion of the child's cognitive activities than may be witnessed. That is, egocentric speech never disappears but becomes completely internalized in the form of inner speech, i.e. thought. The child has now learned to think words as well as to speak them; and to think them in a temporal and organized sequence which retains its original and primary function--self-communication.

SELF-EXPLANATION & INTERHEMISPHERIC COMMUNICATION

The essential feature of the external components of egocentric speech is that it is based on stimuli and actions which occur outside the child's immediate sphere of understanding and experience, at least insofar as the language-dependent left hemisphere is concerned. In this regard, children, when they "misbehave," are probably sometimes telling the truth when they say they "don't know why" they did such and such. That is, the left hemisphere does not know why.

Although egocentric speech is a self-directed monologue, it is nevertheless a product of the left cerebral hemisphere. That egocentric speech appears initially only after an action has occurred indicates that the left hemisphere of the young child is responding to impulses and actions initiated outside its immediate realm of experience and comprehension. It seems that the left hemisphere, in producing egocentric speech, is not only "thinking out loud," but is attempting to interpret what it observes and experiences externally, thus creating a meaningful explanatory sequence which it then linguistically communicates to itself.

As noted above, the appearance and eventual internalization of egocentric speech occurs in response to several maturational changes in the central nervous system, and parallels the myelination of the corpus callosum and the increased ability of the cerebral hemispheres to communicate (Joseph, 1982). Hence, as has been demonstrated by a number of independent researchers, communication between the right and left halves of the brain is somewhat poor prior to age 3 and remains limited until approximately after age 5 (Deruelle & Schonen, 1991; Finlayson, 1975; Gallagher & Joseph, 1982; Galin et al., 1977, 1979; Joseph & Gallagher, 1985; Joseph et al., 1984; Kraft et al., 1980; Molfese et al., 1975; O'Leary, 1980; Ramaekers & Njiokiktjien, 1991; Salamy, 1978). Presumably this is a function of the immaturity of the callosal fiber connections between the hemispheres (Yakovlev & Lecours, 1967).

Essentially, egocentric speech appears to be a function of the left hemisphere's attempt to organize, interpret, and make sense of behavior initiated by the right hemisphere and limbic system (Joseph, 1982). Presumably, because interhemispheric communication is at best grossly incomplete, the left hemisphere utilizes language to explain to itself the behavior in which it observes itself to be engaged.

As the commissures mature and information flow within and between the hemispheres increases, the left hemisphere, gaining increased access to these impulses as they are formulated in the non-linguistic portions of the brain, begins to linguistically organize what it experiences internally (rather than externally). Essentially, increased commissural transmission allows the left hemisphere access to right hemisphere impulses-to-action before the action occurs, rather than forcing it to make sense of the behavior after its completion (which is typical of split-brain patients; see chapter 10).

As noted in Chapter 10, even in the "normal" intact adult, commissural transmission is often incomplete. As such, the adult left hemisphere sometimes finds itself witnessing and participating in behaviors which it did not initiate, and which it does not understand.



