Functional Neuroanatomy of the Temporal Lobe Primary Auditory Area

(Updated from: Neuropsychiatry, Neuropsychology, Behavioral Neurology, Academic Press, 2000)

Rhawn Joseph, Ph.D.
BrainMind.com


The temporal lobe is the most heterogeneous of the four lobes of the human brain, as it consists of six-layered neocortex, four- to five-layered mesocortex, and three-layered allocortex, with the hippocampus and amygdala forming its limbic core.




As detailed in chapters 5 and 12, over the course of evolution the dorsally situated hippocampus became displaced and progressively assumed a ventral position, in the course of which it also contributed to the neocortical development of portions of the parietal, occipital, and temporal lobes. Similarly, portions of the amygdala became increasingly cortical in structure and, together with the hippocampus, contributed to the evolution of the anterior, medial, superior, and lateral temporal lobes.

Because so much of the temporal lobe evolved from these limbic nuclei, unlike the other lobes it consists of a mixture of allocortex, mesocortex, and neocortex, with allocortex and mesocortex being especially prominent in and around the anterior-medial, inferiorly located uncus, which overlies and abuts the amygdala and hippocampus.

Given the role of the amygdala and hippocampus in memory, emotion, attention, and the processing of complex auditory and visual stimuli, the temporal lobe became similarly organized (Gloor, 1997). Broadly considered, the neocortical surface of the temporal lobes can be subdivided into three main convolutions, the superior, middle, and inferior temporal gyri, which in turn are separated and distinguished by the sylvian fissure and the superior, middle, and inferior temporal sulci. Each of these subdivisions performs different (albeit overlapping) functions, i.e., auditory, visual, and auditory-visual-affective perception, including memory storage.

Specifically, the anterior and superior temporal lobes are concerned with complex auditory and linguistic functioning and the comprehension of language (Binder et al., 1994; Edeline et al. 2010; Geschwind 1965; Goodglass & Kaplan, 1992; Keser et al. 2016; Nelken et al., 2012; Nishimura et al., 2012; Price, 2007). The importance of the superior temporal lobe in auditory functioning, and (in the left hemisphere) language, is indicated by the fact that the superior temporal lobe, including the primary auditory area, is larger in humans than in other primates and mammals. In addition, Heschl's gyri are larger on the left, and the left planum temporale is 10 times larger (among the majority of the population) than its counterpart on the right (Geschwind & Levitsky, 1968). The auditory areas, however, extend along the axis of the superior temporal lobe in an anterior-inferior and posterior-superior arc, such that the connections of the primary auditory area extend outward in all directions, innervating, in temporal sequence, the second and third order auditory association areas (Pandya & Yeterian, 1985), association areas which in turn project back to the primary auditory area.




The inferior and medial temporal lobe harbors the amygdala and hippocampus, performs complex visual integrative activities including visual closure, and contains neurons which respond selectively to faces and complex geometric and visual stimuli (Gross & Graziano 1995; Nakamura et al. 1994; Rolls 1992; Tovee et al. 1994). The inferior and middle temporal lobes are the recipients of two diverging (dorsal and ventral) streams of visual input arising from within the occipital lobe and thalamus (Ungerleider & Mishkin, 1982), i.e., the pulvinar and dorsal medial nucleus of the thalamus. The dorsal stream is more concerned with the detection of motion, movement, orientation, and binocular disparity, whereas the ventral stream is concerned with the discrimination of shapes, textures, objects, and faces, including individual faces (Baylis et al., 1985; Perrett et al., 1984, 1992). This information flows from the primary visual to the visual association areas, is received and processed in the temporal lobes, and is then shunted to the parietal lobe, and to the amygdala and entorhinal cortex (the gateway to the hippocampus), where it may then be learned and stored in memory.




The temporal lobes, however, also receive extensive projections from the somesthetic and visual association areas, and process gustatory, visceral, and olfactory sensations (Jones & Powell, 1970; Previc 2010; Seltzer & Pandya, 1978), including the feeling of hunger (Fisher 1994). Hence, the temporal lobes perform an exceedingly complex array of divergent and interrelated functions.

FUNCTIONAL OVERVIEW

The right and left temporal lobe are functionally lateralized, with the left being more concerned with non-emotional language functions, including, via the inferior-medial and basal temporal lobes, reading and verbal (as well as verbal-visual) memory. For example, as determined by functional imaging, the left posterior temporal lobe becomes highly active when reading and speaking, due, presumably, to its involvement in lexical processing (Binder et al., 1994; Howard et al., 199). The superior temporal lobe (and supramarginal gyrus) also becomes more active when reading aloud than when reading silently (Bookheimer et al., 1995), and becomes active during semantic processing, as does the left angular gyrus (Price, 2007). These same temporal areas are activated during word generation (Shaywitz et al., 1995; Warburton et al., 1996) and sentence comprehension tasks (Bottini et al., 1994; Fletcher et al., 1995), and (in conjunction with the angular gyrus) become highly active when retrieving the meaning of words during semantic processing and semantic decision tasks (Price, 2007). Likewise, single cell recordings from the auditory areas in the temporal lobe demonstrate that neurons become activated in response to speech, including the sound of the patient's own voice (Creutzfeldt et al., 1989).

By contrast, the right temporal lobe is more concerned with perceiving emotional and melodic auditory signals (Evers et al., 2012; Parsons & Fox, 2007; Ross, 2013), and is dominant for storing and recalling emotional and visual memories (Abrams & Taylor, 1979; Cimino et al. 1991; Cohen, Penick & Tarter, 1974; Deglin & Nikolaenko, 1975; Kimura, 1963; Shagass et al., 1979; Wexler, 1973). The right temporal lobe also becomes highly active when engaged in interpreting the figurative aspects of language (Bottini et al., 1994).

However, there is considerable functional overlap, and these structures often become simultaneously activated when performing various tasks. For example, as based on functional imaging, when reading, the right posterior temporal cortex also becomes highly active (Bookheimer et al., 1995; Bottini et al., 1994; Price et al., 1996), and when making semantic decisions (involving reading words with similar meanings), there is increased activity bilaterally (Shaywitz et al., 1995).

Presumably, in part, both temporal lobes become activated when speaking and reading, due to the left temporal lobe's specialization for extracting the semantic, temporal, sequential, and the syntactic elements of speech, thereby making language comprehension possible. By contrast, the right temporal lobe becomes active as it attempts to perceive, extract and comprehend the emotional (as well as the semantic) and gestalt aspects of speech and written language.

AUDITORY NEOCORTEX

Although the various cytoarchitectural functional regions are not well demarcated via the use of Brodmann's maps, it is possible to very loosely define the superior temporal and anterior-inferior and anterior middle temporal lobes as auditory cortex (Pandya & Yeterian, 1985). These regions are linked together by local circuit (inter-) neurons and via a rich belt of projection fibers, including the arcuate and inferior fasciculus, which project in an anterior-inferior and posterior-superior arc, innervating (inferiorly) the amygdala and entorhinal cortex (Amaral et al., 1983) and (posteriorly) the inferior parietal lobule, as is evident from brain dissection. It is also via the arcuate and inferior fasciculus that the inferior temporal lobe, entorhinal cortex (the "gateway to the hippocampus"), and amygdala receive complex auditory information from, and transfer it to, the primary and secondary auditory cortex, which simultaneously receives auditory input from the medial geniculate of the thalamus, the pulvinar, and (sparingly) the midbrain (Amaral et al., 2013; Pandya & Yeterian, 1985). As will be detailed below, it is within the primary and neocortical auditory association areas that linguistically complex auditory signals are analyzed and reorganized so as to give rise to complex, grammatically correct, human language.



It is noteworthy that immediately beneath the insula and approaching the auditory neocortex is a thick band of amygdala-cortex, the claustrum. Over the course of evolution the claustrum apparently split off from the amygdala due to the expansion of the temporal lobe and the passage of additional axons coursing throughout the white matter (Gilles et al., 1983), including the arcuate fasciculus. Nevertheless, the claustrum maintains rich interconnections with the amygdala, the insula, and to some extent, the auditory cortex (Gilles et al., 1983). This is evident from dissection of the human brain, which reveals that the fibers of the arcuate fasciculus do not merely pass through this structure (breaking it up in the process) but send fibers to and receive fibers from it.

CORTICAL ORGANIZATION

The functional neural architecture of the auditory cortex is quite similar to that of the somesthetic and visual cortex (Peters & Jones, 1985). That is, neurons in the same vertical column have the same functional properties and are activated by the same type or frequency of auditory stimulus. The auditory cortex, therefore, is basically subdivided into discrete columns which extend from the white matter (layers VII/6b) to the pial surface (layer I). However, although most neurons in a single column receive excitatory input from the contralateral ear, some receive input from the ipsilateral ear, which may exert excitatory or inhibitory influences so as to suppress, within the same column, input from the ipsilateral or contralateral ear (Imig & Brugge, 1978). These interactions have been referred to as summation interaction (excitatory/excitatory) and suppression interaction (inhibitory/inhibitory, inhibitory/excitatory).

Moreover, some columns tend to engage in excitatory summation, whereas others tend to engage in inhibitory suppression (Imig & Brugge, 1978)--a process that would contribute to the focusing of auditory attention and the elimination of neural noise thereby promoting the ability to selectively attend to motivationally significant environmental and animal sounds including human vocalizations (Nelken et al., 2012).
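For readers who think computationally, the following is a minimal sketch, with hypothetical weights and firing rates, of the summation/suppression column behavior just described: each column combines contralateral drive with a signed ipsilateral influence, so that excitatory/excitatory columns sum while inhibitory columns suppress. The function name and all numeric values are illustrative, not taken from the source.

```python
def column_response(contra, ipsi, ipsi_weight):
    """Firing rate of one cortical column: contralateral excitatory drive
    plus a signed ipsilateral influence, rectified so rates stay >= 0.
    All values are arbitrary illustrative units."""
    return max(0.0, contra + ipsi_weight * ipsi)

contra_drive, ipsi_drive = 1.0, 0.6
# Summation interaction: ipsilateral input is excitatory (+), so the column sums.
print(column_response(contra_drive, ipsi_drive, +0.5))   # 1.30
# Suppression interaction: ipsilateral input is inhibitory (-), so the column is damped.
print(column_response(contra_drive, ipsi_drive, -0.9))   # 0.46
```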

Among humans, the primary auditory neocortex is located on the two transverse gyri of Heschl, along the dorsal-medial surface of the superior temporal lobe. The centermost portion of the anterior transverse gyrus contains the primary auditory receiving area (Brodmann's area 41) and neocortically resembles the primary visual (area 17) and somesthetic (area 3) cortices. The posterior transverse gyrus consists of both primary and association cortices (areas 41 and 42, respectively). The major source of auditory input is derived from the medial geniculate thalamic nucleus as well as the pulvinar (which also provides visual input).

Heschl's gyri are especially well developed in humans, and no other species has a Heschl's gyrus as prominent as that of humans (Yousry et al., 2015). In fact, some individuals appear to possess multiple Heschl's gyri, though the significance of this is not clear. However, it has been reported that this may be a reflection of genetic disorders, learning disabilities, and so on (see Leonard, 1996).





In over 80-90% of right handers and in 50% to 80% of left handers, the left hemisphere is dominant for expressive and receptive speech (Frost et al., 2015; Pujol et al., 2012). In humans, the auditory cortex, including Wernicke's area (i.e. the planum temporale), is generally larger in the left temporal lobe (Geschwind & Levitsky, 1968; Geschwind & Galaburda 1985; Habib et al. 1995; Wada et al., 1975; Witelson & Pallie, 1973). Specifically, as originally determined by Geschwind & Levitsky (1968), the planum temporale is larger in the left hemisphere in 65% of brains studied and larger on the right in 25%, whereas there is no difference in 10%. Geschwind, Galaburda, and colleagues argue that the larger left planum temporale is a significant factor in the establishment of left hemisphere dominance for language.

AUDITORY TRANSMISSION FROM THE COCHLEA TO THE TEMPORAL LOBE

Within the cochlea of the inner ear are tiny hair cells which serve as sensory receptors. These cells give rise to axons which form the cochlear division of the 8th cranial nerve, i.e. the auditory nerve. This rope of fibers exits the inner ear and travels to and terminates in the cochlear nucleus, which overlaps and is located immediately adjacent to the vestibular nucleus from which it evolved within the brainstem (see chapter 5).



Among mammals, once auditory stimuli are received in the cochlear nucleus, there follows a series of transformations as this information is relayed to various nuclei, i.e., the superior olivary complex, the nucleus of the lateral lemniscus of the brainstem, the midbrain inferior colliculus, and the medial geniculate nucleus of the thalamus, as well as the amygdala (which extracts those features which are emotionally or motivationally significant) and cingulate gyrus (Carpenter 1991; Devinsky et al. 1995; Edeline et al. 2010; Parent 1995). Auditory information is then relayed from the medial geniculate nucleus of the thalamus, as well as via the amygdala (through the inferior fasciculus), to Heschl's gyrus, i.e. the primary auditory receiving area (Brodmann's areas 41 & 42), located within the superior temporal gyrus and buried within the depths of the Sylvian fissure.

Here auditory signals undergo extensive analysis and reanalysis and simple associations are established. However, by the time they have reached the neocortex, auditory signals have already undergone extensive analysis by the medial thalamus, amygdala, and the other ancient structures mentioned above.

As noted, unlike the primary visual and somesthetic areas located within the neocortex of the occipital and parietal lobes which receive signals from only one half of the body or visual space, the primary auditory region receives some input from both ears, and from both halves of auditory space (Imig & Adrian, 1977). This is a consequence of the considerable interconnections and cross-talk which occurs between different subcortical nuclei as information is relayed to and from various regions prior to transfer to the neocortex. Predominantly, however, the right ear transmits to the left cerebral neocortex and vice versa.

Although the majority of auditory neurons respond to auditory stimuli, approximately 25% also respond to visual stimuli, particularly those in motion. In fact, Penfield and Perot (1963) were able to trigger visual responses from stimulation in the superior temporal lobe. In addition, these neurons are involved in short-term memory, as lesions to the superior temporal lobe can induce short-term memory deficits (Heffner & Heffner, 1986).

Conversely, electrical stimulation induces not only visual responses, but complex auditory responses. Penfield and Perot (1963) report that electrical stimulation can produce the sound of voices and the hearing of music. These neurons will also respond, electrophysiologically, to the sound of human voices, including the patient's own speech (Creutzfeldt et al., 1989). Conversely, injury to the left or right superior temporal lobe can result in an inability to correctly perceive or hear complex sounds, a condition referred to as pure word deafness, and, if limited to the right temporal lobe, agnosia for environmental and musical sounds (see below). In fact, right temporal injuries can disrupt the ability to remember musical tunes or to create musical imagery (Zatorre & Halpern, 2013).

AUDITORY RECEIVING AREA NEUROANATOMICAL-FUNCTIONAL ORGANIZATION

The primary auditory area is tonotopically organized (Edeline et al. 2010; Merzenich & Brugge, 1973; Pantev et al. 1989; Woolsey & Fairman, 1946), such that different auditory frequencies are progressively anatomically represented. In addition, there is evidence to suggest that the auditory cortex is also "cochleotopically" organized (Schwarz & Tomlinson 2010). Specifically, high frequencies are received and analyzed in the anterior-medial portions, and low frequencies in the posterior-lateral regions, of the superior temporal lobe (Merzenich & Brugge, 1973). In addition, the primary auditory cortex processes and perceives pitch and intensity, as well as modulations in frequency. Primary auditory neurons are especially responsive to the temporal sequencing of acoustic stimuli (Wollberg & Newman, 1972).
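As a toy illustration of this tonotopic layout, the sketch below maps frequency logarithmically onto a normalized one-dimensional cortical axis, with high frequencies at the anterior-medial end and low frequencies at the posterior-lateral end. The frequency range and the simple log mapping are illustrative assumptions only.

```python
import math

F_LOW, F_HIGH = 20.0, 20_000.0  # approximate limits of human hearing, Hz

def tonotopic_position(freq_hz):
    """Normalized position along a 1-D auditory cortex axis:
    0.0 = anterior-medial (high frequencies),
    1.0 = posterior-lateral (low frequencies)."""
    log_span = math.log(F_HIGH) - math.log(F_LOW)
    return (math.log(F_HIGH) - math.log(freq_hz)) / log_span

for f in (8000, 1000, 100):
    print(f"{f:>5} Hz -> position {tonotopic_position(f):.2f}")
```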

Moreover, the auditory cortex appears to be organized so as to detect tones in noise, i.e. the non-random structure of natural sounds, which enables it to selectively attend to environmental and animal sounds including human vocalizations (Nelken et al., 2012). In fact, these same neural fields can be activated by sign language, that is, among the congenitally deaf (Nishimura et al., 2012).

Some neurons are also specialized to perceive species-specific calls (Hauser, 2007) or the human voice (Creutzfeldt et al., 1989), as determined by single unit recording. Some neurons will react to the patient's own voice, whereas others will selectively respond to a particular (familiar) voice or call, but not to others, which, in regard to voices, indicates that these cells (or neural assemblies) have engaged in learning.

Auditory neurons, in fact, display neuroplasticity in response to learning as well as injury. For example, not only do auditory synapses display learning-induced changes, but auditory neurons can be classically conditioned so as to form associations between stimuli, such that if a neutral stimulus is paired with a noxious stimulus, the neuron's response properties will be greatly altered (Weinberger, 2013).

These neurons also display neuroplasticity in response to injury and loss of hearing. For example, following cochlear or peripheral hearing loss, neurons that no longer receive high frequency auditory input begin to respond instead to middle frequencies (Robertson & Irvine, 1989). In fact, in response to complete loss of hearing, such as congenital deafness, these neurons may cease to respond to auditory input and may instead respond to body language, such as the signing used by the deaf (Nishimura et al., 2012).

FILTERING, FEEDBACK & TEMPORAL-SEQUENTIAL REORGANIZATION

The old cortical centers located in the midbrain and brainstem evolved long before the appearance of neocortex and have long been adapted and specialized for performing a considerable degree of information analysis (Buchwald et al. 1966). This is evident from observing the behavior of reptiles and amphibians, in which auditory cortex is either absent or minimally developed.

Moreover, many of these old cortical nuclei also project back to each other, such that each subcortical structure might hear and analyze the same sound repeatedly (Brodal 1981; Pandya & Yeterian, 1985). In this manner the brain is able to heighten or diminish the amplitude of various sounds via feedback adjustment (Joseph, 2013; Luria, 1980). In fact, not only is feedback provided, but the actual order of the sound elements perceived can be rearranged when they are played back.

This same process continues at the level of the neocortex, which has the advantage of being the recipient of signals that have already been highly processed and analyzed (Buchwald et al. 1966; Edeline et al. 2010; Schwarz & Tomlinson 2010). Primary auditory neurons are especially responsive to the temporal sequencing of acoustic stimuli (Wollberg & Newman, 1972). It is in this manner, coupled with the capacity of auditory neurons to extract non-random sounds from noise, that language related sounds begin to be organized and recognized (e.g. Nelken et al., 2012; Schwarz & Tomlinson 2010).

For example, neurons located in the primary auditory cortex can determine and recognize differences and similarities between harmonic complex tones, and demonstrate auditory response patterns that vary in response to lower and higher frequencies and to specific tones (Nelken et al. 2012; Schwarz & Tomlinson 2010). Some display "tuning bandwidths" for pure tones, whereas others are able to identify up to seven components of harmonic complex tones. In this manner, pitch can also be discerned (e.g. Pantev et al. 1989).
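As a rough computational analogy, identifying the components of a harmonic complex tone amounts to finding the peaks of its spectrum at integer multiples of the fundamental. The sketch below, with an arbitrary 220 Hz fundamental and sample rate, synthesizes such a tone and recovers its seven components; it illustrates the signal-processing idea only, not the cortical mechanism.

```python
import numpy as np

FS = 16_000      # sample rate, Hz (illustrative)
F0 = 220.0       # fundamental frequency of the complex tone
N_HARMONICS = 7  # the text notes up to seven components can be identified

# Synthesize a 0.5 s harmonic complex tone with 1/k harmonic amplitudes.
t = np.arange(0, 0.5, 1 / FS)
tone = sum(np.sin(2 * np.pi * F0 * k * t) / k for k in range(1, N_HARMONICS + 1))

# Its spectrum peaks at integer multiples of F0.
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), 1 / FS)
peaks = sorted(round(freqs[i]) for i in np.argsort(spectrum)[-N_HARMONICS:])
print(peaks)  # [220, 440, 660, 880, 1100, 1320, 1540]
```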

Sustained Auditory Activity.

One of the main functions of the primary auditory neocortical receptive area appears to be the retention of sounds for brief time periods (up to a second) so that temporal and sequential features may be extracted and discrepancies in spatial location identified, i.e. so that we can determine from where a sound may have originated (see Mills & Rollman 1980). This prolonged activity presumably also allows for additional processing, and for comparisons to be made between sounds that were just previously received and those which are just arriving. Hence, as based on functional imaging, the left temporal lobe becomes increasingly active as word length increases (Price, 2007), due presumably to the increased processing necessary.

Moreover, via their sustained activity, these neurons are able to prolong (perhaps via a perseverating feedback loop with the thalamus) the duration of certain sounds so that they are more amenable to analysis--which may explain why activity increases in response to unfamiliar words and as word length increases (Price, 2007). In this manner, even complex sounds can be broken down into components which are then separately analyzed. Hence, sounds can be perceived as sustained temporal sequences. It is perhaps due to this attribute that injuries to the superior temporal lobe result in short-term auditory memory deficits as well as disturbances in auditory discrimination (Hauser, 2007; Heffner & Heffner, 1986).
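A minimal sketch of this "sustained activity" idea follows, under the assumption that incoming sound units of roughly 100 ms are held for about a second so that new arrivals can be compared against what was just heard. The class name, capacity, and unit size are hypothetical.

```python
from collections import deque

class SustainedBuffer:
    """Holds roughly the last second of incoming sound units so that
    temporal-sequential features can be compared and re-analyzed."""
    def __init__(self, capacity=10):  # e.g. 10 units of ~100 ms each
        self.window = deque(maxlen=capacity)

    def receive(self, unit):
        self.window.append(unit)      # the oldest unit decays once full

    def heard_recently(self, unit):
        """Was this unit received within the retained window?"""
        return unit in self.window

buf = SustainedBuffer()
for u in ("t", "k", "a"):
    buf.receive(u)
print(buf.heard_recently("k"))  # True: still held for comparison
```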

Although it is apparent that the auditory regions of both cerebral hemispheres are capable of discerning and extracting temporal-sequential rhythmic acoustics (Milner, 1962; Wollberg & Newman, 1972), the left temporal lobe apparently contains a greater concentration of neurons specialized for this purpose, as the left half of the brain is clearly superior in this capacity (Evers et al., 2012).

For example, the left hemisphere has been repeatedly shown to be specialized for sorting, separating, and extracting, in a segmented fashion, the phonetic and temporal-sequential or linguistic-articulatory features of incoming auditory information so as to identify speech units. It is also more sensitive than the right hemisphere to rapidly changing acoustic cues, be they verbal or non-verbal (Shankweiler & Studdert-Kennedy, 1967; Studdert-Kennedy & Shankweiler, 1970). Moreover, via dichotic listening tasks, the right ear (left temporal lobe) has been shown to be dominant for the perception of real words, word lists, numbers, backwards speech, morse code, consonants, consonant-vowel syllables, nonsense syllables, the transitional elements of speech, single phonemes, and rhymes (Blumstein & Cooper, 1974; Bryden, 1967; Cutting, 1974; Kimura, 1961; Kimura & Folb, 1968; Levy, 1974; Mills & Rollman, 1979; Papcun et al., 1974; Shankweiler & Studdert-Kennedy, 1966, 1967; Studdert-Kennedy & Shankweiler, 1970). In addition, as based on functional imaging, activity significantly increases in the left hemisphere during language tasks (Nelken et al., 2012; Nishimura et al., 2012), including reading (Binder et al., 1994; Price, 2007).

In part, the association of the left hemisphere and left temporal lobe with performing complex temporal-sequential and linguistic analysis is due to its interconnections with the inferior parietal lobule (see chapters 6, 11)--a structure which also becomes highly active when reading and naming (Bookheimer et al., 1995; Menard et al., 1996; Price, 2007; Vandenberghe et al., 1996) and which acts as a phonological storehouse that becomes activated during short-term memory and word retrieval (Demonet et al., 1994; Paulesu et al., 2013; Price, 2007).

As noted in chapters 6 and 11, the inferior parietal lobule is in part an outgrowth of the superior temporal lobe but also consists of somesthetic and visual neocortical tissue. The inferior parietal lobule also acts to impose temporal sequences on incoming auditory, as well as visual and somesthetic, stimuli, and serves to provide and integrate related associations (via its extensive interconnections with surrounding brain tissue), thus making complex and grammatically correct human language possible (chapters 5, 11).




However, the language capacities of the left temporal lobe are also made possible via feedback from "subcortical" auditory neurons, and via sustained (vs. diminished) activity and analysis. That is, because of these "feedback" loops, the importance and even the order of the sounds perceived can be changed, filtered, or heightened; an extremely important development in regard to the acquisition of human language (Joseph, 2013; Luria, 1980). In this manner, sound elements such as consonants, vowels, phonemes, and morphemes can be more readily identified, particularly within the auditory neocortex of the left half of the brain (Cutting 1974; Shankweiler & Studdert-Kennedy 1966, 1967; Studdert-Kennedy & Shankweiler, 1970).

For example, normally a single phoneme may be scattered over several neighboring units of sound, while a single sound segment may in turn carry several successive phonemes. Therefore a considerable amount of reanalysis, reordering, or filtering of these signals is required so that comprehension can occur (Joseph 2013). These processes, however, presumably occur at both the neocortical and old cortical level. In this manner a phoneme can be identified, extracted, analyzed, and placed in its proper category and temporal position (see the edited volume by Mattingly & Studdert-Kennedy, 1991 for related discussion).

Take, for example, three sound units, "t-k-a," which are transmitted to the superior temporal auditory receiving area. Via a feedback loop the primary auditory area can send any one of these units back to the thalamus, which again sends it back to the temporal lobe, thus amplifying the signal and/or allowing their order to be rearranged: "t-a-k," or "k-a-t." A multitude of such interactions are in fact possible, so that whole strings of sounds can be arranged or rearranged in a certain order (Joseph 2013). Mutual feedback characterizes most other neocortical-thalamic interactions as well, be it touch, audition, or vision (Brodal 1981; Carpenter 1991; Parent 1995).
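The "t-k-a" example can be made concrete with a short sketch in which each cortex-thalamus-cortex round trip moves one unit into place. This is a schematic analogy only; the function and the one-swap-per-loop rule are illustrative assumptions, not a model from the source.

```python
def feedback_rearrange(units, target_order):
    """Rearrange sound units via repeated 'replay': each iteration models
    one cortex -> thalamus -> cortex round trip that moves one unit."""
    units = list(units)
    for i, wanted in enumerate(target_order):
        j = units.index(wanted)                  # locate the unit to replay
        units[i], units[j] = units[j], units[i]  # one loop slots it into place
    return units

heard = ["t", "k", "a"]
print(feedback_rearrange(heard, ["t", "a", "k"]))  # ['t', 'a', 'k']
print(feedback_rearrange(heard, ["k", "a", "t"]))  # ['k', 'a', 't']
```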



[Figure: Activation of the temporal lobes in response to language]

Via these interactions and feedback loops, sounds can be repeated, their order can be rearranged, and the amplitude of specific auditory signals can be enhanced whereas others can be filtered out. It is in this manner, coupled with experience and learning (Edeline et al. 2010; Diamond & Weinberger 1989), that fine tuning of the nervous system occurs so that specific signals are attended to, perceived, processed, committed to memory, and so on. Indeed, a significant degree of plasticity in response to experience, as well as auditory filtering, occurs throughout the brain, not only in regard to sound but to visual and tactual information as well (Greenough et al. 1987; Hubel & Wiesel, 1970; Juliano et al. 1994). Moreover, the same process occurs when organizing words for expression.

This ability to perform so many operations on individual sound units has in turn greatly contributed to the development of human speech and language. For example, the ability to hear human speech requires that temporal resolutions occur at great speed so as to sort through the overlapping and intermixed signals being perceived. This requires that these sounds are processed in parallel, or stored briefly and then replayed in a different order so that discrepancies due to overlap in sounds can be adjusted for (see Mattingly & Studdert-Kennedy, 1991 for related discussion), and this is what occurs many times over via the interactions of the old cortex and the neocortex. Similarly, when speaking or thinking, sound units must also be arranged in a particular order so that what we say is comprehensible to others and so that we may understand our own verbal thoughts.

SPATIAL LOCALIZATION, ATTENTION & ENVIRONMENTAL SOUNDS

In conjunction with the inferior colliculus and the frontal lobe (Graziano et al., 2012), and due to bilateral auditory input, the primary auditory area plays a significant role in orienting to and localizing the source of various sounds (Sanchez-Longo & Forster, 1958); for example, by comparing time and intensity differences in the neural input from each ear. A sound arising from one's right will reach the right ear sooner, and sound louder there, than at the left ear.
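The time cue just described can be put in rough numbers. The sketch below uses Woodworth's classic far-field approximation of the interaural time difference; the head radius and source angles are illustrative assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, in air at ~20 C
HEAD_RADIUS = 0.0875    # m, approximate half-width of a human head

def interaural_time_difference(azimuth_deg):
    """Woodworth's far-field approximation: ITD = (a/c) * (theta + sin(theta)),
    with azimuth 0 = straight ahead, 90 = directly to one side."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 45, 90):
    print(f"azimuth {az:>2} deg -> ITD {interaural_time_difference(az) * 1e6:.0f} us")
```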

Indeed, among mammals, a considerable number of auditory neurons respond or become highly excited only in response to sounds from a particular location (Evans & Whitfield, 1968). Moreover, some of these neurons become excited only when the subject looks at the source of the sound (Hubel et al., 1959). Hence, these neurons act so that a sound's location may be identified and fixated upon. In addition to the frontal lobe (Graziano et al., 2012), these complex interactions probably involve the parietal area (7), as well as the midbrain colliculi and limbic system. As based on lesion studies in humans, the right temporal lobe is more involved than the left in discerning location (Nunn et al., 2012; Penfield & Evans, 1934; Ploner et al., 199; Shankweiler, 1961).

There is also some indication that certain cells in the auditory area are highly specialized and will respond only to certain meaningful vocalizations (Wollberg & Newman, 1972). In this regard they seem to be tuned to respond only to specific auditory parameters so as to identify and extract certain meaningful features, i.e. feature detector cells. For example, some cells will respond to cries of alarm and sounds suggestive of fear or indicating danger, whereas others will react only to specific sounds and vocalizations.

Nevertheless, although the left temporal lobe appears to be more involved in extracting certain linguistic features and differentiating between semantically related sounds (Schnider et al. 1994), the right temporal region is more adept at identifying and recognizing acoustically related sounds and non-verbal environmental acoustics (e.g. wind, rain, animal noises), prosodic-melodic nuances, sounds which convey emotional meaning, as well as most aspects of music, including tempo and meter (Heilman et al. 1975, 1984; Joseph, 1988a; Kester et al. 1991; Schnider et al. 1994; Spinnler & Vignolo 1966).





Indeed, the right temporal lobe's spatial-localization sensitivity, coupled with its ability to perceive and recognize environmental sounds, no doubt provided great survival value to early primitive man and woman. That is, in response to a specific sound (e.g. a creeping predator), one is able to immediately identify, localize, and fixate upon the source and thus take appropriate action. Of course, even modern humans rely upon the same mechanisms to avoid cars when walking across streets or riding bicycles, or to ascertain and identify approaching individuals, etc.

HALLUCINATIONS

Electrical stimulation of Heschl's gyrus produces elementary hallucinations (Penfield & Jasper, 1954; Penfield & Perot, 1963). These include buzzing, clicking, ticking, humming, whispering, and ringing, most of which are localized as coming from the opposite side of the room. Tumors involving this area also give rise to similar, albeit transient, hallucinations, including tinnitus (Brodal, 1981). Patients may complain that sounds seem louder and/or softer than normal, closer and/or more distant, strange, or even unpleasant (Hecaen & Albert, 1978). There is often a repetitive quality which makes the experience even more disagreeable.

In some instances the hallucination may become meaningful. These include the sound of footsteps, clapping hands, or music, most of which seem (to the patient) to have an actual external source.



Penfield and Perot (1963) report that electrical stimulation of the superior temporal gyrus, of the right temporal lobe in particular, results in musical hallucinations. Patients with tumors and seizure disorders, particularly those involving the right temporal region, may also experience musical hallucinations. Frequently the same melody is heard over and over. In some instances patients have reported hearing singing voices or individual instruments (Hecaen & Albert, 1978).

Conversely, it has been frequently reported that lesions or complete surgical destruction of the right temporal lobe significantly impairs the ability to name or recognize melodies and musical passages. It also disrupts time sense, the perception of timbre, loudness, and meter (Chase, 1967; Joseph, 1988a; Kester et al. 1991; Milner, 1962; Shankweiler, 1966).

Auditory verbal hallucinations may occur with either right or left temporal destruction or stimulation (Hecaen & Albert, 1978; Penfield & Perot, 1963; Tarachow, 1941), although left temporal involvement is predominant. The hallucination may involve single words, sentences, commands, advice, or distant conversations which can't quite be made out. According to Hecaen and Albert (1978), verbal hallucinations may precede the onset of an aphasic disorder, such as one due to a developing tumor or other destructive process. Patients may complain of hearing "distorted sentences," "incomprehensible words," etc.

CORTICAL DEAFNESS

In some instances, such as due to middle cerebral artery stroke, the primary auditory receiving areas of the right or left cerebral hemisphere and/or the underlying white matter fiber tracts may be destroyed. This results in a disconnection syndrome such that sounds relayed from the thalamus cannot be received or analyzed by the temporal lobe. In some cases, however, the strokes may be bilateral. When both primary auditory receiving areas and their subcortical axonal connections have been destroyed, the patient is said to be suffering from cortical deafness (Goodglass & Kaplan, 2012; Tanaka et al. 1991).

However, sounds continue to be processed subcortically, much as in cortical blindness. Hence the ability to hear sounds per se is retained. Nevertheless, since the sounds which are heard are not received neocortically and thus cannot be transmitted to the adjacent association areas, they become stripped of meaning. That is, meaning cannot be extracted or assigned, and the patient becomes agnosic for auditory stimuli (Albert et al. 1972; Schnider et al. 1994; Spreen et al. 1965); only differences in intensity are discernible.

When lesions are bilateral, patients cannot respond to questions, do not show startle responses to loud sounds, lose the ability to discern melody in music, cannot recognize speech or environmental sounds, and tend to experience the sounds they do hear as distorted and disagreeable, e.g. like the banging of tin cans, buzzing, roaring, etc. (Albert et al. 1972; Auerbach et al., 1982; Earnest et al. 1977; Kazui et al. 2010; Mendez & Geehan, 1988; Reinhold, 1950; Tanaka et al. 1991).

Commonly such patients also experience difficulty discriminating sequences of sound, detecting differences in temporal patterning, and determining sound duration, whereas intensity discriminations are better preserved. For example, when a 66-year-old right-handed male suffering from bitemporal subcortical lesions "woke up in the morning and turned on the television, he found himself unable to hear anything but buzzing noises. He then tried to talk to himself, saying 'This TV is broken.' However, his voice sounded only as noise to him. Although the patient could hear his wife's voice, he could not interpret the meaning of her speech. He was also unable to identify many environmental sounds" (Kazui et al. 2010, p. 477).

Nevertheless, individuals who are cortically deaf are not aphasic (e.g. Kazui et al. 2010). They can read, write, speak, comprehend pantomime, and are fully aware of their deficit. However, afflicted individuals may also display auditory inattention (Hecaen & Albert, 1978) and a failure to respond to loud sounds. Moreover, although not aphasic per se, their speech is sometimes noted to be hypophonic and contaminated by occasional literal paraphasias.

In some instances, rather than bilateral, a destructive lesion may be limited to the primary receiving area of just the right or left cerebral hemisphere. These patients are not considered cortically deaf. However, when the auditory receiving area of the left temporal lobe is destroyed, the patient suffers from a condition referred to as pure word deafness. If the lesion is in the right temporal receiving area, the patient is more likely to suffer a non-verbal auditory agnosia (Schnider et al. 1994).

PURE WORD DEAFNESS

With a destructive lesion involving the left auditory receiving area, Wernicke's area becomes disconnected from almost all sources of acoustic input, and patients are unable to recognize or perceive the sounds of language, be they sentences, single words, or even single letters (Hecaen & Albert, 1978). All other aspects of comprehension are preserved, including reading, writing, and expressive speech.

Moreover, if the right temporal region is spared, the ability to recognize musical and environmental sounds is preserved. However, the ability to name or verbally describe them is impaired --due to disconnection and the inability of the right hemisphere to talk.

Pure word deafness is most common with bilateral lesions, in which case environmental sound recognition is also affected (e.g. Kazui et al. 2010); in these instances the patient is considered cortically deaf. Pure word deafness is more common with bilateral lesions simply because they prevent any remaining intact auditory areas in the left hemisphere from receiving auditory input from the right temporal lobe via the corpus callosum.

Pure word deafness, when due to a unilateral lesion of the left temporal lobe, is partly a consequence of an inability to extract temporal-sequential features from incoming sounds. Hence, linguistic messages cannot be recognized. However, pure word deafness can sometimes be partly overcome if the patient is spoken to in an extremely slowed manner (Albert & Bear, 1974). The same is true of those with Wernicke's aphasia.



AUDITORY AGNOSIA

An individual with cortical deafness due to bilateral lesions suffers from a generalized auditory agnosia involving both words and non-linguistic sounds. However, in many instances an auditory agnosia with preserved perception of language may occur with lesions restricted to the right temporal lobe (Fujii et al., 2010). In these instances, an individual loses the capability to correctly discern environmental sounds (e.g. birds singing, doors closing, keys jangling) and acoustically related sounds, emotional-prosodic speech, as well as music (Nielsen, 1946; Ross, 2013; Samson & Zatorre, 2008; Schnider et al. 2014; Spreen et al. 1965; Zatorre & Halpern, 2013).

These problems are less likely to come to the attention of a physician unless accompanied by secondary emotional difficulties. That is, most individuals with this disorder, being agnosic, would not know that they have a problem and thus would not complain. If they or their families notice (for example, if a patient does not respond to a knock on the door), the likelihood is that the problem will be attributed to faulty hearing or even forgetfulness.

However, because such individuals may also have difficulty discerning emotional-melodic nuances, it is likely that they will misperceive and fail to comprehend a variety of paralinguistic social-emotional messages; a condition referred to as social-emotional agnosia (Joseph, 1988a) and phonagnosia (van Lancker et al., 2008). This includes difficulty correctly identifying the voices of loved ones or friends, or discerning what others may be implying, or in appreciating emotional and contextual cues, including variables such as sincerity or mirthful intonation. Hence, a host of behavioral difficulties may arise (see chapter 10).

For example, a patient may complain that his wife no longer loves him, and that he knows this from the sound of her voice. In fact, a patient may notice that the voices of friends and family sound in some manner different, which, when coupled with difficulty discerning nuances such as humor and friendliness, may lead to the development of paranoia and what appears to be delusional thinking. Indeed, unless appropriately diagnosed, it is likely that the patient's problem will feed upon and reinforce itself and grow more severe.

It is important to note that, rather than being completely agnosic or word deaf, patients may suffer from only partial deficits. In these instances they may seem to be hard of hearing, frequently misinterpret what is said to them, and/or slowly develop related emotional difficulties.



Copyright: 2000, 2006, 2010, 2018 - Rhawn Joseph, Ph.D.