IEEE Organizations related to Wernicke's Area

No organizations are currently tagged "Wernicke's Area"



Conferences related to Wernicke's Area

2023 Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC)

The conference program will consist of plenary lectures, symposia, workshops and invited sessions on the latest significant findings and developments in all the major fields of biomedical engineering. Submitted full papers will be peer reviewed. Accepted high-quality papers will be presented in oral and poster sessions, will appear in the Conference Proceedings and will be indexed in PubMed/MEDLINE.


2018 International Joint Conference on Neural Networks (IJCNN)

IJCNN is the flagship conference of the International Neural Network Society and the IEEE Computational Intelligence Society. It covers a wide range of topics in the field of neural networks, from biological neural network modeling to artificial neural computation.


2011 5th International Symposium on Computational Intelligence and Intelligent Informatics (ISCIII)

The conference encourages high-quality, innovative papers and is committed to facilitating encounters and exchanges among the top researchers in the field.



Periodicals related to Wernicke's Area

No periodicals are currently tagged "Wernicke's Area"


Most published Xplore authors for Wernicke's Area

Xplore Articles related to Wernicke's Area

Spatiotemporal Characteristics of Cortical Activities Associated with Articulation of Speech Perception

2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2018

Recently, brain-computer interface (BCI) technologies that control external devices with human brain signals have been developed. However, most BCI systems, such as the P300 speller, can only discriminate among options that have been given in advance. Therefore, the ability to decode the state of a person's perception and recognition, as well as that person's fundamental intention and emotions, from ...


Localization of event-related magnetic fields during speech processing by magnetoencephalography

Proceedings of 17th International Conference of the Engineering in Medicine and Biology Society, 1995

In this study, the authors investigated several approaches to localize cortical areas activated during speech processing. The event-related activation was elicited by visually presented stimuli. One paradigm consisted of written monosyllabic nouns, where the subjects were asked to read the words and imagine the described object. Another task was to silently name objects presented to them. The magnetic evoked ...


Investigation of relation between speech perception and production based on EEG source reconstruction

2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2015

The mirror neuron system has been investigated using functional magnetic resonance imaging (fMRI). Activation of Broca's area and the premotor cortex (PMC), both related to speech production, has been observed during speech perception, suggesting a mirroring mechanism. However, it is not clear how mirror neurons function between speech production and perception. This study attempts to investigate ...


Multiclass Classification of Word Imagination Speech With Hybrid Connectivity Features

IEEE Transactions on Biomedical Engineering, 2018

Objective: In this study, electroencephalography data of imagined words were classified using four different feature extraction approaches. Eight subjects were recruited for the recording of imagination with five different words, namely: “go,” “back,” “left,” “right,” and “stop.” Methods: One hundred trials for each word were recorded for both imagination and perception, although this study utilized only imagination data. Two different ...


fMRI Evidence for Cortical Modification during Learning of Mandarin Lexical Tone

Journal of Cognitive Neuroscience, 2003

Functional magnetic resonance imaging was employed before and after six native English speakers completed lexical tone training as part of a program to learn Mandarin as a second language. Language-related areas including Broca's area, Wernicke's area, auditory cortex, and supplementary motor regions were active in all subjects before and after training and did not vary in average location. Across all ...


More Xplore Articles

Educational Resources on Wernicke's Area

IEEE-USA E-Books

  • Spatiotemporal Characteristics of Cortical Activities Associated with Articulation of Speech Perception

    Recently, brain-computer interface (BCI) technologies that control external devices with human brain signals have been developed. However, most BCI systems, such as the P300 speller, can only discriminate among options that have been given in advance. Therefore, the ability to decode the state of a person's perception and recognition, as well as that person's fundamental intention and emotions, from cortical activity is needed to develop a more general-use BCI system. In this study, two experiments were conducted. First, articulations were measured for Japanese monosyllabic utterances masked by several levels of noise. Second, auditory brain magnetic fields evoked by the monosyllable stimuli used in the first experiment were recorded, and neuronal current sources were localized in regions associated with speech perception and recognition: the auditory cortex (BA41), Wernicke's area (posterior part of BA22), Broca's area (BA44/45), and the motor (BA4) and premotor (BA6) areas. Although the source intensity did not systematically change with SNR, the peak latency changed with SNR in the posterior superior temporal gyrus of the right hemisphere. The results suggest that information associated with articulation is processed in this area.

  • Localization of event-related magnetic fields during speech processing by magnetoencephalography

    In this study, the authors investigated several approaches to localize cortical areas activated during speech processing. The event-related activation was elicited by visually presented stimuli. One paradigm consisted of written monosyllabic nouns, where the subjects were asked to read the words and imagine the described object. Another task was to silently name objects presented to them. The magnetic evoked activity of 13 normal subjects (9 right-handed, 4 left-handed) was recorded with the 37-channel biomagnetic system KRENIKON® (Siemens). The time course of the averaged evoked signals showed high inter-individual variability. However, they had two features in common: a wave starting at approximately 350 ms and a wave starting at 500 ms after stimulus onset. In cases with a high signal-to-noise ratio, the cortical representations of these waves were found in Wernicke's area and Broca's area, respectively.

  • Investigation of relation between speech perception and production based on EEG source reconstruction

    The mirror neuron system has been investigated using functional magnetic resonance imaging (fMRI). Activation of Broca's area and the premotor cortex (PMC), both related to speech production, has been observed during speech perception, suggesting a mirroring mechanism. However, it is not clear how mirror neurons function between speech production and perception. This study investigates the function of mirror neurons by exploiting the high temporal resolution of electroencephalography (EEG). Participants read Chinese material on a screen, then heard the same material read through an earphone, and finally judged the consistency of the two stimuli. Source reconstruction of the high-density EEG signals revealed that Wernicke's area activated before Broca's area and the PMC during the speech perception tasks. The results are also consistent with a mirror neuron system: speech production-related regions are active during speech perception tasks.

  • Multiclass Classification of Word Imagination Speech With Hybrid Connectivity Features

    Objective: In this study, electroencephalography data of imagined words were classified using four different feature extraction approaches. Eight subjects were recruited for the recording of imagination with five different words, namely: “go,” “back,” “left,” “right,” and “stop.” Methods: One hundred trials for each word were recorded for both imagination and perception, although this study utilized only imagination data. Two different connectivity methods were applied, namely a covariance-based and a maximum linear cross-correlation-based connectivity measure. These connectivity measures were further computed to extract the phase-only data as an additional method of feature extraction. In addition, four different channel selections were used. The final connectivity matrix from each of the four methods was vectorized and used as the feature vector for the classifier. To classify EEG data, a sigmoid activation function-based linear extreme learning machine was used. Result and Significance: We achieved a maximum classification rate of 40.30% (p < 0.007) and 87.90% (p < 0.003) in multiclass (five classes) and binary settings, respectively. Thus, our results suggested that EEG responses to imagined speech could be successfully classified using an extreme learning machine. Conclusion: This study involving the classification of imagined words can be a milestone contribution toward the development of practical brain-computer interface systems using silent speech. (A minimal illustrative code sketch of this connectivity-feature and extreme-learning-machine pipeline appears after this list.)

  • fMRI Evidence for Cortical Modification during Learning of Mandarin Lexical Tone

    Functional magnetic resonance imaging was employed before and after six native English speakers completed lexical tone training as part of a program to learn Mandarin as a second language. Language-related areas including Broca's area, Wernicke's area, auditory cortex, and supplementary motor regions were active in all subjects before and after training and did not vary in average location. Across all subjects, improvements in performance were associated with an increase in the spatial extent of activation in left superior temporal gyrus (Brodmann's area 22, putative Wernicke's area), the emergence of activity in adjacent Brodmann's area 42, and the emergence of activity in right inferior frontal gyrus (Brodmann's area 44), a homologue of putative Broca's area. These findings demonstrate a form of enrichment plasticity in which the early cortical effects of learning a tone-based second language involve both expansion of preexisting language-related areas and recruitment of additional cortical regions specialized for functions similar to the new language functions.

  • Transcranial Direct Current Stimulation Improves Word Retrieval in Healthy and Nonfluent Aphasic Subjects

    A number of studies have shown that modulating cortical activity by means of transcranial direct current stimulation (tDCS) affects the performance of both healthy and brain-damaged subjects. In this study, we investigated the potential of tDCS to enhance associative verbal learning in 10 healthy individuals and to improve word retrieval deficits in three patients with stroke-induced aphasia. In healthy individuals, tDCS (20 min, 1 mA) was applied over Wernicke's area (position CP5 of the International 10–20 EEG System) while they learned 20 new “words” (legal nonwords arbitrarily assigned to 20 different pictures). The healthy subjects participated in a randomized counterbalanced double-blind procedure in which they were subjected to one session of anodic tDCS over left Wernicke's area, one sham session over this location and one session of anodic tDCS stimulating the right occipito-parietal area. Each experimental session was performed during a different week (over three consecutive weeks) with 6 days of intersession interval. Over 2 weeks, three aphasic subjects participated in a randomized double-blind experiment involving intensive language training for their anomic difficulties in two tDCS conditions. Each subject participated in five consecutive daily sessions of anodic tDCS (20 min, 1 mA) and sham stimulation over Wernicke's area while they performed a picture-naming task. By the end of each week, anodic tDCS had significantly improved their accuracy on the picture-naming task. Both normal subjects and aphasic patients also had shorter naming latencies during anodic tDCS than during the sham condition. At two follow-ups (1 and 3 weeks after the end of treatment), performed only in two aphasic subjects, response accuracy and reaction times were still significantly better in the anodic than in the sham condition, suggesting a long-term effect on recovery of their anomic disturbances.

  • Spatiotemporal human brain activities on recalling 4-legged mammal and fruit names

    The authors measured electroencephalograms (EEGs) from subjects observing images of 4-legged mammals and/or fruits and silently recalling their names. The equivalent current dipole source localization (ECDL) method was applied to the induced event-related potentials (ERPs), i.e., the averaged EEGs. The equivalent current dipoles (ECDs) were localized to the primary visual area V1 around 100 ms, to the ventral pathway (TE) around 270 ms, and to the parahippocampal gyrus (ParaHip) around 380 ms. ECDs were then localized to Broca's area around 450 ms, to the fusiform gyrus (FuG) around 600 ms, and again to Broca's area around 760 ms. According to previous research, the process of memory search and preservation is presumed to take place in the ParaHip. From the results of the present experiment, the authors supposed that both long-shaped and round-shaped visual stimuli are processed by Wernicke's area, but only long-shaped stimuli pass through the angular gyrus (AnG) before arriving at Wernicke's area.

  • Spatiotemporal Human Brain Activities by Visual Stimulus of Directional Characters and Symbols

    To investigate brain activity during human recognition of characters and symbols with directional meanings, the authors recorded electroencephalograms (EEGs) from subjects viewing four types of kanji (Chinese characters currently used in the Japanese language) and four arrows presented on a CRT, indicating the directions upward, downward, leftward and rightward. As a result, the reaction time for each direction was almost equal whether characters or arrows were presented, regardless of the direction. However, for all directions, the peak latency for characters was longer than that for arrows. The peak latency for the characters meaning upward or downward was slightly shorter than that for the other characters. EEGs were averaged for each stimulus type, and event-related potentials (ERPs) were determined. Tendencies in the ERPs were compared, and marked changes in amplitude were seen near a latency of 420 ms for the characters meaning upward and downward and near 500 ms for those meaning leftward and rightward. Marked changes in amplitude were seen near a latency of 500 ms for all arrow symbols. When comparing ERPs between kanji characters and arrow symbols, differences in latency were noted, as were similarities in marked amplitude changes. When comparing ERPs between kanji characters and arrow symbols with opposing meanings, peak latencies for marked amplitude changes were predominantly similar, but polarities were opposite. The peak latencies of the ERPs were subjected to equivalent current dipole source localization (ECDL). ECD was estimated at a latency of around 110 ms in the MT field and then around 300 ms in the precentral gyrus. No marked differences in this tendency were noted among the eight stimuli. After ECD was estimated in the precentral gyrus, with the kanji characters, ECD was estimated in the right middle temporal gyrus regardless of direction. ECD was then estimated in areas related to language, such as Wernicke's area in the left middle temporal gyrus, the left angular gyrus and the left lingual gyrus. ECD was later estimated in the left middle frontal gyrus, left inferior frontal gyrus and prefrontal area. ECD was estimated in the precentral gyrus just before the amplitude of the ERPs changed markedly. With arrow symbols, ECD was estimated in the right middle temporal gyrus, and then in areas related to working memory for spatial perception, such as the right inferior or right middle frontal gyrus. Then, as with the kanji characters, ECD was estimated in the prefrontal area and precentral gyrus.

  • Reading Speech from Still and Moving Faces: The Neural Substrates of Visible Speech

    Speech is perceived both by ear and by eye. Unlike heard speech, some seen speech gestures can be captured in stilled image sequences. Previous studies have shown that in hearing people, natural time-varying silent seen speech can access the auditory cortex (left superior temporal regions). Using functional magnetic resonance imaging (fMRI), the present study explored the extent to which this circuitry was activated when seen speech was deprived of its time-varying characteristics. In the scanner, hearing participants were instructed to look for a prespecified visible speech target sequence (“voo” or “ahv”) among other monosyllables. In one condition, the image sequence comprised a series of stilled key frames showing apical gestures (e.g., separate frames for “v” and “oo” [from the target] or “ee” and “m” [i.e., from nontarget syllables]). In the other condition, natural speech movement of the same overall segment duration was seen. In contrast to a baseline condition in which the letter “V” was superimposed on a resting face, stilled speech face images generated activation in posterior cortical regions associated with the perception of biological movement, despite the lack of apparent movement in the speech image sequence. Activation was also detected in traditional speech-processing regions including the left inferior frontal (Broca's) area, left superior temporal sulcus (STS), and left supramarginal gyrus (the dorsal aspect of Wernicke's area). Stilled speech sequences also generated activation in the ventral premotor cortex and anterior inferior parietal sulcus bilaterally. Moving faces generated significantly greater cortical activation than stilled face sequences, and in similar regions. However, a number of differences between stilled and moving speech were also observed. In the visual cortex, stilled faces generated relatively more activation in primary visual regions (V1/V2), while visual movement areas (V5/MT+) were activated to a greater extent by moving faces. Cortical regions activated more by naturally moving speaking faces included the auditory cortex (Brodmann's Areas 41/42; lateral parts of Heschl's gyrus) and the left STS and inferior frontal gyrus. Seen speech with normal time-varying characteristics appears to have preferential access to “purely” auditory processing regions specialized for language, possibly via acquired dynamic audiovisual integration mechanisms in STS. When seen speech lacks natural time-varying characteristics, access to speech-processing systems in the left temporal lobe may be achieved predominantly via action-based speech representations, realized in the ventral premotor cortex.

  • An unsupervised learning method for representing simple sentences

    A recent neurocomputational study showed that it is possible for a model of the language areas of the brain (Wernicke's area, Broca's area, etc.) to learn to process words correctly. This model is unique in that it is a neuroanatomically based model of word learning derived from the Wernicke-Lichtheim-Geschwind theory of language processing. For example, when subjected to simulated focal damage, the model breaks down in ways reminiscent of the classic aphasias. While such results are intriguing, this previous work was limited to processing only single words: nouns corresponding to concrete objects. Here we take the first steps towards generalizing the methods used in this earlier model to work with full sentences instead of isolated words. We gauge the richness of the neural representations that emerge during purely unsupervised learning in several ways. For example, using a separate “recognition network”, we demonstrate that the model's encoding of sentences is adequate to permit subsequent extraction of a symbolic, hierarchical representation of sentence meaning. Although our results are encouraging, substantial further work will be needed to create a large-scale model of the human cortical network for language.
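
The "Multiclass Classification of Word Imagination Speech With Hybrid Connectivity Features" entry above outlines a pipeline: compute a channel-by-channel connectivity matrix for each imagined-speech EEG trial, vectorize it, and feed the result to a sigmoid extreme learning machine (ELM). The Python sketch below is a minimal illustration of that general pipeline under stated assumptions, not the authors' code: the covariance-based feature, the hidden-layer size, and the synthetic random data standing in for EEG recordings are all placeholders, so accuracy on this data will sit near chance.

    """Illustrative sketch: covariance connectivity features + a minimal sigmoid ELM."""
    import numpy as np

    rng = np.random.default_rng(0)

    def covariance_features(trial):
        """trial: (channels, samples) EEG array -> vectorized upper triangle of the
        channel-by-channel covariance matrix (one connectivity feature vector)."""
        cov = np.cov(trial)                                    # (channels, channels)
        return cov[np.triu_indices_from(cov)]                  # keep upper triangle

    class SigmoidELM:
        """Single-hidden-layer ELM: random, fixed input weights with sigmoid units;
        output weights solved in closed form by least squares (pseudo-inverse)."""
        def __init__(self, n_hidden=300, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def _hidden(self, X):
            return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

        def fit(self, X, y):
            self.classes_ = np.unique(y)
            T = (y[:, None] == self.classes_[None, :]).astype(float)   # one-hot targets
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            self.beta = np.linalg.pinv(self._hidden(X)) @ T            # output weights
            return self

        def predict(self, X):
            return self.classes_[np.argmax(self._hidden(X) @ self.beta, axis=1)]

    # Synthetic stand-in: 5 imagined words x 100 trials, 64 channels, 256 samples each.
    n_words, n_trials, n_ch, n_samp = 5, 100, 64, 256
    X = np.stack([covariance_features(rng.normal(size=(n_ch, n_samp)))
                  for _ in range(n_words * n_trials)])
    y = np.repeat(np.arange(n_words), n_trials)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)         # standardize features

    # Simple hold-out split; a real analysis would cross-validate within each subject.
    idx = rng.permutation(len(y))
    train, test = idx[:400], idx[400:]
    clf = SigmoidELM().fit(X[train], y[train])
    print("accuracy:", np.mean(clf.predict(X[test]) == y[test]))

The closed-form output layer is the defining ELM design choice here: because the hidden layer is random and never trained, fitting reduces to a single pseudo-inverse instead of iterative backpropagation, which keeps retraining a per-word imagined-speech classifier cheap.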



Standards related to Wernicke's Area

No standards are currently tagged "Wernicke's Area"


Jobs related to Wernicke's Area
