143 resources related to Multisensory Integration
- Topics related to Multisensory Integration
- IEEE Organizations related to Multisensory Integration
- Conferences related to Multisensory Integration
- Periodicals related to Multisensory Integration
- Most published Xplore authors for Multisensory Integration
The conference program will consist of plenary lectures, symposia, workshops and invited sessions on the latest significant findings and developments in all the major fields of biomedical engineering. Submitted papers will be peer reviewed. Accepted high-quality papers will be presented in oral and poster sessions, will appear in the Conference Proceedings and will be indexed in PubMed/MEDLINE.
The CDC is the premier conference dedicated to the advancement of the theory and practice of systems and control. The CDC annually brings together an international community of researchers and practitioners in the field of automatic control to discuss new research results, perspectives on future developments, and innovative applications relevant to decision making, automatic control, and related areas.
IEEE/SICE SII is the premier symposium series presenting the state of the art and future perspectives of system integration, where industry experts, researchers, and academics share ideas and experiences surrounding frontier technologies, breakthroughs, and innovative solutions and applications. The 2020 IEEE/SICE International Symposium on System Integration (SII 2020) will be the 12th symposium on system integration. System integration is one of the key technologies, and the integration of hardware and software is especially important for solving industrial and social system problems in the new century. This symposium focuses on new research and industrial applications of system integration, and discusses approaches to improving the effectiveness of system integration.
to be scoped
Haptic devices enable human-machine interaction through the senses of force and touch. World Haptics is the premier international conference addressing all aspects related to haptics, covering the basic scientific underpinnings, technological developments, and algorithms and applications.
IEEE Computer Graphics and Applications (CG&A) bridges the theory and practice of computer graphics. From specific algorithms to full system implementations, CG&A offers a strong combination of peer-reviewed feature articles and refereed departments, including news and product announcements. Special Applications sidebars relate research stories to commercial development. Cover stories focus on creative applications of the technology by an artist or ...
Methods, algorithms, and human-machine interfaces for physical and logical design, including: planning, synthesis, partitioning, modeling, simulation, layout, verification, testing, and documentation of integrated-circuit and systems designs of all complexities. Practical applications of aids resulting in producible analog, digital, optical, or microwave integrated circuits are emphasized.
Telemedicine, teleradiology, telepathology, telemonitoring, telediagnostics, 3D animations in health care, health information networks, clinical information systems, virtual reality applications in medicine, broadband technologies, and global information infrastructure design for health care.
Measurements and instrumentation utilizing electrical and electronic techniques.
Synergetic integration of mechanical engineering with electronic and intelligent computer control in the design and manufacture of industrial products and processes. A primary purpose is to have an archival publication which will encompass both theory and practice. Papers will be published which disclose significant new knowledge needed to implement intelligent mechatronics systems, from analysis and ...
2018 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), 2018
2011 International Conference on System Science, Engineering Design and Manufacturing Informatization, 2011
2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video, 2009
2012 19th Iranian Conference of Biomedical Engineering (ICBME), 2012
2015 7th International IEEE/EMBS Conference on Neural Engineering (NER), 2015
Heterogeneous Integration Roadmap
Grid Integration Systems and Mobility with Keynote Sila Kiliccote - IEEE WIE ILC 2017
Dr. Bernd Kosch on Industrie 4.0 and manufacturing - WF-IoT 2015
28nm CMOS Wireless Connectivity Combo IC - Chia-Hsin Wu - RFIC Showcase 2018
IRDS: More Moore Outbrief - Mustafa Badaroglu at INC 2019
KeyTalk with Ljubisa Stevanovic: From SiC MOSFET Devices to MW-scale Power Converters - APEC 2017
Nanophotonic Devices for Quantum Information Processing: Optical Computing - Carsten Schuck at INC 2019
EMBC 2011-Speaker Highlights-Ron Newbower, PhD
Kostas Kontogiannis: IoT Systems Delivery and Deployment Agility - Research Challenges and Issues: WF IoT 2016
3D Printing for Sensor Platform Integration - Benjamin Ingis - IEEE EMBS at NIH, 2019
Advanced Packaging and Energy as Integration to Reboot Computing: IEEE Rebooting Computing 2017
Security for SDN/NFV and 5G Networks - Ashutosh Dutta - India Mobile Congress, 2018
Technology for Health Summit 2017 - Welcome & Summit Opening
Toward Cognitive Integration of Prosthetic Devices - IEEE WCCI 2014
CASS Lecture by Dr. Chris Hull, "Millimeter-Wave Power Amplifiers in FinFET Technology"
Levente Klein: Drone-based Reconstruction for 3D Geospatial Data Processing: WF-IoT 2016
Panel: Integrating POC Testing for HLBS Diseases into Clinical Care - IEEE EMBS at NIH, 2019
Merge Network for a Non-Von Neumann Accumulate Accelerator in a 3D Chip - Anirudh Jain - ICRC 2018
A Historical Perspective on Computational Intelligence in N-player Games - IEEE WCCI 2014
This paper introduces two multisensory object classification models inspired by multisensory integration in the brain. The "feature-integrating" model emulates integration in the sub-cortical superior colliculus by combining unisensory features which are subsequently classified by a multisensory classifier. In the "decision-integrating" model, which emulates integration in primary cortical areas, the unisensory stimuli are first classified independently by unisensory classifiers and the results are combined and classified by a multisensory classifier. The models are implemented using multilayer perceptron classifiers. Through several sets of experiments involving the classification of auditory and visual representations of ten digits, it is shown that the multisensory classification systems demonstrate the "inverse effectiveness principle" by yielding significantly higher classification accuracies when compared with those of the unisensory classifiers. Furthermore, the flexibility offered by the generalized models makes it possible to simulate and evaluate various combinations of multi-modal stimuli and classifiers under varying uncertainty conditions.
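The two architectures above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it uses synthetic two-class "auditory" and "visual" features and simple softmax classifiers in place of the paper's multilayer perceptrons and ten-digit stimuli; all names and parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_softmax(X, y, n_classes, epochs=200, lr=0.5):
    # Plain softmax regression, standing in for the paper's MLP classifiers.
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]
    for _ in range(epochs):
        Z = X @ W
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (P - Y) / len(X)
    return W

def predict_proba(W, X):
    Z = X @ W
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    return P / P.sum(axis=1, keepdims=True)

# Synthetic unisensory features: class means 0 and 1, unit noise.
n, d = 200, 4
y = rng.integers(0, 2, n)
Xa = rng.normal(y[:, None], 1.0, (n, d))   # noisy "auditory" features
Xv = rng.normal(y[:, None], 1.0, (n, d))   # noisy "visual" features

# Feature-integrating model: concatenate unisensory features,
# then train one multisensory classifier on the combined vector.
Wf = train_softmax(np.hstack([Xa, Xv]), y, 2)
acc_feat = (predict_proba(Wf, np.hstack([Xa, Xv])).argmax(1) == y).mean()

# Decision-integrating model: classify each modality independently,
# then combine the unisensory posteriors (here by a simple product).
Wa = train_softmax(Xa, y, 2)
Wv = train_softmax(Xv, y, 2)
post = predict_proba(Wa, Xa) * predict_proba(Wv, Xv)
acc_dec = (post.argmax(1) == y).mean()
```

Both fused accuracies should beat what either noisy modality yields alone, which is the qualitative effect the abstract describes; the product combination is one simple fusion rule, not the one used in the paper.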
In real bio-systems, animals can combine multiple independent sources of information (sensory cues) to reduce uncertainty and improve perceptual performance. Despite intense recent interest in cue integration, the underlying neural mechanisms remain unclear. The continuous attractor neural network (CANN) can be interpreted as an efficient framework for implementing population coding and decoding. In this work, we show that the CANN can account for many empirical principles in multisensory integration. Viewed from single-neuron behavior, the CANN model can account for the principle of inverse effectiveness and the spatial principle. From the perspective of the activities of populations of neurons, the CANN can account for the mathematical rule by which multisensory neurons combine their inputs with respect to different cue reliabilities.
The stream/bounce effect is an example of audio/visual interaction in which two identical luminance-defined targets in a 2-D display move toward one another from opposite sides of a display, coincide, and continue past one another along collinear trajectories. The targets can be perceived to either stream past or bounce off one another. Streaming is the dominant perception in visual-only displays, while bouncing predominates when an auditory transient tone is presented at the point of coincidence. We extended previous findings on audio/visual interactions, using 3-D displays, and found the following two points. First, the sound-induced bias towards bouncing persists in spite of the introduction of spatial offsets in depth between the trajectories, which reduce the probability of motion reversals. Second, audio/visual interactions are similar for luminance-defined and disparity-defined displays, indicating that audio/visual interaction occurs at or beyond the visual processing stage where disparity-defined form is recovered.
The human brain combines multiple sources of sensory information to form a coherent, unified percept. The central nervous system (CNS) estimates the effector's position by integrating sensory information (vision and proprioception) to perform a movement, for example reaching for a cup. Different models explain this phenomenon. A disadvantage of mathematical models such as Bayesian inference is that they are not based on neural mechanisms, so models such as population codes have been proposed. Neural models exist for situations in which the sensory stimuli come from one source, but for situations in which the sensory stimuli are far apart, no neural model has yet been suggested. The purpose of this study is to propose a neural network model for this situation. The model is inspired by neuroimaging findings. In the model, two populations of neurons code the positions of visual and proprioceptive sensory stimuli in a multilayer recurrent neural network, and the two populations are interconnected. In this way the model can simulate the effect of sensory attention. The model was tested with behavioral experiments that are explained briefly in this paper.
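The Bayesian account of visual-proprioceptive integration mentioned above has a standard closed form for Gaussian cues: the fused estimate is an inverse-variance-weighted average, and the fused variance is smaller than either cue's. A minimal sketch (the specific numbers are illustrative, not from the paper):

```python
def integrate_cues(mu_v, var_v, mu_p, var_p):
    """Reliability-weighted fusion of a visual and a proprioceptive
    position estimate, assuming independent Gaussian noise on each cue.
    The weight on each cue is its inverse variance (its reliability)."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_p)
    mu = w_v * mu_v + (1 - w_v) * mu_p
    var = 1 / (1 / var_v + 1 / var_p)  # fused variance < min(var_v, var_p)
    return mu, var

# Vision is four times more reliable here, so it dominates the estimate.
mu, var = integrate_cues(mu_v=10.0, var_v=1.0, mu_p=14.0, var_p=4.0)
# mu -> 10.8 (closer to the visual estimate), var -> 0.8
```

This is the computation that population-code models, including the one proposed in the paper, aim to reproduce with neural mechanisms rather than explicit arithmetic.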
Clinical telegaming integrates telecare and videogaming to enable a more convenient and enjoyable experience for patients when providers diagnose, monitor, and treat a variety of health problems via web-enabled telecommunications. In recent years, clinical telegaming systems have been applied to physical therapy and rehabilitation, evaluation of mental health, and prevention and management of obesity and diabetes. Parkinson's disease (PD) is suitable for development of new clinical telegaming applications because PD patients are known to experience motor symptoms that can be improved by physical therapy. Recent research suggests that sensory processing deficits may also play an important role in these motor impairments because successful motor function requires multisensory integration. In this paper, we describe a new web-enabled software system that uses clinical telegaming to evaluate and improve multisensory integration ability in users. This software has the potential to be used in diagnostic and therapeutic telegaming for PD patients.
Alzheimer's disease (AD) is the most common form of dementia among older people, and the number of dementia cases is increasing rapidly as populations age. A previous study showed that brain glucose metabolism differed between healthy people and Alzheimer's patients during the response to passive audiovisual stimulation. In the present study, we designed a discrimination task in which unimodal visual stimuli, unimodal auditory stimuli and bimodal audiovisual stimuli were randomly presented on the left or right side of a screen, and investigated the multisensory integration of peripherally presented audiovisual stimuli in older adults using event-related potentials. Two periods of significant interaction were found: over the right occipital area at 160-200 ms after stimulus presentation and over the left fronto-central area at 300-600 ms after stimulus presentation. Our results identified multisensory integration brain activity in healthy older adults. In a future study, we will examine audiovisual integration in patients with Alzheimer's disease using this experimental design, in the hope of finding an effective method for early detection of Alzheimer's disease.
Many studies evaluate multisensory integration to investigate how the human brain combines sensory information and weights each sense in different situations. The effect of sensory training on multisensory integration, however, has been the subject of little research. The purpose of the present study is to evaluate and compare the effect of training each modality, i.e., vision and proprioception, on multisensory integration. To this end, an experiment was designed in which subjects were asked to move their hand in a circle and estimate its position using a newly designed setup. The experiment was performed on 8 subjects with proprioceptive training and 8 subjects with visual training. The results make three important points: 1) the learning rate for vision is significantly higher than that for proprioception; 2) mean visual and proprioceptive errors both decrease with training, but statistical analysis shows the decrease is significant for proprioceptive error and non-significant for visual error; and 3) visual errors during the training phase, even at its beginning, are much smaller than during the main test stage, because in the main test subjects have to attend to two senses.
Humans experience the self as localized within their body. This aspect of bodily self-consciousness can be experimentally manipulated by exposing individuals to conflicting multisensory input, or can be abnormal following focal brain injury. Recent technological developments have helped to unravel some of the mechanisms underlying multisensory integration and self-location, but the neural underpinnings are still under investigation, and the manual application of stimuli has resulted in large variability that is difficult to control. This paper presents the development and evaluation of an MR-compatible stroking device capable of presenting moving tactile stimuli to both legs and the back of participants lying on a scanner bed while functional neuroimaging data are acquired. The platform consists of four independent stroking devices with a travel of 16-20 cm and a maximum stroking velocity of 15 cm/s, actuated by non-magnetic ultrasonic motors. Complemented with virtual reality, this setup provides a unique research platform for investigating multisensory integration and its effects on self-location under well-controlled experimental conditions. The MR-compatibility of the system was evaluated in both a 3 and a 7 Tesla scanner and showed negligible interference with brain imaging. In a preliminary study using a prototype device with only one tactile stimulator, fMRI data acquired on 12 healthy participants showed visuo-tactile synchrony-related and body-specific modulations of brain activity in the bilateral temporoparietal cortex.
In this paper, we employ the measurement of phase synchrony to investigate the activity of multisensory integration during cognitive processing. The approach exploits a complex Morlet wavelet to quantify frequency-specific synchronization (i.e., the transient phase-locking value, PLV) between two neuroelectric signals at a predefined frequency range. The motivation for its development is to investigate the role of neural synchronies as a putative mechanism for neural integration during cognitive tasks. Unlike the more traditional method of spectral coherence, the PLV separates the phase and amplitude components and can be directly interpreted in the framework of neural integration. We apply the PLV to electrode recordings from subjects performing a visual and auditory discrimination task. We find different synchronies in brain regions between unisensory and multisensory stimulus conditions. We argue that phase synchrony effects do reflect cognitive processing. The results obtained indicate a possible relevance of this Morlet-wavelet-based method in cognitive science as well as in other fields.
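The PLV computation described above can be sketched with numpy alone. This is a simplified, hedged illustration: the sampling rate, target frequency, wavelet width, and test signals are assumed values, and the PLV here is averaged over time for a single trial, whereas experimental work typically averages the phase difference over trials.

```python
import numpy as np

fs, f0, n_cycles = 250.0, 10.0, 7   # sampling rate (Hz), target frequency, wavelet width: assumed
t = np.arange(-1, 1, 1 / fs)
sigma = n_cycles / (2 * np.pi * f0)
# Complex Morlet wavelet: a complex exponential under a Gaussian envelope.
wavelet = np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma**2))

def band_phase(x):
    # Instantaneous phase at f0, from convolution with the complex wavelet.
    return np.angle(np.convolve(x, wavelet, mode="same"))

def plv(x, y):
    # Phase-locking value: modulus of the mean unit phase-difference vector.
    # 1 = perfectly locked phases, 0 = uniformly scattered phase differences.
    return np.abs(np.mean(np.exp(1j * (band_phase(x) - band_phase(y)))))

# Two 10 Hz signals with a fixed phase lag lock tightly; noise does not.
tt = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 10 * tt) + 0.1 * rng.standard_normal(tt.size)
y = np.sin(2 * np.pi * 10 * tt + 0.8) + 0.1 * rng.standard_normal(tt.size)
noise = rng.standard_normal(tt.size)

locked, unlocked = plv(x, y), plv(x, noise)
```

The separation of phase from amplitude is visible in the code: only `np.angle` of the wavelet-filtered signal enters the PLV, so amplitude fluctuations cannot inflate the synchrony estimate, unlike spectral coherence.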
A neural mass model of interacting macro-columns is stimulated to reproduce unisensory (auditory and visual) evoked potentials and multisensory (concurrent audiovisual) evoked potentials. These were elicited from patients performing a reaction-response task and recorded from intracranial electrodes placed on the parietal lobe. Important features of this model include inhibitory and excitatory feedback connections to pyramidal cells and extrinsic input to the stellate cell pool, with provision for hierarchical positioning depending on extrinsic connections. Both auditory and visually evoked potentials were best fit using a top-down paradigm. The multisensory response reconstructed from the constituent models was then compared to the actual multisensory EP. Fitting the multisensory response from the constituent models to the actual response required no significant changes to the architecture but did require a decrease in top-down feedback delay. This suggests that multisensory integration, and its related improvement in reaction behavior, is not an automatic process but is instead controlled by a central executive function.
No standards are currently tagged "Multisensory Integration"