Conferences related to Emotion recognition


2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

HRI is a highly selective annual conference that showcases the very best research and thinking in human-robot interaction. HRI is inherently interdisciplinary and multidisciplinary, reflecting work from researchers in robotics, psychology, cognitive science, HCI, human factors, artificial intelligence, organizational behavior, anthropology, and many other fields.

  • 2018 13th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    HRI is a highly selective annual conference that showcases the very best research and thinking in human-robot interaction. HRI is inherently interdisciplinary and multidisciplinary, reflecting work from researchers in robotics, psychology, cognitive science, HCI, human factors, artificial intelligence, organizational behavior, anthropology, and many other fields.

  • 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    The conference serves as the primary annual meeting for researchers in the field of human-robot interaction. The event will include a main papers track and additional sessions for posters, demos, and exhibits. Additionally, the conference program will include a full day of workshops and tutorials running in parallel.

  • 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    This conference focuses on the interaction between humans and robots.

  • 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    HRI is a single-track, highly selective annual conference that showcases the very best research and thinking in human-robot interaction. HRI is inherently interdisciplinary and multidisciplinary, reflecting work from researchers in robotics, psychology, cognitive science, HCI, human factors, artificial intelligence, organizational behavior, anthropology, and many other fields.

  • 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    HRI is a highly selective annual conference that showcases the very best research and thinking in human-robot interaction. HRI is inherently interdisciplinary and multidisciplinary, reflecting work from researchers in robotics, psychology, cognitive science, HCI, human factors, artificial intelligence, organizational behavior, anthropology, and many other fields.

  • 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    HRI is a single-track, highly selective annual conference that showcases the very best research and thinking in human-robot interaction. HRI is inherently interdisciplinary and multidisciplinary, reflecting work from researchers in robotics, psychology, cognitive science, HCI, human factors, artificial intelligence, organizational behavior, anthropology, and many other fields.

  • 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    HRI is a single-track, highly selective annual conference that showcases the very best research and thinking in human-robot interaction. HRI is inherently interdisciplinary and multidisciplinary, reflecting work from researchers in robotics, psychology, cognitive science, HCI, human factors, artificial intelligence, organizational behavior, anthropology, and many other fields.

  • 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    TOPICS: Robot companions, Lifelike robots, Assistive (health & personal care) robotics, Remote robots, Mixed initiative interaction, Multi-modal interaction, Long-term interaction with robots, Awareness and monitoring of humans, Task allocation and coordination, Autonomy and trust, Robot-team learning, User studies of HRI, Experiments on HRI collaboration, Ethnography and field studies, HRI software architectures, HRI foundations, Metrics for teamwork, HRI group dynamics.

  • 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    TOPICS: Robot companions, Lifelike robots, Assistive (health & personal care) robotics, Remote robots, Mixed initiative interaction, Multi-modal interaction, Long-term interaction with robots, Awareness and monitoring of humans, Task allocation and coordination, Autonomy and trust, Robot-team learning, User studies of HRI, Experiments on HRI collaboration, Ethnography and field studies, HRI software architectures

  • 2009 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    TOPICS: Robot companions, Lifelike robots, Assistive (health & personal care) robotics, Remote robots, Mixed initiative interaction, Multi-modal interaction, Long-term interaction with robots, Awareness and monitoring of humans, Task allocation and coordination, Autonomy and trust, Robot-team learning, User studies of HRI, Experiments on HRI collaboration, Ethnography and field studies, HRI software architectures.

  • 2008 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    TOPICS: Robot companions, Lifelike robots, Assistive (health & personal care) robotics, Remote robots, Mixed initiative interaction, Multi-modal interaction, Long-term interaction with robots, Awareness and monitoring of humans, Task allocation and coordination, Autonomy and trust, Robot-team learning, User studies of HRI, Experiments on HRI collaboration, Ethnography and field studies, HRI software architectures, HRI foundations, Metrics for teamwork, HRI group dynamics, Individual vs. group HRI.

  • 2007 2nd Annual Conference on Human-Robot Interaction (HRI)


2019 41st Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC)

The conference program will consist of plenary lectures, symposia, workshops and invited sessions on the latest significant findings and developments in all the major fields of biomedical engineering. Submitted papers will be peer reviewed. Accepted high-quality papers will be presented in oral and poster sessions, will appear in the Conference Proceedings, and will be indexed in PubMed/MEDLINE & IEEE Xplore.


2019 IEEE International Conference on Image Processing (ICIP)

The International Conference on Image Processing (ICIP), sponsored by the IEEE Signal Processing Society, is the premier forum for the presentation of technological advances and research results in the fields of theoretical, experimental, and applied image and video processing. ICIP 2019, the 26th in the series that has been held annually since 1994, brings together leading engineers and scientists in image and video processing from around the world.


2019 IEEE International Conference on Systems, Man and Cybernetics (SMC)

2019 IEEE International Conference on Systems, Man, and Cybernetics (SMC2019) will be held in the south of Europe in Bari, one of the most beautiful and historical cities in Italy. The Bari region’s nickname is “Little California” for its nice weather, and Bari’s cuisine is one of Italy’s most traditional, based on local seafood and olive oil. SMC2019 is the flagship conference of the IEEE Systems, Man, and Cybernetics Society. It provides an international forum for researchers and practitioners to report up-to-the-minute innovations and developments, summarize the state of the art, and exchange ideas and advances in all aspects of systems science and engineering, human-machine systems, and cybernetics. These advances are important for the creation of intelligent environments in which technologies interact with humans to provide an enriching experience and thereby improve quality of life.


2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

TOPICS: robotics, intelligent systems, automation, mechatronics, micro/nano technologies, AI.



Periodicals related to Emotion recognition


Audio, Speech, and Language Processing, IEEE Transactions on

Speech analysis, synthesis, coding, speech recognition, speaker recognition, language modeling, speech production and perception, and speech enhancement. In audio: transducers, room acoustics, active sound control, human audition, analysis/synthesis/coding of music, and consumer audio. The scope for the proposed transactions includes SPEECH PROCESSING - Transmission and storage of speech signals; speech coding; speech enhancement and noise reduction; ...


Computational Intelligence Magazine, IEEE

The IEEE Computational Intelligence Magazine (CIM) publishes peer-reviewed articles that present emerging novel discoveries, important insights, or tutorial surveys in all areas of computational intelligence design and applications.


Computer Graphics and Applications, IEEE

IEEE Computer Graphics and Applications (CG&A) bridges the theory and practice of computer graphics. From specific algorithms to full system implementations, CG&A offers a strong combination of peer-reviewed feature articles and refereed departments, including news and product announcements. Special Applications sidebars relate research stories to commercial development. Cover stories focus on creative applications of the technology by an artist or ...


Consumer Electronics, IEEE Transactions on

The design and manufacture of consumer electronics products, components, and related activities, particularly those used for entertainment, leisure, and educational purposes


Image Processing, IEEE Transactions on

Signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing. Includes theory, algorithms, and architectures for image coding, filtering, enhancement, restoration, segmentation, and motion estimation; image formation in tomography, radar, sonar, geophysics, astronomy, microscopy, and crystallography; image scanning, digital half-toning and display, and color reproduction.




Xplore Articles related to Emotion recognition


Automatic Emotion Recognition Based on Body Movement Analysis: A Survey

IEEE Computer Graphics and Applications, 2014

Humans are emotional beings, and their feelings influence how they perform and interact with computers. One of the most expressive modalities for humans is body posture and movement, which researchers have recently started exploiting for emotion recognition. This survey describes emerging techniques and modalities related to emotion recognition based on body movement, as well as recent advances in automatic emotion ...


Confidence Measures for Speech Emotion Recognition: A Start

Speech Communication; 10th ITG Symposium, 2012

Speech emotion recognition (SER) has matured to the degree of first applicability, but the systems in use today lack the ability to evaluate the reliability of their recognition results. In this paper, we thus propose a novel confidence measure for SER systems. The confidence measure is based on human labeller agreement. This information is used to build a series of emotion scoring models ...


Cross-Corpus Acoustic Emotion Recognition: Variances and Strategies

IEEE Transactions on Affective Computing, 2010

As the recognition of emotion from speech has matured to a degree where it becomes applicable in real-life settings, it is time for a realistic view on obtainable performances. Most studies tend to overestimation in this respect: acted data is often used rather than spontaneous data, results are reported on preselected prototypical data, and true speaker-disjunctive partitioning is still ...


Are You Communicating?

IT Professional, 2012

Nonverbal communication is critical when conveying a message; body language is just as important as your tone of voice. Learn about gestures, posture, facial expressions, and eye contact, so you carry yourself with confidence and can effectively read your audience and respond accordingly.


A Wearable Device for Fast and Subtle Spontaneous Smile Recognition

IEEE Transactions on Affective Computing, 2017

Facial expressions are usually linked to emotional states of a person, and are among the most salient cues for automatic emotion recognition. They are an indispensable social communication tool, and therefore, they can also be fabricated to face complex situations in social interaction. Despite efforts to either deliberately or unconsciously conceal an emotion, micro-expressions are usually leaked. Because of their ...



Educational Resources on Emotion recognition


IEEE-USA E-Books

  • Automatic Emotion Recognition Based on Body Movement Analysis: A Survey

    Humans are emotional beings, and their feelings influence how they perform and interact with computers. One of the most expressive modalities for humans is body posture and movement, which researchers have recently started exploiting for emotion recognition. This survey describes emerging techniques and modalities related to emotion recognition based on body movement, as well as recent advances in automatic emotion recognition. It also describes application areas and notation systems and explains the importance of movement segmentation. It then discusses unsolved problems and provides promising directions for future research. The Web extra (a PDF file) contains tables with additional information related to the article.

  • Confidence Measures for Speech Emotion Recognition: A Start

    Speech emotion recognition (SER) has matured to the degree of first applicability, but the systems in use today lack the ability to evaluate the reliability of their recognition results. In this paper, we thus propose a novel confidence measure for SER systems. The confidence measure is based on human labeller agreement. This information is used to build a series of emotion scoring models to provide multiple agreement levels for a hypothesised emotion state. A fusion is carried out over the multiple agreement levels to obtain a confidence score. Experimental results on the FAU Aibo Emotion Corpus of the INTERSPEECH 2009 Emotion Challenge show that the proposed confidence score has a strong correlation with the unweighted average recall of the target task (emotion), effectively indicating the usefulness of the confidence measures. (A sketch of this fusion idea follows this list.)

  • Cross-Corpus Acoustic Emotion Recognition: Variances and Strategies

    As the recognition of emotion from speech has matured to a degree where it becomes applicable in real-life settings, it is time for a realistic view on obtainable performances. Most studies tend to overestimation in this respect: acted data is often used rather than spontaneous data, results are reported on preselected prototypical data, and true speaker-disjunctive partitioning is still less common than simple cross-validation. Even speaker-disjunctive evaluation can give only a little insight into the generalization ability of today's emotion recognition engines, since the training and test data used for system development usually tend to be similar as far as recording conditions, noise overlay, language, and types of emotions are concerned. A considerably more realistic impression can be gathered by interset evaluation: we therefore show results employing six standard databases in a cross-corpora evaluation experiment, which could also be helpful for learning about chances to add resources for training and overcoming the typical sparseness in the field. To better cope with the observed high variances, different types of normalization are investigated. 1.8k individual evaluations in total indicate the crucial performance inferiority of inter- to intracorpus testing. (A leave-one-corpus-out sketch with per-corpus normalization follows this list.)

  • Are You Communicating?

    Nonverbal communication is critical when conveying a message; body language is just as important as your tone of voice. Learn about gestures, posture, facial expressions, and eye contact, so you carry yourself with confidence and can effectively read your audience and respond accordingly.

  • A Wearable Device for Fast and Subtle Spontaneous Smile Recognition

    Facial expressions are usually linked to the emotional states of a person, and are among the most salient cues for automatic emotion recognition. They are an indispensable social communication tool, and therefore, they can also be fabricated to face complex situations in social interaction. Despite efforts to either deliberately or unconsciously conceal an emotion, micro-expressions are usually leaked. Because of their revealing nature, potential applications can arise from automatically detecting them, at both personal and interpersonal levels. Therefore, we explored micro-expression detection using a wearable device that detects distal facial EMG signals unobtrusively, as an alternative to computer-vision-based detection. EMG recognition is advantageous over commonly used video recognition techniques because of its robustness against occlusion and light changes, good temporal resolution, and independence of movement. We evaluated the performance of the wearable device on micro-smile recognition. The results show the potential of EMG to detect such fast and subtle spontaneous expressions. Finally, we evaluated the device as a tool to provide affective annotations of advertisement videos, and of social interaction in a face-to-face context while watching these stimuli. (A generic EMG burst-detection sketch follows this list.)

  • An Internet of Things Approach to “Read” the Emotion of Children with Autism Spectrum Disorder


  • Continuous Tracking of User Emotion in Mandarin Emotional Speech

    Emotions play a significant role in decision-making, health, perception, human interaction, and human intelligence. Automatic recognition of emotion in speech is very desirable because it enriches human-computer interaction, and it has become an important research area in recent years. However, to the best of our knowledge, no works have focused on automatic emotion tracking of continuous Mandarin emotional speech. In this paper, we present an emotion tracking system that divides the utterance into several independent segments, each of which contains a single emotional category. Experimental results reveal that the proposed system produces satisfactory results. On our testing database, composed of 279 utterances obtained by concatenating short sentences, the average accuracy achieves 83% using a weighted D-KNN classifier and LPCC and MFCC features. (A segment-wise classification sketch follows this list.)

  • Facial emotion recognition using emotional neural network and hybrid of fuzzy c-means and genetic algorithm

    Facial emotion recognition (FER) is a critical task for both human-human interaction (HHI) and human-computer interaction (HCI). In this paper, a brain-inspired neural-basis computational model of FER is proposed based on emotional neural networks (ENN), fuzzy c-means (FCM), and genetic algorithms (GA). The proposed model can be applied in both HHI and HCI applications. In HHI, it can be used for improving communication skills, and in HCI it can be used in various treatment processes, e.g. anxiety treatment, cancer radiation treatment, and remote children/elderly monitoring systems. The proposed model consists of the main modules of the emotional brain which recognize facial emotions. In the experimental studies, the proposed model is examined on recognizing sadness in children's faces as a case study. The results show that our model is valid and can be applied to various FER tasks. (A compact fuzzy c-means sketch follows this list.)

  • Normalizing multi-subject variation for drivers' emotion recognition

    The paper attempts the recognition of multiple drivers' emotional states from physiological signals. The major challenge of the research is the severe inter-subject variation, which makes it extremely difficult to build a general model for multiple drivers. In this paper, we focus on discovering an optimal feature mapping by utilizing an additional attribute of the drivers. Two models are reported, specifically an auxiliary dimension model and a factorization model. Experimental results show that the proposed method outperforms existing algorithms used for emotional state recognition. (Two simple constructions in this spirit are sketched after this list.)

  • Jitter measurements for performance enhancement in the service sector

    In a leading service economy like India, services lie at the very center of economic activity. Competitive organizations now look not only at the skills and knowledge, but also at the behavior required by an employee to be successful on the job. Emotionally competent employees can effectively deal with occupational stress and maintain psychological well-being. This study explores the scope of the first two formants and jitter to assess seven common emotional states present in natural English speech. The k-means method was used to classify emotional speech as neutral, happy, surprised, angry, disgusted, and sad. The accuracy of classification obtained using raw jitter was more than 65 percent for happy and sad, but less accurate for the others. The overall classification accuracy was 72% in the case of preprocessed jitter. The experimental study was done on 1664 English utterances of 6 females. This is a simple, interesting, and more proactive method for employees from varied backgrounds to become aware of their own communication styles as well as those of their colleagues and customers, and it is therefore socially beneficial. It is also an inexpensive method, as it requires only a computer. Since knowledge of sophisticated software or signal processing is not necessary, it is easy to analyze. (The clustering step is sketched after this list.)
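
The fusion step in the confidence-measure paper above is only outlined in its abstract. As a rough illustration of the idea, the following sketch fuses the outputs of several agreement-level scoring models into a single confidence value; the weighting scheme and all names here are assumptions, not the authors' design.

```python
import numpy as np

def confidence_score(level_posteriors, level_weights=None):
    """Fuse scores from several agreement-level models into one confidence value.

    level_posteriors: score for the hypothesised emotion from each
    agreement-level model (e.g., models trained on samples where at
    least 2, 3, ... human labellers agreed).
    """
    p = np.asarray(level_posteriors, dtype=float)
    if level_weights is None:
        # Assumption: trust higher-agreement models more.
        level_weights = np.arange(1, len(p) + 1, dtype=float)
    w = np.asarray(level_weights, dtype=float)
    return float(np.dot(w / w.sum(), p))

# Example: three agreement-level models score the hypothesis "angry".
print(confidence_score([0.9, 0.7, 0.4]))  # weighted fusion in [0, 1]
```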
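For the cross-corpus paper above, a minimal leave-one-corpus-out evaluation loop with per-corpus standardization (one of several normalization types one might investigate) could look like the sketch below. The classifier choice and data layout are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.metrics import recall_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def leave_one_corpus_out(corpora):
    """corpora: dict mapping corpus name -> (X, y) feature/label arrays.
    Each corpus is z-normalized separately before training and testing."""
    results = {}
    for held_out in corpora:
        train = [(X, y) for name, (X, y) in corpora.items() if name != held_out]
        X_tr = np.vstack([StandardScaler().fit_transform(X) for X, _ in train])
        y_tr = np.concatenate([y for _, y in train])
        X_te, y_te = corpora[held_out]
        X_te = StandardScaler().fit_transform(X_te)  # per-corpus normalization
        clf = SVC().fit(X_tr, y_tr)
        # Unweighted average recall (macro recall), the measure used in this field.
        results[held_out] = recall_score(y_te, clf.predict(X_te), average="macro")
    return results
```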
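The wearable-EMG paper above does not detail its detection pipeline in the abstract. A generic sketch of how brief EMG activations are often detected (band-pass filter, rectify, smooth, threshold) follows; the filter band, smoothing window, and threshold are illustrative assumptions, and a sampling rate around 1 kHz is presumed.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_bursts(emg, fs=1000, thresh_std=3.0):
    """Detect brief EMG activations: band-pass, rectify, smooth, threshold.

    Assumes fs (here 1000 Hz) is high enough for a 20-450 Hz band,
    a range often used for surface EMG.
    """
    b, a = butter(4, [20, 450], btype="bandpass", fs=fs)
    rectified = np.abs(filtfilt(b, a, emg))
    win = int(0.05 * fs)  # 50 ms moving-average envelope
    envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
    threshold = envelope.mean() + thresh_std * envelope.std()
    return envelope > threshold  # boolean mask of candidate activations
```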
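For the Mandarin emotion-tracking paper above, here is a sketch of segment-wise classification with MFCC features and a distance-weighted k-NN, used as a stand-in for the paper's weighted D-KNN; the LPCC features the paper also uses are omitted, and segment boundaries are assumed to be given.

```python
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def track_segment_emotions(y, sr, boundaries, knn):
    """Classify each segment of an utterance independently.

    boundaries: list of (start_s, end_s) pairs, each segment assumed to
    contain a single emotion. knn: a fitted distance-weighted k-NN,
    e.g. KNeighborsClassifier(n_neighbors=5, weights="distance").
    """
    labels = []
    for start, end in boundaries:
        seg = y[int(start * sr):int(end * sr)]
        mfcc = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=13)
        # Summarize frame-level MFCCs with mean and std statistics.
        feat = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
        labels.append(knn.predict(feat.reshape(1, -1))[0])
    return labels
```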
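The ENN and GA components of the facial-emotion model above cannot be reconstructed from the abstract, but fuzzy c-means itself is a standard algorithm. A compact sketch of its alternating membership/centroid updates:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means: alternating membership/centroid updates.

    X: (n_samples, n_features); c: number of clusters; m > 1: fuzzifier.
    Returns centers (c, n_features) and memberships u (c, n_samples).
    """
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    u = rng.random((c, X.shape[0]))
    u /= u.sum(axis=0)  # memberships sum to 1 per sample
    for _ in range(n_iter):
        um = u ** m
        centers = um @ X / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        u = d ** (-2.0 / (m - 1))
        u /= u.sum(axis=0)  # renormalize across clusters
    return centers, u
```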
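For the drivers'-emotion paper above, neither reported model is specified in the abstract. Two simple constructions in the same spirit, offered only as assumptions: per-subject z-normalization as a baseline against inter-subject variation, and appending the driver attribute as an extra feature column (one plausible reading of an "auxiliary dimension" model).

```python
import numpy as np

def normalize_per_subject(X, subject_ids):
    """Z-normalize features within each subject: a common baseline
    against severe inter-subject variation."""
    X = np.asarray(X, dtype=float).copy()
    subject_ids = np.asarray(subject_ids)
    for s in np.unique(subject_ids):
        rows = subject_ids == s
        X[rows] = (X[rows] - X[rows].mean(axis=0)) / (X[rows].std(axis=0) + 1e-12)
    return X

def add_auxiliary_dimension(X, driver_attr):
    """Append a per-sample driver attribute as an extra feature column
    (an assumed reading of the paper's auxiliary dimension model)."""
    return np.column_stack([np.asarray(X, dtype=float), driver_attr])
```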
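Finally, for the jitter study above, the clustering step can be sketched as k-means over [F1, F2, jitter] vectors. The number of clusters and the mapping from clusters to emotion names (which needs a few labelled utterances per cluster) are placeholders, not details from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_speech_emotions(features, n_emotions=7, seed=0):
    """features: (n_utterances, 3) array of [F1, F2, jitter] values.
    Returns a cluster index per utterance; mapping clusters to names
    such as 'happy' or 'sad' needs labelled examples per cluster."""
    km = KMeans(n_clusters=n_emotions, n_init=10, random_state=seed)
    return km.fit_predict(np.asarray(features, dtype=float))
```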



Standards related to Emotion recognition


No standards are currently tagged "Emotion recognition"

