Timbre

In music, timbre is the quality of a musical note or sound or tone that distinguishes different types of sound production, such as voices and musical instruments. (Wikipedia.org)
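
Timbre is what remains when pitch and loudness are matched. As a minimal editorial illustration (not from the cited definition), the NumPy sketch below synthesizes two tones with the same fundamental frequency and the same RMS level but different harmonic amplitude profiles; the audible difference between them is timbre.

    import numpy as np

    fs = 44100                             # sample rate in Hz
    t = np.arange(int(0.5 * fs)) / fs      # 0.5 s time axis
    f0 = 220.0                             # same fundamental pitch for both tones

    def additive_tone(harmonic_amps):
        """Sum of harmonics of f0 weighted by harmonic_amps, normalized to unit RMS."""
        tone = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
                   for k, a in enumerate(harmonic_amps))
        return tone / np.sqrt(np.mean(tone ** 2))

    # Same pitch and loudness, different spectra -> different timbre.
    bright = additive_tone([1.0, 0.8, 0.6, 0.5, 0.4, 0.3])  # strong upper harmonics
    mellow = additive_tone([1.0, 0.3, 0.1, 0.05])           # energy in low harmonics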






Conferences related to Timbre


2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

HRI is a highly selective annual conference that showcases the very best research and thinking in human-robot interaction. HRI is inherently interdisciplinary and multidisciplinary, reflecting work from researchers in robotics, psychology, cognitive science, HCI, human factors, artificial intelligence, organizational behavior, anthropology, and many other fields.

  • 2018 13th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    HRI is a highly selective annual conference that showcases the very best research and thinking in human-robot interaction. HRI is inherently interdisciplinary and multidisciplinary, reflecting work from researchers in robotics, psychology, cognitive science, HCI, human factors, artificial intelligence, organizational behavior, anthropology, and many other fields.

  • 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    The conference serves as the primary annual meeting for researchers in the field of human-robot interaction. The event will include a main papers track and additional sessions for posters, demos, and exhibits. Additionally, the conference program will include a full day of workshops and tutorials running in parallel.

  • 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    This conference focuses on the interaction between humans and robots.

  • 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    HRI is a single-track, highly selective annual conference that showcases the very best research and thinking in human-robot interaction. HRI is inherently interdisciplinary and multidisciplinary, reflecting work from researchers in robotics, psychology, cognitive science, HCI, human factors, artificial intelligence, organizational behavior, anthropology, and many other fields.

  • 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    HRI is a highly selective annual conference that showcases the very best research and thinking in human-robot interaction. HRI is inherently interdisciplinary and multidisciplinary, reflecting work from researchers in robotics, psychology, cognitive science, HCI, human factors, artificial intelligence, organizational behavior, anthropology, and many other fields.

  • 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    HRI is a single-track, highly selective annual conference that showcases the very best research and thinking in human-robot interaction. HRI is inherently interdisciplinary and multidisciplinary, reflecting work from researchers in robotics, psychology, cognitive science, HCI, human factors, artificial intelligence, organizational behavior, anthropology, and many other fields.

  • 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    HRI is a single-track, highly selective annual conference that showcases the very best research and thinking in human-robot interaction. HRI is inherently interdisciplinary and multidisciplinary, reflecting work from researchers in robotics, psychology, cognitive science, HCI, human factors, artificial intelligence, organizational behavior, anthropology, and many other fields.

  • 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    Topics: Robot companions, Lifelike robots, Assistive (health & personal care) robotics, Remote robots, Mixed initiative interaction, Multi-modal interaction, Long-term interaction with robots, Awareness and monitoring of humans, Task allocation and coordination, Autonomy and trust, Robot-team learning, User studies of HRI, Experiments on HRI collaboration, Ethnography and field studies, HRI software architectures, HRI foundations, Metrics for teamwork, HRI group dynamics.

  • 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    TOPICS: Robot companions, Lifelike robots, Assistive (health & personal care) robotics, Remote robots, Mixed initiative interaction, Multi-modal interaction, Long-term interaction with robots, Awareness and monitoring of humans, Task allocation and coordination, Autonomy and trust, Robot-team learning, User studies of HRI, Experiments on HRI collaboration, Ethnography and field studies, HRI software architectures

  • 2009 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    Topics: Robot companions, Lifelike robots, Assistive (health & personal care) robotics, Remote robots, Mixed initiative interaction, Multi-modal interaction, Long-term interaction with robots, Awareness and monitoring of humans, Task allocation and coordination, Autonomy and trust, Robot-team learning, User studies of HRI, Experiments on HRI collaboration, Ethnography and field studies, HRI software architectures

  • 2008 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI)

    Topics: Robot companions, Lifelike robots, Assistive (health & personal care) robotics, Remote robots, Mixed initiative interaction, Multi-modal interaction, Long-term interaction with robots, Awareness and monitoring of humans, Task allocation and coordination, Autonomy and trust, Robot-team learning, User studies of HRI, Experiments on HRI collaboration, Ethnography and field studies, HRI software architectures, HRI foundations, Metrics for teamwork, HRI group dynamics, Individual vs. group HRI

  • 2007 2nd Annual Conference on Human-Robot Interaction (HRI)


2019 IEEE International Conference on Systems, Man and Cybernetics (SMC)

2019 IEEE International Conference on Systems, Man, and Cybernetics (SMC2019) will be held in the south of Europe in Bari, one of the most beautiful and historic cities in Italy. The Bari region is nicknamed "Little California" for its pleasant weather, and its cuisine is among Italy's most traditional, based on local seafood and olive oil. SMC2019 is the flagship conference of the IEEE Systems, Man, and Cybernetics Society. It provides an international forum for researchers and practitioners to report up-to-the-minute innovations and developments, summarize the state of the art, and exchange ideas and advances in all aspects of systems science and engineering, human-machine systems, and cybernetics. These advances are important for creating intelligent environments in which technologies interact with humans to provide an enriching experience and thereby improve quality of life.


ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

ICASSP is the world’s largest and most comprehensive technical conference focused on signal processing and its applications. The conference will feature world-class presentations by internationally renowned speakers, cutting-edge session topics and provide a fantastic opportunity to network with like-minded professionals from around the world.


2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)

AVSS 2018 addresses underlying theory, methods, systems, and applications of video and signal based surveillance.


2018 26th European Signal Processing Conference (EUSIPCO)

Audio and acoustic signal processing, Speech and language processing, Image and video processing, Multimedia signal processing, Signal processing theory and methods, Sensor array and multichannel signal processing, Signal processing for communications, Radar and sonar signal processing, Signal processing over graphs and networks, Nonlinear signal processing, Statistical signal processing, Compressed sensing and sparse modeling, Optimization methods, Machine learning, Bio-medical image and signal processing, Signal processing for computer vision and robotics, Computational imaging / Spectral imaging, Information forensics and security, Signal processing for power systems, Signal processing for education, Bioinformatics and genomics, Signal processing for big data, Signal processing for the internet of things, Design/implementation of signal processing systems, Other signal processing areas

  • 2017 25th European Signal Processing Conference (EUSIPCO)

    Audio and acoustic signal processing, Speech and language processing, Image and video processing, Multimedia signal processing, Signal processing theory and methods, Sensor array and multichannel signal processing, Signal processing for communications, Radar and sonar signal processing, Signal processing over graphs and networks, Nonlinear signal processing, Statistical signal processing, Compressed sensing and sparse modeling, Optimization methods, Machine learning, Bio-medical image and signal processing, Signal processing for computer vision and robotics, Information forensics and security, Signal processing for power systems, Signal processing for education, Bioinformatics and genomics, Signal processing for big data, Signal processing for the internet of things, Design and implementation of signal processing systems, Other signal processing areas

  • 2016 24th European Signal Processing Conference (EUSIPCO)

    EUSIPCO is the flagship conference of the European Association for Signal Processing (EURASIP). The 24th edition will be held in Budapest, Hungary, from 29th August - 2nd September 2016. EUSIPCO 2016 will feature world-class speakers, oral and poster sessions, keynotes, exhibitions, demonstrations and tutorials and is expected to attract in the order of 600 leading researchers and industry figures from all over the world.

  • 2015 23rd European Signal Processing Conference (EUSIPCO)

    EUSIPCO is the flagship conference of the European Association for Signal Processing (EURASIP). The 23rd edition will be held in Nice, on the French Riviera, from 31st August - 4th September 2015. EUSIPCO 2015 will feature world-class speakers, oral and poster sessions, keynotes, exhibitions, demonstrations and tutorials and is expected to attract in the order of 600 leading researchers and industry figures from all over the world.

  • 2014 22nd European Signal Processing Conference (EUSIPCO)

    EUSIPCO is one of the largest international conferences in the field of signal processing and addresses all the latest developments in research and technology. The conference will bring together individuals from academia, industry, regulation bodies, and government, to discuss and exchange ideas in all the areas and applications of signal processing. The conference will feature world-class keynote speakers, special sessions, plenary talks, tutorials, and technical sessions.

  • 2013 21st European Signal Processing Conference (EUSIPCO)

    The EUSIPCO is organized by the European Association for Signal, Speech, and Image Processing (EURASIP). The focus will be on signal processing theory, algorithms, and applications.

  • 2012 20th European Signal Processing Conference

    The focus: signal processing theory, algorithms and applications. Papers will be accepted based on quality, relevance, and novelty and will be indexed in the main databases. Organizers: University POLITEHNICA of Bucharest and Telecom ParisTech.


More Conferences

Periodicals related to Timbre


Audio, Speech, and Language Processing, IEEE Transactions on

Speech analysis, synthesis, coding, speech recognition, speaker recognition, language modeling, speech production and perception, speech enhancement. In audio: transducers, room acoustics, active sound control, human audition, analysis/synthesis/coding of music, and consumer audio. (IEEE Guide for Authors) The scope for the proposed transactions includes SPEECH PROCESSING - transmission and storage of speech signals; speech coding; speech enhancement and noise reduction; ...


Automatic Control, IEEE Transactions on

The theory, design and application of Control Systems. It shall encompass components, and the integration of these components, as are necessary for the construction of such systems. The word "systems" as used herein shall be interpreted to include physical, biological, organizational and other entities, and combinations thereof, which can be represented through a mathematical symbolism. The Field of Interest shall ...


Circuits and Systems for Video Technology, IEEE Transactions on

Video A/D and D/A, display technology, image analysis and processing, video signal characterization and representation, video compression techniques and signal processing, multidimensional filters and transforms, analog video signal processing, neural networks for video applications, nonlinear video signal processing, video storage and retrieval, computer vision, packet video, high-speed real-time circuits, VLSI architecture and implementation for video technology, multiprocessor systems--hardware and software-- ...


Computer Graphics and Applications, IEEE

IEEE Computer Graphics and Applications (CG&A) bridges the theory and practice of computer graphics. From specific algorithms to full system implementations, CG&A offers a strong combination of peer-reviewed feature articles and refereed departments, including news and product announcements. Special Applications sidebars relate research stories to commercial development. Cover stories focus on creative applications of the technology by an artist or ...


Consumer Electronics, IEEE Transactions on

The design and manufacture of consumer electronics products, components, and related activities, particularly those used for entertainment, leisure, and educational purposes


More Periodicals

Most published Xplore authors for Timbre


Xplore Articles related to Timbre


Template-based personalized singing voice synthesis

2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2012

In this paper, a template-based personalized singing voice synthesis method is proposed. It generates singing voices by means of conversion from the narrated lyrics of a song with the use of template recordings. The template voices are parallel speaking and singing voices recorded from professional singers, which are used to derive the transformation models for acoustic feature conversion. When converting ...
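
A core ingredient of this kind of conversion is imposing the pitch contour of an actual singing voice on the speaker's own timbre. The sketch below is an editorial illustration of that single building block, not the authors' system; it assumes hypothetical input files speech.wav and template_singing.wav plus the pyworld and soundfile packages, and uses the WORLD vocoder to resynthesize the spoken utterance with the sung F0 contour.

    import numpy as np
    import pyworld
    import soundfile as sf

    def load_mono(path):
        x, fs = sf.read(path)
        if x.ndim > 1:                       # mix down to mono if needed
            x = x.mean(axis=1)
        return np.ascontiguousarray(x, dtype=np.float64), fs

    speech, fs = load_mono("speech.wav")                   # hypothetical narrated lyrics
    template, fs_t = load_mono("template_singing.wav")     # hypothetical sung template

    # WORLD analysis of the speech: F0 contour, spectral envelope, aperiodicity.
    f0_sp, t_sp = pyworld.harvest(speech, fs)
    env_sp = pyworld.cheaptrick(speech, f0_sp, t_sp, fs)
    ap_sp = pyworld.d4c(speech, f0_sp, t_sp, fs)

    # F0 contour of the sung template.
    f0_tmpl, t_tmpl = pyworld.harvest(template, fs_t)

    # Crude linear time warp of the sung F0 contour onto the speech frame grid
    # (the paper's template-based alignment is not reproduced here).
    warp = np.linspace(0.0, 1.0, len(f0_sp))
    f0_sung = np.interp(warp, np.linspace(0.0, 1.0, len(f0_tmpl)), f0_tmpl)

    # Resynthesize: the speaker's spectral envelope with the sung pitch contour.
    y = pyworld.synthesize(f0_sung, env_sp, ap_sp, fs)
    sf.write("speech_with_sung_pitch.wav", y, fs)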


Exploring Data Analysis in Music Using Tool Praat

2008 First International Conference on Emerging Trends in Engineering and Technology, 2008

Digitally sampled music or audio data contains information about a musical clip, such as pitch and amplitude, at multiple points along the timeline. This huge amount of data needs to be understood in the form of musical notation by musicians. Praat is a tool used for acoustic analysis by audio researchers. We have processed the data produced by Praat using our own program ...
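
Praat's pitch analysis can be driven from Python through the parselmouth binding, which makes it easy to prototype the kind of pitch-to-notation processing described above. The sketch below is an editorial illustration under that assumption (the input file flute_alap.wav is hypothetical): it extracts Praat's pitch track and maps each voiced frame to the nearest equal-tempered note name.

    import numpy as np
    import parselmouth

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def hz_to_note(freq_hz):
        """Nearest equal-tempered note name (A4 = 440 Hz)."""
        midi = int(round(69 + 12 * np.log2(freq_hz / 440.0)))
        return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

    snd = parselmouth.Sound("flute_alap.wav")      # hypothetical input clip
    pitch = snd.to_pitch()                         # Praat's pitch tracker
    freqs = pitch.selected_array["frequency"]      # Hz, 0 for unvoiced frames
    times = pitch.xs()

    for t, f in zip(times, freqs):
        if f > 0:                                  # skip unvoiced/silent frames
            print(f"{t:6.3f} s  {f:7.1f} Hz  {hz_to_note(f)}")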


Research on instrument timbre characteristics for music emotional expression

2012 IEEE Symposium on Robotics and Applications (ISRA), 2012

Timbre is a major factor affecting musical emotional expression. This paper obtains the spectrum of a musical signal using the short-time Fourier transform, and then relates the timbre characteristics of the signal to its emotional expression through spectrum analysis.
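
The spectrum-analysis step described above can be reproduced in a few lines with a short-time Fourier transform. The sketch below is an editorial illustration, not the authors' code: it computes an STFT of a hypothetical instrument recording with SciPy and derives the spectral centroid, a common single-number summary of timbral brightness.

    import numpy as np
    import soundfile as sf
    from scipy.signal import stft

    x, fs = sf.read("instrument.wav")       # hypothetical input recording
    if x.ndim > 1:
        x = x.mean(axis=1)                  # mono

    # Short-time Fourier transform: frequency-by-frame magnitude spectrum.
    f, t, Z = stft(x, fs=fs, nperseg=2048, noverlap=1536)
    mag = np.abs(Z)

    # Spectral centroid per frame: magnitude-weighted mean frequency.
    centroid = (f[:, None] * mag).sum(axis=0) / (mag.sum(axis=0) + 1e-12)
    print("mean spectral centroid: %.1f Hz" % centroid.mean())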


A discrete-cepstrum based spectral-envelope estimation scheme with improvements

2010 International Conference on Wireless Communications & Signal Processing (WCSP), 2010

Approximating the spectral envelope with regularized discrete-cepstrum coefficients was proposed by previous researchers. In this paper, we study two problems encountered in practice when adopting this approach: first, which spectral peaks should be selected, and second, what frequency-axis scaling function should be adopted. After experimentation, we propose two feasible solution methods ...
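
The discrete-cepstrum approach referred to above fits a smooth log-spectral envelope through a set of selected spectral peaks by regularized least squares. The sketch below is an editorial reconstruction of that general technique, not the authors' improved scheme; the peak data, cepstral order, and penalty weighting are illustrative assumptions.

    import numpy as np

    def discrete_cepstrum_envelope(peak_freqs, peak_logamps, order=30, lam=1e-4, n_grid=512):
        """Fit log|S(f)| ~ c0 + 2*sum_k c_k cos(2*pi*k*f) through the given peaks.

        peak_freqs   : peak frequencies normalized to [0, 0.5]
        peak_logamps : log-amplitudes of those peaks
        order, lam   : cepstral order and regularization weight (illustrative choices)
        """
        k = np.arange(order + 1)
        A = np.cos(2 * np.pi * np.outer(peak_freqs, k))   # design matrix
        A[:, 1:] *= 2.0
        R = np.diag(k.astype(float) ** 2)                 # penalize high-order coefficients
        c = np.linalg.solve(A.T @ A + lam * R, A.T @ peak_logamps)

        grid = np.linspace(0.0, 0.5, n_grid)
        B = np.cos(2 * np.pi * np.outer(grid, k))
        B[:, 1:] *= 2.0
        return grid, B @ c                                # log-amplitude envelope vs. frequency

    # Example: peaks of a harmonic tone at 200 Hz with a 16 kHz sample rate.
    fs = 16000.0
    freqs = np.arange(1, 20) * 200.0 / fs                 # normalized harmonic frequencies
    logamps = -0.5 * np.arange(1, 20)                     # hypothetical decaying log amplitudes
    grid, env = discrete_cepstrum_envelope(freqs, logamps)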


Selecting non-uniform units from a very large corpus for concatenative speech synthesizer

2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.01CH37221), 2001

This paper proposes a two-module text-to-speech (TTS) system structure that bypasses the prosody model predicting numerical prosodic parameters for synthetic speech. Instead, many instances of each basic unit from a large speech corpus are classified into categories by a classification and regression tree (CART), in which the expectation of the weighted sum of squared regression errors of ...
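
Concatenative synthesizers of this kind choose one unit instance per position by minimizing a total target-plus-concatenation cost over the whole utterance with a Viterbi-style dynamic program. The sketch below is a generic editorial illustration of that search, not the paper's CART-based system; the candidate units and cost functions are hypothetical.

    def select_units(candidates, target_cost, concat_cost):
        """Pick one candidate per position minimizing total target + concatenation cost.

        candidates  : list of lists; candidates[i] are the unit instances for position i
        target_cost : f(position, unit) -> float
        concat_cost : f(prev_unit, unit) -> float
        """
        n = len(candidates)
        # best[i][j] = (cost of cheapest path ending in candidates[i][j], backpointer)
        best = [[(target_cost(0, u), None) for u in candidates[0]]]
        for i in range(1, n):
            row = []
            for u in candidates[i]:
                tc = target_cost(i, u)
                cost, back = min(
                    (best[i - 1][k][0] + concat_cost(prev, u) + tc, k)
                    for k, prev in enumerate(candidates[i - 1])
                )
                row.append((cost, back))
            best.append(row)

        # Backtrack from the cheapest final state.
        j = min(range(len(best[-1])), key=lambda k: best[-1][k][0])
        path = []
        for i in range(n - 1, -1, -1):
            path.append(candidates[i][j])
            if i > 0:
                j = best[i][j][1]
        return list(reversed(path))

    # Toy usage: units carry a hypothetical pitch; prefer ~120 Hz and smooth joins.
    cands = [[100, 120, 140], [110, 125], [118, 150]]
    picked = select_units(cands,
                          target_cost=lambda i, u: abs(u - 120) / 10.0,
                          concat_cost=lambda a, b: abs(a - b) / 20.0)
    print(picked)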


More Xplore Articles

Educational Resources on Timbre


IEEE.tv Videos

No IEEE.tv Videos are currently tagged "Timbre"

IEEE-USA E-Books

  • Template-based personalized singing voice synthesis

    In this paper, a template-based personalized singing voice synthesis method is proposed. It generates singing voices by means of conversion from the narrated lyrics of a song with the use of template recordings. The template voices are parallel speaking and singing voices recorded from professional singers, which are used to derive the transformation models for acoustic feature conversion. When converting a new instance of speech, its acoustic features are modified to approximate those of the actual singing voice based on the transformation models. Since the pitch contour of the synthesized singing is derived from an actual singing voice, it is more natural than modifying a step contour to implement pitch fluctuations such as overshoot and vibrato. It has been shown from the subjective tests that nearly natural singing quality with the preservation of the timbre can be achieved with the help of our method.

  • Exploring Data Analysis in Music Using Tool Praat

    Digitally sampled music or audio data contains information about a musical clip, such as pitch and amplitude, at multiple points along the timeline. This huge amount of data needs to be understood in the form of musical notation by musicians. Praat is a tool used for acoustic analysis by audio researchers. We have processed the data produced by Praat using our own program to find useful information. We have focused on finding the notation, note duration, and amplitude in a flute clip of Hindustani classical music. The clip was an alap clip, in which there is no rhythm accompaniment to the flute. Our attempt has given us encouraging preliminary results.

  • Research on instrument timbre characteristics for music emotional expression

    Timbre is a major factor affecting musical emotional expression. This paper obtains the spectrum of a musical signal using the short-time Fourier transform, and then relates the timbre characteristics of the signal to its emotional expression through spectrum analysis.

  • A discrete-cepstrum based spectral-envelope estimation scheme with improvements

    Approximating the spectral envelope with regularized discrete-cepstrum coefficients was proposed by previous researchers. In this paper, we study two problems encountered in practice when adopting this approach: first, which spectral peaks should be selected, and second, what frequency-axis scaling function should be adopted. After experimentation, we propose feasible solutions for both problems. We then combine these solutions with the method for regularizing and computing discrete-cepstrum coefficients to form a spectral-envelope estimation scheme. This scheme has been verified, by measuring spectral-envelope approximation error, to be much better than the original scheme. In addition, we have applied this scheme to build a voice-timbre transformation system, which demonstrates that the proposed estimation scheme is very effective.

  • Selecting non-uniform units from a very large corpus for concatenative speech synthesizer

    This paper proposes a two-module text-to-speech (TTS) system structure that bypasses the prosody model predicting numerical prosodic parameters for synthetic speech. Instead, many instances of each basic unit from a large speech corpus are classified into categories by a classification and regression tree (CART), in which the expectation of the weighted sum of squared regression errors of prosodic features is used as the splitting criterion. Better prosody is achieved by keeping slender diversity in the prosodic features of instances belonging to the same class. A multi-tier non-uniform unit-selection method is presented. It makes the best decision on unit selection by minimizing the concatenation cost of a whole utterance. Since the largest available and suitable units are selected for concatenation, distortion caused by mismatches at concatenation points is minimized. Very natural and fluent speech is synthesized, according to an informal listening test.

  • On detecting note onsets in piano music

    This paper presents an overview of our research on the use of connectionist systems for the transcription of polyphonic piano music, concentrating on the issue of onset detection in musical signals. We propose a new technique for detecting onsets in a piano performance. The technique is based on a combination of a bank of auditory filters, a network of integrate-and-fire neurons, and a multilayer perceptron. Such a structure introduces several advantages over the standard peak-picking onset detection approach, and we present its performance on several synthesized and real piano recordings. Results show that our approach represents a viable alternative to existing onset detection algorithms. (A much simpler spectral-flux baseline is sketched after this list.)

  • Measurement and generation method of many impulse responses for 22.2 multichannel sound production

    NHK has developed a 22.2 multichannel (22.2 ch) sound production system for the audio system of 8K Super Hi-Vision. In 22.2 ch sound production, it is necessary to control the spatial impression by adding reverberation. Therefore, we are developing a three-dimensional reverberator that can add the reverberation of various types of three-dimensional space to sound materials. In this paper, we describe a method of measuring impulse responses suitable for the three-dimensional reverberator, signal processing to generate many impulse responses from a limited number of responses, and signal processing to increase the reverberation time of the responses. (The basic convolution-reverb operation underlying this is sketched after this list.)

  • An algorithm to convert KANSEI data into human motion

    Human KANSEI, which is based on tacit knowledge and experience, has received a great deal of attention from researchers. "KANSEI" is a Japanese word similar in meaning to sensibility. In the engineering field, KANSEI has been researched with regard to facial expressions, gestures, and voice (F. Hara and K. Tanaka, 1997), and has also been used to design keyboard switches (Kajiro Watanabe and Hiroaki Kosaka, 1995). KANSEI in music has also been researched from different perspectives. KANSEI does affect human motion; however, its effects on human motion have not yet been clarified. Therefore, the goal of this study is to clarify the effects of human KANSEI on human motion. Bowing in violin playing was selected as an example. The relationship between the timbre terms (KANSEI data) and the bowing parameters (motion parameters) was analyzed via factor analysis. An algorithm to convert timbre terms into bowing parameters was constructed using the results of the factor analysis, and violin sounds were produced using this algorithm.

  • Emotional recognition for chime bell music

    Music emotion describes the inherent emotional connotation of a music clip. This work applies fuzzy strategies to the emotional recognition of digital chime bell music, and a three-layer hierarchical framework is presented to automate the task of music emotion recognition from chime bell music data, drawing on consultations with music experts as well as the music literature. First, this work constructs an emotional loop as the emotional model for chime bell music. Second, a fuzzy multi-decision-making model for multi-object systems and a membership function are established. Finally, by means of fuzzy mathematics, the emotion of chime bell music is recognized. Experimental evaluations indicate that the proposed system produces satisfactory results.

  • Auditory Perceptual Information Extraction from ACF Physical Factors of Road-Vehicle Noise

    Road-vehicle noise samples generated by common vehicles under various operating conditions were replayed over headphones. Ten subjects were instructed to describe and quantify their subjective responses to the noises. A wealth of numerical perceptual information was obtained and investigated by principal component analysis. Based on the human auditory-brain system theory, physical factors extracted from the autocorrelation function (ACF) of binaural signals were calculated in order to explain the possible original sensations and representative auditory properties of road-vehicle sounds. Three primary auditory sensations (loudness, pitch, and timbre) were obtained by applying PCA to the ACF factors extracted from 232 noise samples. Elementary factors certainly exist to quantify the various auditory sensations. Noises possessing different ACF properties usually induced dissimilar perceptions with distinct essential auditory attributes, which would control or determine their auditory impression. (A minimal ACF-plus-PCA recipe is sketched after this list.)
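
For the note-onset entry above, the paper's auditory-filterbank and neural-network pipeline is not reproduced here. The sketch below is a much simpler spectral-flux baseline of the kind such systems are usually compared against, offered as an editorial illustration with a hypothetical piano.wav input.

    import numpy as np
    import soundfile as sf
    from scipy.signal import stft, find_peaks

    x, fs = sf.read("piano.wav")                 # hypothetical piano recording
    if x.ndim > 1:
        x = x.mean(axis=1)

    hop = 512
    f, t, Z = stft(x, fs=fs, nperseg=2048, noverlap=2048 - hop)
    mag = np.abs(Z)

    # Spectral flux: summed positive change in magnitude between consecutive frames.
    flux = np.maximum(mag[:, 1:] - mag[:, :-1], 0.0).sum(axis=0)
    flux /= flux.max() + 1e-12

    # Flux peaks above a simple adaptive threshold are onset candidates.
    peaks, _ = find_peaks(flux, height=flux.mean() + flux.std(),
                          distance=max(1, int(0.05 * fs / hop)))
    onset_times = t[1:][peaks]
    print(onset_times)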
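
For the 22.2 multichannel reverberation entry above, the basic operation of adding the reverberation of a measured space to a sound material is convolution with its impulse response. The sketch below illustrates only that single-channel building block, with hypothetical dry.wav and room_ir.wav files; the paper's methods for measuring and generating many impulse responses are not reproduced.

    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    dry, fs = sf.read("dry.wav")             # hypothetical dry (unreverberated) material
    ir, fs_ir = sf.read("room_ir.wav")       # hypothetical measured impulse response
    assert fs == fs_ir, "sample rates must match"
    if dry.ndim > 1:
        dry = dry.mean(axis=1)
    if ir.ndim > 1:
        ir = ir.mean(axis=1)

    wet = fftconvolve(dry, ir)               # convolution reverb
    wet /= np.max(np.abs(wet)) + 1e-12       # normalize to avoid clipping

    mix = 0.7                                # simple dry/wet blend
    out = (1 - mix) * np.pad(dry, (0, len(wet) - len(dry))) + mix * wet
    sf.write("reverberated.wav", out, fs)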
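
For the road-vehicle noise entry above, the general recipe (extract autocorrelation-function factors per noise sample, then apply principal component analysis across samples) can be sketched as follows. This is an editorial illustration with stand-in random signals, not the study's data or its full factor set: only the delay and relative amplitude of the first ACF peak are extracted.

    import numpy as np
    from scipy.signal import fftconvolve, find_peaks
    from sklearn.decomposition import PCA

    def acf_factors(x, fs, max_lag_s=0.05):
        """Delay (tau_1) and normalized amplitude (phi_1) of the first ACF peak."""
        x = x - x.mean()
        max_lag = int(max_lag_s * fs)
        acf = fftconvolve(x, x[::-1])[len(x) - 1:len(x) - 1 + max_lag]
        acf /= acf[0] + 1e-12                        # normalize so acf[0] = 1
        peaks, _ = find_peaks(acf[1:])
        if len(peaks) == 0:
            return 0.0, 0.0
        first = peaks[0] + 1
        return first / fs, acf[first]                # tau_1 in seconds, phi_1 in [0, 1]

    # Hypothetical stand-in for the measured noise samples: random signals here.
    fs = 16000
    rng = np.random.default_rng(0)
    samples = [rng.standard_normal(fs) for _ in range(20)]

    features = np.array([acf_factors(x, fs) for x in samples])
    scores = PCA(n_components=2).fit_transform(features)   # principal components of ACF factors
    print(scores[:5])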



Standards related to Timbre


No standards are currently tagged "Timbre"


Jobs related to Timbre
