171 resources related to Speech Disorders
- Topics related to Speech Disorders
- IEEE Organizations related to Speech Disorders
- Conferences related to Speech Disorders
- Periodicals related to Speech Disorders
- Most published Xplore authors for Speech Disorders
The conference program will consist of plenary lectures, symposia, workshops and invited sessions on the latest significant findings and developments in all the major fields of biomedical engineering. Submitted full papers will be peer reviewed. Accepted high-quality papers will be presented in oral and poster sessions, will appear in the Conference Proceedings and will be indexed in PubMed/MEDLINE.
The scope of the 2020 IEEE/ASME AIM includes the following topics: Actuators, Automotive Systems, Bioengineering, Data Storage Systems, Electronic Packaging, Fault Diagnosis, Human-Machine Interfaces, Industry Applications, Information Technology, Intelligent Systems, Machine Vision, Manufacturing, Micro-Electro-Mechanical Systems, Micro/Nano Technology, Modeling and Design, System Identification and Adaptive Control, Motion Control, Vibration and Noise Control, Neural and Fuzzy Control, Opto-Electronic Systems, Optomechatronics, Prototyping, Real-Time and Hardware-in-the-Loop Simulation, Robotics, Sensors, System Integration, Transportation Systems, Smart Materials and Structures, Energy Harvesting and other frontier fields.
The ICASSP meeting is the world's largest and most comprehensive technical conference focused on signal processing and its applications. The conference will feature world-class speakers, tutorials, exhibits, and over 50 lecture and poster sessions.
HRI is a highly selective annual conference that showcases the very best research and thinking in human-robot interaction. HRI is inherently interdisciplinary and multidisciplinary, reflecting work from researchers in robotics, psychology, cognitive science, HCI, human factors, artificial intelligence, organizational behavior, anthropology, and many other fields.
ISMICT provides a forum to present new research and development results, discuss practices, and share experiences between the technology and medicine sides, including healthcare, wellness, clinical therapy, and surgery, as well as ICT and biomedical engineering. Standardization, regulation activities and business for medical ICT devices, systems and services will also be promoted.
Speech analysis, synthesis, coding, speech recognition, speaker recognition, language modeling, speech production and perception, and speech enhancement. In audio: transducers, room acoustics, active sound control, human audition, analysis/synthesis/coding of music, and consumer audio. The scope for the proposed transactions includes SPEECH PROCESSING - Transmission and storage of speech signals; speech coding; speech enhancement and noise reduction; ...
Broad coverage of concepts and methods of the physical and engineering sciences applied in biology and medicine, ranging from formalized mathematical theory through experimental science and technological development to practical clinical applications.
Telephone, telegraphy, facsimile, and point-to-point television, by electromagnetic propagation, including radio; wire; aerial, underground, coaxial, and submarine cables; waveguides, communication satellites, and lasers; in marine, aeronautical, space and fixed station services; repeaters, radio relaying, signal storage, and regeneration; telecommunication error detection and correction; multiplexing and carrier techniques; communication switching systems; data communications; and communication theory. In addition to the above, ...
Computer, the flagship publication of the IEEE Computer Society, publishes peer-reviewed technical content that covers all aspects of computer science, computer engineering, technology, and applications. Computer is a resource that practitioners, researchers, and managers can rely on to provide timely information about current research developments, trends, best practices, and changes in the profession.
IEEE Computer Graphics and Applications (CG&A) bridges the theory and practice of computer graphics. From specific algorithms to full system implementations, CG&A offers a strong combination of peer-reviewed feature articles and refereed departments, including news and product announcements. Special Applications sidebars relate research stories to commercial development. Cover stories focus on creative applications of the technology by an artist or ...
2015 Intelligent Systems and Computer Vision (ISCV)
2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society
2016 International Conference on Electrical, Electronics, Communication, Computer and Optimization Techniques (ICEECCOT)
2017 IEEE 5th International Conference on Serious Games and Applications for Health (SeGAH)
2016 IEEE Ecuador Technical Chapters Meeting (ETCM)
Translational Neural Engineering: Bringing Neurotechnology into the Clinics - IEEE Brain Workshop
A Conversation with Danielle Bassett: IEEE TechEthics Interview
APEC 2012 - Dr. Fred Lee Plenary
Scientific Discovery & Deep Brain Stimulation: Jerrold Vitek, MD, PhD
Emerging Technologies for the Control of Human Brain Dynamics: IEEE TechEthics Keynote with Danielle Bassett
APEC 2011 - GaN Based Power Devices in Power Electronics
ICRA Keynote: Dr. Matt Mason
ICASSP 2012 - Opening Ceremony
ICASSP 2012 Plenary - Dr. Stephane Mallat
ICASSP 2011 Trends in Design and Implementation of Signal Processing Systems
ICRA Plenary: Raffaello D'Andrea
ECCE Plenary: Paul Hamilton, part 2
ICASSP 2011 Trends in Multimedia Signal Processing
ECCE Plenary: Pedro Ray, part 2
Q&A with Dr. Jennifer Gelinas: IEEE Brain Podcast, Episode 8
Technology for Health Summit 2017 - Policy Keynote: Ilias Iakovidis
IEEE Innovation Day 2011 - Plenary Address
ECCE Plenary: Pedro Ray, part 1
ECCE Plenary: Richard K. Williams, part 2
In light of the scarcity of both published and freely available acoustic Arabic databases, we propose in this paper an acoustic Arabic database to serve as a reference in the field of automatic Arabic speech recognition. This database is the result of a case study developed to contribute to the automatic diagnosis of speech disorders in Arabic-speaking children; the field work was carried out in collaboration with experts in communication, relying on several multinational Arabic schools to record samples of various Arabic speech dialects under normal conditions. The letter "R" was selected as one of the letters children most commonly struggle with. In this paper we explain the mechanism of the development and design of this database, which is divided into three sub-databases: the first for diagnosing the disorder when the letter "R" occurs at the beginning of the word, the second when it is in the middle of the word, and the last when it is at the end of the word. Each sub-database contains speech recordings from 60 children (30 males and 30 females), with each child repeating the disordered sound five times. We hope that this acoustic database will encourage further work and serve as a reference for future research on Arabic speech recognition in general, and especially on the automatic diagnosis and treatment of speech disorders in children.
Evaluation of the quality of tracheoesophageal (TE) speech using machines instead of human experts can enhance the voice rehabilitation process for patients who have undergone total laryngectomy and voice restoration. Towards the goal of devising a reference-free TE speech quality estimation algorithm, we investigate the efficacy of speech signal features that are used in standard telephone-speech quality assessment algorithms, in conjunction with a recently introduced speech modulation spectrum measure. Tests performed on two TE speech databases demonstrate that the modulation spectral measure and a subset of features in the standard ITU-T P.563 algorithm estimate TE speech quality with better correlation (up to 0.9) than previously proposed features.
Speech disorder refers to the multiple reasons a person may be unable to produce speech sounds fluently. Articulation and phonological disorders are the most common disorders identified among people. Analysis of speech disorders helps provide the information speech-language pathologists need for reliable treatment. Extensive preparation and proper selection of target syllables are required for the treatment of subjects with speech disorders. The present work aims at establishing a minimum assessment protocol to estimate the prevalence of speech disorder. This paper presents a method to develop the recorded syllables with the use of the PRAAT tool. Each sample consists of a 3-4 second recording of the six standard syllables. The samples were obtained using the PRAAT program and recorded in .wav format on 20 different laptops, inside a laboratory with minimal acoustic conditions. These .wav files are then analyzed in the PRAAT tool, which yields the formant frequency listing of all 120 samples. The obtained frequencies of the 120 samples are compared with a reference sample collected in a noise-free acoustic testing laboratory using a sound level meter (type 2250) at a gain of 120 dB. The sample with the least difference from the reference is used for further processing and analysis. Further, the hardware implementation is done by loading the specifications of the best sample onto an external sound card connected to a laptop, which allows the speech samples to be saved directly in .wav format. This protocol helps speech-language pathologists (SLPs) save time in treatment planning and the considerable effort involved in generating words containing target syllables.
Speech impediments affecting children with hearing difficulties and speech disorders require speech therapy and much practice to overcome. To motivate children to practice more, serious games can be used, because children are more inclined to play games. In this paper, we have designed and implemented a serious game in which children can learn to speak specific words that they are expected to know before the age of 7. The game consists of an avatar controlled by the child through speech, with the objective of moving the avatar around the environment to earn coins. The avatar is controlled by voice commands such as Jump, Ahead, Back, Left, Right. Children are guided by an arrow during the game instead of getting help from a therapist or a teacher to reach the next coin. This allows the child to practice for longer hours, compared to clinical approaches under the supervision of a therapist, which are time-limited.
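The command-to-movement loop described above can be sketched in a few lines. The command words follow the abstract; the grid world, class, and method names below are purely illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch: mapping recognized voice commands to avatar
# moves on a 2-D grid, as in the serious game described above.
MOVES = {
    "ahead": (0, 1),
    "back": (0, -1),
    "left": (-1, 0),
    "right": (1, 0),
    "jump": (0, 2),  # assumption: a jump advances two cells
}

class Avatar:
    def __init__(self):
        self.x, self.y = 0, 0
        self.coins = 0

    def handle(self, word, coin_at=None):
        """Apply one recognized voice command; collect a coin if we land on it."""
        dx, dy = MOVES.get(word.lower(), (0, 0))  # unknown words are ignored
        self.x += dx
        self.y += dy
        if coin_at is not None and coin_at == (self.x, self.y):
            self.coins += 1
```

In a real system the `word` argument would come from a speech recognizer restricted to this small command vocabulary, which keeps recognition robust for young or impaired speakers.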
This paper describes a project to create a novel design of a communication tool for individuals with hearing disabilities and speech disorders. It provides a detailed analysis of the engineering and scientific aspects of the system, and the fundamentals taken into account for the social inclusion of such individuals. It also describes a comprehensive study of present and future applications of this technology to provide an enhanced tool that further improves these individuals' communication skills. Morse code is the basis of the proposed technology, which has incorporated feedback from specialists and individuals with disabilities, and is to be developed in the near future into a new communication tool with robust functionality and an ergonomic design.
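Since the tool builds on Morse code, a minimal text-to-Morse encoder shows the underlying mapping. The table is the standard International Morse alphabet; the function name and word separator are illustrative choices:

```python
# International Morse code for the letters a-z (standard alphabet).
MORSE = {
    "a": ".-", "b": "-...", "c": "-.-.", "d": "-..", "e": ".",
    "f": "..-.", "g": "--.", "h": "....", "i": "..", "j": ".---",
    "k": "-.-", "l": ".-..", "m": "--", "n": "-.", "o": "---",
    "p": ".--.", "q": "--.-", "r": ".-.", "s": "...", "t": "-",
    "u": "..-", "v": "...-", "w": ".--", "x": "-..-", "y": "-.--",
    "z": "--..",
}

def to_morse(text):
    """Encode text letter by letter: letters separated by spaces,
    words by ' / '. Characters outside a-z are skipped."""
    words = text.lower().split()
    return " / ".join(
        " ".join(MORSE[c] for c in word if c in MORSE) for word in words
    )
```

For example, `to_morse("SOS")` yields `"... --- ..."`. A practical device would also need the inverse decoder and a timing convention for dots, dashes, and gaps.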
Aphasia is an acquired communication disorder that affects the ability to speak and understand spoken language. The purpose of this project is to create a virtual clinician to help individuals with aphasia practice self-initiated speech in everyday communication situations. The project will gauge the interaction and quality of this virtual clinician against those of a real clinician. The first of these tests was carried out using a prototype virtual-clinician avatar with a human driver; data were collected to compare the efficiency of the client's speech against speech produced with a real clinician. Results suggest that clients respond appropriately to a virtual clinician.
Stuttering is a fluency impairment in verbal speech, identified by involuntary repetitions and prolongations, either voiced or voiceless. A variety of methods are available to aid patients who stutter. One treatment method uses a DAF (Delayed Auditory Feedback) device, which trains patients to speak fluently by playing the patient's own voice back with a slight delay, usually about one tenth of a second. The aim of this paper is to design a DAF device to treat stuttering and relieve patients of the stress it causes. In this article, we used an AVR ATmega128 for the design. A microphone captures the analog samples and an A/D converter converts the analog signals to digital. Finally, a small "anti-stuttering" device that uses hearing-aid technology to deliver Delayed Auditory Feedback (DAF) is designed.
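The core of any DAF device is a fixed delay line: every sample is played back a constant number of samples late. A minimal software sketch of that idea (pure Python and illustrative; the ATmega128 design above would run the equivalent in real time on streaming ADC samples):

```python
from collections import deque

def delayed_feedback(samples, sample_rate, delay_s=0.1):
    """Delay a sample stream by delay_s seconds, i.e. y[n] = x[n - D]
    with D = delay_s * sample_rate; zeros are emitted until the
    delay buffer fills. delay_s=0.1 matches the ~one-tenth-second
    delay typical of DAF devices."""
    d = int(round(delay_s * sample_rate))
    if d == 0:
        return list(samples)
    buf = deque([0.0] * d, maxlen=d)  # ring buffer of the last d samples
    out = []
    for x in samples:
        out.append(buf[0])  # oldest sample: heard d samples late
        buf.append(x)       # newest sample evicts the oldest
    return out
```

At a typical 8 kHz sample rate, a 0.1 s delay corresponds to an 800-sample buffer, which is why modest microcontroller RAM suffices for such a device.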
In this paper, we propose a hybrid technique that integrates audio and visual analysis to automate speech disorder treatment for the Arabic language, suitable for use in many developing countries. The technique is based on audio-visual analysis of the patient's speech. For audio analysis, we use mel-frequency cepstrum coefficients (MFCCs) and linear predictive cepstrum coefficients (LPCCs) as the key features for classifying the audio. In addition, we use visual features, based on the patient's appearance, for the analysis of the patient's video. The audio and visual feature techniques are combined to increase the efficiency of our recognition system. We present a comparative evaluation of both audio and video features, using dynamic time warping (DTW) for the speech features and a histogram-based analysis for the visual features. Finally, we use a neural-network-based classifier to differentiate between normal and abnormal speech. This research presents an expert system with an interactive computerized environment that has the ability to treat patients with speech disorders.
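The DTW step used to compare feature sequences of unequal length can be sketched with the textbook recurrence. This version works on scalar features for brevity; the MFCC/LPCC extraction itself is assumed to happen upstream, and a vector distance would be passed as `dist` in practice:

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic time warping distance between two feature sequences.

    D[i][j] holds the cheapest cumulative cost of aligning the first
    i elements of `a` with the first j of `b`; steps may repeat an
    element of either sequence (insertion/deletion) or advance both.
    """
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(
                D[i - 1][j],      # repeat b[j-1]
                D[i][j - 1],      # repeat a[i-1]
                D[i - 1][j - 1],  # advance both
            )
    return D[n][m]
```

Because DTW absorbs timing differences, a slowly articulated utterance still matches a reference template of the same word, which is what makes it suitable for disordered speech.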
Japan has a rapidly aging society. Concurrently, the number of patients suffering from speech disorders is increasing every year, and the incidence rate rises with age. Those suffering from speech disorders face communication problems in daily conversation. They are often able to communicate with speech substitutes, but these typically do not provide a sufficient sound frequency range to be understood in conversation. Therefore, we proposed a speech support system using body-conducted speech recognition. This system retrieves speech from body-conducted speech via a transfer function, using recognition to select a sub-word sequence and its duration. In this study, we demonstrate the effectiveness of producing clear body-conducted speech using linear predictive coefficients instead of a transfer function. Then, instead of dividing body-conducted speech into syllables in a heuristic manner as in past studies, we used continuous sub-word recognition automatically. The improvement of the generated speech was confirmed in a jury test and articulatory feature analysis.
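Linear predictive coefficients of the kind used above are classically obtained from the signal's short-time autocorrelation via the Levinson-Durbin recursion. A compact pure-Python sketch under that standard formulation (not the paper's specific pipeline):

```python
def autocorr(x, order):
    """Biased autocorrelation r[0..order] of a windowed signal frame."""
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(order + 1)]

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations for LPC coefficients a[1..p]
    such that x[n] is predicted as sum_k a[k] * x[n-k].
    Returns (coefficients, residual prediction error)."""
    a = [0.0] * (order + 1)
    e = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient for this model order.
        k = (r[i] - sum(a[j] * r[i - j] for j in range(1, i))) / e
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        e *= (1.0 - k * k)  # error shrinks as the model order grows
    return a[1:], e
```

For a noiseless first-order decay x[n] = 0.9 x[n-1], the order-1 coefficient recovered this way is approximately 0.9, as expected for an AR(1) process.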
Acoustical measures of vocal function are routinely used in the assessments of disordered voice, and for monitoring the patient's progress over the course of voice therapy. Typically, acoustic measures are extracted from sustained vowel stimuli where short-term and long-term perturbations in fundamental frequency and intensity, and the level of "glottal noise" are used to characterize the vocal function. However, acoustic measures extracted from continuous speech samples may well be required for accurate prediction of abnormal voice quality that is relevant to the client's "real world" experience. In contrast with sustained vowel research, there is relatively sparse literature on the effectiveness of acoustic measures extracted from continuous speech samples. This is partially due to the challenge of segmenting the speech signal into voiced, unvoiced, and silence periods before features can be extracted for vocal function characterization. We propose a joint time-frequency approach for classifying pathological voices using continuous speech signals that obviates the need for such segmentation. The speech signals were decomposed using an adaptive time-frequency transform algorithm, and several features such as the octave max, octave mean, energy ratio, length ratio, and frequency ratio were extracted from the decomposition parameters and analyzed using statistical pattern classification techniques. Experiments with a database consisting of continuous speech samples from 51 normal and 161 pathological talkers yielded a classification accuracy of 93.4%.
No standards are currently tagged "Speech Disorders"