MPEG 7 Standard
936 resources related to MPEG 7 Standard
- Topics related to MPEG 7 Standard
- IEEE Organizations related to MPEG 7 Standard
- Conferences related to MPEG 7 Standard
- Periodicals related to MPEG 7 Standard
- Most published Xplore authors for MPEG 7 Standard
The conference program will consist of plenary lectures, symposia, workshops and invited sessions on the latest significant findings and developments in all the major fields of biomedical engineering. Submitted papers will be peer reviewed. Accepted high quality papers will be presented in oral and poster sessions, will appear in the Conference Proceedings and will be indexed in PubMed/MEDLINE.
The International Conference on Image Processing (ICIP), sponsored by the IEEE Signal Processing Society, is the premier forum for the presentation of technological advances and research results in the fields of theoretical, experimental, and applied image and video processing. ICIP 2020, the 27th in the series that has been held annually since 1994, brings together leading engineers and scientists in image and video processing from around the world.
The International Conference on Consumer Electronics (ICCE) is soliciting technical papers for oral and poster presentation at ICCE 2018. ICCE has a strong conference history coupled with a tradition of attracting leading authors and delegates from around the world. Papers reporting new developments in all areas of consumer electronics are invited. Topics around the major theme will be the content of special sessions and tutorials.
Multimedia technologies, systems and applications for both research and development of communications, circuits and systems, computer, and signal processing communities.
The ICASSP meeting is the world's largest and most comprehensive technical conference focused on signal processing and its applications. The conference will feature world-class speakers, tutorials, exhibits, and over 50 lecture and poster sessions.
Speech analysis, synthesis, coding, speech recognition, speaker recognition, language modeling, speech production and perception, and speech enhancement. In audio: transducers, room acoustics, active sound control, human audition, analysis/synthesis/coding of music, and consumer audio. The scope of the transactions includes SPEECH PROCESSING - transmission and storage of speech signals; speech coding; speech enhancement and noise reduction; ...
Broadcast technology, including devices, equipment, techniques, and systems related to broadcast technology, including the production, distribution, transmission, and propagation aspects.
Video A/D and D/A, display technology, image analysis and processing, video signal characterization and representation, video compression techniques and signal processing, multidimensional filters and transforms, analog video signal processing, neural networks for video applications, nonlinear video signal processing, video storage and retrieval, computer vision, packet video, high-speed real-time circuits, VLSI architecture and implementation for video technology, multiprocessor systems--hardware and software-- ...
The design and manufacture of consumer electronics products, components, and related activities, particularly those used for entertainment, leisure, and educational purposes
CCNC 2006. 2006 3rd IEEE Consumer Communications and Networking Conference, 2006., 2006
Proceedings. International Conference on Image Processing, 2002
2008 9th Symposium on Neural Network Applications in Electrical Engineering, 2008
2009 Digest of Technical Papers International Conference on Consumer Electronics, 2009
2007 Chinese Control Conference, 2007
IEEE Standard 1680: An Incentive to Design Greener Computers (with Portuguese subtitles)
2011 IEEE Awards James H. Mulligan, Jr. Education Medal - Raj Mittra
APEC 2011-NXP at APEC 2011
The Full Spectrum: Wireless Power Roundup
Advances in MgB2 - ASC-2014 Plenary series - 7 of 13 - Wednesday 2014/8/13
Marks and Stanwood: WirelessMAN Inside the IEEE 802.16 Standard
Standards In Fog Computing - Tao Zhang and John Zao, Fog World Congress 2018
A 20dBm Configurable Linear CMOS RF Power Amplifier for Multi-Standard Transmitters: RFIC Industry Showcase
Weizmann Institute Interviews, part 7
USB FAST CHARGING
The Josephson Effect: The Josephson Volt
Q&A with Robert Voigt: IEEE Rebooting Computing Podcast, Episode 7
5G Cellular: It Will Work!
The Vienna LTE-A Downlink Link-Level Simulator
Multi-Standard 5Gbps to 28.2Gbps Adaptive, Single Voltage SerDes Transceiver with Analog FIR and 2-Tap Unrolled DFE in 28nm CMOS: RFIC Interactive Forum 2017
Panel Presentation: Yaniv Giat - ETAP Tel Aviv 2015
IEEE Standard 1680: An Incentive to Design Greener Computers
Prelude to IMS 2013
High Efficiency Supply-Modulated RF Power Amplifier for Handset Applications
The emerging MPEG-7 standard embodies a visual descriptor that is associated with the dominant colors of an image. A threshold adaptation method for region-based image and video segmentation that takes advantage of the MPEG-7 dominant color descriptor is presented. This method enables assignment of region-growing parameters without any low-level processing. In the standard, it is proposed that the dominant colors be extracted by clustering of color histograms. This property is used to determine color homogeneity, which is formulated into the Lorentzian-based color distance norm and corresponding thresholds. The proposed algorithm is compared with other region growing algorithms, and results show that the threshold adaptation performs faster and more robustly.
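The homogeneity test described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the specific form of the Lorentzian norm, the `gamma` parameter, and the threshold value are all assumptions chosen for demonstration.

```python
import math

def lorentzian_color_distance(c1, c2, gamma=10.0):
    """Lorentzian-style distance between two RGB colors.

    The log(1 + |d|/gamma) form saturates for large channel
    differences, which makes the norm robust to outlier pixels.
    (The exact form and gamma value are illustrative assumptions.)
    """
    return sum(math.log(1.0 + abs(a - b) / gamma) for a, b in zip(c1, c2))

def is_homogeneous(pixel, dominant_color, threshold=1.5):
    """Region-growing test: accept a pixel into a region when its
    Lorentzian distance to the region's dominant color stays below
    a (hypothetical) adaptive threshold."""
    return lorentzian_color_distance(pixel, dominant_color) < threshold

# A pixel close to the dominant color passes; a distant one fails.
dominant = (200, 30, 30)
print(is_homogeneous((205, 28, 35), dominant))   # small deviation
print(is_homogeneous((20, 200, 220), dominant))  # large deviation
```

The appeal of a saturating norm here is that a single strongly deviating channel does not dominate the decision the way it would under a squared Euclidean distance.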
In this paper the edge histogram descriptor, the scalable colour descriptor and the colour layout descriptor defined in the MPEG-7 standard are used for image semantic characterization. A comparative study of the performance and reliability of image classification based on these descriptors is made. For that, classification methods such as neural networks and k-nearest neighbors were used to detect relevant semantic features in images. The descriptors are used individually and combined with different multimodal techniques. A set of 460 test images and a set of 320 training images, selected from the TRECVID 2008 development sound and vision database, were used.
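The k-nearest-neighbors step used above can be sketched in a few lines. The feature vectors and class labels below are toy stand-ins for concatenated MPEG-7 descriptor values, not data from the paper.

```python
import math
from collections import Counter

def euclidean(a, b):
    """Plain Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(query, training, k=3):
    """Classify a descriptor vector by majority vote among its k
    nearest training vectors. `training` is a list of
    (vector, label) pairs."""
    nearest = sorted(training, key=lambda tl: euclidean(query, tl[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy "descriptors": 3-D vectors standing in for MPEG-7 features
# (illustrative values and labels only).
train = [
    ((0.90, 0.10, 0.20), "outdoor"), ((0.80, 0.20, 0.10), "outdoor"),
    ((0.85, 0.15, 0.15), "outdoor"),
    ((0.10, 0.90, 0.80), "indoor"),  ((0.20, 0.80, 0.90), "indoor"),
    ((0.15, 0.85, 0.85), "indoor"),
]
print(knn_classify((0.87, 0.12, 0.18), train))  # near the "outdoor" cluster
```

Combining descriptors "multimodally", as the abstract mentions, would amount to concatenating or separately weighting the per-descriptor distances before the vote.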
The increasing amount of home video content has resulted in the need for efficient systems to manage it. In this paper, we discuss the design of a home video management system that relies on several semantic web technologies. The proposed system first segments video into shots. Next, semantic information is automatically extracted from key frames that are representative of the shots found. The extracted information is subsequently represented using the Resource Description Framework (RDF), satisfying the constraints of a Web Ontology Language (OWL) schema. The use of RDF facilitates interoperability among different sources of metadata and makes the information understandable to machines. Further, the use of RDF in our system design also allows using the SPARQL Protocol and RDF Query Language (SPARQL) to query our metadata and retrieve the video content the user is interested in.
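The RDF-plus-SPARQL idea can be illustrated with a dependency-free sketch: shot metadata stored as subject-predicate-object triples and matched with a SPARQL-like pattern query. The predicate names and shot identifiers are hypothetical, not taken from an actual OWL schema.

```python
# Shot metadata as subject-predicate-object triples, mimicking RDF.
# (Predicate and resource names are hypothetical examples.)
triples = [
    ("shot1", "rdf:type", "Shot"),
    ("shot1", "depicts", "birthday_party"),
    ("shot1", "startTime", "00:00:00"),
    ("shot2", "rdf:type", "Shot"),
    ("shot2", "depicts", "beach"),
    ("shot2", "startTime", "00:01:12"),
]

def query(triples, s=None, p=None, o=None):
    """Match triples against a pattern; None acts like a SPARQL
    variable and matches anything."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Roughly: SELECT ?shot WHERE { ?shot depicts beach }
beach_shots = [s for s, _, _ in query(triples, p="depicts", o="beach")]
print(beach_shots)  # ['shot2']
```

A real implementation would use an RDF library and a SPARQL endpoint, but the retrieval step reduces to exactly this kind of triple-pattern matching.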
A novel image encoding algorithm based on lower trees is presented to address problems such as low speed, large memory requirements, and repeated calculation in advanced compression coding schemes. It is an efficient method for grouping the coefficients, coding them, and reconstructing them based on the correlation of the wavelet coefficients across resolutions. An improved algorithm is also proposed. The experimental results show that the improved algorithm increases the coding speed, reduces memory usage, and improves the quality of the recovered image, making it an efficient method for image encoding.
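The grouping idea behind tree-based wavelet coders can be sketched as a subtree significance test: if a coefficient and all of its descendants fall below a threshold, the whole group can be coded with a single symbol. This is the general zerotree/lower-tree principle only, under an assumed nested-dict tree layout, not the paper's specific algorithm.

```python
def subtree_insignificant(tree, threshold):
    """Return True when every coefficient in a wavelet subtree is
    below the threshold, so the whole group can be coded with one
    symbol (the core idea behind zerotree/lower-tree coders).

    `tree` is a nested dict: {"coef": value, "children": [...]}.
    """
    if abs(tree["coef"]) >= threshold:
        return False
    return all(subtree_insignificant(c, threshold)
               for c in tree.get("children", []))

# Toy subtree: a small parent whose descendants are also small --
# codable as a single "insignificant" symbol.
quiet = {"coef": 1.0, "children": [
    {"coef": 0.5, "children": []},
    {"coef": -0.8, "children": []},
]}
# Same structure, but one descendant is significant.
loud = {"coef": 1.0, "children": [
    {"coef": 9.0, "children": []},
]}
print(subtree_insignificant(quiet, threshold=2.0))  # True
print(subtree_insignificant(loud, threshold=2.0))   # False
```

The speed and memory gains the abstract reports come from exactly this kind of pruning: whole subtrees are dispatched at once instead of visiting each coefficient repeatedly.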
With the increasing demand for content based manipulation of ever growing stores of audio data and the emergence of MPEG-7 has come the need for structured audio representations. However, while the necessity of such a representation has been recognised and, to some extent, its essential features have been identified, its actual development and implementation have generally been relegated as problems for another time or person to solve. This paper attempts to address the shortfall by defining an audio structure that will allow content-based manipulation of audio at the level of audio objects. The paper then summarises the processes required to generate such a structure. Further, details are provided as to how the second level of this structure can be derived from a low-level perceptually based audio representation previously developed by the authors to satisfy the requirements at the lowest level of the audio structure. Finally, initial experimental results are presented.
Presents a framework to generate digest video clips from content managed by MPEG-7-based metadata, personalized by the profiles of individual users. Mobile users expect to receive condensed information instantly, so it is important to generate short video clips from a long image sequence. Unstructured video content should therefore be managed with standard metadata. First, we propose a method to assign time-varying values to indexes that represent the content of each scene. Next, content profiles are estimated so as to generate typical digest video clips that meet similar demands from many users. In conclusion, our approach does not generate completely personalized video for all users, but provides some typical digest video clips for users without well-learned user profiles, enabling them to choose their favourite ones, which can be combined in individual requests from users.
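The digest-generation step can be sketched as selecting the highest-scoring time intervals under a length budget. The segment tuples and their scores are hypothetical stand-ins for the per-interval index values described above.

```python
def make_digest(segments, budget_sec):
    """Greedily pick the highest-scoring segments until the digest
    length budget is exhausted, then return them in timeline order.

    `segments` is a list of (start_sec, duration_sec, score) tuples,
    where the score stands in for a per-interval content-index value.
    """
    chosen, used = [], 0.0
    for seg in sorted(segments, key=lambda s: s[2], reverse=True):
        if used + seg[1] <= budget_sec:
            chosen.append(seg)
            used += seg[1]
    return sorted(chosen)  # play back in original temporal order

segments = [
    (0, 10, 0.20), (10, 10, 0.90), (20, 10, 0.50), (30, 10, 0.95),
]
digest = make_digest(segments, budget_sec=20)
print(digest)  # [(10, 10, 0.9), (30, 10, 0.95)]
```

Swapping in a different scoring function per user profile is what turns this generic digest into one of the "typical" personalized clips the abstract describes.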
A personalized video summary is dynamically generated in our video personalization and summary system based on user preference and usage environment. The three-tier personalization system adopts a server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. The process includes shot-to-sentence alignment, summary segment selection, and user preference matching and propagation. As a result, the relevant visual shot and audio sentence segments are aggregated and composed into a personalized video summary.
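The preference-matching step can be sketched as scoring each segment by how many user keywords appear in its annotations or transcript. The segment records, field names, and keywords below are toy assumptions, a stand-in for the paper's matching-and-propagation process.

```python
def score_segment(annotations, transcript, preferences):
    """Score a segment by how many user-preference keywords appear
    in its shot annotations or its speech transcript (a toy
    stand-in for the paper's matching step)."""
    words = set(annotations) | set(transcript.lower().split())
    return sum(1 for kw in preferences if kw in words)

def personalize(segments, preferences, top_n=2):
    """Return the ids of the top-n segments by preference score."""
    ranked = sorted(segments,
                    key=lambda s: score_segment(s["annotations"],
                                                s["transcript"],
                                                preferences),
                    reverse=True)
    return [s["id"] for s in ranked[:top_n]]

segments = [
    {"id": "seg1", "annotations": ["goal", "stadium"],
     "transcript": "an amazing goal in the final minute"},
    {"id": "seg2", "annotations": ["interview"],
     "transcript": "the coach answers questions"},
    {"id": "seg3", "annotations": ["goal", "celebration"],
     "transcript": "the crowd celebrates the goal"},
]
print(personalize(segments, preferences={"goal", "celebration"}))
```

Because both the visual annotations and the aligned sentence transcript feed the same score, a segment can be selected even when only one modality matches the user's interests.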
Image processing is a powerful technology that can be used to analyze an image for many useful purposes. However, software like this is often out of the typical user's reach; ilmage directly confronts this problem. The ilmage application takes the sophisticated technologies of image analysis and identification based on MPEG-7 image feature tools and makes them readily available on the go. Built with the iPhone Software Development Kit, ilmage is an application available on the world's most popular mobile device. The user simply takes a photo using the iPhone's built-in camera, and within seconds similar images with relevant information from our database appear.
In this paper, a new method to calculate the similarity among images using dominant color descriptor is discussed. Using earth mover's distance (EMD), better retrieval results can be obtained compared with those obtained from the original MPEG-7 reference software (XM) [Text of ISO/IEC 15938-6/FDIS Information Technology-Multimedia content description interface-Part 6: Reference Software]. To further improve the retrieval accuracy, texture information from edge histogram descriptor is added. In order to reduce the retrieval time, two different methods which can prune the images far from the query image are discussed. One is the lower bound of EMD, while the other is the M-tree index based on EMD distance. Experiments show that the lower bound is easier to implement and more efficient than the M-tree.
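The lower-bound pruning idea can be illustrated with the centroid distance: when two dominant-color signatures have equal total weight, the distance between their weighted mean colors is a known lower bound on the EMD, so candidates whose bound already exceeds the current best match can be skipped without computing the full EMD. The signatures below are made-up examples, and this sketch shows only the bound, not the EMD itself or the M-tree alternative.

```python
import math

def centroid(signature):
    """Weighted mean color of a dominant-color signature given as
    [(rgb_tuple, weight), ...] with weights summing to 1."""
    return tuple(sum(w * c[i] for c, w in signature) for i in range(3))

def emd_lower_bound(sig_a, sig_b):
    """Euclidean distance between signature centroids: a lower bound
    on the EMD for equal-total-weight signatures, usable to prune
    candidates far from the query before any full EMD computation."""
    ca, cb = centroid(sig_a), centroid(sig_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(ca, cb)))

# Made-up dominant-color signatures.
query_sig = [((200, 40, 40), 0.7), ((30, 30, 30), 0.3)]
far_sig   = [((60, 180, 60), 1.0)]
near_sig  = [((195, 45, 45), 0.7), ((25, 35, 25), 0.3)]

# The far candidate's bound is much larger, so it could be pruned
# without ever running the full EMD.
print(emd_lower_bound(query_sig, far_sig) > emd_lower_bound(query_sig, near_sig))
```

This matches the abstract's observation that the lower bound is easy to implement: it needs only one vector subtraction per candidate, whereas the M-tree requires building and maintaining an index structure.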
No standards are currently tagged "MPEG 7 Standard"