106 resources related to Image Annotation
The conference program will consist of plenary lectures, symposia, workshops and invited sessions on the latest significant findings and developments in all the major fields of biomedical engineering. Submitted papers will be peer reviewed. Accepted high-quality papers will be presented in oral and poster sessions, will appear in the Conference Proceedings, and will be indexed in PubMed/MEDLINE.
2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI 2020)
The IEEE International Symposium on Biomedical Imaging (ISBI) is the premier forum for the presentation of technological advances in theoretical and applied biomedical imaging. ISBI 2020 will be the 17th meeting in this series. The previous meetings have played a leading role in facilitating interaction between researchers in medical and biological imaging. The 2020 meeting will continue this tradition of fostering cross-fertilization among different imaging communities and contributing to an integrative approach to biomedical imaging across all scales of observation.
The International Conference on Image Processing (ICIP), sponsored by the IEEE Signal Processing Society, is the premier forum for the presentation of technological advances and research results in the fields of theoretical, experimental, and applied image and video processing. ICIP 2020, the 27th in the series that has been held annually since 1994, brings together leading engineers and scientists in image and video processing from around the world.
Multimedia technologies, systems and applications for both research and development of communications, circuits and systems, computer, and signal processing communities.
The 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2020) will be held at the Metro Toronto Convention Centre (MTCC), Toronto, Ontario, Canada. SMC 2020 is the flagship conference of the IEEE Systems, Man, and Cybernetics Society. It provides an international forum for researchers and practitioners to report the most recent innovations and developments, summarize the state of the art, and exchange ideas and advances in all aspects of systems science and engineering, human-machine systems, and cybernetics. Advances in these fields have increasing importance in the creation of intelligent environments involving technologies interacting with humans to provide an enriching experience and thereby improve quality of life. Papers related to the conference theme are solicited, including theories, methodologies, and emerging applications. Contributions to theory and practice, including but not limited to the following technical areas, are invited.
2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC), 2019
2018 International Conference on Advances in Computing, Communication Control and Networking (ICACCCN), 2018
2017 International Symposium on Computer Science and Intelligent Controls (ISCSIC), 2017
2018 International Conference on Information Networking (ICOIN), 2018
2017 International Conference on Computing, Communication, Control and Automation (ICCUBEA), 2017
Hamid R Tizhoosh - Fuzzy Image Processing
P2020 Establishing Image Quality Standards for Automotive
Solving Sparse Representation for Image Classification using Quantum D-Wave 2X Machine - IEEE Rebooting Computing 2017
Zohara Cohen AMA EMBS Individualized Health
Broadband IQ, Image Reject, and Single Sideband Mixers: MicroApps 2015 - Marki Microwave
Tapping the Computing Power of the Unconscious Brain
CPIQ Update and the Case for Image Quality Standards in Automotive
IEEE Low-Power Image Recognition Challenge (LPIRC)
Q&A with Ryan Dailey: IEEE Rebooting Computing Podcast, Episode 12
Welcome: Low Power Image Recognition Challenge
My Computer Speaks Colors! Fuzzy Color Spaces for Image Understanding, Description and Retrieval
Low Power Image Recognition: The Challenge Continues
Robotics History: Narratives and Networks Oral Histories: Ray Jarvis
Robotics History: Narratives and Networks Oral Histories: Minoru Asada
Resistive Coupled VO2 Oscillators for Image Recognition - Elisabetta Corti - ICRC 2018
Deeper Neural Networks - Kurt Keutzer - LPIRC 2019
Award-Winning Methods for LPIRC - Tao Sheng - LPIRC 2019
Noise Enhanced Information Systems: Denoising Noisy Signals with Noise
Welcome to IEEE Young Professionals
Bags are not isolated in the multiple-instance active learning process, especially when each image is treated as a bag, because every picture carries inherent background or meta-information, such as the time and place it was taken and its topic, and these attributes create possible associations. With such context associations, we can build an annotation tool that provides a more interactive user experience and thus increases annotation efficiency. In this paper, we propose a context-aware image annotation framework that, in batch-mode multiple-instance active learning, selects the images that are context-related to the query. Experiments show that annotation with the proposed framework takes less time than with traditional ones and improves labeling efficiency.
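The context-related selection step described above can be sketched as a simple metadata-matching ranker. This is an illustrative assumption, not the paper's actual scoring rule: the field names (`time`, `place`, `topic`) and the shared-field count are invented for the example.

```python
# Hypothetical sketch of context-aware batch selection for
# multiple-instance active learning: images that share metadata
# (time, place, topic) with the query are queued for annotation first.
# Field names and the scoring rule are illustrative assumptions.

def context_score(query, candidate):
    """Count how many metadata fields the candidate shares with the query."""
    return sum(1 for k in ("time", "place", "topic")
               if candidate.get(k) == query.get(k))

def select_batch(query, pool, batch_size):
    """Return the batch_size images most context-related to the query."""
    ranked = sorted(pool, key=lambda img: context_score(query, img),
                    reverse=True)
    return ranked[:batch_size]

query = {"time": "2019-07", "place": "Toronto", "topic": "street"}
pool = [
    {"id": 1, "time": "2019-07", "place": "Toronto", "topic": "street"},
    {"id": 2, "time": "2018-01", "place": "Kyoto",   "topic": "food"},
    {"id": 3, "time": "2019-07", "place": "Toronto", "topic": "people"},
]
batch = select_batch(query, pool, batch_size=2)
print([img["id"] for img in batch])  # ids of the two most context-related images
```

In a real active-learning loop this ranking would be combined with an informativeness criterion; here only the context side is shown.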
Due to advances in machine learning and the vast usage of multimedia data, image and video annotation has become quite popular. In the area of image annotation, researchers, and the Microsoft research lab in particular, have achieved nearly 98% accuracy, but the same is not true for video annotation. This paper presents a review of the state-of-the-art tools used for video annotation.
In recent years, personal photos on the internet have become an important part of people's lives. There is much automatic application software for searching photos, but searching still suffers from problems of semantic accuracy. Research has been conducted to satisfy user demands for a semantic model using a set of features or keyword annotation techniques. Keywords in photos give the best evidence of what photos are about; however, they do not always relate to the actual meaning of the photos. For this reason, we propose a textual description with a hierarchical concept and a comparison of the feature set using a hybrid similarity measure. The experimental results indicate that our proposed approach offers significant performance improvements in the interpretation of semantic meanings, with a maximum success rate of 80.4%.
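A hybrid similarity measure of the kind mentioned above can be sketched as a weighted blend of a keyword-set similarity and a visual-feature similarity. The Jaccard/cosine choice and the weight `alpha` are assumptions for illustration, not the measure defined in the paper.

```python
# Illustrative hybrid similarity: alpha * keyword overlap (Jaccard)
# + (1 - alpha) * feature similarity (cosine). All choices here are
# assumptions, not the paper's actual measure.
import math

def jaccard(a, b):
    """Keyword-set overlap in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_similarity(kw_a, kw_b, feat_a, feat_b, alpha=0.5):
    """Weighted blend of keyword and feature similarity."""
    return alpha * jaccard(kw_a, kw_b) + (1 - alpha) * cosine(feat_a, feat_b)

s = hybrid_similarity({"beach", "sunset"}, {"beach", "sea"},
                      [1.0, 0.0], [1.0, 0.0], alpha=0.5)
```

Tuning `alpha` trades off trust in user-supplied keywords against trust in the extracted visual features.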
Recently, techniques for automatically interpreting images or videos through machine learning based on big data have been actively studied. In this paper, we propose a semi-automatic image and video annotation system to generate ground truth information, which is essential for machine learning on images or videos. Unlike conventional methods that generate simple ground truth information manually, the proposed system not only provides various kinds of ground truth information, such as object information, motion information, and event information, but also uses a semi-automatic image and video annotation method for fast generation of ground truth. The ground truth information generated by the proposed system is stored in the metadata database in the form of XML. The implementation results show that the proposed system provides not only fast ground truth annotation but also a wider variety of ground truth information than existing methods.
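Serializing ground truth as XML, as the system above does, can be sketched with the standard library. The element and attribute names below are invented for illustration; the paper's actual schema is not specified in the abstract.

```python
# Minimal sketch of writing per-frame object ground truth as XML.
# The schema (annotation/object/bbox element names) is an assumption,
# not the proposed system's actual metadata format.
import xml.etree.ElementTree as ET

def build_annotation(frame_id, objects):
    """Serialize per-frame ground truth to an XML string."""
    root = ET.Element("annotation", frame=str(frame_id))
    for obj in objects:
        el = ET.SubElement(root, "object", label=obj["label"])
        ET.SubElement(el, "bbox",
                      x=str(obj["x"]), y=str(obj["y"]),
                      w=str(obj["w"]), h=str(obj["h"]))
    return ET.tostring(root, encoding="unicode")

xml_str = build_annotation(7, [{"label": "car", "x": 10, "y": 20, "w": 64, "h": 32}])
print(xml_str)
```

Motion and event records would be additional child elements of the same `annotation` root in this scheme.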
Recently, various multimedia technologies have been developed, which has increased the collection of digital images. In daily life, the popularity of digital cameras and social media has also increased, resulting in huge amounts of shared digital data. Within such a large amount of image data, searching for a specific image is very difficult. To ease searching, dictionary learning has become a popular solution. Feature-based image annotation is a new approach to image search, in which human-readable keywords are assigned to images so that searching becomes easy. In this paper, we present multi-label learning and multi-keyword extraction for automatic image annotation. The framework works in two phases, training and testing. In the training phase, we build a classifier using the extracted features, the mapping of tags to features, and dictionary learning. This classifier is then used to identify labels for a test image. For classification we use the C4.5 classifier and show that its accuracy and efficiency are better than those of a naïve Bayes classifier. The performance of the system is tested on the IAPR TC-12 dataset. Experimental results show that multiple labels and multiple extracted features improve the efficiency of the image annotation framework.
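The core idea of mapping features to tag sets can be sketched with a deliberately simplified 1-nearest-neighbor rule, standing in for the C4.5 classifier the paper trains; features, tags, and data here are invented.

```python
# Toy sketch of feature-based multi-label annotation: a test image
# inherits the tag set of its nearest training image. This 1-NN rule
# is a simplified stand-in for the paper's C4.5 classifier.

def nearest_tags(train, test_feat):
    """Return the tag set of the training image closest to test_feat."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    best = min(train, key=lambda item: dist(item["feat"], test_feat))
    return best["tags"]

train = [
    {"feat": [0.9, 0.1], "tags": {"beach", "sea"}},
    {"feat": [0.1, 0.9], "tags": {"forest", "tree"}},
]
print(nearest_tags(train, [0.8, 0.2]))  # tags of the closest training image
```

A decision-tree classifier would replace the `min(...)` lookup with learned split rules, but the input/output shape (feature vector in, label set out) is the same.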
Crowdsourcing has been widely established as a means to enable human computation at large scale, in particular for tasks that require manual labelling of large sets of data items. Answers obtained from heterogeneous crowd workers are aggregated to obtain a robust result. In this paper, we consider partial-agreement tasks that are common in many applications such as image tagging and document annotation, where items are assigned sets of labels. Going beyond the state of the art, we propose a novel Bayesian nonparametric model to aggregate the partial-agreement answers in a generic way. This model enables us to compute the consensus of partially-sound and partially-complete worker answers, while taking into account mutual relations in labels and different answer sets. An evaluation of our method using real-world datasets reveals that it consistently outperforms the state of the art in terms of precision, recall, and scalability.
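The aggregation problem above can be illustrated with a much simpler baseline than the Bayesian nonparametric model: per-label majority voting over the workers' label sets. This stand-in ignores the label correlations the paper models; data and the quorum rule are invented for the example.

```python
# Per-label majority vote over partial worker answers: a simplified
# stand-in for the paper's Bayesian nonparametric aggregation.
# A label enters the consensus if more than half the workers assigned it.
from collections import Counter

def consensus(answers):
    """answers: list of label sets, one per worker."""
    counts = Counter(label for ans in answers for label in ans)
    quorum = len(answers) / 2
    return {label for label, c in counts.items() if c > quorum}

votes = [{"cat", "indoor"}, {"cat"}, {"cat", "dog"}]
print(consensus(votes))  # labels chosen by a strict majority of workers
```

The paper's contribution is precisely what this baseline misses: weighting workers by reliability and exploiting relations between labels instead of treating each label independently.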
In the field of image distortion, the difficulty of image annotation is mainly reflected in three points: (1) appraisers' evaluation annotations are inconsistent; (2) the boundary between image distortion types is fuzzy; (3) the environment of the tagging work is complex. These three difficulties often cause ambiguity in image distortion annotation results. As an effective approach to ambiguity and uncertainty, we propose in this paper a semi-automatic method based on neighborhood rough sets for distorted images. The aim is to improve annotation accuracy by constructing a global rough set model. Specifically, under the constraint of defined annotation rules, samples are first annotated manually and an approximate neighborhood of each sample is constructed. Then, from the approximate neighborhood coordinates, the coordinates in the upper-approximation and lower-approximation neighborhoods are calculated. Finally, a semantic association between annotation words and images is constructed so as to classify the images. The experimental results show that the method achieves effective results in image distortion classification.
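One reading of the rough-set step above can be sketched as follows: a sample's neighborhood is every sample within radius delta in feature space; the sample belongs to a label's lower approximation if its whole neighborhood carries that label, and to the upper approximation if any neighbor does. This interpretation, the 1-D feature, and the data are all assumptions for illustration.

```python
# Assumed sketch of neighborhood lower/upper approximations for a
# label class; the 1-D feature and delta radius are illustrative.

def neighborhood(samples, i, delta):
    """Indices of samples within distance delta of sample i."""
    return [j for j, s in enumerate(samples)
            if abs(s["x"] - samples[i]["x"]) <= delta]

def approximations(samples, label, delta):
    """Lower approx: whole neighborhood has the label; upper: any neighbor does."""
    lower, upper = set(), set()
    for i in range(len(samples)):
        nbr_labels = {samples[j]["label"] for j in neighborhood(samples, i, delta)}
        if nbr_labels == {label}:
            lower.add(i)
        if label in nbr_labels:
            upper.add(i)
    return lower, upper

samples = [
    {"x": 0.0, "label": "blur"},
    {"x": 0.1, "label": "blur"},
    {"x": 0.2, "label": "noise"},
    {"x": 1.0, "label": "noise"},
]
lo, up = approximations(samples, "blur", delta=0.15)
```

Samples in the boundary region (upper minus lower approximation) are exactly the ambiguous annotations the method targets.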
Significantly outperforming traditional machine learning methods, deep convolutional neural networks have gained increasing popularity in image classification and segmentation. Nevertheless, deep learning-based methods usually require a large amount of training data, which is quite labor-intensive and time-consuming to produce. To deal with the problem of generating training data, we propose in this paper a novel approach that generates image annotations by transferring labels from aerial images to UAV images and refines the annotations using a densely connected CRF model with an embedded naive Bayes classifier. The generated annotations not only present correct semantic labels but also preserve accurate class boundaries. To validate the utility of these automatic annotations, we deploy them as training data for pixel-wise image segmentation and compare the results with segmentation using manual annotations. Experimental results demonstrate that the automatic annotations can achieve segmentation accuracy comparable to that of the manual annotations.
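The label-transfer idea can be sketched as a grid lookup: each UAV pixel takes the class of the aerial-map cell its ground coordinate falls into. The grid stands in for proper georeferencing, the CRF refinement step is omitted, and all names and data are illustrative assumptions.

```python
# Toy label transfer from an aerial classification grid to UAV pixels.
# Real systems georeference both images; this lookup only illustrates
# the transfer direction, and the CRF refinement is not shown.

def transfer_label(aerial_grid, cell_size, ground_xy):
    """Look up the aerial class covering a UAV pixel's ground coordinate."""
    col = int(ground_xy[0] // cell_size)
    row = int(ground_xy[1] // cell_size)
    return aerial_grid[row][col]

aerial_grid = [
    ["road", "building"],
    ["grass", "grass"],
]
label = transfer_label(aerial_grid, cell_size=10.0, ground_xy=(14.0, 3.0))
```

Because the aerial map is coarse, transferred labels are noisy near class boundaries, which is why the paper refines them with a densely connected CRF.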
There are many types of relations between images and texts in native multimedia data on the Web describing, for instance, general subjects, events, persons, etc. We propose a novel automatic multimedia data cleansing method that selects pairs of an image and a piece of text (paragraph or sentence) describing only general subjects. The selected dataset describing the general subject is thus available to be used for annotation learning. Experiments conducted on Wikipedia data confirmed that our method can automatically select image and text pairs describing general subjects.
Semantic segmentation, which aims to understand an image at the pixel level, is an important problem in computer vision. In traditional object detection, each object in an image is detected at the level of its minimum bounding rectangle, whereas in semantic segmentation it is detected at the pixel level, so the segmentation result is more flexible and meaningful. However, the characteristics of a semantic segmentation network can produce over-segmentation, in which a few pixels are misclassified, resulting in a low precision rate. In this paper, we propose a method to enhance the precision rate of a semantic segmentation network. To address over-segmentation, we define confidence-based and semantic-correlation-based outliers. A confidence-based outlier is defined by the confidence value of the semantic segmentation network, weighted by the number of pixels in the segment, and a semantic-correlation-based outlier is defined by distance in the Word2Vec space. If a pixel is determined to be both a confidence-based and a semantic-correlation-based outlier, the pixel is pruned from the segmentation result. We evaluate the proposed method on images from the COCO dataset and show that the f-score, as well as the precision rate, of the semantic segmentation is significantly improved.
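The two-test pruning rule above can be sketched as follows: a segment is dropped only if it is both a confidence outlier (weighted confidence below a threshold) and a semantic outlier (its label's embedding is far from the other labels in the image). The toy 2-D "Word2Vec" vectors and both thresholds are invented for illustration.

```python
# Sketch of the dual-outlier pruning rule: prune a segment only when
# both the confidence test and the semantic-correlation test flag it.
# Embeddings and thresholds here are toy assumptions.
import math

def cosine_dist(u, v):
    """Cosine distance between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def is_pruned(seg, others, vecs, conf_th=0.5, dist_th=0.7):
    """seg: {'label', 'weighted_conf'}; others: remaining labels in the image."""
    conf_outlier = seg["weighted_conf"] < conf_th
    mean_d = sum(cosine_dist(vecs[seg["label"]], vecs[o]) for o in others) / len(others)
    sem_outlier = mean_d > dist_th
    return conf_outlier and sem_outlier

vecs = {"car": [1.0, 0.0], "road": [0.9, 0.2], "banana": [0.0, 1.0]}
seg = {"label": "banana", "weighted_conf": 0.2}
print(is_pruned(seg, ["car", "road"], vecs))
```

Requiring both tests keeps low-confidence but semantically plausible segments (e.g. "road" next to "car"), which a confidence threshold alone would wrongly discard.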