IEEE Organizations related to Saliency Detection

No organizations are currently tagged "Saliency Detection"



Conferences related to Saliency Detection

No conferences are currently tagged "Saliency Detection"


Periodicals related to Saliency Detection

No periodicals are currently tagged "Saliency Detection"


Xplore Articles related to Saliency Detection

Salient object detection using array images

2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2017

Most existing saliency detection methods utilize low-level features to detect salient objects. In this paper, we first verify that the foreground objects in the scene can be an effective cue for saliency detection. We then propose a novel saliency detection algorithm which combines low-level features with high-level object detection results to enhance performance. For extracting the ...


Optimized Image Crop-based Video Retargeting

2018 International SoC Design Conference (ISOCC), 2018

This paper proposes a new video retargeting method that utilizes cropping and bilinear scaling techniques. Specifically, the cropping positions for a given image are determined by using the results of logo and saliency detection, and the strength of the bilinear scaling is determined by the size of the cropped region. As a result, the proposed method effectively reduces distortion of ...


Robust background subtraction method via low-rank and structured sparse decomposition

China Communications, 2018

Background subtraction is a challenging problem in surveillance scenes. Although low-rank and sparse decomposition (LRSD) methods offer an appropriate framework for background modeling, they fail to account for the image's local structure, which is useful for this problem. To address this, we propose a background subtraction method via low-rank and SILTP-based structured sparse decomposition, named LRSSD. In this method, a ...


Contrast enhancement scheme for visual saliency detection

2017 International Conference on Signal Processing and Communication (ICSPC), 2017

Visual saliency detection has become an increasingly popular research topic in recent years because the prediction of attention finds applications in object detection, object recognition, human-computer interfaces, etc. This paper presents bottom-up visual saliency detection with the help of contrast enhancement. Initially, SLIC segmentation is performed as a preprocessing step. A global contrast enhancement scheme is adopted ...


Deep saliency features for video saliency prediction

2018 International Conference on Advanced Systems and Electric Technologies (IC_ASET), 2018

Recently, research in the field of visual saliency estimation has increased in both the neuroscience and computer vision communities. In this context, we present a new video saliency approach based on deep saliency features to highlight the most important objects in videos. Our approach investigates the use of spatio-temporal object candidates for saliency in video data. We extract features using a deep ...



Educational Resources on Saliency Detection

IEEE.tv Videos

No IEEE.tv Videos are currently tagged "Saliency Detection"

IEEE-USA E-Books

  • Salient object detection using array images

    Most existing saliency detection methods utilize low-level features to detect salient objects. In this paper, we first verify that the foreground objects in the scene can be an effective cue for saliency detection. We then propose a novel saliency detection algorithm which combines low-level features with high-level object detection results to enhance performance. For extracting the foreground objects in a scene, we first make use of a camera array to obtain a set of images of the scene from different viewing angles. Based on the array images, we identify the feature points of the objects so as to generate foreground and background feature point cues. Together with a new K-Nearest Neighbor model, a cost function is developed to allow reliable and automatic segmentation of the foreground objects. Outliers in the segmentation are further removed by a low-rank decomposition method. Finally, the detected objects are fused with the low-level object features to generate the saliency map. Experimental results show that the proposed algorithm consistently gives better performance than traditional methods. (A minimal sketch of this final fusion step appears after this list.)

  • Optimized Image Crop-based Video Retargeting

    This paper proposes a new video retargeting method that utilizes cropping and bilinear scaling techniques. Specifically, the cropping positions for a given image are determined by using the results of logo and saliency detection, and the strength of the bilinear scaling is determined by the size of the cropped region. As a result, the proposed method effectively reduces distortion of visually important content and maintains temporal consistency. Experimental results showed that the proposed method achieved the highest preference compared with existing methods in a paired-comparison-based user study. (A rough crop-and-scale sketch follows this list.)

  • Robust background subtraction method via low-rank and structured sparse decomposition

    Background subtraction is a challenging problem in surveillance scenes. Although low-rank and sparse decomposition (LRSD) methods offer an appropriate framework for background modeling, they fail to account for the image's local structure, which is useful for this problem. To address this, we propose a background subtraction method via low-rank and SILTP-based structured sparse decomposition, named LRSSD. In this method, a novel SILTP-inducing sparsity norm is introduced to enhance the structured representation of the foreground region. In addition, saliency detection is employed to provide a rough shape and location of the foreground. The final refined foreground is decided jointly by the sparse component and the attention map. Experimental results on different datasets show its superiority over competing methods, especially under noise and changing illumination. (A toy low-rank/sparse decomposition sketch follows this list.)

  • Contrast enhancement scheme for visual saliency detection

    Visual saliency detection has become an increasingly popular research topic in recent years because the prediction of attention finds applications in object detection, object recognition, human-computer interfaces, etc. This paper presents bottom-up visual saliency detection with the help of contrast enhancement. Initially, SLIC segmentation is performed as a preprocessing step. A global contrast enhancement scheme is adopted in the proposed method. The color distribution map and the contrast enhancement map are multiplied together to obtain the final saliency map. Precision, recall, F-measure, ROC, and AUC metrics are used to validate the results. The benchmark MSRA and ECSSD datasets are used to evaluate the proposed method against state-of-the-art methods. The experimental results demonstrate that the proposed method is computationally efficient. (The global-contrast step is sketched after this list.)

  • Deep saliency features for video saliency prediction

    Recently, research in the field of visual saliency estimation has increased in both the neuroscience and computer vision communities. In this context, we present a new video saliency approach based on deep saliency features to highlight the most important objects in videos. Our approach investigates the use of spatio-temporal object candidates for saliency in video data. We extract features using a deep convolutional neural network (CNN), which has recently had great success in the field of visual recognition. We extract deep features from each video, train a Random Forest, and assign a saliency score to each object candidate. Overall, our deep saliency approach demonstrates strong performance when evaluated on the Fukuchi and SegTrack v2 datasets, in terms of both ROC and PR curves as well as the F-score. (The random-forest scoring step is sketched after this list.)

  • Multi-Path Feature Fusion Network for Saliency Detection

    Recent saliency detection methods have made great progress with fully convolutional networks. However, we find that the resulting saliency maps are usually coarse and fuzzy, especially near the boundaries of salient objects. To deal with this problem, in this paper we exploit a multi-path feature fusion model for saliency detection. The proposed model is a fully convolutional network with raw images as input and saliency maps as output. In particular, we propose a multi-path fusion strategy for deriving the intrinsic features of salient objects. The structure is able to capture low-level visual features and generate boundary-preserving saliency maps. Moreover, a coupled structure module is proposed in our model, which helps to explore the high-level semantic properties of salient objects. Extensive experiments on four public benchmarks indicate that our saliency model is effective and outperforms state-of-the-art methods. (A much-reduced fusion-module sketch follows this list.)

  • Top-Down Saliency Object Localization Based on Deep-Learned Features

    Accurately and efficiently localizing objects in images is a challenging computer vision problem. In this article, a novel top-down fine-grained saliency object localization method based on deep-learned features is proposed, which can localize in the input image the same object as in the query image. The query image and its three subsampled images are used as top-down cues to guide saliency detection. We adapt a Convolutional Neural Network (CNN) using the fast VGG network (VGG-f) and retrain it on the Pascal VOC 2012 dataset. Experiments on the FiFA dataset demonstrate that the proposed algorithm can effectively localize the salient region and find the same object (a human face) as the query. Experiments on the David1 and Face1 sequences further show that the proposed algorithm effectively handles different challenging factors, including appearance and scale variations, shape deformation, and partial occlusion. (A crude feature-matching sketch follows this list.)

  • Visual Saliency Guided High Dynamic Range Image Compression

    Recent years have seen the emergence of visual saliency-based image and video compression for low dynamic range (LDR) content. High dynamic range (HDR) imaging has yet to follow such an approach, as state-of-the-art visual saliency detection models are mainly concerned with LDR content. Although a few HDR saliency detection models have been proposed in recent years, they lack comprehensive validation. Current HDR image compression schemes do not differentiate between salient and non-salient regions, which has been shown to be redundant with respect to the Human Visual System. In this paper, we propose a novel visual saliency guided layered compression scheme for HDR images. The proposed saliency detection model is robust and correlates highly with ground-truth saliency maps obtained from an eye tracker. The results show a reduction in bit-rate of up to 50% while retaining the same high visual quality in the salient regions in terms of the HDR Visual Difference Predictor (HDR-VDP) and the visual saliency-induced index for perceptual image quality assessment (VSI).

  • Saliency Detection using Boundary Aware Regional Contrast Based Seam-map

    Most saliency detection methods use contrast and boundary priors to extract the salient region of an input image. These two approaches are followed in Boundary Aware Regional Contrast Based Visual Saliency Detection (BARC) [1], along with spatial distance information, to achieve state-of-the-art results. In this research, an additional cue is introduced to extract the salient region from an input image: a combination of a seam map and BARC [1] is presented to produce the saliency output. A seam importance map with a boundary prior is also presented to measure the performance of this combination. Experiments against ten state-of-the-art methods reveal that combining the seam information of an input image with BARC [1] yields better saliency output. (The seam-cost computation is sketched after this list.)

  • An Innovative Saliency Guided ROI Selection Model for Panoramic Images Compression

    Saliency detection has been an increasingly important tool for ROI selection in image compression. Most previous work on saliency detection is dedicated to conventional images; however, with the rapid development of VR and AR technology, it is becoming more and more important to estimate visual attention for panoramic images. Meanwhile, panoramic images have more potential for improvement in compression performance than conventional images. In this work, we propose an innovative saliency-guided ROI selection model. Extensive evaluations show the proposed approach outperforms other methods in saliency accuracy, especially for panoramic images. In addition, we improve the compression quality of standard JPEG by using a higher bit rate to encode image regions flagged by our model and a lower bit rate elsewhere in the image. (An ROI-guided JPEG sketch follows this list.)
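
The "Salient object detection using array images" abstract above ends by fusing the detected foreground objects with low-level saliency features. Below is a minimal sketch of that final fusion step only, assuming the foreground mask and the low-level saliency map have already been computed elsewhere; the blending weight and helper name are invented for illustration.

```python
import numpy as np

def fuse_saliency(low_level_map, foreground_mask, alpha=0.6):
    """Blend a low-level saliency map with a binary foreground-object mask.

    low_level_map  : float array in [0, 1], H x W
    foreground_mask: {0, 1} array, H x W, from the (assumed) object segmentation
    alpha          : assumed weight of the object cue; not taken from the paper
    """
    fused = alpha * foreground_mask + (1.0 - alpha) * low_level_map
    # Renormalize to [0, 1] so the result is a valid saliency map.
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)

# Toy inputs standing in for the real pipeline outputs.
h, w = 120, 160
low_level = np.random.rand(h, w)
mask = np.zeros((h, w)); mask[30:90, 50:120] = 1.0
saliency = fuse_saliency(low_level, mask)
print(saliency.shape, saliency.min(), saliency.max())
```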
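
"Optimized Image Crop-based Video Retargeting" crops around visually important content and then applies bilinear scaling. The sketch below is a rough stand-in for that two-step idea: it places a crop window around a saliency-weighted centroid (a simplification of the paper's logo- and saliency-based crop placement) and resizes it bilinearly with OpenCV. The sizes and synthetic inputs are assumptions.

```python
import numpy as np
import cv2

def retarget_frame(frame, saliency, target_w, target_h):
    """Crop around the saliency-weighted center, then bilinearly resize."""
    h, w = saliency.shape
    # Saliency-weighted centroid: a crude stand-in for the paper's crop-position search.
    ys, xs = np.mgrid[0:h, 0:w]
    total = saliency.sum() + 1e-8
    cx = int((xs * saliency).sum() / total)
    cy = int((ys * saliency).sum() / total)
    # Crop window with the target aspect ratio, clamped to the frame.
    crop_w = min(w, int(h * target_w / target_h))
    crop_h = min(h, int(crop_w * target_h / target_w))
    x0 = np.clip(cx - crop_w // 2, 0, w - crop_w)
    y0 = np.clip(cy - crop_h // 2, 0, h - crop_h)
    crop = frame[y0:y0 + crop_h, x0:x0 + crop_w]
    # Bilinear scaling to the target size.
    return cv2.resize(crop, (target_w, target_h), interpolation=cv2.INTER_LINEAR)

frame = (np.random.rand(360, 640, 3) * 255).astype(np.uint8)
sal = np.zeros((360, 640)); sal[100:260, 200:440] = 1.0
out = retarget_frame(frame, sal, 320, 240)
print(out.shape)  # (240, 320, 3)
```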
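
"Robust background subtraction method via low-rank and structured sparse decomposition" builds on splitting a pixels-by-frames matrix into a low-rank background and a sparse foreground. The toy sketch below shows only the generic alternating low-rank/sparse split (singular-value and soft thresholding) that such methods start from, with made-up thresholds; the paper's SILTP-inducing structured norm and saliency-based refinement are not reproduced.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def low_rank_sparse(D, lam=0.05, tau=5.0, iters=30):
    """Split D (pixels x frames) into low-rank background L and sparse foreground S.

    A simple alternating heuristic, not the paper's LRSSD model:
      L <- singular-value thresholding of (D - S)
      S <- entrywise soft thresholding of (D - L)
    """
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        U, sig, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U * np.maximum(sig - tau, 0.0)) @ Vt
        S = soft_threshold(D - L, lam)
    return L, S

# Synthetic "video": a static background plus a small moving blob per frame.
frames = np.tile(np.random.rand(64, 1), (1, 40))      # 64 pixels, 40 frames
for t in range(40):
    frames[t:t + 4, t] += 1.0                         # sparse foreground
L, S = low_rank_sparse(frames)
print(np.linalg.matrix_rank(np.round(L, 3)), np.count_nonzero(S))
```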
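
"Contrast enhancement scheme for visual saliency detection" starts from SLIC superpixels and a global contrast cue. The sketch below covers only a plain global-contrast step (mean Lab color per superpixel, size-weighted color distance to all other superpixels) on a synthetic image; the paper's contrast-enhancement and color-distribution maps are not reproduced.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2lab

def global_contrast_saliency(image, n_segments=150):
    """Per-superpixel global color contrast, spread back to pixel level."""
    labels = slic(image, n_segments=n_segments, compactness=10)
    lab = rgb2lab(image)
    ids = np.unique(labels)
    means = np.array([lab[labels == i].mean(axis=0) for i in ids])
    sizes = np.array([(labels == i).sum() for i in ids], dtype=float)
    weights = sizes / sizes.sum()
    # Contrast of each superpixel = size-weighted Lab distance to all other superpixels.
    dist = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=2)
    contrast = (dist * weights[None, :]).sum(axis=1)
    contrast = (contrast - contrast.min()) / (contrast.max() - contrast.min() + 1e-8)
    saliency = np.zeros(labels.shape)
    for i, c in zip(ids, contrast):
        saliency[labels == i] = c
    return saliency

img = np.random.rand(96, 128, 3)           # stand-in for a real RGB image
sal = global_contrast_saliency(img)
print(sal.shape, sal.min(), sal.max())
```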
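
"Deep saliency features for video saliency prediction" trains a Random Forest on deep features of object candidates and uses it to score saliency. The sketch below assumes the CNN features have already been extracted (random vectors stand in for them) and shows only the regression step with scikit-learn; feature dimensions and labels are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-ins for deep CNN features of spatio-temporal object candidates;
# real features would come from a trained network.
X_train = rng.normal(size=(500, 256))
y_train = rng.uniform(size=500)            # ground-truth saliency of each candidate

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Assign a saliency score to each candidate of a new video.
X_new = rng.normal(size=(20, 256))
scores = model.predict(X_new)
print(scores.shape, float(scores.min()), float(scores.max()))
```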
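
"Multi-Path Feature Fusion Network for Saliency Detection" describes a fully convolutional network whose paths are fused to keep both low-level detail and high-level semantics. The PyTorch module below is a much-reduced, invented illustration of that general idea (two paths, concatenation, bilinear upsampling, 1x1 fusion), not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMultiPathFusion(nn.Module):
    """Toy two-path FCN: a shallow 'detail' path and a downsampled 'semantic' path."""
    def __init__(self):
        super().__init__()
        self.detail = nn.Sequential(                      # keeps full resolution
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True))
        self.semantic = nn.Sequential(                    # coarser, larger receptive field
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(16 + 32, 1, 1)              # 1x1 fusion to a saliency map

    def forward(self, x):
        d = self.detail(x)
        s = self.semantic(x)
        s = F.interpolate(s, size=d.shape[2:], mode="bilinear", align_corners=False)
        return torch.sigmoid(self.fuse(torch.cat([d, s], dim=1)))

net = TinyMultiPathFusion()
sal = net(torch.randn(1, 3, 64, 64))
print(sal.shape)  # torch.Size([1, 1, 64, 64])
```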
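
"Top-Down Saliency Object Localization Based on Deep-Learned Features" guides localization by matching deep features of a query image against the input image. The sketch below is a crude stand-in for that matching step, comparing one query feature vector against a grid of region features by cosine similarity; all features are random placeholders and the grid layout is an assumption.

```python
import numpy as np

def localize_query(query_feat, region_feats):
    """Return the (row, col) of the region most similar to the query feature.

    query_feat  : (D,) deep feature of the query image (assumed precomputed)
    region_feats: (H, W, D) deep features of image regions on a coarse grid
    """
    q = query_feat / (np.linalg.norm(query_feat) + 1e-8)
    r = region_feats / (np.linalg.norm(region_feats, axis=2, keepdims=True) + 1e-8)
    sim = r @ q                              # cosine similarity map, H x W
    row, col = np.unravel_index(np.argmax(sim), sim.shape)
    return (row, col), sim

rng = np.random.default_rng(1)
query = rng.normal(size=512)                 # placeholder for a VGG-f style descriptor
grid = rng.normal(size=(12, 16, 512))
pos, sim_map = localize_query(query, grid)
print(pos, sim_map.shape)
```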
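
"Saliency Detection using Boundary Aware Regional Contrast Based Seam-map" combines BARC with a seam importance cue. The sketch below computes the classic seam-carving ingredients such a cue is built from, namely a gradient-energy map and its cumulative vertical-seam cost via dynamic programming, on a synthetic grayscale image; how BARC [1] and the seam map are actually combined is not reproduced.

```python
import numpy as np

def seam_cost_map(gray):
    """Gradient-magnitude energy and cumulative vertical-seam cost (seam-carving DP)."""
    gy, gx = np.gradient(gray.astype(float))
    energy = np.abs(gx) + np.abs(gy)
    cost = energy.copy()
    h, w = gray.shape
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()   # cheapest seam reaching (i, j)
    return energy, cost

gray = np.random.rand(60, 80)
energy, cost = seam_cost_map(gray)
print(energy.shape, cost.shape, float(cost[-1].min()))
```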
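
"An Innovative Saliency Guided ROI Selection Model for Panoramic Images Compression" allocates more bits to regions flagged as salient. The sketch below illustrates the basic idea with plain JPEG: encode the image twice at different quality settings and keep the high-quality pixels only inside an assumed, hand-made ROI mask. It shows ROI-weighted quality allocation only; a real codec would allocate bits per block rather than store two full images, and this is not the paper's compression scheme.

```python
import numpy as np
import cv2

def roi_guided_jpeg(image, roi_mask, q_roi=90, q_bg=30):
    """Composite a high-quality JPEG inside the ROI with a low-quality JPEG outside."""
    def encode_decode(img, quality):
        ok, buf = cv2.imencode(".jpg", img, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
        return cv2.imdecode(buf, cv2.IMREAD_COLOR), len(buf)

    hi, hi_bytes = encode_decode(image, q_roi)
    lo, lo_bytes = encode_decode(image, q_bg)
    # Keep high-quality pixels only where the ROI mask is set.
    out = np.where(roi_mask[..., None] > 0, hi, lo)
    return out, hi_bytes, lo_bytes

img = (np.random.rand(256, 512, 3) * 255).astype(np.uint8)   # stand-in panorama
mask = np.zeros((256, 512), dtype=np.uint8); mask[64:192, 160:352] = 1
out, hi_bytes, lo_bytes = roi_guided_jpeg(img, mask)
print(out.shape, hi_bytes, lo_bytes)
```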



Standards related to Saliency Detection

No standards are currently tagged "Saliency Detection"