Saliency Detection using Maximum Symmetric Surround

Author

Radhakrishna Achanta

Abstract

Detection of visually salient image regions is useful for applications like object segmentation, adaptive compression, and object recognition. Recently, full-resolution saliency maps that retain well-defined boundaries have attracted attention. In these maps, boundaries are preserved by retaining substantially more frequency content from the original image than older techniques. However, if the salient regions comprise more than half the pixels of the image, or if the background is complex, the background gets highlighted instead of the salient object. In this paper, we introduce a method for salient region detection that retains the advantages of full-resolution saliency maps with well-defined boundaries while overcoming their shortcomings. Our method exploits features of color and luminance, is simple to implement, and is computationally efficient. We compare our novel algorithm to six state-of-the-art salient region detection methods using publicly available ground truth. Our method outperforms the six algorithms by achieving both higher precision and better recall. We also show an application of our saliency maps in an automatic salient object segmentation scheme using graph-cuts.
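
The core idea can be summarized compactly: each pixel's saliency is the distance in CIELAB space between the (slightly smoothed) pixel value and the mean of its maximum symmetric surround, that is, the largest sub-image centered on the pixel that still fits entirely inside the image borders. The C++ and Matlab packages below are the reference implementations; what follows is only a minimal, single-channel C++ sketch of this idea. The function name msssSaliency and the use of a plain squared difference on one channel are illustrative simplifications; the full method uses all three CIELAB channels, a Gaussian-smoothed pixel value, and the squared Euclidean distance across channels.

// Minimal single-channel sketch of maximum-symmetric-surround saliency.
// Assumption: `lab` holds one CIELAB channel in row-major order.
#include <vector>
#include <algorithm>

std::vector<double> msssSaliency(const std::vector<double>& lab, int width, int height)
{
    // Integral image so that the mean of any rectangle costs O(1).
    std::vector<double> integral((width + 1) * (height + 1), 0.0);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            integral[(y + 1) * (width + 1) + (x + 1)] =
                  lab[y * width + x]
                + integral[y * (width + 1) + (x + 1)]
                + integral[(y + 1) * (width + 1) + x]
                - integral[y * (width + 1) + x];

    std::vector<double> saliency(width * height, 0.0);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Largest symmetric surround centered on (x, y) that fits in the image.
            const int xo = std::min(x, width - 1 - x);
            const int yo = std::min(y, height - 1 - y);
            const int x1 = x - xo, x2 = x + xo + 1;   // columns [x1, x2)
            const int y1 = y - yo, y2 = y + yo + 1;   // rows    [y1, y2)
            const double area = static_cast<double>(x2 - x1) * (y2 - y1);
            const double sum  = integral[y2 * (width + 1) + x2]
                              - integral[y1 * (width + 1) + x2]
                              - integral[y2 * (width + 1) + x1]
                              + integral[y1 * (width + 1) + x1];
            // Saliency = squared difference between the surround mean and the pixel value.
            const double d = sum / area - lab[y * width + x];
            saliency[y * width + x] = d * d;
        }
    }
    return saliency;
}

Pixels near the image center are compared against a surround covering most of the image, so a complex background averages out, while pixels near the borders are compared against small local neighborhoods. This is what allows the map to stay high on large salient objects without highlighting the background.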

Reference

Radhakrishna Achanta and Sabine Susstrunk, Saliency Detection using Maximum Symmetric Surround, International Conference on Image Processing (ICIP), Hong Kong, September 2010.

Download C++ Code

MS Visual Studio 2008 workspace

Download Matlab Source Code

Saliency_MSSS_ICIP2010.zip

Visual Comparison

Saliency maps are compared side by side, left to right: Original, [IT98], [MA03], [HA06], [HO07], [AC08], [AC09], and [MSSS] (ours).

Quantitative comparison

The figure below shows precision-recall curves obtained using the publicly available ground truth. More techniques are compared here than in the paper. The precision-recall curve for MSSS is also higher than in the paper because a better RGB to CIELAB conversion function is used (available in the C++ code).
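
For reference, a standard sRGB to CIELAB conversion (D65 white point, sRGB gamma) looks like the sketch below. This is the textbook formula, not necessarily the exact routine shipped in the C++ package, and the function name srgbToLab is illustrative; the improvement mentioned above presumably concerns details of this conversion, such as applying proper gamma linearization rather than treating sRGB values as linear.

// Sketch of a standard sRGB -> CIELAB conversion (D65 white point).
#include <cmath>

void srgbToLab(double R, double G, double B, double& L, double& a, double& b)
{
    // Inverse sRGB gamma (inputs in [0, 1]).
    auto linearize = [](double c) {
        return (c <= 0.04045) ? c / 12.92 : std::pow((c + 0.055) / 1.055, 2.4);
    };
    const double r = linearize(R), g = linearize(G), bl = linearize(B);

    // Linear RGB -> XYZ (sRGB primaries, D65).
    const double X = 0.4124564 * r + 0.3575761 * g + 0.1804375 * bl;
    const double Y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * bl;
    const double Z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * bl;

    // Normalize by the D65 reference white.
    const double xr = X / 0.95047, yr = Y / 1.00000, zr = Z / 1.08883;

    // XYZ -> CIELAB.
    auto f = [](double t) {
        return (t > 0.008856) ? std::cbrt(t) : (7.787 * t + 16.0 / 116.0);
    };
    const double fx = f(xr), fy = f(yr), fz = f(zr);
    L = 116.0 * fy - 16.0;
    a = 500.0 * (fx - fy);
    b = 200.0 * (fy - fz);
}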

Other saliency detection methods

[IT98] L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 20, no. 11, pp. 1254–1259, November 1998.
[MA03] Y.-F. Ma and H.-J. Zhang, “Contrast-based image attention analysis by using fuzzy growing,” ACM International Conference on Multimedia, pp. 374–381, November 2003.
[HA06] J. Harel, C. Koch, and P. Perona, “Graph-based visual saliency,” Advances in Neural Information Processing Systems (NIPS), pp. 545–552, 2007.
[HO07] X. Hou and L. Zhang, “Saliency detection: A spectral residual approach,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–8, June 2007.
[GR07] M. Mancas, B. Gosselin, and B. Macq, “Perceptual image representation,” Journal of Image and Video Processing, vol. 2007, no. 2, pp. 3–3, 2007.
[AIM07] N. Bruce and J. Tsotsos, “Attention based on information maximization,” Journal of Vision, vol. 7, no. 9, pp. 950–950, June 2007.
[AC08] R. Achanta, F. Estrada, P. Wils, and S. Susstrunk, “Salient region detection and segmentation,” International Conference on Computer Vision Systems (ICVS), vol. 5008, pp. 66–75, 2008.
[SUN08] L. Zhang, M. H. Tong, T. K. Marks, H. Shan, and G. W. Cottrell, “SUN: A Bayesian framework for saliency using natural statistics,” Journal of Vision, vol. 8, no. 7, pp. 1–20, December 2008.
[AC09] R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk, “Frequency-tuned salient region detection,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1597–1604, June 2009.
[SWQ09] P. Bian and L. Zhang, “Biological plausibility of spectral domain approach for spatiotemporal visual saliency,” Advances in Neuro-Information Processing, vol. 5506/2009, pp. 251–258, 2009.