SalGAN360: Visual saliency prediction on 360 degree images with generative adversarial networks

Abstract: Understanding the visual attention of observers on 360° images has gained interest with the booming trend of Virtual Reality applications. Extending existing saliency prediction methods from traditional 2D images to 360° images is not straightforward, owing to the lack of a sufficiently large 360° image saliency database. In this paper, we propose to extend SalGAN, a 2D saliency model based on a generative adversarial network, to SalGAN360 by fine-tuning SalGAN with our new loss function to predict both global and local saliency maps. Our experiments show that SalGAN360 outperforms the tested state-of-the-art methods. © 2018 IEEE.
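The paper defines the exact fine-tuning objective; as an illustration only, a common way to build such a loss is a weighted combination of SalGAN's pixel-wise binary cross-entropy content loss with distribution-level saliency metrics such as KL divergence and the linear correlation coefficient (CC). The function names, terms, and weights below are assumptions for the sketch, not the authors' formulation:

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    # Pixel-wise binary cross-entropy between predicted and ground-truth saliency maps.
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def kl_divergence(pred, target, eps=1e-7):
    # KL divergence, treating each map as a probability distribution over pixels.
    p = target / (target.sum() + eps)
    q = pred / (pred.sum() + eps)
    return float(np.sum(p * np.log(eps + p / (q + eps))))

def cc(pred, target, eps=1e-7):
    # Pearson linear correlation coefficient between the two maps (higher is better).
    p = pred - pred.mean()
    t = target - target.mean()
    return float(np.sum(p * t) / (np.sqrt(np.sum(p ** 2) * np.sum(t ** 2)) + eps))

def combined_loss(pred, target, w_bce=1.0, w_kl=1.0, w_cc=1.0):
    # Hypothetical weighted sum; CC is negated since it is a similarity, not a distance.
    return (w_bce * bce_loss(pred, target)
            + w_kl * kl_divergence(pred, target)
            - w_cc * cc(pred, target))
```

In a fine-tuning loop, such a loss would replace or augment the generator's content loss, so that gradients reward both per-pixel accuracy (BCE) and agreement with the ground-truth saliency distribution (KL, CC).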
Document type:
Conference papers

https://hal-univ-rennes1.archives-ouvertes.fr/hal-02042542
Contributor: Laurent Jonchère
Submitted on: Wednesday, February 20, 2019 - 2:30:28 PM
Last modification on: Thursday, April 25, 2019 - 3:33:07 PM

Citation

F.-Y. Chao, L. Zhang, W. Hamidouche, O. Deforges. SalGAN360: Visual saliency prediction on 360 degree images with generative adversarial networks. 2018 IEEE International Conference on Multimedia and Expo Workshops (ICMEW 2018), Jul 2018, San Diego, United States. pp.8551543, ⟨10.1109/ICMEW.2018.8551543⟩. ⟨hal-02042542⟩
