Y. Zhao, J. Ma, X. Li, and J. Zhang, Saliency detection and deep learning-based wildfire identification in UAV imagery, Sensors, vol.18, p.712, 2018.

J. C. van Gemert, C. R. Verschoor, P. Mettes, K. Epema, L. P. Koh et al., Nature conservation drones for automatic localization and counting of animals, Workshop at the European Conference on Computer Vision, Zurich, Switzerland; Springer, pp.255-270, 2014.

S. Postema, News Drones: An Auxiliary Perspective

A. O. Agbeyangi, J. O. Odiete, and A. B. Olorunlomerue, Review on UAVs used for aerial surveillance, J. Multidiscip. Eng. Sci. Technol, vol.3, pp.5713-5719, 2016.

L. Lee-Morrison, State of the Art Report on Drone-Based Warfare; Division of Art History and Visual Studies, Department of Arts and Cultural Sciences, 2014.

Y. Zhou, D. Tang, H. Zhou, X. Xiang, and T. Hu, Vision-Based Online Localization and Trajectory Smoothing for Fixed-Wing UAV Tracking a Moving Target, Proceedings of the IEEE International Conference on Computer Vision Workshops, 2019.

P. Zhu, D. Du, L. Wen, X. Bian, H. Ling et al., VisDrone-VID2019: The Vision Meets Drone Object Detection in Video Challenge Results, Proceedings of the IEEE International Conference on Computer Vision Workshops, 2019.

W. G. Aguilar, M. A. Luna, J. F. Moya, V. Abad, H. Ruiz et al., Pedestrian detection for UAVs using cascade classifiers and saliency maps, International Work-Conference on Artificial Neural Networks, Cádiz, Spain; Springer, pp.563-574, 2017.

T. Dang, S. Khattak, C. Papachristos, and K. Alexis, Anomaly Detection and Cognizant Path Planning for Surveillance Operations using Aerial Robots, Proceedings of the 2019 International Conference on Unmanned Aircraft Systems (ICUAS), pp.667-673, 2019.

A. Edney-Browne, Vision, visuality, and agency in the US drone program, in Technology and Agency in International Relations, p.88, 2019.

V. Krassanakis, M. Perreira Da Silva, and V. Ricordel, Monitoring Human Visual Behavior during the Observation of Unmanned Aerial Vehicles (UAVs) Videos, Drones, vol.2, p.36, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01928841

I. P. Howard and B. Rogers, Depth perception, in Stevens' Handbook of Experimental Psychology, vol.6, pp.77-120, 2002.

T. Foulsham, A. Kingstone, and G. Underwood, Turning the world around: Patterns in saccade direction vary with picture orientation, Vis. Res, vol.48, pp.1777-1790, 2008.

C. Papachristos, S. Khattak, F. Mascarich, T. Dang, and K. Alexis, Autonomous Aerial Robotic Exploration of Subterranean Environments relying on Morphology-aware Path Planning, Proceedings of the 2019 International Conference on Unmanned Aircraft Systems (ICUAS), pp.299-305, 2019.

L. Itti and C. Koch, Computational modelling of visual attention, Nat. Rev. Neurosci, vol.2, pp.194-203, 2001.

F. Katsuki and C. Constantinidis, Bottom-up and top-down attention: different processes and overlapping neural systems, Neuroscientist, vol.20, pp.509-521, 2014.

S. Krasovskaya and W. J. MacInnes, Salience Models: A Computational Cognitive Neuroscience Review, Vision, vol.3, p.56, 2019.

Y. Rai, P. Le Callet, and G. Cheung, Quantifying the relation between perceived interest and visual salience during free viewing using trellis based optimization, Proceedings of the 2016 IEEE 12th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP), pp.1-5, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01676957

M. Kümmerer, T. S. Wallis, and M. Bethge, Saliency benchmarking made easy: Separating models, maps and metrics, Proceedings of the European Conference on Computer Vision (ECCV), pp.770-787, 2018.

N. Riche, M. Duvinage, M. Mancas, B. Gosselin, and T. Dutoit, Saliency and human fixations: State-of-the-art and study of comparison metrics, Proceedings of the IEEE International Conference on Computer Vision, pp.1153-1160, 2013.

C. Guo and L. Zhang, A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression, IEEE Trans. Image Process, vol.19, pp.185-198, 2009.

S. D. Jain, B. Xiong, and K. Grauman, FusionSeg: Learning to combine motion and appearance for fully automatic segmentation of generic objects in videos, Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, pp.2117-2126, 2017.

W. Wang, J. Shen, and L. Shao, Video salient object detection via fully convolutional networks, IEEE Trans. Image Process, vol.27, pp.38-49, 2017.

G. Li, Y. Xie, T. Wei, K. Wang, and L. Lin, Flow guided recurrent neural encoder for video salient object detection, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.3243-3252, 2018.

O. Le Meur, A. Coutrot, Z. Liu, P. Rämä, A. Le Roch et al., Visual attention saccadic models learn to emulate gaze patterns from childhood to adulthood, IEEE Trans. Image Process, vol.26, pp.4777-4789, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01650322

T. T. Brunyé, S. B. Martis, C. Horner, J. A. Kirejczyk, and K. Rock, Visual salience and biological motion interact to determine camouflaged target detectability, Appl. Ergon, vol.73, pp.1-6, 2018.

A. F. Perrin, L. Zhang, and O. Le Meur, How well current saliency prediction models perform on UAVs, International Conference on Computer Analysis of Images and Patterns, pp.311-323, 2019.
URL : https://hal.archives-ouvertes.fr/hal-02265047

M. Bindemann, Scene and screen center bias early eye movements in scene viewing, Vis. Res, vol.50, pp.2577-2587, 2010.

P. H. Tseng, R. Carmi, I. G. Cameron, D. P. Munoz, and L. Itti, Quantifying center bias of observers in free viewing of dynamic natural scenes, J. Vis, vol.9, p.4, 2009.

A. van Opstal, K. Hepp, Y. Suzuki, and V. Henn, Influence of eye position on activity in monkey superior colliculus, J. Neurophysiol, vol.74, pp.1593-1610, 1995.

B. W. Tatler, The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions, J. Vis, vol.7, p.4, 2007.

O. Le Meur and Z. Liu, Saccadic model of eye movements for free-viewing condition, Vis. Res, vol.116, pp.152-164, 2015.
URL : https://hal.archives-ouvertes.fr/hal-01204682

T. Vigier, M. Perreira Da Silva, and P. Le Callet, Impact of visual angle on attention deployment and robustness of visual saliency models in videos: From SD to UHD, Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), pp.689-693, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01438356

K. Zhang and Z. Chen, Video Saliency Prediction based on Spatial-temporal Two-stream Network, IEEE Trans. Circuits Syst. Video Technol, 2018.

O. Le Meur, P. Le Callet, D. Barba, and D. Thoreau, A coherent computational approach to model bottom-up visual attention, IEEE Trans. Pattern Anal. Mach. Intell, vol.28, pp.802-817, 2006.
URL : https://hal.archives-ouvertes.fr/hal-00669578

Z. Bylinskii, T. Judd, A. Oliva, A. Torralba, and F. Durand, What Do Different Evaluation Metrics Tell us about Saliency Models?, IEEE Trans. Pattern Anal. Mach. Intell, vol.41, pp.740-757, 2019.

M. Paglin and A. M. Rufolo, Heterogeneous human capital, occupational choice, and male-female earnings differences, J. Labor Econ, vol.8, pp.123-144, 1990.

K. A. Ehinger, B. Hidalgo-Sotelo, A. Torralba, and A. Oliva, Modelling search for people in 900 scenes: A combined source model of eye guidance, Vis. Cogn, vol.17, pp.945-978, 2009.

H. Liu and I. Heynderickx, Studying the added value of visual attention in objective image quality metrics based on eye movement data, Proceedings of the 2009 16th IEEE International Conference on Image Processing, pp.3097-3100, 2009.

T. Judd, F. Durand, and A. Torralba, A Benchmark of Computational Models of Saliency to Predict Human Fixations; MIT Technical Report, 2012.

K. T. Ma, T. Sim, and M. Kankanhalli, VIP: A unifying framework for computational eye-gaze research, Proceedings of the International Workshop on Human Behavior Understanding, pp.209-222, 2013.

K. Koehler, F. Guo, S. Zhang, and M. P. Eckstein, What do saliency models predict?, J. Vis, vol.14, p.14, 2014.

A. Borji and L. Itti, CAT2000: A large scale fixation dataset for boosting saliency research, arXiv, 2015.

Z. Bylinskii, P. Isola, C. Bainbridge, A. Torralba, and A. Oliva, Intrinsic and extrinsic effects on image memorability, Vis. Res, vol.116, pp.165-178, 2015.

S. Fan, Z. Shen, M. Jiang, B. L. Koenig, J. Xu et al., Emotional attention: A study of image sentiment and visual attention, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.7521-7531, 2018.

M. B. McCamy, J. Otero-Millan, L. L. Di Stasi, S. L. Macknik, and S. Martinez-Conde, Highly informative natural scene regions increase microsaccade production during visual scanning, J. Neurosci, vol.34, pp.2956-2966, 2014.

Y. Gitman, M. Erofeev, D. Vatolin, A. Bolshakov, and A. Fedorov, Semiautomatic visual-attention modeling and its application to video compression, Proceedings of the 2014 IEEE International Conference on Image Processing, pp.1105-1109, 2014.

A. Coutrot and N. Guyader, How saliency, faces, and sound influence gaze in dynamic social scenes, J. Vis, vol.14, p.5, 2014.
URL : https://hal.archives-ouvertes.fr/hal-01018237

A. Coutrot and N. Guyader, An efficient audiovisual saliency model to predict eye positions when looking at conversations, Proceedings of the 2015 23rd European Signal Processing Conference (EUSIPCO), pp.1531-1535, 2015.

W. Wang, J. Shen, J. Xie, M. M. Cheng, H. Ling et al., Revisiting video saliency prediction in the deep learning era, IEEE Trans. Pattern Anal. Mach. Intell, 2019.

S. Oh, A. Hoogs, A. Perera, N. Cuntoor, C. C. Chen et al., A large-scale benchmark dataset for event recognition in surveillance video, Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition, pp.3153-3160, 2011.

R. Layne, T. M. Hospedales, and S. Gong, Investigating open-world person re-identification using a drone, European Conference on Computer Vision, pp.225-240, 2014.

M. Bonetto, P. Korshunov, G. Ramponi, and T. Ebrahimi, Privacy in mini-drone based video surveillance, Proceedings of the 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp.1-6, 2015.

T. Shu, D. Xie, B. Rothrock, S. Todorovic, and S.-C. Zhu, Joint inference of groups, events and human roles in aerial videos, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.4576-4584, 2015.

M. Mueller, N. Smith, and B. Ghanem, A benchmark and simulator for UAV tracking, European Conference on Computer Vision, pp.445-461, 2016.

A. Robicquet, A. Sadeghian, A. Alahi, and S. Savarese, Learning social etiquette: Human trajectory understanding in crowded scenes, European Conference on Computer Vision, pp.549-565, 2016.

S. Li and D. Y. Yeung, Visual object tracking for unmanned aerial vehicles: A benchmark and new motion models, Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 4-9 February 2017.

M. Barekatain, M. Martí, H. F. Shih, S. Murray, K. Nakayama et al., Okutama-action: An aerial view video dataset for concurrent human action detection, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp.28-35, 2017.

M. R. Hsieh, Y. L. Lin, and W. H. Hsu, Drone-based object counting by spatially regularized regional proposal network, Proceedings of the IEEE International Conference on Computer Vision, pp.4145-4153, 2017.

R. Ribeiro, G. Cruz, J. Matos, and A. Bernardino, A dataset for airborne maritime surveillance environments, IEEE Trans. Circuits Syst. Video Technol, 2017.

H. J. Hsu and K. T. Chen, DroneFace: An open dataset for drone research, Proceedings of the 8th ACM on Multimedia Systems Conference, pp.187-192, 2017.

D. Božić-Štulić, Ž. Marušić, and S. Gotovac, Deep Learning Approach in Aerial Imagery for Supporting Land Search and Rescue Missions, Int. J. Comput. Vis, vol.127, pp.1256-1278, 2019.

K. Fu, J. Li, H. Shen, and Y. Tian, How drones look: Crowdsourced knowledge transfer for aerial video saliency prediction, 2018.

P. Zhu, L. Wen, X. Bian, H. Ling, and Q. Hu, Vision meets drones: A challenge, arXiv, 2018.

M. Nyström, R. Andersson, K. Holmqvist, and J. van de Weijer, The influence of calibration method and eye physiology on eyetracking data quality, Behav. Res. Methods, vol.45, pp.272-288, 2013.

ITU-T Recommendation P.910, Subjective Video Quality Assessment Methods for Multimedia Applications; Telephone transmission quality, telephone installations; International Telecommunication Union, 2008.

ITU-R Recommendation BT.710-4, Subjective Assessment Methods for Image Quality in High-Definition Television; International Telecommunication Union, Recommendations (R), Broadcasting service TV (BT), 1998.

F. W. Cornelissen, E. M. Peters, and J. Palmer, The Eyelink Toolbox: Eye tracking with MATLAB and the Psychophysics Toolbox, Behav. Res. Methods Instrum. Comput, vol.34, pp.613-617, 2002.

ITU-R Recommendation BT.500-13, Methodology for the Subjective Assessment of the Quality of Television Pictures; International Telecommunication Union, Recommendations (R), Broadcasting service TV (BT), 1998.

B. Wandell and S. Thomas, Foundations of vision, vol.42, p.649, 1997.

O. Le Meur and T. Baccino, Methods for comparing scanpaths and saliency maps: Strengths and weaknesses, Behav. Res. Methods, vol.45, pp.251-266, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00757615

S. Guznov, G. Matthews, J. S. Warm, and M. Pfahler, Training techniques for visual search in complex task environments, Hum. Factors, vol.59, pp.1139-1152, 2017.

M. Shah, O. Javed, and K. Shafique, Automated visual surveillance in realistic scenarios, IEEE MultiMedia, vol.14, pp.30-39, 2007.

H. Snellen, Test-Types for the Determination of the Acuteness of Vision, 1868.

S. Ishihara, Test for Colour-Blindness, Kanehara, 1987.

D. D. Salvucci and J. H. Goldberg, Identifying fixations and saccades in eye-tracking protocols, Proceedings of the 2000 Symposium on Eye Tracking Research & Applications, Palm Beach Gardens, FL, USA, pp.71-78, 2000.

V. Krassanakis, V. Filippakopoulou, and B. Nakos, EyeMMV toolbox: An eye movement post-analysis tool based on a two-step spatial dispersion threshold for fixation identification, Journal of Eye Movement Research, vol.7, issue.1, 2014.

V. Krassanakis, L. M. Misthos, and M. Menegaki, LandRate toolbox: An adaptable tool for eye movement analysis and landscape rating, Proceedings of the 3rd International Workshop, 2018.

V. Krassanakis, V. Filippakopoulou, and B. Nakos, Detection of moving point symbols on cartographic backgrounds, Journal of Eye Movement Research, vol.9, 2016.

K. Ooms and V. Krassanakis, Measuring the Spatial Noise of a Low-Cost Eye Tracker to Enhance Fixation Detection, J. Imaging, vol.4, p.96, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01928958

Y. Cui and J. M. Hondzinski, Gaze tracking accuracy in humans: Two eyes are better than one, Neurosci. Lett, vol.396, pp.257-262, 2006.

K. Holmqvist, M. Nyström, and F. Mulvey, Eye tracker data quality: What it is and how to measure it, Proceedings of the Symposium on Eye Tracking Research and Applications, pp.45-52, 2012.

I. T. Hooge, G. A. Holleman, N. C. Haukes, and R. S. Hessels, Gaze tracking accuracy in humans: One eye is sometimes better than two, Behav. Res. Methods, 2018.

Z. Bylinskii, T. Judd, A. Borji, L. Itti, F. Durand et al., MIT Saliency Benchmark; MIT Technical Report, 2015.

H. Abdi and L. J. Williams, Tukey's honestly significant difference (HSD) test, in Encyclopedia of Research Design; Sage, pp.1-5, 2010.

O. Le Meur and A. Coutrot, Introducing context-dependent and spatially-variant viewing biases in saccadic models, Vis. Res, vol.121, pp.72-84, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01391745