C. Papachristos, S. Khattak, F. Mascarich, T. Dang, and K. Alexis, Autonomous Aerial Robotic Exploration of Subterranean Environments relying on Morphology-aware Path Planning, 2019 International Conference on Unmanned Aircraft Systems (ICUAS), pp.299-305, 2019.

L. Itti and C. Koch, Computational modelling of visual attention, Nature Reviews Neuroscience, vol.2, pp.194-203, 2001.

F. Katsuki and C. Constantinidis, Bottom-up and top-down attention: different processes and overlapping neural systems, The Neuroscientist, vol.20, pp.509-521, 2014.

S. Krasovskaya and W. J. MacInnes, Salience Models: A Computational Cognitive Neuroscience Review, Vision, vol.3, p.56, 2019.

M. Kümmerer, T. S. Wallis, and M. Bethge, Saliency benchmarking made easy: Separating models, maps and metrics, Proceedings of the European Conference on Computer Vision (ECCV), 2018.

N. Riche, M. Duvinage, M. Mancas, B. Gosselin, and T. Dutoit, Saliency and human fixations: State-of-the-art and study of comparison metrics, Proceedings of the IEEE International Conference on Computer Vision, pp.1153-1160, 2013.

C. Guo and L. Zhang, A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression, IEEE Transactions on Image Processing, vol.19, 2009.

S. D. Jain, B. Xiong, and K. Grauman, FusionSeg: Learning to combine motion and appearance for fully automatic segmentation of generic objects in videos, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp.2117-2126, 2017.

W. Wang, J. Shen, and L. Shao, Video salient object detection via fully convolutional networks, IEEE Transactions on Image Processing, vol.27, pp.38-49, 2017.

G. Li, Y. Xie, T. Wei, K. Wang, and L. Lin, Flow guided recurrent neural encoder for video salient object detection, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.

O. Le Meur, A. Coutrot, Z. Liu, P. Rämä, A. Le Roch et al., Visual attention saccadic models learn to emulate gaze patterns from childhood to adulthood, IEEE Transactions on Image Processing, vol.26, pp.4777-4789, 2017.

T. T. Brunyé, S. B. Martis, C. Horner, J. A. Kirejczyk, and K. Rock, Visual salience and biological motion interact to determine camouflaged target detectability, Applied Ergonomics, vol.73, 2018.

A. F. Perrin, L. Zhang, and O. Le Meur, How well current saliency prediction models perform on UAVs, International Conference on Computer Analysis of Images and Patterns, 2019.
URL : https://hal.archives-ouvertes.fr/hal-02265047

M. Bindemann, Scene and screen center bias early eye movements in scene viewing, Vision Research, vol.50, pp.2577-2587, 2010.

T. Vigier, M. P. Da Silva, and P. Le Callet, Impact of visual angle on attention deployment and robustness of visual saliency models in videos: From SD to UHD, IEEE International Conference on Image Processing (ICIP), pp.689-693, 2016.

K. Zhang and Z. Chen, Video Saliency Prediction based on Spatial-temporal Two-stream Network, IEEE Transactions on Circuits and Systems for Video Technology, 2018.

O. Le Meur, P. Le Callet, D. Barba, and D. Thoreau, A coherent computational approach to model bottom-up visual attention, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.28, pp.802-817, 2006.

Z. Bylinskii, T. Judd, A. Oliva, A. Torralba, and F. Durand, What Do Different Evaluation Metrics Tell Us About Saliency Models?, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.41, 2019.

M. Paglin and A. M. Rufolo, Heterogeneous human capital, occupational choice, and male-female earnings differences, Journal of Labor Economics, vol.8, pp.123-144, 1990.

K. A. Ehinger, B. Hidalgo-Sotelo, A. Torralba, and A. Oliva, Modelling search for people in 900 scenes: A combined source model of eye guidance, Visual Cognition, vol.17, pp.945-978, 2009.

H. Liu and I. Heynderickx, Studying the added value of visual attention in objective image quality metrics based on eye movement data, 16th IEEE International Conference on Image Processing (ICIP), pp.3097-3100, 2009.

T. Judd, F. Durand, and A. Torralba, A benchmark of computational models of saliency to predict human fixations, 2012.

K. T. Ma, T. Sim, and M. Kankanhalli, VIP: A unifying framework for computational eye-gaze research, International Workshop on Human Behavior Understanding, pp.209-222, 2013.

K. Koehler, F. Guo, S. Zhang, and M. P. Eckstein, What do saliency models predict?, Journal of Vision, vol.14, pp.14-14, 2014.

A. Borji and L. Itti, CAT2000: A large scale fixation dataset for boosting saliency research, 2015.

Z. Bylinskii, P. Isola, C. Bainbridge, A. Torralba, and A. Oliva, Intrinsic and extrinsic effects on image memorability, Vision Research, vol.116, pp.165-178, 2015.

S. Fan, Z. Shen, M. Jiang, B. L. Koenig, J. Xu et al., Emotional attention: A study of image sentiment and visual attention, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.7521-7531, 2018.

M. B. McCamy, J. Otero-Millan, L. L. Di Stasi, S. L. Macknik, and S. Martinez-Conde, Highly informative natural scene regions increase microsaccade production during visual scanning, Journal of Neuroscience, vol.34, pp.2956-2966, 2014.

Y. Gitman, M. Erofeev, D. Vatolin, B. Andrey, and F. Alexey, Semiautomatic visual-attention modeling and its application to video compression, IEEE International Conference on Image Processing (ICIP), pp.1105-1109, 2014.

A. Coutrot and N. Guyader, How saliency, faces, and sound influence gaze in dynamic social scenes, Journal of Vision, vol.14, pp.5-5, 2014.
URL : https://hal.archives-ouvertes.fr/hal-01018237

A. Coutrot and N. Guyader, An efficient audiovisual saliency model to predict eye positions when looking at conversations, 23rd European Signal Processing Conference (EUSIPCO), 2015.

W. Wang, J. Shen, J. Xie, M. M. Cheng, H. Ling et al., Revisiting video saliency prediction in the deep learning era, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.

S. Oh, A. Hoogs, A. Perera, N. Cuntoor, C. C. Chen et al., A large-scale benchmark dataset for event recognition in surveillance video, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.3153-3160, 2011.

R. Layne, T. M. Hospedales, and S. Gong, Investigating open-world person re-identification using a drone, European Conference on Computer Vision (ECCV) Workshops, 2014.

M. Bonetto, P. Korshunov, G. Ramponi, and T. Ebrahimi, Privacy in mini-drone based video surveillance, 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), 2015.

T. Shu, D. Xie, B. Rothrock, S. Todorovic, and S.-C. Zhu, Joint inference of groups, events and human roles in aerial videos, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.4576-4584, 2015.

M. Mueller, N. Smith, and B. Ghanem, A benchmark and simulator for UAV tracking, European Conference on Computer Vision (ECCV), pp.445-461, 2016.

A. Robicquet, A. Sadeghian, A. Alahi, and S. Savarese, Learning social etiquette: Human trajectory understanding in crowded scenes, European Conference on Computer Vision (ECCV), 2016.

S. Li and D. Y. Yeung, Visual object tracking for unmanned aerial vehicles: A benchmark and new motion models, Thirty-First AAAI Conference on Artificial Intelligence, 2017.

M. Barekatain, M. Martí, H. F. Shih, S. Murray, K. Nakayama et al., Okutama-Action: An aerial view video dataset for concurrent human action detection, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp.28-35, 2017.

M. R. Hsieh, Y. L. Lin, and W. H. Hsu, Drone-based object counting by spatially regularized regional proposal network, Proceedings of the IEEE International Conference on Computer Vision, 2017.

R. Ribeiro, G. Cruz, J. Matos, and A. Bernardino, A dataset for airborne maritime surveillance environments, IEEE Transactions on Circuits and Systems for Video Technology, 2017.

H. J. Hsu and K. T. Chen, DroneFace: an open dataset for drone research, Proceedings of the 8th ACM on Multimedia Systems Conference, pp.187-192, 2017.

D. Božić-Štulić, Ž. Marušić, and S. Gotovac, Deep Learning Approach in Aerial Imagery for Supporting Land Search and Rescue Missions, International Journal of Computer Vision, pp.1-23, 2019.

K. Fu, J. Li, H. Shen, and Y. Tian, How drones look: Crowdsourced knowledge transfer for aerial video saliency prediction, 2018.

P. Zhu, L. Wen, X. Bian, H. Ling, and Q. Hu, Vision meets drones: A challenge, 2018.

M. Nyström, R. Andersson, K. Holmqvist, and J. van de Weijer, The influence of calibration method and eye physiology on eyetracking data quality, Behavior Research Methods, vol.45, 2013.

ITU-T Recommendation P.910, Subjective video quality assessment methods for multimedia applications, International Telecommunication Union, 2008.

ITU-R Recommendation BT.710-4, Subjective assessment methods for image quality in high-definition television, 1998.

F. W. Cornelissen, E. M. Peters, and J. Palmer, The EyeLink Toolbox: eye tracking with MATLAB and the Psychophysics Toolbox, Behavior Research Methods, Instruments, & Computers, vol.34, 2002.

ITU-R Recommendation BT.500-13, Methodology for the subjective assessment of the quality of television pictures, 2012.

B. Wandell and S. Thomas, Foundations of Vision, 1997.

O. Le Meur and T. Baccino, Methods for comparing scanpaths and saliency maps: strengths and weaknesses, Behavior Research Methods, vol.45, pp.251-266, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00757615

S. Guznov, G. Matthews, J. S. Warm, and M. Pfahler, Training techniques for visual search in complex task environments, Human Factors, vol.59, pp.1139-1152, 2017.

M. Shah, O. Javed, and K. Shafique, Automated visual surveillance in realistic scenarios, IEEE MultiMedia, 2007.

H. Snellen, Test-types for the determination of the acuteness of vision; Williams and Norgate.

S. Ishihara, Test for colour-blindness, Kanehara, Tokyo, Japan, 1987.

D. D. Salvucci and J. H. Goldberg, Identifying fixations and saccades in eye-tracking protocols, Proceedings of the 2000 Symposium on Eye Tracking Research & Applications, 2000.

V. Krassanakis, V. Filippakopoulou, and B. Nakos, EyeMMV toolbox: An eye movement post-analysis tool based on a two-step spatial dispersion threshold for fixation identification, Journal of Eye Movement Research, vol.7, 2014.

V. Krassanakis, L. M. Misthos, and M. Menegaki, LandRate toolbox: An adaptable tool for eye movement analysis and landscape rating, Proceedings of the 3rd International Workshop on Eye Tracking for Spatial Research, 2018.

Y. Cui and J. M. Hondzinski, Gaze tracking accuracy in humans: Two eyes are better than one, Neuroscience Letters, pp.257-262, 2006.

K. Holmqvist, M. Nyström, and F. Mulvey, Eye tracker data quality: what it is and how to measure it, Proceedings of the Symposium on Eye Tracking Research and Applications, 2012.

I. T. Hooge, G. A. Holleman, N. C. Haukes, and R. S. Hessels, Gaze tracking accuracy in humans: One eye is sometimes better than two, Behavior Research Methods, 2018.

Z. Bylinskii, T. Judd, A. Borji, L. Itti, F. Durand et al., MIT Saliency Benchmark, 2015.

O. Le Meur and A. Coutrot, Introducing context-dependent and spatially-variant viewing biases in saccadic models, Vision Research, vol.121, pp.72-84, 2016.

Submitted to Drones for possible open access publication under the terms and conditions of the Creative Commons Attribution (CC BY) license.