Important strides have been made in understanding how people scan visual content for information as a function of the interplay between foveal and peripheral vision, the interplay between intentional and automatic processes, and the content itself. Such knowledge can be used to predict performance on low-level visual tasks across different visual encodings, and could in many cases serve as an alternative to user experimentation.
The project aims to build on previous models that use an understanding of low-level visual behavior to predict performance on simple HCI tasks (e.g., the EPIC architecture), but would innovate by extending them to the much more complex content typical of real-world data visualizations.
Conducting user studies to explore visual scanning patterns for common data-reading tasks on typical visual data content
Relating observed behavior to low-level models of visual scanning
Developing a general model of visual scanning as a function of visual content type and data-reading task
Conducting experiments to validate the model
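To make the modeling step concrete, a model of the kind described above might, in a first approximation, predict scan time from an expected fixation count per content type and task. The sketch below is purely illustrative: the parameter values and the fixation-count table are hypothetical placeholders, not empirical results from this project or from the EPIC literature.

```python
# Toy sketch (all values hypothetical): predict visual-scan time as a
# function of content type and data-reading task, in the spirit of
# low-level visual scanning models.

# Hypothetical per-fixation costs in seconds (placeholders, not data).
FIXATION_DURATION = 0.25   # dwell time per fixation
SACCADE_TIME = 0.03        # eye-movement time between fixations

# Hypothetical expected fixation counts per (content type, task).
FIXATIONS = {
    ("bar_chart", "find_max"): 4,
    ("bar_chart", "compare_two"): 6,
    ("scatterplot", "find_outlier"): 9,
}

def predicted_scan_time(content_type: str, task: str) -> float:
    """Estimate task time as fixations times (dwell + saccade) cost."""
    n = FIXATIONS[(content_type, task)]
    return n * (FIXATION_DURATION + SACCADE_TIME)

print(round(predicted_scan_time("scatterplot", "find_outlier"), 2))
```

A real model would replace the fixed fixation counts with quantities derived from the visual content (e.g., number and salience of marks) and the task structure, which is precisely what the user studies and validation experiments would inform.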
User study design
Human factors (e.g., vision, cognition, task analysis)
Basic data handling and analysis