One of the primary goals of information visualization research, a sister field of human-computer interaction (HCI), is to generate design guidelines that promote good design. Existing guidelines commonly referred to in visualization design, such as Shneiderman's mantra of "overview first, zoom and filter, then details on demand," are predominantly informed by the state of knowledge and empirical research in visualization. However, these guidelines are deliberately general so that they apply broadly, and consequently their application to specific domain problems is not straightforward. While they can be a good starting point for designing visualization tools for a particular domain, they are often insufficient on their own, and we need to study the needs of users in that domain more carefully to design tools for their use. A variety of empirical research methodologies in visualization enable us to gather user requirements and evaluate tools. Each of these methodologies seeks to maximize one, or at most two, of three desirable criteria: generalizability, precision, and realism. Broadly speaking, quantitative empirical studies favor precision and generalizability, while qualitative approaches lend themselves to more realistic studies that capture the actual contexts of use. This dissertation focuses on the use of various contextual and qualitative methods to inform visualization design guidelines in four specific application domains.
First, we characterized the holistic review process in undergraduate college admissions in the United States through contextual interviews and observations. We identified possible leverage points for applying visualization decision-support tools within the holistic review process, including approaches to mitigate potential cognitive biases of the reviewers identified in the study. Second, we conducted evaluation studies in the domain of personal visualizations, that is, visualizations of participants' personal data, using (i) a think-aloud method to identify the personal insights gained by participants and usability issues with the interface and (ii) a contextual interaction log analysis study to characterize participants' exploratory behaviors. Third, we conducted a case study of a visualization system designed for monitoring participant compliance in a large-scale, longitudinal study, evaluating the system in context with real users, data, and tasks and making improvements based on the findings. Finally, we conducted interviews with teams of information workers working from home during the COVID-19 pandemic and derived design implications for visualizations of team members' work rhythms to increase awareness and coordination in remotely working teams.
In addition to contributing visualization design guidelines for each of the four domain-specific problems, we reflect on the application of these methods to visualization design and on how they differ from their traditional counterparts in disciplines such as HCI and the social sciences. These methods not only enable a more realistic and/or richer understanding of the domain situations, but they are also customized to be less open-ended and more goal-directed when applied in the field of visualization. Further, it is essential for researchers to possess a visualization background in order to elicit requirements relevant to, and expressed in the language of, visualization.