PhD Proposal: Enabling Collaborative Visual Analysis Over Heterogeneous Devices
Today's big data analytics problems require complex solutions that no longer rely on a single user working at a standalone computer. In exploratory analyses carried out through visualization, this challenge is being addressed in two main ways: (1) new techniques in visualization and visual analytics help multiple users work together effectively to develop insights, and (2) technological advances in displays and interactive surfaces offer new opportunities for ubiquitous access and fluid interaction with data. These advances span a wide range of devices, including large displays, tabletops, handheld devices, and wearables. Synthesizing these developments can help us bridge the physical and cognitive gaps between multiple users working across such heterogeneous devices for sensemaking through visual analysis. This can further create opportunities for ubiquitous visual analytics, where data can be visually analyzed anywhere, anytime. Applications are diverse: supporting a group of meteorologists in collectively monitoring atmospheric data visualized on a large display while visually exploring the data on their personal devices to issue active watches and warnings; helping first responders access crisis information in real time, in visual form, on their smartphones to make decisions; and transforming public areas of a city into interactive spaces that show current civic issues on large displays and support visual exploration of policy information through mobile devices to gather crowd opinion. Towards this vision, I introduce the phenomenon of collaborative cross-device visual analysis (C2-VA) and outline the challenges in developing user interfaces for C2-VA.
This dissertation identifies C2-VA activities and design considerations for C2-VA based on four high-level challenges: (1) developing software infrastructure for multi-user activity, (2) sharing visualizations and insights across devices, (3) supporting fluid interactions in heterogeneous device spaces, and (4) promoting effective team coordination in collaborative visual analysis. To address the first challenge, I developed the Munin and PolyChrome frameworks, which provide software infrastructure for collaborative activity across devices by creating shared data structures, distributing visual representations, and managing interaction events to consolidate state across devices. For the second challenge, I present cross-device interaction models, elicited through a user study, for sharing visualizations between large and small displays during visual exploration. Considering such scenarios, I explore how visualization interfaces can transform to fit smaller displays and present guidelines based on quantitative and qualitative studies of visual analysis activities. For the third challenge, I focus on co-located collaboration in front of large displays and introduce Proxemic Lens, a multi-user interaction model that combines proxemics (implicit body movement) and gestures (explicit hand movement) for visual exploration. Finally, I present solutions for providing group awareness to promote effective collaboration by evaluating how analysts use dataset coverage representations to work together. My research to date addresses design challenges for collaborative visual analysis across large (55–72 inch) and small (6–9 inch) displays, primarily in co-located settings. I propose to explore other collaboration settings and device modalities to further address the challenges of C2-VA. I will first study the roles of heterogeneous devices in visual analysis by exploring a coupling of two complementary devices: a large wall-sized display and a small wearable smartwatch.
I also plan to evaluate visualization designs that effectively convey the same data on these devices. This will help extend our guidelines for designing visualizations and interactions for heterogeneous devices. Finally, I plan to extend our guidelines to distributed collaboration by exploring device-independent ways to convey each user's focus and context, as well as their understanding (via annotations), to collaborators during visual analysis.
Chair: Dr. Niklas Elmqvist
Dept rep: Dr. Hector Corrada Bravo
Members: Dr. Catherine Plaisant
Dr. Jon Froehlich