Exploring 4D Flow Data in an Immersive Virtual Environment
Title | Exploring 4D Flow Data in an Immersive Virtual Environment |
Publication Type | Poster - Conference |
Year | 2017 |
Authors | Stevens, AH, Butkiewicz, T |
Conference Name | American Geophysical Union, Fall Meeting 2017 |
Volume | Abstract H51E-1323 |
Conference Dates | Dec 11 - 15 |
Publisher | AGU |
Conference Location | New Orleans, LA |
Oceanic and atmospheric simulations help us to understand and make predictions about the intricate physical processes of our dynamic planet. These simulations output models containing extremely large and complex time-varying three-dimensional (i.e., 4D) data, which poses a significant visualization challenge. Recent advances in mobile computing have ushered in a new era of affordable, high-fidelity virtual reality (VR) devices. The proliferation of these devices presents a unique opportunity to incorporate their use into scientific workflows, especially those involving geospatial data.

Head-Mounted Displays (HMDs) provide a separate view of the virtual scene to each eye, which changes based on the position and orientation of the HMD in physical space to match real-world viewing. The binocular viewing contributes stereopsis (stereoscopic 3D) and the head tracking enables motion parallax, both of which convey strong depth cues to the human visual system. In addition, HMDs provide a natural and intuitive way to change the viewing position, a task which is normally mapped to a mouse or keyboard function.

Six-Degree-of-Freedom (6DOF) interaction devices report their position and rotation with respect to a reference coordinate system. The total degrees of freedom can be determined by counting the principal axes (x, y, z) that a device can be positioned along or rotated around. A traditional computer mouse, for example, only reports the relative x- and y-offsets of the mouse position on a desk or mousepad, yielding 2DOF. A device capable of 6DOF provides data about its position and orientation in 3D space. For 3D tasks, having these truly 3D input devices is critical.

The Kinematic Chain Model of bimanual action proposed by Yves Guiard posits that the left and right hands form a kind of kinematic chain: the non-dominant hand establishes a local, coarse frame of reference which the dominant hand can leverage for finer, more precise movements.
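The degrees-of-freedom counting described above can be made concrete with a small sketch. This is an illustrative example, not part of the poster; the class and function names are ours, and the idea is simply that the DOF count equals the number of independent axes a device reports.

```python
from dataclasses import dataclass, fields

@dataclass
class MouseInput:
    # A mouse reports only relative planar offsets on the desk: 2DOF.
    dx: float
    dy: float

@dataclass
class TrackedPose:
    # A 6DOF tracker reports position along, and rotation about,
    # all three principal axes of the reference coordinate system.
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

def degrees_of_freedom(device) -> int:
    """Count the independent axes the device reports."""
    return len(fields(device))

print(degrees_of_freedom(MouseInput(0.0, 0.0)))              # 2
print(degrees_of_freedom(TrackedPose(0, 0, 0, 0, 0, 0)))     # 6
```

In a real VR runtime the tracked pose usually arrives as a position vector plus a quaternion rather than Euler angles, but the count of independent axes is the same.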
An example of this type of asymmetric action would be painting a figurine by holding it in one hand and applying the fine details with the other. Past evaluations have found this mode of interaction to be superior to single-handed interaction for a variety of tasks in virtual environments. By combining these superior viewing conditions and interaction techniques, we believe that we can improve the exploration of simulation results, allowing researchers to more quickly and efficiently assess their validity and gain additional insight into and understanding of these models.
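The coarse-frame / fine-movement relationship in Guiard's model can be sketched as a coordinate transform: the non-dominant hand's tracked pose defines a local frame, and the dominant hand's position is expressed relative to it. This is a simplified illustration of the idea (yaw-only rotation, function names are ours), not the implementation used in the poster.

```python
import numpy as np

def rotation_z(yaw: float) -> np.ndarray:
    """Rotation matrix about the z-axis (simplified: yaw only)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def to_local_frame(dominant_pos, nondominant_pos, nondominant_yaw):
    """Express the dominant hand's position in the coarse reference
    frame established by the non-dominant hand, so fine manipulation
    happens relative to the object the other hand is holding."""
    offset = np.asarray(dominant_pos, float) - np.asarray(nondominant_pos, float)
    # Inverse rotation (transpose) maps the world offset into the local frame.
    return rotation_z(nondominant_yaw).T @ offset

# Non-dominant hand at the origin, rotated 90 degrees about z; the
# dominant hand one unit along world +y sits on the local +x axis.
local = to_local_frame([0.0, 1.0, 0.0], [0.0, 0.0, 0.0], np.pi / 2)
print(np.round(local, 6))
```

A full 6DOF version would use the non-dominant hand's complete orientation (e.g., a quaternion) instead of a single yaw angle, but the chaining structure is identical.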
URL | http://abstractsearch.agu.org/meetings/2017/FM/H51E-1323.html |