227 results for "Liu MA"
Search Results
2. Toward an In-Depth Analysis of Multifidelity High Performance Computing Systems
- Author
-
Shilpika Shilpika, Bethany Lusch, Murali Emani, Filippo Simini, Venkatram Vishwanath, Michael E. Papka, and Kwan-Liu Ma
- Published
- 2022
- Full Text
- View/download PDF
3. A Comparison of the Fatigue Progression of Eye-Tracked and Motion-Controlled Interaction in Immersive Space
- Author
-
Siyuan Yao, Lukas Maximilian Masopust, David Bauer, and Kwan-Liu Ma
- Subjects
Modalities, genetic structures, Human–computer interaction, Computer science, Perspective (graphical), Eye tracking, Augmented reality, Interaction design, Virtual reality, Mobile device, Motion (physics)
- Abstract
Eye-tracking enabled virtual reality (VR) headsets have recently become more widely available. This opens up opportunities to incorporate eye gaze interaction methods in VR applications. However, studies on the fatigue-induced performance fluctuations of these new input modalities are scarce and rarely provide a direct comparison with established interaction methods. We conduct a study to compare the selection-interaction performance between commonly used handheld motion control devices and emerging eye interaction technology in VR. We investigate each interaction’s unique fatigue progression pattern in study sessions with ten minutes of continuous engagement. The results support and extend previous findings regarding the progression of fatigue in eye-tracked interaction over prolonged periods. By directly comparing gaze- with motion-controlled interaction, we put the emerging eye-trackers into perspective with the state-of-the-art interaction method for immersive space. We then discuss potential implications for future extended reality (XR) interaction design based on our findings.
- Published
- 2021
- Full Text
- View/download PDF
4. Automatic Generation of Unit Visualization-based Scrollytelling for Impromptu Data Facts Delivery
- Author
-
Wei Chen, Yingcai Wu, Junhua Lu, Yuhui Gu, Jie Wang, Kwan-Liu Ma, Honghui Mei, Hui Ye, and Xiaolong Luke Zhang
- Subjects
Creative visualization, Information retrieval, Computer science, Usability, Human-centered computing, Impromptu, Visualization, Data visualization, Visual communication, Use case
- Abstract
Data-driven scrollytelling has become a prevalent way of visual communication because of its comprehensive delivery of perspectives derived from the data. However, creating an expressive scrollytelling story requires both data and design literacy and is time-consuming. As a result, scrollytelling has been mainly used only by professional journalists to disseminate opinions. In this paper, we present an automatic method to generate expressive scrollytelling visualization, which can present easy-to-understand data facts through a carefully arranged sequence of views. The method first enumerates data facts of a given dataset, and scores and organizes them. The facts are further assembled, sequenced into a story, with reader input taken into consideration. Finally, visual graphs, transitions, and text descriptions are generated to synthesize the scrollytelling visualization. In this way, non-professionals can easily explore and share interesting perspectives from selected data attributes and fact types. We demonstrate the effectiveness and usability of our method through both use cases and an in-lab user study.
- Published
- 2021
- Full Text
- View/download PDF
5. A Visual Analytics Approach for the Diagnosis of Heterogeneous and Multidimensional Machine Maintenance Data
- Author
-
Alden Dima, Xiaoyu Zhang, Thurston Sexton, Senthil Chandrasegaran, Michael P. Brundage, Takanori Fujiwara, and Kwan-Liu Ma
- Subjects
Clustering high-dimensional data, Visual analytics, Computer science, Dimensionality reduction, Machine learning, Preventive maintenance, Data visualization, Use case, Artificial intelligence, Cluster analysis, Categorical variable
- Abstract
Analysis of large, high-dimensional, and heterogeneous datasets is challenging as no one technique is suitable for visualizing and clustering such data in order to make sense of the underlying information. For instance, heterogeneous logs detailing machine repair and maintenance in an organization often need to be analyzed to diagnose errors and identify abnormal patterns, formalize root-cause analyses, and plan preventive maintenance. Such real-world datasets are also beset by issues such as inconsistent and/or missing entries. To conduct an effective diagnosis, it is important to extract and understand patterns from the data with support from analytic algorithms (e.g., finding that certain kinds of machine complaints occur more in the summer) while keeping the human in the loop. To address these challenges, we adopt existing techniques for dimensionality reduction (DR) and clustering of numerical, categorical, and text data dimensions, and introduce a visual analytics approach that uses multiple coordinated views to connect DR + clustering results across each kind of data dimension. To help analysts label the clusters, each clustering view is supplemented with techniques and visualizations that contrast a cluster of interest with the rest of the dataset. Our approach helps analysts make sense of machine maintenance logs and their errors, and the insights gained then help them plan preventive maintenance. We illustrate and evaluate our approach through use cases and expert studies, respectively, and discuss generalization of the approach to other heterogeneous data.
- Published
- 2021
- Full Text
- View/download PDF
6. Representing Multivariate Data by Optimal Colors to Uncover Events of Interest in Time Series Data
- Author
-
Chien-Hsun Lai, Kwan-Liu Ma, Yu-Shuen Wang, Yun-Hsuan Lien, Ding-Bang Chen, and Yu-Hsuan Lin
- Subjects
Multivariate statistics, Artificial neural network, Computer science, Event (computing), Pattern recognition, Space (commercial competition), Visualization, Color changes, Perception, Artificial intelligence, Time series
- Abstract
In this paper, we present a visualization system for users to study multivariate time series data. They first identify trends or anomalies from a global view and then examine details in a local view. Specifically, we train a neural network to project high-dimensional data to a two dimensional (2D) planar space while retaining global data distances. By aligning the 2D points with a predefined color map, high-dimensional data can be represented by colors. Because perceptual color differentiation may fail to reflect data distance, we optimize perceptual color differentiation on each map region by deformation. The region with large perceptual color differentiation will expand, whereas the region with small differentiation will shrink. Since colors do not occupy any space in visualization, we convey the overview of multivariate time series data by a calendar view. Cells in the view are color-coded to represent multivariate data at different time spans. Users can observe color changes over time to identify events of interest. Afterward, they study details of an event by examining parallel coordinate plots. Cells in the calendar view and the parallel coordinate plots are dynamically linked for users to obtain insights that are barely noticeable in large datasets. The experiment results, comparisons, conducted case studies, and the user study indicate that our visualization system is feasible and effective.
- Published
- 2020
- Full Text
- View/download PDF
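A minimal sketch of the color-encoding step summarized in entry 6: 2D-projected points are aligned with a cyclic HSV color map so that each multivariate sample receives a single color. The projection here is plain PCA rather than the paper's neural network, and no perceptual deformation of the color map is applied; all data and parameters are illustrative.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def colors_from_projection(points_2d):
    """Angle -> hue, radius -> saturation, so nearby 2D points get similar colors."""
    centered = points_2d - points_2d.mean(axis=0)
    angle = np.arctan2(centered[:, 1], centered[:, 0])          # in [-pi, pi]
    radius = np.linalg.norm(centered, axis=1)
    hsv = np.stack([(angle + np.pi) / (2 * np.pi),              # hue in [0, 1]
                    radius / (radius.max() + 1e-9),             # saturation
                    np.ones(len(points_2d))], axis=1)           # full value
    return hsv_to_rgb(hsv)

data = np.random.rand(500, 8)                                   # multivariate samples
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
proj = centered @ vt[:2].T                                      # PCA projection to 2D
rgb = colors_from_projection(proj)                              # one RGB color per sample
```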
7. All-Digital Background Calibration for Time-Interleaved ADC Using Differential FIR Filter
- Author
-
Jiang-Bo Wei, Ma-Liang Liu, Zhang-Ming Zhu, and Yin-Tang Yang
- Published
- 2020
- Full Text
- View/download PDF
8. The Ultra-Wideband 0.5-15GHz LNA for Reconfigurable Receiver System in 28 nm CMOS
- Author
-
Zhen-Feng Hu, Ma-Liang Liu, Rui-Xue Ding, Zhang-Ming Zhu, and Yin-Tang Yang
- Published
- 2020
- Full Text
- View/download PDF
9. MELA: A Visual Analytics Tool for Studying Multifidelity HPC System Logs
- Author
-
Kwan-Liu Ma, Michael E. Papka, Venkatram Vishwanath, Murali Emani, Fnu Shilpika, and Bethany Lusch
- Subjects
Visual analytics, Computer science, Log data, Component (UML), Fidelity, Data mining, Supercomputer, Cluster analysis, Visualization
- Abstract
To maintain a robust and reliable supercomputing hardware system, there is a critical need to understand various system events, including failures occurring in the system. Toward this goal, we analyze various system logs, such as error logs, job logs, and environment logs, from the Argonne Leadership Computing Facility's (ALCF) Theta Cray XC40 supercomputer. This log data incorporates multiple subsystem and component measurements at various fidelity levels and temporal resolutions - a very diverse and massive dataset. To effectively identify patterns that characterize system behavior and faults over time, we have developed MELA, a visual analytics tool for finding such patterns and gleaning insights from these log data.
- Published
- 2019
- Full Text
- View/download PDF
10. A Visual Analytics Framework for Analyzing Parallel and Distributed Computing Applications
- Author
-
Misbah Mubarak, Jianping Kelvin Li, Caitlin Ross, Kwan-Liu Ma, Suraj P. Kesavan, Takanori Fujiwara, Christopher D. Carothers, and Robert Ross
- Subjects
Visual analytics, Computer science, Performance tuning, Machine learning, Telecommunications network, Information visualization, Debugging, Data analysis, Unsupervised learning, Artificial intelligence, Time series
- Abstract
To optimize the performance and efficiency of HPC applications, programmers and analysts often need to collect various performance metrics for each computer at different time points as well as the communication data between the computers. This results in a complex dataset that consists of multivariate time-series and communication network data, which makes debugging and performance tuning of HPC applications challenging. Automated analytical methods based on statistical analysis and unsupervised learning are often insufficient to support such tasks without background knowledge from the application programmers. To better explore and analyze a wide spectrum of HPC datasets, effective visual data analytics techniques are needed. In this paper, we present a visual analytics framework for analyzing HPC datasets produced by parallel discrete-event simulations (PDES). Our framework leverages automated time-series analysis methods and effective visualizations to analyze both multivariate time-series and communication network data. Through several case studies for analyzing the performance of PDES, we show that our visual analytics techniques and system can be effective in reasoning about multiple performance metrics, temporal behaviors of the simulation, and the communication patterns.
- Published
- 2019
- Full Text
- View/download PDF
11. Topology-Based Spectral Sparsification
- Author
-
Peter Eades, Amyra Meidiana, Jiajun Huang, Seok-Hee Hong, and Kwan-Liu Ma
- Subjects
Computer science, Sampling (statistics), Topology, Graph, Visualization, Graph drawing, Graph sampling, Server
- Abstract
Graph sampling is often used to reduce a large graph, with the challenge of ensuring the sample is representative of the original graph. Spectral sparsification is a related concept that creates a sparsified version of a graph that preserves the spectrum of the original graph. We present TSS, a sampling method combining spectral sparsification with topology-based decomposition of graphs. TSS aims to improve the runtime efficiency of spectral sparsification-based sampling through a divide-and-conquer approach using topology-based decomposition, and combines it with the superior sampling quality that spectral sparsification-based sampling offers over stochastic sampling. We also present DTSS, the distributed version of TSS, aimed at further runtime gains over sequential TSS. Experiments verify that TSS produces samples of the same quality as spectral sparsification-based sampling while attaining significant runtime improvements of up to 60% on real-world datasets. DTSS on 5 servers runs up to another 80% faster compared to TSS.
- Published
- 2019
- Full Text
- View/download PDF
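A minimal sketch of the spectral-sparsification building block behind entry 11, using the classic effective-resistance sampling scheme on a small dense graph; the paper's TSS adds topology-based decomposition and a distributed variant (DTSS) on top of this idea, neither of which is shown here.

```python
import numpy as np

def effective_resistances(n, edges, weights):
    """R_e = (e_u - e_v)^T L^+ (e_u - e_v) for each edge, via the pseudo-inverse."""
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    Lp = np.linalg.pinv(L)
    return np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])

def sparsify(n, edges, weights, n_samples):
    """Sample edges with probability proportional to weight x effective resistance."""
    rng = np.random.default_rng(0)
    p = weights * effective_resistances(n, edges, weights)
    p /= p.sum()
    picks = rng.choice(len(edges), size=n_samples, p=p)
    sparse = {}
    for e in picks:
        # reweight so the sparsifier's Laplacian matches the original in expectation
        sparse[edges[e]] = sparse.get(edges[e], 0.0) + weights[e] / (n_samples * p[e])
    return sparse

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
weights = np.ones(len(edges))
print(sparsify(4, edges, weights, n_samples=4))
```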
12. Collaborative Visual Analysis with Multi-level Information Sharing Using a Wall-Size Display and See-Through HMDs
- Author
-
Issei Fujishiro, Tianchen Sun, Kwan-Liu Ma, and Yucong Ye
- Subjects
Iterative design, Computer science, Process (engineering), Information sharing, Optical head-mounted display, Information sensitivity, Data visualization, Human–computer interaction, Task analysis, Augmented reality
- Abstract
Solving complex data analysis problems can often benefit from a collaborative effort. For synchronous co-located collaboration, a well-recognized challenge is to deliver different contents to people with different privileges and different responsibilities. This challenge is becoming more obvious with the use of a shared display space such as a wall-size display. In particular, scenarios often arise in which a privileged participant needs to access sensitive information that other participants are not permitted to view. This is nearly impossible to achieve with only a single display. As a result, it becomes clear that additional devices are needed to provide some of the participants the capability to access and manage certain information in a private space. In this work, we investigate incorporating optical see-through head-mounted displays (OST-HMDs) with a wall-size display to deliver sensitive information in a synchronous co-located, collaborative setting. With our prototype system, we conduct a user study to observe the collaboration styles under this unique setup. We also present the lessons learned by reflecting on the iterative design process of our prototype system.
- Published
- 2019
- Full Text
- View/download PDF
13. An Interactive System for Exploring Historical Fire Data
- Author
-
Maksim Gomov, Kwan-Liu Ma, Keshav Dasu, and Tarik Crnovrsanin
- Subjects
Geospatial analysis, Computer science, Property (programming), Human life, Climate change, Data science, Data modeling, Visualization, Data visualization, Use case
- Abstract
Wildfires cause immense costs to human life, property, and the environment. As climate change increases the frequency and severity of wildfires, there has been a renewed effort to understand these phenomena and their catalysts. In this paper, we introduce a system that couples multiple sources of data and visualization to enable analysts to study historical fire data. We show two use cases to demonstrate the effectiveness of our system.
- Published
- 2019
- Full Text
- View/download PDF
14. Interactive Spatiotemporal Visualization of Phase Space Particle Trajectories Using Distance Plots
- Author
-
Kwan-Liu Ma and Tyson Neuroth
- Subjects
Computer science, Phase (waves), Visualization, Data visualization, Phase space, Trajectory, Flow map, Recurrence plot, Biological system
- Abstract
The distance plot (or unthresholded recurrence plot) has been shown to be a useful tool for analyzing spatiotemporal patterns in high-dimensional phase space trajectories. We incorporate this technique into an interactive visualization with multiple linked phase plots, and extend the distance plot to also visualize marker particle weights from particle-in-cell (PIC) simulations together with the phase space trajectories. By linking the distance plot with phase plots, one can more easily investigate the spatiotemporal patterns, and by extending the plot to visualize particle weights in conjunction with the phase space trajectories, the visualization better supports the needs of domain experts studying particle-in-cell simulations. We demonstrate our resulting visualization design using particles from an XGC Tokamak fusion simulation.
- Published
- 2019
- Full Text
- View/download PDF
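A minimal sketch of the unthresholded recurrence plot (distance plot) described in entry 14: pairwise distances between all time points of a phase-space trajectory, computed here for a hypothetical 2D trajectory. The linked phase plots and the particle-weight extension from the paper are not shown.

```python
import numpy as np

def distance_plot(trajectory):
    """Unthresholded recurrence plot: pairwise Euclidean distances
    between all pairs of time points along a phase-space trajectory."""
    diffs = trajectory[:, None, :] - trajectory[None, :, :]   # shape (T, T, d)
    return np.linalg.norm(diffs, axis=-1)                     # (T, T) distance matrix

# Example: a noisy circular orbit in a 2D phase plane.
t = np.linspace(0, 4 * np.pi, 200)
traj = np.stack([np.cos(t), np.sin(t)], axis=1) + 0.01 * np.random.randn(200, 2)
D = distance_plot(traj)   # periodic trajectories show diagonal banding in D
```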
15. Leveraging Shared Memory in the ROSS Time Warp Simulator for Complex Network Simulations
- Author
-
Caitlin J. Ross, Christopher D. Carothers, Misbah Mubarak, Robert B. Ross, Jianping Kelvin Li, and Kwan-Liu Ma
- Published
- 2018
- Full Text
- View/download PDF
16. An Empirical Study on Perceptually Masking Privacy in Graph Visualizations
- Author
-
Jing Li, Jia-Kai Chou, Chris Bryan, and Kwan-Liu Ma
- Subjects
Information retrieval, Computer science, Perceptual Masking, Brute-force search, Human-centered computing, Masking (Electronic Health Record), Visualization, Data visualization, Empirical research, Graph (abstract data type)
- Abstract
Researchers such as sociologists create visualizations of multivariate node-link diagrams to present findings about the relationships in communities. Unfortunately, such visualizations can inadvertently expose the ostensibly private identities of the persons that make up the dataset. By purposely violating graph readability metrics for a small region of the graph, we conjecture that local, exposed privacy leaks may be perceptually masked from easy recognition. In particular, we consider three commonly known metrics—edge crossing, node clustering, and node-edge overlapping—as a strategy to hide leaks. We evaluate the effectiveness of violating these metrics by conducting a user study that measures subject performance at visually searching for and identifying a privacy leak. Results show that when more masking operations are applied, participants needed more time to locate the privacy leak, though exhaustive, brute force search can eventually find it. We suggest future directions on how perceptual masking can be a viable strategy, primarily where modifying the underlying network structure is unfeasible.
- Published
- 2018
- Full Text
- View/download PDF
17. Cluster-Based Visualization for Merger Tree Data: The Challenge of Missing Expectations
- Author
-
Annie Preston and Kwan-Liu Ma
- Subjects
Black box (phreaking), Tree (data structure), Computer science, Dark matter, Cluster (physics), Clutter, Data mining, Cluster analysis, Representation (mathematics), Visualization
- Abstract
Scientific simulations are yielding increasing amounts of data; to visualize the full output from a simulation, one must first reduce clutter and obstruction. Clustering algorithms are common tools for condensing information and decreasing clutter when analyzing and visualizing simulation output. Often, simulation data have intuitive groupings. In some cases, though, such as merger trees from N-body dark matter simulations, there are limited expectations for clustering results. We investigate cluster-based visualization design for merger tree data, testing whether multidimensional encodings and opening the "black box" can allow for meaningful representation and exploration of these data.
- Published
- 2018
- Full Text
- View/download PDF
18. Visual Analysis of Simulation Uncertainty Using Cost-Effective Sampling
- Author
-
Yiran Li, Annie Preston, Kwan-Liu Ma, and Franz Sauer
- Subjects
Speedup, Bootstrapping, Computer science, Sampling (statistics), Experimental data, Data modeling, Visualization, Data visualization, Scalability, Data mining
- Abstract
Studying large, complex simulations entails understanding their uncertainties. However, visualization tools that rapidly quantify simulation uncertainty may require precise tuning, give limited information, or struggle to disentangle uncertainty sources. We propose a fast, scalable regression-based approach that uses bootstrapping on small samples of simulation data to model the effect of uncertainty from discreteness. We test the approach on three types of simulations with unique sources of uncertainty: particles (dark matter), ensembles (ocean), and discretized flows (traffic). We create a visualization tool to facilitate this modeling, showing training data and predictions in real time. Scientists, who need to provide only modest supervision, can use our tool to quickly understand how initial conditions and parameterizations affect observable quantities, their uncertainties, and their agreement with experimental data. We show that our tool offers a speedup of several orders of magnitude over comparable uncertainty calculation approaches.
- Published
- 2018
- Full Text
- View/download PDF
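A minimal sketch of the bootstrapping idea summarized in entry 18: refit a simple regression on resampled subsets of simulation output to estimate how uncertain a derived quantity is. The arrays and the linear model are hypothetical stand-ins, not the paper's simulations or its regression setup.

```python
import numpy as np

rng = np.random.default_rng(0)
params = rng.uniform(0.0, 1.0, size=200)             # e.g., an initial condition
observable = 2.5 * params + rng.normal(0, 0.2, 200)  # e.g., a measured quantity

def bootstrap_fit(x, y, n_boot=1000):
    """Refit a linear model on resampled data to estimate slope uncertainty."""
    slopes = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))         # sample with replacement
        slope, _ = np.polyfit(x[idx], y[idx], deg=1)
        slopes.append(slope)
    slopes = np.asarray(slopes)
    return slopes.mean(), slopes.std()

mean_slope, slope_sd = bootstrap_fit(params, observable)
print(f"slope = {mean_slope:.3f} +/- {slope_sd:.3f}")
```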
19. An Organic Visual Metaphor for Public Understanding of Conditional Co-occurrences
- Author
-
Takanori Fujiwara, Kwan-Liu Ma, and Keshav Dasu
- Subjects
Computer science, Metaphor, Conditional probability, Data science, Hierarchical database model, Domain (software engineering), Market research, Order (exchange), Health care, Affect (linguistics)
- Abstract
Decisions made by domain experts, such as in healthcare and market research, are influenced by the conditional co-occurrence of different events. Learning about conditional co-occurrence is also beneficial for non-experts–the general public. By understanding the co-occurrences of diseases, it is easier to understand which diseases individuals are susceptible to. However, co-occurrence data is often complex. In order for a public understanding of conditional co-occurrence, there needs to be a simpler form to convey such complex information. We introduce an organic visual metaphor, which can provide a summary of the conditional co-occurrences within a large set of items and is accessible to the public with its organic shape. We develop a prototype application offering not only an overview for users to gain insights on how co-occurrence patterns evolve based on user-defined criteria (e.g., how do sex and age affect likelihood), but also functionality to explore the hierarchical data in-depth. We conducted two case studies with this prototype to demonstrate the effectiveness of our design.
- Published
- 2018
- Full Text
- View/download PDF
20. Exploring the Role of Sound in Augmenting Visualization to Enhance User Engagement
- Author
-
Senthil Chandrasegaran, Meng Du, Jia-Kai Chou, Kwan-Liu Ma, and Chen Ma
- Subjects
Computer science, Process (engineering), Animation, Visualization, Data visualization, Human–computer interaction, Perception, Task analysis, User interface, Interactive visualization
- Abstract
Studies on augmenting visualization with sound are typically based on the assumption that sound can be complementary and assist in data analysis tasks. While sound promotes a different sense of engagement than vision, we conjecture that augmenting a visualization with non-speech audio can not only enhance the users' perception of the data but also increase their engagement with the data exploration process. We have designed a preliminary user study to test users' performance and engagement while exploring in a data visualization system under two different settings: visual-only and audiovisual. For our study, we used basketball player movement data in a game and created an interactive visualization system with three linked views. We supplemented the visualization with sound to enhance the users' understanding of a team's offensive/defensive behavior. The results of our study suggest that we need to better understand the effect of sound choice and encoding before considering engagement. We also find that sound can be useful to draw novice users' attention to patterns or anomalies in the data. Finally, we propose follow-up studies with designs informed by the findings from this study.
- Published
- 2018
- Full Text
- View/download PDF
21. Toward reliable validation of HPC network simulation models
- Author
-
Misbah Mubarak, Nikhil Jain, Jens Domke, Noah Wolfe, Caitlin Ross, Kelvin Li, Abhinav Bhatele, Christopher D. Carothers, Kwan-Liu Ma, and Robert B. Ross
- Published
- 2017
- Full Text
- View/download PDF
22. In situ video encoding of floating-point volume data using special-purpose hardware for a posteriori rendering and analysis
- Author
-
Nick Leaf, Kwan-Liu Ma, and Bob Miller
- Subjects
Floating point, Computational complexity theory, Computer science, Computation, Lossy compression, Rendering (computer graphics), Compression ratio, A priori and a posteriori, Computer hardware, Volume (compression)
- Abstract
Scientific simulations typically store only a small fraction of computed timesteps due to storage and I/O bandwidth limitations. Previous work has demonstrated the compressibility of floating-point volume data, but such compression often comes with a tradeoff between computational complexity and the achievable compression ratio. This work demonstrates the use of special-purpose video encoding hardware on the GPU, which is present but (to the best of our knowledge) completely unused in current GPU-equipped supercomputers such as Titan. We show that lossy encoding allows the output of far more data at sufficient quality for a posteriori rendering and analysis. We also show that the encoding can be computed in parallel to general-purpose computation due to the special-purpose hardware. Finally, we demonstrate such encoded volumes are inexpensive to decode in memory during analysis, making it unnecessary to ever store the decompressed volumes on disk.
- Published
- 2017
- Full Text
- View/download PDF
23. Aiding infection analysis and diagnosis through temporally-contextualized matrix representations
- Author
-
Kwan-Liu Ma, Soman Sen, Maksim Gomov, Nam K. Tran, Jia-Kai Chou, Jianping Kelvin Li, and Kiho Cho
- Subjects
Descriptive statistics, Computer science, Medical record, Representation (systemics), Timeline, Context (language use), Identification (information), Data visualization, Use case, Intensive care medicine
- Abstract
Determining infections and sepsis of severely burned adults in a timely fashion allows clinicians to provide necessary treatments to critically ill patients, potentially reducing the chance of mortality. In current practice, clinicians examine large amounts of heterogeneous medical records using a spreadsheet-like representation and perform analysis of descriptive statistics to aid in their decision making for sepsis diagnosis. A more efficient approach is required to streamline such a process; accordingly, we developed an interactive visual interface for supporting quick inspection and comparison of patients' retrospective clinical trajectories. In particular, we employ a timeline representation to present entire treatment contexts for individual patients, and an aggregated matrix representation for summarizing multiple data variables of individual patients over time. This provides clinicians with a compact and intuitive way to discover the important trends, patterns, and events that occur in the context of multiple patients. We present several possible use cases identified by clinicians using our system and show that our preliminary results have the potential to greatly improve diagnostic timing and accuracy of sepsis identification in critically ill patients.
- Published
- 2017
- Full Text
- View/download PDF
24. A Visual Analytics System for Optimizing Communications in Massively Parallel Applications
- Author
-
Kwan-Liu Ma, Michael E. Papka, Preeti Malakar, Takanori Fujiwara, Venkatram Vishwanath, and Khairi Reda
- Subjects
Visual analytics, Computer science, Distributed computing, Complex network, Network topology, Supercomputer, Data visualization, Parallel communication, Scalability, Massively parallel
- Abstract
Current and future supercomputers have tens of thousands of compute nodes interconnected with high-dimensional networks and complex network topologies for improved performance. Application developers are required to write scalable parallel programs in order to achieve high throughput on these machines. Application performance is largely determined by efficient inter-process communication. A common way to analyze and optimize performance is through profiling parallel codes to identify communication bottlenecks. However, understanding gigabytes of profiled data is not a trivial task. In this paper, we present a visual analytics system for identifying the scalability bottlenecks and improving the communication efficiency of massively parallel applications. Visualization methods used in this system are designed to comprehend large-scale and varied communication patterns on thousands of nodes in complex networks such as the 5D torus and the dragonfly. We also present efficient rerouting and remapping algorithms that can be coupled with our interactive visual analytics design for performance optimization. We demonstrate the utility of our system with several case studies using three benchmark applications on two leading supercomputers. The mapping suggestion from our system led to a 38% improvement in hop-bytes for the Mini AMR application on 4,096 MPI processes.
- Published
- 2017
- Full Text
- View/download PDF
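A minimal sketch of the hop-bytes metric referenced in entry 24 (bytes communicated multiplied by network hops under a given rank-to-node mapping), computed for a hypothetical 3D torus; the paper's system targets 5D torus and dragonfly networks and couples this metric with interactive visual analytics, which is not reproduced here.

```python
import numpy as np

def torus_hops(a, b, dims):
    """Shortest hop count between two nodes on a wrap-around torus."""
    return sum(min(abs(x - y), d - abs(x - y)) for x, y, d in zip(a, b, dims))

def hop_bytes(comm, mapping, dims):
    """comm[i][j] = bytes sent from rank i to rank j; mapping[r] = node coordinates."""
    total = 0
    n = len(comm)
    for i in range(n):
        for j in range(n):
            if comm[i][j]:
                total += comm[i][j] * torus_hops(mapping[i], mapping[j], dims)
    return total

dims = (4, 4, 4)
mapping = [(r % 4, (r // 4) % 4, r // 16) for r in range(64)]   # a naive rank placement
comm = np.zeros((64, 64), dtype=int)
comm[0, 63] = 1024                                              # one heavy message
print("hop-bytes:", hop_bytes(comm, mapping, dims))             # 1024 bytes x 3 hops
```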
25. GraphRay: Distributed pathfinder network scaling
- Author
-
Kwan-Liu Ma, Alessio Arleo, and Oh-Hyun Kwon
- Subjects
Distributed Computing Environment, Computer science, Pathfinder network, Binary logarithm, Electronic mail, Computational science, Distributed algorithm, Scalability, Algorithm design, Time complexity
- Abstract
Pathfinder network scaling is a graph sparsification technique that has been popularly used due to its efficacy of extracting the “important” structure of a graph. However, existing algorithms to compute the pathfinder network (PFNET) of a graph have prohibitively expensive time complexity for large graphs: O(n3) for the general case and O(n2 log n) for a specific parameter setting, PFNET(r = ∞, q = n − 1), which is considered in many applications. In this paper, we introduce the first distributed technique to compute the pathfinder network with the specific parameters (r = ∞ and q = n − 1) of a large graph with millions of edges. The results of our experiments show our technique is scalable; it efficiently utilizes a parallel distributed computing environment, reducing the running times as more processing units are added.
- Published
- 2017
- Full Text
- View/download PDF
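A reference (non-distributed) sketch of the PFNET(r = ∞, q = n − 1) condition discussed in entry 25: an edge survives only if no alternative path has a smaller maximum edge weight. This is the straightforward O(n³) formulation on a small dense matrix; GraphRay's contribution is a distributed algorithm for the same result, which is not reproduced here.

```python
import numpy as np

def pfnet_inf(W):
    """Keep edge (i, j) only if no other path has a smaller maximum edge
    weight (the r = inf, q = n - 1 Pathfinder condition)."""
    D = W.copy()                        # minimax path distances
    np.fill_diagonal(D, 0.0)
    for k in range(W.shape[0]):         # Floyd-Warshall with a (min, max) algebra
        D = np.minimum(D, np.maximum(D[:, k:k + 1], D[k:k + 1, :]))
    keep = np.isfinite(W) & np.isclose(W, D)
    return np.where(keep, W, np.inf)    # sparsified weight matrix

W = np.array([[np.inf, 1.0, 4.0],
              [1.0, np.inf, 2.0],
              [4.0, 2.0, np.inf]])      # np.inf marks absent edges
print(pfnet_inf(W))                     # the 0-2 edge (weight 4) is pruned
```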
26. Visual Analytics Techniques for Exploring the Design Space of Large-Scale High-Radix Networks
- Author
-
Christopher D. Carothers, Jianping Kelvin Li, Kwan-Liu Ma, Robert B. Ross, and Misbah Mubarak
- Subjects
Visual analytics, Computer science, Distributed computing, Network topology, Data science, Visualization, Network simulation, Data modeling, Data visualization, Analytics, Scalability, Network performance
- Abstract
High-radix, low-diameter, hierarchical networks based on the Dragonfly topology are common picks for building next generation HPC systems. However, effective tools are lacking for analyzing the network performance and exploring the design choices for such emerging networks at scale. In this paper, we present visual analytics methods that couple data aggregation techniques with interactive visualizations for analyzing large-scale Dragonfly networks. We create an interactive visual analytics system based on these techniques. To facilitate effective analysis and exploration of network behaviors, our system provides intuitive, scalable visualizations that can be customized to show various traffic characteristics and correlate between different performance metrics. Using high-fidelity network simulation and HPC applications communication traces, we demonstrate the usefulness of our system with several case studies on exploring network behaviors at scale with different workloads, routing strategies, and job placement policies. Our simulations and visualizations provide valuable insights for mitigating network congestion and inter-job interference.
- Published
- 2017
- Full Text
- View/download PDF
27. A gesture system for graph visualization in virtual reality environments
- Author
-
Yi-Jheng Huang, Yun-Xuan Lin, Takanori Fujiwara, Wen-Chieh Lin, and Kwan-Liu Ma
- Subjects
Computer science, Stereoscopy, Virtual reality, Electronic mail, Visualization, Data visualization, Graph drawing, Gesture recognition, Computer graphics (images), Gesture
- Abstract
As virtual reality (VR) hardware technology becomes more mature and affordable, it is timely to develop visualization applications making use of such technology. How to interact with data in an immersive 3D space is both an interesting and challenging problem, demanding more research investigations. In this paper, we present a gesture input system for graph visualization in a stereoscopic 3D space. We compare desktop mouse input with gesture input with bare hands for performing a set of tasks on graphs. Our study results indicate that users are able to effortlessly manipulate and analyze graphs using gesture input. Furthermore, the results also show that using gestures is more efficient when exploring the complicated graph.
- Published
- 2017
- Full Text
- View/download PDF
28. Privacy preserving visualization for social network data with ontology information
- Author
-
Kwan-Liu Ma, Chris Bryan, and Jia-Kai Chou
- Subjects
Information privacy, Information retrieval, Data anonymization, Computer science, Privacy software, Ontology (information science), Visualization, World Wide Web, Information sensitivity, Data visualization, Graph drawing
- Abstract
Analyzing social network data helps sociologists understand the behaviors of individuals and groups as well as the relationships between them. With additional ontology information, the semantics behind the network structure can be further explored. Unfortunately, creating network visualizations with these datasets for presentation can inadvertently expose the private and sensitive information of individuals that reside in the data. To deal with this problem, we generalize conventional data anonymization models (originally designed for relational data) and formally apply them in the context of privacy preserving ontological network visualization. We use these models to identify the privacy leaks that exist in a visualization, provide graph modification actions that remove and/or perceptually minimize the effect of the identified leaks, and discuss strategies for what types of privacy actions to choose depending on the context of the leaks. We implement an ontological visualization interface with associated privacy preserving operations, and demonstrate with two case studies using real-world datasets to show that our approach can identify and solve potential privacy issues while balancing overall graph readability and utility.
- Published
- 2017
- Full Text
- View/download PDF
29. A visual analytics system for brain functional connectivity comparison across individuals, groups, and time points
- Author
-
Andrew M. McCullough, Takanori Fujiwara, Kwan-Liu Ma, Charan Ranganath, and Jia-Kai Chou
- Subjects
Visual analytics, Computer science, Functional connectivity, Correlation, Data visualization, High complexity, A priori and a posteriori, Data mining, Curse of dimensionality
- Abstract
Neuroscientists study brain functional connectivity in order to obtain a deeper understanding of how the brain functions. Current studies are mainly based on analyzing the averaged brain connectivity of a group (or groups) due to the high complexity of the collected data in terms of dimensionality, variability, and volume. While it is more desirable for the researchers to explore the potential variability between individual subjects or groups, a data analysis solution meeting this need is absent. In this paper, we present the design and capabilities of such a visual analytics system, which enables neuroscientists to visually compare the differences of brain networks between individual subjects as well as group averages, to explore a large dataset and examine sub-groups of participants that may not have been expected a priori to be of interest, to review detailed information as needed, and to manipulate the data and views to fit their analytical needs with easy interactions. We demonstrate the utility and strengths of this system with case studies using a representative functional connectivity dataset.
- Published
- 2017
- Full Text
- View/download PDF
30. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations
- Author
-
Caitlin Ross, Christopher D. Carothers, Misbah Mubarak, Philip Carns, Robert Ross, Jianping Kelvin Li, and Kwan-Liu Ma
- Published
- 2016
- Full Text
- View/download PDF
31. VIPACT: A Visualization Interface for Analyzing Calling Context Trees
- Author
-
Huu Tan Nguyen, Lai Wei, Abhinav Bhatele, Todd Gamblin, David Boehme, Martin Schulz, Kwan-Liu Ma, and Peer-Timo Bremer
- Published
- 2016
- Full Text
- View/download PDF
32. Parallel distributed, GPU-accelerated, advanced lighting calculations for large-scale volume visualization
- Author
-
Joseph A. Insley, Silvio Rizzi, Michael E. Papka, Thomas D. Uram, Venkatram Vishwanath, Mark Hereld, Kwan-Liu Ma, and Min Shih
- Subjects
Global illumination, Computer science, Volume rendering, GPU cluster, Computational science, Rendering (computer graphics), Data exchange, Computer graphics (images), Volume visualization, Scalability, General-purpose computing on graphics processing units
- Abstract
The benefits of applying advanced illumination models to volume visualization have been demonstrated by many researchers. For a parallel distributed, GPU computing environment, however, there is no efficient algorithm for scalable global illumination calculations. This paper presents a parallel, data-distributed and GPU-accelerated algorithm for volume rendering with advanced lighting. Our approach features tunable soft shadows for enhancing perception of complex spatial structures and relationships. For lighting calculations, our design effectively avoids data exchange among GPUs. Performance evaluation on a GPU cluster using up to 128 GPUs shows scalable rendering performance, with both the number of GPUs and volume data size.
- Published
- 2016
- Full Text
- View/download PDF
33. In situ generated probability distribution functions for interactive post hoc visualization and analysis
- Author
-
Tyson Neuroth, Franz Sauer, Yucong Chris Ye, Aditya Konduri, Kwan-Liu Ma, Hemanth Kolla, Jacqueline H. Chen, and Giulio Borghesi
- Subjects
Data processing, Computer science, Usability, Data type, Visualization, Petascale computing, Workflow, Data access, Probability distribution, Data mining
- Abstract
The growing power and capacity of supercomputers enable scientific simulations at extreme scale, leading to not only more accurate modeling and greater predictive ability but also massive quantities of data to analyze. New approaches to data analysis and visualization are thus needed to support interactive exploration through selective data access for gaining insights into terabytes and petabytes of data. In this paper, we present an in situ data processing method for both generating probability distribution functions (PDFs) from field data and reorganizing particle data using a single spatial organization scheme. This coupling between PDFs and particles allows for the interactive post hoc exploration of both data types simultaneously. Scientists can explore trends in large-scale data through the PDFs and subsequently extract desired particle subsets for further analysis. We evaluate the usability of our in situ method using a petascale combustion simulation and demonstrate the increases in task efficiency and accuracy that the resulting workflow provides to scientists.
- Published
- 2016
- Full Text
- View/download PDF
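A minimal sketch of the spatial organization idea in entry 33: one histogram (PDF) per coarse spatial block of a scalar field, so local distributions can be explored post hoc without the full-resolution data. The field, block size, and value range are hypothetical, and the particle-reorganization half of the method is not shown.

```python
import numpy as np

def block_histograms(field, block=16, bins=32, value_range=(0.0, 1.0)):
    """Partition a 3D field into block^3 regions and store one normalized
    histogram per region, enabling post hoc exploration of local distributions."""
    nx, ny, nz = field.shape
    hists = {}
    for i in range(0, nx, block):
        for j in range(0, ny, block):
            for k in range(0, nz, block):
                sub = field[i:i + block, j:j + block, k:k + block]
                h, _ = np.histogram(sub, bins=bins, range=value_range, density=True)
                hists[(i // block, j // block, k // block)] = h
    return hists

field = np.random.rand(64, 64, 64)        # stand-in for a simulation scalar field
pdfs = block_histograms(field)
print(len(pdfs), "regional PDFs of", len(next(iter(pdfs.values()))), "bins each")
```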
34. Evaluation of Topology-Aware Broadcast Algorithms for Dragonfly Networks
- Author
-
Jianping Kelvin Li, Robert Ross, Christopher D. Carothers, Matthieu Dorier, Misbah Mubarak, and Kwan-Liu Ma
- Subjects
Interconnection, Computer science, Distributed computing, Network topology, Broadcast algorithm, Resource management, Algorithm design, Computer network
- Abstract
Two-tiered direct network topologies such as Dragonflies have been proposed for future post-petascale and exascale machines, since they provide a high-radix, low-diameter, fast interconnection network. Such topologies call for redesigning MPI collective communication algorithms in order to attain the best performance. Yet as increasingly more applications share a machine, it is not clear how these topology-aware algorithms will react to interference with concurrent jobs accessing the same network. In this paper, we study three topology-aware broadcast algorithms, including one designed by ourselves. We evaluate their performance through event-driven simulation for small- and large-sized broadcasts (in terms of both data size and number of processes). We study the effect of different routing mechanisms on the topology-aware collective algorithms, as well as their sensitivity to network contention with other jobs. Our results show that while topology-aware algorithms dramatically reduce link utilization, their advantage in terms of latency is more limited.
- Published
- 2016
- Full Text
- View/download PDF
35. Fostering comparisons: Designing an interactive exhibit that visualizes marine animal behaviors
- Author
-
Joyce Ma, Jacqueline Chu, Jennifer Frazier, Chien-Hsin Hsueh, and Kwan-Liu Ma
- Subjects
Visual analytics, Computer science, Visualization, Formative assessment, Information visualization, Human–computer interaction, User interface, Set (psychology), Interactive visualization, Strengths and weaknesses
- Abstract
We share our challenges and lessons learned in designing our exhibit prototype that encourages museum visitors to learn about marine animal behaviors through interactive visualization and data exploration. Our intent is to have visitors draw comparisons between animal behaviors, similarly to how scientists would, to make insights and discoveries. In our efforts, we have designed a set of visual encodings around the Tagging of Pelagic Predator (TOPP) data set to create the appropriate abstractions of this rich and complex field data. We have incorporated Multiple External Representations (MERs) and tangible user interfaces (TUIs) to provide a complementary representation of the data and promote self-learning. Through the formative evaluation, we can identify a few strengths and weaknesses of our prototype design. Our evaluation results suggest that we are progressing in the right direction — we observed the public making some comparisons and inferences — but still require further design iterations to improve our visualization exhibit.
- Published
- 2016
- Full Text
- View/download PDF
36. A design study of personal bibliographic data visualization
- Author
-
Kwan-Liu Ma, Jia-Kai Chou, and Tsai-Ling Fung
- Subjects
Academic career, Information retrieval, Computer science, Data science, Data mapping, Visualization, Information visualization, Tree (data structure), Data visualization, Design study, Adjacency matrix
- Abstract
This paper presents a comparative study on personal visualizations of bibliographic data. We consider three designs for egocentric visualization: node-link diagrams, adjacency matrices, and botanical trees to depict one's academic career in terms of his/her publication records. Case studies are conducted to compare the effectiveness of resulting visualizations for conveying particular aspect of a researcher's bibliographic records. Based on our study, we find that node-link diagrams are better at revealing the overall distribution of certain attributes; adjacency matrices can convey more information with less clutter; and botanical trees are visually attractive and provide the best at a glance characterization of the mapped data, but mapping data to tree features must be carefully done to derive expressive visualization.
- Published
- 2016
- Full Text
- View/download PDF
37. An Interactive Visual Analysis Tool for Cellular Behavior Studies Using Large Collections of Microscopy Videos
- Author
-
Árpád Karsai, Kwan-Liu Ma, Jia-Kai Chou, Gang-yu Liu, Chuan Wang, Evgeny Ogorodnik, Victoria Tran, and Ying X. Liu
- Subjects
Focus (computing), Visual analytics, Multimedia, Computer science, Feature extraction, Visualization, Information visualization, Data visualization, Biological data visualization, Interactive visual analysis, Human–computer interaction
- Abstract
This paper presents an interactive visual analysis tool created for studying collections of video data. Our driving application is cellular behavior studies that use microscopy imaging methods. The studies routinely generate large amounts of videos with various experimental conditions. It is very time-consuming for the scientists to watch each video and manually extract features of interest for further comparative and quantitative studies. We show that with our visualization tool, scientists are now able to conveniently observe, select and isolate, and compare and analyze the cellular behaviors from different perspectives within one framework. The tremendous time and effort saved allow scientists to focus on deriving the actual meaning behind certain observed behaviors.
- Published
- 2016
- Full Text
- View/download PDF
38. Enabling interactive scientific data visualization and analysis with see-through hmds and a large tiled display
- Author
-
Kwan-Liu Ma, Issei Fujishiro, Yucong Ye, Chuan Wang, and Ken Nagao
- Subjects
Computer science, Optical head-mounted display, Stereoscopy, Interaction Styles, Visualization, Rendering (computer graphics), Information visualization, Data visualization, Software, Human–computer interaction, Computer graphics (images)
- Abstract
Validation and exploration of the data generated by large-scale scientific simulations rely on sophisticated visualization and analysis tasks. With the advancement of supercomputing, the growing scale and complexity of the data make some of these tasks challenging, which demands new hardware and software solutions. We believe it is possible to address some of the challenges by utilizing the increasingly affordable see-through head-mounted display (HMD) devices together with a low-cost tiled HDTV display. With the tiled display providing a high-resolution overview of the data, the user can freely choose a small area to explore and analyze using a see-through HMD in stereoscopic 3D with gesture input. During such local exploration and detail data analysis, the user can apply a newly derived visualization parameter setting to the large tiled display for a new overview. In this way, computational costs become more manageable because real-time rendering and response are only required to cover a small screen space and a subset of the data. In our current study, we focus on supporting immersive isosurface and streamline visualization and analysis of 3D flow field data. In this workshop paper, we present our preliminary design and results, and we also discuss our further development and evaluation plan.
- Published
- 2016
- Full Text
- View/download PDF
39. Integrating predictive analytics into a spatiotemporal epidemic simulation
- Author
-
Susan M. Mniszewski, Xue Wu, Chris Bryan, and Kwan-Liu Ma
- Subjects
Visual analytics, Computer science, Statistical model, Predictive analytics, Workflow, Robustness (computer science), Analytics, Scalability, Data mining, Curse of dimensionality
- Abstract
The Epidemic Simulation System (EpiSimS) is a scalable, complex modeling tool for analyzing disease within the United States. Due to its high input dimensionality, time requirements, and resource constraints, simulating over the entire parameter space is unfeasible. One solution is to take a granular sampling of the input space and use simpler predictive models (emulators) in between. The quality of the implemented emulator depends on many factors: robustness, sophistication, configuration, and suitability to the input data. Visual analytics can be leveraged to provide guidance and understanding of these things to the user. In this paper, we have implemented a novel interface and workflow for emulator building and use. We introduce a workflow to build emulators, make predictions, and then analyze the results. Our prediction process first predicts temporal time series, and uses these to derive predicted spatial densities. Integrated into the EpiSimS framework, we target users who are non-experts at statistical modeling. This approach allows for a high level of analysis into the state of the built emulators and their resultant predictions. We present our workflow, models, the associated system, and evaluate the overall utility with feedback from EpiSimS scientists.
- Published
- 2015
- Full Text
- View/download PDF
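A minimal sketch of the emulator workflow summarized in entry 39: run the expensive simulator at a coarse sampling of the input space, fit a statistical model, and query it cheaply in between, with a predictive uncertainty. The toy one-parameter "simulator" and the Gaussian-process choice are illustrative, not EpiSimS or its actual emulators.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def simulator(transmission_rate):
    """Stand-in for an expensive epidemic run: peak infections vs. one parameter."""
    return 1.0 / (1.0 + np.exp(-10 * (transmission_rate - 0.5)))

train_x = np.linspace(0.0, 1.0, 8).reshape(-1, 1)       # coarse parameter sampling
train_y = simulator(train_x).ravel()                    # "simulation" outputs

emulator = GaussianProcessRegressor().fit(train_x, train_y)
query = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
mean, std = emulator.predict(query, return_std=True)    # cheap predictions + uncertainty
print("max predictive std:", std.max())
```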
40. Revealing the fog-of-war: A visualization-directed, uncertainty-aware approach for exploring high-dimensional data
- Author
-
Yang Wang and Kwan-Liu Ma
- Subjects
Clustering high-dimensional data, Visual analytics, Data exploration, Computer science, Dimensionality reduction, Context (language use), Machine learning, k-nearest neighbors algorithm, Visualization, Data visualization, Scatter plot, Embedding, Data mining, Artificial intelligence, Uncertainty analysis
- Abstract
Dimensionality Reduction (DR) is a crucial tool to facilitate high-dimensional data analysis. As the volume and the variety of features used to describe a phenomenon keeps increasing, DR has become not only desirable but paramount. However, DR can result in unreliable depictions of data. The uncertainties involved in DR may stem from the selection of methods, parameter configurations, and the constraints imposed by the user. To address these uncertainties, various means of DR quality assessment have been proposed in the literature. Nevertheless, how to optimize the trade-off between the quantification efficiency and accuracy is yet to be further studied. The purpose of this paper is to present a general technique, in the context of visual analytics, to support efficient uncertainty-aware high-dimensional data exploration. We model the uncertainty based on how well neighborhood geometries are preserved during DR. We employ approximated nearest neighbor (ANN) search algorithms to speed up the quantification process with marginal decrease in accuracy. We then visualize the quantified uncertainties in the form of augmented scatter plot. We test our technique with three real world datasets against several well-known DR techniques, and discuss possible underlying causes that lead to certain embedding patterns. Our results show that our approach is effective and beneficial for both DR assessment and user-centered data exploration.
- Published
- 2015
- Full Text
- View/download PDF
41. In situ depth maps based feature extraction and tracking
- Author
-
Yang Wang, Kenji Ono, Kwan-Liu Ma, Robert Miller, and Yucong Chris Ye
- Subjects
Search engine ,Computer simulation ,Computer science ,Interface (computing) ,Feature extraction ,Tracking (particle physics) ,Supercomputer ,Image (mathematics) ,Computational science ,Visualization - Abstract
Parallel numerical simulation is a powerful tool used by scientists to study complex problems. It has been common practice to save the simulation output to disk and then conduct post-hoc in-depth analyses of the saved data. System I/O capabilities have not kept pace as simulations have scaled up over time, so a common approach has been to output only subsets of the data to reduce I/O. However, as we enter the era of peta- and exa-scale computing, this sub-sampling approach is no longer acceptable because too much valuable information is lost. In situ visualization has been shown to be a promising approach to the data problem at extreme scale. We present a novel in situ solution that uses depth maps to enable post-hoc image-based visualization, feature extraction, and tracking. An interactive interface allows fine-tuning the generation of depth maps during a simulation run to better capture the features of interest. We use several applications, including one actual simulation run on a Cray XE6 supercomputer, to demonstrate the effectiveness of our approach.
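As a minimal illustration of what a depth map records, the sketch below computes, per image pixel, the depth of the first sample along an axis-aligned viewing ray that crosses an isovalue; the isovalue, ray direction, and toy volume are assumptions, and the paper's actual in situ generation is more general.

```python
# Hedged sketch: first-hit depth map from a scalar volume, the kind of compact
# per-pixel record an in situ pass might save for post-hoc image-based analysis.
import numpy as np

def depth_map(volume, isovalue):
    """Return, per (x, y) pixel, the first z index where volume >= isovalue."""
    hits = volume >= isovalue                      # boolean (nx, ny, nz)
    any_hit = hits.any(axis=2)
    first = hits.argmax(axis=2).astype(float)      # index of first True along z
    first[~any_hit] = np.nan                       # background: no feature hit
    return first

# Toy volume: a blob of high values embedded in low-level noise.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
vol = np.exp(-(x**2 + y**2 + z**2) * 8) + 0.02 * np.random.rand(64, 64, 64)
dmap = depth_map(vol, isovalue=0.5)
print("pixels covering the feature:", np.count_nonzero(~np.isnan(dmap)))
```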
- Published
- 2015
- Full Text
- View/download PDF
42. Fast uncertainty-driven large-scale volume feature extraction on desktop PCs
- Author
-
Jinrong Xie, Kwan-Liu Ma, and Franz Sauer
- Subjects
business.industry ,Computer science ,Feature extraction ,Scientific visualization ,Usability ,computer.software_genre ,Field (computer science) ,Hierarchical clustering ,Domain knowledge ,Data mining ,Cluster analysis ,business ,computer ,Level of detail - Abstract
The ability to efficiently and accurately extract features of interest is extremely important in scientific visualization, as it allows researchers to isolate regions based on their domain knowledge. However, the increasing size of large-scale datasets often forces users to rely on distributed computing environments, which have many drawbacks in terms of interaction and convenience, and many current feature extraction techniques are designed around these distributed environments. Overcoming the memory and bandwidth limitations of desktop PCs would broaden their usability for large-scale applications. In this work, we present a new hybrid feature extraction technique that combines GPU-accelerated clustering with the multi-resolution advantages of supervoxels in order to handle large-scale datasets on standard desktop PCs. This is paired with a user-driven, uncertainty-based refinement approach that refines extraction results to a desired level of detail. We demonstrate the effectiveness and interactivity of this technique using a number of application-specific examples on large-scale volumetric datasets.
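A minimal sketch of the coarse-then-refine idea follows: summarize the volume into "supervoxels", cluster those summaries, and reserve fine-grained work for regions the user flags. Block-mean supervoxels and CPU k-means are stand-ins for the paper's supervoxel construction and GPU-accelerated clustering.

```python
# Hedged sketch: two-stage feature extraction on a desktop-scale budget.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def block_supervoxels(volume, block=8):
    """Average non-overlapping block^3 regions of the volume into one value each."""
    nx, ny, nz = (s // block for s in volume.shape)
    v = volume[:nx * block, :ny * block, :nz * block]
    return v.reshape(nx, block, ny, block, nz, block).mean(axis=(1, 3, 5))

vol = np.random.rand(128, 128, 128).astype(np.float32)    # placeholder dataset
coarse = block_supervoxels(vol, block=8)                   # 16^3 supervoxel summaries

labels = MiniBatchKMeans(n_clusters=4, n_init=3, random_state=0).fit_predict(
    coarse.reshape(-1, 1))
print("supervoxels per cluster:", np.bincount(labels))
# A user-driven refinement pass would re-cluster raw voxels only inside the
# supervoxels whose cluster (or uncertainty) the user marks as interesting.
```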
- Published
- 2015
- Full Text
- View/download PDF
43. Scalable visualization of discrete velocity decompositions using spatially organized histograms
- Author
-
Tyson Neuroth, Kwan-Liu Ma, Franz Sauer, Weixing Wang, and S. Ethier
- Subjects
Creative visualization ,Geospatial analysis ,Scale (ratio) ,Computer science ,business.industry ,media_common.quotation_subject ,computer.software_genre ,Data type ,Visualization ,Data visualization ,Histogram ,Scalability ,Computer vision ,Artificial intelligence ,business ,computer ,media_common - Abstract
Visualizing the velocity decomposition of a group of objects has applications to many types of data, such as Lagrangian flow data and geospatial movement data. Traditional visualization techniques are often subject to a trade-off between visual clutter and loss of detail, especially in large-scale settings. The use of 2D velocity histograms can alleviate these issues. While they have been used in a basic form in domain-specific areas, there has been very little work in the visualization community on leveraging them for more advanced visualization tasks. In this work, we develop an interactive system that utilizes velocity histograms to visualize the velocity decomposition of a group of objects. In addition, we extend our tool with two schemes for histogram generation: an on-the-fly sampling scheme and an in situ scheme that maintains interactivity in extreme-scale applications.
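The following sketch shows one straightforward way to build spatially organized 2D velocity histograms: partition space into a coarse grid and accumulate a (vx, vy) histogram per cell. The particle data, grid resolution, and bin ranges are illustrative assumptions.

```python
# Hedged sketch: one 2D velocity histogram per spatial cell.
import numpy as np

n = 200_000
pos = np.random.rand(n, 2)                       # particle positions in [0, 1)^2
vel = np.random.randn(n, 2)                      # particle velocities (vx, vy)

grid = 4                                          # 4x4 spatial cells
vbins = np.linspace(-3, 3, 33)                    # 32x32 velocity bins per cell
cell = np.clip((pos * grid).astype(int), 0, grid - 1)
hists = np.zeros((grid, grid, 32, 32))

for i in range(grid):
    for j in range(grid):
        mask = (cell[:, 0] == i) & (cell[:, 1] == j)
        hists[i, j], _, _ = np.histogram2d(vel[mask, 0], vel[mask, 1],
                                           bins=[vbins, vbins])
print("particles counted across all cells:", int(hists.sum()))
```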
- Published
- 2015
- Full Text
- View/download PDF
44. Advanced lighting for unstructured-grid data visualization
- Author
-
Yubo Zhang, Kwan-Liu Ma, and Min Shih
- Subjects
Parallel rendering ,Data visualization ,Computer science ,business.industry ,Computer graphics (images) ,Scientific visualization ,Volume rendering ,Computer graphics lighting ,Volumetric lighting ,business ,ComputingMethodologies_COMPUTERGRAPHICS ,Visualization ,Rendering (computer graphics) - Abstract
The benefits of using advanced illumination models in volume visualization have been demonstrated by many researchers. Interactive volume rendering incorporating advanced lighting has been achieved with GPU acceleration for regular-grid volume data, making volume visualization even more appealing as a tool for 3D data exploration. This paper presents an interactive illumination strategy specially designed and optimized for volume visualization of unstructured-grid data. The basis of the design is a partial-differential-equation-based illumination model that simulates light propagation, absorption, and scattering within the volumetric medium. In particular, a two-level scheme is introduced to overcome the challenges presented by unstructured grids. Test results show that the added illumination effects, such as global shadowing and multiple scattering, not only lead to more visually pleasing visualizations but also greatly enhance the perception of depth and of complex spatial relationships for features of interest in the volume data. This enhancement arrives at a time when unstructured grids are becoming increasingly popular for a variety of scientific simulation applications.
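As a toy illustration of light transport in a participating medium (not the paper's two-level unstructured-grid scheme), the sketch below integrates simple exponential attenuation, dL/ds = -sigma_t * L, slab by slab on a regular grid; the medium and extinction coefficients are assumptions.

```python
# Hedged sketch: directional light attenuation swept along one axis of a regular
# grid, a stand-in for a PDE-based propagation/scattering model.
import numpy as np

density = np.random.rand(64, 64, 64).astype(np.float32)   # placeholder medium
sigma_t = 2.0 * density                                    # extinction coefficient
ds = 1.0 / 64                                              # step length per slab

light = np.ones((64, 64, 64), dtype=np.float32)            # incoming intensity = 1
for k in range(1, 64):                                     # sweep along +z
    light[:, :, k] = light[:, :, k - 1] * np.exp(-sigma_t[:, :, k - 1] * ds)
# 'light' can then modulate shading during volume rendering to add volumetric shadows.
print("mean transmitted intensity at the far slab:", float(light[:, :, -1].mean()))
```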
- Published
- 2015
- Full Text
- View/download PDF
45. Spherical layout and rendering methods for immersive graph visualization
- Author
-
Kwan-Liu Ma, Chris Muelder, Oh-Hyun Kwon, and Kyungwon Lee
- Subjects
Cave automatic virtual environment ,Information visualization ,Parallel rendering ,Graph drawing ,Computer science ,business.industry ,Computer graphics (images) ,Scientific visualization ,Virtual reality ,business ,Interactive visualization ,Visualization - Abstract
While virtual reality has been researched in many ways for spatial and scientific visualizations, comparatively little has been explored for visualizations of more abstract kinds of data. In particular, stereoscopic and VR environments for graph visualization have only been applied as limited extensions to standard 2D techniques (e.g. using stereoscopy for highlighting). In this work, we explore a new, immersive approach for graph visualization, designed specifically for virtual reality environments.
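As one hypothetical illustration of a spherical layout (the paper's actual layout and rendering methods may differ), the sketch below wraps a standard 2D force-directed layout onto a sphere surrounding the viewer.

```python
# Hedged sketch: map a 2D graph layout onto a sphere for an immersive view.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
pos2d = nx.spring_layout(G, seed=1)               # 2D positions roughly in [-1, 1]^2

radius = 5.0
pos3d = {}
for node, (u, v) in pos2d.items():
    lon = np.pi * u                                # map x to longitude [-pi, pi]
    lat = 0.5 * np.pi * v                          # map y to latitude  [-pi/2, pi/2]
    pos3d[node] = (radius * np.cos(lat) * np.cos(lon),
                   radius * np.cos(lat) * np.sin(lon),
                   radius * np.sin(lat))
print("node 0 on the sphere:", pos3d[0])
```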
- Published
- 2015
- Full Text
- View/download PDF
46. A visual analysis approach to cohort study of electronic patient records
- Author
-
Chun-Fu Wang, Kwan-Liu Ma, Jianping Li, Yu-Chuan Li, and Chih-Wei Huang
- Subjects
Computer science ,business.industry ,health care facilities, manpower, and services ,Medical record ,Critical factors ,Visual mining ,behavioral disciplines and activities ,Data science ,Visualization ,Data visualization ,health services administration ,Set (psychology) ,business ,health care economics and organizations ,Cohort study - Abstract
The ability to analyze and assimilate Electronic Medical Records (EMRs) has great value to physicians, clinical researchers, and medical policy makers. Current EMR systems do not provide adequate support for fully exploiting the data. The growing size, complexity, and accessibility of EMRs demand a new set of tools for extracting knowledge of interest from the data. This paper presents an interactive visual mining solution for cohort study of EMRs. The basis of our design is multidimensional, visual aggregation of the EMRs. The resulting visualizations can help uncover hidden structures in the data, compare different patient groups, determine critical factors for a particular disease, and help direct further analyses. We introduce and demonstrate our design with case studies using EMRs of 14,567 Chronic Kidney Disease (CKD) patients.
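The snippet below is only a small illustration of the kind of multidimensional aggregation such a cohort view builds on, using a synthetic EMR table; the columns, groupings, and values are hypothetical and not taken from the paper.

```python
# Hedged sketch: aggregate a toy EMR table along two dimensions to compare cohorts.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
emr = pd.DataFrame({
    "age_group":  rng.choice(["<40", "40-60", ">60"], size=1000),
    "sex":        rng.choice(["F", "M"], size=1000),
    "ckd_stage":  rng.integers(1, 6, size=1000),
    "creatinine": rng.normal(2.0, 0.8, size=1000),
})

cohorts = (emr.groupby(["age_group", "ckd_stage"])["creatinine"]
              .agg(["count", "mean"])
              .reset_index())
print(cohorts.head())   # one row per aggregated patient group
```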
- Published
- 2014
- Full Text
- View/download PDF
47. A system for visual analysis of radio signal data
- Author
-
Chris Muelder, Kwan-Liu Ma, and Tarik Crnovrsanin
- Subjects
Metadata ,Intelligence analysis ,Computer science ,Frequency band ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Data mining ,Duration (project management) ,Adversary ,computer.software_genre ,computer ,Signal ,Readability ,Task (project management) - Abstract
Analysis of radio transmissions is vital for military defense, as it provides valuable information about enemy communication and infrastructure. One challenge to the data analysis task is that there are far too many signals for analysts to go through by hand. Even typical signal metadata (such as frequency band, duration, and geographic location) can be overwhelming. In this paper, we present a system for exploring and analyzing such radio signal metadata. Our system incorporates several visual representations of signal data, designed for readability and ease of comparison, as well as novel algorithms for extracting and classifying consistent signal patterns. We demonstrate the effectiveness of our system using data collected from real missions with an airborne sensor platform.
- Published
- 2014
- Full Text
- View/download PDF
48. Let It Flow: A Static Method for Exploring Dynamic Graphs
- Author
-
Nathalie Henry Riche, Tara M. Madhyastha, Kwan-Liu Ma, Xiting Wang, Weiwei Cui, Shixia Liu, and Baining Guo
- Subjects
Visual analytics ,Theoretical computer science ,business.industry ,Computer science ,Closeness ,Animation ,computer.software_genre ,Visualization ,Information visualization ,Data visualization ,Graph drawing ,Data mining ,business ,computer ,Computer animation - Abstract
Research into social network analysis has shown that graph metrics, such as degree and closeness, are often used to summarize structural changes in a dynamic graph. However, few visual analytics approaches have been proposed to help analysts study graph evolution in the context of graph metrics. In this paper, we present a novel approach, called GraphFlow, to visualize dynamic graphs. In contrast to previous approaches that provide users with an animated visualization, GraphFlow offers a static flow visualization that summarizes the graph metrics of the entire graph and their evolution over time. Our solution supports the discovery of high-level patterns that are difficult to identify in an animation or in individual static representations. In addition, GraphFlow provides users with a set of interactions to create filtered views, which allow users to investigate why a particular pattern has occurred. We showcase the versatility of GraphFlow using two different datasets and describe how it can help users gain insights into complex dynamic graphs.
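To make the underlying summary concrete, the sketch below computes a per-snapshot distribution of one graph metric (degree) for a toy dynamic graph, the kind of time-by-metric table a static flow view can present instead of an animation; the snapshots, metric, and binning are illustrative assumptions.

```python
# Hedged sketch: per-snapshot degree distributions of a dynamic graph.
import numpy as np
import networkx as nx

snapshots = [nx.gnp_random_graph(100, p, seed=t)          # toy dynamic graph
             for t, p in enumerate(np.linspace(0.02, 0.1, 8))]

bins = np.arange(0, 16)
flow = np.array([
    np.histogram([d for _, d in G.degree()], bins=bins)[0]
    for G in snapshots
])                                                          # shape: (time, degree bin)
print(flow)   # each row could feed one time step of a stacked/stream-style flow view
```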
- Published
- 2014
- Full Text
- View/download PDF
49. Visibility guided multimodal volume visualization
- Author
-
Lin Zheng, Kwan-Liu Ma, and Carlos D. Correa
- Subjects
business.industry ,Computer science ,Visibility (geometry) ,Context (language use) ,Visualization ,Data visualization ,Region of interest ,Computer graphics (images) ,Medical imaging ,Volume ray casting ,Computer vision ,Artificial intelligence ,business ,Interactive visualization - Abstract
With advances in dual-modality medical imaging, the requirements for multimodal and multifield volume visualization have begun to emerge. One of the challenges in multimodal visualization is how to simplify the process of generating informative pictures from complementary data. In this paper we present an automatic technique that makes use of dual-modality information, such as CT and PET, to produce effective focus+context volume visualization. With volume ray casting, per-ray visibility histograms summarize the contribution of samples along each ray to the final image. By quantifying visibility for the region of interest indicated by the PET data, occluding tissues can be made just transparent enough to give a clear view of the features in that region while preserving some context. Unlike most previous methods, which rely on costly preprocessing and tedious manual tuning, our technique achieves comparable or better results with on-the-fly processing that still enables interactive visualization. Our work thus offers a powerful visualization technique for examining multimodal volume data. We demonstrate the technique with scenarios for the detection and diagnosis of cancer and other pathologies.
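The sketch below illustrates the visibility idea along a single ray: front-to-back compositing yields each sample's visibility, and occluders in front of the region of interest are damped until the region's total visibility reaches a target. The sample opacities, ROI mask, and damping rule are assumptions; the paper drives this per ray with visibility histograms.

```python
# Hedged sketch: make occluders just transparent enough for the ROI to be seen.
import numpy as np

def roi_visibility(alpha, roi_mask):
    """Front-to-back compositing; return the summed visibility of ROI samples."""
    transmittance = np.concatenate(([1.0], np.cumprod(1.0 - alpha)[:-1]))
    return (transmittance * alpha)[roi_mask].sum()

alpha = np.full(64, 0.05)             # occluding tissue along the ray
roi = np.zeros(64, dtype=bool)
roi[40:48] = True                     # PET-indicated region of interest
alpha[roi] = 0.3

front = np.arange(64) < roi.argmax()  # samples in front of the ROI
target = 0.5
while roi_visibility(alpha, roi) < target:
    alpha[front & ~roi] *= 0.9        # damp the occluders' opacity
print("occluder opacity reduced to:", round(float(alpha[0]), 3))
```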
- Published
- 2013
- Full Text
- View/download PDF
50. Proper orthogonal decomposition based parallel compression for visualizing big data on the K computer
- Author
-
Toshiyuki Imamura, Chongke Bi, Kenji Ono, Kwan-Liu Ma, and Haiyuan Wu
- Subjects
Data visualization ,Parallel compression ,Parallel rendering ,Computer science ,business.industry ,Volume rendering ,Parallel computing ,Supercomputer ,business ,Interactive visualization ,Computational science ,Visualization ,Data compression - Abstract
The development of supercomputers has greatly helped us carry out large-scale computations for simulating and analyzing a wide range of problems. Visualization is an indispensable tool for understanding the properties of the data produced on supercomputers. In particular, interactive visualization lets us analyze data from various viewpoints and even find local, small, but important features. However, it is still difficult to interactively visualize such big data directly, due to the slow file I/O problem and the limited memory size. To resolve these problems, we propose a parallel compression method that reduces the data size at low computational cost. Furthermore, its fast, linear decompression process is another merit for interactive visualization. Our method uses proper orthogonal decomposition (POD) to compress data, because POD can effectively extract important features from the data and the resulting compressed data can be decompressed linearly. Our implementation achieves high parallel efficiency with a binary load-distributed approach, which is similar to the binary-swap image composition used in parallel volume rendering [2]. This approach allows us to effectively utilize all the processing nodes and reduce the interprocessor communication cost throughout the parallel compression calculations. Our test results on the K computer demonstrate the superior performance of our design and implementation.
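As a serial, single-node illustration of POD compression only (the paper's parallel, load-distributed implementation on the K computer is far more involved), the sketch below applies a truncated SVD to a matrix of flattened snapshots and reconstructs the data by a linear combination of the kept modes; the data sizes and mode count are assumptions.

```python
# Hedged sketch: POD (truncated SVD) compression of time-varying field data.
import numpy as np

n_points, n_steps, r = 4096, 50, 8
X = np.random.rand(n_points, n_steps).astype(np.float32)   # placeholder snapshots

U, s, Vt = np.linalg.svd(X, full_matrices=False)
U_r = U[:, :r]                               # POD modes (kept basis)
coeffs = s[:r, None] * Vt[:r]                # r coefficients per time step

X_approx = U_r @ coeffs                      # linear decompression
ratio = X.size / (U_r.size + coeffs.size)
err = np.linalg.norm(X - X_approx) / np.linalg.norm(X)
print(f"compression ratio ~{ratio:.1f}x, relative error {err:.3f}")
```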
- Published
- 2013
- Full Text
- View/download PDF