126 results for "Weber GH"
Search Results
2. A terminology for in situ visualization and analysis systems
- Author
-
Childs, H, Ahern, SD, Ahrens, J, Bauer, AC, Bennett, J, Bethel, EW, Bremer, PT, Brugger, E, Cottam, J, Dorier, M, Dutta, S, Favre, JM, Fogal, T, Frey, S, Garth, C, Geveci, B, Godoy, WF, Hansen, CD, Harrison, C, Hentschel, B, Insley, J, Johnson, CR, Klasky, S, Knoll, A, Kress, J, Larsen, M, Lofstead, J, Ma, KL, Malakar, P, Meredith, J, Moreland, K, Navrátil, P, O’Leary, P, Parashar, M, Pascucci, V, Patchett, J, Peterka, T, Petruzza, S, Podhorszki, N, Pugmire, D, Rasquin, M, Rizzi, S, Rogers, DH, Sane, S, Sauer, F, Sisneros, R, Shen, HW, Usher, W, Vickery, R, Vishwanath, V, Wald, I, Wang, R, Weber, GH, Whitlock, B, Wolf, M, Yu, H, and Ziegeler, SB
- Subjects
In situ processing, scientific visualization, Distributed Computing - Abstract
The term “in situ processing” has evolved over the last decade to mean both a specific strategy for visualizing and analyzing data and an umbrella term for a processing paradigm. The resulting confusion makes it difficult for visualization and analysis scientists to communicate with each other and with their stakeholders. To address this problem, a group of over 50 experts convened with the goal of standardizing terminology. This paper summarizes their findings and proposes a new terminology for describing in situ systems. An important finding from this group was that in situ systems are best described via multiple, distinct axes: integration type, proximity, access, division of execution, operation controls, and output type. This paper discusses these axes, evaluates existing systems within the axes, and explores how currently used terms relate to the axes.
- Published
- 2020
3. Fuzzy Contour Trees: Alignment and Joint Layout of Multiple Contour Trees
- Author
-
Lohfink, AP, Wetzels, F, Lukasczyk, J, Weber, GH, and Garth, C
- Subjects
Artificial Intelligence and Image Processing, Software Engineering - Abstract
We describe a novel technique for the simultaneous visualization of multiple scalar fields, e.g. representing the members of an ensemble, based on their contour trees. Using tree alignments, a graph-theoretic concept similar to edit distance mappings, we identify commonalities across multiple contour trees and leverage these to obtain a layout that can represent all trees simultaneously in an easy-to-interpret, minimally-cluttered manner. We describe a heuristic algorithm to compute tree alignments for a given similarity metric, and give an algorithm to compute a joint layout of the resulting aligned contour trees. We apply our approach to the visualization of scalar field ensembles, discuss basic visualization and interaction possibilities, and demonstrate results on several analytic and real-world examples.
- Published
- 2020
4. State-based network similarity visualization
- Author
-
Murugesan, S, Bouchard, K, Brown, J, Kiran, M, Lurie, D, Hamann, B, and Weber, GH
- Subjects
Dynamic networks, visual analytics, cluster analysis, data visualization, time-series analysis, Networking and Information Technology R&D, Computation Theory and Mathematics, Data Format, Artificial Intelligence and Image Processing, Software Engineering - Abstract
We introduce an approach for the interactive visual analysis of weighted, dynamic networks. These networks arise in areas such as computational neuroscience, sociology, and biology. Network analysis remains challenging due to complex time-varying network behavior. For example, edges disappear/reappear, communities grow/vanish, or overall network topology changes. Our technique, TimeSum, detects the important topological changes in graph data to abstract the dynamic network and visualize one summary representation for each temporal phase, a state. We define a network state as a graph with similar topology over a specific time interval. To enable a holistic comparison of networks, we use a difference network to depict edge and community changes. We present case studies to demonstrate that our methods are effective and useful for extracting and exploring complex dynamic behavior of networks.
- Published
- 2020
5. Automated Labeling of Electron Microscopy Images Using Deep Learning
- Author
-
Weber, GH, Ophus, C, and Ramakrishnan, L
- Subjects
Bioengineering - Abstract
Searching for scientific data requires metadata providing a relevant context. Today, generating metadata is a time- and labor-intensive manual process that is often neglected, and important datasets are not accessible through search. We investigate the use of machine learning to generalize metadata from a subset of labeled data, thus increasing the availability of meaningful metadata for search. Specifically, we consider electron microscopy images collected at the National Center for Electron Microscopy (NCEM) at the Lawrence Berkeley National Laboratory (LBNL) and use deep learning to discern characteristics from a small subset of labeled images and transfer labels to the entire image corpus. Relatively small training set sizes and a minimum resolution of 512×512 pixels required by the application domain pose unique challenges. We overcome these challenges by using a simple yet powerful convolutional network architecture that limits the number of free parameters to lower the required amount of computational power and reduce the risk of overfitting. We achieve a classification accuracy of approximately 80% in discerning between images recorded in two operating modes of the electron microscope: transmission electron microscopy (TEM) and scanning transmission electron microscopy (STEM). We use transfer learning, i.e., re-using the pre-trained convolution layers from the TEM vs. STEM classification problem, to generalize labels and achieve an accuracy of approximately 70% despite current experiments being limited to small training data sets. We present these predictions as suggestions to domain scientists through a web-based tool to accelerate the labeling process with the goal of further validating our approach and improving the accuracy of automatically created labels.
- Published
- 2019
6. ScienceSearch: Enabling search through automatic metadata generation
- Author
-
Rodrigo, GP, Henderson, M, Weber, GH, Ophus, C, Antypas, K, and Ramakrishnan, L
- Abstract
Scientific facilities are increasingly generating and handling large amounts of data from experiments and simulations. Next-generation scientific discoveries rely on insights derived from data, especially across domain boundaries. Search capabilities are critical to enable scientists to discover datasets of interest. However, scientific datasets often lack the signals or metadata required for effective searches. Thus, we need formalized methods and systems to automatically annotate scientific datasets from the data and its surrounding context. Additionally, a search infrastructure needs to account for the scale and rate of application data volumes. In this paper, we present ScienceSearch, a system infrastructure that uses machine learning techniques to capture and learn the knowledge, context, and surrounding artifacts from data to generate metadata and enable search. Our current implementation is focused on a dataset from the National Center for Electron Microscopy (NCEM), an electron microscopy facility at Lawrence Berkeley National Laboratory sponsored by the Department of Energy which supports hundreds of users and stores millions of micrographs. We describe a) our search infrastructure and model, b) methods for generating metadata using machine learning techniques, and c) optimizations to improve search latency and deployment on an HPC system. We demonstrate that ScienceSearch is capable of producing valid metadata for NCEM's dataset and providing low-latency, good-quality search results over a scientific dataset.
- Published
- 2018
7. Hierarchical Correlation Clustering in Multiple 2D Scalar Fields
- Author
-
Liebmann, T, Weber, GH, and Scheuermann, G
- Subjects
Software Engineering, Artificial Intelligence and Image Processing - Abstract
Sets of multiple scalar fields can be used to model many types of variation in data, such as uncertainty in measurements and simulations or time-dependent behavior of scalar quantities. Many structural properties of such fields can be explained by dependencies between different points in the scalar field. Although these dependencies can be of arbitrary complexity, correlation, i.e., the linear dependency, already provides significant structural information. Existing methods for correlation analysis are usually limited to positive correlation, handle only local dependencies, or use combinatorial approximations to this continuous problem. We present a new approach for computing and visualizing correlated regions in sets of 2-dimensional scalar fields. This paper describes the following three main contributions: (i) An algorithm for hierarchical correlation clustering resulting in a dendrogram, (ii) a generalization of topological landscapes for dendrogram visualization, and (iii) a new method for incorporating negative correlation values in the clustering and visualization. All steps are designed to preserve the special properties of correlation coefficients. The results are visualized in two linked views, one showing the cluster hierarchy as 2D landscape and the other providing a spatial context in the scalar field's domain. Different coloring and texturing schemes coupled with interactive selection support an exploratory data analysis.
- Published
- 2018
8. Measuring the Error in Approximating the Sub-Level Set Topology of Sampled Scalar Data
- Author
-
Beketayev, K, Yeliussizov, D, Morozov, D, Weber, GH, and Hamann, B
- Subjects
Computation Theory and Mathematics, Artificial Intelligence and Image Processing - Abstract
This paper studies the influence of the definition of neighborhoods and methods used for creating point connectivity on topological analysis of scalar functions. It is assumed that a scalar function is known only at a finite set of points with associated function values. In order to utilize topological approaches to analyze the scalar-valued point set, it is necessary to choose point neighborhoods and, usually, point connectivity to meaningfully determine critical-point behavior for the point set. Two distances are used to measure the difference in topology when different point neighborhoods and means to define connectivity are used: (i) the bottleneck distance for persistence diagrams and (ii) the distance between merge trees. Usually, these distances define how different scalar functions are with respect to their topology. These measures, when properly adapted to point sets coupled with a definition of neighborhood and connectivity, make it possible to understand how topological characteristics depend on connectivity. Noise is another aspect considered. Five types of neighborhoods and connectivity are discussed: (i) the Delaunay triangulation; (ii) the relative neighborhood graph; (iii) the Gabriel graph; (iv) the k-nearest-neighbor (KNN) neighborhood; and (v) the Vietoris-Rips complex. It is discussed in detail how topological characterizations depend on the chosen connectivity.
- Published
- 2018
9. Web-based visual data exploration for improved radiological source detection
- Author
-
Weber, GH, Bandstra, MS, Chivers, DH, Elgammal, HH, Hendrix, V, Kua, J, Maltz, JS, Muriki, K, Ong, Y, Song, K, Quinlan, MJ, Ramakrishnan, L, and Quiter, BJ
- Subjects
visualization, databases, data storage and indexing, web-based system, data integration, Distributed Computing, Artificial Intelligence and Image Processing, Computer Software - Abstract
Radiation detection can provide a reliable means of detecting radiological material. Such capabilities can help to prevent nuclear and/or radiological attacks, but reliable detection in uncontrolled surroundings requires algorithms that account for environmental background radiation. The Berkeley Data Cloud (BDC) facilitates the development of such methods by providing a framework to capture, store, analyze, and share data sets. In the era of big data, both the size and variety of data make it difficult to explore and find data sets of interest and manage the data. Thus, in the context of big data, visualization is critical for checking data consistency and validity, identifying gaps in data coverage, searching for data relevant to an analyst's use cases, and choosing input parameters for analysis. Downloading the data and exploring it on an analyst's desktop using traditional tools are no longer feasible due to the size of the data. This paper describes the design and implementation of a visualization system that addresses the problems associated with data exploration within the context of the BDC. The visualization system is based on a JavaScript front end communicating via REST with a back end web server.
- Published
- 2017
10. Parallel peak pruning for scalable SMP contour tree computation
- Author
-
Carr, HA, Weber, GH, Sewell, CM, and Ahrens, JP
- Subjects
topological analysis, contour tree, merge tree, data parallel algorithms - Abstract
As data sets grow to exascale, automated data analysis and visualisation are increasingly important, to intermediate human understanding and to reduce demands on disk storage via in situ analysis. Trends in architecture of high performance computing systems necessitate analysis algorithms to make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial, and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. We report the first shared SMP algorithm for fully parallel contour tree computation, with formal guarantees of O(lg n lg t) parallel steps and O(n lg n) work, and implementations with up to 10x parallel speedup in OpenMP and up to 50x speedup in NVIDIA Thrust.
- Published
- 2017
11. Scientific workflows at DataWarp-speed: Accelerated data-intensive science using NERSC's burst buffer
- Author
-
Ovsyannikov, A, Romanus, M, Straalen, BV, Weber, GH, and Trebotich, D
- Abstract
Emerging exascale systems have the ability to accelerate the time-to-discovery for scientific workflows. However, as these workflows become more complex, the data they generate has grown at an unprecedented rate, making I/O a significant constraint. To address this problem, advanced memory hierarchies, such as burst buffers, have been proposed as intermediate layers between the compute nodes and the parallel file system. In this paper, we utilize the Cray DataWarp burst buffer coupled with in-transit processing mechanisms to demonstrate the advantages of advanced memory hierarchies for traditional coupled scientific workflows. We consider an in-transit workflow which couples simulation of subsurface flows with on-the-fly flow visualization. With respect to the proposed workflow, we study the performance of the Cray DataWarp burst buffer and provide a comparison with the Lustre parallel file system.
- Published
- 2017
12. Hierarchical spatio-temporal visual analysis of cluster evolution in electrocorticography data
- Author
-
Murugesan, S, Bouchard, K, Chang, E, Dougherty, M, Hamann, B, and Weber, GH
- Subjects
Linked Views, Neuroinformatics, Brain Imaging, Electrocorticography, Graph Visualization, Brain Disorders, Clinical Research, Epilepsy, Neurosciences, Neurodegenerative - Abstract
We present ECoG ClusterFlow, a novel interactive visual analysis tool for the exploration of high-resolution Electrocorticography (ECoG) data. Our system detects and visualizes dynamic high-level structures, such as communities, using the time-varying spatial connectivity network derived from the high-resolution ECoG data. ECoG ClusterFlow provides a multi-scale visualization of the spatio-temporal patterns underlying the time-varying communities using two views: 1) an overview summarizing the evolution of clusters over time and 2) a hierarchical glyph-based technique that uses data aggregation and small multiples techniques to visualize the propagation of clusters in their spatial domain. ECoG ClusterFlow makes it possible 1) to compare the spatio-temporal evolution patterns across various time intervals, 2) to compare the temporal information at varying levels of granularity, and 3) to investigate the evolution of spatial patterns without occluding the spatial context information. We present case studies done in collaboration with neuroscientists on our team for both simulated and real epileptic seizure data aimed at evaluating the effectiveness of our approach.
- Published
- 2016
13. Performance Analysis, Design Considerations, and Applications of Extreme-Scale in Situ Infrastructures
- Author
-
Ayachit, U, Bauer, A, Duque, EPN, Eisenhauer, G, Ferrier, N, Gu, J, Jansen, KE, Loring, B, Lukic, Z, Menon, S, Morozov, D, O'Leary, P, Ranjan, R, Rasquin, M, Stone, CP, Vishwanath, V, Weber, GH, Whitlock, B, Wolf, M, Wu, KJ, and Bethel, EW
- Abstract
A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how to best gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. This paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: Scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.
- Published
- 2016
14. Accelerating Science with the NERSC Burst Buffer Early User Program
- Author
-
Bhimji, W, Bard, D, Romanus, M, Paul, D, Ovsyannikov, A, Friesen, B, Bryson, M, Correa, J, Lockwood, GK, Tsulaia, V, Byna, S, Farrell, S, Gursoy, D, Daley, C, Beckner, V, Van Straalen, B, Trebotich, D, Tull, C, Weber, GH, Wright, NJ, Antypas, K, and Prabhat
- Published
- 2016
15. Distributed merge trees
- Author
-
Morozov, D and Weber, GH
- Subjects
Information and Computing Sciences, Applied Computing, topological data analysis, feature extraction, merge tree computation, parallelization, hybrid parallelization approaches, Software Engineering - Abstract
Improved simulations and sensors are producing datasets whose increasing complexity exhausts our ability to visualize and comprehend them directly. To cope with this problem, we can detect and extract significant features in the data and use them as the basis for subsequent analysis. Topological methods are valuable in this context because they provide robust and general feature definitions. As the growth of serial computational power has stalled, data analysis is becoming increasingly dependent on massively parallel machines. To satisfy the computational demand created by complex datasets, algorithms need to effectively utilize these computer architectures. The main strength of topological methods, their emphasis on global information, turns into an obstacle during parallelization. We present two approaches to alleviate this problem. We develop a distributed representation of the merge tree that avoids computing the global tree on a single processor and lets us parallelize subsequent queries. To account for the increasing number of cores per processor, we develop a new data structure that lets us take advantage of multiple shared-memory cores to parallelize the work on a single node. Finally, we present experiments that illustrate the strengths of our approach as well as help identify future challenges.
- Published
- 2013
16. Augmented Topological Descriptors of Pore Networks for Material Science.
- Author
-
Ushizima, D, Morozov, D, Weber, GH, Bianchi, AGC, Sethian, JA, and Bethel, EW
- Subjects
Reeb graph, persistent homology, topological data analysis, geometric algorithms, segmentation, microscopy, Software Engineering, Artificial Intelligence and Image Processing, Computation Theory and Mathematics - Abstract
One potential solution to reduce the concentration of carbon dioxide in the atmosphere is the geologic storage of captured CO2 in underground rock formations, also known as carbon sequestration. There is ongoing research to guarantee that this process is both efficient and safe. We describe tools that provide measurements of media porosity, and permeability estimates, including visualization of pore structures. Existing standard algorithms make limited use of geometric information in calculating permeability of complex microstructures. This quantity is important for the analysis of biomineralization, a subsurface process that can affect physical properties of porous media. This paper introduces geometric and topological descriptors that enhance the estimation of material permeability. Our analysis framework includes the processing of experimental data, segmentation, and feature extraction and making novel use of multiscale topological analysis to quantify maximum flow through porous networks. We illustrate our results using synchrotron-based X-ray computed microtomography of glass beads during biomineralization. We also benchmark the proposed algorithms using simulated data sets modeling jammed packed bead beds of a monodispersive material.
- Published
- 2012
17. Topology-guided tessellation of quadratic elements
- Author
-
Dillard, SE, Natarajan, V, Weber, GH, Pascucci, V, and Hamann, B
- Subjects
Topology, Reeb graphs, quadratic surfaces, Computation Theory and Mathematics, Artificial Intelligence and Image Processing - Abstract
Topology-based methods have been successfully used for the analysis and visualization of piecewise-linear functions defined on triangle meshes. This paper describes a mechanism for extending these methods to piecewise-quadratic functions defined on triangulations of surfaces. Each triangular patch is tessellated into monotone regions, so that existing algorithms for computing topological representations of piecewise-linear functions may be applied directly to the piecewise-quadratic function. In particular, the tessellation is used for computing the Reeb graph, a topological data structure that provides a succinct representation of level sets of the function.
- Published
- 2009
18. High performance multivariate visual data exploration for extremely large data
- Author
-
Rübel, O, Prabhat, Wu, K, Childs, H, Meredith, J, Geddes, CGR, Cormier-Michel, E, Ahern, S, Weber, GH, Messmer, P, Hagen, H, Hamann, B, and Bethel, EW
- Subjects
1.5 Resources and infrastructure (underpinning) - Abstract
One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system.
- Published
- 2008
19. Tessellation of quadratic elements
- Author
-
Dillard, SE, Natarajan, V, Weber, GH, Pascucci, V, and Hamann, B
- Subjects
Information and Computing Sciences, Artificial Intelligence and Image Processing - Abstract
Topology-based methods have been successfully used for the analysis and visualization of piecewise-linear functions defined on triangle meshes. This paper describes a mechanism for extending these methods to piecewise-quadratic functions defined on triangulations of surfaces. Each triangular patch is tessellated into monotone regions, so that existing algorithms for computing topological representations of piecewise-linear functions may be applied directly to piecewise-quadratic functions. In particular, the tessellation is used for computing the Reeb graph, which provides a succinct representation of level sets of the function.
- Published
- 2006
20. Parallel cell projection rendering of adaptive mesh refinement data
- Author
-
Weber, GH, Öhler, M, Kreylos, O, Shalf, JM, Bethel, EW, Hamann, B, and Scheuermann, G
- Subjects
volume rendering, adaptive mesh refinement, load balancing, multi-grid methods, parallel rendering, visualization, Bioengineering, Analytical Chemistry, Optical Physics, Electrical and Electronic Engineering, Mechanical Engineering - Abstract
Adaptive mesh refinement (AMR) is a technique used in numerical simulations to automatically refine (or de-refine) certain regions of the physical domain in a finite difference calculation. AMR data consists of nested hierarchies of data grids. As AMR visualization is still a relatively unexplored topic, our work is motivated by the need to perform efficient visualization of large AMR data sets. We present a software algorithm for parallel direct volume rendering of AMR data using a cell-projection technique on several different parallel platforms. Our algorithm can use one of several different distribution methods, and we present performance results for each of these alternative approaches. By partitioning an AMR data set into blocks of constant resolution and estimating rendering costs of individual blocks using an application specific benchmark, it is possible to achieve even load balancing.
- Published
- 2003
21. Preface: Message from the VIS paper chairs and guest editors
- Author
-
Chang, R, Dwyer, T, Fujishiro, I, Isenberg, P, Franconeri, S, Qu, H, Schreck, T, Weiskopf, D, and Weber, GH
- Subjects
Artificial Intelligence and Image Processing, Computation Theory and Mathematics, Software Engineering, Engineering - Published
- 2019
22. Distributed Hierarchical Contour Trees
- Author
-
Carr, HA, Rübel, O, and Weber, GH
- Published
- 2022
23. Computing and visualizing time-varying merge trees for high-dimensional data
- Author
-
Oesterling, P, Heine, C, Weber, GH, Morozov, D, and Scheuermann, G
- Abstract
We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree—a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
- Published
- 2017
24. Netostat: analyzing dynamic flow patterns in high-speed networks
- Author
-
Murugesan, S, Kiran, M, Hamann, B, and Weber, GH
- Abstract
Understanding flow traffic patterns in networks, such as the Internet or service provider networks, is crucial to improving their design and building them robustly. However, as networks grow and become more complex, it is increasingly cumbersome and challenging to study how the many flow patterns, sizes and the continually changing source-destination pairs in the network evolve with time. We present Netostat, a visualization-based network analysis tool that uses visual representation and a mathematics framework to study and capture flow patterns, using graph theoretical methods such as clustering, similarity and difference measures. Netostat generates an interactive graph of all traffic patterns in the network, to isolate key elements that can provide insights for traffic engineering. We present results for the U.S. and European research networks ESnet and GEANT, demonstrating network state changes and identifying major flow trends, potential points of failure, and bottlenecks.
- Published
- 2022
25. Preface
- Author
-
Chang, R, Dwyer, T, Fujishiro, I, Isenberg, P, Franconeri, S, Qu, H, Schreck, T, Weiskopf, D, and Weber, GH
- Subjects
Computation Theory and Mathematics, Software Engineering, Artificial Intelligence and Image Processing - Published
- 2019
26. Probiotics [LGG-BB12 or RC14-GR1] versus placebo as prophylaxis for urinary tract infection in persons with spinal cord injury [ProSCIUTTU]: a randomised controlled trial
- Author
-
Toh, SL, Lee, BB, Ryan, S, Simpson, JM, Clezy, K, Bossa, L, Rice, SA, Marial, O, Weber, GH, Kaur, J, Boswell-Ruys, CL, Goodall, S, Middleton, JW, Tuderhope, M, and Kotsiou, G
- Abstract
Study design: Randomised double-blind factorial-design placebo-controlled trial. Objective: Urinary tract infections (UTIs) are common in people with spinal cord injury (SCI). UTIs are increasingly difficult to treat due to emergence of multi-resistant organisms. Probiotics are efficacious in preventing UTIs in post-menopausal women. We aimed to determine whether probiotic therapy with Lactobacillus reuteri RC-14+Lactobacillus GR-1 (RC14-GR1) and/or Lactobacillus rhamnosus GG+Bifidobacterium BB-12 (LGG-BB12) are effective in preventing UTI in people with SCI. Setting: Spinal units in New South Wales, Australia with their rural affiliations. Methods: We recruited 207 eligible participants with SCI and stable neurogenic bladder management. They were randomised to one of four arms: RC14-GR1+LGG-BB12, RC14-GR1+placebo, LGG-BB12+placebo or double placebos for 6 months. Randomisation was stratified by bladder management type and inpatient or outpatient status. The primary outcome was time to occurrence of symptomatic UTI. Results: Analysis was based on intention to treat. Participants randomised to RC14-GR1 had a similar risk of UTI as those not on RC14-GR1 (HR 0.67; 95% CI: 0.39–1.18; P = 0.17) after allowing for pre-specified covariates. Participants randomised to LGG-BB12 also had a similar risk of UTI as those not on LGG-BB12 (HR 1.29; 95% CI: 0.74–2.25; P = 0.37). Multivariable post hoc survival analysis for RC14-GR1 only vs. the other three groups showed a potential protective effect (HR 0.46; 95% CI: 0.21–0.99; P = 0.03), but this result would need to be confirmed before clinical application. Conclusion: In this RCT, there was no effect of RC14-GR1 or LGG-BB12 in preventing UTI in people with SCI.
- Published
- 2019
27. Parallel Peak Pruning for Scalable SMP Contour Tree Computation
- Author
-
Carr, HA, Weber, GH, Sewell, CM, and Ahrens, JP
- Subjects
topological analysis, contour tree, data parallel algorithms, merge tree
- Abstract
© 2016 IEEE. As data sets grow to exascale, automated data analysis and visualisation are increasingly important, to intermediate human understanding and to reduce demands on disk storage via in situ analysis. Trends in architecture of high performance computing systems necessitate analysis algorithms to make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial, and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. We report the first shared SMP algorithm for fully parallel contour tree computation, with formal guarantees of O(lg n lg t) parallel steps and O(n lg n) work, and implementations with up to 10x parallel speed up in OpenMP and up to 50x speed up in NVIDIA Thrust.
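As an illustrative aside to the record above: the classical serial baseline that data-parallel peak pruning improves on builds the join (merge) tree with a sorted sweep and union-find. The sketch below is a minimal serial version of that textbook approach, not the paper's data-parallel algorithm; the function name and graph representation are hypothetical.

```python
def join_tree(values, edges):
    """Join tree of a scalar field given as per-vertex values on a graph.

    Sweeps vertices from high to low value; returns a dict mapping the peak
    of each terminated branch to the saddle vertex where it merges into a
    component with a higher peak (elder rule).
    """
    n = len(values)
    order = sorted(range(n), key=lambda v: values[v], reverse=True)
    rank = {v: i for i, v in enumerate(order)}
    adj = {v: [] for v in range(n)}
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)
    parent, peak, arcs = {}, {}, {}

    def find(x):
        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for v in order:
        parent[v], peak[v] = v, v
        for u in adj[v]:
            if rank[u] < rank[v]:        # neighbour already swept (higher value)
                a, b = find(u), find(v)
                if a == b:
                    continue
                if values[peak[a]] < values[peak[b]]:
                    a, b = b, a          # keep the higher peak alive in a
                if peak[b] != v:         # a genuine branch ends at saddle v
                    arcs[peak[b]] = v
                parent[b] = a
    return arcs
```

On the 1D field [0, 3, 1, 2, 0] with path edges, the branch of the lower peak (vertex 3, value 2) ends at the saddle vertex 2 (value 1), while the global maximum's branch survives to the root.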
- Published
- 2016
- Full Text
- View/download PDF
28. Postoperative pain and perioperative analgesic administration in dogs: practices, attitudes and beliefs of Queensland veterinarians
- Author
-
Weber, GH, primary, Morton, JM, additional, and Keates, H, additional
- Published
- 2012
- Full Text
- View/download PDF
29. Distributed Augmentation, Hypersweeps, and Branch Decomposition of Contour Trees for Scientific Exploration.
- Author
-
Li M, Carr H, Rubel O, Wang B, and Weber GH
- Abstract
Contour trees describe the topology of level sets in scalar fields and are widely used in topological data analysis and visualization. A main challenge of utilizing contour trees for large-scale scientific data is their computation at scale using high-performance computing. To address this challenge, recent work has introduced distributed hierarchical contour trees for distributed computation and storage of contour trees. However, effective use of these distributed structures in analysis and visualization requires subsequent computation of geometric properties and branch decomposition to support contour extraction and exploration. In this work, we introduce distributed algorithms for augmentation, hypersweeps, and branch decomposition that enable parallel computation of geometric properties, and support the use of distributed contour trees as query structures for scientific exploration. We evaluate the parallel performance of these algorithms and apply them to identify and extract important contours for scientific visualization.
- Published
- 2024
- Full Text
- View/download PDF
30. What AHRQ Learned While Working to Transform Primary Care.
- Author
-
Meyers D, Miller T, De La Mare J, Gerteis JS, Makulowich G, Weber GH, Zhan C, and Genevro J
- Subjects
- United States, Humans, United States Agency for Healthcare Research and Quality, Primary Health Care methods, Quality Improvement
- Abstract
Building on previous efforts to transform primary care, the Agency for Healthcare Research and Quality (AHRQ) launched EvidenceNOW: Advancing Heart Health in 2015. This 3-year initiative provided external quality improvement support to small and medium-size primary care practices to implement evidence-based cardiovascular care. Despite challenges, results from an independent national evaluation demonstrated that the EvidenceNOW model successfully boosted the capacity of primary care practices to improve quality of care, while helping to advance heart health. Reflecting on AHRQ's own learnings as the funder of this work, 3 key lessons emerged: (1) there will always be surprises that will require flexibility and real-time adaptation; (2) primary care transformation is about more than technology; and (3) it takes time and experience to improve care delivery and health outcomes. EvidenceNOW taught us that lasting practice transformation efforts need to be responsive to anticipated and unanticipated changes, relationship-oriented, and not tied to a specific disease or initiative. We believe these lessons argue for a national primary care extension service that provides ongoing support for practice transformation., (© 2024 Annals of Family Medicine, Inc.)
- Published
- 2024
- Full Text
- View/download PDF
31. ExTreeM: Scalable Augmented Merge Tree Computation via Extremum Graphs.
- Author
-
Lukasczyk J, Will M, Wetzels F, Weber GH, and Garth C
- Abstract
Over the last decade merge trees have been proven to support a plethora of visualization and analysis tasks since they effectively abstract complex datasets. This paper describes the ExTreeM-Algorithm: A scalable algorithm for the computation of merge trees via extremum graphs. The core idea of ExTreeM is to first derive the extremum graph G of an input scalar field f defined on a cell complex K, and subsequently compute the unaugmented merge tree of f on G instead of K; the two are equivalent. Any merge tree algorithm can be carried out significantly faster on G, since K in general contains substantially more cells than G. To further speed up computation, ExTreeM includes a tailored procedure to derive merge trees of extremum graphs. The computation of the fully augmented merge tree, i.e., a merge tree domain segmentation of K, can then be performed in an optional post-processing step. All steps of ExTreeM consist of procedures with high parallel efficiency, and we provide a formal proof of its correctness. Our experiments, performed on publicly available datasets, report a speedup of up to one order of magnitude over the state-of-the-art algorithms included in the TTK and VTK-m software libraries, while also requiring significantly less memory and exhibiting excellent scaling behavior.
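As a toy illustration of the structure the record above starts from: the vertices of an extremum graph include the local maxima of f. A minimal, purely illustrative way to locate them on a graph, ignoring ties and omitting the saddle connections that complete the actual extremum graph (function name and representation are hypothetical):

```python
def local_maxima(values, edges):
    """Indices of vertices strictly higher than all their graph neighbours."""
    adj = {v: [] for v in range(len(values))}
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)
    return [v for v in range(len(values))
            if all(values[v] > values[u] for u in adj[v])]
```

Local minima are found analogously by flipping the comparison; the extremum graph then connects these extrema through the saddles of f.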
- Published
- 2024
- Full Text
- View/download PDF
32. Optimization and Augmentation for Data Parallel Contour Trees.
- Author
-
Carr HA, Rubel O, Weber GH, and Ahrens JP
- Subjects
- Algorithms, Computer Graphics
- Abstract
Contour trees are used for topological data analysis in scientific visualization. While originally computed with serial algorithms, recent work has introduced a vector-parallel algorithm. However, this algorithm is relatively slow for fully augmented contour trees which are needed for many practical data analysis tasks. We therefore introduce a representation called the hyperstructure that enables efficient searches through the contour tree and use it to construct a fully augmented contour tree in data parallel, with performance on average 6 times faster than the state-of-the-art parallel algorithm in the TTK topological toolkit.
- Published
- 2022
- Full Text
- View/download PDF
33. Deep Learning Segmentation of Complex Features in Atomic-Resolution Phase-Contrast Transmission Electron Microscopy Images.
- Author
-
Sadre R, Ophus C, Butko A, and Weber GH
- Abstract
Phase-contrast transmission electron microscopy (TEM) is a powerful tool for imaging the local atomic structure of materials. TEM has been used heavily in studies of defect structures of two-dimensional materials such as monolayer graphene due to its high dose efficiency. However, phase-contrast imaging can produce complex nonlinear contrast, even for weakly scattering samples. It is, therefore, difficult to develop fully automated analysis routines for phase-contrast TEM studies using conventional image processing tools. For automated analysis of large sample regions of graphene, one of the key problems is segmentation between the structure of interest and unwanted structures such as surface contaminant layers. In this study, we compare the performance of a conventional Bragg filtering method with a deep learning routine based on the U-Net architecture. We show that the deep learning method is more general, simpler to apply in practice, and produces more accurate and robust results than the conventional algorithm. We provide easily adaptable source code for all results in this paper and discuss potential applications for deep learning in fully automated TEM image analysis.
- Published
- 2021
- Full Text
- View/download PDF
34. Scalable Contour Tree Computation by Data Parallel Peak Pruning.
- Author
-
Carr HA, Weber GH, Sewell CM, Rubel O, Fasel P, and Ahrens JP
- Abstract
As data sets grow to exascale, automated data analysis and visualization are increasingly important, to intermediate human understanding and to reduce demands on disk storage via in situ analysis. Trends in architecture of high performance computing systems necessitate analysis algorithms to make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial, and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. We report the first shared SMP algorithm for fully parallel contour tree computation, with formal guarantees of O(lg V lg t) parallel steps and O(V lg V) work for data with V samples and t contour tree supernodes, and implementations with more than 30× parallel speed up on both CPU using TBB and GPU using Thrust and up to 70× speed up compared to the serial sweep and merge algorithm.
- Published
- 2021
- Full Text
- View/download PDF
35. Dynamic Nested Tracking Graphs.
- Author
-
Lukasczyk J, Garth C, Weber GH, Biedert T, Maciejewski R, and Leitte H
- Abstract
This work describes an approach for the interactive visual analysis of large-scale simulations, where numerous superlevel set components and their evolution are of primary interest. The approach first derives, at simulation runtime, a specialized Cinema database that consists of images of component groups, and topological abstractions. This database is processed by a novel graph operation-based nested tracking graph algorithm (GO-NTG) that dynamically computes NTGs for component groups based on size, overlap, persistence, and level thresholds. The resulting NTGs are in turn used in a feature-centered visual analytics framework to query specific database elements and update feature parameters, facilitating flexible post hoc analysis.
- Published
- 2020
- Full Text
- View/download PDF
36. Probiotics [LGG-BB12 or RC14-GR1] versus placebo as prophylaxis for urinary tract infection in persons with spinal cord injury [ProSCIUTTU]: a randomised controlled trial.
- Author
-
Toh SL, Lee BB, Ryan S, Simpson JM, Clezy K, Bossa L, Rice SA, Marial O, Weber GH, Kaur J, Boswell-Ruys CL, Goodall S, Middleton JW, Tuderhope M, and Kotsiou G
- Subjects
- Adult, Aged, Aged, 80 and over, Double-Blind Method, Female, Humans, Male, Middle Aged, Young Adult, Probiotics, Spinal Cord Injuries complications, Urinary Tract Infections etiology, Urinary Tract Infections prevention & control
- Abstract
Study Design: Randomised double-blind factorial-design placebo-controlled trial., Objective: Urinary tract infections (UTIs) are common in people with spinal cord injury (SCI). UTIs are increasingly difficult to treat due to emergence of multi-resistant organisms. Probiotics are efficacious in preventing UTIs in post-menopausal women. We aimed to determine whether probiotic therapy with Lactobacillus reuteri RC-14+Lactobacillus GR-1 (RC14-GR1) and/or Lactobacillus rhamnosus GG+Bifidobacterium BB-12 (LGG-BB12) is effective in preventing UTI in people with SCI., Setting: Spinal units in New South Wales, Australia with their rural affiliations., Methods: We recruited 207 eligible participants with SCI and stable neurogenic bladder management. They were randomised to one of four arms: RC14-GR1+LGG-BB12, RC14-GR1+placebo, LGG-BB12+placebo or double placebos for 6 months. Randomisation was stratified by bladder management type and inpatient or outpatient status. The primary outcome was time to occurrence of symptomatic UTI., Results: Analysis was based on intention to treat. Participants randomised to RC14-GR1 had a similar risk of UTI as those not on RC14-GR1 (HR 0.67; 95% CI: 0.39-1.18; P = 0.17) after allowing for pre-specified covariates. Participants randomised to LGG-BB12 also had a similar risk of UTI as those not on LGG-BB12 (HR 1.29; 95% CI: 0.74-2.25; P = 0.37). Multivariable post hoc survival analysis for RC14-GR1 only vs. the other three groups showed a potential protective effect (HR 0.46; 95% CI: 0.21-0.99; P = 0.03), but this result would need to be confirmed before clinical application., Conclusion: In this RCT, there was no effect of RC14-GR1 or LGG-BB12 in preventing UTI in people with SCI.
- Published
- 2019
- Full Text
- View/download PDF
37. Brain Modulyzer: Interactive Visual Analysis of Functional Brain Connectivity.
- Author
-
Murugesan S, Bouchard K, Brown JA, Hamann B, Seeley WW, Trujillo A, and Weber GH
- Abstract
We present Brain Modulyzer, an interactive visual exploration tool for functional magnetic resonance imaging (fMRI) brain scans, aimed at analyzing the correlation between different brain regions when resting or when performing mental tasks. Brain Modulyzer combines multiple coordinated views, such as heat maps, node-link diagrams, and anatomical views, using brushing and linking to provide an anatomical context for brain connectivity data. Integrating methods from graph theory and analysis, e.g., community detection and derived graph measures, makes it possible to explore the modular and hierarchical organization of functional brain networks. Providing immediate feedback by displaying analysis results instantaneously while changing parameters gives neuroscientists a powerful means to comprehend complex brain structure more effectively and efficiently and supports forming hypotheses that can then be validated via statistical analysis. To demonstrate the utility of our tool, we present two case studies, exploring progressive supranuclear palsy, as well as memory encoding and retrieval.
- Published
- 2017
- Full Text
- View/download PDF
38. Multi-scale visual analysis of time-varying electrocorticography data via clustering of brain regions.
- Author
-
Murugesan S, Bouchard K, Chang E, Dougherty M, Hamann B, and Weber GH
- Subjects
- Algorithms, Cluster Analysis, Epilepsy physiopathology, Humans, Software, Brain diagnostic imaging, Brain physiopathology, Computational Biology methods, Electrocorticography methods
- Abstract
Background: There exists a need for effective and easy-to-use software tools supporting the analysis of complex Electrocorticography (ECoG) data. Understanding how epileptic seizures develop or identifying diagnostic indicators for neurological diseases require the in-depth analysis of neural activity data from ECoG. Such data is multi-scale and is of high spatio-temporal resolution. Comprehensive analysis of this data should be supported by interactive visual analysis methods that allow a scientist to understand functional patterns at varying levels of granularity and comprehend its time-varying behavior., Results: We introduce a novel multi-scale visual analysis system, ECoG ClusterFlow, for the detailed exploration of ECoG data. Our system detects and visualizes dynamic high-level structures, such as communities, derived from the time-varying connectivity network. The system supports two major views: 1) an overview summarizing the evolution of clusters over time and 2) an electrode view using hierarchical glyph-based design to visualize the propagation of clusters in their spatial, anatomical context. We present case studies that were performed in collaboration with neuroscientists and neurosurgeons using simulated and recorded epileptic seizure data to demonstrate our system's effectiveness., Conclusion: ECoG ClusterFlow supports the comparison of spatio-temporal patterns for specific time intervals and allows a user to utilize various clustering algorithms. Neuroscientists can identify the site of seizure genesis and its spatial progression during the various stages of a seizure. Our system serves as a fast and powerful means for the generation of preliminary hypotheses that can be used as a basis for subsequent application of rigorous statistical methods, with the ultimate goal being the clinical treatment of epileptogenic zones.
- Published
- 2017
- Full Text
- View/download PDF
39. Apply or Die: On the Role and Assessment of Application Papers in Visualization.
- Author
-
Weber GH, Carpendale S, Ebert D, Fisher B, Hagen H, Shneiderman B, and Ynnerman A
- Abstract
Application-oriented papers provide an important way to invigorate and cross-pollinate the visualization field, but the exact criteria for judging an application paper's merit remain an open question. This article builds on a panel at the 2016 IEEE Visualization Conference entitled "Application Papers: What Are They, and How Should They Be Evaluated?" that sought to gain a better understanding of prevalent views in the visualization community. This article surveys current trends that favor application papers, reviews the benefits and contributions of this paper type, and discusses their assessment in the review process. It concludes with recommendations to ensure that the visualization community is more inclusive to application papers.
- Published
- 2017
- Full Text
- View/download PDF
40. Visualizing nD point clouds as topological landscape profiles to guide local data analysis.
- Author
-
Oesterling P, Heine C, Weber GH, and Scheuermann G
- Subjects
- Image Enhancement methods, Reproducibility of Results, Sensitivity and Specificity, Signal Processing, Computer-Assisted, Algorithms, Computer Graphics, Image Interpretation, Computer-Assisted methods, Imaging, Three-Dimensional methods, Information Storage and Retrieval methods, Pattern Recognition, Automated methods, User-Computer Interface
- Abstract
Analyzing high-dimensional point clouds is a classical challenge in visual analytics. Traditional techniques, such as projections or axis-based techniques, suffer from projection artifacts, occlusion, and visual complexity. We propose to split data analysis into two parts to address these shortcomings. First, a structural overview phase abstracts data by its density distribution. This phase performs topological analysis to support accurate and nonoverlapping presentation of the high-dimensional cluster structure as a topological landscape profile. Utilizing a landscape metaphor, it presents clusters and their nesting as hills whose height, width, and shape reflect cluster coherence, size, and stability, respectively. A second local analysis phase utilizes this global structural knowledge to select individual clusters or point sets for further, localized data analysis. Focusing on structural entities significantly reduces visual clutter in established geometric visualizations and permits a clearer, more thorough data analysis. This analysis complements the global topological perspective and enables the user to study subspaces or geometric properties, such as shape.
- Published
- 2013
- Full Text
- View/download PDF
41. Extreme scaling of production visualization software on diverse architectures.
- Author
-
Childs H, Pugmire D, Ahern S, Whitlock B, Howison M, Weber GH, and Bethel EW
- Abstract
This article presents the results of experiments studying how the pure-parallelism paradigm scales to massive data sets, including 16,000 or more cores on trillion-cell meshes, the largest data sets published to date in the visualization literature. The findings on scaling characteristics and bottlenecks contribute to understanding how pure parallelism will perform in the future.
- Published
- 2010
- Full Text
- View/download PDF
42. Coupling visualization and data analysis for knowledge discovery from multi-dimensional scientific data.
- Author
-
Rübel O, Ahern S, Bethel EW, Biggin MD, Childs H, Cormier-Michel E, Depace A, Eisen MB, Fowlkes CC, Geddes CG, Hagen H, Hamann B, Huang MY, Keränen SV, Knowles DW, Hendriks CL, Malik J, Meredith J, Messmer P, Prabhat, Ushizima D, Weber GH, and Wu K
- Abstract
Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management, supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.
- Published
- 2010
- Full Text
- View/download PDF
43. Analyzing and tracking burning structures in lean premixed hydrogen flames.
- Author
-
Bremer PT, Weber GH, Pascucci V, Day M, and Bell JB
- Subjects
- Computer Simulation, Imaging, Three-Dimensional methods, Models, Chemical, Computer Graphics, Fires, Hot Temperature, Hydrogen chemistry, Information Storage and Retrieval methods, Rheology methods, User-Computer Interface
- Abstract
This paper presents topology-based methods to robustly extract, analyze, and track features defined as subsets of isosurfaces. First, we demonstrate how features identified by thresholding isosurfaces can be defined in terms of the Morse complex. Second, we present a specialized hierarchy that encodes the feature segmentation independent of the threshold while still providing a flexible multiresolution representation. Third, for a given parameter selection, we create detailed tracking graphs representing the complete evolution of all features in a combustion simulation over several hundred time steps. Finally, we discuss a user interface that correlates the tracking information with interactive rendering of the segmented isosurfaces enabling an in-depth analysis of the temporal behavior. We demonstrate our approach by analyzing three numerical simulations of lean hydrogen flames subject to different levels of turbulence. Due to their unstable nature, lean flames burn in cells separated by locally extinguished regions. The number, area, and evolution over time of these cells provide important insights into the impact of turbulence on the combustion process. Utilizing the hierarchy, we can perform an extensive parameter study without reprocessing the data for each set of parameters. The resulting statistics enable scientists to select appropriate parameters and provide insight into the sensitivity of the results with respect to the choice of parameters. Our method allows us, for the first time, to quantitatively correlate the turbulence of the burning process with the distribution of burning regions, properly segmented and selected. In particular, our analysis shows that, counterintuitively, stronger turbulence leads to larger cell structures, which burn more intensely than expected. This behavior suggests that flames could be stabilized under much leaner conditions than previously anticipated.
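A toy version of the feature definition used in the record above, features as connected components of grid points exceeding an isovalue threshold, can be sketched with flood fill on a 2D grid. This is illustrative only; the paper defines, segments, and tracks these features via the Morse complex and a persistence-based hierarchy, not via this flat labeling, and the function name below is hypothetical.

```python
from collections import deque

def extract_cells(field, threshold):
    """Label 4-connected components of entries > threshold in a 2D list.

    Returns a list of components, each a set of (row, col) positions.
    """
    rows, cols = len(field), len(field[0])
    seen = set()
    cells = []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or field[r][c] <= threshold:
                continue
            # breadth-first flood fill from an unvisited super-threshold point
            comp, queue = set(), deque([(r, c)])
            seen.add((r, c))
            while queue:
                y, x = queue.popleft()
                comp.add((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and (ny, nx) not in seen
                            and field[ny][nx] > threshold):
                        seen.add((ny, nx))
                        queue.append((ny, nx))
            cells.append(comp)
    return cells
```

Counting and measuring the resulting components over time corresponds, in this simplified picture, to the cell statistics the paper tracks; the hierarchy described in the abstract avoids recomputing such a segmentation for every threshold.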
- Published
- 2010
- Full Text
- View/download PDF
44. Integrating data clustering and visualization for the analysis of 3D gene expression data.
- Author
-
Rübel O, Weber GH, Huang MY, Bethel EW, Biggin MD, Fowlkes CC, Luengo Hendriks CL, Keränen SV, Eisen MB, Knowles DW, Malik J, Hagen H, and Hamann B
- Subjects
- Computer Graphics, Computer Simulation, Systems Integration, Chromosome Mapping methods, Database Management Systems, Databases, Genetic, Gene Expression Profiling methods, Models, Genetic, Multigene Family genetics, User-Computer Interface
- Abstract
The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex data sets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss 1) the integration of data clustering and visualization into one framework, 2) the application of data clustering to 3D gene expression data, 3) the evaluation of the number of clusters k in the context of 3D gene expression clustering, and 4) the improvement of overall analysis quality via dedicated postprocessing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.
- Published
- 2010
- Full Text
- View/download PDF
45. Visual exploration of three-dimensional gene expression using physical views and linked abstract views.
- Author
-
Weber GH, Rübel O, Huang MY, DePace AH, Fowlkes CC, Keränen SV, Luengo Hendriks CL, Hagen H, Knowles DW, Malik J, Biggin MD, and Hamann B
- Subjects
- Animals, Computer Simulation, Drosophila Proteins genetics, Drosophila Proteins metabolism, Embryo, Nonmammalian cytology, Embryo, Nonmammalian metabolism, Fushi Tarazu Transcription Factors genetics, Fushi Tarazu Transcription Factors metabolism, Gene Expression Regulation, Genome, Insect, Homeodomain Proteins genetics, Homeodomain Proteins metabolism, Models, Genetic, Models, Statistical, Software, Transcription Factors genetics, Transcription Factors metabolism, User-Computer Interface, Databases, Genetic, Drosophila melanogaster embryology, Gene Expression Profiling, Gene Expression Regulation, Developmental, Gene Regulatory Networks, Imaging, Three-Dimensional methods
- Abstract
During animal development, complex patterns of gene expression provide positional information within the embryo. To better understand the underlying gene regulatory networks, the Berkeley Drosophila Transcription Network Project (BDTNP) has developed methods that support quantitative computational analysis of three-dimensional (3D) gene expression in early Drosophila embryos at cellular resolution. We introduce PointCloudXplore (PCX), an interactive visualization tool that supports visual exploration of relationships between different genes' expression using a combination of established visualization techniques. Two aspects of gene expression are of particular interest: 1) gene expression patterns defined by the spatial locations of cells expressing a gene and 2) relationships between the expression levels of multiple genes. PCX provides users with two corresponding classes of data views: 1) Physical Views based on the spatial relationships of cells in the embryo and 2) Abstract Views that discard spatial information and plot expression levels of multiple genes with respect to each other. Cell Selectors highlight data associated with subsets of embryo cells within a View. Using linking, these selected cells can be viewed in multiple representations. We describe PCX as a 3D gene expression visualization tool and provide examples of how it has been used by BDTNP biologists to generate new hypotheses.
- Published
- 2009
- Full Text
- View/download PDF
46. A quantitative spatiotemporal atlas of gene expression in the Drosophila blastoderm.
- Author
-
Fowlkes CC, Hendriks CL, Keränen SV, Weber GH, Rübel O, Huang MY, Chatoor S, DePace AH, Simirenko L, Henriquez C, Beaton A, Weiszmann R, Celniker S, Hamann B, Knowles DW, Biggin MD, Eisen MB, and Malik J
- Subjects
- Animals, Blastoderm, Drosophila melanogaster metabolism, Embryo, Nonmammalian metabolism, Gene Expression Regulation, Developmental, Genes, Insect, Drosophila melanogaster genetics, Gene Regulatory Networks, Models, Genetic
- Abstract
To fully understand animal transcription networks, it is essential to accurately measure the spatial and temporal expression patterns of transcription factors and their targets. We describe a registration technique that takes image-based data from hundreds of Drosophila blastoderm embryos, each costained for a reference gene and one of a set of genes of interest, and builds a model VirtualEmbryo. This model captures in a common framework the average expression patterns for many genes in spite of significant variation in morphology and expression between individual embryos. We establish the method's accuracy by showing that relationships between a pair of genes' expression inferred from the model are nearly identical to those measured in embryos costained for the pair. We present a VirtualEmbryo containing data for 95 genes at six time cohorts. We show that known gene-regulatory interactions can be automatically recovered from this data set and predict hundreds of new interactions.
- Published
- 2008
- Full Text
- View/download PDF
47. Interactive processing and visualization of image data for biomedical and life science applications.
- Author
-
Staadt OG, Natarajan V, Weber GH, Wiley DF, and Hamann B
- Subjects
- Algorithms, Eye Diseases pathology, Gene Expression, Humans, Protein Conformation, Software Design, User-Computer Interface, Image Processing, Computer-Assisted methods, Imaging, Three-Dimensional methods, Tomography, Optical Coherence
- Abstract
Background: Applications in biomedical science and life science produce large data sets using increasingly powerful imaging devices and computer simulations. It is becoming increasingly difficult for scientists to explore and analyze these data using traditional tools. Interactive data processing and visualization tools can support scientists to overcome these limitations., Results: We show that new data processing tools and visualization systems can be used successfully in biomedical and life science applications. We present an adaptive high-resolution display system suitable for biomedical image data, algorithms for analyzing and visualizing protein surfaces and retinal optical coherence tomography data, and visualization tools for 3D gene expression data., Conclusion: We demonstrated that interactive processing and visualization methods and systems can support scientists in a variety of biomedical and life science application areas concerned with massive data analysis.
- Published
- 2007
- Full Text
- View/download PDF
48. Topology-controlled volume rendering.
- Author
-
Weber GH, Dillard SE, Carr H, Pascucci V, and Hamann B
- Subjects
- Algorithms, Computer Graphics, Image Enhancement methods, Image Interpretation, Computer-Assisted methods, Imaging, Three-Dimensional methods
- Abstract
Topology provides a foundation for the development of mathematically sound tools for processing and exploration of scalar fields. Existing topology-based methods can be used to identify interesting features in volumetric data sets, to find seed sets for accelerated isosurface extraction, or to treat individual connected components as distinct entities for isosurfacing or interval volume rendering. We describe a framework for direct volume rendering based on segmenting a volume into regions of equivalent contour topology and applying separate transfer functions to each region. Each region corresponds to a branch of a hierarchical contour tree decomposition, and a separate transfer function can be defined for it. The novel contributions of our work are 1) a volume rendering framework and interface where a unique transfer function can be assigned to each subvolume corresponding to a branch of the contour tree, 2) a runtime method for adjusting data values to reflect contour tree simplifications, 3) an efficient way of mapping a spatial location into the contour tree to determine the applicable transfer function, and 4) an algorithm for hardware-accelerated direct volume rendering that visualizes the contour tree-based segmentation at interactive frame rates using graphics processing units (GPUs) that support loops and conditional branches in fragment programs.
- Published
- 2007
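The central idea in the abstract above, assigning a distinct transfer function to each contour-tree branch and resolving a sample's color by first locating its branch, can be sketched in a few lines. This is a minimal illustrative sketch with hypothetical data structures (`Branch`, `classify`); the paper's actual system performs this lookup in GPU fragment programs over a hierarchical, simplifiable contour tree.

```python
# Sketch: per-branch transfer functions for contour-tree-segmented volumes.
# Data structures here are hypothetical simplifications, not the paper's API.
from dataclasses import dataclass
from typing import Callable, List, Set, Tuple

RGBA = Tuple[float, float, float, float]

@dataclass
class Branch:
    """One branch of a (simplified) contour tree: a scalar range plus the
    voxels segmented into this branch, with its own transfer function."""
    lo: float
    hi: float
    voxels: Set[int]
    transfer: Callable[[float], RGBA]

def classify(voxel: int, value: float, branches: List[Branch]) -> RGBA:
    """Map a spatial sample to its contour-tree branch and apply that
    branch's transfer function; unmatched samples are fully transparent."""
    for b in branches:
        if voxel in b.voxels and b.lo <= value <= b.hi:
            return b.transfer(value)
    return (0.0, 0.0, 0.0, 0.0)

# Two toy branches with distinct transfer functions.
inner = Branch(0.5, 1.0, {7, 8}, lambda v: (1.0, 0.0, 0.0, v))       # red, opacity ~ value
outer = Branch(0.0, 0.5, {1, 2, 3}, lambda v: (0.0, 0.0, 1.0, 0.1))  # faint, constant blue

print(classify(8, 0.75, [inner, outer]))  # red with alpha 0.75
print(classify(2, 0.25, [inner, outer]))  # faint blue
```

A real implementation would replace the linear scan with the paper's spatial-location-to-branch mapping so the lookup stays interactive during rendering.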
49. Three-dimensional morphology and gene expression in the Drosophila blastoderm at cellular resolution I: data acquisition pipeline.
- Author
-
Luengo Hendriks CL, Keränen SV, Fowlkes CC, Simirenko L, Weber GH, DePace AH, Henriquez C, Kaszuba DW, Hamann B, Eisen MB, Malik J, Sudar D, Biggin MD, and Knowles DW
- Subjects
- Animals, Base Sequence, DNA Primers, Drosophila melanogaster embryology, Fluorescent Dyes, RNA, Messenger genetics, Blastoderm cytology, Drosophila melanogaster genetics, Gene Expression
- Abstract
Background: To model and thoroughly understand animal transcription networks, it is essential to derive accurate spatial and temporal descriptions of developing gene expression patterns with cellular resolution.
Results: Here we describe a suite of methods that provide the first quantitative three-dimensional description of gene expression and morphology at cellular resolution in whole embryos. A database containing information derived from 1,282 embryos is released that describes the mRNA expression of 22 genes at multiple time points in the Drosophila blastoderm. We demonstrate that our methods are sufficiently accurate to detect previously undescribed features of morphology and gene expression. The cellular blastoderm is shown to have an intricate morphology of nuclear density patterns and apical/basal displacements that correlate with later well-known morphological features. Pair rule gene expression stripes, generally considered to specify patterning only along the anterior/posterior body axis, are shown to have complex changes in stripe location, stripe curvature, and expression level along the dorsal/ventral axis. Pair rule genes are also found not to always maintain the same register with respect to each other.
Conclusion: The application of these quantitative methods to other developmental systems will likely reveal many other previously unknown features and provide a more rigorous understanding of developmental regulatory networks.
- Published
- 2006
50. Control of Escherichia coli O157:H7 with sodium metasilicate.
- Author
-
Weber GH, O'Brien JK, and Bender FG
- Subjects
- Colony Count, Microbial, Hydrogen-Ion Concentration, Lactic Acid pharmacology, Phosphates pharmacology, Temperature, Time Factors, Water Microbiology, Disinfectants pharmacology, Escherichia coli O157 drug effects, Silicates pharmacology
- Abstract
Three intervention strategies (trisodium phosphate, lactic acid, and sodium metasilicate) were examined for their in vitro antimicrobial activities in water at room temperature against a three-strain cocktail of Escherichia coli O157:H7 and a three-strain cocktail of "generic" E. coli. Both initial inhibition and recovery of injured cells were monitored. When 3.0% (wt/wt) lactic acid, pH 2.4, was inoculated with E. coli O157:H7 (approximately 6 log CFU/ml), viable microorganisms were recovered after a 20-min exposure to the acid. After 20 min in 1.0% (wt/wt) trisodium phosphate, pH 12.0, no viable E. coli O157:H7 microorganisms were detected. Exposure of E. coli O157:H7 to sodium metasilicate (5 to 10 s) at concentrations as low as 0.6%, pH 12.1, resulted in 100% inhibition with no recoverable E. coli O157:H7. No difference in inhibition profiles was detected between the E. coli O157:H7 and generic strains, suggesting that nonpathogenic strains may be used for in-plant sodium metasilicate studies.
- Published
- 2004