1,094 results for "Hadwiger M"
Search Results
2. Lead-Time Corrected Effect on Breast Cancer Survival in Germany by Mode of Detection.
- Author
-
Schumann L, Hadwiger M, Eisemann N, and Katalinic A
- Abstract
(1) Background: Screen-detected breast cancer patients tend to have better survival than patients diagnosed with symptomatic cancer. The main driver of improved survival in screen-detected cancer is detection at an earlier stage. An important bias is introduced by lead time, i.e., the time span by which the diagnosis has been advanced by screening. We examine whether there is a remaining survival difference that could be attributable to mode of detection, for example, because of higher quality of care. (2) Methods: Women with a breast cancer (BC) diagnosis in 2000-2022 were included from a population-based cancer registry in Schleswig-Holstein, Germany, which also registers the mode of cancer detection. Mammography screening was available from 2005 onwards. We compared the survival for BC detected by screening with symptomatic BC detection using Kaplan-Meier, unadjusted Cox regressions, and Cox regressions adjusted for age, grading, and UICC stage. Correction for lead time bias was carried out by assuming an exponential distribution of the period during which the tumor is asymptomatic but screen-detectable (sojourn time). We used a common estimate and two recently published estimates of sojourn times. (3) Results: The analysis included 32,169 women. Survival for symptomatic BC was lower than for screen-detected BC (hazard ratio (HR): 0.23, 95% confidence interval (CI): 0.21-0.25). Adjustment for prognostic factors and lead time bias with the commonly used sojourn time resulted in an HR of 0.84 (CI: 0.75-0.94). Using different sojourn times resulted in an HR of 0.73 to 0.90. (4) Conclusions: Survival for symptomatic BC was only one quarter of that for screen-detected tumors, which is an obviously biased comparison. After adjustment for lead-time bias and prognostic variables, including UICC stage, survival was 27% to 10% better for screen-detected BC, which might be attributed to BC screening. Although this result fits quite well with published results for other countries with BC screening, further sources of residual confounding (e.g., self-selection) cannot be ruled out.
- Published
- 2024
- Full Text
- View/download PDF
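The lead-time correction described in entry 2 above can be illustrated with a small, self-contained sketch. This is not the authors' code: the 4-year mean sojourn time, the toy data, and all variable names are assumptions, and the correction here simply subtracts an expected lead time from screen-detected survival times before refitting a Cox model, whereas the published analysis derives the adjustment from the exponential sojourn-time model.

```python
# Illustrative sketch only (toy data, assumed 4-year mean sojourn time):
# subtract an expected lead time from screen-detected survival times and
# re-estimate the hazard ratio for mode of detection with a Cox model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 2000
screen = rng.integers(0, 2, n)                        # 1 = screen-detected
true_lead = rng.exponential(scale=4.0, size=n) * screen
observed_time = rng.exponential(scale=8.0, size=n) + true_lead
event = (rng.uniform(size=n) < 0.7).astype(int)       # toy censoring indicator

MEAN_SOJOURN = 4.0                                    # assumed sojourn-time mean (years)
corrected_time = np.where(screen == 1,
                          np.maximum(observed_time - MEAN_SOJOURN, 0.01),
                          observed_time)

df = pd.DataFrame({"time": corrected_time, "event": event, "screen": screen})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)"]])             # hazard ratio for screen detection
```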
3. Residency Octree: A Hybrid Approach for Scalable Web-Based Multi-Volume Rendering.
- Author
-
Herzberger L, Hadwiger M, Kruger R, Sorger P, Pfister H, Groller E, and Beyer J
- Abstract
We present a hybrid multi-volume rendering approach based on a novel Residency Octree that combines the advantages of out-of-core volume rendering using page tables with those of standard octrees. Octree approaches work by performing hierarchical tree traversal. However, in octree volume rendering, tree traversal and the selection of data resolution are intrinsically coupled. This makes fine-grained empty-space skipping costly. Page tables, on the other hand, allow access to any cached brick from any resolution. However, they do not offer a clear and efficient strategy for substituting missing high-resolution data with lower-resolution data. We enable flexible mixed-resolution out-of-core multi-volume rendering by decoupling the cache residency of multi-resolution data from a resolution-independent spatial subdivision determined by the tree. Instead of one-to-one node-to-brick correspondences, each residency octree node is mapped to a set of bricks from different resolution levels. This makes it possible to efficiently and adaptively choose and mix resolutions, adapt sampling rates, and compensate for cache misses. At the same time, residency octrees support fine-grained empty-space skipping, independent of the data subdivision used for caching. Finally, to facilitate collaboration and outreach, and to eliminate local data storage, our implementation is a web-based, pure client-side renderer using WebGPU and WebAssembly. Our method is faster than prior approaches and efficient for many data channels with a flexible and adaptive choice of data resolution.
- Published
- 2024
- Full Text
- View/download PDF
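A minimal sketch of the residency-octree idea from entry 3 above: a node of a resolution-independent spatial subdivision records which bricks from which resolution levels are currently resident in the cache, so a lookup can use the finest data available and fall back to coarser levels on a cache miss, while empty nodes allow empty-space skipping. All names, the fixed subdivision depth, and the traversal below are illustrative assumptions, not the paper's data structure.

```python
# Sketch (hypothetical names, not the paper's implementation): each node maps
# to bricks from several resolution levels; lookups return the finest resident
# data and fall back to coarser levels, independent of the caching granularity.
from dataclasses import dataclass, field

def child_index(point, depth):
    """Octant index of a point in [0, 1)^3 at a given subdivision depth."""
    scale = 2 ** (depth + 1)
    x, y, z = (int(c * scale) & 1 for c in point)
    return x | (y << 1) | (z << 2)

@dataclass
class ResidencyNode:
    children: list = field(default_factory=list)          # empty or exactly 8 children
    resident_bricks: dict = field(default_factory=dict)   # level -> set of cached brick IDs
    empty: bool = False                                    # true if the region holds no data

    def finest_resident(self):
        for level in sorted(self.resident_bricks, reverse=True):
            if self.resident_bricks[level]:
                return level, self.resident_bricks[level]
        return None

def lookup(node, point, depth=0, max_depth=8):
    """Descend the subdivision, remembering the finest resident data seen so far."""
    if node.empty:
        return None                                        # skip empty space entirely
    best = node.finest_resident()
    if node.children and depth < max_depth:
        child = node.children[child_index(point, depth)]
        deeper = lookup(child, point, depth + 1, max_depth)
        return deeper if deeper is not None else best      # cache miss: coarser fallback
    return best

root = ResidencyNode(resident_bricks={0: {"brick_000"}})
print(lookup(root, (0.3, 0.7, 0.1)))
```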
4. Vortex Lens: Interactive Vortex Core Line Extraction using Observed Line Integral Convolution.
- Author
-
Rautek P, Zhang X, Woschizka B, Theussl T, and Hadwiger M
- Abstract
This paper describes a novel method for detecting and visualizing vortex structures in unsteady 2D fluid flows. The method is based on an interactive local reference frame estimation that minimizes the observed time derivative of the input flow field v(x, t). A locally optimal reference frame w(x, t) assists the user in the identification of physically observable vortex structures in Observed Line Integral Convolution (LIC) visualizations. The observed LIC visualizations are interactively computed and displayed in a user-steered vortex lens region, embedded in the context of a conventional LIC visualization outside the lens. The locally optimal reference frame is then used to detect observed critical points, where v = w, which are used to seed vortex core lines. Each vortex core line is computed as a solution of the ordinary differential equation (ODE) ẇ(t) = w(w(t), t), with an observed critical point as initial condition (w(t₀), t₀). During integration, we enforce a strict error bound on the difference between the extracted core line and the integration of a path line of the input vector field, i.e., a solution to the ODE v̇(t) = v(v(t), t). We experimentally verify that this error depends on the step size of the core line integration. This ensures that our method extracts Lagrangian vortex core lines that are the simultaneous solution of both ODEs with a numerical error that is controllable by the integration step size. We show the usability of our method in the context of an interactive system using a lens metaphor, and evaluate the results in comparison to state-of-the-art vortex core line extraction methods.
- Published
- 2024
- Full Text
- View/download PDF
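A simplified sketch of the core-line integration described in entry 4 above: integrate the ODE in the locally optimal frame field w, restart a short path-line segment of the input field v at each accepted point, and halve the step size whenever the difference between the two exceeds a bound. The two toy fields, the tolerance, and the step-size control are assumptions; the paper's extraction is more careful than this.

```python
# Illustrative sketch (toy fields, simple step-size control): extract a core
# line by integrating the observed field w, while bounding its per-step
# deviation from a path line of the input field v restarted at the same point.
import numpy as np

def rk4_step(f, x, t, h):
    k1 = f(x, t)
    k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(x + h * k3, t + h)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def v(x, t):
    """Toy unsteady input field: a vortex whose center translates to the right."""
    d = x - np.array([0.1 * t, 0.0])
    return np.array([-d[1], d[0]]) + np.array([0.1, 0.0])

def w(x, t):
    """Toy frame-relative field: the same vortex with the translation removed."""
    return v(x, t) - np.array([0.1, 0.0])

def extract_core_line(seed, t0, t1, h=0.01, tol=1e-2, h_min=1e-5):
    core, t, line = seed.copy(), t0, [seed.copy()]
    while t < t1:
        trial_core = rk4_step(w, core, t, h)
        trial_path = rk4_step(v, core, t, h)      # path line restarted at the core point
        if np.linalg.norm(trial_core - trial_path) > tol and h > h_min:
            h *= 0.5                              # reject the step, retry with smaller h
            continue
        core, t = trial_core, t + h
        line.append(core.copy())
    return np.array(line)

core_line = extract_core_line(np.array([0.0, 0.0]), 0.0, 2.0)
print(core_line.shape, core_line[-1])
```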
5. Device runtime and costs of cardiac resynchronization therapy pacemakers - a health claims data analysis
- Author
-
Hadwiger, M, Dagres, N, Hindricks, G, L'hoest, H, Marschall, U, Katalinic, A, and Frielitz, FS
- Abstract
Introduction: This study investigates the runtime and costs of biventricular defibrillators (CRT-D) and biventricular pacemakers (CRT-P). Accurate estimates of cardiac resynchronization therapy (CRT) device runtime across all manufactures are rare, especially for CRT-P.Methods: Health claims data of a large nationwide German health insurance was used to analyze CRT device runtime. We defined device runtime as the time between the date of implantation and the date of generator change or removal. The median costs for implantation, change, and removal of a CRT device were calculated accordingly.Results: In total, the data set comprises 17,826 patients. A total of 4,296 complete runtimes for CRT-D devices and 429 complete runtimes for CRT-P devices were observed. Median device runtime was 6.04 years for CRT-D devices and 8.16 years for CRT-P devices (log-rank test p<0.0001). The median cost of implantation for a CRT-D device was 14,270 EUR, and for a CRT-P device 9,349 EUR.Conclusions: Compared to CRT-P devices, CRT-D devices had a significantly shorter device runtime of about two years. Moreover, CRT-D devices were associated with higher cost. The study provides important findings that can be utilized by cost-effectiveness analyses., Einleitung: Diese Studie untersucht die Laufzeit und Kosten von biventrikulären Defibrillatoren (CRT-D) und biventrikulären Schrittmachern (CRT-P). Genaue Schätzungen der Laufzeit von Geräten für die kardiale Resynchronisationstherapie (CRT) über alle Hersteller sind selten, insbesondere für CRT-P.Methoden: Zur Analyse der CRT-Gerätelaufzeit wurden Routinedaten einer großen bundesweiten deutschen Krankenkasse verwendet. Wir definierten die Gerätelaufzeit als die Zeit zwischen dem Datum der Implantation und dem Datum des Generatorwechsels oder der Entfernung. Die medianen Kosten für Implantation, Wechsel und Entfernung eines CRT-Gerätes wurden ebenfalls berechnet.Ergebnisse: Insgesamt umfasst der Datensatz 17.826 Patienten. Es wurden insgesamt 4.296 komplette Laufzeiten für CRT-D-Geräte und 429 komplette Laufzeiten für CRT-P-Geräte beobachtet. Die mediane Geräte-Laufzeit betrug 6,04 Jahre für CRT-D-Geräte und 8,16 Jahre für CRT-P-Geräte (Log-Rank-Test p<0,0001). Die medianen Implantationskosten betrugen 14.270 EUR für ein CRT-D-Gerät und 9.349 EUR für ein CRT-P-Gerät.Fazit: Im Vergleich zu CRT-P-Geräten hatten CRT-D-Geräte eine signifikant kürzere Gerätelaufzeit von etwa zwei Jahren. Außerdem waren CRT-D-Geräte mit höheren Kosten verbunden. Die Studie liefert wichtige Ergebnisse, die in Kosten-Effektivitätsanalysen verwendet werden können.
- Published
- 2022
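A small sketch of the survival-analysis machinery named in entry 5 above, Kaplan-Meier estimates of device runtime and a log-rank comparison, using the lifelines package on simulated runtimes. The Weibull parameters and censoring rates are made up; only the group sizes echo the abstract.

```python
# Sketch with toy data (not the study's claims data): Kaplan-Meier estimates of
# device runtime (time from implantation to generator change or removal,
# censored if no change was observed) and a log-rank comparison of the groups.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n_d, n_p = 4296, 429                        # sample sizes quoted in the abstract
runtime_d = rng.weibull(2.0, n_d) * 6.5     # toy runtimes in years, CRT-D
runtime_p = rng.weibull(2.0, n_p) * 9.0     # toy runtimes in years, CRT-P
observed_d = rng.uniform(size=n_d) < 0.8    # toy indicators: change/removal observed
observed_p = rng.uniform(size=n_p) < 0.8

km_d, km_p = KaplanMeierFitter(), KaplanMeierFitter()
km_d.fit(runtime_d, observed_d, label="CRT-D")
km_p.fit(runtime_p, observed_p, label="CRT-P")
print("median runtime CRT-D:", km_d.median_survival_time_)
print("median runtime CRT-P:", km_p.median_survival_time_)

res = logrank_test(runtime_d, runtime_p, observed_d, observed_p)
print("log-rank p-value:", res.p_value)
```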
6. Device runtime and costs of cardiac resynchronization therapy pacemakers - a health claims data analysis
- Author
-
Hadwiger, M, Dagres, N, Hindricks, G, L'hoest, H, Marschall, U, Katalinic, A, and Frielitz, FS
- Subjects
Gerätelanglebigkeit, Data Analysis, Pacemaker, Artificial, Routinedaten, device runtime, battery runtime, kardiale Resynchronisationstherapie, health claims data, device longevity, Defibrillators, Implantable, Cardiac Resynchronization Therapy, Treatment Outcome, ddc: 610, Batterielaufzeit, Medicine and health, Gerätelaufzeit, Humans, Cardiac Resynchronization Therapy Devices
- Abstract
Introduction: This study investigates the runtime and costs of biventricular defibrillators (CRT-D) and biventricular pacemakers (CRT-P). Accurate estimates of cardiac resynchronization therapy (CRT) device runtime across all manufacturers are rare, especially for CRT-P. Methods: Health claims data of a large nationwide German health insurance was used to analyze CRT device runtime. We defined device runtime as the time between the date of implantation and the date of generator change or removal. The median costs for implantation, change, and removal of a CRT device were calculated accordingly. Results: In total, the data set comprises 17,826 patients. A total of 4,296 complete runtimes for CRT-D devices and 429 complete runtimes for CRT-P devices were observed. Median device runtime was 6.04 years for CRT-D devices and 8.16 years for CRT-P devices (log-rank test p<0.0001). The median cost of implantation for a CRT-D device was 14,270 EUR, and for a CRT-P device 9,349 EUR. Conclusions: Compared to CRT-P devices, CRT-D devices had a significantly shorter device runtime of about two years. Moreover, CRT-D devices were associated with higher cost. The study provides important findings that can be utilized by cost-effectiveness analyses., Einleitung: Diese Studie untersucht die Laufzeit und Kosten von biventrikulären Defibrillatoren (CRT-D) und biventrikulären Schrittmachern (CRT-P). Genaue Schätzungen der Laufzeit von Geräten für die kardiale Resynchronisationstherapie (CRT) über alle Hersteller sind selten, insbesondere für CRT-P. Methoden: Zur Analyse der CRT-Gerätelaufzeit wurden Routinedaten einer großen bundesweiten deutschen Krankenkasse verwendet. Wir definierten die Gerätelaufzeit als die Zeit zwischen dem Datum der Implantation und dem Datum des Generatorwechsels oder der Entfernung. Die medianen Kosten für Implantation, Wechsel und Entfernung eines CRT-Gerätes wurden ebenfalls berechnet. Ergebnisse: Insgesamt umfasst der Datensatz 17.826 Patienten. Es wurden insgesamt 4.296 komplette Laufzeiten für CRT-D-Geräte und 429 komplette Laufzeiten für CRT-P-Geräte beobachtet. Die mediane Geräte-Laufzeit betrug 6,04 Jahre für CRT-D-Geräte und 8,16 Jahre für CRT-P-Geräte (Log-Rank-Test p<0,0001). Die medianen Implantationskosten betrugen 14.270 EUR für ein CRT-D-Gerät und 9.349 EUR für ein CRT-P-Gerät. Fazit: Im Vergleich zu CRT-P-Geräten hatten CRT-D-Geräte eine signifikant kürzere Gerätelaufzeit von etwa zwei Jahren. Außerdem waren CRT-D-Geräte mit höheren Kosten verbunden. Die Studie liefert wichtige Ergebnisse, die in Kosten-Effektivitätsanalysen verwendet werden können.
- Published
- 2021
7. The Enhancement of Anther Culture Efficiency in Brassica napus ssp. oleifera Metzg. (Sinsk.) Using Low Doses of Gamma Irradiation
- Author
-
Macdonald, M. V., Hadwiger, M. A., Aslam, F. N., and Ingram, D. S.
- Published
- 1988
8. Towards an end-to-end analysis and prediction system for weather, climate, and Marine applications in the Red Sea
- Author
-
Hoteit, I. Abualnaja, Y. Afzal, S. Ait-El-Fquih, B. Akylas, T. Antony, C. Dawson, C. Asfahani, K. Brewin, R.J. Cavaleri, L. Cerovecki, I. Cornuelle, B. Desamsetti, S. Attada, R. Dasari, H. Sanchez-Garrido, J. Genevier, L. El Gharamti, M. Gittings, J.A. Gokul, E. Gopalakrishnan, G. Guo, D. Hadri, B. Hadwiger, M. Hammoud, M.A. Hendershott, M. Hittawe, M. Karumuri, A. Knio, O. Köhl, A. Kortas, S. Krokos, G. Kunchala, R. Issa, L. Lakkis, I. Langodan, S. Lermusiaux, P. Luong, T. Ma, J. Le Maitre, O. Mazloff, M. El Mohtar, S. Papadopoulos, V.P. Platt, T. Pratt, L. Raboudi, N. Racault, M.-F. Raitsos, D.E. Razak, S. Sanikommu, S. Sathyendranath, S. Sofianos, S. Subramanian, A. Sun, R. Titi, E. Toye, H. Triantafyllou, G. Tsiaras, K. Vasou, P. Viswanadhapalli, Y. Wang, Y. Yao, F. Zhan, P. Zodiatis, G.
- Abstract
The Red Sea, home to the second-longest coral reef system in the world, is a vital resource for the Kingdom of Saudi Arabia. The Red Sea provides 90% of the Kingdom’s potable water by desalinization, supporting tourism, shipping, aquaculture, and fishing industries, which together contribute about 10%–20% of the country’s GDP. All these activities, and those elsewhere in the Red Sea region, critically depend on oceanic and atmospheric conditions. At a time of mega-development projects along the Red Sea coast, and global warming, authorities are working on optimizing the harnessing of environmental resources, including renewable energy and rainwater harvesting. All these require high-resolution weather and climate information. Toward this end, we have undertaken a multipronged research and development activity in which we are developing an integrated data-driven regional coupled modeling system. The telescopically nested components include 5-km- to 600-m-resolution atmospheric models to address weather and climate challenges, 4-km- to 50-m-resolution ocean models with regional and coastal configurations to simulate and predict the general and mesoscale circulation, 4-km- to 100-m-resolution ecosystem models to simulate the biogeochemistry, and 1-km- to 50-m-resolution wave models. In addition, a complementary probabilistic transport modeling system predicts dispersion of contaminant plumes, oil spill, and marine ecosystem connectivity. Advanced ensemble data assimilation capabilities have also been implemented for accurate forecasting. Resulting achievements include significant advancement in our understanding of the regional circulation and its connection to the global climate, development, and validation of long-term Red Sea regional atmospheric–oceanic–wave reanalyses and forecasting capacities. These products are being extensively used by academia, government, and industry in various weather and marine studies and operations, environmental policies, renewable energy applications, impact assessment, flood forecasting, and more. © 2021 American Meteorological Society
- Published
- 2021
9. Real-Time Visualization of Large-Scale Geological Models With Nonlinear Feature-Preserving Levels of Detail.
- Author
-
Sicat R, Ibrahim M, Ageeli A, Mannuss F, Rautek P, and Hadwiger M
- Abstract
The rapidly growing size and complexity of 3D geological models has increased the need for level-of-detail techniques and compact encodings to facilitate interactive visualization. For large-scale hexahedral meshes, state-of-the-art approaches often employ wavelet schemes for level of detail as well as for data compression. Here, wavelet transforms serve two purposes: (1) they achieve substantial compression for data reduction; and (2) the multiresolution encoding provides levels of detail for visualization. However, in coarser detail levels, important geometric features, such as geological faults, often get too smoothed out or lost, due to linear translation-invariant filtering. The same is true for attribute features, such as discontinuities in porosity or permeability. We present a novel, integrated approach addressing both purposes above, while preserving critical data features of both model geometry and its attributes. Our first major contribution is that we completely decouple the computation of levels of detail from data compression, and perform nonlinear filtering in a high-dimensional data space jointly representing the geological model geometry with its attributes. Computing detail levels in this space enables us to jointly preserve features in both geometry and attributes. While designed in a general way, our framework specifically employs joint bilateral filters, computed efficiently on a high-dimensional permutohedral grid. For data compression, after the computation of all detail levels, each level is separately encoded with a standard wavelet transform. Our second major contribution is a compact GPU data structure for the encoded mesh and attributes that enables direct real-time GPU visualization without prior decoding.
- Published
- 2023
- Full Text
- View/download PDF
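Entry 9 above relies on joint (cross) bilateral filtering so that sharp attribute discontinuities, such as a geological fault, survive coarser detail levels. A brute-force 1D sketch of that filter is shown below; it is not the paper's permutohedral-grid implementation, and the guidance signal, parameters, and toy data are assumptions.

```python
# Sketch of a joint (cross) bilateral filter in 1D (brute force): a guidance
# signal with a sharp discontinuity keeps the filtered attribute from being
# smoothed across that discontinuity, unlike a plain Gaussian filter.
import numpy as np

def joint_bilateral_1d(signal, guide, sigma_s=3.0, sigma_r=0.1, radius=8):
    out = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        idx = np.arange(lo, hi)
        w_spatial = np.exp(-0.5 * ((idx - i) / sigma_s) ** 2)       # spatial weight
        w_range = np.exp(-0.5 * ((guide[idx] - guide[i]) / sigma_r) ** 2)  # guidance weight
        w = w_spatial * w_range
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

x = np.linspace(0, 1, 200)
guide = (x > 0.5).astype(float)                    # sharp "fault" in the guidance data
porosity = 0.2 + 0.1 * guide + 0.02 * np.random.default_rng(1).standard_normal(200)
smoothed = joint_bilateral_1d(porosity, guide)     # discontinuity at x = 0.5 is preserved
print(smoothed[95:105])
```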
10. Ultraliser: a framework for creating multiscale, high-fidelity and geometrically realistic 3D models for in silico neuroscience.
- Author
-
Abdellah M, Cantero JJG, Guerrero NR, Foni A, Coggan JS, Calì C, Agus M, Zisis E, Keller D, Hadwiger M, Magistretti PJ, Markram H, and Schürmann F
- Subjects
- Computer Simulation, Software, Neurons
- Abstract
Ultraliser is a neuroscience-specific software framework capable of creating accurate and biologically realistic 3D models of complex neuroscientific structures at intracellular (e.g. mitochondria and endoplasmic reticula), cellular (e.g. neurons and glia) and even multicellular scales of resolution (e.g. cerebral vasculature and minicolumns). Resulting models are exported as triangulated surface meshes and annotated volumes for multiple applications in in silico neuroscience, allowing scalable supercomputer simulations that can unravel intricate cellular structure-function relationships. Ultraliser implements a high-performance and unconditionally robust voxelization engine adapted to create optimized watertight surface meshes and annotated voxel grids from arbitrary non-watertight triangular soups, digitized morphological skeletons or binary volumetric masks. The framework represents a major leap forward in simulation-based neuroscience, making it possible to employ high-resolution 3D structural models for quantification of surface areas and volumes, which are of the utmost importance for cellular and system simulations. The power of Ultraliser is demonstrated with several use cases in which hundreds of models are created for potential application in diverse types of simulations. Ultraliser is publicly released under the GNU GPL3 license on GitHub (BlueBrain/Ultraliser)., Significance: There is crystal clear evidence on the impact of cell shape on its signaling mechanisms. Structural models can therefore be insightful to realize the function; the more realistic the structure can be, the further we get insights into the function. Creating realistic structural models from existing ones is challenging, particularly when needed for detailed subcellular simulations. We present Ultraliser, a neuroscience-dedicated framework capable of building these structural models with realistic and detailed cellular geometries that can be used for simulations., (© The Author(s) 2022. Published by Oxford University Press.)
- Published
- 2023
- Full Text
- View/download PDF
11. Multivariate Probabilistic Range Queries for Scalable Interactive 3D Visualization.
- Author
-
Ageeli A, Jaspe-Villanueva A, Sicat R, Mannuss F, Rautek P, and Hadwiger M
- Abstract
Large-scale scientific data, such as weather and climate simulations, often comprise a large number of attributes for each data sample, like temperature, pressure, humidity, and many more. Interactive visualization and analysis require filtering according to any desired combination of attributes, in particular logical AND operations, which is challenging for large data and many attributes. Many general data structures for this problem are built for and scale with a fixed number of attributes, and scalability of joint queries with arbitrary attribute subsets remains a significant problem. We propose a flexible probabilistic framework for multivariate range queries that decouples all attribute dimensions via projection, allowing any subset of attributes to be queried with full efficiency. Moreover, our approach is output-sensitive, mainly scaling with the cardinality of the query result rather than with the input data size. This is particularly important for joint attribute queries, where the query output is usually much smaller than the whole data set. Additionally, our approach can split query evaluation between user interaction and rendering, achieving much better scalability for interactive visualization than the previous state of the art. Furthermore, even when a multi-resolution strategy is used for visualization, queries are jointly evaluated at the finest data granularity, because our framework does not limit query accuracy to a fixed spatial subdivision.
- Published
- 2023
- Full Text
- View/download PDF
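A toy sketch of the query decoupling described in entry 11 above: one index per attribute, and an AND query over any attribute subset answered by intersecting per-attribute candidate sets, smallest first, so the work tracks the result size. The exact sorted index used here is a simple stand-in for the paper's probabilistic, output-sensitive structure.

```python
# Sketch (not the paper's data structure): per-attribute indices that can be
# queried independently, so a joint range query over an arbitrary subset of
# attributes is answered by intersecting candidate sets, most selective first.
import numpy as np

class AttributeIndex:
    def __init__(self, values):
        self.order = np.argsort(values)            # sample IDs sorted by attribute value
        self.sorted_vals = values[self.order]

    def range_ids(self, lo, hi):
        a = np.searchsorted(self.sorted_vals, lo, side="left")
        b = np.searchsorted(self.sorted_vals, hi, side="right")
        return set(self.order[a:b].tolist())

def joint_query(indices, ranges):
    """AND of range predicates over any subset of attributes."""
    candidates = sorted((indices[a].range_ids(lo, hi) for a, (lo, hi) in ranges.items()),
                        key=len)
    result = candidates[0]
    for s in candidates[1:]:
        result &= s
    return result

rng = np.random.default_rng(3)
data = {a: rng.random(100_000) for a in ("temperature", "pressure", "humidity")}
indices = {a: AttributeIndex(v) for a, v in data.items()}
hits = joint_query(indices, {"temperature": (0.9, 1.0), "humidity": (0.0, 0.05)})
print(len(hits))
```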
12. Filipina mothersʼ perceptions about childbirth at home
- Author
-
Hadwiger, M. C. and Hadwiger, S. C.
- Published
- 2012
- Full Text
- View/download PDF
13. A long-term cost-effectiveness analysis of cardiac resynchronisation therapy with or without defibrillator based on health claims data.
- Author
-
Hadwiger M, Schumann L, Eisemann N, Dagres N, Hindricks G, Haug J, Wolf M, Marschall U, Katalinic A, and Frielitz FS
- Abstract
Background: In Germany, CRT devices with defibrillator capability (CRT-D) have become the predominant treatment strategy for patients with heart failure and cardiac dyssynchrony. However, according to current guidelines, most patients would also be eligible for the less expensive CRT pacemaker (CRT-P). We conducted a cost-effectiveness analysis for CRT-P devices compared to CRT-D devices from a German payer's perspective., Methods: Longitudinal health claims data from 3569 patients with de novo CRT implantation from 2014 to 2019 were used to parametrise a cohort Markov model. Model outcomes were costs and effectiveness measured in terms of life years. Transition probabilities were derived from multivariable parametric survival regression that controlled for baseline differences of CRT-D and CRT-P patients. Deterministic and probabilistic sensitivity analyses were conducted., Results: The Markov model predicted a median survival of 84 months for CRT-P patients and 92 months for CRT-D patients. In the base case, CRT-P devices incurred incremental costs of € - 13,093 per patient and 0.30 incremental life years were lost. The ICER was € 43,965 saved per life year lost. In the probabilistic sensitivity analysis, uncertainty regarding the effectiveness was observed but not regarding costs., Conclusion: This modelling study illustrates the uncertainty of the higher effectiveness of CRT-D devices compared to CRT-P devices. Given the difference in incremental costs between CRT-P and CRT-D treatment, there would be significant potential cost savings to the healthcare system if CRT-D devices were restricted to patients likely to benefit from the additional defibrillator., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
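The cohort Markov model in entry 13 above can be sketched in a few lines: two strategies run through monthly cycles over a 20-year horizon, discounted costs and life years are accumulated, and an ICER is formed from the increments. Every transition probability and running cost below is a placeholder; only the implantation costs echo figures quoted elsewhere in these results, and nothing here reproduces the study's parameters.

```python
# Toy cohort Markov model (all probabilities and costs are placeholders): states
# Stable -> Hospital -> Dead, monthly cycles, discounted costs and life years,
# and an ICER computed from the increments between two strategies.
import numpy as np

def run_model(p_stable_to_hosp, p_hosp_to_dead, p_stable_to_dead,
              cost_stable, cost_hosp, implant_cost,
              cycles=240, discount_annual=0.03):
    # state order: Stable, Hospital, Dead
    T = np.array([
        [1 - p_stable_to_hosp - p_stable_to_dead, p_stable_to_hosp, p_stable_to_dead],
        [0.70, 0.30 - p_hosp_to_dead, p_hosp_to_dead],
        [0.0, 0.0, 1.0],
    ])
    dist = np.array([1.0, 0.0, 0.0])
    costs, life_years = implant_cost, 0.0
    for c in range(cycles):
        disc = 1.0 / (1 + discount_annual) ** (c / 12.0)
        costs += disc * (dist[0] * cost_stable + dist[1] * cost_hosp)
        life_years += disc * (dist[0] + dist[1]) / 12.0
        dist = dist @ T
    return costs, life_years

cost_d, ly_d = run_model(0.010, 0.05, 0.006, 60, 4000, 14270)   # "CRT-D"-like strategy
cost_p, ly_p = run_model(0.010, 0.05, 0.008, 60, 4000, 9349)    # "CRT-P"-like strategy
icer = (cost_d - cost_p) / (ly_d - ly_p)
print(f"incremental cost {cost_d - cost_p:.0f} EUR, incremental LY {ly_d - ly_p:.3f}, "
      f"ICER {icer:.0f} EUR per life year gained")
```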
14. Survival of patients undergoing cardiac resynchronization therapy with or without defibrillator: the RESET-CRT project.
- Author
-
Hadwiger M, Dagres N, Haug J, Wolf M, Marschall U, Tijssen J, Katalinic A, Frielitz FS, and Hindricks G
- Subjects
- Cardiac Resynchronization Therapy Devices, Death, Sudden, Cardiac etiology, Humans, Risk Factors, Treatment Outcome, Cardiac Resynchronization Therapy methods, Defibrillators, Implantable, Heart Failure, Pacemaker, Artificial
- Abstract
Aims: Cardiac resynchronization therapy (CRT) is an established treatment for heart failure. There is contradictory evidence whether defibrillator capability improves prognosis in patients receiving CRT. We compared the survival of patients undergoing de novo implantation of a CRT with defibrillator (CRT-D) option and CRT with pacemaker (CRT-P) in a large health claims database., Methods and Results: Using health claims data of a major German statutory health insurance, we analysed patients with de novo CRT implantation from 2014 to 2019 without indication for defibrillator implantation for secondary prevention of sudden cardiac death. We performed age-adjusted Cox proportional hazard regression and entropy balancing to calculate weights to control for baseline imbalances. The analysis comprised 847 CRT-P and 2722 CRT-D patients. Overall, 714 deaths were recorded during a median follow-up of 2.35 years. A higher cumulative incidence of all-cause death was observed in the initial unadjusted Kaplan-Meier time-to-event analysis [hazard ratio (HR): 1.63, 95% confidence interval (CI): 1.38-1.92]. After adjustment for age, HR was 1.13 (95% CI: 0.95-1.35) and after entropy balancing 0.99 (95% CI: 0.81-1.20). No survival differences were found in different age groups. The results were robust in sensitivity analyses., Conclusion: In a large health claims database of CRT implantations performed in a contemporary setting, CRT-P treatment was not associated with inferior survival compared with CRT-D. Age differences accounted for the greatest part of the survival difference that was observed in the initial unadjusted analysis., (© The Author(s) 2022. Published by Oxford University Press on behalf of European Society of Cardiology.)
- Published
- 2022
- Full Text
- View/download PDF
15. Device runtime and costs of cardiac resynchronization therapy pacemakers - a health claims data analysis.
- Author
-
Hadwiger M, Dagres N, Hindricks G, L'hoest H, Marschall U, Katalinic A, and Frielitz FS
- Subjects
- Cardiac Resynchronization Therapy Devices, Data Analysis, Humans, Treatment Outcome, Cardiac Resynchronization Therapy, Defibrillators, Implantable, Pacemaker, Artificial
- Abstract
Introduction: This study investigates the runtime and costs of biventricular defibrillators (CRT-D) and biventricular pacemakers (CRT-P). Accurate estimates of cardiac resynchronization therapy (CRT) device runtime across all manufacturers are rare, especially for CRT-P. Methods: Health claims data of a large nationwide German health insurance was used to analyze CRT device runtime. We defined device runtime as the time between the date of implantation and the date of generator change or removal. The median costs for implantation, change, and removal of a CRT device were calculated accordingly. Results: In total, the data set comprises 17,826 patients. A total of 4,296 complete runtimes for CRT-D devices and 429 complete runtimes for CRT-P devices were observed. Median device runtime was 6.04 years for CRT-D devices and 8.16 years for CRT-P devices (log-rank test p<0.0001). The median cost of implantation for a CRT-D device was 14,270 EUR, and for a CRT-P device 9,349 EUR. Conclusions: Compared to CRT-P devices, CRT-D devices had a significantly shorter device runtime of about two years. Moreover, CRT-D devices were associated with higher cost. The study provides important findings that can be utilized by cost-effectiveness analyses., Competing Interests: The authors declare that they have no competing interests., (Copyright © 2022 Hadwiger et al.)
- Published
- 2022
- Full Text
- View/download PDF
16. Hyperquadrics for Shape Analysis of 3D Nanoscale Reconstructions of Brain Cell Nuclear Envelopes
- Author
-
Agus, M., Calì, C., Tapia Morales, A., Lehväslaiho, H. O., Magistretti, P. J., Gobbetti, E., and Hadwiger, M.
- Abstract
Shape analysis of cell nuclei is becoming increasingly important in biology and medicine. Recent results have identified that the significant variability in shape and size of nuclei has an important impact on many biological processes. Current analysis techniques involve automatic methods for detection and segmentation of histology and microscopy images, and are mostly performed in 2D. Methods for 3D shape analysis, made possible by emerging acquisition methods capable of providing nanometric-scale 3D reconstructions, are, however, still at an early stage, and often assume a simple spherical shape. We introduce here a framework for analyzing 3D nanoscale reconstructions of nuclei of brain cells (mostly neurons), obtained by semiautomatic segmentation of electron micrographs. Our method considers an implicit parametric representation customizing the hyperquadrics formulation of convex shapes. Point clouds of nuclear envelopes, extracted from image data, are fitted to our parametrized model, which is then used for performing statistical analysis and shape comparisons. We report on the preliminary analysis of a collection of 92 nuclei of brain cells obtained from a sample of the somatosensory cortex of a juvenile rat. (Smart Tools and Apps for Graphics - Eurographics Italian Chapter Conference, pp. 115-122.)
- Published
- 2018
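Entry 16 above fits a hyperquadric implicit model to point clouds of nuclear envelopes. A minimal sketch of such a fit is shown below, using a sum of |a_k . p + d_k|^gamma terms and least squares on the implicit residual; the number of terms, the exponent, and the toy ellipsoidal point cloud are assumptions rather than the paper's configuration.

```python
# Sketch (illustrative, not the paper's pipeline): fit a hyperquadric implicit
# model  sum_k |a_k . p + d_k|^gamma = 1  to a point cloud by least squares on
# the implicit residual H(p) - 1.
import numpy as np
from scipy.optimize import least_squares

def hyperquadric(params, pts, n_terms, gamma=2.0):
    """Evaluate H(p) = sum_k |a_k . p + d_k|^gamma for all points p."""
    P = params.reshape(n_terms, 4)                 # each row: (ax, ay, az, d)
    lin = pts @ P[:, :3].T + P[:, 3]               # shape (n_points, n_terms)
    return np.sum(np.abs(lin) ** gamma, axis=1)

def fit_hyperquadric(pts, n_terms=3, gamma=2.0, seed=0):
    rng = np.random.default_rng(seed)
    x0 = rng.normal(scale=0.5, size=n_terms * 4)
    res = least_squares(lambda p: hyperquadric(p, pts, n_terms, gamma) - 1.0, x0)
    return res.x.reshape(n_terms, 4)

# Toy "nuclear envelope": noisy points on an axis-aligned ellipsoid.
rng = np.random.default_rng(1)
u, t = rng.uniform(0, np.pi, 2000), rng.uniform(0, 2 * np.pi, 2000)
pts = np.stack([3 * np.sin(u) * np.cos(t),
                2 * np.sin(u) * np.sin(t),
                1.5 * np.cos(u)], axis=1)
pts += 0.02 * rng.standard_normal(pts.shape)
model = fit_hyperquadric(pts)
print(model)   # fitted (a, d) per term; such descriptors can be compared across nuclei
```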
17. Interactive Volumetric Visual Analysis of Glycogen‐derived Energy Absorption in Nanometric Brain Structures
- Author
-
Agus, M., Calì, C., Al-Awami, A., Gobbetti, E., Magistretti, P., and Hadwiger, M.
- Published
- 2019
- Full Text
- View/download PDF
18. The State of the Art in Visual Analysis Approaches for Ocean and Atmospheric Datasets
- Author
-
Afzal, S., Hittawe, M.M., Ghani, S., Jamil, T., Knio, O., Hadwiger, M., and Hoteit, I.
- Published
- 2019
- Full Text
- View/download PDF
19. Interactive Exploration of Physically-Observable Objective Vortices in Unsteady 2D Flow.
- Author
-
Zhang X, Hadwiger M, Theussl T, and Rautek P
- Abstract
State-of-the-art computation and visualization of vortices in unsteady fluid flow employ objective vortex criteria, which makes them independent of reference frames or observers. However, objectivity by itself, although crucial, is not sufficient to guarantee that one can identify physically-realizable observers that would perceive or detect the same vortices. Moreover, a significant challenge is that a single reference frame is often not sufficient to accurately observe multiple vortices that follow different motions. This paper presents a novel framework for the exploration and use of an interactively-chosen set of observers, of the resulting relative velocity fields, and of objective vortex structures. We show that our approach facilitates the objective detection and visualization of vortices relative to well-adapted reference frame motions, while at the same time guaranteeing that these observers are in fact physically realizable. In order to represent and manipulate observers efficiently, we make use of the low-dimensional vector space structure of the Lie algebra of physically-realizable observer motions. We illustrate that our framework facilitates the efficient choice and guided exploration of objective vortices in unsteady 2D flow, on planar as well as on spherical domains, using well-adapted reference frames.
- Published
- 2022
- Full Text
- View/download PDF
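Entry 19 above exploits the fact that physically realizable observers in 2D are rigid motions, i.e. elements (omega, tx, ty) of the three-dimensional Lie algebra se(2). The sketch below evaluates such an observer's velocity field and the relative field v - u that this observer perceives; the toy input flow and the chosen observer are assumptions, not the paper's optimization.

```python
# Sketch: a rigid 2D observer (omega, tx, ty) has velocity field
# u(x, y) = (tx - omega*y, ty + omega*x); the flow seen by that observer is
# the relative field v - u.
import numpy as np

def observer_field(omega, tx, ty, xy):
    """Velocity field of a rigid observer motion evaluated at points xy (N, 2)."""
    x, y = xy[:, 0], xy[:, 1]
    return np.stack([tx - omega * y, ty + omega * x], axis=1)

def relative_field(v, observer, xy):
    """Input flow as perceived by the observer."""
    return v(xy) - observer_field(*observer, xy)

def v(xy):
    """Toy input: a vortex superposed with a uniform translation to the right."""
    x, y = xy[:, 0], xy[:, 1]
    return np.stack([-y + 0.5, x], axis=1)

grid = np.stack(np.meshgrid(np.linspace(-1, 1, 5),
                            np.linspace(-1, 1, 5)), axis=-1).reshape(-1, 2)
# The observer (0, 0.5, 0) translates with the vortex; only the rotation remains.
print(relative_field(v, (0.0, 0.5, 0.0), grid))
```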
20. Probabilistic Occlusion Culling using Confidence Maps for High-Quality Rendering of Large Particle Data.
- Author
-
Ibrahim M, Rautek P, Reina G, Agus M, and Hadwiger M
- Abstract
Achieving high rendering quality in the visualization of large particle data, for example from large-scale molecular dynamics simulations, requires a significant amount of sub-pixel super-sampling, due to very high numbers of particles per pixel. Although it is impossible to super-sample all particles of large-scale data at interactive rates, efficient occlusion culling can decouple the overall data size from a high effective sampling rate of visible particles. However, while the latter is essential for domain scientists to be able to see important data features, performing occlusion culling by sampling or sorting the data is usually slow or error-prone due to visibility estimates of insufficient quality. We present a novel probabilistic culling architecture for super-sampled high-quality rendering of large particle data. Occlusion is dynamically determined at the sub-pixel level, without explicit visibility sorting or data simplification. We introduce confidence maps to probabilistically estimate confidence in the visibility data gathered so far. This enables progressive, confidence-based culling, helping to avoid wrong visibility decisions. In this way, we determine particle visibility with high accuracy, although only a small part of the data set is sampled. This enables extensive super-sampling of (partially) visible particles for high rendering quality, at a fraction of the cost of sampling all particles. For real-time performance with millions of particles, we exploit novel features of recent GPU architectures to group particles into two hierarchy levels, combining fine-grained culling with high frame rates.
- Published
- 2022
- Full Text
- View/download PDF
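A highly simplified sketch of the confidence-map idea from entry 20 above: each pixel keeps the nearest occluding depth seen so far together with a confidence that grows with the number of gathered samples, and a particle is culled only when it lies behind that depth and the confidence is high. The class layout and thresholds are hypothetical, not the paper's GPU pipeline.

```python
# Sketch (hypothetical structure): progressive, confidence-based culling.
# Early, unreliable depth estimates cannot cause wrong visibility decisions,
# because culling also requires a high per-pixel confidence.
import numpy as np

class ConfidenceMap:
    def __init__(self, shape, conf_threshold=0.9, samples_to_confident=64):
        self.depth = np.full(shape, np.inf)            # nearest opaque depth seen so far
        self.samples = np.zeros(shape, dtype=np.int32) # samples gathered per pixel
        self.k = samples_to_confident
        self.threshold = conf_threshold

    def confidence(self, px):
        return min(1.0, self.samples[px] / self.k)

    def record_sample(self, px, depth):
        self.samples[px] += 1
        self.depth[px] = min(self.depth[px], depth)

    def cull(self, px, particle_depth):
        """True if the particle can be skipped for this pixel."""
        return (particle_depth > self.depth[px]) and (self.confidence(px) >= self.threshold)

cmap = ConfidenceMap((4, 4))
px = (1, 2)
for d in np.random.default_rng(0).uniform(0.2, 0.4, 100):
    cmap.record_sample(px, d)
print(cmap.cull(px, 0.9), cmap.cull(px, 0.1))   # far particle culled, near one kept
```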
21. A Lagrangian method for extracting eddy boundaries in the Red Sea and the Gulf of Aden
- Author
-
Friederici, Anke, Mahamadou Kele, H. T., Hoteit, I., Weinkauf, Tino, Theisel, H., and Hadwiger, M.
- Abstract
Mesoscale ocean eddies play a major role in both the intermixing of water and the transport of biological mass. This makes the identification and tracking of their shape, location and deformation over time highly important for a number of applications. While eddies maintain a roughly circular shape in the free ocean, the narrow basins of the Red Sea and Gulf of Aden lead to the formation of irregular eddy shapes that existing methods struggle to identify. We propose the following model: Inside an eddy, particles rotate around a common core and thereby remain at a constant distance under a certain parametrization. The transition to the more unpredictable flow on the outside can thus be identified as the eddy boundary. We apply this algorithm to a realistic simulation of the Red Sea circulation, where we are able to identify the shape of irregular eddies robustly and more coherently than previous methods. We visualize the eddies as tubes in space-time to enable the analysis of their movement and deformation over several weeks. (Part of proceedings: ISBN 978-153866882-5)
- Published
- 2018
- Full Text
- View/download PDF
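The eddy model of entry 21 above, particles inside an eddy staying at a roughly constant distance from a common core, suggests a simple boundary test: advect rings of particles of growing radius together with the core and flag the first radius at which the spread of distances exceeds a tolerance. The sketch below does exactly that on a toy flow; the velocity field, the integrator, and the tolerance are assumptions, not the published algorithm.

```python
# Sketch: grow rings of particles around a candidate eddy core, advect them,
# and report the first radius at which the particle-to-core distances spread,
# taken here as the eddy boundary.
import numpy as np

def advect(points, velocity, t0, t1, h=0.01):
    t, pts = t0, points.copy()
    while t < t1:
        pts = pts + h * velocity(pts, t)       # explicit Euler, fine for a sketch
        t += h
    return pts

def velocity(pts, t):
    """Toy flow: solid rotation inside r < 1, shear-like disturbance outside."""
    r = np.linalg.norm(pts, axis=1, keepdims=True)
    rot = np.stack([-pts[:, 1], pts[:, 0]], axis=1)
    noise = np.stack([0.6 * pts[:, 1] ** 2, np.zeros(len(pts))], axis=1)
    return np.where(r < 1.0, rot, 0.3 * rot + noise)

def eddy_radius(core, t0=0.0, t1=6.0, tol=0.05):
    for radius in np.arange(0.2, 2.0, 0.1):
        angles = np.linspace(0, 2 * np.pi, 32, endpoint=False)
        ring = core + radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
        advected = advect(ring, velocity, t0, t1)
        core_adv = advect(core[None, :], velocity, t0, t1)[0]
        dist = np.linalg.norm(advected - core_adv, axis=1)
        if np.std(dist) / radius > tol:
            return radius                      # spread grows: outside the coherent eddy
    return None

print(eddy_radius(np.array([0.0, 0.0])))
```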
22. Soziale Probleme in der hausärztlichen Versorgung - Häufigkeit, Reaktionen, Handlungsoptionen und erwünschter Unterstützungsbedarf aus der Sicht von Hausärztinnen und Hausärzten [Social problems in general practice care - frequency, reactions, options for action, and desired support needs from the perspective of general practitioners]
- Author
-
Kloppe, T, Zimmermann, T, Mews, C, Tetzlaff, B, Hadwiger, M, von dem Knesebeck, O, and Scherer, M
- Published
- 2018
23. Long-Term Effects of an Intensive Prevention Program After Acute Myocardial Infarction.
- Author
-
Osteresch R, Fach A, Frielitz FS, Meyer S, Schmucker J, Rühle S, Retzlaff T, Hadwiger M, Härle T, Elsässer A, Katalinic A, Eitel I, Hambrecht R, and Wienbergen H
- Subjects
- Aged, Angina, Unstable epidemiology, Blood Pressure, Cardiac Rehabilitation, Cholesterol, LDL, Comorbidity, Cost-Benefit Analysis, Costs and Cost Analysis, Female, Health Care Costs, Hospitalization statistics & numerical data, Humans, Hyperlipidemias epidemiology, Hyperlipidemias therapy, Male, Middle Aged, Mortality, Myocardial Infarction epidemiology, Myocardial Revascularization statistics & numerical data, Obesity epidemiology, Obesity therapy, Overweight epidemiology, Overweight therapy, Patient Education as Topic economics, Recurrence, Risk Reduction Behavior, Secondary Prevention economics, Smoking epidemiology, Smoking therapy, Smoking Cessation, Stroke epidemiology, Telemedicine economics, Telemetry economics, Telemetry methods, Telephone, Weight Loss, Exercise, Myocardial Infarction therapy, Patient Education as Topic methods, Secondary Prevention methods, Telemedicine methods
- Abstract
Effective long-term prevention after myocardial infarction (MI) is crucial to reduce recurrent events. In this study the effects of a 12-month intensive prevention program (IPP), based on repetitive contacts between non-physician "prevention assistants" and patients, were evaluated. Patients after MI were randomly assigned to the IPP versus usual care (UC). Effects of IPP on risk factor control, clinical events and costs were investigated after 24 months. In a substudy, the efficacy of short reinterventions after more than 24 months ("Prevention Boosts") was analyzed. IPP was associated with significantly better risk factor control compared to UC after 24 months and a trend towards fewer serious clinical events (12.5% vs 20.9%, log-rank p = 0.06). Economic analyses revealed that already after 24 months cost savings due to event reduction outweighed the costs of the prevention program (costs per patient 1,070 € in IPP vs 1,170 € in UC). Short reinterventions ("Prevention Boosts") more than 24 months after MI further improved risk factor control, such as LDL cholesterol and blood pressure lowering. In conclusion, IPP was associated with numerous beneficial effects on risk factor control, clinical events and costs. The study thereby demonstrates the efficacy of preventive long-term concepts after MI, based on repetitive contacts between non-physician coworkers and patients., (Copyright © 2021 Elsevier Inc. All rights reserved.)
- Published
- 2021
- Full Text
- View/download PDF
24. Gesundheitsbezogene soziale Probleme in der hausärztlichen Versorgung [Health-related social problems in general practice care]
- Author
-
Mews, C., Kloppe, T., Tetzlaff, B., Zimmermann, T., Hadwiger, M., Scherer, M., and Von Dem Knesebeck, O.
- Subjects
Patientenanliegen, ddc: 610, soziale Probleme, 610 Medical sciences, Medicine
- Abstract
Background: Patients frequently make use of general practice care for health complaints that are connected with social problems. General practitioners therefore often face care topics that are not primarily medical [for the full text, follow the URL given above], 50. Kongress für Allgemeinmedizin und Familienmedizin
- Published
- 2016
- Full Text
- View/download PDF
25. Comparative Visual Analysis of Structure-Performance Relations in Complex Bulk-Heterojunction Morphologies.
- Author
-
Aboulhassan, A., Sicat, R., Baum, D., Wodo, O., and Hadwiger, M.
- Subjects
PHOTOVOLTAIC cells, NUCLEAR exciton model, COMPACT spaces (Topology), MULTIPLE correspondence analysis (Statistics), TORTUOSITY
- Abstract
The structure of Bulk-Heterojunction (BHJ) materials, the main component of organic photovoltaic solar cells, is very complex, and the relationship between structure and performance is still largely an open question. Overall, there is a wide spectrum of fabrication configurations resulting in different BHJ morphologies and correspondingly different performances. Current state-of-the-art methods for assessing the performance of BHJ morphologies are either based on global quantification of morphological features or simply on visual inspection of the morphology based on experimental imaging. This makes finding optimal BHJ structures very challenging. Moreover, finding the optimal fabrication parameters to get an optimal structure is still an open question. In this paper, we propose a visual analysis framework to help answer these questions through comparative visualization and parameter space exploration for local morphology features. With our approach, we enable scientists to explore multivariate correlations between local features and performance indicators of BHJ morphologies. Our framework is built on shape-based clustering of local cubical regions of the morphology that we call patches. This enables correlating the features of clusters with intuition-based performance indicators computed from geometrical and topological features of charge paths. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
26. Objective Observer-Relative Flow Visualization in Curved Spaces for Unsteady 2D Geophysical Flows.
- Author
-
Rautek P, Mlejnek M, Beyer J, Troidl J, Pfister H, Theussl T, and Hadwiger M
- Abstract
Computing and visualizing features in fluid flow often depends on the observer, or reference frame, relative to which the input velocity field is given. A desired property of feature detectors is therefore that they are objective, meaning independent of the input reference frame. However, the standard definition of objectivity is only given for Euclidean domains and cannot be applied in curved spaces. We build on methods from mathematical physics and Riemannian geometry to generalize objectivity to curved spaces, using the powerful notion of symmetry groups as the basis for definition. From this, we develop a general mathematical framework for the objective computation of observer fields for curved spaces, relative to which other computed measures become objective. An important property of our framework is that it works intrinsically in 2D, instead of in the 3D ambient space. This enables a direct generalization of the 2D computation via optimization of observer fields in flat space to curved domains, without having to perform optimization in 3D. We specifically develop the case of unsteady 2D geophysical flows given on spheres, such as the Earth. Our observer fields in curved spaces then enable objective feature computation as well as the visualization of the time evolution of scalar and vector fields, such that the automatically computed reference frames follow moving structures like vortices in a way that makes them appear to be steady.
- Published
- 2021
- Full Text
- View/download PDF
27. Cardiac Resynchronisation Therapy in Patients with Moderate to Severe Heart Failure in Germany: A Cost-Utility Analysis of the Additional Defibrillator.
- Author
-
Hadwiger M, Frielitz FS, Eisemann N, Elsner C, Dagres N, Hindricks G, and Katalinic A
- Subjects
- Cost-Benefit Analysis, Defibrillators, Humans, Quality-Adjusted Life Years, Cardiac Resynchronization Therapy, Heart Failure therapy
- Abstract
Background: Cardiac resynchronisation therapy (CRT) is a well-established form of treatment for patients with heart failure and cardiac dyssynchrony. There are two different types of CRT devices: the biventricular pacemaker (CRT-P) and the biventricular defibrillator (CRT-D). The latter is more complex but also more expensive. For the majority of patients who are eligible for CRT, both devices are appropriate according to current guidelines. The purpose of this study was to conduct a cost-utility analysis for CRT-D compared to CRT-P from a German payer's perspective., Methods: A cohort Markov-model was developed to assess average costs and quality-adjusted life-years (QALY) for CRT-D and CRT-P. The model consisted of six stages: one for the device implementation, one for the absorbing state death, and two stages ("Stable" and "Hospital") for either a CRT device or medical therapy. The time horizon was 20 years. Deterministic and probabilistic sensitivity analyses and scenario analyses were conducted., Results: The incremental cost-effectiveness ratio (ICER) of CRT-D compared with CRT-P was €24,659 per additional QALY gained. In deterministic sensitivity analysis, the survival advantage of CRT-D to CRT-P was the most influential input parameter. In the probabilistic sensitivity analysis 96% of the simulated cases were more effective but also more costly., Conclusions: Therapy with CRT-D compared to CRT-P resulted in an additional gain of QALYs, but was more expensive. In addition, the ICER was subject to uncertainty, especially due to the uncertainty in the survival benefit. A randomised controlled trial and subgroup analyses would be desirable to further inform decision making.
- Published
- 2021
- Full Text
- View/download PDF
28. Gesundheitsbezogene soziale Probleme in der hausärztlichen Versorgung [Health-related social problems in general practice care]
- Author
-
Mews, C, Kloppe, T, Tetzlaff, B, Zimmermann, T, Hadwiger, M, Scherer, M, and von dem Knesebeck, O
- Published
- 2016
29. 3D cellular reconstruction of cortical glia and parenchymal morphometric analysis from Serial Block-Face Electron Microscopy of juvenile rat.
- Author
-
Calì C, Agus M, Kare K, Boges DJ, Lehväslaiho H, Hadwiger M, and Magistretti PJ
- Subjects
- Animals, Microscopy, Electron, Rats, Somatosensory Cortex cytology, Somatosensory Cortex diagnostic imaging, Astrocytes ultrastructure, Brain cytology, Brain diagnostic imaging, Imaging, Three-Dimensional, Microglia ultrastructure, Microscopy, Electron, Scanning, Neurons ultrastructure, Pericytes ultrastructure
- Abstract
With the rapid evolution in the automation of serial electron microscopy in life sciences, the acquisition of terabyte-sized datasets is becoming increasingly common. High resolution serial block-face imaging (SBEM) of biological tissues offers the opportunity to segment and reconstruct nanoscale structures to reveal spatial features previously inaccessible with simple, single section, two-dimensional images. In particular, we focussed here on glial cells, whose reconstruction efforts in literature are still limited, compared to neurons. We imaged a 750,000 cubic micron volume of the somatosensory cortex from a juvenile P14 rat, with 20 nm accuracy. We recognized a total of 186 cells using their nuclei, and classified them as neuronal or glial based on features of the soma and the processes. We reconstructed for the first time 4 almost complete astrocytes and neurons, 4 complete microglia and 4 complete pericytes, including their intracellular mitochondria, 186 nuclei and 213 myelinated axons. We then performed quantitative analysis on the three-dimensional models. Out of the data that we generated, we observed that neurons have larger nuclei, which correlated with their lesser density, and that astrocytes and pericytes have a higher surface to volume ratio, compared to other cell types. All reconstructed morphologies represent an important resource for computational neuroscientists, as morphological quantitative information can be inferred, to tune simulations that take into account the spatial compartmentalization of the different cell types., (Copyright © 2019 The Authors. Published by Elsevier Ltd.. All rights reserved.)
- Published
- 2019
- Full Text
- View/download PDF
30. A Novel Framework for Visual Detection and Exploration of Performance Bottlenecks in Organic Photovoltaic Solar Cell Materials.
- Author
-
Aboulhassan, A., Baum, D., Wodo, O., Ganapathysubramanian, B., Amassian, A., and Hadwiger, M.
- Subjects
PERFORMANCE of photovoltaic cells, SOLAR cells, HETEROJUNCTIONS, DATA visualization, BOTTLENECKS (Manufacturing), IMAGE segmentation
- Abstract
Current characterization methods of the so-called Bulk Heterojunction (BHJ), which is the main material of Organic Photovoltaic (OPV) solar cells, are limited to the analysis of global fabrication parameters. This reduces the efficiency of the BHJ design process, since it misses critical information about the local performance bottlenecks in the morphology of the material. In this paper, we propose a novel framework that fills this gap through visual characterization and exploration of local structure-performance correlations. We also propose a formula that correlates the structural features with the performance bottlenecks. Since research into BHJ materials is highly multidisciplinary, our framework enables a visual feedback strategy that allows scientists to build intuition about the best choices of fabrication parameters. We evaluate the usefulness of our proposed system by obtaining new BHJ characterizations. Furthermore, we show that our approach could substantially reduce the turnaround time. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
31. A Method for 3D Reconstruction and Virtual Reality Analysis of Glial and Neuronal Cells.
- Author
-
Calì C, Kare K, Agus M, Veloz Castillo MF, Boges D, Hadwiger M, and Magistretti P
- Subjects
- Virtual Reality, Imaging, Three-Dimensional methods, Microscopy, Electron methods, Neuroglia cytology, Neurons cytology
- Abstract
Serial sectioning and subsequent high-resolution imaging of biological tissue using electron microscopy (EM) allow for the segmentation and reconstruction of high-resolution imaged stacks to reveal ultrastructural patterns that could not be resolved using 2D images. Indeed, the latter might lead to a misinterpretation of morphologies, like in the case of mitochondria; the use of 3D models is, therefore, more and more common and applied to the formulation of morphology-based functional hypotheses. To date, the use of 3D models generated from light or electron image stacks makes qualitative, visual assessments, as well as quantification, more convenient to be performed directly in 3D. As these models are often extremely complex, a virtual reality environment is also important to be set up to overcome occlusion and to take full advantage of the 3D structure. Here, a step-by-step guide from image segmentation to reconstruction and analysis is described in detail.
- Published
- 2019
- Full Text
- View/download PDF
32. Determinants of Frequent Attendance of Outpatient Physicians: A Longitudinal Analysis Using the German Socio-Economic Panel (GSOEP).
- Author
-
Hadwiger M, König HH, and Hajek A
- Subjects
- Adolescent, Adult, Aged, Aged, 80 and over, Ambulatory Care Facilities statistics & numerical data, Female, Germany, Humans, Longitudinal Studies, Male, Middle Aged, Outpatients, Patient Acceptance of Health Care, Socioeconomic Factors, Young Adult, Ambulatory Care statistics & numerical data, Health Services, Health Services Needs and Demand statistics & numerical data
- Abstract
There is a lack of population-based longitudinal studies that investigate the factors leading to frequent attendance of outpatient physicians. Thus, the purpose of this study was to analyze the determinants of frequent attendance using a longitudinal approach. The dataset used comprises seven waves (2002 to 2014; n = 28,574 observations; ages ranging from 17 to 102 years) from the nationally representative German Socio-Economic Panel (GSOEP). The number of outpatient physician visits in the last three months was used to construct the dependent variable "frequent attendance". Different cut-offs were used (top 25%; top 10%; top 5%). Variable selection was based on the "behavioral model of health care use" by Andersen. Accordingly, variables were grouped into predisposing, enabling, and need characteristics as well as health behavior, which are possible determinants of frequent attendance. Conditional fixed effects logistic regressions were used. As for predisposing characteristics, regressions showed that getting married and losing one's job increased the likelihood of frequent attendance. Furthermore, age was negatively associated with the outcome measure. Enabling characteristics were not significantly associated with the outcome measure, except for the onset of the "practice fee". Decreases in mental and physical health were associated with an increased likelihood of frequent attendance. Findings were robust across different subpopulations. The findings of this study showed that need characteristics are particularly important for the onset of frequent attendance. This might indicate that people begin to use health services frequently when medically indicated., Competing Interests: The authors declare no conflict of interest.
- Published
- 2019
- Full Text
- View/download PDF
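Entry 32 above defines frequent attendance through wave-specific cut-offs and estimates conditional fixed-effects logistic regressions. The sketch below builds the top-25% indicator on toy panel data and, as a simplified stand-in for the conditional logit, estimates a within-person (demeaned) linear probability model; the data, the covariate, and the estimator choice are assumptions.

```python
# Sketch (toy panel data, simplified stand-in for a conditional fixed-effects
# logit): construct the wave-specific "frequent attendance" indicator and fit
# a within-person linear probability model by demeaning per person.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n_persons, n_waves = 3000, 7
df = pd.DataFrame({
    "person": np.repeat(np.arange(n_persons), n_waves),
    "wave": np.tile(np.arange(n_waves), n_persons),
})
df["poor_health"] = rng.integers(0, 2, len(df))          # toy need characteristic
df["visits"] = rng.poisson(2 + 3 * df["poor_health"])    # outpatient visits, last 3 months

# Frequent attendance: top 25% of visits within each wave.
cut = df.groupby("wave")["visits"].transform(lambda v: v.quantile(0.75))
df["frequent"] = (df["visits"] > cut).astype(float)

# Within-person demeaning removes all time-constant characteristics.
cols = ["frequent", "poor_health"]
demeaned = df[cols] - df.groupby("person")[cols].transform("mean")
X = demeaned[["poor_health"]].to_numpy()
y = demeaned["frequent"].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("within-person effect of poor health on frequent attendance:", beta[0])
```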
33. Advanced algorithms in medical computer graphics
- Author
-
Klein, Jan, Bartz, D., Friman, O., Hadwiger, M., Preim, B., Ritter, F., Vilanova, A., and Zachmann, G.
- Abstract
Advanced algorithms and efficient visualization techniques are of major importance in intra-operative imaging and image-guided surgery. The surgical environment is characterized by a high information flow and fast decisions, requiring efficient and intuitive presentation of complex medical data and precision in the visualization results. Regions or organs that are classified as risk structures are in this context of particular interest. This paper summarizes advanced algorithms for medical visualization with special focus on risk structures such as tumors, vascular systems and white matter fiber tracts. Algorithms and techniques employed in intra-operative situations or virtual and mixed reality simulations are discussed. Finally, the prototyping and software development process of medical visualization algorithms is addressed.
- Published
- 2008
34. High-quality two-level volume rendering of segmented data sets on consumer graphics hardware.
- Author
-
Hadwiger, M., Berger, C., and Hauser, H.
- Published
- 2003
- Full Text
- View/download PDF
35. Quality issues of hardware-accelerated high-quality filtering on PC graphics hardware
- Author
-
Hadwiger, M., Hauser, Helwig, Möller, Torsten, and Skala, Václav
- Subjects
hardware convolution, hardwarová konvoluce, graphic hardware, texture filtering, filtrování textur, grafický hardware
- Abstract
This paper summarizes several quality issues of an approach for high-quality filtering with arbitrary filter kernels on PC graphics hardware that has been presented previously. Since this method uses multiple rendering passes, it is prone to precision and range problems related to the limited precision and range of intermediate computations and the color buffer. This is especially crucial on consumer-level 3D graphics hardware, where usually only eight bits are stored per color component. We estimate the accumulated error of several error sources, such as filter kernel quantization and discretization, precision of intermediate computations, and precision and range of intermediate results stored in the color buffer. We also describe two approaches for improving precision at the expense of a higher number of rendering passes. The first approach preserves higher internal precision over multiple passes that are forced to store intermediate results in the less-precise color buffer. The second approach employs hierarchical summation for attaining higher overall precision by using the available number of bits in a hierarchical fashion. Additionally, we consider issues such as the order of rendering passes that is crucial for avoiding potential range problems, and a variant of hardware-accelerated high-quality filtering that is able to reduce the number of passes by four for filtering single-valued data, thus improving both performance and precision.
- Published
- 2003
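Entry 35 above mentions hierarchical summation as a way to gain precision when intermediate results must pass through a low-precision buffer. The sketch below imitates that effect on the CPU, with float16 standing in for the 8-bit color buffer: pairwise (tree-shaped) summation keeps partial sums at similar magnitudes and accumulates far less rounding error than a straight sequential sum.

```python
# Sketch: compare sequential and hierarchical (pairwise) summation of many
# small contributions when every intermediate result is rounded to float16,
# which stands in for a limited-precision color buffer.
import numpy as np

def sequential_sum(values):
    acc = np.float16(0.0)
    for v in values:
        acc = np.float16(acc + np.float16(v))       # every pass rounds to low precision
    return float(acc)

def hierarchical_sum(values):
    level = [np.float16(v) for v in values]
    while len(level) > 1:
        if len(level) % 2:
            level.append(np.float16(0.0))
        level = [np.float16(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return float(level[0])

rng = np.random.default_rng(0)
vals = rng.uniform(0.0, 0.01, 4096)                 # many small filter-tap contributions
exact = vals.sum()
print("sequential error:  ", abs(sequential_sum(vals) - exact))
print("hierarchical error:", abs(hierarchical_sum(vals) - exact))
```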
36. Time Line Cell Tracking for the Approximation of Lagrangian Coherent Structures with Subgrid Accuracy
- Author
-
Kuhn, A., Engelke, W., Rössl, C., Hadwiger, M., and Theisel, H.
- Published
- 2013
- Full Text
- View/download PDF
37. A Process for Digitizing and Simulating Biologically Realistic Oligocellular Networks Demonstrated for the Neuro-Glio-Vascular Ensemble.
- Author
-
Coggan JS, Calì C, Keller D, Agus M, Boges D, Abdellah M, Kare K, Lehväslaiho H, Eilemann S, Jolivet RB, Hadwiger M, Markram H, Schürmann F, and Magistretti PJ
- Abstract
One will not understand the brain without an integrated exploration of structure and function, these attributes being two sides of the same coin: together they form the currency of biological computation. Accordingly, biologically realistic models require the re-creation of the architecture of the cellular components in which biochemical reactions are contained. We describe here a process of reconstructing a functional oligocellular assembly that is responsible for energy supply management in the brain and creating a computational model of the associated biochemical and biophysical processes. The reactions that underwrite thought are both constrained by and take advantage of brain morphologies pertaining to neurons, astrocytes and the blood vessels that deliver oxygen, glucose and other nutrients. Each component of this neuro-glio-vasculature ensemble (NGV) carries out delegated tasks, as the dynamics of this system provide for each cell-type its own energy requirements while including mechanisms that allow cooperative energy transfers. Our process for recreating the ultrastructure of cellular components and modeling the reactions that describe energy flow uses an amalgam of state-of-the-art techniques, including digital reconstructions of electron micrographs, advanced data analysis tools, computational simulations and in silico visualization software. While we demonstrate this process with the NGV, it is equally well adapted to any cellular system for integrating multimodal cellular data in a coherent framework.
- Published
- 2018
- Full Text
- View/download PDF
38. Culling for Extreme-Scale Segmentation Volumes: A Hybrid Deterministic and Probabilistic Approach.
- Author
-
Beyer J, Mohammed H, Agus M, Al-Awami AK, Pfister H, and Hadwiger M
- Abstract
With the rapid increase in raw volume data sizes, such as terabyte-sized microscopy volumes, the corresponding segmentation label volumes have become extremely large as well. We focus on integer label data, whose efficient representation in memory, as well as fast random data access, pose an even greater challenge than the raw image data. Often, it is crucial to be able to rapidly identify which segments are located where, whether for empty space skipping for fast rendering, or for spatial proximity queries. We refer to this process as culling. In order to enable efficient culling of millions of labeled segments, we present a novel hybrid approach that combines deterministic and probabilistic representations of label data in a data-adaptive hierarchical data structure that we call the label list tree. In each node, we adaptively encode label data using either a probabilistic constant-time access representation for fast conservative culling, or a deterministic logarithmic-time access representation for exact queries. We choose the best data structures for representing the labels of each spatial region while building the label list tree. At run time, we further employ a novel query-adaptive culling strategy. While filtering a query down the tree, we prune it successively, and in each node adaptively select the representation that is best suited for evaluating the pruned query, depending on its size. We show an analysis of the efficiency of our approach with several large data sets from connectomics, including a brain scan with more than 13 million labeled segments, and compare our method to conventional culling approaches. Our approach achieves significant reductions in storage size as well as faster query times.
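The per-node choice between an exact and a conservative label representation can be sketched in a few lines of Python. The class names, the Bloom-filter parameters, and the EXACT_LIMIT threshold below are illustrative assumptions; the actual label list tree uses several encodings and additionally prunes queries adaptively while filtering them down the hierarchy.

    import bisect, hashlib

    class BloomFilter:
        # Probabilistic set with constant-time membership tests and no false negatives,
        # suitable for conservative culling ("might contain" vs. "definitely does not").
        def __init__(self, num_bits=1024, num_hashes=3):
            self.num_bits, self.num_hashes, self.bits = num_bits, num_hashes, 0
        def _positions(self, label):
            for i in range(self.num_hashes):
                h = hashlib.blake2b(f"{i}:{label}".encode(), digest_size=8).digest()
                yield int.from_bytes(h, "little") % self.num_bits
        def add(self, label):
            for p in self._positions(label):
                self.bits |= 1 << p
        def may_contain(self, label):
            return all((self.bits >> p) & 1 for p in self._positions(label))

    class RegionNode:
        # One node of a label-list-tree-like structure: small label sets are stored
        # exactly (sorted list, logarithmic lookup), large ones probabilistically.
        EXACT_LIMIT = 64
        def __init__(self, labels):
            labels = sorted(set(labels))
            if len(labels) <= self.EXACT_LIMIT:
                self.exact, self.filter = labels, None
            else:
                self.exact, self.filter = None, BloomFilter()
                for lab in labels:
                    self.filter.add(lab)
        def cull_query(self, query_labels):
            # Return the subset of query labels that may be present in this region;
            # an empty result means the region can be culled.
            if self.exact is not None:
                hits = []
                for q in query_labels:
                    i = bisect.bisect_left(self.exact, q)
                    if i < len(self.exact) and self.exact[i] == q:
                        hits.append(q)
                return hits
            return [q for q in query_labels if self.filter.may_contain(q)]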
- Published
- 2018
- Full Text
- View/download PDF
39. Time-Dependent Flow seen through Approximate Observer Killing Fields.
- Author
-
Hadwiger M, Mlejnek M, Theusl T, and Rautek P
- Abstract
Flow fields are usually visualized relative to a global observer, i.e., a single frame of reference. However, often no global frame can depict all flow features equally well. Likewise, objective criteria for detecting features such as vortices often use either a global reference frame, or compute a separate frame for each point in space and time. We propose the first general framework that enables choosing a smooth trade-off between these two extremes. Using global optimization to minimize specific differential geometric properties, we compute a time-dependent observer velocity field that describes the motion of a continuous field of observers adapted to the input flow. This requires developing the novel notion of an observed time derivative. While individual observers are restricted to rigid motions, overall we compute an approximate Killing field, corresponding to almost-rigid motion. This enables continuous transitions between different observers. Instead of focusing only on flow features, we furthermore develop a novel general notion of visualizing how all observers jointly perceive the input field. This in fact requires introducing the concept of an observation time, with respect to which a visualization is computed. We develop the corresponding notions of observed stream, path, streak, and time lines. For efficiency, these characteristic curves can be computed using standard approaches, by first transforming the input field accordingly. Finally, we prove that the input flow perceived by the observer field is objective. This makes derived flow features, such as vortices, objective as well.
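As background for the "almost-rigid motion" terminology: an exact Killing field satisfies the Killing equation, and an approximate Killing field can be obtained by minimizing the corresponding residual energy. The following is a generic formulation for a velocity field u on a flat domain Omega, written in standard notation; it sketches the kind of objective involved and is not claimed to be the paper's exact optimization problem.

    \nabla u + (\nabla u)^{\mathsf{T}} = 0
        \quad \text{(exact Killing field: the flow of } u \text{ is a rigid-body motion)}

    E[u] = \int_{\Omega} \bigl\lVert \nabla u + (\nabla u)^{\mathsf{T}} \bigr\rVert_F^{2} \, \mathrm{d}x
        \;\longrightarrow\; \min
        \quad \text{(approximate Killing field: almost-rigid observer motion)}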
- Published
- 2018
- Full Text
- View/download PDF
40. [Social problems in primary health care - prevalence, responses, course of action, and the need for support from a general practitioners' point of view].
- Author
-
Zimmermann T, Mews C, Kloppe T, Tetzlaff B, Hadwiger M, von dem Knesebeck O, and Scherer M
- Subjects
- Cross-Sectional Studies, Germany, Humans, Prevalence, Surveys and Questionnaires, General Practitioners psychology, Primary Health Care, Social Problems
- Abstract
Background: Patients very often utilize primary care services for health conditions related to social problems. These problems, which are not primarily medical, can severely influence the course of an illness and its treatment. Little is known about the extent to which problems like unemployment or loneliness occur in a general practice setting. Objectives: What are the most frequent health-related social problems perceived by general practitioners (GPs)? How are these problems associated with GP or practice characteristics? How do GPs deal with the social problems they perceive, and what kind of support do they need? Materials and Methods: Cross-sectional postal questionnaire survey with questions derived from "Chapter Z social problems" of the International Classification of Primary Care, 2nd edition. The questionnaire was mailed to available GP addresses in the federal states of Hamburg (n=1,602) and Schleswig-Holstein (n=1,242). Results: N=489 questionnaires (17.2 %) were analyzed. At least three times a week, GPs were consulted by patients with poverty/financial problems (53.4 %), work/unemployment problems (43.7 %), loneliness (38.7 %), and partnership issues (25.5 %). Only rarely did GPs report having perceived assault/harmful event problems (0.8 %). The highest frequency of problems was encountered by practices with a high proportion of migrant patients. Conclusions: Social problems are a common issue in routine primary care. GPs in Northwestern Germany usually try to find internal solutions for social problems but also indicated an interest in institutionalized support. A possible approach to addressing these issues is community-based, locally organized networks. (Copyright © 2018. Published by Elsevier GmbH.)
- Published
- 2018
- Full Text
- View/download PDF
41. Abstractocyte: A Visual Tool for Exploring Nanoscale Astroglial Cells.
- Author
-
Mohammed H, Al-Awami AK, Beyer J, Cali C, Magistretti P, Pfister H, and Hadwiger M
- Subjects
- Computer Graphics, Humans, Neurons cytology, Astrocytes cytology, Connectome methods, Imaging, Three-Dimensional methods, Software
- Abstract
This paper presents Abstractocyte, a system for the visual analysis of astrocytes and their relation to neurons, in nanoscale volumes of brain tissue. Astrocytes are glial cells, i.e., non-neuronal cells that support neurons and the nervous system. The study of astrocytes has immense potential for understanding brain function. However, their complex and widely-branching structure requires high-resolution electron microscopy imaging and makes visualization and analysis challenging. Furthermore, the structure and function of astrocytes is very different from neurons, and therefore requires the development of new visualization and analysis tools. With Abstractocyte, biologists can explore the morphology of astrocytes using various visual abstraction levels, while simultaneously analyzing neighboring neurons and their connectivity. We define a novel, conceptual 2D abstraction space for jointly visualizing astrocytes and neurons. Neuroscientists can choose a specific joint visualization as a point in this space. Interactively moving this point allows them to smoothly transition between different abstraction levels in an intuitive manner. In contrast to simply switching between different visualizations, this preserves the visual context and correlations throughout the transition. Users can smoothly navigate from concrete, highly-detailed 3D views to simplified and abstracted 2D views. In addition to investigating astrocytes, neurons, and their relationships, we enable the interactive analysis of the distribution of glycogen, which is of high importance to neuroscientists. We describe the design of Abstractocyte, and present three case studies in which neuroscientists have successfully used our system to assess astrocytic coverage of synapses, glycogen distribution in relation to synapses, and astrocytic-mitochondria coverage.
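A minimal sketch of how a point in a 2D abstraction space could be mapped to a smoothly blended pair of visualizations is given below; the level names and the linear parameterization are hypothetical and only illustrate the idea of continuous transitions along the two axes.

    # Hypothetical abstraction levels per structure; the names and the linear
    # parameterization are illustrative, not Abstractocyte's actual level set.
    ASTROCYTE_LEVELS = ["3D surface mesh", "3D skeleton", "2D projected skeleton", "2D glyph"]
    NEURON_LEVELS    = ["3D surface mesh", "3D skeleton", "2D projected skeleton", "2D glyph"]

    def joint_visualization(x, y):
        # Map a point (x, y) in the unit square (the 2D abstraction space) to a pair
        # of adjacent abstraction levels per structure plus a blend weight, so that
        # moving the point transitions smoothly instead of switching views abruptly.
        def locate(t, levels):
            s = min(max(t, 0.0), 1.0) * (len(levels) - 1)
            i = min(int(s), len(levels) - 2)
            return levels[i], levels[i + 1], s - i      # (from level, to level, blend)
        return {"astrocyte": locate(x, ASTROCYTE_LEVELS),
                "neuron":    locate(y, NEURON_LEVELS)}

    print(joint_visualization(0.3, 0.9))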
- Published
- 2018
- Full Text
- View/download PDF
42. Screen-Space Normal Distribution Function Caching for Consistent Multi-Resolution Rendering of Large Particle Data.
- Author
-
Ibrahim M, Wickenhauser P, Rautek P, Reina G, and Hadwiger M
- Abstract
Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
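The following Python sketch illustrates the S-NDF idea of binning the normals that fall into one pixel and re-lighting from those bins alone. The hemisphere binning, the 8x4 bin resolution, and the purely diffuse shading model are our assumptions; the actual method stores and filters S-NDFs per pixel on the GPU and caches them in a screen-space mipmap.

    import math

    AZ_BINS, EL_BINS = 8, 4   # coarse directional bins over the upper hemisphere

    def bin_index(normal):
        # Map a unit normal to an (azimuth, elevation) bin.
        x, y, z = normal
        az = (math.atan2(y, x) + math.pi) / (2 * math.pi)
        el = max(0.0, z)
        return (min(int(az * AZ_BINS), AZ_BINS - 1),
                min(int(el * EL_BINS), EL_BINS - 1))

    def bin_center(i, j):
        # Representative direction of a bin, used as the normal during re-lighting.
        az = (i + 0.5) / AZ_BINS * 2 * math.pi - math.pi
        el = (j + 0.5) / EL_BINS
        r = math.sqrt(max(0.0, 1.0 - el * el))
        return (r * math.cos(az), r * math.sin(az), el)

    def build_sndf(normals):
        # Accumulate the normals of all particle fragments that map to one pixel.
        ndf = {}
        for n in normals:
            key = bin_index(n)
            ndf[key] = ndf.get(key, 0) + 1
        return ndf

    def relight(ndf, light_dir):
        # Diffuse re-lighting from the cached NDF alone, without touching particles.
        total = sum(ndf.values())
        lit = 0.0
        for (i, j), count in ndf.items():
            nx, ny, nz = bin_center(i, j)
            lx, ly, lz = light_dir
            lit += count * max(0.0, nx * lx + ny * ly + nz * lz)
        return lit / total if total else 0.0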
- Published
- 2018
- Full Text
- View/download PDF
43. SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering.
- Author
-
Hadwiger M, Al-Awami AK, Beyer J, Agus M, and Pfister H
- Abstract
Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
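A CPU-side sketch of the ray-segment-list idea for one pixel follows; in SparseLeap the segment lists are generated by rasterizing occupancy geometry on the GPU, whereas here we simply merge per-box (t_enter, t_exit) intervals and then sample only inside the resulting non-empty segments. The function names and the emission-only compositing are ours.

    def build_ray_segments(box_intervals):
        # Merge the (t_enter, t_exit) intervals of all occupied bounding boxes that
        # project onto one pixel into a minimal list of non-empty ray segments.
        segments = []
        for t0, t1 in sorted(box_intervals):
            if segments and t0 <= segments[-1][1]:
                segments[-1][1] = max(segments[-1][1], t1)   # overlapping/adjacent: merge
            else:
                segments.append([t0, t1])
        return segments

    def raycast(sample, segments, step):
        # Sample only inside non-empty segments, leaping over the empty space in
        # between; no hierarchy traversal happens during this stage.
        accumulated = 0.0
        for t0, t1 in segments:
            t = t0
            while t < t1:
                accumulated += sample(t) * step   # simple emission-only compositing
                t += step
        return accumulated

    # Example: three occupied boxes along a ray, two of them overlapping.
    segments = build_ray_segments([(0.10, 0.25), (0.20, 0.40), (0.70, 0.80)])
    print(segments)                                # [[0.1, 0.4], [0.7, 0.8]]
    print(raycast(lambda t: 1.0, segments, 0.01))  # roughly the covered length, 0.4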
- Published
- 2018
- Full Text
- View/download PDF
44. SeiVis: An Interactive Visual Subsurface Modeling Application
- Author
-
Hollt, T., primary, Freiler, W., additional, Gschwantner, F., additional, Doleisch, H., additional, Heinemann, G., additional, and Hadwiger, M., additional
- Published
- 2012
- Full Text
- View/download PDF
45. Product Quality of Parenteral Vancomycin Products in the United States
- Author
-
Nambiar, S., primary, Madurawe, R. D., additional, Zuk, S. M., additional, Khan, S. R., additional, Ellison, C. D., additional, Faustino, P. J., additional, Mans, D. J., additional, Trehy, M. L., additional, Hadwiger, M. E., additional, Boyne, M. T., additional, Biswas, K., additional, and Cox, E. M., additional
- Published
- 2012
- Full Text
- View/download PDF
46. Interactive Volume Visualization of General Polyhedral Grids
- Author
-
Muigg, P., primary, Hadwiger, M., additional, Doleisch, H., additional, and Groller, E., additional
- Published
- 2011
- Full Text
- View/download PDF
47. Multisource Reverse-time Migration and Full-waveform Inversion on a GPGPU
- Author
-
Boonyasiriwat, C., primary, Zhan, G., additional, Hadwiger, M., additional, Srinivasan, M., additional, and T. Schuster, G., additional
- Published
- 2010
- Full Text
- View/download PDF
48. A Visual Approach to Efficient Analysis and Quantification of Ductile Iron and Reinforced Sprayed Concrete
- Author
-
Fritz, L., primary, Hadwiger, M., additional, Geier, G., additional, Pittino, G., additional, and Groller, M.E., additional
- Published
- 2009
- Full Text
- View/download PDF
49. Interactive Volume Exploration for Feature Detection and Quantification in Industrial CT Data
- Author
-
Hadwiger, M., primary, Fritz, L., additional, Rezk-Salama, C., additional, Hollt, T., additional, Geier, G., additional, and Pabel, T., additional
- Published
- 2008
- Full Text
- View/download PDF
50. High-Quality Multimodal Volume Rendering for Preoperative Planning of Neurosurgical Interventions
- Author
-
Beyer, J., primary, Hadwiger, M., additional, Wolfsberger, S., additional, and Buhler, K., additional
- Published
- 2007
- Full Text
- View/download PDF