69 results for "Buhmann JM"
Search Results
2. Fully Automatic Registration of Electron Microscopy Images with High and Low Resolution
- Author
- Kaynig V, Fischer B, Wepf R, and Buhmann JM
- Published
- 2007
- Full Text
- View/download PDF
3. Fundamentals of Arthroscopic Surgery Training and beyond: a reinforcement learning exploration and benchmark.
- Author
- Ovinnikov I, Beuret A, Cavaliere F, and Buhmann JM
- Subjects
- Humans, Simulation Training methods, Reinforcement, Psychology, Education, Medical, Graduate methods, Arthroscopy education, Clinical Competence, Benchmarking
- Abstract
Purpose: This work presents FASTRL, a benchmark set of instrument manipulation tasks adapted to the domain of reinforcement learning and used in simulated surgical training. This benchmark enables and supports the design and training of human-centric reinforcement learning agents which assist and evaluate human trainees in surgical practice. Methods: Simulation tasks from the Fundamentals of Arthroscopic Surgery Training (FAST) program are adapted to the reinforcement learning setting for the purpose of training virtual agents that are capable of providing assistance and scoring to the surgical trainees. A skill performance assessment protocol is presented based on the trained virtual agents. Results: The proposed benchmark suite presents an API for training reinforcement learning agents in the context of arthroscopic skill training. The evaluation scheme based on both heuristic and learned reward functions robustly recovers the ground truth ranking on a diverse test set of human trajectories. Conclusion: The presented benchmark enables the exploration of a novel reinforcement learning-based approach to skill performance assessment and in-procedure assistance for simulated surgical training scenarios. The evaluation protocol based on the learned reward model demonstrates potential for evaluating the performance of surgical trainees in simulation. (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
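The FASTRL tasks themselves are not reproduced in this abstract; as a rough sketch of how an instrument-manipulation task can be cast into the standard reinforcement-learning interface such a benchmark describes, consider the following toy environment. The class name, state variables, and reward shaping here are all hypothetical illustrations, not FASTRL's actual API:

```python
import math

class ProbeReachTask:
    """Toy stand-in for an arthroscopic instrument-manipulation task:
    move a probe tip in 2D toward a target, in a gym-style interface.
    Every detail is illustrative, not the FASTRL benchmark API."""

    def __init__(self, target=(0.5, 0.5), step_size=0.1, max_steps=50):
        self.target = target
        self.step_size = step_size
        self.max_steps = max_steps

    def reset(self):
        self.pos = [0.0, 0.0]
        self.t = 0
        return tuple(self.pos)

    def step(self, action):
        # action: (dx, dy), clipped to the instrument's maximum step size
        dx, dy = (max(-self.step_size, min(self.step_size, a)) for a in action)
        self.pos[0] += dx
        self.pos[1] += dy
        self.t += 1
        dist = math.dist(self.pos, self.target)
        reward = -dist                      # dense shaping: closer is better
        done = dist < 0.05 or self.t >= self.max_steps
        return tuple(self.pos), reward, done, {}

env = ProbeReachTask()
obs = env.reset()
done = False
while not done:                             # greedy scripted "trainee" policy
    obs, reward, done, _ = env.step((env.target[0] - obs[0],
                                     env.target[1] - obs[1]))
```

A learned reward model for trainee scoring, as described in the abstract, would replace the hand-written shaping term above.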
4. A unified generation-registration framework for improved MR-based CT synthesis in proton therapy.
- Author
- Li X, Bellotti R, Bachtiary B, Hrbacek J, Weber DC, Lomax AJ, Buhmann JM, and Zhang Y
- Abstract
Background: The use of magnetic resonance (MR) imaging for proton therapy treatment planning is gaining attention as a highly effective method for guidance. At the core of this approach is the generation of computed tomography (CT) images from MR scans. However, the critical issue in this process is accurately aligning the MR and CT images, a task that becomes particularly challenging in frequently moving body areas, such as the head-and-neck. Misalignments in these images can result in blurred synthetic CT (sCT) images, adversely affecting the precision and effectiveness of the treatment planning. Purpose: This study introduces a novel network that cohesively unifies image generation and registration processes to enhance the quality and anatomical fidelity of sCTs derived from better-aligned MR images. Methods: The approach synergizes a generation network (G) with a deformable registration network (R), optimizing them jointly in MR-to-CT synthesis. This goal is achieved by alternately minimizing the discrepancies between the generated/registered CT images and their corresponding reference CT counterparts. The generation network employs a UNet architecture, while the registration network leverages an implicit neural representation (INR) of the displacement vector fields (DVFs). We validated this method on a dataset comprising 60 head-and-neck patients, reserving 12 cases for holdout testing. Results: Compared to the baseline Pix2Pix method with an MAE of 124.95 ± 30.74 HU, the proposed technique achieved an MAE of 80.98 ± 7.55 HU. The unified translation-registration network produced sharper and more anatomically congruent outputs, showing superior efficacy in converting MR images to sCTs. Additionally, from a dosimetric perspective, the plan recalculated on the resulting sCTs showed a markedly reduced discrepancy to the reference proton plans. Conclusions: This study conclusively demonstrates that a holistic MR-based CT synthesis approach, integrating both image-to-image translation and deformable registration, significantly improves the precision and quality of sCT generation, particularly for challenging body areas with varied anatomic changes between corresponding MR and CT scans. (© 2024 The Author(s). Medical Physics published by Wiley Periodicals LLC on behalf of American Association of Physicists in Medicine.)
- Published
- 2024
- Full Text
- View/download PDF
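The alternating minimization idea at the heart of the Methods section above can be caricatured with scalars: let a gain g stand in for the generation network and an offset r for the registration network, and update each in turn against a shared reference. This is only a toy illustration of the alternating scheme, not the authors' UNet/INR architecture; the data and learning rate are invented:

```python
# Toy alternating minimization: a "generator" gain g and a "registration"
# offset r are jointly fitted to reference targets t_i = 2*x_i + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ts = [2 * x + 1 for x in xs]

g, r, lr = 0.0, 0.0, 0.05
for epoch in range(500):
    # Step 1: update the generation parameter with registration fixed
    grad_g = sum(2 * (g * x + r - t) * x for x, t in zip(xs, ts)) / len(xs)
    g -= lr * grad_g
    # Step 2: update the registration parameter with generation fixed
    grad_r = sum(2 * (g * x + r - t) for x, t in zip(xs, ts)) / len(xs)
    r -= lr * grad_r

# the alternating updates recover g close to 2 and r close to 1
```

In the paper both "parameters" are deep networks and the discrepancy is an image-space loss, but the alternation pattern is the same.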
5. Predicting vital sign deviations during surgery from patient monitoring data: developing and validating single-stream deep learning models.
- Author
- Dubatovka A, Nöthiger CB, Spahn DR, Buhmann JM, Roche TR, and Tscholl DW
- Subjects
- Humans, Reproducibility of Results, Deep Learning, Vital Signs, Monitoring, Intraoperative methods
- Published
- 2024
- Full Text
- View/download PDF
6. ChromaX: a fast and scalable breeding program simulator.
- Author
- Younis OG, Turchetta M, Ariza Suarez D, Yates S, Studer B, Athanasiadis IN, Krause A, Buhmann JM, and Corinzia L
- Subjects
- Genome, Gene Library, Computer Simulation, Software, Genomics
- Abstract
Summary: ChromaX is a Python library that enables the simulation of genetic recombination, genomic estimated breeding value calculations, and selection processes. By utilizing GPU processing, it can perform these simulations up to two orders of magnitude faster than existing tools on standard hardware. This offers breeders and scientists new opportunities to simulate genetic gain and optimize breeding schemes. Availability and Implementation: The documentation is available at https://chromax.readthedocs.io. The code is available at https://github.com/kora-labs/chromax. (© The Author(s) 2023. Published by Oxford University Press.)
- Published
- 2023
- Full Text
- View/download PDF
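The central quantity mentioned in the summary, the genomic estimated breeding value (GEBV), is a dosage-weighted sum of per-marker effects; the library computes it vectorized on GPU, but the arithmetic can be sketched in plain Python. The marker effects and genotypes below are invented for illustration; see https://chromax.readthedocs.io for the actual API:

```python
def gebv(genotype, marker_effects):
    """Genomic estimated breeding value: dosage-weighted sum of marker effects.
    genotype: allele dosages per marker (0, 1, or 2 for a diploid)."""
    return sum(dose * effect for dose, effect in zip(genotype, marker_effects))

effects = [0.4, -0.1, 0.25]                   # invented per-marker effects
population = [[2, 0, 1], [0, 2, 2], [1, 1, 0]]

# truncation selection: rank candidates by GEBV and keep the best as a parent
ranked = sorted(population, key=lambda g: gebv(g, effects), reverse=True)
best = ranked[0]
```

Simulating a breeding scheme then amounts to iterating selection, crossing, and recombination over generations.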
7. Preinterventional Third-Molar Assessment Using Robust Machine Learning.
- Author
- Carvalho JS, Lotz M, Rubi L, Unger S, Pfister T, Buhmann JM, and Stadlinger B
- Subjects
- Humans, Artificial Intelligence, Dentists, Tooth Extraction, Mandible diagnostic imaging, Mandible surgery, Professional Role, Molar, Machine Learning, Radiography, Panoramic methods, Cone-Beam Computed Tomography, Mandibular Nerve diagnostic imaging, Molar, Third diagnostic imaging, Molar, Third surgery, Tooth, Impacted surgery
- Abstract
Machine learning (ML) models, especially deep neural networks, are increasingly being used for the analysis of medical images and as a supporting tool for clinical decision-making. In this study, we propose an artificial intelligence system to facilitate dental decision-making for the removal of mandibular third molars (M3M) based on 2-dimensional orthopantomograms and the risk assessment of such a procedure. A total of 4,516 panoramic radiographic images collected at the Center of Dental Medicine at the University of Zurich, Switzerland, were used for training the ML model. After image preparation and preprocessing, a spatially dependent U-Net was employed to detect and retrieve the region of the M3M and the inferior alveolar nerve (IAN). Image patches identified to contain an M3M were automatically processed by a deep neural network for the classification of M3M superimposition over the IAN (task 1) and M3M root development (task 2). A control evaluation set of 120 images, collected from a different data source than the training data and labeled by 5 dental practitioners, was leveraged to reliably evaluate model performance. By 10-fold cross-validation, we achieved accuracy values of 0.94 and 0.93 for the M3M-IAN superimposition task and the M3M root development task, respectively, and accuracies of 0.9 and 0.87 when evaluated on the control data set, using a ResNet-101 trained in a semisupervised fashion. Matthews correlation coefficient values of 0.82 and 0.75 for task 1 and task 2, evaluated on the control data set, indicate robust generalization of our model. Depending on the different label combinations of task 1 and task 2, we propose a diagnostic table that suggests whether additional imaging via 3-dimensional cone-beam computed tomography is advisable. Ultimately, computer-aided decision-making tools benefit clinical practice by enabling efficient and risk-reduced decision-making and by supporting less experienced practitioners before the surgical removal of the M3M. Competing Interests: The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
- Published
- 2023
- Full Text
- View/download PDF
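The Matthews correlation coefficient (MCC) quoted above (0.82 and 0.75) is a balanced summary of a binary confusion matrix, more robust to class imbalance than plain accuracy. A minimal implementation, with the caveat that the counts shown are made up rather than taken from the study:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion-matrix counts.
    Ranges from -1 (total disagreement) through 0 (chance) to 1 (perfect)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# invented counts for, e.g., the M3M-IAN superimposition task
score = mcc(tp=90, tn=85, fp=10, fn=15)
```

Because the numerator couples all four cells, a classifier that simply predicts the majority class scores near zero even when its accuracy looks high.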
8. Weakly supervised inference of personalized heart meshes based on echocardiography videos.
- Author
- Laumer F, Amrani M, Manduchi L, Beuret A, Rubi L, Dubatovka A, Matter CM, and Buhmann JM
- Subjects
- Humans, Echocardiography
- Abstract
Echocardiography provides recordings of the heart chamber size and function and is a central tool for non-invasive diagnosis of heart diseases. It produces high-dimensional video data with substantial stochasticity in the measurements, which frequently prove difficult to interpret. To address this challenge, we propose an automated framework to enable the inference of a high-resolution personalized 4D (3D plus time) surface mesh of the cardiac structures from 2D echocardiography video data. Inferring such shape models arises as a key step towards accurate personalized simulation that enables an automated assessment of the cardiac chamber morphology and function. The proposed method is trained using only unpaired echocardiography and heart mesh videos to find a mapping between these distinct visual domains in a self-supervised manner. The resulting model produces personalized 4D heart meshes, which exhibit a high correspondence with the input echocardiography videos. Furthermore, the 4D heart meshes enable the automatic extraction of echocardiographic variables, such as ejection fraction, myocardial muscle mass, and volumetric changes of chamber volumes over time with high temporal resolution. Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. (Copyright © 2022 The Author(s). Published by Elsevier B.V. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
9. Assessment of Artificial Intelligence in Echocardiography Diagnostics in Differentiating Takotsubo Syndrome From Myocardial Infarction.
- Author
- Laumer F, Di Vece D, Cammann VL, Würdinger M, Petkova V, Schönberger M, Schönberger A, Mercier JC, Niederseer D, Seifert B, Schwyzer M, Burkholz R, Corinzia L, Becker AS, Scherff F, Brouwers S, Pazhenkottil AP, Dougoud S, Messerli M, Tanner FC, Fischer T, Delgado V, Schulze PC, Hauck C, Maier LS, Nguyen H, Surikow SY, Horowitz J, Liu K, Citro R, Bax J, Ruschitzka F, Ghadri JR, Buhmann JM, and Templin C
- Subjects
- Aged, Artificial Intelligence, Cohort Studies, Echocardiography, Female, Humans, Male, Myocardial Infarction diagnostic imaging, Takotsubo Cardiomyopathy diagnostic imaging
- Abstract
Importance: Machine learning algorithms enable the automatic classification of cardiovascular diseases based on raw cardiac ultrasound imaging data. However, the utility of machine learning in distinguishing between takotsubo syndrome (TTS) and acute myocardial infarction (AMI) has not been studied. Objectives: To assess the utility of machine learning systems for automatic discrimination of TTS and AMI. Design, Settings, and Participants: This cohort study included clinical data and transthoracic echocardiogram results of patients with AMI from the Zurich Acute Coronary Syndrome Registry and patients with TTS obtained from 7 cardiovascular centers in the International Takotsubo Registry. Data from the validation cohort were obtained from April 2011 to February 2017. Data from the training cohort were obtained from March 2017 to May 2019. Data were analyzed from September 2019 to June 2021. Exposure: Transthoracic echocardiograms of 224 patients with TTS and 224 patients with AMI were analyzed. Main Outcomes and Measures: Area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity of the machine learning system, evaluated on an independent data set and compared with 4 practicing cardiologists. Echocardiography videos of 228 patients were used in the development and training of a deep learning model. The performance of the automated echocardiogram video analysis method was evaluated on an independent data set consisting of 220 patients. Data were matched according to age, sex, and ST-segment elevation/non-ST-segment elevation (1 patient with AMI for each patient with TTS). Predictions were compared with echocardiography-based interpretations from 4 practicing cardiologists in terms of sensitivity, specificity, and AUC calculated from confidence scores concerning their binary diagnosis. Results: In this cohort study, apical 2-chamber and 4-chamber echocardiographic views of 110 patients with TTS (mean [SD] age, 68.4 [12.1] years; 103 [90.4%] were female) and 110 patients with AMI (mean [SD] age, 69.1 [12.2] years; 103 [90.4%] were female) from an independent data set were evaluated. This approach achieved a mean (SD) AUC of 0.79 (0.01) with an overall accuracy of 74.8% (0.7%). In comparison, cardiologists achieved a mean (SD) AUC of 0.71 (0.03) and accuracy of 64.4% (3.5%) on the same data set. In a subanalysis based on 61 patients with apical TTS and 56 patients with AMI due to occlusion of the left anterior descending coronary artery, the model achieved a mean (SD) AUC score of 0.84 (0.01) and an accuracy of 78.6% (1.6%), outperforming the 4 practicing cardiologists (mean [SD] AUC, 0.72 [0.02]; accuracy, 66.9% [2.8%]). Conclusions and Relevance: In this cohort study, a real-time system for fully automated interpretation of echocardiogram videos was established and trained to differentiate TTS from AMI. While this system was more accurate than cardiologists in echocardiography-based disease classification, further studies are warranted for clinical application.
- Published
- 2022
- Full Text
- View/download PDF
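The cardiologist comparison above reports AUCs "calculated from confidence scores"; the empirical AUC is simply the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (ties count half). A minimal sketch with invented scores, not the study's data:

```python
def auc(pos_scores, neg_scores):
    """Empirical AUC: fraction of positive/negative pairs ranked correctly,
    equivalent to a normalized Mann-Whitney U statistic."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# invented confidence scores for "TTS" (positive) vs "AMI" (negative)
tts = [0.9, 0.8, 0.4]
ami = [0.7, 0.3, 0.2]
result = auc(tts, ami)   # 8 of 9 pairs correctly ordered
```

Unlike accuracy, this estimate needs no decision threshold, which is why it suits comparing graded confidence ratings from model and clinicians alike.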
10. Automatic Detection of Atrial Fibrillation from Single-Lead ECG Using Deep Learning of the Cardiac Cycle.
- Author
- Dubatovka A and Buhmann JM
- Abstract
Objective and Impact Statement: Atrial fibrillation (AF) is a serious medical condition that requires effective and timely treatment to prevent stroke. We explore deep neural networks (DNNs) for learning cardiac cycles and reliably detecting AF from single-lead electrocardiogram (ECG) signals. Introduction: Electrocardiograms are widely used for diagnosis of various cardiac dysfunctions, including AF. The large volume of collected ECGs and recent algorithmic advances for processing time-series data with DNNs substantially improve the accuracy of AF diagnosis. DNNs, however, are often designed as general-purpose black-box models and lack interpretability of their decisions. Methods: We design a three-step pipeline for AF detection from ECGs. First, a recording is split into a sequence of individual heartbeats based on R-peak detection, and each heartbeat is encoded using a DNN that extracts interpretable features by disentangling the duration of a heartbeat from its shape. Second, the sequence of heartbeat codes is passed to a DNN to build a signal-level representation capturing heart rhythm. Third, the signal representations are passed to a DNN for detecting AF. Results: Our approach demonstrates superior performance to existing ECG analysis methods on AF detection. Additionally, the method provides interpretations of the features extracted from heartbeats by DNNs and enables cardiologists to study ECGs in terms of the shapes of individual heartbeats and the rhythm of whole signals. Conclusion: By considering ECGs on two levels and employing DNNs for modelling of cardiac cycles, this work presents a method for reliable detection of AF from single-lead ECGs. Competing Interests: The authors declare that there is no conflict of interest regarding the publication of this article. (Copyright © 2022 Alina Dubatovka and Joachim M. Buhmann.)
- Published
- 2022
- Full Text
- View/download PDF
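Step one of the pipeline above, splitting a recording into individual heartbeats at detected R-peaks, reduces to slicing the signal between consecutive peak indices. A naive sketch in which R-peak detection is assumed to have been run already; the signal and peak indices are made up:

```python
def split_into_beats(ecg, r_peaks):
    """Split an ECG sample sequence into individual heartbeats,
    one segment per R-R interval."""
    return [ecg[a:b] for a, b in zip(r_peaks, r_peaks[1:])]

signal = list(range(100))        # stand-in for ECG samples
peaks = [3, 30, 62, 95]          # stand-in R-peak sample indices
beats = split_into_beats(signal, peaks)

# per-beat length is exactly the "duration" feature the encoder
# disentangles from beat shape
durations = [len(b) for b in beats]
```

Variable beat lengths are why the paper's encoder separates duration from shape before the beats are fed to the sequence model.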
11. Natural Age-Related Slow-Wave Sleep Alterations Onset Prematurely in the Tg2576 Mouse Model of Alzheimer's Disease.
- Author
- Kollarik S, Dias I, Moreira CG, Bimbiryte D, Miladinovic D, Buhmann JM, Baumann CR, and Noain D
- Subjects
- Mice, Animals, Mice, Transgenic, Sleep genetics, Electroencephalography, Disease Models, Animal, Plaque, Amyloid, Alzheimer Disease complications, Alzheimer Disease genetics, Sleep, Slow-Wave
- Abstract
Introduction: Sleep insufficiency or decreased sleep quality has been associated with Alzheimer's disease (AD) already in its preclinical stages. Whether such traits are also present in rodent models of the disease has been poorly addressed, hampering the preclinical exploration of sleep-based therapeutic interventions for AD. Methods: We investigated the age-dependent sleep-wake phenotype of a widely used mouse model of AD, the Tg2576 line. We implanted electroencephalography/electromyography headpieces into 6-month-old (plaque-free, n = 10) and 11-month-old (moderately plaque-burdened, n = 10) Tg2576 mice and age-matched wild-type (WT; 6 months old, n = 10; 11 months old, n = 10) mice and recorded vigilance states for 24 h. Results: Tg2576 mice exhibited significantly increased wakefulness and decreased non-rapid eye movement sleep over a 24-h period compared to WT mice at 6 but not at 11 months of age. Concomitantly, power in the delta frequency band was decreased in 6-month-old Tg2576 mice in comparison to age-matched WT controls, rendering a reduced slow-wave energy phenotype in the young mutants. The lack of genotype-related differences in the overall 24-h sleep-wake phenotype at 11 months of age appears to be the result of changes in sleep-wake characteristics accompanying the healthy aging of WT mice. Conclusion: Our results therefore indicate that at the plaque-free disease stage, Tg2576 mice already show a diminished sleep quality resembling that of aged healthy controls, suggesting an early onset of sleep-wake deterioration in murine AD. Whether such disturbances in the natural patterns of sleep could in turn worsen disease progression warrants further exploration. (© 2022 The Author(s). Published by S. Karger AG, Basel.)
- Published
- 2022
- Full Text
- View/download PDF
12. Self-supervised representation learning for surgical activity recognition.
- Author
- Paysan D, Haug L, Bajka M, Oelhafen M, and Buhmann JM
- Subjects
- Humans, Supervised Machine Learning
- Abstract
Purpose: Virtual reality-based simulators have the potential to become an essential part of surgical education. To make full use of this potential, they must be able to automatically recognize activities performed by users and assess them. Since annotations of trajectories by human experts are expensive, there is a need for methods that can learn to recognize surgical activities in a data-efficient way. Methods: We use self-supervised training of deep encoder-decoder architectures to learn representations of surgical trajectories from video data. These representations allow for semi-automatic extraction of features that capture information about semantically important events in the trajectories. Such features are processed as inputs of an unsupervised surgical activity recognition pipeline. Results: Our experiments document that the performance of hidden semi-Markov models used for recognizing activities in a simulated myomectomy scenario benefits from using features extracted from representations learned while training a deep encoder-decoder network on the task of predicting the remaining surgery progress. Conclusion: Our work is an important first step towards making efficient use of features obtained from deep representation learning for surgical activity recognition in settings where only a small fraction of the existing data is annotated by human domain experts and where those annotations are potentially incomplete. (© 2021. The Author(s).)
- Published
- 2021
- Full Text
- View/download PDF
13. Improving 1-year mortality prediction in ACS patients using machine learning.
- Author
- Weichwald S, Candreva A, Burkholz R, Klingenberg R, Räber L, Heg D, Manka R, Gencer B, Mach F, Nanchen D, Rodondi N, Windecker S, Laaksonen R, Hazen SL, von Eckardstein A, Ruschitzka F, Lüscher TF, Buhmann JM, and Matter CM
- Subjects
- Humans, Machine Learning, Prognosis, Risk Assessment, Risk Factors, Stroke Volume, Ventricular Function, Left, Acute Coronary Syndrome diagnosis
- Abstract
Background: The Global Registry of Acute Coronary Events (GRACE) score is an established clinical risk stratification tool for patients with acute coronary syndromes (ACS). We developed and internally validated a model for 1-year all-cause mortality prediction in ACS patients. Methods: Between 2009 and 2012, 2'168 ACS patients were enrolled into the Swiss SPUM-ACS Cohort. Biomarkers were determined in 1'892 patients and follow-up was achieved in 95.8% of patients. 1-year all-cause mortality was 4.3% (n = 80). In our analysis we considered all linear models using combinations of 8 out of 56 variables to predict 1-year all-cause mortality and to derive a variable ranking. Results: 1.3% of the 1'420'494'075 models outperformed the GRACE 2.0 Score. The SPUM-ACS Score includes age, plasma glucose, NT-proBNP, left ventricular ejection fraction (LVEF), Killip class, history of peripheral artery disease (PAD), malignancy, and cardio-pulmonary resuscitation. For predicting 1-year mortality after ACS, the SPUM-ACS Score outperformed the GRACE 2.0 Score, achieving a 5-fold cross-validated AUC of 0.81 (95% CI 0.78-0.84). Ranking individual features according to their importance across all multivariate models revealed age, trimethylamine N-oxide, creatinine, history of PAD or malignancy, LVEF, and haemoglobin as the most relevant variables for predicting 1-year mortality. Conclusions: The variable ranking and the selection for the SPUM-ACS Score highlight the relevance of age, markers of heart failure, and comorbidities for the prediction of all-cause death. Before application, this score needs to be externally validated and refined in larger cohorts. Clinical Trial Registration: NCT01000701. (Published on behalf of the European Society of Cardiology. All rights reserved. © The Author(s) 2021. For permissions, please email: journals.permissions@oup.com.)
- Published
- 2021
- Full Text
- View/download PDF
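The 1'420'494'075 candidate models quoted in the abstract correspond exactly to the number of ways to choose 8 variables out of 56, which is easy to check:

```python
import math

# One linear model per 8-variable subset of the 56 candidate variables
n_models = math.comb(56, 8)
print(n_models)  # 1420494075
```

Evaluating every subset exhaustively, as the study did, is what makes the resulting variable ranking free of greedy-selection bias.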
14. Neural collaborative filtering for unsupervised mitral valve segmentation in echocardiography.
- Author
- Corinzia L, Laumer F, Candreva A, Taramasso M, Maisano F, and Buhmann JM
- Subjects
- Algorithms, Echocardiography, Humans, Machine Learning, Mitral Valve diagnostic imaging, Mitral Valve Insufficiency diagnostic imaging
- Abstract
The segmentation of the mitral valve annulus and leaflets constitutes a crucial first step towards a machine learning pipeline that can support physicians in performing multiple tasks, e.g. diagnosis of mitral valve diseases, surgical planning, and intraoperative procedures. Current methods for mitral valve segmentation on 2D echocardiography videos require extensive interaction with annotators and perform poorly on low-quality and noisy videos. We propose an automated and unsupervised method for mitral valve segmentation based on a low-dimensional embedding of the echocardiography videos using neural network collaborative filtering. The method is evaluated on a collection of echocardiography videos of patients with a variety of mitral valve diseases, and additionally on an independent test cohort. It outperforms state-of-the-art unsupervised and supervised methods on low-quality videos or in the case of sparse annotation. (Copyright © 2020 Elsevier B.V. All rights reserved.)
- Published
- 2020
- Full Text
- View/download PDF
15. SPINDLE: End-to-end learning from EEG/EMG to extrapolate animal sleep scoring across experimental settings, labs and species.
- Author
- Miladinović Đ, Muheim C, Bauer S, Spinnler A, Noain D, Bandarabadi M, Gallusser B, Krummenacher G, Baumann C, Adamantidis A, Brown SA, and Buhmann JM
- Subjects
- Animals, Computational Biology, Humans, Machine Learning, Mice, Models, Animal, Rats, Wakefulness physiology, Electroencephalography, Electromyography, Neural Networks, Computer, Signal Processing, Computer-Assisted, Sleep physiology
- Abstract
Understanding sleep and its perturbation by environment, mutation, or medication remains a central problem in biomedical research. Its examination in animal models rests on brain state analysis via classification of electroencephalographic (EEG) signatures. Traditionally, these states are classified by trained human experts by visual inspection of raw EEG recordings, which is a laborious task prone to inter-individual variability. Recently, machine learning approaches have been developed to automate this process, but their generalization capabilities are often insufficient, especially across animals from different experimental studies. To address this challenge, we crafted a convolutional neural network-based architecture to produce domain-invariant predictions, and furthermore integrated a hidden Markov model to constrain state dynamics based upon known sleep physiology. Our method, which we named SPINDLE (Sleep Phase Identification with Neural networks for Domain-invariant LEarning), was validated using data of four animal cohorts from three independent sleep labs, and achieved average agreement rates of 99%, 98%, 93%, and 97% with scorings from five human experts from different labs, essentially duplicating human capability. It generalized across different genetic mutants, surgery procedures, recording setups, and even different species, far exceeding state-of-the-art solutions that we tested in parallel on this task. Moreover, we show that these scored data can be processed for downstream analyses identical to those from human-scored data, in particular by demonstrating the ability to detect mutation-induced sleep alteration. We provide the scientific community free usage of SPINDLE and benchmarking datasets as an online server at https://sleeplearning.ethz.ch. Our aim is to catalyze high-throughput and well-standardized experimental studies in order to improve our understanding of sleep. Competing Interests: The authors have declared that no competing interests exist.
- Published
- 2019
- Full Text
- View/download PDF
16. Pipeline validation for connectivity-based cortex parcellation.
- Author
- Gorbach NS, Tittgemeyer M, and Buhmann JM
- Subjects
- Computer Simulation, Diffusion Magnetic Resonance Imaging standards, Echo-Planar Imaging methods, Humans, Image Processing, Computer-Assisted standards, Neuroimaging standards, Reproducibility of Results, Cerebral Cortex diagnostic imaging, Diffusion Magnetic Resonance Imaging methods, Image Processing, Computer-Assisted methods, Models, Theoretical, Neuroimaging methods
- Abstract
Structural connectivity plays a dominant role in brain function and arguably lies at the core of understanding the structure-function relationship in the cerebral cortex. Connectivity-based cortex parcellation (CCP), a framework to process structural connectivity information gained from diffusion MRI and diffusion tractography, identifies cortical subunits that furnish functional inference. The underlying pipeline of algorithms interprets similarity in structural connectivity as a segregation criterion. Validation of the CCP pipeline is critical to gain scientific reliability of the algorithmic processing steps from dMRI data to voxel grouping. In this paper we provide a proof of concept based upon a novel model validation principle that characterizes the trade-off between informativeness and robustness to assess the validity of the CCP pipeline, including diffusion tractography and clustering. We ultimately identify a pipeline of algorithms and parameter settings that tolerates more noise and extracts more information from the data than its alternatives. (Copyright © 2018 Elsevier Inc. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
17. A generative model of whole-brain effective connectivity.
- Author
- Frässle S, Lomakina EI, Kasper L, Manjaly ZM, Leff A, Pruessmann KP, Buhmann JM, and Stephan KE
- Subjects
- Bayes Theorem, Humans, Magnetic Resonance Imaging methods, Brain physiology, Connectome methods, Models, Neurological, Models, Theoretical, Nerve Net physiology
- Abstract
The development of whole-brain models that can infer effective (directed) connection strengths from fMRI data represents a central challenge for computational neuroimaging. A recently introduced generative model of fMRI data, regression dynamic causal modeling (rDCM), moves towards this goal as it scales gracefully to very large networks. However, large-scale networks with thousands of connections are difficult to interpret; additionally, one typically lacks sufficient information (data points per free parameter) for precise estimation of all model parameters. This paper introduces sparsity constraints to the variational Bayesian framework of rDCM as a solution to these problems in the domain of task-based fMRI. This sparse rDCM approach enables highly efficient effective connectivity analyses in whole-brain networks and does not require a priori assumptions about the network's connectivity structure but prunes fully (all-to-all) connected networks as part of model inversion. Following the derivation of the variational Bayesian update equations for sparse rDCM, we use both simulated and empirical data to assess the face validity of the model. In particular, we show that it is feasible to infer effective connection strengths from fMRI data using a network with more than 100 regions and 10,000 connections. This demonstrates the feasibility of whole-brain inference on effective connectivity from fMRI data, in single subjects and with a run-time below 1 min when using parallelized code. We anticipate that sparse rDCM may find useful application in connectomics and clinical neuromodeling, for example, for phenotyping individual patients in terms of whole-brain network structure. (Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
18. Semiautomatic Assessment of the Terminal Ileum and Colon in Patients with Crohn Disease Using MRI (the VIGOR++ Project).
- Author
- Puylaert CAJ, Schüffler PJ, Naziroglu RE, Tielbeek JAW, Li Z, Makanyanga JC, Tutein Nolthenius CJ, Nio CY, Pendsé DA, Menys A, Ponsioen CY, Atkinson D, Forbes A, Buhmann JM, Fuchs TJ, Hatzakis H, van Vliet LJ, Stoker J, Taylor SA, and Vos FM
- Subjects
- Adult, Female, Humans, Male, Observer Variation, Prospective Studies, Reproducibility of Results, Severity of Illness Index, Colon diagnostic imaging, Crohn Disease diagnostic imaging, Ileum diagnostic imaging, Image Interpretation, Computer-Assisted methods, Magnetic Resonance Imaging
- Abstract
Rationale and Objectives: The objective of this study was to develop and validate a predictive magnetic resonance imaging (MRI) activity score for ileocolonic Crohn disease based on both subjective and semiautomatic MRI features. Materials and Methods: An MRI activity score (the "virtual gastrointestinal tract [VIGOR]" score) was developed from 27 validated magnetic resonance enterography datasets, including subjective radiologist observation of mural T2 signal and semiautomatic measurements of bowel wall thickness, excess volume, and dynamic contrast enhancement (initial slope of increase). A second subjective score was developed based only on radiologist observations. For validation, two observers applied both scores and three existing scores to a prospective dataset of 106 patients (59 women, median age 33) with known Crohn disease, using the endoscopic Crohn's Disease Endoscopic Index of Severity (CDEIS) as a reference standard. Results: The VIGOR score (17.1 × initial slope of increase + 0.2 × excess volume + 2.3 × mural T2) and the other activity scores all had comparable correlation with the CDEIS scores (observer 1: r = 0.58 and 0.59; observer 2: r = 0.34-0.40 and 0.43-0.51, respectively). The VIGOR score, however, improved interobserver agreement compared to the other activity scores (intraclass correlation coefficient = 0.81 vs 0.44-0.59). A diagnostic accuracy of 80%-81% was seen for the VIGOR score, similar to the other scores. Conclusions: The VIGOR score achieves comparable accuracy to conventional MRI activity scores, but with significantly improved reproducibility, favoring its use for disease monitoring and therapy evaluation. (Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
19. Regression DCM for fMRI.
- Author
-
Frässle S, Lomakina EI, Razi A, Friston KJ, Buhmann JM, and Stephan KE
- Subjects
- Adult, Bayes Theorem, Brain diagnostic imaging, Humans, Brain physiology, Connectome methods, Magnetic Resonance Imaging methods, Models, Neurological
- Abstract
The development of large-scale network models that infer the effective (directed) connectivity among neuronal populations from neuroimaging data represents a key challenge for computational neuroscience. Dynamic causal models (DCMs) of neuroimaging and electrophysiological data are frequently used for inferring effective connectivity but are presently restricted to small graphs (typically up to 10 regions) in order to keep model inversion computationally feasible. Here, we present a novel variant of DCM for functional magnetic resonance imaging (fMRI) data that is suited to assess effective connectivity in large (whole-brain) networks. The approach rests on translating a linear DCM into the frequency domain and reformulating it as a special case of Bayesian linear regression. This paper derives regression DCM (rDCM) in detail and presents a variational Bayesian inversion method that enables extremely fast inference and accelerates model inversion by several orders of magnitude compared to classical DCM. Using both simulated and empirical data, we demonstrate the face validity of rDCM under different settings of signal-to-noise ratio (SNR) and repetition time (TR) of fMRI data. In particular, we assess the potential utility of rDCM as a tool for whole-brain connectomics by challenging it to infer effective connection strengths in a simulated whole-brain network comprising 66 regions and 300 free parameters. Our results indicate that rDCM represents a computationally highly efficient approach with promising potential for inferring whole-brain connectivity from individual fMRI data., (Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.)
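The core idea of rDCM described above can be sketched as follows, assuming the standard linear DCM state equation (the paper's full derivation also handles the hemodynamic response, which is omitted here): the Fourier transform turns differentiation into multiplication by iω, so each region's dynamics become a linear regression in the unknown connectivity parameters.

```latex
% Linear DCM in the time domain (A: effective connectivity, C: driving inputs)
\dot{x}(t) = A\,x(t) + C\,u(t)
% Fourier transforming both sides replaces d/dt with multiplication by i\omega
i\omega\,\hat{x}(\omega) = A\,\hat{x}(\omega) + C\,\hat{u}(\omega)
% Row-wise, this is a general linear model in the parameters of region r,
% amenable to (variational) Bayesian linear regression:
i\omega\,\hat{x}_r(\omega) = \sum_{j} A_{rj}\,\hat{x}_j(\omega) + \sum_{k} C_{rk}\,\hat{u}_k(\omega)
```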
- Published
- 2017
- Full Text
- View/download PDF
20. Glaucoma detection using entropy sampling and ensemble learning for automatic optic cup and disc segmentation.
- Author
-
Zilly J, Buhmann JM, and Mahapatra D
- Subjects
- Humans, Logistic Models, Neural Networks, Computer, Entropy, Glaucoma diagnostic imaging, Optic Disk diagnostic imaging, Pattern Recognition, Automated methods, Unsupervised Machine Learning
- Abstract
We present a novel method to segment retinal images using ensemble learning based convolutional neural network (CNN) architectures. An entropy sampling technique is used to select informative points, thus reducing computational complexity while outperforming uniform sampling. The sampled points are used to design a novel learning framework for convolutional filters based on boosting. Filters are learned in several layers, with the output of previous layers serving as the input to the next layer. A softmax logistic classifier is subsequently trained on the output of all learned filters and applied to test images. The output of the classifier is subject to an unsupervised graph cut algorithm followed by a convex hull transformation to obtain the final segmentation. Our proposed algorithm for optic cup and disc segmentation outperforms existing methods on the public DRISHTI-GS data set on several metrics., (Copyright © 2016 Elsevier Ltd. All rights reserved.)
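The entropy sampling step can be illustrated with a toy sketch. This is an assumed reading of the abstract, not the authors' implementation: rank candidate patches by the Shannon entropy of their intensity histogram and keep the most informative ones (bin count and patch representation are hypothetical).

```python
import math
from collections import Counter

def shannon_entropy(values, bins=8):
    """Entropy (in bits) of a patch's intensity histogram."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0  # guard against constant patches
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_sample(patches, k):
    """Keep the k most informative (highest-entropy) patches."""
    return sorted(patches, key=shannon_entropy, reverse=True)[:k]
```

A constant patch carries no information (entropy 0), while a patch spreading evenly over all 8 bins scores the maximum 3 bits, so uninformative regions are discarded first.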
- Published
- 2017
- Full Text
- View/download PDF
21. Is There an Association Between Pain and Magnetic Resonance Imaging Parameters in Patients With Lumbar Spinal Stenosis?
- Author
-
Burgstaller JM, Schüffler PJ, Buhmann JM, Andreisek G, Winklhofer S, Del Grande F, Mattle M, Brunner F, Karakoumis G, Steurer J, and Held U
- Subjects
- Aged, Aged, 80 and over, Female, Humans, Lumbar Vertebrae physiopathology, Magnetic Resonance Imaging methods, Male, Pain Measurement, Prospective Studies, Back Pain etiology, Lumbar Vertebrae surgery, Magnetic Resonance Imaging adverse effects, Spinal Stenosis complications, Spinal Stenosis surgery
- Abstract
Study Design: A prospective multicenter cohort study., Objective: The aim of this study was to identify an association between pain and magnetic resonance imaging (MRI) parameters in patients with lumbar spinal stenosis (LSS)., Summary of Background Data: At present, the relationship between abnormal MRI findings and pain in patients with LSS is still unclear., Methods: First, we conducted a systematic literature search. We identified relationships of relevant MRI parameters and pain in patients with LSS. Second, we addressed the study question with a thorough descriptive and graphical analysis to establish a relationship between MRI parameters and pain using data of the LSS outcome study (LSOS)., Results: In the systematic review, which included four papers on the associations between radiological MRI findings and pain, two articles reported no association and two did. Of the latter, only one study found a moderate correlation between leg pain measured by Visual Analog Scale (VAS) and the degree of stenosis assessed by spine surgeons. In the data of the LSOS study, we could not identify a relevant association between any of the MRI parameters and buttock, leg, and back pain, quantified by the Spinal Stenosis Measure (SSM) and the Numeric Rating Scale (NRS). Even when restricting the analysis to the level of the lumbar spine with the most prominent radiological "stenosis," no relevant association could be shown., Conclusion: Despite a thorough analysis of the data, we were not able to prove any correlation between radiological findings (MRI) and the severity of pain. There is a need for innovative methods and techniques to learn more about the causal relationship between radiological findings and the patients' pain-related complaints., Level of Evidence: 2.
- Published
- 2016
- Full Text
- View/download PDF
22. Active learning based segmentation of Crohn's disease from abdominal MRI.
- Author
-
Mahapatra D, Vos FM, and Buhmann JM
- Subjects
- Algorithms, Entropy, Humans, Image Processing, Computer-Assisted methods, Machine Learning, Models, Statistical, Reproducibility of Results, Software, Abdomen diagnostic imaging, Crohn Disease diagnostic imaging, Diagnosis, Computer-Assisted methods, Magnetic Resonance Imaging, Problem-Based Learning methods
- Abstract
This paper proposes a novel active learning (AL) framework, and combines it with semi-supervised learning (SSL), for segmenting Crohn's disease (CD) tissues from abdominal magnetic resonance (MR) images. Robust fully supervised learning (FSL) based classifiers require large amounts of labeled data spanning different disease severities. Obtaining such data is time consuming and requires considerable expertise. SSL methods use a few labeled samples and leverage the information from many unlabeled samples to train an accurate classifier. AL queries the labels of the most informative samples and maximizes the gain from the labeling effort. Our primary contribution is in designing a query strategy that combines novel context information with classification uncertainty and feature similarity. Combining SSL and AL gives a robust segmentation method that: (1) optimally uses few labeled samples and many unlabeled samples; and (2) requires lower training time. Experimental results show our method achieves higher segmentation accuracy than FSL methods with fewer samples and reduced training effort., (Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.)
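The abstract does not give the exact form of the query strategy; a plausible additive sketch (all names and weights hypothetical) rewards classification uncertainty and context support while penalizing redundancy with already-labeled samples:

```python
def query_score(uncertainty, similarity_to_labeled, context_score,
                w_u=1.0, w_s=1.0, w_c=1.0):
    """Score a candidate sample for label querying (illustrative only).

    High uncertainty and strong context support raise the score; high
    similarity to already-labeled samples lowers it, since a redundant
    sample adds little. The weights are hypothetical.
    """
    return w_u * uncertainty + w_c * context_score - w_s * similarity_to_labeled

def select_query(candidates):
    """candidates: list of (id, uncertainty, similarity, context) tuples."""
    return max(candidates, key=lambda c: query_score(c[1], c[2], c[3]))[0]
```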
- Published
- 2016
- Full Text
- View/download PDF
23. Image-based computational quantification and visualization of genetic alterations and tumour heterogeneity.
- Author
-
Zhong Q, Rüschoff JH, Guo T, Gabrani M, Schüffler PJ, Rechsteiner M, Liu Y, Fuchs TJ, Rupp NJ, Fankhauser C, Buhmann JM, Perner S, Poyet C, Blattner M, Soldini D, Moch H, Rubin MA, Noske A, Rüschoff J, Haffner MC, Jochum W, and Wild PJ
- Subjects
- Aged, Computational Biology methods, Endometrial Neoplasms genetics, Endometrial Neoplasms metabolism, Endometrial Neoplasms pathology, Female, Humans, Immunohistochemistry, Kaplan-Meier Estimate, Male, Middle Aged, Neoplasm Staging, Neoplasms metabolism, Neoplasms pathology, Ovarian Neoplasms genetics, Ovarian Neoplasms metabolism, Ovarian Neoplasms pathology, PTEN Phosphohydrolase genetics, PTEN Phosphohydrolase metabolism, Prostatic Neoplasms genetics, Prostatic Neoplasms metabolism, Prostatic Neoplasms pathology, Receptor, ErbB-2 genetics, Receptor, ErbB-2 metabolism, Stomach Neoplasms genetics, Stomach Neoplasms metabolism, Stomach Neoplasms pathology, DNA Copy Number Variations, Genetic Heterogeneity, Genetic Predisposition to Disease genetics, In Situ Hybridization, Fluorescence methods, Mutation, Neoplasms genetics
- Abstract
Recent large-scale genome analyses of human tissue samples have uncovered a high degree of genetic alterations and tumour heterogeneity in most tumour entities, independent of morphological phenotypes and histopathological characteristics. Assessment of genetic copy-number variation (CNV) and tumour heterogeneity by fluorescence in situ hybridization (ISH) provides additional tissue morphology at single-cell resolution, but it is labour intensive with limited throughput and high inter-observer variability. We present an integrative method combining bright-field dual-colour chromogenic and silver ISH assays with an image-based computational workflow (ISHProfiler), for accurate detection of molecular signals, high-throughput evaluation of CNV, expressive visualization of multi-level heterogeneity (cellular, inter- and intra-tumour heterogeneity), and objective quantification of heterogeneous genetic deletions (PTEN) and amplifications (19q12, HER2) in diverse human tumours (prostate, endometrial, ovarian and gastric), using various tissue sizes and different scanners, with unprecedented throughput and reproducibility.
- Published
- 2016
- Full Text
- View/download PDF
24. Visual saliency-based active learning for prostate magnetic resonance imaging segmentation.
- Author
-
Mahapatra D and Buhmann JM
- Abstract
We propose an active learning (AL) approach for prostate segmentation from magnetic resonance images. Our label query strategy is inspired by the principles of visual saliency, which involve similar considerations for choosing the most salient region. These similarities are encoded in a graph using classification maps and low-level features. Random walks are used to identify the most informative node, which is equivalent to the label query sample in AL. To reduce computation time, a volume of interest (VOI) is identified, and all subsequent analysis, such as probability map generation using semisupervised random forest classifiers and label query, is restricted to this VOI. The negative log-likelihood of the probability maps serves as the penalty cost in a second-order Markov random field cost function, which is optimized using graph cuts for prostate segmentation. Experimental results on the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2012 prostate segmentation challenge show the superior performance of our approach over conventional methods using fully supervised learning.
- Published
- 2016
- Full Text
- View/download PDF
25. Crowdsourcing the creation of image segmentation algorithms for connectomics.
- Author
-
Arganda-Carreras I, Turaga SC, Berger DR, Cireşan D, Giusti A, Gambardella LM, Schmidhuber J, Laptev D, Dwivedi S, Buhmann JM, Liu T, Seyedhosseini M, Tasdizen T, Kamentsky L, Burget R, Uher V, Tan X, Sun C, Pham TD, Bas E, Uzunbas MG, Cardona A, Schindelin J, and Seung HS
- Abstract
To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This "deep learning" approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.
- Published
- 2015
- Full Text
- View/download PDF
26. Automatic single cell segmentation on highly multiplexed tissue images.
- Author
-
Schüffler PJ, Schapiro D, Giesen C, Wang HA, Bodenmiller B, and Buhmann JM
- Subjects
- Breast Neoplasms pathology, Female, Humans, Immunohistochemistry, Breast Neoplasms diagnosis, Diagnostic Imaging methods, Flow Cytometry methods, Single-Cell Analysis
- Abstract
The combination of mass cytometry and immunohistochemistry (IHC) enables new histopathological imaging methods in which dozens of proteins and protein modifications can be visualized simultaneously in a single tissue section. The power of multiplexing combined with spatial information and quantification was recently illustrated on breast cancer tissue and was described as next-generation IHC. Robust, accurate, and high-throughput cell segmentation is crucial for the analysis of this new generation of IHC data. To this end, we propose a watershed-based cell segmentation, which uses a nuclear marker and multiple membrane markers, the latter automatically selected based on their correlation. In comparison with state-of-the-art segmentation pipelines, which use only a single marker for object detection, we show that the use of multiple markers can significantly increase segmentation power; multiplexed information should therefore be exploited rather than ignored during segmentation. Furthermore, we provide a novel, user-friendly open-source toolbox for the automatic segmentation of multiplexed histopathological images., (© 2015 International Society for Advancement of Cytometry.)
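The automatic membrane-marker selection is described only as correlation-based. One plausible sketch (pure Python, hypothetical names, not the toolbox's actual code) keeps the markers that correlate most strongly, on average, with the other candidates, on the assumption that genuine membrane markers co-localize:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length intensity vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_membrane_markers(channels, k):
    """channels: {marker_name: flattened intensity list}.

    Keep the k markers most correlated, on average, with the other
    candidates; uncorrelated (noisy) channels are discarded.
    """
    names = list(channels)
    def mean_corr(name):
        others = [pearson(channels[name], channels[o]) for o in names if o != name]
        return sum(others) / len(others)
    return sorted(names, key=mean_corr, reverse=True)[:k]
```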
- Published
- 2015
- Full Text
- View/download PDF
27. Prediction of colorectal cancer diagnosis based on circulating plasma proteins.
- Author
-
Surinova S, Choi M, Tao S, Schüffler PJ, Chang CY, Clough T, Vysloužil K, Khoylou M, Srovnal J, Liu Y, Matondo M, Hüttenhain R, Weisser H, Buhmann JM, Hajdúch M, Brenner H, Vitek O, and Aebersold R
- Subjects
- Humans, Biomarkers, Tumor blood, Clinical Laboratory Techniques methods, Colorectal Neoplasms diagnosis, Mass Spectrometry methods, Plasma chemistry
- Abstract
Non-invasive detection of colorectal cancer with blood-based markers is a critical clinical need. Here we describe a phased mass spectrometry-based approach for the discovery, screening, and validation of circulating protein biomarkers with diagnostic value. Initially, we profiled human primary tumor tissue epithelia and characterized about 300 secreted and cell surface candidate glycoproteins. These candidates were then screened in patient systemic circulation to identify detectable candidates in blood plasma. An 88-plex targeting method was established to systematically monitor these proteins in two large and independent cohorts of plasma samples, which generated quantitative clinical datasets at an unprecedented scale. The data were deployed to develop and evaluate a five-protein biomarker signature for colorectal cancer detection., (© 2015 The Authors. Published under the terms of the CC BY 4.0 license.)
- Published
- 2015
- Full Text
- View/download PDF
28. Inversion of hierarchical Bayesian models using Gaussian processes.
- Author
-
Lomakina EI, Paliwal S, Diaconescu AO, Brodersen KH, Aponte EA, Buhmann JM, and Stephan KE
- Subjects
- Algorithms, Computer Simulation, Humans, Normal Distribution, Bayes Theorem, Brain physiology, Brain Mapping methods, Magnetic Resonance Imaging methods, Models, Neurological
- Abstract
Over the past decade, computational approaches to neuroimaging have increasingly made use of hierarchical Bayesian models (HBMs), either for inferring on physiological mechanisms underlying fMRI data (e.g., dynamic causal modelling, DCM) or for deriving computational trajectories (from behavioural data) which serve as regressors in general linear models. However, an unresolved problem is that standard methods for inverting the hierarchical Bayesian model are either very slow, e.g. Markov Chain Monte Carlo Methods (MCMC), or are vulnerable to local minima in non-convex optimisation problems, such as variational Bayes (VB). This article considers Gaussian process optimisation (GPO) as an alternative approach for global optimisation of sufficiently smooth and efficiently evaluable objective functions. GPO avoids being trapped in local extrema and can be computationally much more efficient than MCMC. Here, we examine the benefits of GPO for inverting HBMs commonly used in neuroimaging, including DCM for fMRI and the Hierarchical Gaussian Filter (HGF). Importantly, to achieve computational efficiency despite high-dimensional optimisation problems, we introduce a novel combination of GPO and local gradient-based search methods. The utility of this GPO implementation for DCM and HGF is evaluated against MCMC and VB, using both synthetic data from simulations and empirical data. Our results demonstrate that GPO provides parameter estimates with equivalent or better accuracy than the other techniques, but at a fraction of the computational cost required for MCMC. We anticipate that GPO will prove useful for robust and efficient inversion of high-dimensional and nonlinear models of neuroimaging data., (Copyright © 2015. Published by Elsevier Inc.)
- Published
- 2015
- Full Text
- View/download PDF
29. Inferring causal metabolic signals that regulate the dynamic TORC1-dependent transcriptome.
- Author
-
Oliveira AP, Dimopoulos S, Busetto AG, Christen S, Dechant R, Falter L, Haghir Chehreghani M, Jozefczuk S, Ludwig C, Rudroff F, Schulz JC, González A, Soulard A, Stracka D, Aebersold R, Buhmann JM, Hall MN, Peter M, Sauer U, and Stelling J
- Subjects
- Causality, Cell Cycle, Computer Simulation, Culture Media pharmacology, Glutamic Acid metabolism, Glutamine metabolism, Metabolome, Models, Biological, Nitrogen metabolism, Probability, Proteome, RNA, Fungal genetics, Saccharomyces cerevisiae drug effects, Signal Transduction, Gene Expression Regulation, Fungal, RNA, Fungal biosynthesis, Saccharomyces cerevisiae metabolism, Saccharomyces cerevisiae Proteins metabolism, Transcription Factors metabolism, Transcriptome
- Abstract
Cells react to nutritional cues in changing environments via the integrated action of signaling, transcriptional, and metabolic networks. Mechanistic insight into signaling processes is often complicated because ubiquitous feedback loops obscure causal relationships. Consequently, the endogenous inputs of many nutrient signaling pathways remain unknown. Recent advances for system-wide experimental data generation have facilitated the quantification of signaling systems, but the integration of multi-level dynamic data remains challenging. Here, we co-designed dynamic experiments and a probabilistic, model-based method to infer causal relationships between metabolism, signaling, and gene regulation. We analyzed the dynamic regulation of nitrogen metabolism by the target of rapamycin complex 1 (TORC1) pathway in budding yeast. Dynamic transcriptomic, proteomic, and metabolomic measurements along shifts in nitrogen quality yielded a consistent dataset that demonstrated extensive re-wiring of cellular networks during adaptation. Our inference method identified putative downstream targets of TORC1 and putative metabolic inputs of TORC1, including the hypothesized glutamine signal. The work provides a basis for further mechanistic studies of nitrogen metabolism and a general computational framework to study cellular processes., (© 2015 The Authors. Published under the terms of the CC BY 4.0 license.)
- Published
- 2015
- Full Text
- View/download PDF
30. Highly multiplexed imaging of tumor tissues with subcellular resolution by mass cytometry.
- Author
-
Giesen C, Wang HA, Schapiro D, Zivanovic N, Jacobs A, Hattendorf B, Schüffler PJ, Grolimund D, Buhmann JM, Brandt S, Varga Z, Wild PJ, Günther D, and Bodenmiller B
- Subjects
- Cell Line, Epithelial Cells cytology, Epithelial Cells metabolism, Female, Gene Expression Regulation, Neoplastic physiology, Humans, Neoplasm Proteins genetics, Breast Neoplasms metabolism, Image Cytometry methods, Neoplasm Proteins metabolism
- Abstract
Mass cytometry enables high-dimensional, single-cell analysis of cell type and state. In mass cytometry, rare earth metals are used as reporters on antibodies. Analysis of metal abundances using the mass cytometer allows determination of marker expression in individual cells. Mass cytometry has previously been applied only to cell suspensions. To gain spatial information, we have coupled immunohistochemical and immunocytochemical methods with high-resolution laser ablation to CyTOF mass cytometry. This approach enables the simultaneous imaging of 32 proteins and protein modifications at subcellular resolution; with the availability of additional isotopes, measurement of over 100 markers will be possible. We applied imaging mass cytometry to human breast cancer samples, allowing delineation of cell subpopulations and cell-cell interactions and highlighting tumor heterogeneity. Imaging mass cytometry complements existing imaging approaches. It will enable basic studies of tissue heterogeneity and function and support the transition of medicine toward individualized molecularly targeted diagnosis and therapies.
- Published
- 2014
- Full Text
- View/download PDF
31. Prostate MRI segmentation using learned semantic knowledge and graph cuts.
- Author
-
Mahapatra D and Buhmann JM
- Subjects
- Databases, Factual, Decision Trees, Humans, Male, Markov Chains, Prostatic Neoplasms pathology, Semantics, Algorithms, Image Processing, Computer-Assisted methods, Magnetic Resonance Imaging methods, Prostate anatomy & histology, Prostate pathology
- Abstract
We propose a fully automated method for prostate segmentation using random forests (RFs) and graph cuts. A volume of interest (VOI) is automatically selected using supervoxel segmentation and its subsequent classification using image features and RF classifiers. The VOI's probability map is generated using image and context features and a second set of RF classifiers. The negative log-likelihood of the probability maps acts as the penalty cost in a second-order Markov random field cost function. Semantic information from the second set of RF classifiers measures the importance of each feature to the classification task and contributes to formulating the smoothness cost. The cost function is optimized using graph cuts to obtain the final segmentation of the prostate. Our experimental results, measured by the average Dice metric (DM) on the training and test sets, show that inclusion of the context and semantic information yields higher segmentation accuracy than other methods.
- Published
- 2014
- Full Text
- View/download PDF
32. Automatic Detection and Segmentation of Crohn's Disease Tissues From Abdominal MRI.
- Author
-
Mahapatra D, Schuffler PJ, Tielbeek JA, Makanyanga JC, Stoker J, Taylor SA, Vos FM, and Buhmann JM
- Abstract
We propose an information processing pipeline for segmenting parts of the bowel in abdominal magnetic resonance images that are affected by Crohn's disease. Given a magnetic resonance imaging test volume, it is first oversegmented into supervoxels, and each supervoxel is analyzed to detect the presence of Crohn's disease using random forest (RF) classifiers. The supervoxels identified as containing diseased tissues define the volume of interest (VOI). All voxels within the VOI are further investigated to segment the diseased region. Probability maps are generated for each voxel using a second set of RF classifiers, which give the probabilities of each voxel being diseased, normal, or background. The negative log-likelihoods of these maps are used as penalty costs in a graph cut segmentation framework. Low-level features like intensity statistics, texture anisotropy, and curvature asymmetry, and high-level context features are used at different stages. Smoothness constraints are imposed based on semantic information (the importance of each feature to the classification task) derived from the second set of learned RF classifiers. Experimental results show that our method achieves high segmentation accuracy, with Dice metric values of 0.90 ± 0.04 and a Hausdorff distance of 7.3 ± 0.8 mm. Semantic information and context features are an integral part of our method and are robust to different levels of added noise.
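The reported accuracy is given in terms of the Dice metric; as a reminder of its standard definition over voxel label sets (function name illustrative):

```python
def dice_metric(seg, ref):
    """Dice overlap between two voxel label sets (1.0 = perfect overlap).

    seg, ref: sets of voxel coordinates labeled as diseased by the
    algorithm and by the reference annotation, respectively.
    """
    if not seg and not ref:
        return 1.0  # two empty segmentations agree trivially
    return 2 * len(seg & ref) / (len(seg) + len(ref))
```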
- Published
- 2013
- Full Text
- View/download PDF
33. Dissecting psychiatric spectrum disorders by generative embedding.
- Author
-
Brodersen KH, Deserno L, Schlagenhauf F, Lin Z, Penny WD, Buhmann JM, and Stephan KE
- Subjects
- Brain blood supply, Case-Control Studies, Humans, Models, Statistical, Nerve Net blood supply, Neural Pathways blood supply, Neural Pathways pathology, Nonlinear Dynamics, Reproducibility of Results, Brain pathology, Nerve Net pathology, Schizophrenia diagnosis
- Abstract
This proof-of-concept study examines the feasibility of defining subgroups in psychiatric spectrum disorders by generative embedding, using dynamical system models which infer neuronal circuit mechanisms from neuroimaging data. To this end, we re-analysed an fMRI dataset of 41 patients diagnosed with schizophrenia and 42 healthy controls performing a numerical n-back working-memory task. In our generative-embedding approach, we used parameter estimates from a dynamic causal model (DCM) of a visual-parietal-prefrontal network to define a model-based feature space for the subsequent application of supervised and unsupervised learning techniques. First, using a linear support vector machine for classification, we were able to predict individual diagnostic labels significantly more accurately (78%) from DCM-based effective connectivity estimates than from functional connectivity between (62%) or local activity within the same regions (55%). Second, an unsupervised approach based on variational Bayesian Gaussian mixture modelling provided evidence for two clusters which mapped onto patients and controls with nearly the same accuracy (71%) as the supervised approach. Finally, when restricting the analysis only to the patients, Gaussian mixture modelling suggested the existence of three patient subgroups, each of which was characterised by a different architecture of the visual-parietal-prefrontal working-memory network. Critically, even though this analysis did not have access to information about the patients' clinical symptoms, the three neurophysiologically defined subgroups mapped onto three clinically distinct subgroups, distinguished by significant differences in negative symptom severity, as assessed on the Positive and Negative Syndrome Scale (PANSS). In summary, this study provides a concrete example of how psychiatric spectrum diseases may be split into subgroups that are defined in terms of neurophysiological mechanisms specified by a generative model of network dynamics such as DCM. The results corroborate our previous findings in stroke patients that generative embedding, compared to analyses of more conventional measures such as functional connectivity or regional activity, can significantly enhance both the interpretability and performance of computational approaches to clinical classification.
- Published
- 2013
- Full Text
- View/download PDF
34. Near-optimal experimental design for model selection in systems biology.
- Author
-
Busetto AG, Hauser A, Krummenacher G, Sunnåker M, Dimopoulos S, Ong CS, Stelling J, and Buhmann JM
- Subjects
- Animals, Models, Theoretical, Probability, Signal Transduction, Software, TOR Serine-Threonine Kinases metabolism, Research Design, Systems Biology methods
- Abstract
Motivation: Biological systems are understood through iterations of modeling and experimentation. Not all experiments, however, are equally valuable for predictive modeling. This study introduces an efficient method for experimental design aimed at selecting dynamical models from data. Motivated by biological applications, the method enables the design of crucial experiments: it determines a highly informative selection of measurement readouts and time points., Results: We demonstrate formal guarantees of design efficiency on the basis of previous results. By reducing our task to the setting of graphical models, we prove that the method finds a near-optimal design selection with a polynomial number of evaluations. Moreover, the method exhibits the best polynomial-complexity constant approximation factor, unless P = NP. We measure the performance of the method in comparison with established alternatives, such as ensemble non-centrality, on example models of different complexity. Efficient design accelerates the loop between modeling and experimentation: it enables the inference of complex mechanisms, such as those controlling central metabolic operation., Availability: Toolbox 'NearOED' available with source code under GPL on the Machine Learning Open Source Software Web site (mloss.org).
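The stated guarantees (near-optimal selection with polynomially many evaluations, and the best possible constant approximation factor unless P = NP) are characteristic of greedy maximization of a monotone submodular objective. A generic sketch of that pattern, with a caller-supplied utility function standing in for the paper's information measure:

```python
def greedy_design(candidates, utility, budget):
    """Greedily pick `budget` measurements maximizing a set utility.

    For a monotone submodular `utility`, greedy selection is within a
    (1 - 1/e) factor of the optimal design (the classical Nemhauser
    et al. bound). `utility` is a stand-in for the paper's
    information-based design criterion.
    """
    chosen = []
    for _ in range(budget):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: utility(chosen + [c]))
        chosen.append(best)
    return chosen
```

With a toy coverage utility (how many parity classes the chosen readouts cover), the greedy loop picks complementary rather than redundant measurements.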
- Published
- 2013
- Full Text
- View/download PDF
35. A supervised learning approach for Crohn's disease detection using higher-order image statistics and a novel shape asymmetry measure.
- Author
-
Mahapatra D, Schueffler P, Tielbeek JA, Buhmann JM, and Vos FM
- Subjects
- Adult, Aged, Colon pathology, Diagnosis, Differential, Female, Humans, Imaging, Three-Dimensional methods, Male, Middle Aged, Reproducibility of Results, Sensitivity and Specificity, Young Adult, Crohn Disease diagnosis, Crohn Disease pathology, Image Processing, Computer-Assisted methods, Image Processing, Computer-Assisted statistics & numerical data, Magnetic Resonance Imaging methods, Magnetic Resonance Imaging statistics & numerical data
- Abstract
The increasing incidence of Crohn's disease (CD) in the Western world has made its accurate diagnosis an important medical challenge. The current reference standard for diagnosis, colonoscopy, is time-consuming and invasive, while magnetic resonance imaging (MRI) has emerged as the preferred noninvasive alternative. Current MRI approaches assess the rate of contrast enhancement and bowel wall thickness, and rely on extensive manual segmentation for accurate analysis. We propose a supervised learning method for the identification and localization of regions in abdominal magnetic resonance images that have been affected by CD. Low-level features like intensity and texture are used with shape asymmetry information to distinguish between diseased and normal regions. Particular emphasis is laid on a novel entropy-based shape asymmetry method and on higher-order statistics like skewness and kurtosis. Multi-scale feature extraction renders the method robust. Experiments on real patient data show that our features achieve a high level of accuracy and perform better than two competing methods.
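The higher-order statistics mentioned above have standard moment-based definitions; a minimal sketch (using the excess-kurtosis convention, so a Gaussian scores 0):

```python
def higher_order_stats(values):
    """Sample skewness and excess kurtosis of a region's intensities."""
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n  # variance
    m3 = sum((v - mean) ** 3 for v in values) / n  # third central moment
    m4 = sum((v - mean) ** 4 for v in values) / n  # fourth central moment
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2 - 3.0  # excess kurtosis: 0 for a Gaussian
    return skew, kurt
```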
- Published
- 2013
- Full Text
- View/download PDF
36. Variational Bayesian mixed-effects inference for classification studies.
- Author
-
Brodersen KH, Daunizeau J, Mathys C, Chumbley JR, Buhmann JM, and Stephan KE
- Subjects
- Humans, Magnetic Resonance Imaging, Models, Neurological, Algorithms, Bayes Theorem, Brain physiology, Brain Mapping methods, Image Interpretation, Computer-Assisted methods
- Abstract
Multivariate classification algorithms are powerful tools for predicting cognitive or pathophysiological states from neuroimaging data. Assessing the utility of a classifier in application domains such as cognitive neuroscience, brain-computer interfaces, or clinical diagnostics necessitates inference on classification performance at more than one level, i.e., both in individual subjects and in the population from which these subjects were sampled. Such inference requires models that explicitly account for both fixed-effects (within-subjects) and random-effects (between-subjects) variance components. While models of this sort are standard in mass-univariate analyses of fMRI data, they have not yet received much attention in multivariate classification studies of neuroimaging data, presumably because of the high computational costs they entail. This paper extends a recently developed hierarchical model for mixed-effects inference in multivariate classification studies and introduces an efficient variational Bayes approach to inference. Using both synthetic and empirical fMRI data, we show that this approach is equally simple to use as, yet more powerful than, a conventional t-test on subject-specific sample accuracies, and computationally much more efficient than previous sampling algorithms and permutation tests. Our approach is independent of the type of underlying classifier and thus widely applicable. The present framework may help establish mixed-effects inference as a future standard for classification group analyses., (Copyright © 2013 Elsevier Inc. All rights reserved.)
- Published
- 2013
- Full Text
- View/download PDF
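For orientation, the conventional baseline that the variational approach above is compared against, a one-sample t-test on subject-specific sample accuracies, looks like this on simulated data (the hierarchical variational model itself is substantially more involved; the accuracies below are invented, not data from the study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated subject-specific sample accuracies (20 subjects), standing in
# for per-subject classification performance from an fMRI study.
accuracies = rng.beta(7, 3, size=20)

# The conventional random-effects baseline: a one-sample t-test of the
# subject-wise accuracies against the chance level of 0.5.
t, p = stats.ttest_1samp(accuracies, popmean=0.5)
print(f"t = {t:.2f}, p = {p:.4f}")
```

This baseline ignores within-subject (fixed-effects) variance, which is exactly the limitation the mixed-effects model addresses.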
37. TMARKER: A free software toolkit for histopathological cell counting and staining estimation.
- Author
-
Schüffler PJ, Fuchs TJ, Ong CS, Wild PJ, Rupp NJ, and Buhmann JM
- Abstract
Background: Histological tissue analysis often involves manual cell counting and staining estimation of cancerous cells. These assessments are extremely time consuming, highly subjective and prone to error, since immunohistochemically stained cancer tissues usually show high variability in cell sizes, morphological structures and staining quality. To facilitate reproducible analysis in clinical practice as well as for cancer research, objective computer-assisted staining estimation is highly desirable., Methods: We employ machine learning algorithms such as randomized decision trees and support vector machines for nucleus detection and classification. Superpixels forming a segmentation over the tissue image are classified into foreground and background and thereafter into malignant and benign, learning from the user's feedback. As a fast alternative without nucleus classification, the existing color deconvolution method is incorporated., Results: Our program TMARKER connects already available workflows for computational pathology and immunohistochemical tissue rating with modern active learning algorithms from machine learning and computer vision. On a test dataset of human renal clear cell carcinoma and prostate carcinoma, the performance of the algorithms used is equivalent to that of two independent pathologists for nucleus detection and classification., Conclusion: We present a novel, free and operating-system-independent software package for computational cell counting and staining estimation, supporting IHC-stained tissue analysis in the clinic and for research. Proprietary toolboxes for similar tasks are expensive, bound to specific commercial hardware (e.g. a microscope) and mostly not quantitatively validated in terms of performance and reproducibility. We are confident that the presented software package will prove valuable for the scientific community, and we anticipate a broader application domain given the possibility of interactively learning models for new image types.
- Published
- 2013
- Full Text
- View/download PDF
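A rough illustration of the superpixel-classification step described above, using randomized decision trees as named in the abstract. This is a hedged sketch on synthetic features (the stain intensities, size and shape values are invented stand-ins), not TMARKER's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic superpixel features (stand-ins for color, size and shape
# descriptors); in TMARKER the labels come interactively from user feedback.
n = 300
bg = rng.normal([0.85, 0.75, 0.80, 40.0, 0.3],
                [0.05, 0.05, 0.05, 5.0, 0.1], size=(n, 5))
fg = rng.normal([0.45, 0.30, 0.60, 80.0, 0.7],
                [0.05, 0.05, 0.05, 5.0, 0.1], size=(n, 5))
X = np.vstack([bg, fg])
y = np.r_[np.zeros(n), np.ones(n)]   # 0 = background, 1 = nucleus

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
acc = clf.score(X, y)
print(f"training accuracy: {acc:.2f}")
```

In the interactive setting, the model would be refit each time the user corrects a misclassified superpixel.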
38. Semi-supervised and active learning for automatic segmentation of Crohn's disease.
- Author
-
Mahapatra D, Schüffler PJ, Tielbeek JA, Vos FM, and Buhmann JM
- Subjects
- Humans, Image Enhancement methods, Models, Biological, Reproducibility of Results, Sensitivity and Specificity, Algorithms, Artificial Intelligence, Crohn Disease pathology, Image Interpretation, Computer-Assisted methods, Magnetic Resonance Imaging methods, Pattern Recognition, Automated methods
- Abstract
Our proposed method combines semi supervised learning (SSL) and active learning (AL) for automatic detection and segmentation of Crohn's disease (CD) from abdominal magnetic resonance (MR) images. Random forest (RF) classifiers are used due to fast SSL classification and capacity to interpret learned knowledge. Query samples for AL are selected by a novel information density weighted approach using context information, semantic knowledge and labeling uncertainty. Experimental results show that our proposed method combines the advantages of SSL and AL, and with fewer samples achieves higher classification and segmentation accuracy over fully supervised methods.
- Published
- 2013
- Full Text
- View/download PDF
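The density-weighted query selection for active learning can be sketched as follows. The entropy-times-density score here is a simplified stand-in for the paper's criterion, which additionally incorporates context information and semantic knowledge; all data below are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# Hypothetical setup: a few user-labeled samples and many unlabeled ones.
X_lab = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]])
y_lab = np.array([0, 0, 1, 1])                    # 0 = healthy, 1 = diseased
X_unl = rng.uniform(0.0, 1.0, size=(200, 2))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_lab, y_lab)
proba = clf.predict_proba(X_unl)

# Labeling uncertainty: entropy of the predicted class distribution.
entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)

# Information density: mean similarity to the other unlabeled samples,
# so queries come from dense, representative regions of feature space.
density = rbf_kernel(X_unl).mean(axis=1)

query = int(np.argmax(entropy * density))         # next sample to annotate
print("query sample:", X_unl[query])
```

The queried sample would be labeled by the expert and added to the training set, and the loop repeated.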
39. Decoding the perception of pain from fMRI using multivariate pattern analysis.
- Author
-
Brodersen KH, Wiech K, Lomakina EI, Lin CS, Buhmann JM, Bingel U, Ploner M, Stephan KE, and Tracey I
- Subjects
- Adult, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult, Brain physiology, Brain Mapping methods, Image Interpretation, Computer-Assisted methods, Pain Perception physiology
- Abstract
Pain is known to comprise sensory, cognitive, and affective aspects. Despite numerous previous fMRI studies, however, it remains open which spatial distribution of activity is sufficient to encode whether a stimulus is perceived as painful or not. In this study, we analyzed fMRI data from a perceptual decision-making task in which participants were exposed to near-threshold laser pulses. Using multivariate analyses on different spatial scales, we investigated the predictive capacity of fMRI data for decoding whether a stimulus had been perceived as painful. Our analysis yielded a rank order of brain regions: during pain anticipation, activity in the periaqueductal gray (PAG) and orbitofrontal cortex (OFC) afforded the most accurate trial-by-trial discrimination between painful and non-painful experiences; whereas during the actual stimulation, primary and secondary somatosensory cortex, anterior insula, dorsolateral and ventrolateral prefrontal cortex, and OFC were most discriminative. The most accurate prediction of pain perception from the stimulation period, however, was enabled by the combined activity in pain regions commonly referred to as the 'pain matrix'. Our results demonstrate that the neural representation of (near-threshold) pain is spatially distributed and can be best described at an intermediate spatial scale. In addition to its utility in establishing structure-function mappings, our approach affords trial-by-trial predictions and thus represents a step towards the goal of establishing an objective neuronal marker of pain perception., (Copyright © 2012 Elsevier Inc. All rights reserved.)
- Published
- 2012
- Full Text
- View/download PDF
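The trial-by-trial decoding scheme underlying the study can be caricatured in a few lines. Everything below is simulated (80 trials, 50 voxel features, an invented activity shift), not the study's data or its exact analysis:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for region-of-interest fMRI data: 80 trials x 50 voxels;
# 'painful' trials carry a weak, spatially distributed activity shift.
n_trials, n_vox = 80, 50
y = np.repeat([0, 1], n_trials // 2)   # 0 = not painful, 1 = painful
X = rng.normal(size=(n_trials, n_vox))
X[y == 1] += 0.5

# Trial-by-trial decoding with a cross-validated linear classifier.
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
```

Repeating this per region of interest yields the kind of rank order of discriminative regions the abstract describes.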
40. Unsupervised modeling of cell morphology dynamics for time-lapse microscopy.
- Author
-
Zhong Q, Busetto AG, Fededa JP, Buhmann JM, and Gerlich DW
- Subjects
- HeLa Cells, Humans, Image Processing, Computer-Assisted instrumentation, Microscopy, Fluorescence instrumentation, RNA Interference, Time-Lapse Imaging instrumentation, Cell Division physiology, Image Processing, Computer-Assisted methods, Microscopy, Fluorescence methods, Models, Biological, Pattern Recognition, Automated methods, Time-Lapse Imaging methods
- Abstract
Analysis of cellular phenotypes in large imaging data sets conventionally involves supervised statistical methods, which require user-annotated training data. This paper introduces an unsupervised learning method, based on temporally constrained combinatorial clustering, for automatic prediction of cell morphology classes in time-resolved images. We applied the unsupervised method to diverse fluorescent markers and screening data and validated accurate classification of human cell phenotypes, demonstrating fully objective data labeling in image-based systems biology.
- Published
- 2012
- Full Text
- View/download PDF
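A toy version of temporally constrained clustering gives the flavor of the method: dynamic programming partitions a 1-D morphology feature into k contiguous segments with minimal within-segment variance, so cluster membership may only change at segment boundaries. The paper's combinatorial clustering operates on richer multivariate descriptors:

```python
import numpy as np

def segment(series, k):
    """Partition a 1-D series into k contiguous segments minimizing total
    within-segment squared error (a toy form of temporally constrained
    clustering: labels may only change at segment boundaries)."""
    n = len(series)
    csum = np.concatenate([[0.0], np.cumsum(series)])
    csq = np.concatenate([[0.0], np.cumsum(series ** 2)])

    def cost(i, j):  # SSE of the half-open segment series[i:j]
        s = csum[j] - csum[i]
        return (csq[j] - csq[i]) - s * s / (j - i)

    dp = np.full((k + 1, n + 1), np.inf)
    back = np.zeros((k + 1, n + 1), dtype=int)
    dp[0][0] = 0.0
    for c in range(1, k + 1):
        for j in range(c, n + 1):
            for i in range(c - 1, j):
                v = dp[c - 1][i] + cost(i, j)
                if v < dp[c][j]:
                    dp[c][j], back[c][j] = v, i

    bounds, j = [], n
    for c in range(k, 0, -1):
        i = int(back[c][j])
        bounds.append((i, int(j)))
        j = i
    return bounds[::-1]

# Hypothetical morphology feature over time: interphase -> mitosis -> interphase
series = np.r_[np.full(20, 0.0), np.full(10, 3.0), np.full(20, 0.5)]
print(segment(series, 3))   # → [(0, 20), (20, 30), (30, 50)]
```

Each recovered segment corresponds to one temporally coherent morphology class, without any user-annotated training data.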
41. Generic comparison of protein inference engines.
- Author
-
Claassen M, Reiter L, Hengartner MO, Buhmann JM, and Aebersold R
- Subjects
- Animals, Bacterial Proteins analysis, Caenorhabditis elegans, Caenorhabditis elegans Proteins analysis, Leptospira interrogans, Schizosaccharomyces, Tandem Mass Spectrometry, Proteomics methods, Search Engine
- Abstract
Protein identifications, instead of peptide-spectrum matches, constitute the biologically relevant result of shotgun proteomics studies. How to appropriately infer and report protein identifications has triggered a still ongoing debate. This debate has so far suffered from the lack of appropriate performance measures that allow us to objectively assess protein inference approaches. This study describes an intuitive, generic and yet formal performance measure and demonstrates how it enables experimentalists to select an optimal protein inference strategy for a given collection of fragment ion spectra. We applied the performance measure to systematically explore the benefit of excluding possibly unreliable protein identifications, such as single-hit wonders. To this end, we defined a family of protein inference engines by extending a simple inference engine with thousands of pruning variants, each excluding a different specified set of possibly unreliable identifications. We benchmarked these protein inference engines on several data sets representing different proteomes and mass spectrometry platforms. Optimally performing inference engines retained all high-confidence spectral evidence, without posterior exclusion of any type of protein identification. Although the diverse data sets we studied consistently support this rule, other data sets might behave differently. In order to ensure maximal reliable proteome coverage for data sets arising in other studies, we advocate abstaining from rigid protein inference rules, such as the exclusion of single-hit wonders, and instead considering several protein inference approaches and assessing them with respect to the presented performance measure in the specific application context.
- Published
- 2012
- Full Text
- View/download PDF
42. Computational modeling for assessment of IBD: to be or not to be?
- Author
-
Vos FM, Tielbeek JA, Naziroglu RE, Li Z, Schueffler P, Mahapatra D, Wiebel A, Lavini C, Buhmann JM, Hege HC, Stoker J, and van Vliet LJ
- Subjects
- C-Reactive Protein metabolism, Colon pathology, Contrast Media, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Reproducibility of Results, Time Factors, Computer Simulation, Inflammatory Bowel Diseases pathology, Models, Biological
- Abstract
The grading of inflammatory bowel disease (IBD) severity is important to determine the proper treatment strategy and to quantify the response to treatment. Traditionally, ileocolonoscopy is considered the reference standard for assessment of IBD. However, the procedure is invasive and requires extensive bowel preparation. Magnetic resonance imaging (MRI) has become an important tool for determining the presence of disease activity. Unfortunately, only moderate interobserver agreement is reported for most of the radiological severity measures. There is a clear demand for automated evaluation of MR images in Crohn's disease (CD). This paper aims to introduce a preliminary suite of fundamental tools for assessment of CD severity. It involves procedures for image analysis, classification and visualization to predict the colonoscopy disease scores.
- Published
- 2012
- Full Text
- View/download PDF
43. A seven-marker signature and clinical outcome in malignant melanoma: a large-scale tissue-microarray study with two independent patient cohorts.
- Author
-
Meyer S, Fuchs TJ, Bosserhoff AK, Hofstädter F, Pauer A, Roth V, Buhmann JM, Moll I, Anagnostou N, Brandner JM, Ikenberg K, Moch H, Landthaler M, Vogt T, and Wild PJ
- Subjects
- Adult, Aged, Antigens, CD20 metabolism, Cell Line, Tumor, Cells, Cultured, Cohort Studies, Cyclooxygenase 2 metabolism, Female, Humans, Immunohistochemistry statistics & numerical data, Kaplan-Meier Estimate, Male, Melanoma pathology, Melanoma therapy, Middle Aged, PTEN Phosphohydrolase metabolism, Prognosis, Proportional Hazards Models, Purine-Nucleoside Phosphorylase metabolism, Skin Neoplasms pathology, Skin Neoplasms therapy, Treatment Outcome, bcl-2-Associated X Protein metabolism, bcl-X Protein metabolism, beta Catenin metabolism, Biomarkers, Tumor metabolism, Melanoma metabolism, Skin Neoplasms metabolism, Tissue Array Analysis methods
- Abstract
Background: Current staging methods such as tumor thickness, ulceration and invasion of the sentinel node are known to be prognostic parameters in patients with malignant melanoma (MM). However, predictive molecular marker profiles for risk stratification and therapy optimization are not yet available for routine clinical assessment., Methods and Findings: Using tissue microarrays, we retrospectively analyzed samples from 364 patients with primary MM. We investigated a panel of 70 immunohistochemical (IHC) antibodies for cell cycle, apoptosis, DNA mismatch repair, differentiation, proliferation, cell adhesion, signaling and metabolism. A marker selection procedure based on univariate Cox regression and multiple testing correction was employed to correlate the IHC expression data with the clinical follow-up (overall and recurrence-free survival). The model was thoroughly evaluated with two different cross validation experiments, a permutation test and a multivariate Cox regression analysis. In addition, the predictive power of the identified marker signature was validated on a second independent external test cohort (n=225). A signature of seven biomarkers (Bax, Bcl-X, PTEN, COX-2, loss of β-Catenin, loss of MTAP, and presence of CD20 positive B-lymphocytes) was found to be an independent negative predictor for overall and recurrence-free survival in patients with MM. The seven-marker signature could also predict a high risk of disease recurrence in patients with localized primary MM stage pT1-2 (tumor thickness ≤2.00 mm). In particular, three of these markers (MTAP, COX-2, Bcl-X) were shown to offer direct therapeutic implications., Conclusions: The seven-marker signature might serve as a prognostic tool enabling physicians to selectively triage, at the time of diagnosis, the subset of high recurrence risk stage I-II patients for adjuvant therapy. Selective treatment of those patients that are more likely to develop distant metastatic disease could potentially lower the burden of untreatable metastatic melanoma and revolutionize the therapeutic management of MM.
- Published
- 2012
- Full Text
- View/download PDF
44. Anisotropic ssTEM image segmentation using dense correspondence across sections.
- Author
-
Laptev D, Vezhnevets A, Dwivedi S, and Buhmann JM
- Subjects
- Algorithms, Anisotropy, Brain Mapping methods, Diagnostic Imaging methods, Humans, Image Processing, Computer-Assisted, Microscopy, Electron methods, Neuroanatomy methods, Reproducibility of Results, Brain pathology, Microscopy, Electron, Transmission methods
- Abstract
Connectomics based on high resolution ssTEM imagery requires reconstruction of the neuron geometry from histological slides. We present an approach for the automatic membrane segmentation in anisotropic stacks of electron microscopy brain tissue sections. The ambiguities in neuronal segmentation of a section are resolved by using the context from the neighboring sections. We find the global dense correspondence between the sections by the SIFT Flow algorithm, evaluate the features of the corresponding pixels and use them to perform the segmentation. Our method is 3.6% and 6.4% more accurate, under two different accuracy metrics, than the same algorithm run without context from the other sections.
- Published
- 2012
- Full Text
- View/download PDF
45. Computational pathology: challenges and promises for tissue analysis.
- Author
-
Fuchs TJ and Buhmann JM
- Subjects
- Algorithms, Automation, Diagnostic Imaging methods, Diagnostic Imaging standards, Genomics, Humans, Neoplasms pathology, Prognosis, Proteomics, Software Design, Survival Analysis, Image Interpretation, Computer-Assisted methods, Image Interpretation, Computer-Assisted standards, Pathology, Clinical methods
- Abstract
The histological assessment of human tissue has emerged as the key challenge for detection and treatment of cancer. A plethora of different data sources ranging from tissue microarray data to gene expression, proteomics or metabolomics data provide a detailed overview of the health status of a patient. Medical doctors need to assess these information sources, and they rely on data-driven automatic analysis tools. Methods for classification, grouping and segmentation of heterogeneous data sources, as well as regression of noisy dependencies and estimation of survival probabilities, enter the processing workflow of a pathology diagnosis system at various stages. This paper reports on the state of the art in the design and effectiveness of computational pathology workflows, and it discusses future research directions in this emergent field of medical informatics and diagnostic machine learning., (Copyright © 2011 Elsevier Ltd. All rights reserved.)
- Published
- 2011
- Full Text
- View/download PDF
46. Generative embedding for model-based classification of fMRI data.
- Author
-
Brodersen KH, Schofield TM, Leff AP, Ong CS, Lomakina EI, Buhmann JM, and Stephan KE
- Subjects
- Adult, Aged, Bayes Theorem, Brain pathology, Databases, Factual, Humans, Male, Middle Aged, Models, Neurological, Nervous System Diseases diagnosis, Nervous System Diseases physiopathology, Pattern Recognition, Automated, Principal Component Analysis, Reproducibility of Results, Speech Perception, Algorithms, Aphasia physiopathology, Brain physiopathology, Computational Biology methods, Magnetic Resonance Imaging
- Abstract
Decoding models, such as those underlying multivariate classification algorithms, have been increasingly used to infer cognitive or clinical brain states from measures of brain activity obtained by functional magnetic resonance imaging (fMRI). The practicality of current classifiers, however, is restricted by two major challenges. First, due to the high data dimensionality and low sample size, algorithms struggle to separate informative from uninformative features, resulting in poor generalization performance. Second, popular discriminative methods such as support vector machines (SVMs) rarely afford mechanistic interpretability. In this paper, we address these issues by proposing a novel generative-embedding approach that incorporates neurobiologically interpretable generative models into discriminative classifiers. Our approach extends previous work on trial-by-trial classification for electrophysiological recordings to subject-by-subject classification for fMRI and offers two key advantages over conventional methods: it may provide more accurate predictions by exploiting discriminative information encoded in 'hidden' physiological quantities such as synaptic connection strengths; and it affords mechanistic interpretability of clinical classifications. Here, we introduce generative embedding for fMRI using a combination of dynamic causal models (DCMs) and SVMs. We propose a general procedure of DCM-based generative embedding for subject-wise classification, provide a concrete implementation, and suggest good-practice guidelines for unbiased application of generative embedding in the context of fMRI. We illustrate the utility of our approach by a clinical example in which we classify moderately aphasic patients and healthy controls using a DCM of thalamo-temporal regions during speech processing. Generative embedding achieves a near-perfect balanced classification accuracy of 98% and significantly outperforms conventional activation-based and correlation-based methods. This example demonstrates how disease states can be detected with very high accuracy and, at the same time, be interpreted mechanistically in terms of abnormalities in connectivity. We envisage that future applications of generative embedding may provide crucial advances in dissecting spectrum disorders into physiologically better-defined subgroups.
- Published
- 2011
- Full Text
- View/download PDF
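The core idea of generative embedding, fitting a per-subject generative model and then classifying subjects in the space of its parameters, can be caricatured with a simple AR(1) model standing in for the DCM. Everything below is simulated and hypothetical:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def ar1_coeff(ts):
    """Least-squares AR(1) coefficient of a time series -- a minimal
    'generative' parameter standing in for DCM connection strengths."""
    return float(np.dot(ts[:-1], ts[1:]) / np.dot(ts[:-1], ts[:-1]))

def simulate(a, n=200):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + rng.normal()
    return x

# Two hypothetical subject groups whose simulated dynamics differ.
subjects = [simulate(0.3) for _ in range(20)] + [simulate(0.7) for _ in range(20)]
y = np.r_[np.zeros(20), np.ones(20)]

# Generative embedding: subject-by-subject classification in parameter space.
X = np.array([[ar1_coeff(s)] for s in subjects])
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
print(f"cross-validated accuracy in parameter space: {acc:.2f}")
```

The fitted parameter space is low-dimensional and model-based, which is what makes the resulting classifier both accurate and mechanistically interpretable.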
47. Model-based feature construction for multivariate decoding.
- Author
-
Brodersen KH, Haiss F, Ong CS, Jung F, Tittgemeyer M, Buhmann JM, Weber B, and Stephan KE
- Subjects
- Animals, Rats, Brain physiology, Computer Simulation, Models, Neurological
- Abstract
Conventional decoding methods in neuroscience aim to predict discrete brain states from multivariate correlates of neural activity. This approach faces two important challenges. First, a small number of examples are typically represented by a much larger number of features, making it hard to select the few informative features that allow for accurate predictions. Second, accuracy estimates and information maps often remain descriptive and can be hard to interpret. In this paper, we propose a model-based decoding approach that addresses both challenges from a new angle. Our method involves (i) inverting a dynamic causal model of neurophysiological data in a trial-by-trial fashion; (ii) training and testing a discriminative classifier on a strongly reduced feature space derived from trial-wise estimates of the model parameters; and (iii) reconstructing the separating hyperplane. Since the approach is model-based, it provides a principled dimensionality reduction of the feature space; in addition, if the model is neurobiologically plausible, decoding results may offer a mechanistically meaningful interpretation. The proposed method can be used in conjunction with a variety of modelling approaches and brain data, and supports decoding of either trial or subject labels. Moreover, it can supplement evidence-based approaches for model-based decoding and enable structural model selection in cases where Bayesian model selection cannot be applied. Here, we illustrate its application using dynamic causal modelling (DCM) of electrophysiological recordings in rodents. We demonstrate that the approach achieves significant above-chance performance and, at the same time, allows for a neurobiological interpretation of the results., (Copyright © 2010 Elsevier Inc. All rights reserved.)
- Published
- 2011
- Full Text
- View/download PDF
48. Proteome coverage prediction for integrated proteomics datasets.
- Author
-
Claassen M, Aebersold R, and Buhmann JM
- Subjects
- Bacterial Proteins metabolism, Databases, Protein, Mass Spectrometry methods, Models, Biological, Proteome metabolism, Bacterial Proteins analysis, Leptospira interrogans metabolism, Proteome analysis, Proteomics methods
- Abstract
Comprehensive characterization of a proteome defines a fundamental goal in proteomics. In order to maximize proteome coverage for a complex protein mixture, i.e., to identify as many proteins as possible, various different fractionation experiments are typically performed and the individual fractions are subjected to mass spectrometric analysis. The resulting data are integrated into large and heterogeneous datasets. Proteome coverage prediction refers to the task of extrapolating the number of protein discoveries by future measurements conditioned on a sequence of already performed measurements. Proteome coverage prediction at an early stage enables experimentalists to design and plan efficient proteomics studies. To date, there does not exist any method that reliably predicts proteome coverage from integrated datasets. We present a generalized hierarchical Pitman-Yor process model that explicitly captures the redundancy within integrated datasets. The accuracy of our approach for proteome coverage prediction is assessed by applying it to an integrated proteomics dataset for the bacterium L. interrogans. The proposed procedure outperforms ad hoc extrapolation methods and prediction methods designed for non-integrated datasets. Furthermore, the maximally achievable proteome coverage is estimated for the experimental setup underlying the L. interrogans dataset. We discuss the implications of our results for determining rational stop criteria and their influence on the design of efficient and reliable proteomics studies.
- Published
- 2011
- Full Text
- View/download PDF
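The saturation behavior that makes coverage prediction useful can be illustrated with a plain (non-hierarchical, non-generalized) Pitman-Yor simulation; theta and alpha below are arbitrary illustrative values, not fitted to any dataset:

```python
import numpy as np

def expected_discoveries(n, theta, alpha, n_sims=200, seed=0):
    """Monte-Carlo estimate of the expected number of distinct proteins
    after n identifications under a plain Pitman-Yor process (Chinese-
    restaurant scheme with strength theta and discount alpha).  The
    probability of a new discovery shrinks as coverage accumulates."""
    rng = np.random.default_rng(seed)
    ks = []
    for _ in range(n_sims):
        k = 0
        for i in range(n):
            if rng.random() < (theta + alpha * k) / (theta + i):
                k += 1
        ks.append(k)
    return float(np.mean(ks))

k1 = expected_discoveries(1000, theta=50.0, alpha=0.5)
k2 = expected_discoveries(2000, theta=50.0, alpha=0.5)
# Doubling the measurements yields far fewer than twice the discoveries.
print(f"~{k1:.0f} distinct after 1000 IDs, ~{k2:.0f} after 2000")
```

Extrapolating this curve forward, conditioned on the measurements already made, is exactly the proteome coverage prediction task; the paper's hierarchical model additionally captures the redundancy across fractionation experiments.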
49. Infinite mixture-of-experts model for sparse survival regression with application to breast cancer.
- Author
-
Raman S, Fuchs TJ, Wild PJ, Dahl E, Buhmann JM, and Roth V
- Subjects
- Bayes Theorem, Breast Neoplasms diagnosis, Cluster Analysis, Cohort Studies, Computer Simulation, Databases, Factual, Female, Humans, Kaplan-Meier Estimate, Markov Chains, Monte Carlo Method, Prognosis, Proportional Hazards Models, Reproducibility of Results, Breast Neoplasms mortality, Models, Statistical, Regression Analysis
- Abstract
Background: We present an infinite mixture-of-experts model to find an unknown number of sub-groups within a given patient cohort based on survival analysis. The effect of patient features on survival is modeled using Cox's proportional hazards model, which yields a non-standard regression component. The model is able to find key explanatory factors (chosen from main effects and higher-order interactions) for each sub-group by enforcing sparsity on the regression coefficients via the Bayesian Group-Lasso., Results: Simulated examples justify the need for such an elaborate framework for identifying sub-groups along with their key characteristics, compared with other, simpler models. When applied to a breast-cancer dataset consisting of survival times and protein expression levels of patients, the model identifies two distinct sub-groups with different survival patterns (low-risk and high-risk), along with the respective sets of compound markers., Conclusions: The unified framework presented here, combining elements of cluster and feature detection for survival analysis, is clearly a powerful tool for analyzing survival patterns within a patient group. The model also demonstrates the feasibility of analyzing complex interactions which can contribute to the definition of novel prognostic compound markers.
- Published
- 2010
- Full Text
- View/download PDF
50. Fully automatic stitching and distortion correction of transmission electron microscope images.
- Author
-
Kaynig V, Fischer B, Müller E, and Buhmann JM
- Subjects
- Models, Theoretical, Image Processing, Computer-Assisted methods, Microscopy, Electron, Transmission methods
- Abstract
In electron microscopy, a large field of view is commonly captured by taking several images of a sample region and stitching these images together. Non-linear lens distortions induced by the electromagnetic lenses of the microscope make seamless stitching with linear transformations impossible. This problem is aggravated by the large CCD cameras that are commonly in use nowadays. We propose a new calibration method, based on ridge regression, that compensates for non-linear lens distortions while ensuring that the geometry of the image is preserved. Our method estimates the distortion correction from overlapping image areas using automatically extracted correspondence points. Therefore, the estimation of the correction transform does not require any special calibration samples. We evaluate our method on simulated ground truth data as well as on real electron microscopy data. Our experiments demonstrate that the lens calibration robustly corrects large distortions, reducing an average stitching error exceeding 10 pixels to sub-pixel accuracy within two iteration steps., (Copyright 2010 Elsevier Inc. All rights reserved.)
- Published
- 2010
- Full Text
- View/download PDF
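The regression step of such a calibration can be sketched as follows: fit a polynomial correction transform from correspondence points by ridge regression. This is a toy construction, not the paper's method (the synthetic distortion below is deliberately built so that its correction is exactly cubic, and the geometry-preserving constraint is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical correspondence points from overlapping image areas.
obs = rng.uniform(-1.0, 1.0, size=(200, 2))       # observed (distorted)
r2 = (obs ** 2).sum(axis=1, keepdims=True)
true = obs * (1.0 - 0.05 * r2)                    # reference positions

def basis(p):
    """Cubic polynomial basis for the correction transform."""
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(x), x, y,
                            x**2, x*y, y**2,
                            x**3, x**2*y, x*y**2, y**3])

# Ridge regression in closed form, one output column per coordinate;
# the small penalty stabilizes the least-squares fit.
A = basis(obs)
lam = 1e-8
W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ true)

before = np.linalg.norm(obs - true, axis=1).mean()
after = np.linalg.norm(A @ W - true, axis=1).mean()
print(f"mean correspondence error before: {before:.4f}, after: {after:.2e}")
```

Because the correction is estimated purely from automatically extracted correspondences, no dedicated calibration sample is needed, which mirrors the key practical point of the abstract.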