35 results
Search Results
2. Workflows to Driving High-Performance Interactive Supercomputing for Urgent Decision Making
- Abstract
Interactive urgent computing is a small but growing user of supercomputing resources. However, there are numerous technical challenges that must be overcome to make supercomputers fully suited to the wide range of urgent workloads that could benefit from the computational power delivered by such instruments. An important question is how to connect the different components of an urgent workload, namely the users, the simulation codes, and external data sources, in a structured and accessible manner. In this paper we explore the role of workflows both from the perspective of marshalling and control of urgent workloads, and at the individual HPC machine level, ultimately requiring two workflow systems. Using an urgent space weather prediction use-case, we explore the benefit that these two workflow systems provide, especially when one exploits the flexibility enabled by their interoperation.
- Published
- 2022
- Full Text
- View/download PDF
3. A survey of HPC algorithms and frameworks for large-scale gradient-based nonlinear optimization
- Abstract
Large-scale numerical optimization problems arise from many fields and have applications in both industrial and academic contexts. Finding solutions to such optimization problems efficiently requires algorithms that are able to leverage the increasing parallelism available in modern computing hardware. In this paper, we review previous work on parallelizing algorithms for nonlinear optimization. To introduce the topic, the paper starts by giving an accessible introduction to nonlinear optimization and high-performance computing. This is followed by a survey of previous work on parallelization and utilization of high-performance computing hardware for nonlinear optimization algorithms. Finally, we present a number of optimization software libraries and how they are able to utilize parallel computing today. This study can serve as an introduction point for researchers interested in nonlinear optimization or high-performance computing, as well as provide ideas and inspiration for future work combining these topics.
- Published
- 2022
- Full Text
- View/download PDF
4. Rethinking Computer-Aided Architectural Design (CAAD) - From Generative Algorithms and Architectural Intelligence to Environmental Design and Ambient Intelligence
- Abstract
Computer-Aided Architectural Design (CAAD) finds its historical precedents in technological enthusiasm for generative algorithms and architectural intelligence. Current developments in Artificial Intelligence (AI) and paradigms in Machine Learning (ML) bring new opportunities for creating innovative digital architectural tools, but in practice this is not happening. CAAD enthusiasts revisit generative algorithms, while professional architects and urban designers remain reluctant to use software that automatically generates architecture and cities. This paper looks at the history of CAAD and digital tools for Computer-Aided Design (CAD), Building Information Modeling (BIM) and Geographic Information Systems (GIS) in order to reflect on the role of AI in future digital tools and professional practices. Architects and urban designers have diagrammatic knowledge and work with design problems on a symbolic level. Digital tools gradually evolved from CAD to BIM software with symbolic architectural elements. BIM software works like CAAD (CAD systems for architects) or a digital drawing board, delivering plans, sections and elevations, but without AI. AI has the capability to process data and interact with designers. AI in future digital tools for CAAD and Computer-Aided Urban Design (CAUD) can link to big data and develop ambient intelligence. Architects and urban designers can harness the benefits of analytical, ambient-intelligent AIs in creating environmental designs, not only for shaping buildings in isolated virtual cubicles. However, there is a need to prepare frameworks for communication between AIs and professional designers. If the cities of the future are to integrate spatially analytical AI and be made smart or even ambient intelligent, AI should be applied to improving the lives of inhabitants and helping with their daily living and sustainability.
- Published
- 2022
- Full Text
- View/download PDF
5. Scale-covariant and scale-invariant Gaussian derivative networks
- Abstract
This paper presents a hybrid approach between scale-space theory and deep learning, where a deep learning architecture is constructed by coupling parameterized scale-space operations in cascade. By sharing the learnt parameters between multiple scale channels, and by using the transformation properties of the scale-space primitives under scaling transformations, the resulting network becomes provably scale covariant. By in addition performing max pooling over the multiple scale channels, a resulting network architecture for image classification also becomes provably scale invariant. We investigate the performance of such networks on the MNIST Large Scale dataset, which contains rescaled images from the original MNIST dataset over a factor of 4 concerning training data and over a factor of 16 concerning testing data. It is demonstrated that the resulting approach allows for scale generalization, enabling good performance for classifying patterns at scales not spanned by the training data. Part of proceedings: ISBN 978-3-030-75548-5. Project: Scale-space theory for covariant and invariant visual perception.
- Published
- 2021
- Full Text
- View/download PDF
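The scale-channel construction described in this abstract can be illustrated with a minimal numerical sketch: the same scale-normalized Gaussian derivative operation is applied in several scale channels (shared parameters), and max pooling over the channels yields an approximately scale-invariant scalar. This is an illustrative 1-D toy, not the paper's network; the function names and the particular set of scales are hypothetical choices.

```python
import numpy as np

def gaussian_deriv_response(signal, sigma):
    """Response of a first-order Gaussian derivative filter at scale sigma,
    multiplied by sigma so that responses are comparable across scales
    (scale-normalized derivatives, gamma = 1)."""
    n = len(signal)
    x = np.arange(n) - n // 2
    # derivative of a normalized Gaussian, then scale-normalized by sigma
    kernel = -x / (sigma**3 * np.sqrt(2 * np.pi)) * np.exp(-x**2 / (2 * sigma**2))
    kernel *= sigma
    return np.convolve(signal, kernel, mode="same")

def scale_invariant_feature(signal, sigmas=(1, 2, 4, 8)):
    """Max-pool the peak filter response over a set of scale channels.
    Sharing the kernel shape across channels makes the channel stack scale
    covariant; taking the max over channels makes the scalar output
    approximately scale invariant."""
    responses = [np.abs(gaussian_deriv_response(signal, s)).max() for s in sigmas]
    return max(responses)
```

For a Gaussian blob of width w, the scale-normalized response peaks at sigma = w with the same value regardless of w, which is why the max over channels changes little when the input is rescaled.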
10. Explanation-Based Weakly-Supervised Learning of Visual Relations with Graph Networks
- Abstract
Visual relationship detection is fundamental for holistic image understanding. However, the localization and classification of (subject, predicate, object) triplets remain challenging tasks, due to the combinatorial explosion of possible relationships, their long-tailed distribution in natural images, and an expensive annotation process. This paper introduces a novel weakly-supervised method for visual relationship detection that relies on minimal image-level predicate labels. A graph neural network is trained to classify predicates in images from a graph representation of detected objects, implicitly encoding an inductive bias for pairwise relations. We then frame relationship detection as the explanation of such a predicate classifier, i.e., we obtain a complete relation by recovering the subject and object of a predicted predicate. We present results comparable to recent fully- and weakly-supervised methods on three diverse and challenging datasets: HICO-DET for human-object interaction, Visual Relationship Detection for generic object-to-object relations, and UnRel for unusual triplets, demonstrating robustness to non-comprehensive annotations and good few-shot generalization. Part of proceedings: ISBN 9783030586034.
- Published
- 2020
- Full Text
- View/download PDF
15. Detection of Ischemic Infarct Core in Non-contrast Computed Tomography
- Abstract
Fast diagnosis is of critical importance for stroke treatment. In clinical routine, a non-contrast computed tomography (NCCT) scan is typically acquired immediately to determine whether the stroke is ischemic or hemorrhagic and plan therapy accordingly. In case of ischemia, early signs of infarction may appear due to increased water uptake. These signs may be subtle, especially if observed only shortly after symptom onset, but hold the potential to provide a crucial first assessment of the location and extent of the infarction. In this paper, we train a deep neural network to predict the infarct core from NCCT in an image-to-image fashion. To facilitate exploitation of anatomic correspondences, learning is carried out in the standardized coordinate system of a brain atlas to which all images are deformably registered. Apart from binary infarct core masks, perfusion maps such as cerebral blood volume and flow are employed as additional training targets to enrich the physiologic information available to the model. This extension is demonstrated to substantially improve the predictions of our model, which is trained on a data set consisting of 141 cases. It achieves a statistically significantly higher volumetric overlap of the predicted core with the reference mask as well as a better localization, although significance could not be shown for the latter. Agreement with human and automatic assessment of affected ASPECTS regions is likewise improved, measured as an increase of the area under the receiver operating characteristic curve from 72.7% to 75.1% and from 71.9% to 83.5%, respectively.
- Published
- 2020
- Full Text
- View/download PDF
16. Multi-GPU acceleration of the iPIC3D implicit particle-in-cell code
- Abstract
iPIC3D is a widely used massively parallel Particle-in-Cell code for the simulation of space plasmas. However, its current implementation does not support execution on multiple GPUs. In this paper, we describe the porting of the iPIC3D particle mover to GPUs and the optimization steps taken to increase its performance and parallel scaling on multiple GPUs. We analyze the strong scaling of the mover on two GPU clusters and evaluate its performance and acceleration. The optimized GPU version, which uses pinned memory and asynchronous data prefetching, outperforms the corresponding CPU version by 5-10x on two different systems equipped with NVIDIA K80 and V100 GPUs.
- Published
- 2019
- Full Text
- View/download PDF
17. A mixture-of-experts model for vehicle prediction using an online learning approach
- Abstract
Predicting the future motion of other vehicles or, more generally, the development of traffic situations, is an essential step towards safe, context-aware automated driving. On the one hand, human drivers are able to anticipate driving situations continuously based on the currently perceived behavior of other traffic participants while incorporating prior experience. On the other hand, the most successful data-driven prediction models are typically trained on large amounts of recorded data before deployment, achieving remarkable results. In this paper, we present a mixture-of-experts online learning model that encapsulates both ideas. Our system learns at run time to choose between several models, which have been previously trained offline, based on the current situational context. We show that our model is able to improve over the offline models after only a short ramp-up phase. We evaluate our system on real-world driving data.
- Published
- 2019
- Full Text
- View/download PDF
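The run-time expert selection described in this abstract resembles a classic exponentially weighted (Hedge-style) forecaster over fixed, pretrained experts. The sketch below is a generic online-learning illustration under that assumption, not the authors' model; the class name, the squared-error loss, and the learning rate are all hypothetical choices.

```python
import numpy as np

class OnlineMixtureOfExperts:
    """Exponentially weighted forecaster over fixed, pretrained experts.

    Weights are updated at run time from each expert's observed loss,
    so the mixture shifts toward whichever expert fits the current
    situational context best."""

    def __init__(self, experts, eta=1.0):
        self.experts = experts                      # callables: context -> prediction
        self.eta = eta                              # learning rate
        self.w = np.ones(len(experts)) / len(experts)

    def predict(self, context):
        """Weighted combination of the expert predictions."""
        preds = np.array([e(context) for e in self.experts])
        return float(np.dot(self.w, preds)), preds

    def update(self, preds, truth):
        """Multiplicative-weights update from each expert's squared error."""
        losses = (preds - truth) ** 2
        self.w *= np.exp(-self.eta * losses)
        self.w /= self.w.sum()
```

After a short ramp-up the weight mass concentrates on the expert whose offline training best matches the observed data, which is the behavior the abstract reports.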
18. Scale-covariant and scale-invariant Gaussian derivative networks
- Abstract
This paper presents a hybrid approach between scale-space theory and deep learning, where a deep learning architecture is constructed by coupling parameterized scale-space operations in cascade. By sharing the learnt parameters between multiple scale channels, and by using the transformation properties of the scale-space primitives under scaling transformations, the resulting network becomes provably scale covariant. By in addition performing max pooling over the multiple scale channels, or other permutation-invariant pooling over scales, a resulting network architecture for image classification also becomes provably scale invariant. We investigate the performance of such networks on the MNIST Large Scale dataset, which contains rescaled images from the original MNIST dataset over a factor of 4 concerning training data and over a factor of 16 concerning testing data. It is demonstrated that the resulting approach allows for scale generalization, enabling good performance for classifying patterns at scales not spanned by the training data. Project: Scale-space theory for covariant and invariant visual perception.
- Published
- 2022
- Full Text
- View/download PDF
23. Fast Electromagnetic Field Pattern Calculation with Fourier Neural Operators
- Abstract
Calculating the field pattern arising from an array of radiating sources is a central problem in Computational ElectroMagnetics (CEM) and a critical operation for designing and developing antenna systems. Yet, it is a computationally expensive operation when using traditional numerical approaches, including finite-difference methods in the time and spectral domains. To address this issue, we develop a new data-driven surrogate model for fast and accurate calculation of the field radiation pattern. The method is based on the Fourier Neural Operator (FNO) technique. We show that we achieve a 31x performance improvement over the Meep CEM solver running on a laptop CPU, at the cost of a small accuracy loss. Part of proceedings: ISBN 9783031360206.
- Published
- 2023
- Full Text
- View/download PDF
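The core building block of the FNO technique named in this abstract is a spectral convolution: transform to Fourier space, scale a truncated set of low-frequency modes by learned weights, and transform back. The 1-D numpy sketch below illustrates only that operation, not the paper's surrogate model; the function name and the `weights` parameter are hypothetical stand-ins for learned quantities.

```python
import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """One FNO-style spectral convolution layer (illustrative):
    FFT the input, multiply the lowest n_modes Fourier modes by
    (hypothetically learned) complex weights, zero the rest, inverse FFT."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)           # high modes are truncated
    out_hat[:n_modes] = u_hat[:n_modes] * weights
    return np.fft.irfft(out_hat, n=len(u))
```

Because the whole operation is two FFTs and a pointwise multiply, inference cost scales as O(n log n) per layer, which is what makes FNO surrogates fast relative to conventional field solvers.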
24. Brain-like Combination of Feedforward and Recurrent Network Components Achieves Prototype Extraction and Robust Pattern Recognition
- Abstract
Associative memory has been a prominent candidate for the computation performed by the massively recurrent neocortical networks. Attractor networks implementing associative memory have offered mechanistic explanations for many cognitive phenomena. However, attractor memory models are typically trained using orthogonal or random patterns to avoid interference between memories, which makes them infeasible for naturally occurring complex correlated stimuli like images. We approach this problem by combining a recurrent attractor network with a feedforward network that learns distributed representations using an unsupervised Hebbian-Bayesian learning rule. The resulting network model incorporates many known biological properties: unsupervised learning, Hebbian plasticity, sparse distributed activations, sparse connectivity, columnar and laminar cortical architecture, etc. We evaluate the synergistic effects of the feedforward and recurrent network components in complex pattern recognition tasks on the MNIST handwritten digits dataset. We demonstrate that the recurrent attractor component implements associative memory when trained on the feedforward-driven internal (hidden) representations. The associative memory is also shown to perform prototype extraction from the training data and to make the representations robust to severely distorted input. We argue that several aspects of the proposed integration of feedforward and recurrent computations are particularly attractive from a machine learning perspective.
- Published
- 2023
- Full Text
- View/download PDF
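The attractor-memory recall that this abstract builds on can be shown with the simplest possible example: a Hopfield-style network with Hebbian outer-product learning, far simpler than the paper's Hebbian-Bayesian model, but exhibiting the same recall-from-distorted-input principle. All names below are illustrative.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product learning of an attractor (associative) memory.
    `patterns` is an array whose rows are +/-1 patterns."""
    P = np.asarray(patterns, dtype=float)
    n = P.shape[1]
    W = P.T @ P / n                 # Hebbian correlation weights
    np.fill_diagonal(W, 0.0)        # no self-connections
    return W

def recall(W, probe, steps=10):
    """Iterate the attractor dynamics from a distorted probe pattern."""
    s = probe.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0             # break ties deterministically
    return s
```

With few stored random patterns relative to the number of units, a probe with a moderate fraction of flipped bits falls into the basin of the stored pattern and is cleaned up, which is the "robustness to severely distorted input" the abstract describes.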
25. Breaking Down the Parallel Performance of GROMACS, a High-Performance Molecular Dynamics Software
- Abstract
GROMACS is one of the most widely used HPC software packages using the Molecular Dynamics (MD) simulation technique. In this work, we quantify GROMACS parallel performance using different configurations, HPC systems, and FFT libraries (FFTW, Intel MKL FFT, and FFTPACK). We break down the cost of each GROMACS computational phase and identify non-scalable stages, such as MPI communication during the 3D FFT computation when using a large number of processes. We show that the Particle-Mesh Ewald phase and the 3D FFT calculation significantly impact GROMACS performance. Finally, we discuss performance opportunities, with particular interest in improving the FFT calculations in GROMACS.
- Published
- 2023
- Full Text
- View/download PDF
26. Distributed Objective Function Evaluation for Optimization of Radiation Therapy Treatment Plans
- Abstract
The modern workflow for radiation therapy treatment planning involves mathematical optimization to determine optimal treatment machine parameters for each patient case. The optimization problems can be computationally expensive, requiring iterative optimization algorithms to solve. In this work, we investigate a method for distributing the calculation of objective functions and gradients for radiation therapy optimization problems across computational nodes. We test our approach on the TROTS dataset, which consists of optimization problems from real clinical patient cases, using the IPOPT optimization solver in a leader/follower type approach for parallelization. We show that our approach can utilize multiple computational nodes efficiently, with a speedup of approximately 2-3.5x compared to the serial version.
- Published
- 2023
- Full Text
- View/download PDF
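The leader/follower distribution of objective and gradient evaluation described above can be sketched as follows. This is only an illustration under assumptions: the real TROTS objectives are dose-based clinical functions evaluated over MPI across nodes, whereas here the per-structure terms are toy quadratics and the "followers" are local threads.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def partial_obj(A_i, b_i, x):
    """Value and gradient of one hypothetical per-structure term 0.5*|A_i x - b_i|^2."""
    r = A_i @ x - b_i
    return 0.5 * float(r @ r), A_i.T @ r

def objective(x, blocks):
    """Leader: scatter x to followers, then reduce partial values and gradients."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        parts = list(pool.map(lambda ab: partial_obj(ab[0], ab[1], x), blocks))
    return sum(p[0] for p in parts), sum(p[1] for p in parts)

rng = np.random.default_rng(1)
blocks = [(rng.standard_normal((20, 5)), rng.standard_normal(20))
          for _ in range(8)]
f, g = objective(np.zeros(5), blocks)  # solver calls this each iteration
```

A solver such as IPOPT only needs the reduced `(f, g)` pair, so the distribution is transparent to the optimization algorithm itself.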
27. Notes on Percolation Analysis of Sampled Scalar Fields
- Abstract
Percolation analysis is used to explore the connectivity of randomly connected infinite graphs. In the finite case, a closely related percolation function captures the relative volume of the largest connected component in a scalar field’s superlevel set. While prior work has shown that random scalar fields with little spatial correlation yield a sharp transition in this function, little is known about its behavior on real data. In this work, we explore how different characteristics of a scalar field—such as its histogram or degree of structure—influence the shape of the percolation function. We estimate the critical value and transition width of the percolation function, and propose a corresponding normalization scheme that relates these values to known results on infinite graphs. In our experiments, we find that percolation analysis can be used to analyze the degree of structure in Gaussian random fields. On a simulated turbulent duct flow data set we observe that the critical values are stable and consistent across time. Our normalization scheme indeed aids comparison between data sets and relation to infinite graphs.
- Published
- 2021
- Full Text
- View/download PDF
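The percolation function described in the abstract above can be sketched for a sampled 2D field: threshold the field, label the connected components of the superlevel set, and take the largest component's relative volume. The normalization (relative to the superlevel set's own volume), the grid size, and the threshold sweep are assumptions of this sketch, not necessarily the paper's exact definitions.

```python
import numpy as np
from scipy.ndimage import label

def percolation_function(field, thresholds):
    """P(t) = |largest connected component of {field >= t}| / |{field >= t}|."""
    values = []
    for t in thresholds:
        mask = field >= t
        n = int(mask.sum())
        if n == 0:
            values.append(0.0)
            continue
        labels, _ = label(mask)                       # connected components
        largest = int(np.bincount(labels.ravel())[1:].max())
        values.append(largest / n)
    return np.array(values)

rng = np.random.default_rng(2)
field = rng.standard_normal((64, 64))   # spatially uncorrelated Gaussian field
ts = np.linspace(-3, 3, 25)
P = percolation_function(field, ts)
# For fields with little spatial correlation, P drops sharply from ~1 to
# near 0 around a critical threshold, as the prior work cited above shows.
```

Estimating where this drop occurs and how wide it is gives the critical value and transition width the abstract refers to.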
32. Orthogonal Mixture of Hidden Markov Models
- Abstract
Mixtures of Hidden Markov Models (MHMM) are widely used for clustering of sequential data, by letting each cluster correspond to a Hidden Markov Model (HMM). Expectation Maximization (EM) is the standard approach for learning the parameters of an MHMM. However, due to the non-convexity of the objective function, EM can converge to poor local optima. To tackle this problem, we propose a novel method, the Orthogonal Mixture of Hidden Markov Models (oMHMM), which aims to direct the search away from candidate solutions that include very similar HMMs, since those do not fully exploit the power of the mixture model. The directed search is achieved by including a penalty in the objective function that favors higher orthogonality between the transition matrices of the HMMs. Experimental results on both simulated and real-world datasets show that the oMHMM consistently finds equally good or better local optima than the standard EM for an MHMM; for some datasets, the clustering performance is significantly improved by our novel oMHMM (up to 55 percentage points w.r.t. the v-measure). Moreover, the oMHMM may also decrease the computational cost substantially, reducing the number of iterations down to a fifth of those required by an MHMM using standard EM. Conference ISBN: 978-3-030-67658-2; 978-3-030-67657-5.
- Published
- 2021
- Full Text
- View/download PDF
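One simple way to quantify the "orthogonality between transition matrices" that the oMHMM abstract penalizes is the sum of pairwise Frobenius inner products: it is zero exactly when the matrices have disjoint support. The exact penalty form in the paper may differ; this is a hedged sketch of the idea, not the published objective.

```python
import numpy as np

def orthogonality_penalty(transitions):
    """Sum of pairwise Frobenius inner products <A_i, A_j> between the
    component HMMs' transition matrices; smaller means more orthogonal
    (i.e. more dissimilar) components."""
    total = 0.0
    for i in range(len(transitions)):
        for j in range(i + 1, len(transitions)):
            total += float(np.sum(transitions[i] * transitions[j]))
    return total

A1 = np.array([[1.0, 0.0], [0.0, 1.0]])   # HMM that stays in its state
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])   # HMM that always switches state
print(orthogonality_penalty([A1, A2]))    # 0.0: fully orthogonal pair
print(orthogonality_penalty([A1, A1]))    # 2.0: identical HMMs, penalized
```

Adding such a term to the EM objective steers the search away from mixtures whose components collapse onto near-identical HMMs.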
33. Brain-Like Approaches to Unsupervised Learning of Hidden Representations - A Comparative Study
- Abstract
Unsupervised learning of hidden representations has been one of the most vibrant research directions in machine learning in recent years. In this work we study the brain-like Bayesian Confidence Propagating Neural Network (BCPNN) model, recently extended to extract sparse distributed high-dimensional representations. The usefulness and class-dependent separability of the hidden representations when trained on the MNIST and Fashion-MNIST datasets are studied using an external linear classifier and compared with other unsupervised learning methods, including restricted Boltzmann machines and autoencoders. Part of proceedings: ISBN 978-3-030-86383-8.
- Published
- 2021
- Full Text
- View/download PDF
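The evaluation protocol in the entry above (an external linear classifier probing the hidden representations) can be sketched as follows. The representations here are synthetic sparse codes built from hypothetical class prototypes; in the paper they come from BCPNN trained on MNIST/Fashion-MNIST.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in hidden representations: each class is a sparse binary prototype
# with a few bits flipped per sample (all sizes/densities are assumptions).
rng = np.random.default_rng(3)
n_per_class, dim, n_classes = 100, 50, 3
H, y = [], []
for c in range(n_classes):
    proto = (rng.random(dim) < 0.2).astype(float)        # sparse prototype
    noise = (rng.random((n_per_class, dim)) < 0.02)      # 2% bit flips
    H.append(np.abs(proto - noise))                      # XOR the flips in
    y.append(np.full(n_per_class, c))
H, y = np.vstack(H), np.concatenate(y)

# External linear probe: accuracy measures class-dependent separability
# of the (frozen) unsupervised codes.
clf = LogisticRegression(max_iter=1000).fit(H, y)
acc = clf.score(H, y)
print(acc)
```

Because the classifier is linear and trained separately, its accuracy reflects the quality of the unsupervised representations rather than of the probe itself.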
34. Decoupling Inherent Risk and Early Cancer Signs in Image-Based Breast Cancer Risk Models
- Abstract
The ability to accurately estimate risk of developing breast cancer would be invaluable for clinical decision-making. One promising new approach is to integrate image-based risk models based on deep neural networks. However, one must take care when using such models, as selection of training data influences the patterns the network will learn to identify. With this in mind, we trained networks using three different criteria to select the positive training data (i.e. images from patients that will develop cancer): an inherent risk model trained on images with no visible signs of cancer, a cancer signs model trained on images containing cancer or early signs of cancer, and a conflated model trained on all images from patients with a cancer diagnosis. We find that these three models learn distinctive features that focus on different patterns, which translates to contrasts in performance. Short-term risk is best estimated by the cancer signs model, whilst long-term risk is best estimated by the inherent risk model. Carelessly training with all images conflates inherent risk with early cancer signs, and yields sub-optimal estimates in both regimes. As a consequence, conflated models may lead physicians to recommend preventative action when early cancer signs are already visible.
- Published
- 2020
- Full Text
- View/download PDF
35. Sequence Disambiguation with Synaptic Traces in Associative Neural Networks
- Abstract
Among the abilities that a sequence processing network should possess, sequence disambiguation, that is, the ability to let temporal context information influence the evolution of the network dynamics, is one of the most important. In this work we propose an instance of the Bayesian Confidence Propagation Neural Network (BCPNN) that learns sequences with probabilistic associative learning and is able to disambiguate sequences with the use of synaptic traces (low pass filtered versions of the activity). We describe first how the BCPNN achieves both sequence recall and sequence learning from temporal input. Our main result is that the BCPNN network equipped with dynamical memory in the form of synaptic traces is capable of solving the sequence disambiguation problem in a reliable way. We characterize the relationship between the sequence disambiguation capabilities of the network and its dynamical parameters. Furthermore, we show that the inclusion of an additional fast synaptic trace greatly increases the network disambiguation capabilities.
- Published
- 2019
- Full Text
- View/download PDF
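The synaptic traces in the entry above are low-pass filtered versions of the unit activity. A minimal sketch of such a trace, with an assumed first-order filter and illustrative time constants (the paper's BCPNN trace dynamics and parameters may differ):

```python
import numpy as np

def synaptic_trace(activity, tau):
    """First-order low-pass filter of activity: z[t] = z[t-1] + (x[t] - z[t-1]) / tau."""
    z = np.zeros(activity.shape[1])
    traces = []
    for x in activity:
        z = z + (x - z) / tau
        traces.append(z.copy())
    return np.array(traces)

# A single unit pulse at t=0; a slow trace keeps that context alive far
# longer than a fast one, which is what lets the network distinguish
# otherwise identical states occurring in different sequences.
x = np.zeros((20, 1))
x[0] = 1.0
slow = synaptic_trace(x, tau=10.0)   # slow trace: long-lived context
fast = synaptic_trace(x, tau=2.0)    # fast trace: recent activity only
print(slow[10, 0], fast[10, 0])
```

Combining traces at two time scales, as the abstract's final result suggests, gives the dynamics access to both recent and longer-range temporal context.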