14,165 results
Search Results
2. The latest release of the lava flows simulation model SCIARA: First application to Mt Etna (Italy) and solution of the anisotropic flow direction problem on an ideal surface.
- Author
-
Spataro, William, Avolio, Maria V., Lupiano, Valeria, Trunfio, Giuseppe A., Rongo, Rocco, and D’Ambrosio, Donato
- Subjects
LAVA flows ,CELLULAR automata ,ANISOTROPY - Abstract
Abstract: This paper presents the latest developments of the deterministic Macroscopic Cellular Automata model SCIARA for simulating lava flows. A Bingham-like rheology has been introduced for the first time as part of the Minimization Algorithm of the Differences, which is applied for computing lava outflows from the generic cell towards its neighbours. The hexagonal cellular space adopted in the previous releases of the model for mitigating the anisotropic flow direction problem has been replaced by a square one with a Moore neighbourhood, which nevertheless produces an even better solution for the anisotropic effect. Furthermore, many improvements have been introduced concerning the important modelling aspect of lava cooling. The model has been tested with encouraging results on both a real case study, the 2006 lava flows at Mt Etna (Italy), and an ideal surface, namely a 5° inclined plane, in order to evaluate the magnitude of the anisotropic effect. Despite only a preliminary calibration, the model proved to be more accurate than its predecessors, providing the best results ever obtained in simulations of the considered real case study. Finally, experiments performed on the inclined plane show that this release of SCIARA does not exhibit the anisotropic problem typical of deterministic Cellular Automata models for fluids on ideal surfaces. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
3. Genetic fuzzy systems applied to model local winds.
- Author
-
Agüera, A., de la Rosa, J.J.G., Ramiro, J.G., and Palomares, J.C.
- Subjects
GENETICS ,CLIMATOLOGY ,GENETIC algorithms ,POPULATION - Abstract
Abstract: The wind climate measured at a point is usually described as the result of a regional wind climate forced by local effects derived from topography, roughness and obstacles in the surrounding area. This paper presents a method that uses fuzzy logic to generate the local wind conditions caused by these geographic elements. The fuzzy systems proposed in this work are specifically designed to modify a regional wind frequency rose according to the terrain slopes in each direction. In order to optimize these fuzzy systems, Genetic Algorithms improve an initial population and eventually select the individual that produces the best approximation to the real measurements. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
4. Data mining and integration for predicting significant meteorological phenomena.
- Author
-
Bartok, Juraj, Habala, Ondrej, Bednar, Peter, Gazak, Martin, and Hluchý, Ladislav
- Subjects
DATA mining ,METEOROLOGY ,RADAR indicators ,FOG - Abstract
Abstract: This paper describes the planned contribution of the project Data Mining Meteo (DMM) to the research of parametrized models and methods for detection and prediction of significant meteorological phenomena, especially fog and low cloud cover. The project is expected to cover methods for integration of distributed meteorological data necessary for running the prediction models, training models and then mining the data in order to be able to efficiently and quickly predict even randomly occurring phenomena. We present the methods and technologies we will use for integration of the input data, distributed on different vendors’ servers. The meteorological detection and prediction methods are based on statistical and climatological methods combined with knowledge discovery — data mining of meteorological data (SYNOP, METAR messages, weather radar imagery, “raw” meteorological data from stations, satellite imagery and results of common meteorological prediction models). [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
5. Hybrid modelling of crowd simulation.
- Author
-
Xiong, Muzhou, Lees, Michael, Cai, Wentong, Zhou, Suiping, and Low, Malcolm Yoke Hean
- Subjects
HYBRID systems ,MACROFUNGI ,METHODOLOGY ,SIMULATION methods & models - Abstract
Abstract: Macroscopic and microscopic modeling have become mainstream methodologies for crowd simulation in dynamic environments. The two models make a trade-off between efficiency and accuracy, but neither of them is able to achieve both goals at the same time. With the aim of achieving both efficiency and accuracy, a hybrid modelling method is proposed in this paper for crowd simulation. This paper illustrates how the two types of models co-exist in a single simulation and work collaboratively. A case study for this method is also conducted, the simulation result of which shows that the proposed method can not only benefit from the macroscopic model by improving the simulation efficiency, but also obtain a fine-grained simulation result by adopting the microscopic model. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
6. On the use of discrete adjoints in goal error estimation for shallow water equations.
- Author
-
Rauser, Florian, Riehme, Jan, Leppkes, Klaus, Korn, Peter, and Naumann, Uwe
- Subjects
WATER ,FLUID dynamics ,GEOPHYSICS ,HYDROSTATIC leveling - Abstract
Abstract: Goal-oriented dual weight error estimation has been used in the context of computational fluid dynamics for several years. The adaptation of this method to geophysical models is the subject of this paper. A differentiation-enabled prototype of the NAG Fortran compiler is used to generate a discrete adjoint version of such a geophysical model, which makes it possible to compute the required goal sensitivities. Numerical results are presented for a shallow water configuration of the Icosahedral Non-hydrostatic General Circulation Model (ICON). A special treatment of the underlying linear solver is discussed, yielding improved scalability of this approach and a significant reduction in memory consumption and runtime. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
7. Dynamic monitoring framework for the SOA execution environment.
- Author
-
Żmuda, Daniel, Psiuk, Marek, and Zieliński, Krzysztof
- Subjects
SERVICE-oriented architecture (Computer science) ,COMPUTER science ,APPLICATION software ,PERFORMANCE - Abstract
Abstract: The paper analyses the challenges involved in constructing a dynamic monitoring framework compliant with the requirements of SOA application monitoring. The specification of these requirements provides a starting point for our multilayer framework architecture. We describe its Monitoring Scenario and Instrumentation layers in detail. The approach aims at runtime monitoring of container-based SOA execution environments. The Instrumentation layer exploits interceptor sockets, on top of which a powerful attribute-based reconfigurable monitoring service is constructed. The proposed solution is application-agnostic and can be used for enterprise and computational applications. We present a case study which further explains the functionality of the system. A prototype implementation for OSGi containers is also briefly described. Preliminary performance evaluation results are outlined and discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
8. Runtime sparse matrix format selection.
- Author
-
Armstrong, Warren and Rendell, Alistair P.
- Subjects
MATRICES (Mathematics) ,COMPUTER storage device industry ,EMIGRATION & immigration ,ALGORITHMS - Abstract
Abstract: There exist many storage formats for the in-memory representation of sparse matrices. Choosing the format that yields the quickest processing of any given sparse matrix requires considering the exact non-zero structure of the matrix, as well as the current execution environment. Each of these factors can change at runtime. The matrix structure can vary as computation progresses, while the environment can change due to varying system load, the live migration of jobs across a heterogeneous cluster, etc. This paper describes an algorithm that learns at runtime how to map sparse matrices onto the format which provides the quickest sparse matrix-vector product calculation, and which can adapt to the hardware platform changing underfoot. We show multiplication times reduced by over 10% compared with the best non-adaptive format selection. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
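The runtime-selection idea in the abstract above — keep the matrix in candidate formats, time the kernel for each, and commit to the fastest — can be sketched in miniature. The COO/CSR kernels, the timing policy and the toy matrix below are illustrative stand-ins, not the paper's adaptive learning algorithm:

```python
import time

def spmv_coo(coo, x, nrows):
    # COO kernel: coo is a list of (row, col, value) triples.
    y = [0.0] * nrows
    for r, c, v in coo:
        y[r] += v * x[c]
    return y

def spmv_csr(csr, x):
    # CSR kernel: csr is (row_ptr, col_idx, values).
    row_ptr, col_idx, vals = csr
    y = [0.0] * (len(row_ptr) - 1)
    for r in range(len(y)):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += vals[k] * x[col_idx[k]]
        y[r] = acc
    return y

def coo_to_csr(coo, nrows):
    # Sort triples by (row, col) and build the compressed row pointer.
    row_ptr = [0] * (nrows + 1)
    col_idx, vals = [], []
    for r, c, v in sorted(coo):
        row_ptr[r + 1] += 1
        col_idx.append(c)
        vals.append(v)
    for r in range(nrows):
        row_ptr[r + 1] += row_ptr[r]
    return row_ptr, col_idx, vals

def timed(f):
    t0 = time.perf_counter()
    f()
    return time.perf_counter() - t0

def pick_format(coo, x, nrows, trials=3):
    # Time both kernels on this matrix and return the faster format's name.
    csr = coo_to_csr(coo, nrows)
    t_coo = min(timed(lambda: spmv_coo(coo, x, nrows)) for _ in range(trials))
    t_csr = min(timed(lambda: spmv_csr(csr, x)) for _ in range(trials))
    return "coo" if t_coo <= t_csr else "csr"
```

In the paper the choice is learned and revisited at runtime as the matrix structure and machine load change; periodically re-running something like `pick_format` would mimic that only crudely.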
9. From BSP routines to high-performance ones: Formal verification of a transformation case.
- Author
-
Fortin, Jean and Gava, Frédéric
- Subjects
ALGORITHMS ,SEMANTICS ,COMPUTER networks ,SOURCE code - Abstract
Abstract: Paderborn’s and Oxford’s BSPLib are C libraries supporting the development of Bulk-Synchronous Parallel (BSP) algorithms. The BSP model allows an estimation of the execution time and avoids deadlocks and non-determinism. A natural semantics of the classical BSP communication routines has been given and used to certify a classical numerical computation using the Coq proof assistant. In this paper, we present a semantics that emphasises the high-performance primitives and is here used to formally verify (using Coq) a simple source-code optimisation that transforms classical BSP routines into their high-performance versions. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
10. Ridge regression ensemble for toxicity prediction.
- Author
-
Budka, Marcin and Gabrys, Bogdan
- Subjects
REGRESSION analysis ,QSAR models ,TOXICITY testing ,ANIMALS - Abstract
Abstract: Traditional methods of assessing the chemical toxicity of various compounds require tests on animals, which raises ethical concerns and is expensive. Current legislation may lead to a further increase in the demand for laboratory animals in the coming years. As a result, automatically generated predictions using Quantitative Structure–Activity Relationship (QSAR) modelling approaches appear as an attractive alternative. Due to the sparsity of the chemical space, making such predictions is, however, a difficult task. In this paper we propose a purely data-driven, rigorous and universal methodology of QSAR modelling, based on an ensemble of relatively simple ridge regressors trained in various subspaces of the chemical space, selected using an iterative optimization procedure. The model described has been developed without using any domain knowledge and has been evaluated within the Environmental Toxicity Prediction Challenge CADASTER 2009, which attracted over 100 participants from 25 countries. The presented approach was chosen as one of the First-Pass Winners, with predictive power not significantly different from that of the highest ranked method, developed by experts in the area of QSAR modelling and toxicology. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
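As a toy illustration of the ensemble idea the abstract above describes — many simple ridge regressors, each fitted on part of the data, with their predictions averaged — the sketch below fits one-feature, no-intercept ridge models on random subsamples. The random subsampling merely stands in for the paper's iterative subspace-selection procedure, and every name and parameter here is illustrative, not taken from the paper:

```python
import random

def ridge_1d(xs, ys, lam):
    # Closed-form ridge weight for a one-feature, no-intercept model:
    # w = sum(x*y) / (sum(x^2) + lambda)
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs) + lam
    return num / den

def train_ensemble(xs, ys, lam=0.1, n_models=5, frac=0.6, seed=0):
    # Fit each ridge regressor on a random subsample of the data.
    rng = random.Random(seed)
    n = max(1, int(frac * len(xs)))
    models = []
    for _ in range(n_models):
        idx = rng.sample(range(len(xs)), n)
        models.append(ridge_1d([xs[i] for i in idx], [ys[i] for i in idx], lam))
    return models

def ensemble_predict(models, x):
    # Average the predictions of the individual ridge regressors.
    return sum(w * x for w in models) / len(models)
```

With noiseless data y = 2x, each shrunken weight lands just under 2, so the averaged prediction is close to, but slightly below, the true value — the bias-variance trade-off ridge deliberately makes.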
11. Applicability of Pattern-based sparse matrix representation for real applications.
- Author
-
Belgin, Mehmet, Back, Godmar, and Ribbens, Calvin J.
- Subjects
MATRICES (Mathematics) ,KERNEL functions ,PERFORMANCE - Abstract
Abstract: Pattern-based representation (PBR) is a novel sparse matrix representation that reduces the index overhead for many matrices without zero-filling and without requiring the identification of dense matrix blocks. The PBR analyzer identifies recurring block nonzero patterns, represents the submatrix consisting of all blocks of this pattern in block coordinate format, and generates custom matrix-vector multiplication kernels for that submatrix. In this way, PBR expresses matrix structure in terms of specialized inner loops, thereby creating locality for repeating structure via the instruction cache, and reducing the amount of index data that must be fetched from memory. In this paper we evaluate the applicability of PBR by testing it on a large set of matrices from the University of Florida sparse matrix collection. We analyze PBR’s suitability for a wide range of problems and identify underlying problem and matrix characteristics that suggest good performance with PBR. We find that PBR is especially promising for problems with underlying 2D/3D geometry. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
12. Star-shaped polyhedron point location with orthogonal walk algorithm.
- Author
-
Soukal, Roman and Kolingerová, Ivana
- Subjects
POLYHEDRA ,ALGORITHMS ,ORTHOGONALIZATION ,GEOMETRY - Abstract
Abstract: The point location problem is one of the most frequent tasks in computational geometry. The walking algorithms are one of the most popular solutions for finding an element in a mesh which contains a query point. Despite their suboptimal complexity, the walking algorithms are very popular because they do not require any additional memory and their implementation is simple. This paper describes the modifications of two walking algorithms for point location on a surface of a star-shaped polyhedron, a generalization of the Remembering Stochastic walk algorithm for a star-shaped polyhedron and a modification of the planar Orthogonal walk algorithm. The latter uses spherical coordinates to transfer the spatial point location problem to the planar point location problem. This way, the problem can be solved by the traditional planar algorithms. Along with the modifications, the paper proposes new methods for finding a proper starting triangle for the walking process with or without preprocessing. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
13. Efficient design of exponential-Krylov integrators for large scale computing.
- Author
-
Tokman, M. and Loffeld, J.
- Subjects
INTEGRATORS ,COMPUTER systems ,APPLICATION software ,LITERATURE - Abstract
Abstract: As a result of the recent resurgence of interest in exponential integrators, a number of such methods have been introduced in the literature. However, questions of what constitutes an efficient exponential method and how these techniques compare with commonly used schemes remain to be fully investigated. In this paper we consider exponential-Krylov integrators in the context of large scale applications and discuss what design principles need to be considered in the construction of an efficient method of this type. Since the Krylov projections constitute the primary computational cost of an exponential integrator, we demonstrate how an exponential-Krylov method can be structured to minimize the total number of Krylov projections per time step and the number of Krylov vectors each of the projections requires. We present numerical experiments that validate and illustrate these arguments. In addition, we compare exponential methods with commonly used implicit schemes to demonstrate their competitiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
14. The Deflated Relaxed Incomplete Cholesky CG method for use in a real-time ship simulator.
- Author
-
van’t Wout, E., van Gijzen, M.B., Ditzel, A., van der Ploeg, A., and Vuik, C.
- Subjects
CONJUGATE gradient methods ,EQUATIONS ,ARBITRARY constants ,MATRICES (Mathematics) - Abstract
Abstract: Ship simulators are used for training purposes and therefore have to calculate realistic wave patterns around the moving ship in real time. We consider a wave model that is based on the variational Boussinesq formulation, which results in a set of partial differential equations. Discretization of these equations gives a large system of linear equations that has to be solved at each time step. The requirement of real-time simulation necessitates a fast linear solver. In this paper we study the combination of the Relaxed Incomplete Cholesky preconditioner and subdomain deflation to accelerate the Conjugate Gradient method. We show that the success of this approach depends on the relaxation parameter. For low values of the relaxation parameter, e.g. the standard IC preconditioner, the deflation method is quite successful. This is not the case for large values of the relaxation parameter, such as the Modified IC preconditioner. We give a theoretical explanation for this difference by considering the spectrum of the preconditioned and deflated matrices. Computational results for the wave model illustrate the expected convergence behavior of the Deflated Relaxed Incomplete Cholesky CG method. We also present promising results for the combination of the deflation method and the inherently parallel block-RIC preconditioner. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
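The core machinery the abstract above builds on — the Conjugate Gradient iteration wrapped around a preconditioner — can be illustrated on a toy 1D Poisson system. In this sketch a simple Jacobi (diagonal) preconditioner stands in for the Relaxed Incomplete Cholesky factorization, and subdomain deflation is omitted entirely; it is a minimal illustration, not the paper's solver:

```python
def pcg_tridiag(n, b, tol=1e-10, max_iter=200):
    # Solve A x = b for the 1D Poisson matrix A = tridiag(-1, 2, -1)
    # with CG preconditioned by the diagonal of A (Jacobi).
    def matvec(x):
        y = [0.0] * n
        for i in range(n):
            y[i] = 2.0 * x[i]
            if i > 0:
                y[i] -= x[i - 1]
            if i < n - 1:
                y[i] -= x[i + 1]
        return y

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    x = [0.0] * n
    r = b[:]                      # residual r = b - A*0
    z = [ri / 2.0 for ri in r]    # preconditioner solve: M = diag(A) = 2I
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [ri / 2.0 for ri in r]
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x
```

Swapping the Jacobi solve for an incomplete-Cholesky forward/backward substitution, and augmenting the search space with deflation vectors, is exactly where the paper's method departs from this baseline.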
15. Modeling a hybrid reactive-deliberative architecture towards realizing overall dynamic behavior of an AUV.
- Author
-
Das, S.K., Shome, S.N., Nandy, S., and Pal, D.
- Subjects
COMPUTER architecture ,METHODOLOGY ,MATLAB (Bangladesh) - Abstract
Abstract: Among the computational architectures adopted for controlling the behavior of Autonomous Underwater Vehicles (AUVs), the combined deliberative-reactive methodology is the most effective and significant approach to behavioral control of the vehicle. Much work has been put into it and is available in the literature. However, little work has been done on modeling the system with a view towards simulating and analyzing its dynamic behavior as governed by the hybrid control architecture. Such an attempt is quite significant at the design stage, where faults can be easily diagnosed and rectified. The aim of this paper is to present such a model for the adopted architecture and simulate the dynamic behavior of the system. The logical organization of, and integrity between, the various modules is discussed, including the abstraction between the device layer and the controlling sub-systems. The overall dynamic behavior of the system has been realized through a hybrid finite state machine (FSM), thereby exhibiting the essential combination of a continuous reactive layer and a discrete event-based deliberative sub-system. The required modeling of the FSM and control sub-systems has been done with Stateflow/Simulink from Matlab. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
16. A dynamic aggregate model for the simulation of short term power fluctuations.
- Author
-
De Tommasi, Luciano, Gibescu, Madeleine, and Brand, Arno J.
- Subjects
FLUCTUATIONS (Physics) ,STOCHASTIC models ,WIND turbines ,WIND power plants - Abstract
Abstract: An important aspect of wind energy integration into the electrical power system is the fluctuation of the generated power due to the stochastic variations of the wind speed across the area where wind turbines are installed. Simulation models are useful tools to evaluate the impact of wind power on power system stability and on power quality. Aggregate models reduce the simulation time required by detailed dynamic models of multi-turbine systems. In this paper, a new behavioral model representing the aggregate contribution of several variable-speed pitch-controlled wind turbines is introduced. It is particularly suitable for the simulation of short term power fluctuations due to wind turbulence, where steady-state models are not applicable. The model relies on rescaling the output of a single turbine dynamic model. The single turbine output is divided into its steady-state and dynamic components, which are then multiplied by different scaling factors. The smoothing effect due to wind incoherence at different locations inside a wind farm is taken into account by filtering the steady-state power curve by means of a Gaussian filter as well as by applying a proper damping to the dynamic part. The model has been developed to be one of the building blocks of a model of a large electrical system, and therefore a significant reduction of simulation time has been pursued. Comparison against a full model, obtained by repeating a detailed single turbine model, shows that a proper trade-off between accuracy and computational speed has been achieved. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
17. A massively parallel semi-Lagrangian algorithm for solving the transport equation.
- Author
-
Russell Manson, J., Wang, Dali, Wallis, Steve G, Page, Richard, and Laielli, Michael J
- Subjects
LAGRANGIAN functions ,ENGINEERING ,WEATHER forecasting ,ALGEBRA - Abstract
Abstract: The scalar transport equation underpins many models employed in science, engineering, technology and business. Application areas include, but are not restricted to, pollution transport, weather forecasting, video analysis and encoding (the optical flow equation), options and stock pricing (the Black-Scholes equation) and spatially explicit ecological models. Unfortunately finding numerical solutions to this equation which are fast and accurate is not trivial. Moreover, finding such numerical algorithms that can be implemented on high performance computer architectures efficiently is challenging. In this paper the authors describe a massively parallel algorithm for solving the advection portion of the transport equation. We present an approach here which is different to that used in most transport models and which we have tried and tested for various scenarios. The approach employs an intelligent domain decomposition based on the vector field of the system equations and thus automatically partitions the computational domain into algorithmically autonomous regions. The solution of a classic pure advection transport problem is shown to be conservative, monotonic and highly accurate at large time steps. Additionally we demonstrate that the algorithm is highly efficient for high performance computer architectures and thus offers a route towards massively parallel application. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
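The semi-Lagrangian idea at the heart of the advection solver described above — trace each grid node backwards along the flow and interpolate the old solution at the departure point — can be sketched in 1D with constant velocity, a periodic grid and linear interpolation. The paper's vector-field-based domain decomposition and parallelization are not reproduced here; this is only the serial kernel, with illustrative names:

```python
import math

def semi_lagrangian_step(c, u, dt, dx):
    # One advection step of concentrations c on a periodic 1D grid:
    # node i's departure point is x_i - u*dt; its new value is the
    # linearly interpolated old value there. Unlike explicit Eulerian
    # schemes, this stays stable for large Courant numbers u*dt/dx.
    n = len(c)
    out = [0.0] * n
    for i in range(n):
        s = i - u * dt / dx           # departure point, in index units
        j = math.floor(s)
        frac = s - j
        out[i] = (1.0 - frac) * c[j % n] + frac * c[(j + 1) % n]
    return out
```

With an integer Courant number the step reduces to an exact shift, which makes the scheme easy to sanity-check; non-integer Courant numbers exercise the interpolation.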
18. Towards high-quality, untangled meshes via a force-directed graph embedding approach.
- Author
-
Bhowmick, Sanjukta and Shontz, Suzanne M.
- Subjects
PARTIAL differential equations ,MATRICES (Mathematics) ,TETRAHEDRAL coordinates ,ALGORITHMS - Abstract
Abstract: High quality meshes are crucial for the solution of partial differential equations (PDEs) via the finite element method (or other PDE solvers). The accuracy of the PDE solution, and the stability and conditioning of the stiffness matrix, depend upon the mesh quality. In addition, the mesh must be untangled in order for the finite element method to generate physically valid solutions. Tangled meshes, i.e., those with inverted mesh elements, are sometimes generated via large mesh deformations or in the mesh generation process. Traditional techniques for untangling such meshes are based on geometry and/or optimization. Optimization-based mesh untangling techniques first untangle the mesh and then smooth the resulting untangled mesh in order to obtain high quality meshes; such techniques require the solution of two optimization problems. In this paper, we study how to modify a physical, force-directed method based upon the Fruchterman-Reingold (FR) graph layout algorithm so that it can be used for untangling. The objectives of aesthetic graph layout, such as minimization of edge intersections and near equalization of edge lengths, match the goals of mesh untangling and of generating good quality elements, respectively. Therefore, by using the force-directed method, we can achieve both mesh untangling and mesh smoothing in one operation. We compare the effectiveness of our method with that of the optimization-based mesh untangling method implemented in Mesquite by untangling a suite of unstructured triangular, quadrilateral, and tetrahedral finite element volume meshes. The results show that the force-directed method is substantially faster than the Mesquite mesh untangling method without sacrificing much in terms of mesh quality for the majority of the test cases we consider in this paper. The force-directed mesh untangling method demonstrates the most promise on convex geometric domains.
Further modifications will be made to the method to improve its ability to untangle meshes on non-convex domains. [Copyright Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
19. Characterizing sparse preconditioner performance for the support vector machine kernel.
- Author
-
Chatterjee, Anirban, Fermoyle, Kelly, and Raghavan, Padma
- Subjects
SPARSE matrices ,KERNEL functions ,SUPPORT vector machines ,MATRICES (Mathematics) - Abstract
Abstract: A two-class Support Vector Machine (SVM) classifier finds a hyperplane that separates two classes of data with the maximum margin. In a first learning phase, SVM involves the construction and solution of a primal-dual interior-point optimization problem. Each iteration of the interior-point method (IPM) requires a sparse linear system solution, which dominates the execution time of SVM learning. Solving this linear system can often be computationally expensive depending on the conditioning of the matrix. Consequently, preconditioned linear systems can lead to improved SVM performance while maintaining the classification accuracy. In this paper, we seek to characterize the role of preconditioning schemes in enhancing the SVM classifier performance. We compare and report on the solution time, convergence, and number of Newton iterations of the interior-point method and classification accuracy of the SVM for 6 well-accepted preconditioning schemes and datasets chosen from well-known machine learning repositories. In particular, we introduce -IPM, which sparsifies the linear system at each iteration of the IPM. Our results indicate that on average the Jacobi and SSOR preconditioners perform 10.01 times better than other preconditioning schemes for IPM and 8.83 times better for -IPM. Also, across all datasets Jacobi and SSOR perform between 2 and 30 times better than other schemes in both IPM and -IPM. Moreover, -IPM obtains a speedup over IPM performance of 1.25 on average and as much as 2 times in the best case. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
20. MEGSOR iterative method for the triangle element solution of 2D Poisson equations.
- Author
-
Sulaiman, Jumat, Hasan, Mohammad Khatim, Othman, Mohamed, and Abdul Karimd, Samsul Arffin
- Subjects
POISSON'S equation ,ITERATIVE methods (Mathematics) ,NUMERICAL analysis ,COMPUTATIONAL complexity - Abstract
Abstract: In previous studies of finite difference approaches, the 4 Point-Modified Explicit Group (MEG) iterative method, with or without a weighted parameter, has been shown to be much faster than the existing four point block iterative methods. The main characteristic of the MEG iterative method is its reduced computational complexity compared to the full-sweep or half-sweep methods. Due to the effectiveness of this method, the primary goal of this paper is to demonstrate the use of the 4 Point-MEG iterative method together with a weighted parameter, namely 4 Point-MEGSOR. The effectiveness of this method is shown via numerical experiments, whose recorded and analyzed results show that the 4 Point-MEGSOR iterative scheme is superior to the existing four point block schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
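For readers unfamiliar with this family of methods, a plain point-SOR sweep for the 2D Poisson equation — the classical baseline that explicit-group variants like MEGSOR accelerate — looks as follows. The four-point grouping and the specific weighting of MEGSOR are not reproduced; grid size, relaxation factor and tolerances are illustrative:

```python
def sor_poisson_2d(f, h, omega=1.5, tol=1e-8, max_sweeps=5000):
    # Point-SOR for -Laplace(u) = f on the unit square with u = 0 on
    # the boundary: 5-point stencil, mesh width h, n x n interior grid.
    n = len(f)
    u = [[0.0] * n for _ in range(n)]
    for _ in range(max_sweeps):
        diff = 0.0
        for i in range(n):
            for j in range(n):
                north = u[i - 1][j] if i > 0 else 0.0
                south = u[i + 1][j] if i < n - 1 else 0.0
                west = u[i][j - 1] if j > 0 else 0.0
                east = u[i][j + 1] if j < n - 1 else 0.0
                # Gauss-Seidel value, then over-relax towards it.
                gs = 0.25 * (north + south + west + east + h * h * f[i][j])
                new = (1.0 - omega) * u[i][j] + omega * gs
                diff = max(diff, abs(new - u[i][j]))
                u[i][j] = new
        if diff < tol:
            break
    return u
```

The explicit-group idea is to update small blocks of points (four at a time for MEG) with a single pre-derived formula, cutting the per-sweep work relative to this point-wise loop.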
21. Data sonification of volcano seismograms and Sound/Timbre reconstruction of ancient musical instruments with Grid infrastructures.
- Author
-
Avanzo, Salvatore, Barbera, Roberto, De Mattia, Francesco, La Rocca, Giuseppe, Sorrentino, Mariapaola, and Vicinanza, Domenico
- Subjects
SEISMOGRAMS ,MUSICAL instruments ,GRID computing ,HUMANITIES - Abstract
Abstract: Recently, the scenario of international collaboration in scientific research has swiftly evolved, with the gradual but impressive deployment of high bandwidth networks and Grid infrastructures, towards what is currently called e-Science. So far, several scientific domains, such as Life Sciences, High Energy Physics, Computational Chemistry, Earth Science, etc., have benefited from e-Infrastructures to tackle new global challenges, particularly those with high societal and economic impact, with a truly multidisciplinary approach. Much less has been done, however, in the field of the Humanities. In this paper we present some use cases of how EU-funded e-Infrastructures have been used to support both the Cultural Heritage community, in reconstructing the sound of ancient musical instruments, and scientists in the Earth Science domain, in better understanding the behavior of volcanoes close to eruption by translating the patterns of volcanic seismograms into a set of audible sound waves. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
22. A CA randomizer based on parallel CAs with balanced rules.
- Author
-
Maleki, Farhad, Mohades, Ali, Shiri, M.E., and Bijari, Afsane
- Subjects
CELLULAR automata ,CRYPTOGRAPHY ,QUANTITATIVE research ,RULES - Abstract
Abstract: In this paper a class of cellular automata rules is defined and proposed for use in CA randomizers. The quality of the proposed rules is shown by a study of symmetric rules of radius one and two. In addition, a non-uniform CA randomizer is constructed with the proposed symmetric rules of radius two. The high quality of the generated random numbers is shown by a battery of statistical tests. Moreover, it is shown that the proposed CA randomizer is more secure against cryptanalysis attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
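To give a flavour of CA-based random number generation, the sketch below runs elementary rule 30 on a periodic ring and emits the centre cell after each step — the classic uniform-CA generator, not the balanced radius-two rules or the non-uniform construction proposed in the paper:

```python
def ca_rule30_bits(seed_width=31, n_bits=64):
    # Generate a bit stream from elementary CA rule 30 on a periodic
    # ring: step the automaton and emit the centre cell each step.
    cells = [0] * seed_width
    cells[seed_width // 2] = 1      # single-seed initial state
    out = []
    for _ in range(n_bits):
        nxt = [0] * seed_width
        for i in range(seed_width):
            left = cells[(i - 1) % seed_width]
            centre = cells[i]
            right = cells[(i + 1) % seed_width]
            # Rule 30: new = left XOR (centre OR right)
            nxt[i] = left ^ (centre | right)
        cells = nxt
        out.append(cells[seed_width // 2])
    return out
```

The statistical and cryptanalytic weaknesses of simple uniform generators like this one are precisely what motivates balanced rule classes and non-uniform rule assignments such as those studied in the paper.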
23. Toward a parallel solver for generalized complex symmetric eigenvalue problems.
- Author
-
Schabauer, Hannes, Pacher, Christoph, Sunderland, Andrew G., and Gansterer, Wilfried N.
- Subjects
EIGENVALUES ,MATRICES (Mathematics) ,HERMITIAN forms ,PERFORMANCE - Abstract
Abstract: Methods for numerically solving generalized complex symmetric (non-Hermitian) eigenvalue problems (EVPs) serially and in parallel are investigated. This research is motivated by two observations: Firstly, the conventional approach for solving such problems serially, as implemented, e.g., in zggev (LAPACK), is to treat complex symmetric problems as general complex and therefore does not exploit the structural properties. Secondly, there is currently no parallel solver for dense (generalized or standard) non-Hermitian EVPs in ScaLAPACK. The approach presented in this paper especially aims at exploiting the structural properties present in complex symmetric EVPs and at investigating the potential trade-offs between performance improvements and loss of numerical accuracy due to instabilities. For the serial case, a complete reduction based solver for computing eigenvalues of the generalized complex symmetric EVP has been designed, implemented, and is evaluated in terms of numerical accuracy as well as in terms of runtime performance. It is shown that the serial solver achieves a speedup of up to 43 compared to zggev (LAPACK), although at the cost of a reduced accuracy. Furthermore, the major parts of this reduction based solver have been parallelized based on ScaLAPACK and MPI. Their scaling behavior is evaluated on a cluster utilizing up to 1024 cores. Moreover, the parallel codes developed achieve encouraging parallel speedups comparable to the ones of ScaLAPACK routines for the complex Hermitian EVP. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
24. High performance individual-oriented simulation using complex models.
- Author
-
Solar, Roberto, Suppi, Remo, and Luque, Emilio
- Subjects
HIGH performance computing ,COMPUTER systems ,SCHOOLS ,FISHES - Abstract
Abstract: Computational simulation has been used as a powerful tool to represent the dynamical behavior of systems based on complex analytic models. These types of models have two main drawbacks: (a) limitations due to the degree of abstraction needed to simulate them, and (b) the high computing power required even for heavily simplified models. The computing power available today can overcome these limitations and perform quicker simulations of complex models that are closer to reality. In this paper, the experiments and performance analysis of a distributed simulation of a complex individual oriented model (fish schools) are presented. The development of the fish school simulator includes the possibility of working with large models that include large numbers of fish (>10^6 individuals), predators and obstacles in the simulated world. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
25. A vector model for routing queries in web search engines.
- Author
-
Oyarzun, Mauricio S., Gonzalez, Senen, Mendoza, Marcelo, Ferrarotti, Flavio, Chacon, Max, and Marin, Mauricio
- Subjects
WEB search engines ,INFORMATION retrieval ,ROUTING (Computer network management) ,RECEPTIONISTS - Abstract
Abstract: This paper proposes a method for reducing the number of search nodes involved in the solution of queries arriving at a Web search engine. The method is applied by the query receptionist machine during sudden peaks in query traffic to reduce the load on the search nodes. The experimental evaluation, based on actual traces from users of a major search engine, shows that the proposed method outperforms alternative strategies. This is more evident for systems composed of a large number of search nodes, which indicates that the method is also more scalable than the alternative strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
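The routing idea in the abstract above, sending each query only to the search nodes most likely to hold good results, can be sketched with a simple vector model. This is an illustrative reconstruction, not the authors' method: the node names, the centroid representation, and the top-k selection below are all assumptions.

```python
import math

def cosine(u, v):
    # Cosine similarity between two sparse term vectors stored as dicts.
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def route_query(query_vec, node_centroids, k):
    # Rank search nodes by similarity to the query and keep the top k.
    ranked = sorted(node_centroids.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [node for node, _ in ranked[:k]]

# Hypothetical per-node centroids summarising each node's index content.
centroids = {
    "node0": {"soccer": 1.0, "goal": 0.5},
    "node1": {"stock": 1.0, "market": 0.8},
    "node2": {"soccer": 0.3, "stock": 0.3},
}
print(route_query({"soccer": 1.0}, centroids, 1))  # → ['node0']
```

During a traffic peak, lowering k trades a little result quality for a large reduction in per-query fan-out.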
26. Computationally efficient algorithm for the estimation of the intima-media thickness of the common carotid artery.
- Author
-
Turcza, Paweł, Zieliński, Tomasz P., Kwater, Aleksander, and Grodzicki, Tomasz
- Subjects
BURGERS' equation ,MYOCARDIAL infarction ,ALGORITHMS ,CAROTID artery - Abstract
Abstract: In this paper we propose a new computationally efficient algorithm for automatic estimation of the intima and media boundaries of the common carotid artery (CCA) based on ultrasound (USG) image analysis. The intima-media thickness (IMT), equal to the distance between the intima and media boundaries, is typically used to predict cardiovascular events like myocardial infarction and stroke. At present, it is measured by a physician performing manual segmentation of a USG image. Automatic estimation of the IMT, freeing physicians from this tedious work, should speed up the process and ensure higher reproducibility. The proposed algorithm involves USG image denoising by means of a nonlinear diffusion filter working in a semi-explicit scheme and iterative image segmentation by a modified active contour method with a suitable energy function. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
27. Mobile iris recognition systems: An emerging biometric technology.
- Author
-
Kang, Jin-Suk
- Subjects
BIOMETRY ,CELL phones ,POCKET computers ,WIRELESS communications - Abstract
Abstract: Wireless communication has matured from a curiosity to a serious business tool. Personal digital assistants have totally replaced daytimers and notepads. But security has been lacking or weak, making such automation untrustworthy for critical applications. By adding strong security and authentication, these tools will facilitate trustworthy electronic methods for commerce, financial transactions, medical data, even prescriptions. In this paper, considering the limited computing power of mobile and portable devices, a simple but efficient pre-processing method is introduced for iris localization in such iris images. An iris database (http://chungbuk.ac.kr/Iris/index.html) with such considerations was created for this paper. The proposed iris pre-processing method implements the following steps: (a) automatic segmentation of the pupil region, (b) helper data extraction and pupil detection, and (c) eyelid detection and feature matching. Experimental results show that the proposed iris pre-processing method performs well and is stable across different iris databases. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
28. Design and implementation of a CUDA-compatible GPU-based core for gapped BLAST algorithm.
- Author
-
Ling, Cheng and Benkrid, Khaled
- Subjects
GRAPHICS processing units ,COMPUTER architecture ,PENTIUM (Microprocessor) ,COMPUTER systems - Abstract
Abstract: This paper presents the first ever reported implementation of the Gapped Basic Local Alignment Search Tool (Gapped BLAST) for biological sequence alignment, with the Two-Hit method, on CUDA (compute unified device architecture)-compatible Graphics Processing Units (GPUs). The latter have recently emerged as relatively low-cost and easy-to-program high performance platforms for general purpose computing. Our Gapped BLAST implementation on an NVIDIA GeForce 8800 GTX GPU is up to 2.7x quicker than the most optimized CPU-based implementation, namely NCBI BLAST, running on a Pentium 4 3.4 GHz desktop computer with 2 GB RAM. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
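The Two-Hit method mentioned in the abstract above triggers a (costly) gapped extension only when two word hits fall on the same alignment diagonal close to each other. A minimal CPU-side sketch of that seeding logic, under stated simplifications: exact-match word seeding only (real BLAST scores neighbourhood words with a substitution matrix), and the paper's actual contribution, the CUDA implementation, is not shown.

```python
def word_hits(query, subject, w=3):
    # All (q_pos, s_pos) pairs where a length-w word of the query
    # matches the subject exactly.
    index = {}
    for i in range(len(subject) - w + 1):
        index.setdefault(subject[i:i + w], []).append(i)
    hits = []
    for i in range(len(query) - w + 1):
        for j in index.get(query[i:i + w], []):
            hits.append((i, j))
    return hits

def two_hits(hits, w=3, max_dist=40):
    # Two-hit criterion: two non-overlapping hits on the same diagonal
    # (j - i) within max_dist of each other trigger an extension.
    by_diag = {}
    pairs = []
    for i, j in sorted(hits):
        d = j - i
        prev = by_diag.get(d)
        if prev is not None and w <= i - prev <= max_dist:
            pairs.append((d, prev, i))
        by_diag[d] = i
    return pairs

q = "ABCDQQQQEFGH"
s = "ABCDZZZZEFGH"
pairs = two_hits(word_hits(q, s))  # one two-hit pair on diagonal 0
```

The diagonal bookkeeping is what makes the filter cheap: one previous-hit position per diagonal, updated in a single pass.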
29. A proposal for autotuning linear algebra routines on multicore platforms.
- Author
-
Cuenca, Javier, García, Luis P., and Giménez, Domingo
- Subjects
LINEAR algebra ,MULTICORE processors ,COMPUTERS ,COMPILERS (Computer programs) - Abstract
Abstract: Using an OpenMP compiler optimized for the corresponding multicore system is a good option, but a system may provide access to more than one compiler, and different compilers can appropriately optimize different parts of the code. In this paper we present a proposal for an autotuning system for linear algebra routines that decides the best compiler for each situation, as well as other parameter values, such as the number of threads to generate. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
30. Improvement of parallelization efficiency of batch pattern BP training algorithm using Open MPI.
- Author
-
Turchenko, Volodymyr, Grandinetti, Lucio, Bosilca, George, and Dongarra, Jack J.
- Subjects
ALGORITHMS ,PERCEPTRONS ,COMMUNICATION ,APPLICATION software - Abstract
Abstract: The use of Open MPI's tuned collective module to improve the parallelization efficiency of a parallel batch pattern back-propagation training algorithm for a multilayer perceptron is considered in this paper. The multilayer perceptron model and the usual sequential batch pattern training algorithm are described theoretically. An algorithmic description of a parallel version of the batch pattern training method is introduced. The parallelization efficiency results obtained using Open MPI's tuned collective module and MPICH2 are compared. Our results show that (i) Open MPI's tuned collective module outperforms the MPICH2 implementation both on an SMP computer and on a computational cluster, and (ii) different internal algorithms of the MPI_Allreduce() collective operation give better results in different scenarios and on different parallel systems. Therefore, the properties of the communication network and of the user application should be taken into account when a specific collective algorithm is used. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
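In batch pattern training, each worker computes the gradient over its share of the training patterns, and MPI_Allreduce() sums the per-worker partial gradients so that every worker sees the full-batch gradient. A minimal single-process sketch of that reduction step, with a plain element-wise sum standing in for MPI_Allreduce; the linear unit and the data are illustrative assumptions, not the paper's perceptron.

```python
def batch_gradient(weights, patterns):
    # Full-batch gradient of the squared error for a linear unit y = w.x.
    g = [0.0] * len(weights)
    for x, t in patterns:
        y = sum(w * xi for w, xi in zip(weights, x))
        err = y - t
        for k, xi in enumerate(x):
            g[k] += err * xi
    return g

def allreduce_sum(partials):
    # Stand-in for MPI_Allreduce(..., MPI_SUM): element-wise sum of the
    # per-worker partial gradients, returned to every worker.
    return [sum(col) for col in zip(*partials)]

w = [0.5, -0.25]
patterns = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0),
            ([1.0, 1.0], 1.0), ([2.0, 1.0], 0.5)]
# Split the batch across two "workers"; each computes a partial gradient.
partials = [batch_gradient(w, patterns[:2]), batch_gradient(w, patterns[2:])]
# The reduced gradient matches the serial full-batch gradient.
assert allreduce_sum(partials) == batch_gradient(w, patterns)
```

Because the gradient is additive over patterns, the choice of internal allreduce algorithm changes only communication cost, not the result, which is why the paper can compare algorithms purely on efficiency.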
31. Component-based design for adaptive large-scale infectious disease simulation.
- Author
-
Riechers, Thorsten Matthias, Kuo, Shyh-hao, Mong Goh, Rick Siow, and Hung, Terence
- Subjects
MULTIAGENT systems ,COMMUNICABLE diseases ,HETEROGENEOUS computing ,GRAPHICS processing units - Abstract
Abstract: Component-based design improves productivity by concentrating development efforts on one component at a time, without having to worry about a change having an application-wide effect. In this paper, we demonstrate the usefulness of a component-based approach in the development of an infectious disease simulator. Specifically, we have explored the possibility of self performance tuning at runtime through the use of hot-swappable components, by incrementally developing optimised component variants. The application has achieved a 4-times speedup using dynamic kernel adaptation and a further 5.3-times speedup through parallelisation on a multicore and GPU server. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
32. An abstract virtual instrument system for high throughput automatic microscopy.
- Author
-
Russel, A.B.M., Abramson, David, Bethwaite, Blair, Dinh, Minh Ngoc, Enticott, Colin, Firth, Stephen, Garic, Slavisa, Harper, Ian, Lackmann, Martin, Schek, Stefan, and Vail, Mary
- Subjects
MICROSCOPY ,CANCER research ,GRID computing ,BLOOD vessels - Abstract
Abstract: Modern biomedical therapies often involve disease specific drug development and may require observing cells at a very high resolution. Existing commercial microscopes behave very much like their traditional counterparts, where a user controls the microscope and chooses the areas of interest manually on a given specimen scan. This mode of discovery is suited to problems where it is easy for a user to draw a conclusion from observations by finding a small number of areas that might require further investigation. However, observations by an expert can be very time consuming and error prone when there are a large number of potential areas of interest (such as cells or vessels in a tumour), and compute intensive image processing is required to analyse them. In this paper, we propose an Abstract Virtual Instrument (AVI) system for accelerating scientific discovery. An AVI system is a novel software architecture for building an hierarchical scientific instrument–one in which a virtual instrument could be defined in terms of other physical instruments, and in which significant processing is required in producing the illusion of a single virtual scientific discovery instrument. We show that an AVI can be implemented using existing scientific workflow tools that both control the microscope and perform image analysis operations. The resulting solution is a flexible and powerful system for performing dynamic high throughput automatic microscopy. We illustrate the system using a case study that involves searching for blood vessels in an optical tissue scan, and automatically instructing the microscope to rescan these vessels at higher resolution. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
33. Generating ontologies with basic level concepts from folksonomies.
- Author
-
Chen, Wen-hao, Cai, Yi, Leung, Ho-fung, and Li, Qing
- Subjects
ONTOLOGY ,FOLKSONOMIES ,COGNITIVE psychology ,VOCABULARY - Abstract
Abstract: This paper deals with the problem of ontology generation. Ontology plays an important role in knowledge representation: an ontology is an artifact describing a certain reality with a specific vocabulary. Recently many researchers have realized that folksonomy is a potential knowledge source for generating ontologies. Although some results have already been reported on generating ontologies from folksonomies, most of them do not consider what a more acceptable and applicable ontology for users should be, nor do they take human thinking into consideration. Cognitive psychologists find that most human knowledge is represented by basic level concepts, a family of concepts frequently used by people in daily life. Taking cognitive psychology into consideration, we propose a method to generate ontologies with basic level concepts from folksonomies. Using the Open Directory Project (ODP) as the benchmark, we demonstrate that the ontology generated by our method is reasonable and consistent with human thinking. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
34. Simulation of multiphysics multiscale systems, 7th international workshop.
- Author
-
Krzhizhanovskaya, Valeria
- Subjects
MULTISCALE modeling ,COMPLEXITY (Philosophy) ,COMPUTER simulation ,COMPUTERS - Abstract
Abstract: Modeling and Simulation of Multiphysics Multiscale Systems (SMMS) poses a grand challenge to computational science. To adequately simulate numerous intertwined processes characterized by different spatial and temporal scales spanning many orders of magnitude, sophisticated models and advanced computational techniques are required. The aim of the SMMS workshop is to encourage and review the progress in this multidisciplinary research field. This short paper describes the scope of the workshop and gives pointers to the papers reflecting the latest developments in the field. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
35. A two scale model of air corona discharges.
- Author
-
Seimandi, Pierre, Dufour, Guillaume, and Rogier, François
- Subjects
ELECTROHYDRODYNAMICS ,ACTUATORS ,ELECTRODES ,ELECTRIC discharges - Abstract
Abstract: This paper deals with the modelling of plasma discharges induced by electrohydrodynamic actuators. We propose a multi-model method based on fluid conservation equations that allows an increase of the maximum time step imposed by the usual explicit schemes. The idea consists in replacing the numerical integration of the plasma equations in the vicinity of the electrodes, where the time step is particularly limited, by a simplified model adapted to the local plasma dynamics. In this study, we apply this method to the modelling of wire-induced discharges and we present the numerical results obtained for an unsteady 2D simulation of a reference test case. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
36. Numerical simulation of wall heat flux in supersonic weakly ionized gas flow.
- Author
-
Bobashev, Sergey V., Chernyshev, Alexander S., and Schmidt, Alexander A.
- Subjects
COMPUTER simulation ,FLUX (Metallurgy) ,IONIZED gases ,MATHEMATICAL models - Abstract
Abstract: The paper presents results of numerical simulation of laminar and turbulent weakly ionized plasma flows around model bodies under MHD interaction. Numerical investigations have been carried out of supersonic MHD flow of weakly ionized nitrogen plasma around model bodies (spherically blunted cylinder, truncated cylinder, and cone-cylinder lay-outs) at the conditions of experiments on the Big Shock Tube (BST) of the Ioffe Institute of the Russian Academy of Sciences. The purpose of the investigations was verification of a mathematical model and algorithm, as well as analysis of the predominant factors determining the MHD impact on the flow structure and the thermal load on the model. The effect of the magnetic field induced by the coil installed in the model on the plasma flow was investigated and the efficiency of the MHD interaction was estimated. To enhance the efficiency of the MHD interaction, a surface electric discharge was arranged between an electrode installed on the model nose and a ring coaxial electrode installed in the vicinity of the cone-cylinder conjugation. In the magnetic field induced by the coil, the discharge rotates, producing a domain of high plasma electric conductivity near the cone surface. A comparison of the predictions with experimental data obtained on the BST is presented. Thus the considered system is a typical multiphysics and multiscale one, involving interaction of the supersonic flow with the electromagnetic field, description of large eddy evolution, a turbulence model for the subgrid scale, and effects of a localized electric discharge. Investigations showed that, at the conditions under study, the predominant factors of the MHD impact on the weakly ionized plasma flow around the body determining the flow structure are both the ponderomotive force and the Joule heating. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
37. A new asymptotic approximate model for the Vlasov-Maxwell equations.
- Author
-
Assous, F. and Tsipis, F.
- Subjects
SPEED ,ALGORITHMS ,EQUATIONS ,LIGHT - Abstract
Abstract: In this paper, we derive a new asymptotic approximation of the Vlasov-Maxwell equations. This formulation follows the beam in a speed-of-light frame. It is fourth order accurate in the small characteristic velocity of the beam. The formulation is simpler than standard particle-in-cell methods in the lab frame or in the beam frame. It promises to yield an accurate algorithm that is nevertheless fast and easy to implement. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
38. Computing of gas flows in micro- and nanoscale channels on the base of the Boltzmann Kinetic equation.
- Author
-
Anikin, Yu.A., Derbakova, E.P., Dodulad, O.I., Kloss, Yu.Yu., Martynov, D.V., Rogozin, O.A., Shuvalov, P.V., and Tcheremissine, F.G.
- Subjects
GAS flow ,NANOELECTROMECHANICAL systems ,METHODOLOGY ,GEOMETRY - Abstract
Abstract: The paper describes the methodology of computing gas flows in narrow micro- and nanoscale channels on the basis of finite-difference solution of the Boltzmann kinetic equation using the conservative projection method of collision integral calculation. Mathematical framework of the method is considered and the problem solving environment for calculation of the above mentioned flows is described. Examples of the flow calculations in the plane and 3D geometry are given. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
39. Time space domain decomposition for reactive transport.
- Author
-
Haeberlein, F., Michel, A., and Caetano, F.
- Subjects
SCHWARZ function ,TRANSPORTATION ,FOURIER analysis ,COUPLED mode theory (Wave-motion) - Abstract
Abstract: In this paper, we apply a Schwarz waveform relaxation method to a two-species reactive transport system. By Fourier analysis we find optimal coupling conditions that result in pseudo-differential operators. We approximate these operators by differential operators and give an upper bound for the convergence rate. From this technique a best approximation problem arises that is solved numerically. We finally obtain an optimised transmission condition that we analyse numerically. This result is a first important theoretical step toward the application of domain decomposition methods to large coupled systems of reactive transport equations and has great implications for the global performance of the numerical approach. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
40. Application of the numerical density-enthalpy method to the multi-phase flow through a porous medium.
- Author
-
Ibrahim, Vermolen, F.J., and Vuik, C.
- Subjects
ENTHALPY ,MULTIPHASE flow ,POROUS materials ,DENSITY - Abstract
Abstract: In this paper we apply a new method to solve multi-phase fluid flow problems. This method was developed at TNO and presented previously for spatially homogeneous systems. We call this method the numerical density-enthalpy method because density-enthalpy phase diagrams play an important role in it. Finite elements are used for spatial discretization, along with the streamline-upwind Petrov-Galerkin method. In contrast to conventional methods, the density-enthalpy method eliminates the requirement of separate sets of equations for the various phases and necessitates fewer parametric assumptions. Therefore, it is aimed at a simple formulation and at an increase of efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
41. Improving parallel performance of large-scale watershed simulations.
- Author
-
Eller, Paul R., Cheng, Jing-Ru C., Nguyen, Hung V., and Maier, Robert S.
- Subjects
WATERSHEDS ,SCALABILITY ,SYSTEMS design ,PHYSICS - Abstract
Abstract: A comprehensive, physics-based watershed model with multispatial domains and multitemporal scales has been developed and used. This paper discusses interfacing the watershed model with PETSc and evaluating the model's performance for a variety of PETSc preconditioners. Both wall-clock time and scalability are compared based on performance on a Cray XT4 machine, along with tests to verify that all solutions produce accurate results. The findings show that the PETSc Conjugate Gradient solver and preconditioners outperform the simple Conjugate Gradient solver and Jacobi preconditioner originally used by the watershed model. Tests show that the Hypre BoomerAMG preconditioner provides the most significant speedup for the watershed model. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
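The baseline the paper improves on, a Conjugate Gradient solver with a Jacobi preconditioner, can be written down compactly. A minimal dense-matrix sketch for illustration only: PETSc runs the same iteration on distributed sparse matrices, and the 2x2 system below is an assumption, not watershed data.

```python
def pcg(A, b, max_iter=100, tol=1e-10):
    # Jacobi-preconditioned conjugate gradient for a dense SPD matrix.
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # residual r = b - A x (x = 0)
    z = [r[i] / A[i][i] for i in range(n)]    # Jacobi: apply diag(A)^-1
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        beta = rz_new / rz
        p = [zi + beta * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # small SPD test system
b = [1.0, 2.0]
x = pcg(A, b)                   # solves A x = b
```

Swapping the Jacobi step for a stronger preconditioner (the paper's BoomerAMG result) changes only the two `z = ...` lines conceptually: the rest of the iteration is unchanged.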
42. Analysis of the neural hypercolumn in parallel PCSIM simulations.
- Author
-
Wojcik, Grzegorz M. and Garcia-Lazaro, Jose A.
- Subjects
SIMULATION methods & models ,SOUNDS ,MONKEYS - Abstract
Abstract: Large and sudden changes in pitch or loudness occur statistically less frequently than gradual fluctuations, which means that natural sounds typically exhibit 1/f spectra. Experiments conducted on human subjects showed that listeners indeed prefer 1/f distributed melodies to melodies with faster or slower dynamics. It was recently demonstrated, using animal models, that neurons in the primary auditory cortex of anesthetized ferrets exhibit a pronounced preference for stimuli with 1/f statistics. In the visual modality, it was shown that neurons in the primary visual cortex of macaque monkeys exhibit tuning to sinusoidal gratings featuring 1/f dynamics. One might therefore suspect that neurons in mammalian cortex exhibit Self-Organizing Criticality (SOC). Indeed, we have found SOC-like phenomena in neurophysiological data collected in rat primary somatosensory cortex. In this paper we concentrate on investigating the dynamics of a cortical hypercolumn consisting of about 128 thousand simulated neurons. A set of 128 Liquid State Machines, each consisting of 1024 neurons, was simulated on a simple cluster built of two double quad-core machines (16 cores). PCSIM was designed as a tool for simulating artificial biological-like neural networks composed of different models of neurons and different types of synapses. The simulator is written in C++ with a primary interface for the Python programming language. According to its authors, it is intended to simulate networks containing up to millions of neurons and on the order of billions of synapses. This is achieved by distributing the network over different nodes of a computing cluster using the Message Passing Interface. The results obtained for the Leaky Integrate-and-Fire neuron model used for the construction of the hypercolumn, with varying density of inter-column connections, will be discussed. Benchmarking results for PCSIM on the cluster, and predictions for grid computing, will be presented to some extent. The research presented herein is a good starting point for simulations of very large parts of the mammalian cortex, leading toward a better understanding of the functionality of the human brain. [Copyright Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
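The Leaky Integrate-and-Fire model mentioned in the abstract above can be integrated with a forward-Euler scheme in a few lines. This is a generic textbook sketch, not PCSIM code, and all parameter values are illustrative assumptions.

```python
def simulate_lif(i_ext, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0, steps=1000):
    # Forward-Euler integration of a leaky integrate-and-fire neuron:
    # tau * dV/dt = -(V - v_rest) + r_m * i_ext,
    # with a spike recorded and V reset whenever V crosses threshold.
    v = v_rest
    spikes = []
    for step in range(steps):
        dv = (-(v - v_rest) + r_m * i_ext) / tau
        v += dt * dv
        if v >= v_thresh:
            spikes.append(step * dt)   # spike time in ms
            v = v_reset
    return spikes
```

With a strong input current the steady-state voltage sits above threshold and the neuron fires periodically; with a weak one the voltage saturates below threshold and no spikes occur.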
43. eResearch bootcamp: grooming next-gen researchers.
- Author
-
Maxville, Valerie
- Subjects
INTERNSHIP programs ,GRADUATE education ,NANOCHEMISTRY ,ASTRONOMY - Abstract
Abstract: The next generation of researchers is not only attempting to grasp their disciplines and research areas, but also needs to be well-versed in technology-powered techniques for research and its dissemination - a shift referred to as eResearch. The iVEC summer internship program provides an intense research training program to help undergraduate students transition into postgraduate programs and innovative industry positions. Interns are supervised by leading researchers in areas including radio astronomy, nanochemistry and molecular dynamics. More than a summer research project, the program aims to give a comprehensive introduction to core and emerging techniques for research. This paper describes the evolution of the internship program, the goals and methods used, and a reflection on the results we are observing. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
44. Simulating the formation of biofilms in an undergraduate modeling course.
- Author
-
Shiflet, Angela B. and Shiflet, George W.
- Subjects
BIOFILMS ,SIMULATION methods & models ,STUDENTS - Abstract
Abstract: Meaningful applications that illustrate fundamental concepts and techniques are crucial in computational science education. In this paper, we discuss development of a simulation on the structural growth of a biofilm that is appropriate for modeling, simulation, or high performance computing courses. Consideration of cellular automaton simulations, boundary conditions, and diffusion in this context can empower students to develop similar simulations for other applications. Moreover, extensions of the basic model can illustrate and motivate the need for high performance computing in computational science. The module, “Biofilms: United They Stand, Divided They Colonize,” used for instruction and developed by the authors as an Undergraduate Petascale Education Program (UPEP) Curriculum Module is available at http://computationalscience.org/upep/curriculum. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
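A biofilm-growth simulation of the kind described in the module above can start from a very small cellular automaton. The rule below (occupied cells colonise empty von Neumann neighbours with a fixed probability, on a bounded grid) is a deliberately simplified stand-in for the module's nutrient-limited growth and diffusion rules; the grid size, seed, and probability are assumptions.

```python
import random

def step(grid, growth_prob, rng):
    # One cellular-automaton step: each occupied cell colonises each empty
    # von Neumann neighbour with probability growth_prob.  Updates are
    # computed against the old grid, so new cells grow only next step.
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(rows):
        for j in range(cols):
            if grid[i][j]:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < rows and 0 <= nj < cols
                            and not grid[ni][nj]
                            and rng.random() < growth_prob):
                        new[ni][nj] = 1
    return new

rng = random.Random(42)                 # fixed seed for reproducibility
grid = [[0] * 9 for _ in range(9)]
grid[4][4] = 1                          # a single founder cell
for _ in range(5):
    grid = step(grid, 0.5, rng)
biomass = sum(map(sum, grid))           # number of occupied sites
```

Students can extend exactly this skeleton with nutrient diffusion and different boundary conditions, which is where the need for high performance computing becomes apparent.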
45. Anisotropic mesh adaptivity for cardiac electrophysiology.
- Author
-
Southern, J., Gorman, G.J., Piggott, M.D., Farrell, P.E., Bernabeu, M.O., and Pitt-Francis, J.
- Subjects
ANISOTROPY ,ELECTROPHYSIOLOGY ,CENTRAL processing units ,COMPUTERS - Abstract
Abstract: The simulation of cardiac electrophysiology requires small time steps and a fine mesh in order to resolve very sharp, but highly localized, wavefronts. The use of very high resolution meshes containing large numbers of nodes results in a high computational cost, both in terms of CPU hours and memory footprint. In this paper an anisotropic mesh adaptivity technique is implemented in the Chaste physiological simulation library in order to reduce the mesh resolution away from the depolarization front. Adapting the mesh results in a reduction in the number of degrees of freedom of the system to be solved by an order of magnitude during propagation and 2–3 orders of magnitude in the subsequent plateau phase. As a result, a computational speedup by a factor of between 5 and 12 has been obtained with no loss of accuracy, both in a slab-like geometry and for a realistic heart mesh with a spatial resolution of 0.125 mm. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
46. A note on discontinuous rate functions for the gate variables in mathematical models of cardiac cells.
- Author
-
Hanslien, Monicaa, Holden, Nina, and Sundnes, Joakim
- Subjects
HEART cells ,MATHEMATICAL models ,BIOLOGICAL membranes ,EQUATIONS - Abstract
Abstract: The gating mechanism of ionic channels in cardiac cells is often modeled by ordinary differential equations (ODEs) with voltage dependent rates of change. Some of these rate functions contain discontinuities or singularities, which are not physiologically founded but rather introduced to fit experimental data. Such non-smooth right hand sides of ODEs are associated with potential problems when the equations are solved numerically, in the form of reduced order of accuracy and inconsistent convergence. In this paper we propose to replace the discontinuous rates with smooth versions, by fitting functions of the form introduced by Noble (1962) to the original data found by Ebihara and Johnson (1980). We find that eliminating the discontinuities in the rate functions enables the numerical method to obtain the expected order of accuracy, and has a negligible effect on the kinetics of the membrane model. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
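The kind of singular rate expression the note above is concerned with can be illustrated directly, together with a numerically safe smooth evaluation. The constants below are illustrative assumptions, not the Ebihara-Johnson values, and the paper's actual remedy is fitting Noble-form functions to the original data rather than the `expm1`-based evaluation shown here.

```python
import math

def alpha_singular(v, a=0.32, v0=-47.13, b=4.0):
    # Hodgkin-Huxley-style rate with a removable singularity at v = v0:
    #   alpha(v) = a * (v - v0) / (1 - exp(-(v - v0)/b))
    # Naive evaluation divides 0 by 0 exactly at the singular voltage.
    x = v - v0
    return a * x / (1.0 - math.exp(-x / b))

def alpha_smooth(v, a=0.32, v0=-47.13, b=4.0):
    # Same function evaluated safely: near v0, return the analytic limit
    # alpha(v0) = a * b; elsewhere use expm1 for better accuracy.
    x = v - v0
    if abs(x) < 1e-7:
        return a * b
    return a * x / -math.expm1(-x / b)
```

A smooth right-hand side of this form is what lets an ODE solver recover its expected order of accuracy, which is the paper's point.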
47. Automated measurement of quality of mucosa inspection for colonoscopy.
- Author
-
Liu, Xuemin, Tavanapong, Wallapak, Wong, Johnny, Oh, JungHwan, and de Groen, Piet C.
- Subjects
MUCOUS membranes ,COLONOSCOPY ,IMAGE analysis ,QUALITY control - Abstract
Abstract: Colonoscopy is currently the preferred screening modality for prevention of colorectal cancer. However, the effectiveness of colonoscopy depends on the quality of the procedure, which in turn depends on several factors. In this paper, we present new methods that derive a new quality metric for automated scoring of the quality of mucosa inspection performed by the endoscopist. We conducted Pearson's correlation analysis of the computerized metric scores against the averages of the manual scores given by four domain experts on twenty-one colonoscopy videos. Our metric shows a relatively strong positive correlation (Pearson's correlation coefficient of 0.72) between the computer-generated score and the ground truth. Hence, the proposed method is very promising for quality control/assurance in routine colonoscopy screening. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
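Pearson's correlation coefficient, the statistic used above to validate the automated scores against the experts' manual scores, is straightforward to compute; the example data below are made up for illustration.

```python
import math

def pearson_r(xs, ys):
    # Pearson's correlation coefficient between two equal-length samples:
    # covariance divided by the product of the standard deviations.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# r is 1 for perfectly linear data (up to floating-point rounding).
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```

A value of 0.72, as reported in the abstract, indicates a strong but imperfect linear agreement between the automated and manual scores.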
48. The influence of mitoses rate on growth dynamics of a cellular automata model of tumour growth.
- Author
-
Naumov, Lev, Hoekstra, Alfons, and Sloot, Peter
- Subjects
MITOSIS ,TUMORS ,CELLULAR automata ,BIOLOGY - Abstract
Abstract: Mitosis inside a tumour can be prohibited for different reasons, such as overcrowding or physical pressure. At the same time, the rate of successful mitoses inside a tumour can hardly be measured in vivo or in vitro, but is easily modeled in silico. In this paper we present a study of the influence of the mitoses rate on the growth dynamics of a cellular automata model of tumour growth. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
49. Geomedica: managing and querying clinical data distributions on geographical database systems.
- Author
-
Tradigo, G., Veltri, P., and Greco, S.
- Subjects
DATABASES ,DATA mining ,SPATIO-temporal variation - Abstract
Abstract: Geographical databases are a significant and mature tool, useful in many application areas thanks to the spread of new positioning and mapping technologies. Geographical functionalities can be added to existing applications, from land management to water and electricity control systems. The use of geographical information applications greatly improves data interpretation, thus helping users make better decisions. Further improvements can be obtained by using more sophisticated tools (e.g. On Line Analytical Processing and Data Mining techniques) to highlight interesting and previously unknown relations in spatio-temporal data, which can help in a better understanding of the data. In this paper we report the experience of using GIS technologies to analyze clinical data containing health information about a large population. Clinical data have been geocoded by associating tuples related to some geographical position with the coordinates of a map, and then analyzed and queried using both SQL-like languages and a graphical user interface. Several experiments have been performed using data related to an Italian district, furnished by an association of family doctors and patients. Test queries performed on the available dataset were able to correctly correlate health data about patients with geographical features (e.g. points of interest, boundaries, coastline vectors) and to visualize the geographical distributions of diseases on a map. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
50. Using ontologies for querying and analysing protein-protein interaction data.
- Author
-
Cannataro, Mario, Guzzi, Pietro Hiram, and Veltri, Pierangelo
- Subjects
ONTOLOGIES (Information retrieval) ,PROTEINS ,QUERYING (Computer science) ,ONTOLOGY - Abstract
Abstract: Biological function is to a large extent mediated and controlled by interactions among proteins. The study of protein interactions has led to the accumulation of a large amount of data, also referred to as protein-protein interaction (PPI) data. Such data, stored in publicly available databases, are often queried by using simple key-based query interfaces with little semantics. Current PPI databases enable the retrieval of one or more proteins that interact with a target protein using the target protein's identifier. Nevertheless, a lot of biological information is available, spread across different sources and encoded in different ontologies (e.g. Gene Ontology). Annotating existing PPI databases with biological information may result in richer querying interfaces and could subsequently enable the development of novel algorithms that use such biological information. The main contributions of this paper are: (i) a framework able to extend existing PPI databases by using ontologies, and (ii) a semantic-based querying interface. The framework merges PPI data with annotations extracted from Gene Ontology and stores the annotated data in a database. Then, a semantic-based query interface enables users to query these data by using biological concepts. Finally, a real case study showing the effectiveness of the framework for the analysis of PPI data is also presented. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF