29 results for "Graphical method"
Search Results
2. A graphical method for simplifying Bayesian games.
- Author: Thwaites, Peter A. and Smith, Jim Q.
- Subjects: DECISION trees; PARSIMONIOUS models; WEB design; UTILITY functions; ALGORITHMS
- Abstract:
If the influence diagram (ID) depicting a Bayesian game is common knowledge to its players, then additional assumptions may allow the players to make use of its embodied irrelevance statements. They can then use these to discover a simpler game which still embodies both players' optimal decision policies. However, the impact of this result has been rather limited, because many common Bayesian games do not exhibit sufficient symmetry to be fully and efficiently represented by an ID. The tree-based chain event graph (CEG) has been developed specifically for such asymmetric problems. By using these graphs, rational players can make analogous deductions, assuming the topology of the CEG as common knowledge. In this paper we describe these powerful new techniques and illustrate them through an example modelling a game played between a government department and the provider of a website designed to radicalise vulnerable people. [ABSTRACT FROM AUTHOR]
- Published: 2018
3. A Graphical Method for Solving Interval Matrix Games.
- Author: Akyar, Handan and Akyar, Emrah
- Subjects: GAME theory; FUZZY numbers; GRAPHIC methods; LINEAR programming; ALGORITHMS; REAL numbers; MATHEMATICAL functions
- Abstract:
2 × n or m × 2 interval matrix games are considered, and a graphical method for solving such games is given. The interval matrix game is the interval generalization of the classical matrix game. Because of uncertainty in real-world applications, the payoffs of a matrix game may not be fixed numbers. Since the payoffs may vary within a range for fixed strategies, an interval-valued matrix can be used to model such uncertainties. In the literature, there are different approaches for the comparison of fuzzy numbers and interval numbers. In this work, the acceptability index suggested by Sengupta et al. (2001) and Sengupta and Pal (2009) is used, and in view of the acceptability index, the well-known graphical method for matrix games is adapted to interval matrix games. [ABSTRACT FROM AUTHOR]
- Published: 2011
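The interval comparison underlying this adaptation can be sketched in a few lines. The following is a minimal illustration of the Sengupta–Pal acceptability index (midpoint difference scaled by the total half-width), not the authors' implementation, and the example payoff intervals are made up:

```python
# Sengupta-Pal acceptability index for two closed intervals a = [aL, aU],
# b = [bL, bU]: the grade to which "a is inferior to b" is accepted.
# A positive value suggests a rational player prefers payoff interval b.

def midpoint(iv):
    lo, hi = iv
    return (lo + hi) / 2.0

def half_width(iv):
    lo, hi = iv
    return (hi - lo) / 2.0

def acceptability_index(a, b):
    """A(a <= b) = (m(b) - m(a)) / (w(a) + w(b)), where w is the half-width."""
    denom = half_width(a) + half_width(b)
    if denom == 0.0:  # both intervals degenerate: compare as real numbers
        return float("inf") if midpoint(b) > midpoint(a) else 0.0
    return (midpoint(b) - midpoint(a)) / denom

# Example payoff comparison: [2, 4] vs [3, 7] gives index 2/3 > 0,
# so the interval [3, 7] is the preferred payoff.
index = acceptability_index((2, 4), (3, 7))
```

With such a comparison rule in hand, the classical graphical method for 2 × n games can rank the interval-valued expected-payoff lines in the same way it ranks real-valued ones.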
4. A Graphical Method for Assessing the Identification of Linear Structural Equation Models.
- Author: Eusebi, Paolo
- Subjects: STRUCTURAL equation modeling; LINEAR statistical models; GRAPHIC methods; MATRICES (Mathematics); ALGORITHMS; PARAMETERS (Statistics); MATHEMATICAL variables; STATISTICAL correlation
- Abstract:
A graphical method is presented for assessing the state of identifiability of the parameters in a linear structural equation model based on the associated directed graph. We do not restrict attention to recursive models. In the recent literature, methods based on graphical models have been presented as a useful tool for assessing the state of identifiability of the parameters of the model. This article proposes the graphical counterpart of the rank condition of the matrix of structural coefficients, which allows for checking of the identifiability through a simple graphical rule. This approach can be used to develop algorithms. [ABSTRACT FROM AUTHOR]
- Published: 2008
5. A graphical method for reducing and relating models in systems biology.
- Author: Gay, Steven, Soliman, Sylvain, and Fages, François
- Subjects: SYSTEMS biology; COMPUTATIONAL biology; MOLECULAR biology; ALGORITHMS; BIOINFORMATICS
- Abstract:
Motivation: In Systems Biology, an increasing collection of models of various biological processes is currently developed and made available in publicly accessible repositories, such as biomodels.net for instance, through common exchange formats such as SBML. To date, however, there is no general method to relate different models to each other by abstraction or reduction relationships, and this task is left to the modeler for re-using and coupling models. In mathematical biology, model reduction techniques have been studied for a long time, mainly in the case where a model exhibits different time scales, or different spatial phases, which can be analyzed separately. These techniques are however far too restrictive to be applied on a large scale in systems biology, and do not take into account abstractions other than time or phase decompositions. Our purpose here is to propose a general computational method for relating models together, by considering primarily the structure of the interactions and abstracting from their dynamics in a first step. Results: We present a graph-theoretic formalism with node merge and delete operations, in which model reductions can be studied as graph matching problems. From this setting, we derive an algorithm for deciding whether there exists a reduction from one model to another, and evaluate it on the computation of the reduction relations between all SBML models of the biomodels.net repository. In particular, in the case of the numerous models of MAPK signalling, and of the circadian clock, biologically meaningful mappings between models of each class are automatically inferred from the structure of the interactions. We conclude on the generality of our graphical method, on its limits with respect to the representation of the structure of the interactions in SBML, and on some perspectives for dealing with the dynamics. [ABSTRACT FROM AUTHOR]
- Published: 2010
6. Graphical comparison of multivariate nonparametric location tests for restricted alternatives
- Author: Vock, Michael
- Subjects: GRAPHICAL modeling (Statistics); GRAPHIC methods for multivariate analysis; ALGORITHMS; FOUNDATIONS of arithmetic
- Abstract:
There have been several proposals of nonparametric tests for restricted (or "one-sided") multivariate location alternatives. In this article, a graphical means of assessing the adequacy of a test for the different types of hypotheses is presented, and an algorithm is proposed for practical application. [Copyright Elsevier]
- Published: 2006
7. Comparison of two different methods of image analysis for the assessment of microglial activation in patients with multiple sclerosis using (R)-[N-methyl-carbon-11]PK11195.
- Author: Kang, Yeona, Schlyer, David, Kaunzner, Ulrike W., Kuceyeski, Amy, Kothari, Paresh J., and Gauthier, Susan A.
- Subjects: MULTIPLE sclerosis; MICROGLIA; MACROPHAGES; IMMUNE response; MULTIPLE sclerosis treatment; PATIENTS
- Abstract:
Chronic active multiple sclerosis (MS) lesions have a rim of activated microglia/macrophages (m/M) leading to ongoing tissue damage, and thus represent a potential treatment target. Activation of this innate immune response in MS has been visualized and quantified using PET imaging with [11C]-(R)-PK11195 (PK). Accurate identification of m/M activation in chronic MS lesions requires the sensitivity to detect lower levels of activity within a small tissue volume. We assessed the ability of kinetic modeling of PK PET data to detect m/M activity in different central nervous system (CNS) tissue regions of varying sizes and in chronic MS lesions. Ten patients with MS underwent a single brain MRI and two PK PET scans 2 hours apart. Volume of interest (VOI) masks were generated for the white matter (WM), cortical gray matter (CGM), and thalamus (TH). The distribution volume (VT) was calculated with the Logan graphical method (LGM-VT) utilizing an image-derived input function (IDIF). The binding potential (BPND) was calculated with the reference Logan graphical method (RLGM) utilizing a supervised clustering algorithm (SuperPK) to determine the non-specific binding region. Masks of varying volume were created in the CNS to assess the impact of region size on the various metrics among high and low uptake regions. Chronic MS lesions were also evaluated and individual lesion masks were generated. The highest PK uptake occurred in the TH and the lowest within the WM, as demonstrated by the mean time activity curves. In the TH, both reference and IDIF based methods resulted in estimates that did not significantly depend on VOI size. However, in the WM, the test-retest reliability of BPND was significantly lower in the smallest VOI, compared to the estimates of LGM-VT. These observations were consistent for all chronic MS lesions examined.
In this study, we demonstrate that BPND and LGM-VT are both reliable for quantifying m/M activation in regions of high uptake; however, with a blood input function, LGM-VT is preferred for assessing longitudinal m/M activation in regions of relatively low uptake, such as chronic MS lesions. [ABSTRACT FROM AUTHOR]
- Published: 2018
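The Logan graphical method used in this study reduces kinetic quantification to a straight-line fit: once the plot becomes linear, its slope estimates the distribution volume VT. The sketch below is a generic illustration of that idea on synthetic one-tissue-compartment data, not the study's pipeline; the rate constants, sampling grid, and linearity cutoff t* are all assumed values:

```python
# Hedged sketch of the Logan graphical method for a reversible tracer:
# plot  int_0^T C(t)dt / C(T)  against  int_0^T Cp(t)dt / C(T);
# at late times (t >= t_star) the plot is linear with slope ~ VT.
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y(t)."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

def logan_vt(t, c_tissue, c_plasma, t_star=30.0):
    x = cumtrapz(c_plasma, t) / c_tissue
    y = cumtrapz(c_tissue, t) / c_tissue
    late = t >= t_star                     # assumed start of the linear segment
    slope, _intercept = np.polyfit(x[late], y[late], 1)
    return slope

# Synthetic one-tissue-compartment check: VT = K1/k2 = 0.1/0.05 = 2
t = np.linspace(0.01, 90, 500)             # minutes; avoid division at t = 0
cp = np.exp(-0.1 * t)                      # toy plasma input function
K1, k2 = 0.1, 0.05
ct = K1 * (np.exp(-k2 * t) - np.exp(-0.1 * t)) / (0.1 - k2)   # analytic 1T response
vt = logan_vt(t, ct, cp)                   # close to 2
```

For the one-tissue model the Logan plot is exactly linear (the intercept is -1/k2), which makes it a convenient self-check before applying the method to measured time-activity curves.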
8. Stochastic EM algorithm for generalized exponential cure rate model and an empirical study.
- Author: Davies, Katherine, Pal, Suvra, and Siddiqua, Joynob A.
- Subjects: EXPECTATION-maximization algorithms; BREAST cancer; ALGORITHMS; DISTRIBUTION (Probability theory); SURVIVAL analysis (Biometry)
- Abstract:
In this paper, we consider two well-known parametric long-term survival models, namely, the Bernoulli cure rate model and the promotion time (or Poisson) cure rate model. Assuming the long-term survival probability to depend on a set of risk factors, the main contribution is in the development of the stochastic expectation maximization (SEM) algorithm to determine the maximum likelihood estimates of the model parameters. We carry out a detailed simulation study to demonstrate the performance of the proposed SEM algorithm. For this purpose, we assume the lifetimes due to each competing cause to follow a two-parameter generalized exponential distribution. We also compare the results obtained from the SEM algorithm with those obtained from the well-known expectation maximization (EM) algorithm. Furthermore, we investigate a simplified estimation procedure for both the SEM and EM algorithms that allows the objective function being maximized to be split into simpler, lower-dimensional functions of the model parameters. Moreover, we present examples where the EM algorithm fails to converge but the SEM algorithm still works. For illustrative purposes, we analyze breast cancer survival data. Finally, we use a graphical method to assess the goodness-of-fit of the model with generalized exponential lifetimes. [ABSTRACT FROM AUTHOR]
- Published: 2021
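The core SEM idea, replacing the E-step with a stochastic imputation of the latent cure indicators, can be sketched on a deliberately simplified stand-in: a Bernoulli cure rate model with no covariates and exponential (rather than generalized exponential) lifetimes. All parameter values and the censoring scheme below are invented for illustration, not taken from the paper:

```python
# Simplified stochastic EM (SEM) sketch for a Bernoulli cure rate model.
# Censored subjects are stochastically imputed as cured/uncured (S-step),
# then the complete-data likelihood is maximized in closed form (M-step).
import math
import random

rng = random.Random(0)

# Synthetic data: cured fraction pi_true, uncured lifetimes ~ Exp(lam_true),
# administrative censoring at time tau (all values assumed).
n, pi_true, lam_true, tau = 2000, 0.3, 0.5, 8.0
data = []
for _ in range(n):
    if rng.random() < pi_true:
        data.append((tau, 0))                         # cured: never fails
    else:
        t = rng.expovariate(lam_true)
        data.append((min(t, tau), 1 if t < tau else 0))

pi_hat, lam_hat, trace = 0.5, 1.0, []
for it in range(300):
    # S-step: impute a cure indicator for each censored subject
    cured = []
    for t, d in data:
        if d == 1:
            cured.append(False)                       # observed failure: uncured
        else:
            surv = math.exp(-lam_hat * t)             # P(uncured survives past t)
            cured.append(rng.random() < pi_hat / (pi_hat + (1 - pi_hat) * surv))
    # M-step: closed-form maximization given the imputed indicators
    pi_hat = sum(cured) / n
    events = sum(d for (_, d) in data)
    exposure = sum(t for (t, _), c in zip(data, cured) if not c)
    lam_hat = events / exposure                       # exponential-rate MLE
    if it >= 200:
        trace.append((pi_hat, lam_hat))               # keep post-burn-in draws

# SEM estimates: average the chain after burn-in
pi_est = sum(p for p, _ in trace) / len(trace)
lam_est = sum(l for _, l in trace) / len(trace)
```

Unlike EM, each iteration is random, so the chain is averaged after a burn-in period rather than run to a fixed point; this randomness is also what lets SEM escape regions where EM stalls.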
9. Active vibration control: A graphical approach for optimal distribution.
- Author: Wang, Jiqiang
- Subjects: VIBRATION (Mechanics); OPTIMAL control theory; MATHEMATICAL optimization; ALGORITHMS; ELECTRONIC controllers; SIMULATION methods & models
- Abstract:
Highlights:
• A graphical method is proposed for optimal vibration distribution.
• Existence and optimality of feasible controllers are addressed.
• Performance limit of controllers is obtained.
• Influence of constraints on optimal solutions is delineated.
A number of design methodologies have been proposed for active control of vibrations. Most of the approaches proceed by formulating control design as an optimization problem. These optimization-based methods are powerful and easy to use, since numerical algorithms are usually available for finding solutions. However, the available methods have to be significantly modified to answer questions such as controller existence and achievable performance. This paper proposes a performance-limit-oriented solution developed for the problem of optimal distribution. A graphical approach is utilized for representation of the solutions. A number of important results on the existence of feasible controllers, the optimality and performance limit of controllers, controller synthesis, and constraints handling are obtained; the puzzling issue of compromise between performance indices is resolved; and the influence of constraints on optimal solutions is developed within the same graphical framework. This leads to a systematic design methodology for active vibration control with optimal distribution. The theoretical results are demonstrated through numerical examples and real-time simulations. [ABSTRACT FROM AUTHOR]
- Published: 2019
10. Estimation of Aquifer Parameters from Pumping Test Data by Genetic Algorithm Optimization Technique.
- Author: Samuel, Manoj P. and Jha, Madan K.
- Subjects: AQUIFERS; HYDROGEOLOGY; GENETIC algorithms; COMBINATORIAL optimization; ALGORITHMS; MATHEMATICAL optimization
- Abstract:
Adequate and reliable estimates of aquifer parameters are of utmost importance for proper management of vital groundwater resources. The pumping (aquifer) test is the standard technique for estimating various hydraulic properties of aquifer systems, viz., transmissivity (T), hydraulic conductivity (K), storage coefficient (S), and leakance (L), for which the graphical method is widely used. In the present study, the efficacy of the genetic algorithm (GA) optimization technique is assessed in estimating aquifer parameters from time-drawdown pumping test data. Computer codes were developed to optimize various aquifer parameters under different hydrogeologic conditions by using the GA technique. Applicability, adequacy, and robustness of the developed codes were tested using 12 sets of published and unpublished aquifer test data. The aquifer parameters were also estimated by the graphical method using AquiferTest software, and were compared with those obtained by the GA technique. The GA technique yielded significantly lower values of the sum of squared errors (SSE) for almost all the datasets under study. The results revealed that the GA technique is an efficient and reliable method for estimating various aquifer parameters, especially in situations where the graphical matching is poor. Also, it was found that because of its inherent characteristics, GA avoids the subjectivity, long computation time and ill-posedness often associated with conventional optimization techniques. Furthermore, the performance evaluation of the developed GA-based computer codes showed that the fitness value (SSE) of the best point in a population reduces with increasing generation number and population size. The analysis of the sensitivity of the parameters during the performance of GA indicated that a unique set of aquifer parameters was obtained for all three aquifer systems.
The GA-based computer programs with interactive windows developed in this study are user-friendly and can serve as a teaching and research tool, which could also be useful for practicing hydrologists and hydrogeologists. [ABSTRACT FROM AUTHOR]
- Published: 2003
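The GA-versus-SSE idea described above can be sketched on the classic Theis drawdown model. This is an illustrative toy, not the study's codes: the pumping rate, radius, "true" T and S, observation times, GA settings, and search ranges are all assumed values:

```python
# Estimating transmissivity T and storage coefficient S from synthetic
# Theis time-drawdown data by minimizing the SSE with a small genetic
# algorithm that searches in log10-parameter space.
import math
import random

def well_function(u, terms=60):
    """Theis well function W(u) via its series; asymptotic tail for large u."""
    if u > 5.0:
        return math.exp(-u) / u                       # W(u) ~ e^-u / u
    w = -0.5772156649 - math.log(u)
    un, fact = 1.0, 1.0
    for n in range(1, terms):
        un *= u
        fact *= n
        w += (-1) ** (n + 1) * un / (n * fact)
    return w

def drawdown(t, T, S, Q, r):
    return Q / (4 * math.pi * T) * well_function(r * r * S / (4 * T * t))

def sse(params, data, Q, r):
    T, S = params
    return sum((s_obs - drawdown(t, T, S, Q, r)) ** 2 for t, s_obs in data)

def ga_fit(data, Q, r, pop=60, gens=120, seed=1):
    rng = random.Random(seed)
    decode = lambda g: (10 ** g[0], 10 ** g[1])       # genes: log10(T), log10(S)
    fitness = lambda g: sse(decode(g), data, Q, r)
    popn = [[rng.uniform(-5, 0), rng.uniform(-6, -1)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness)
        elite = popn[: pop // 4]                      # elitist selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)               # blend crossover + mutation
            children.append([(x + y) / 2 + rng.gauss(0, 0.1) for x, y in zip(a, b)])
        popn = elite + children
    popn.sort(key=fitness)
    return decode(popn[0])

# Synthetic test case (assumed values): T = 5e-3 m^2/s, S = 2e-4
Q, r = 0.02, 50.0
T_true, S_true = 5e-3, 2e-4
times = [60.0 * m for m in (1, 2, 5, 10, 20, 50, 100, 200, 500)]
data = [(t, drawdown(t, T_true, S_true, Q, r)) for t in times]
T_hat, S_hat = ga_fit(data, Q, r)
```

Searching in log10 space is the design choice that matters here: T and S span orders of magnitude, so a Gaussian mutation of fixed size in log space explores the physically plausible range far better than one in linear units.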
11. Voxel level quantification of [11C]CURB, a radioligand for Fatty Acid Amide Hydrolase, using high resolution positron emission tomography.
- Author: Rusjan, Pablo M., Knezevic, Dunja, Boileau, Isabelle, Tong, Junchao, Mizrahi, Romina, Wilson, Alan A., and Houle, Sylvain
- Subjects: BRAIN mapping; VOXEL-based morphometry; RADIOLIGAND assay; POSITRON emission tomography; PROBABILITY theory
- Abstract:
[11C]CURB is a novel irreversible radioligand for imaging fatty acid amide hydrolase in the human brain. In the present work, we validate an algorithm for generating parametric map images of [11C]CURB acquired with a high resolution research tomograph (HRRT) scanner. This algorithm applies the basis function method on an irreversible two-tissue compartment model (k4 = 0) with arterial input function, i.e., BAFPIC. Monte Carlo simulations are employed to assess bias and variability of the binding macroparameters (Ki and λk3) as a function of the voxel noise level and the range of basis functions. The results show that for a [11C]CURB time activity curve with noise levels corresponding to a voxel of an image acquired with the HRRT and reconstructed with the filtered back projection algorithm, the implementation of BAFPIC requires the use of a constant vascular fraction of tissue (5%) and a cutoff for slow frequencies (0.06 min-1). With these settings, BAFPIC maintains the probabilistic distributions of the binding macroparameters with approximately Gaussian shape and minimizes the bias and variability for large physiological ranges of the rate constants of [11C]CURB. BAFPIC reduces the variability of Ki to a third of that given by Patlak plot, the standard graphical method for irreversible radioligands. Application to real data demonstrated an excellent correlation between region of interest and BAFPIC parametric data and agreed with the simulations results. Therefore, BAFPIC with a constant vascular fraction can be used to generate parametric maps of [11C]CURB images acquired with an HRRT provided that the limits of the basis functions are carefully selected. [ABSTRACT FROM AUTHOR]
- Published: 2018
12. THE CURVE FITTING ALGORITHM FOR THERMAL DESORPTION OF SHALE GAS.
- Author: GUOYONG ZUO, GUOSHENG JIANG, BO LI, HAITONG ZHAO, and MENG ZHANG
- Subjects: THERMAL desorption; SHALE gas; ALGORITHMS; DESORPTION
- Abstract:
Shale gas is an important unconventional oil and gas resource that exists in shale in the form of adsorbed gas and free gas. Shale gas content is the key factor for shale gas assessment and core area evaluation. The desorption method is a useful way of measuring shale gas content, but its disadvantage is that the test takes a long time. We performed shale gas desorption experiments in northern Shanxi, China, and put forward five fitting algorithm models. By analysing the correlation coefficients and the characteristics of the desorption rate, we found that composite-function fitting can correctly reflect the desorption process; moreover, applying the graphical method to the measured data reveals a clear positive correlation between desorption rate and desorption temperature. The research results are useful for predicting shale gas content and greatly reducing the testing time of shale gas desorption. [ABSTRACT FROM AUTHOR]
- Published: 2016
13. Evaluation of a weighting approach for performing sensitivity analysis after multiple imputation.
- Author: Rezvan, Panteha Hayati, White, Ian R., Lee, Katherine J., Carlin, John B., and Simpson, Julie A.
- Subjects: ALGORITHMS; COMPUTER simulation; REGRESSION analysis; RESEARCH funding; STATISTICS; DATA analysis; STATISTICAL models
- Abstract:
Background: Multiple imputation (MI) is a well-recognised statistical technique for handling missing data. As usually implemented in standard statistical software, MI assumes that data are 'Missing at random' (MAR), an assumption that in many settings is implausible. It is not possible to distinguish whether data are MAR or 'Missing not at random' (MNAR) using the observed data, so it is desirable to discover the impact of departures from the MAR assumption on the MI results by conducting sensitivity analyses. A weighting approach based on a selection model has been proposed for performing MNAR analyses to assess the robustness of results obtained under standard MI to departures from MAR.
Methods: In this article, we use simulation to evaluate the weighting approach as a method for exploring possible departures from MAR, with missingness in a single variable, where the parameters of interest are the marginal mean (and probability) of a partially observed outcome variable and a measure of association between the outcome and a fully observed exposure. The simulation studies compare the weighting-based MNAR estimates for various numbers of imputations in small and large samples, for moderate to large magnitudes of departure from MAR, where the degree of departure from MAR was assumed known. Further, we evaluated a proposed graphical method, which uses the dataset with missing data, for obtaining a plausible range of values for the parameter that quantifies the magnitude of departure from MAR.
Results: Our simulation studies confirm that the weighting approach outperformed the MAR approach, but it still suffered from bias. In particular, our findings demonstrate that the weighting approach provides biased parameter estimates, even when a large number of imputations is performed. In the examples presented, the graphical approach for selecting a range of values for the possible departures from MAR did not capture the true parameter value of departure used in generating the data.
Conclusions: Overall, the weighting approach is not recommended for sensitivity analyses following MI, and further research is required to develop more appropriate methods to perform such sensitivity analyses. [ABSTRACT FROM AUTHOR]
- Published: 2015
14. A dynamic state-space analysis of interpersonal emotion regulation in couples who smoke.
- Author: Butler, Emily A., Hollenstein, Tom, Shoham, Varda, and Rohrbaugh, Michael J.
- Subjects: SMOKING & psychology; PSYCHOLOGICAL adaptation; ALGORITHMS; CONCEPTUAL structures; INTERPERSONAL relations; MARITAL status; PHENOMENOLOGY; PSYCHOLOGICAL tests; PSYCHOLOGY; RESEARCH funding; SELF-management (Psychology); SEX distribution; SMOKING; SMOKING cessation; VIDEO recording; THEORY; SOCIAL attitudes; BODY movement
- Abstract:
Regulating emotions in interpersonal contexts requires managing one’s own emotion, a partner’s emotion, and the emotional tone of the relationship (e.g., conflict and intimacy). This multifaceted regulatory challenge, often referred to as “relationship-focused coping,” has been associated with health outcomes, but the real-time emotional processes involved are understudied. We use state-space grids (a recently developed graphical method) to investigate dynamic sequences of emotional experience (positive vs. negative) and relationship-focused coping intentions (to protect vs. engage one’s partner) taken from 26 couples in which one or both partners were smokers, while they discussed a health-related disagreement during a nonsmoking baseline and then while smoking. State-space indicators of contingent emotion-coping sequences showed evidence of both successful regulation (associated with improving emotional state) and unsuccessful regulation (associated with worsening emotional state). The pattern of results suggests that interpersonal emotion regulation may interfere with smoking cessation differently depending upon whether one or both partners smoke. [ABSTRACT FROM PUBLISHER]
- Published: 2014
15. Assisted history matching and graphical methods for estimating individual layer properties from well testing data in stratified reservoirs with multilateral wells.
- Author: Bela, Renan Vieira, Pesco, Sinesio, Barreto, Abelardo Borges, and Onur, Mustafa
- Subjects: RADIAL flow; SKIN permeability; MATHEMATICAL optimization; ALGORITHMS; HORIZONTAL wells
- Abstract:
One of the main purposes for conducting well tests is to obtain information about reservoir parameters, such as its permeability and the existence of skin effects. Determining individual layer properties from pressure transient data in multilayer reservoirs is challenging, since pressure behavior is influenced by the properties of all layers. This work presents three methods for estimating layer properties from well testing data in multilayer reservoirs with multilateral wells. First, we extend two existing graphical straight-line methods for estimating individual layer properties in multilayer systems with vertical wells to stratified reservoirs with multilateral horizontal wells. These techniques are the rate-normalized pressure analysis and the delta transient method. Both methods require a clear identification of radial flow regimes, which may not occur in a practical case. Additionally, we show that a computer-assisted history matching method based on the Nelder–Mead optimization algorithm can also be used to evaluate layer permeabilities and skin. This optimization method does not rely on the identification of any specific flow regime. Estimates obtained from the graphical methods were used as initial guesses for the assisted history matching. The proposed techniques are applied on a set of synthetic well-test cases, where pressure and layer flow-rate profiles are computed from an existing analytical model. Results show that all three methods are able to yield good estimates for layer properties if both early- and late-time radial flow are observed. In cases where only one of the radial flow regimes is identified, the NM algorithm provides the best results, showing that the assisted history matching improved the estimates provided by the graphical method.
• This work aims to determine individual layer permeabilities and skin factors.
• It considers a multilayer reservoir drilled by a multilateral horizontal well.
• The RNPA and DTM are extended to multilateral horizontal wells.
• The Nelder–Mead optimization algorithm was also employed to estimate layer properties.
• Good results were obtained in cases where both radial flow regimes are identified. [ABSTRACT FROM AUTHOR]
- Published: 2022
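The assisted-history-matching step can be sketched on a much simpler stand-in than the paper's multilayer model: match a single-layer line-source drawdown model to synthetic well-test data with Nelder–Mead, starting from a rough "graphical" estimate. All the physical constants and the true parameters below are assumed values for illustration only:

```python
# Toy history matching with Nelder-Mead: recover permeability k and skin
# from synthetic drawdown data, optimizing over x = [log10(k), skin].
import numpy as np
from scipy.optimize import minimize

# Assumed constants, SI units: rate, viscosity, thickness, porosity,
# total compressibility, wellbore radius.
q, mu, h, phi, ct, rw = 1e-3, 1e-3, 10.0, 0.2, 1e-9, 0.1

def drawdown(t, k, skin):
    """Late-time log approximation of the line-source solution with skin."""
    alpha = k / (phi * mu * ct)                       # hydraulic diffusivity
    return q * mu / (4 * np.pi * k * h) * (
        np.log(4 * alpha * t / rw**2) - 0.5772 + 2 * skin)

t = np.logspace(1, 5, 60)                             # 10 s to ~28 h
k_true, s_true = 1e-13, 2.0                           # ~100 mD, skin of 2
p_obs = drawdown(t, k_true, s_true)                   # noise-free "observations"

def objective(x):                                     # SSE misfit to be minimized
    return float(np.sum((drawdown(t, 10 ** x[0], x[1]) - p_obs) ** 2))

# Rough initial guess standing in for a graphical straight-line estimate
res = minimize(objective, x0=[-12.5, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-10, "maxiter": 2000})
k_hat, s_hat = 10 ** res.x[0], res.x[1]
```

Nelder–Mead needs no derivatives of the pressure model, which is why it pairs naturally with black-box reservoir simulators; the graphical estimates only need to land the simplex in the right basin.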
16. Vibration-based inverse algorithms for detection of delamination in composites.
- Author: Zhang, Zhifang, Shankar, Krishna, Ray, Tapabrata, Morozov, Evgeny V., and Tahtali, Murat
- Subjects: DELAMINATION of composite materials; ALGORITHMS; VIBRATION (Mechanics); STIFFNESS (Mechanics); FRACTURE mechanics; NUMERICAL analysis; MECHANICAL behavior of materials
- Abstract:
Delamination is a frequent and potentially serious form of damage that can occur in laminated polymer composites due to the poor inter-laminar fracture toughness of the matrix. Vibration-based detection methods employ changes caused by loss of stiffness in dynamic parameters such as frequencies to detect and assess damage. One of the challenges of using frequency shift for damage detection is that while the presence of damage is easily identified through a shift in measured frequency, the determination of the location and the severity of the damage is not easy to accomplish. To determine the location and severity of damage from measured changes in frequency, it is necessary to solve the inverse problem, which requires the solution of a set of non-linear simultaneous equations. In this paper, we examine three different inverse algorithms for solving the non-linear equations to predict the interface, lengthwise location and size of delamination: direct solution using a graphical method, an artificial neural network (ANN), and surrogate-based optimization. The three inverse algorithms have been validated using numerical data generated from the finite element model (FEM) of delaminated beams and measured frequencies from modal testing conducted on simply supported and cantilever carbon fiber reinforced beam specimens. Results show that all three algorithms can predict the delamination parameters accurately using the validation data directly generated from the FE model. However, when using experimental data from real beams, ANN does not fare as well as the other two methods, as it is more sensitive to measurement errors. Finally, the advantages and limitations of each method have been summarized to provide a useful guide for selecting inverse algorithms for vibration-based delamination detection. [Copyright Elsevier]
- Published: 2013
17. Algorithmic Method for Scraper Load-Time Optimization.
- Author: Marinelli, Marina and Lambropoulos, Sergios
- Subjects: CONSTRUCTION project management; CONSTRUCTION equipment; EARTHMOVING machinery; INDUSTRIAL productivity; MATHEMATICAL optimization; ALGORITHMS
- Abstract:
Scrapers have established an important position in the earthmoving field as they are independently capable of accomplishing an earthmoving operation. Given that loading a scraper to its capacity does not entail its maximum production, optimizing the scraper's loading time is an essential prerequisite for successful operations management. The relevant literature addresses the loading time optimization through a graphical method that is founded on the invalid assumption that the hauling time is independent of the load time. To correct this, a new algorithmic optimization method that incorporates the golden section search and the bisection algorithm is proposed. Comparison of the results derived from the proposed and the existing method demonstrates that the latter entails the systematic needless prolongation of the loading stage thus resulting in reduced hourly production and increased cost. Therefore, the proposed method achieves an improved modeling of scraper earthmoving operations and contributes toward a more efficient cost management. [ABSTRACT FROM AUTHOR]
- Published: 2013
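Once hauling time is allowed to depend on the load, selecting the load time becomes a one-dimensional optimization of hourly production, which golden-section search handles directly. The sketch below is an illustrative toy with invented loading and hauling curves, not the authors' equipment data or their full method (which also uses bisection):

```python
# Golden-section search for the scraper load time that maximizes hourly
# production, with a hauling time that grows with the load carried --
# the coupling the classical graphical method ignores.
import math

PHI = (math.sqrt(5) - 1) / 2          # inverse golden ratio, ~0.618

def load_volume(t_load):
    """Diminishing-returns loading curve (assumed), m^3 vs minutes."""
    return 20.0 * (1 - math.exp(-t_load / 0.6))

def cycle_time(t_load):
    haul = 2.0 + 0.05 * load_volume(t_load)   # heavier load -> slower haul (assumed)
    return t_load + haul + 1.5                # plus fixed dump/return time

def production(t_load):
    return 60.0 * load_volume(t_load) / cycle_time(t_load)   # m^3 per hour

def golden_section_max(f, a, b, tol=1e-6):
    """Maximize a unimodal f on [a, b]. Re-evaluates f each step for
    clarity; production code would cache f(c) and f(d)."""
    c, d = b - PHI * (b - a), a + PHI * (b - a)
    while b - a > tol:
        if f(c) > f(d):
            b, d = d, c                       # maximum lies in [a, d]
            c = b - PHI * (b - a)
        else:
            a, c = c, d                       # maximum lies in [c, b]
            d = a + PHI * (b - a)
    return (a + b) / 2

t_opt = golden_section_max(production, 0.1, 5.0)   # optimal load time, minutes
```

The optimum sits well before the loading curve flattens out, which mirrors the paper's point: loading to capacity is not the same as loading for maximum hourly production.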
18. PUMPS IN PARALLEL CONNECTION PROBLEM - AN ANALYTICAL APPROACH.
- Author: ALDEA, ALEXANDRU
- Subjects: PUMPING machinery; INVERSE functions; ALGORITHMS; POLYNOMIALS; APPLICATION software; MATHEMATICAL models
- Abstract:
The problem of pumps connected in parallel can easily be solved by the graphical method, but it becomes very time consuming when dealing with multiple scenarios. An analytical approach offers the possibility of implementing a software program that can significantly reduce this time and also add further features. The mathematical algorithm deals with a nonlinear equation system and with polynomial fitting functions. A particular issue addressed in this paper concerns the errors caused by the very small values of the flow rates. Since the algorithms assume that every physical quantity is expressed in SI units, the unitless flow-rate values are on the order of 10^-3 or smaller compared to other values (the pump head, for example). This in turn leads to very poor fitting polynomials, and the end results are far from the truth. The key to solving this particular issue resides in constructing the inverse functions of H(Q). Once the algorithms were sorted out, they were implemented in LabVIEW in order to create a software application. This final product offers both accurate calculations and a user-friendly interface. [ABSTRACT FROM AUTHOR]
- Published: 2012
19. Improved kinetic analysis of dynamic PET data with optimized HYPR-LR.
- Author: Floberg, John M., Mistretta, Charles A., Weichert, Jamey P., Hall, Lance T., Holden, James E., and Christian, Bradley T.
- Subjects: POSITRON emission tomography; MATHEMATICAL optimization; IMAGE reconstruction; ANGIOGRAPHY; MEDICAL imaging systems; SIGNAL-to-noise ratio; ALGORITHMS
- Abstract:
Purpose: Highly constrained backprojection-local reconstruction (HYPR-LR) has made a dramatic impact on magnetic resonance angiography (MRA) and shows promise for positron emission tomography (PET) because of the improvements in the signal-to-noise ratio (SNR) it provides to dynamic images. For PET in particular, HYPR-LR could improve kinetic analysis methods that are sensitive to noise. In this work, the authors closely examine the performance of HYPR-LR in the context of kinetic analysis, they develop an implementation of the algorithm that can be tailored to specific PET imaging tasks to minimize bias and maximize improvement in variance, and they provide a framework for validating the use of HYPR-LR processing for a particular imaging task. Methods: HYPR-LR can introduce errors into non-sparse PET studies that might bias kinetic parameter estimates. An implementation of HYPR-LR is proposed that uses multiple temporally summed composite images that are formed based on the kinetics of the tracer being studied (HYPR-LR-MC). The effects of HYPR-LR-MC and of HYPR-LR using a full composite formed with all the frames in the study (HYPR-LR-FC) on the kinetic analysis of Pittsburgh compound-B ([11C]-PIB) are studied. HYPR-LR processing is compared to spatial smoothing. HYPR-LR processing was evaluated using both simulated and human studies. Nondisplaceable binding potential (BPND) parametric images were generated from fifty noise realizations of the same numerical phantom and eight [11C]-PIB positive human scans before and after HYPR-LR processing or smoothing using the reference region Logan graphical method and receptor parametric mapping (RPM2). The bias and coefficient of variation in the frontal and parietal cortex in the simulated parametric images were calculated to evaluate the absolute performance of HYPR-LR processing.
Bias in the human data was evaluated by comparing parametric image BPND values averaged over large regions of interest (ROIs) to Logan estimates of the BPND from TACs averaged over the same ROIs. Variance was assessed qualitatively in the parametric images and semiquantitatively by studying the correlation between voxel BPND estimates from Logan analysis and RPM2. Results: Both the simulated and human data show that HYPR-LR-FC overestimates BPND values in regions of high [11C]-PIB uptake. HYPR-LR-MC virtually eliminates this bias. Both implementations of HYPR-LR reduce variance in the parametric images generated with both Logan analysis and RPM2, and HYPR-LR-FC provides a greater reduction in variance. This reduction in variance nearly eliminates the noise-dependent Logan bias. The variance reduction is greater for the Logan method, particularly for HYPR-LR-MC, and the variance in the resulting Logan images is comparable to that in the RPM2 images. HYPR-LR processing compares favorably with spatial smoothing, particularly when the data are analyzed with the Logan method, as it provides a reduction in variance with no loss of spatial resolution. Conclusions: HYPR-LR processing shows significant potential for reducing variance in parametric images, and can eliminate the noise-dependent Logan bias. HYPR-LR-FC processing provides the greatest reduction in variance but introduces a positive bias into the BPND of high-uptake border regions. The proposed method for forming HYPR composite images, HYPR-LR-MC, eliminates this bias at the cost of less variance reduction. [ABSTRACT FROM AUTHOR]
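The reference-region Logan method used in this study reduces kinetic analysis to a straight-line fit: plotting the integrated target curve divided by the instantaneous target curve against the same transform of the reference curve gives a late-time slope that approximates the distribution volume ratio (DVR), with BPND = DVR - 1. A minimal sketch on synthetic curves (not the authors' implementation; the k2' correction term is omitted, and `t_star_idx` marks the assumed start of the linear regime):

```python
import numpy as np

def logan_ref(t, ct, cref, t_star_idx):
    """Reference-region Logan plot: regress int(C)/C against int(Cref)/C.

    The slope approximates DVR; BPND = DVR - 1. The k2' correction
    term of the full formulation is omitted for simplicity.
    """
    # Cumulative trapezoidal integrals of both time-activity curves
    int_ct = np.concatenate(([0.0], np.cumsum(0.5 * (ct[1:] + ct[:-1]) * np.diff(t))))
    int_cref = np.concatenate(([0.0], np.cumsum(0.5 * (cref[1:] + cref[:-1]) * np.diff(t))))
    y = int_ct[t_star_idx:] / ct[t_star_idx:]
    x = int_cref[t_star_idx:] / ct[t_star_idx:]
    slope, _ = np.polyfit(x, y, 1)
    return slope - 1.0  # BPND estimate

# Synthetic example: target TAC is a scaled reference TAC, so DVR = 1.5
t = np.linspace(0.1, 90.0, 60)
cref = np.exp(-t / 40.0) * t   # arbitrary smooth reference curve
ct = 1.5 * cref                # exact ratio data: slope 1.5, BPND 0.5
print(round(logan_ref(t, ct, cref, 30), 3))
```

On real, noisy voxel data the same regression is what produces the noise-dependent Logan bias discussed above, since noise in C(t) enters both axes.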
- Published
- 2012
- Full Text
- View/download PDF
20. A new identification method of viscoelastic behavior: Application to the generalized Maxwell model
- Author
-
Renaud, Franck, Dion, Jean-Luc, Chevallier, Gaël, Tawfiq, Imad, and Lemaire, Rémi
- Subjects
- *
VISCOELASTICITY , *SYSTEM identification , *FRACTIONAL calculus , *TRANSFER functions , *NUMERICAL solutions to Maxwell equations , *ALGORITHMS - Abstract
Abstract: This paper focuses on the identification of the generalized Maxwell model (GMM). The transfer function of the GMM is formulated, along with its asymptotes. To compare methods for identifying the parameters of the GMM, a test transfer function and two quality indicators are defined. Three graphical methods are then described: the enclosing-curve method, the CRONE method and an original one. However, the results of the graphical methods alone are not accurate enough, so two recursive optimization processes are described to improve them. The first is based on an unconstrained nonlinear optimization algorithm; the second is original and allows the identified parameters to be constrained. This new process uses the asymptotes of the modulus and the phase of the transfer function of the GMM. The graphical method optimized with the new process is both accurate and fast. [Copyright © Elsevier]
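As a concrete illustration (a generic sketch, not the authors' identification code), the GMM transfer function and the asymptotes exploited by such methods can be evaluated directly:

```python
import numpy as np

def gmm_modulus(omega, g0, g, tau):
    """Complex dynamic modulus of a generalized Maxwell model:

        G*(w) = G0 + sum_k Gk * (i w tau_k) / (1 + i w tau_k)

    Low-frequency asymptote: G0. High-frequency asymptote: G0 + sum(Gk).
    """
    s = 1j * np.asarray(omega)[:, None] * np.asarray(tau)[None, :]
    return g0 + (np.asarray(g)[None, :] * s / (1.0 + s)).sum(axis=1)

# Hypothetical two-branch model: equilibrium modulus 1, branches 2 and 3
g0, g, tau = 1.0, [2.0, 3.0], [0.1, 1.0]
w = np.logspace(-4, 4, 9)
G = gmm_modulus(w, g0, g, tau)
print(abs(G[0]), abs(G[-1]))  # approaches 1.0 at low w, 6.0 at high w
```

Plotting |G*| and its phase over this grid reproduces the asymptotic plateaus that the graphical identification methods fit against measured curves.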
- Published
- 2011
- Full Text
- View/download PDF
21. Design and optimization of multipass heat exchangers
- Author
-
Ponce-Ortega, J.M., Serna-González, M., and Jiménez-Gutiérrez, A.
- Subjects
- *
ALGORITHMS , *HEAT exchangers , *GRAPHIC methods , *EXPERIMENTAL design - Abstract
Abstract: In this paper, a simple algorithm is developed for the design and economic optimization of multiple-pass 1–2 shell-and-tube heat exchangers in series. The design model is formulated using the FT design method and inequality constraints that ensure feasible and practical heat exchangers. Simple expressions are obtained for the minimum real (non-integer) number of 1–2 shells in series. A graphical method is also presented to develop insight into the nature of the optimization problem. The proposed algorithm enables engineers to design optimum multipass heat exchangers quickly and easily. It is also shown how the method can be applied to the optimal design of multipass process utility exchangers. [Copyright © Elsevier]
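For reference, the FT design method rests on the classical log-mean temperature difference correction factor for a 1–2 exchanger. A minimal sketch of the standard Bowman-form expression (not the authors' algorithm), with R and P the usual temperature effectiveness parameters:

```python
import math

def ft_1_2(R, P):
    """LMTD correction factor F_T for a 1-2 shell-and-tube exchanger
    (Bowman-form expression), valid for R != 1.

    R = (T1 - T2) / (t2 - t1), P = (t2 - t1) / (T1 - t1).
    """
    s = math.sqrt(R * R + 1.0)
    num = (s / (R - 1.0)) * math.log((1.0 - P) / (1.0 - P * R))
    den = math.log((2.0 - P * (R + 1.0 - s)) / (2.0 - P * (R + 1.0 + s)))
    return num / den

# Hypothetical duty: R = 2, P = 0.3 gives an acceptable F_T near 0.88
print(round(ft_1_2(2.0, 0.3), 3))
```

Values of FT below roughly 0.75–0.8 signal that more shells in series are needed, which is exactly the feasibility constraint the design model enforces.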
- Published
- 2008
- Full Text
- View/download PDF
22. Solving the Young–Laplace equation by a geometric method using curvature (Résolution de l'équation de Young–Laplace par une méthode géométrique utilisant la courbure)
- Author
-
Gentes, Mathieu, Rousseaux, Germain, Coullet, Pierre, and De Gennes, Pierre-Gilles
- Subjects
- *
GRAPHIC methods , *ALGORITHMS , *GEOMETRICAL drawing , *MATHEMATICS , *GRAPH theory - Abstract
Abstract: We revisit from a modern viewpoint a graphical method for solving the Young–Laplace equation, proposed by Thomson in 1886 and improved by Boys in 1893. This method, which relies on axisymmetry properties, was applied to pendant drops, drops on a horizontal plane and menisci. The various initial conditions necessitated a numerical implementation of Thomson's algorithm, particularly in order to obtain pendant drops with multiple bulges. A scaling law for the variation of the drop radii forming this rosary (string of drops) is presented. To cite this article: M. Gentes et al., C. R. Physique 6 (2005). [Copyright © Elsevier]
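A minimal sketch of the kind of stepping scheme such a numerical implementation involves (a generic Euler integration of the dimensionless Bashforth–Adams form of the axisymmetric Young–Laplace equation with unit apex curvature, not the authors' code; sign conventions for the Bond-number term `beta` vary):

```python
import math

def drop_profile(beta, ds=1e-3, s_max=3.0):
    """March along arclength s of an axisymmetric drop profile:

        dr/ds   = cos(phi)
        dz/ds   = sin(phi)
        dphi/ds = 2 + beta*z - sin(phi)/r

    starting at the apex r = z = phi = 0, where sin(phi)/r tends to
    dphi/ds and the curvature splits equally between the two directions.
    """
    r, z, phi, s = 0.0, 0.0, 0.0, 0.0
    pts = [(r, z)]
    while s < s_max:
        if r < 1e-9:
            dphi = 1.0 + 0.5 * beta * z  # apex limit
        else:
            dphi = 2.0 + beta * z - math.sin(phi) / r
        r += ds * math.cos(phi)
        z += ds * math.sin(phi)
        phi += ds * dphi
        s += ds
        pts.append((r, z))
    return pts

# Sanity check: beta = 0 (no gravity) traces a unit sphere,
# so at s = pi/2 the profile reaches its equator r = 1, z = 1.
r_end, z_end = drop_profile(0.0, s_max=math.pi / 2)[-1]
print(round(r_end, 2), round(z_end, 2))
```

Nonzero `beta` deforms the profile away from the sphere, and restarting the march with different apex conditions is what produces the multi-bulge pendant shapes described above.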
- Published
- 2005
- Full Text
- View/download PDF
23. An algorithm for detection of couch-beam intersection
- Author
-
Buckle, Andrew H.
- Subjects
- *
HOSPITAL radiological services , *MEDICAL radiology , *MEDICAL electronics , *ALGORITHMS - Abstract
Abstract: A graphical method of avoiding beam blocking by couch supports is described. The technique uses a simplification of the couch cross-section and matrix operations to project the beam into the plane normal to the longitudinal axis of the couch. The algorithm is described, along with the testing of a software implementation. The application of the software to 3D treatment planning is discussed. [Copyright © Elsevier]
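A minimal sketch of the projection idea (hypothetical geometry and axis labels, not the paper's implementation): drop the longitudinal component of the beam axis and run a 2D slab test against a rectangular simplification of the support cross-section:

```python
def beam_hits_support(source, direction, rect_min, rect_max):
    """Project a beam ray into the plane normal to the couch's
    longitudinal axis (taken here as the y-axis) and test intersection
    with a rectangle given by (x, z) corner bounds."""
    # Drop the longitudinal component: work in the (x, z) plane.
    p = (source[0], source[2])
    d = (direction[0], direction[2])
    t0, t1 = 0.0, float("inf")
    for axis in range(2):
        if abs(d[axis]) < 1e-12:
            # Ray parallel to this slab: must already lie inside it.
            if not (rect_min[axis] <= p[axis] <= rect_max[axis]):
                return False
        else:
            a = (rect_min[axis] - p[axis]) / d[axis]
            b = (rect_max[axis] - p[axis]) / d[axis]
            lo, hi = min(a, b), max(a, b)
            t0, t1 = max(t0, lo), min(t1, hi)
    return t0 <= t1

# Vertical beam from above; support rail spans x in [5, 10], z in [-20, -10]
print(beam_hits_support((7.0, 0.0, 50.0), (0.0, 0.0, -1.0),
                        (5.0, -20.0), (10.0, -10.0)))  # True: beam is blocked
```

Because the couch cross-section is constant along its length, this 2D test is equivalent to the full 3D intersection for beams projected into that normal plane.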
- Published
- 2005
- Full Text
- View/download PDF
24. The Bilipschitz criterion for dimension reduction mapping design.
- Author
-
Anderle, Markus, Hundley, Douglas R., and Kirby, Michael J.
- Subjects
- *
MAPS , *CARTOGRAPHY , *ALGORITHMS , *ACCOUNTING - Abstract
We present a graphical method for evaluating the quality of a feature extraction mapping. Based on the Bilipschitz criterion, this Bilipschitz Criterion Plot (BCP) can be used to evaluate dimension-reducing mappings for relative quality and to estimate the injectivity of the reduction map (as well as the associated reconstruction map). It can also be used to survey regions where the map is locally an expansion or a contraction. The plot is easy and fast to construct, and gives far more insight than any single summary value, such as the distance-preservation error. We demonstrate the value of such a plot when examining the quality of the Sammon map, Neuroscale, the autoassociative map, and a recent technique designed to optimize the BCP in a linear fashion, the adaptive secant basis algorithm. [ABSTRACT FROM AUTHOR]
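The quantity underlying such a plot is the ratio of mapped to original pairwise distances. A minimal sketch (not the authors' code) that computes these ratios, which a BCP-style plot would then display sorted or binned:

```python
import numpy as np

def bilipschitz_ratios(X, Y):
    """Pairwise distance ratios d(f(x), f(y)) / d(x, y) for a mapping
    taking rows of X to rows of Y. Ratios near 0 signal a loss of
    injectivity; large ratios signal local expansion."""
    n = len(X)
    ratios = []
    for i in range(n):
        for j in range(i + 1, n):
            dx = np.linalg.norm(X[i] - X[j])
            dy = np.linalg.norm(Y[i] - Y[j])
            if dx > 0:
                ratios.append(dy / dx)
    return np.array(ratios)

# Sanity check: an isometry (a rotation) keeps every ratio at exactly 1
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 2))
theta = 0.7
Rm = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
ratios = bilipschitz_ratios(X, X @ Rm.T)
print(ratios.min().round(6), ratios.max().round(6))
```

For a genuine dimension-reducing map the ratios spread out, and their extremes bound the Bilipschitz constants of the mapping on the sample.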
- Published
- 2002
- Full Text
- View/download PDF
25. An Evaluation of Multiple Objective Decision Support Weighting Techniques in Natural Resource Management.
- Author
-
Hajkowicz, Stefan A., McDonald, Geoff T., and Smith, Phil N.
- Subjects
- *
NATURAL resources management , *DECISION making , *ALGORITHMS - Abstract
Multiple objective decision support (MODS) is a structured framework for evaluating decision alternatives against multiple, and often conflicting, criteria. Its ability to handle complex trade-offs in a variety of quantitative and qualitative units gives it much potential in the field of natural resource management (NRM). A key component of MODS is the process used to obtain information from decision makers on the relative importance of evaluative criteria. Ranking algorithms then use this information to determine the relative value of each decision alternative. This paper explores how practising community-based NRM decision makers respond to five generic methods for weighting the criteria. It presents a study in which 55 decision makers across five regions in Queensland, Australia, applied MODS to evaluate environmental projects seeking funding under the Australian Natural Heritage Trust. The weighting methods applied were fixed point scoring, rating, ordinal ranking, a graphical method and paired comparisons. Decision makers evaluated each weighting method in terms of ease of use and how much it helped clarify the decision problem. Results show that decision makers felt uncomfortable applying fixed point scoring and generally preferred to express their preferences through ordinal ranking. This has implications for the types of ranking algorithms that can be applied to evaluate the decision alternatives. [ABSTRACT FROM AUTHOR]
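One reason ordinal ranking still supports downstream scoring is that rank positions convert directly into criterion weights; a common choice is the rank-sum rule (a generic sketch, not the study's procedure):

```python
def rank_sum_weights(ranks):
    """Convert ordinal ranks (1 = most important) into normalized
    weights via the rank-sum rule: w_i = (n - r_i + 1) / sum_j (n - r_j + 1)."""
    n = len(ranks)
    scores = [n - r + 1 for r in ranks]
    total = sum(scores)
    return [s / total for s in scores]

def weighted_scores(weights, alternatives):
    """Score each alternative as the weighted sum of its criterion values."""
    return [sum(w * v for w, v in zip(weights, alt)) for alt in alternatives]

# Hypothetical example: three criteria ranked 1, 2, 3 -> weights 3/6, 2/6, 1/6
w = rank_sum_weights([1, 2, 3])
print(w)
print(weighted_scores(w, [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]))
```

Other rank-to-weight rules (rank reciprocal, rank-order centroid) exist and give different spreads; the choice is part of what the paper's findings bear on.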
- Published
- 2000
- Full Text
- View/download PDF
26. Confidence Curves in Nonlinear Regression.
- Author
-
Cook, R. Dennis and Weisberg, Sanford
- Subjects
- *
REGRESSION analysis , *NONLINEAR statistical models , *ESTIMATION theory , *GRAPHIC methods , *ALGORITHMS - Abstract
Standard Wald confidence regions for parameters in a normal nonlinear regression model often fail to capture accurately the uncertainty of estimation as reflected by the corresponding profile log-likelihood. We present a graphical method, along with a stable computational algorithm, for inference on scalar parameters in a nonlinear regression model. [ABSTRACT FROM AUTHOR]
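A minimal sketch of a likelihood-based confidence curve for a scalar parameter (a generic construction on simulated data, not the authors' algorithm): plot the signed root of the likelihood-ratio statistic against the parameter, and read off intervals where the curve crosses normal quantiles.

```python
import numpy as np

# Simulated nonlinear model y = exp(-theta * x) + error, true theta = 0.7
rng = np.random.default_rng(1)
x = np.linspace(0.0, 4.0, 40)
y = np.exp(-0.7 * x) + 0.02 * rng.standard_normal(x.size)

def rss(theta):
    r = y - np.exp(-theta * x)
    return float(r @ r)

# Grid evaluation stands in for a proper optimizer in this sketch
grid = np.linspace(0.4, 1.0, 601)
rss_grid = np.array([rss(t) for t in grid])
theta_hat = grid[rss_grid.argmin()]
n = x.size

# Signed root of the likelihood-ratio statistic for each grid value;
# unlike a Wald interval, this follows the asymmetry of the likelihood.
tau = np.sign(grid - theta_hat) * np.sqrt(n * np.log(rss_grid / rss_grid.min()))

# Pairs (theta, |tau|) trace the confidence curve; cutting at a normal
# quantile (e.g. 1.96) gives an approximate 95% likelihood interval.
print(round(theta_hat, 2))
```

The contrast with the Wald region is visible whenever the curve is asymmetric about the estimate, which is common in nonlinear models.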
- Published
- 1990
- Full Text
- View/download PDF
27. A simple graphical approach for understanding probabilistic inference in Bayesian networks
- Author
-
Butz, C.J., Hua, S., Chen, J., and Yao, H.
- Subjects
- *
GRAPHIC methods , *DISTRIBUTION (Probability theory) , *COMPUTER networks , *BAYESIAN analysis , *ACYCLIC model , *ALGORITHMS , *EQUATIONS , *INFORMATION technology - Abstract
Abstract: We present a simple graphical method for understanding exact probabilistic inference in discrete Bayesian networks (BNs). A conditional probability table (conditional) is depicted as a directed acyclic graph involving one or more black vertices and zero or more white vertices. The probability information propagated in a network can then be graphically illustrated by introducing the black variable elimination (BVE) algorithm. We prove the correctness of BVE and establish its polynomial time complexity. Our method possesses two salient characteristics. First, this purely graphical approach can be used as a pedagogical tool to introduce BN inference to beginners. This is important, as it is commonly stated that newcomers have difficulty learning BN inference owing to intricate mathematical equations and notation. Second, BVE provides a more precise description of BN inference than the state-of-the-art discrete BN inference technique LAZY-AR, which propagates potentials that are not well-defined probability distributions; our approach involves only conditionals, a special case of potentials. [Copyright © Elsevier]
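The variable elimination that BVE formalizes graphically can be sketched in a few lines (a generic sum-product elimination over factors, not the authors' BVE implementation):

```python
from itertools import product

def eliminate(factors, var, domains):
    """Sum a variable out of the product of the factors that mention it.
    A factor is (scope_tuple, {assignment_tuple: prob})."""
    touching = [f for f in factors if var in f[0]]
    rest = [f for f in factors if var not in f[0]]
    scope = tuple(sorted({v for f in touching for v in f[0] if v != var}))
    table = {}
    for assign in product(*(domains[v] for v in scope)):
        ctx = dict(zip(scope, assign))
        total = 0.0
        for val in domains[var]:
            ctx[var] = val
            p = 1.0
            for sc, tab in touching:
                p *= tab[tuple(ctx[v] for v in sc)]
            total += p
        table[assign] = total
    return rest + [(scope, table)]

# Chain A -> B with P(A=1)=0.3, P(B=1|A=1)=0.9, P(B=1|A=0)=0.2
domains = {"A": [0, 1], "B": [0, 1]}
pa = (("A",), {(0,): 0.7, (1,): 0.3})
pba = (("A", "B"), {(0, 0): 0.8, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.9})
(factor,) = eliminate([pa, pba], "A", domains)
print(factor)  # marginal on B: P(B=1) = 0.7*0.2 + 0.3*0.9 = 0.41
```

BVE's point is that when every factor is a conditional, each elimination step like this one again yields a well-defined distribution, which is what the black/white vertex colouring tracks graphically.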
- Published
- 2009
- Full Text
- View/download PDF
28. Maximal prime subgraph decomposition of Bayesian networks: A relational database perspective
- Author
-
Wu, Dan
- Subjects
- *
BAYESIAN analysis , *RELATIONAL databases , *ALGORITHMS , *DISTRIBUTION (Probability theory) - Abstract
Abstract: A maximal prime subgraph decomposition junction tree (MPD-JT) is a useful computational structure that facilitates lazy propagation in Bayesian networks (BNs). A graphical method was proposed to construct an MPD-JT from a BN. In this paper, a new method is presented from a relational database (RDB) perspective which sheds light on the semantic meaning of the previously proposed graphical algorithm. [Copyright © Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
29. GRAPHICAL FUZZY INFERENCE METHOD IN SPARSE RULE BASE.
- Author
-
DOSOFTEI, Constantin - Catalin and MASTACAN, Lucian
- Subjects
- *
FUZZY systems , *SYSTEM analysis , *ARTIFICIAL intelligence , *SIMULATION methods & models , *GRAPHIC methods , *ALGORITHMS , *LOGIC machines , *MACHINE theory - Abstract
One of the most important and intriguing goals of computational intelligence is to construct methods, models and algorithms that cope with very complex systems, often lacking known analytical models. The aim of this paper is to obtain a graphical method for fuzzy inference in a sparse rule base. The rules have two inputs and one consequent. For two inputs with triangular-type membership functions, the graphical method constructs a pyramid. [ABSTRACT FROM AUTHOR]
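The pyramid arises from taking a triangular membership on each input and combining them with a min operation; a minimal sketch (generic Mamdani-style AND, not the paper's inference method):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def firing_strength(x1, x2):
    """Rule firing strength for two inputs, each fuzzified with a
    triangular set; the min combination traces a pyramid surface
    over the input plane, peaking where both memberships peak."""
    return min(tri(x1, 0.0, 5.0, 10.0), tri(x2, 0.0, 5.0, 10.0))

print(firing_strength(5.0, 5.0))  # apex of the pyramid: 1.0
print(firing_strength(2.5, 7.5))  # on a face: 0.5
```

In a sparse rule base the interesting case is an input falling between rules, where no pyramid is directly activated and an interpolation step is needed.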
- Published
- 2009