231 results for "Monte-Carlo methods"
Search Results
2. Multivariate simulation‐based forecasting for intraday power markets: Modeling cross‐product price effects.
- Author
-
Hirsch, Simon and Ziel, Florian
- Subjects
ELECTRICITY markets, MARGINAL distributions, RENEWABLE energy sources, ELECTRICITY pricing, MARKET design & structure (Economics), PREDICTION markets - Abstract
Intraday electricity markets play an increasingly important role in balancing the intermittent generation of renewable energy resources, which creates a need for accurate probabilistic price forecasts. However, research to date has focused on univariate approaches, while in many European intraday electricity markets all delivery periods are traded in parallel. Thus, the dependency structure between different traded products and the corresponding cross-product effects cannot be ignored. We aim to fill this gap in the literature by using copulas to model the high-dimensional intraday price return vector. We model the marginal distribution as a zero-inflated Johnson's $S_U$ distribution with location, scale, and shape parameters that depend on market and fundamental data. The dependence structure is modeled using copulas, accounting for the particular market structure of the intraday electricity market, such as overlapping but independent trading sessions for different delivery days, and allowing the dependence parameter to be time-varying. We validate our approach in a simulation study for the German intraday electricity market and find that modeling the dependence structure improves the forecasting performance. Additionally, we shed light on the impact of the single intraday coupling on the trading activity and price distribution and interpret our results in light of the market efficiency hypothesis. The approach is directly applicable to other European electricity markets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
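The copula-based simulation idea in result 2 can be sketched as follows. Everything here is a hypothetical stand-in: a Gaussian copula, a fixed 3-product correlation matrix, and made-up Johnson's S_U marginal parameters (the paper fits zero-inflated marginals to market and fundamental data and lets the dependence vary in time):

```python
import numpy as np
from scipy.stats import johnsonsu, norm

rng = np.random.default_rng(0)

# Hypothetical setup: 3 delivery products, a fixed Gaussian-copula
# correlation matrix, and made-up Johnson's S_U marginal parameters.
corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.6],
                 [0.3, 0.6, 1.0]])
marginals = [johnsonsu(a=0.1, b=1.5, loc=0.0, scale=2.0) for _ in range(3)]

def simulate_returns(n_paths):
    # Gaussian copula: correlated normals -> uniforms -> marginal quantiles.
    z = rng.multivariate_normal(np.zeros(3), corr, size=n_paths)
    u = norm.cdf(z)
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

paths = simulate_returns(10_000)
print(np.corrcoef(paths, rowvar=False).round(2))
```

Because the marginal quantile transform is monotone, the rank dependence encoded by the copula survives into the simulated return vectors.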
3. Stochastic evaluation of quasi-zero stiffness magnetic spring using a reluctance-corrected analytical model.
- Author
-
Jung, Jaehwan, Yoon, Kyung-Taek, and Choi, Young-Man
- Subjects
MAGNETIC suspension, MAGNETISM, PERMANENT magnets, MONTE Carlo method, MAGNETS - Abstract
In state-of-the-art high-precision motion systems that utilize magnetic levitation, quasi-zero-stiffness gravity compensation is essential for improving dynamic performance, eliminating position dependency, and thus simplifying controller design. Special configurations of permanent magnets, such as Halbach magnet arrays, can realize magnetic levitation in the form of a magnetic spring. However, unlike conventional permanent magnet machines with large stiffness, the quasi-zero-stiffness magnetic spring requires higher modeling accuracy for force estimation, which is affected by the nonlinear reluctance effect of permanent magnets. In this study, we propose an accurate magnetic modeling method for the quasi-zero-stiffness magnetic spring. By correcting for the reluctance effect of the magnets, the proposed magnetic model achieves superior accuracy in estimating levitation force and stiffness compared with the conventional surface current model. Using the reluctance-corrected magnetic model, a tolerance analysis was performed to identify the dominant geometric parameters affecting the uncertainty of the Halbach array magnetic spring performance. A Monte-Carlo simulation was used to estimate the overall tolerance of magnetic forces and stiffness. The experimental results fall within the predicted tolerances. • Accurate analytical model for a Halbach array linear magnetic spring (HMS) considering the magnets' reluctance. • The model accurately estimates the level of quasi-zero stiffness of the HMS. • Performance variations were quantitatively estimated and the dominant tolerances identified. • Overall uncertainties were evaluated using the Monte-Carlo method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
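The Monte-Carlo tolerance analysis in result 3 reduces to a generic pattern: perturb geometric and material parameters by their manufacturing tolerances and push the samples through a deterministic force model. The force model and all nominal values below are hypothetical stand-ins for the paper's reluctance-corrected analytical model:

```python
import numpy as np

rng = np.random.default_rng(1)

def levitation_force(width, height, remanence):
    # Stand-in for the paper's reluctance-corrected analytical model:
    # any deterministic force model f(geometry, material) works here.
    return 120.0 * remanence * width * height / (width + height)

# Hypothetical nominal dimensions and manufacturing tolerances (std. dev.).
n = 50_000
width  = rng.normal(10.0, 0.05, n)   # mm
height = rng.normal(8.0, 0.05, n)    # mm
br     = rng.normal(1.3, 0.01, n)    # T, magnet remanence

forces = levitation_force(width, height, br)
print(f"force = {forces.mean():.1f} +/- {3 * forces.std():.1f} (3-sigma band)")
```

Sensitivity of the output spread to each input's tolerance identifies the dominant parameters, which is what the paper reports for the Halbach array spring.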
4. Statistical simulations with LR random fuzzy numbers.
- Author
-
Parchami, Abbas, Grzegorzewski, Przemyslaw, and Romaniuk, Maciej
- Subjects
FUZZY numbers, RANDOM numbers, MEMBERSHIP functions (Fuzzy logic), PROBABILITY density function, FUZZY sets, RANDOM graphs - Abstract
Computer simulations are a powerful tool in many fields of research. This also applies to the broadly understood analysis of experimental data, which are frequently burdened with multiple imperfections. Often the underlying imprecision or vagueness can be suitably described in terms of fuzzy numbers which enable also the capture of subjectivity. On the other hand, due to the random nature of the experimental data, the tools for their description must take into account their statistical nature. In this way, we come to random fuzzy numbers that model fuzzy data and are also solidly formalized within the probabilistic setting. In this contribution, we introduce the so-called LR random fuzzy numbers that can be used in various Monte-Carlo simulations on fuzzy data. The proposed method of generating fuzzy numbers with membership functions given by probability densities is both simple and rich, well-grounded mathematically, and has a high application potential. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. CVaR-Based Formulation for Stochastic Extended Vertical Second-Order-Cone Linear Complementarity Problems and Applications in Optimal Power Flow in Networks
- Author
-
Sun, Guo and Ren, Jianfeng; Qian, Zhihong, Jabbar, M.A., Cheung, Simon K. S., and Li, Xiaolong, editors
- Published
- 2023
- Full Text
- View/download PDF
6. Monte-Carlo modelling of double parton scattering
- Author
-
Cabouat, Baptiste, Forshaw, Jeffrey, and Seymour, Michael
- Subjects
539.7 ,Phenomenological Model ,Double Parton Scattering ,Quantum Chromodynamics ,Monte-Carlo Methods - Abstract
The aim of this thesis is to present a new Monte-Carlo simulation of double parton scattering (DPS) named dShower. DPS is the process in which two separate parton-parton interactions happen in a single proton-proton collision; it constitutes an important background to numerous processes of interest such as diboson pair production or four-jet production. Accurate modelling of DPS requires accounting for the correlations between partons belonging to the same proton. In particular, it is necessary to take into account the fact that two partons may have a common origin in a single parton due to the so-called "one-to-two" perturbative splitting mechanism. Including this splitting effect is a cumbersome task since it leads to potential double counting with single parton scattering (SPS). Indeed, a DPS process where a one-to-two splitting happens in each proton can be seen as a loop correction to SPS. The dShower simulation introduced in this thesis allows the user to generate parton-level DPS events. The two hard scatters are sampled according to the full DPS cross section and are evolved simultaneously with a parton shower. In the algorithm, the one-to-two splitting mechanism is included in a consistent manner. DPS and SPS events can also be combined in the simulation without double counting. Some numerical results for same-sign WW and ZZ pair production are presented in the thesis.
- Published
- 2021
7. Projecting the socio-economic impact of a big science center: the world's largest particle accelerator at CERN.
- Author
-
Bastianin, Andrea, Del Bo, Chiara F., Florio, Massimo, and Giffoni, Francesco
- Subjects
PARTICLE accelerators, LARGE Hadron Collider, MONTE Carlo method, NET present value, EXTERNALITIES - Abstract
Public investment in Big Science generates social benefits that can ultimately support economic growth. This paper implements a model for the social Cost – Benefit Analysis (CBA) of Big Science and relies on Monte Carlo methods to quantify the uncertainty of long-term projections. We evaluate social costs and benefits of the High Luminosity upgrade of the Large Hadron Collider (HL-LHC) at CERN up to 2038. Monte Carlo simulations show that there is a 94% chance to observe a positive net present value for society. The attractiveness of CERN for Early Stage Researchers and technological spillovers for collaborating firms are key for a positive CBA result. Cultural effects, especially those related to onsite visitors, also contribute to generating societal benefits. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
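The kind of Monte-Carlo cost-benefit calculation in result 7 can be sketched like this, with entirely hypothetical benefit and cost distributions (the study builds its distributions from detailed forecasts of technological spillovers, training effects, and visitor numbers, out to 2038):

```python
import numpy as np

rng = np.random.default_rng(2)
n, years, rate = 100_000, 15, 0.03

# Hypothetical annual benefit and cost distributions (currency units arbitrary).
benefits = rng.lognormal(mean=4.0, sigma=0.35, size=(n, years))
costs    = rng.normal(loc=45.0, scale=5.0, size=(n, years))

# Discounted net present value per scenario, then the chance it is positive.
discount = (1.0 + rate) ** -np.arange(years)
npv = ((benefits - costs) * discount).sum(axis=1)
print(f"P(NPV > 0) = {(npv > 0).mean():.1%}")
```

The study's headline "94% chance of a positive net present value" is exactly this tail probability, computed over its fitted input distributions.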
8. Testing uniformity based on negative cumulative extropy.
- Author
-
Alizadeh Noughabi, Hadi
- Subjects
UNIFORMITY, BETA distribution, ASYMPTOTIC distribution, DISTRIBUTION (Probability theory) - Abstract
Recently, extropy has been widely used in statistical procedures as an alternative measure of uncertainty. In this article, we present some properties of the negative cumulative residual extropy and then propose a test for uniformity. An approximation of the distribution of the test statistic is derived. The mean, variance, and asymptotic null distribution of the test statistic are presented. Percentage points and power against seven alternatives are reported. The results of a simulation study show that the test is competitive in terms of power. Moreover, the proposed statistic is easy to compute and can be approximated by the beta distribution. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. Optimal reinsurance-investment with loss aversion under rough Heston model.
- Author
-
Ma, Jingtang, Lu, Zhengyang, and Chen, Dengsheng
- Subjects
REINSURANCE, LOSS aversion, STOCHASTIC models, FINANCIAL markets, PRICES, INSURANCE companies - Abstract
The paper investigates optimal reinsurance-investment strategies under the assumption that insurers can purchase proportional reinsurance contracts and invest their wealth in a financial market consisting of one risk-free asset and one risky asset whose price process obeys the rough Heston model. The problem is formulated as a utility maximization problem with a minimum guarantee under an S-shaped utility. Since the rough Heston model is non-Markovian and not a semimartingale, the utility maximization problem cannot be solved by the classical dynamic programming principle and related approaches. This paper uses semimartingale approximation techniques to approximate the utility maximization problem and proves rates of convergence for the optimal strategies. The approximate problem is a classical stochastic control problem under multi-factor stochastic volatility models. As the approximate control problem still cannot be solved analytically, a dual-control Monte-Carlo method is developed to solve it. Numerical examples and implementations are provided. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
10. Monte-Carlo Evaluation of Residential Energy System Morphologies Applying Device Agnostic Energy Management
- Author
-
Stefan Arens, Sunke Schluters, Benedikt Hanke, Karsten Von Maydell, and Carsten Agert
- Subjects
Energy management, Monte-Carlo methods, scenario analysis, systems modelling, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
Decarbonization requires new energy system components, such as electric vehicles and heat pumps, to mitigate fossil fuel dependency, forming a sector-integrated energy system. Energy management is a promising approach to integrating these devices more efficiently by orchestrating their consumption and generation. This study investigates the advantage of an advanced energy management algorithm applied to varying energy system scenarios. The algorithm is based on economic principles, and the system topology is represented by a rooted tree: grid elements form parent nodes, which act as auctioneers, and devices act according to type-specific demand and supply functions. This algorithm is compared to an approach in which devices are not coordinated, at a system scale of six households. To account for different characteristics of the energy system, the scenarios are defined according to a morphological analysis and analysed by means of Monte-Carlo simulation; they vary the PV generation, heating technology, and building insulation. It is shown that the algorithm reduces peak loads across all scenarios by around 15 kW. Other key performance indicators, such as own consumption and self-sufficiency, depend on the scenario, although the algorithm outperforms the reference in each one, achieving an increase of at least 13 percentage points in own consumption and 22 percentage points in self-sufficiency.
- Published
- 2022
- Full Text
- View/download PDF
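The auction mechanism in result 10, where parent nodes act as auctioneers and devices bid type-specific demand and supply functions, can be illustrated with a single clearing node and hypothetical linear bid curves:

```python
import numpy as np

# Hypothetical linear bid curves, quantity = a + b * price, truncated at zero.
# Devices submit these to a parent "auctioneer" node, which clears the market
# at the price where aggregate demand meets aggregate supply.
def clearing_price(demands, supplies, prices):
    total_demand = sum(np.maximum(a + b * prices, 0.0) for a, b in demands)
    total_supply = sum(np.maximum(a + b * prices, 0.0) for a, b in supplies)
    i = np.argmin(np.abs(total_demand - total_supply))
    return prices[i], total_demand[i]

prices = np.linspace(0.0, 1.0, 1001)
demands  = [(5.0, -4.0), (3.0, -2.0)]   # consumption falls as price rises
supplies = [(0.0, 6.0), (1.0, 2.0)]     # generation rises with price
p, q = clearing_price(demands, supplies, prices)
print(f"clearing price {p:.2f}, traded quantity {q:.1f}")
```

In the paper's rooted tree the same clearing step happens at each grid node, with child results aggregated upward; here a single node suffices to show the idea.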
11. Novel use of the Monte-Carlo methods to visualize singularity configurations in serial manipulators
- Author
-
M.I.M. Abo Elnasr, Hussein M Bahaa, and Ossama Mokhiamar
- Subjects
singularity analysis, kinematic modeling, monte-carlo methods, 6 dof serial robotic arm, forward kinematics, Mechanical engineering and machinery, TJ1-1570, Mechanics of engineering. Applied mechanics, TA349-359 - Abstract
This paper analyses the problem of kinematic singularity in 6 DOF serial robots by extending the use of Monte-Carlo numerical methods to visualize singularity configurations. To achieve this goal, first, the forward kinematics and D-H parameters are derived for the manipulator. Second, the derived equations are used to generate and visualize a workspace that gives a good intuition of the manipulator's motion shape. Third, the Jacobian matrix is computed using graphical methods, aiming to locate positions that cause singularity. Finally, the data obtained are processed in order to visualize the singularity and to design a singularity-free trajectory. MATLAB's Robotics Toolbox, Symbolic Toolbox, and Curve Fitting Toolbox are used in the calculations. Surface and contour plots of the determinant of the Jacobian matrix lead to the design of a singularity-free manipulator trajectory and show the parameters that affect the manipulator's singularity and its behavior in the workspace.
- Published
- 2021
- Full Text
- View/download PDF
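The Monte-Carlo singularity-visualization idea in result 11 can be sketched on a planar 3R arm (the paper works with a 6 DOF arm via D-H parameters; the link lengths and the near-singularity threshold below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
LINKS = np.array([1.0, 0.8, 0.5])    # hypothetical planar 3R link lengths

def jacobian(q):
    # Analytical Jacobian of a planar 3R arm, rows = (x, y, phi), cols = joints.
    s = np.cumsum(q)                 # absolute link angles
    j = np.zeros((3, 3))
    for k in range(3):
        j[0, k] = -np.sum(LINKS[k:] * np.sin(s[k:]))
        j[1, k] =  np.sum(LINKS[k:] * np.cos(s[k:]))
        j[2, k] = 1.0
    return j

# Monte-Carlo sweep of joint space; small |det J| flags near-singular poses.
q_samples = rng.uniform(-np.pi, np.pi, size=(20_000, 3))
dets = np.array([np.linalg.det(jacobian(q)) for q in q_samples])
near_singular = q_samples[np.abs(dets) < 0.05]
print(f"{len(near_singular)} of {len(q_samples)} samples are near-singular")
```

Plotting the flagged joint configurations (or the determinant over a workspace slice) reproduces the kind of singularity maps the paper uses for trajectory design.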
12. A fast algorithm for simulation of rough volatility models.
- Author
-
Ma, Jingtang and Wu, Haofei
- Subjects
EXPONENTIAL sums, SINGULAR integrals, APPROXIMATION algorithms, ALGORITHMS, STOCHASTIC integrals - Abstract
A rough volatility model contains a stochastic Volterra integral with a weakly singular kernel. The classical Euler-Maruyama algorithm is not very efficient for simulating this kind of model because one needs to keep records of all the past path-values and thus the computational complexity is too large. This paper develops a fast two-step iteration algorithm using an approximation of the weakly singular kernel with a sum of exponential functions. Compared to the Euler-Maruyama algorithm, the complexity of the fast algorithm is reduced from O(N²) to O(N log N) or O(N log² N) for simulating one path, where N is the number of time steps. Further, the fast algorithm is developed to simulate rough Heston models with (or without) regime switching, and multi-factor approximation algorithms are also studied and compared. The convergence rates of the Euler-Maruyama algorithm and the fast algorithm are proved. A number of numerical examples are carried out to confirm the high efficiency of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
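The core trick in result 12, replacing the weakly singular kernel by a sum of exponentials so the Volterra convolution can be updated recursively instead of re-summing the whole history, can be sketched as follows; the crude least-squares fit and node choice below are illustrative rather than the paper's construction:

```python
import numpy as np

# Fit the weakly singular kernel K(t) = t**(H - 1/2) by a small sum of
# exponentials on a grid (a crude least-squares fit; the paper constructs
# nodes and weights with provable accuracy).
H = 0.1
nodes = np.geomspace(0.1, 200.0, 12)            # hypothetical decay rates
t_fit = np.linspace(1e-3, 1.0, 400)
A = np.exp(-np.outer(t_fit, nodes))
w, *_ = np.linalg.lstsq(A, t_fit ** (H - 0.5), rcond=None)

def fast_convolution(f_vals, dt):
    # Evaluates int_0^t K(t - s) f(s) ds in O(N * n_exp): each exponential
    # mode carries its own state and is updated in O(1) per time step,
    # instead of re-summing the whole history (the O(N^2) Euler approach).
    h = np.zeros_like(nodes)
    out = np.empty(len(f_vals))
    for i, f in enumerate(f_vals):
        h = np.exp(-nodes * dt) * (h + w * f * dt)
        out[i] = h.sum()
    return out

result = fast_convolution(np.ones(1000), dt=1e-3)
```

The exponential modes turn the non-Markovian convolution into a small Markovian state vector, which is why the per-path cost drops from quadratic to near-linear in N.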
13. Efficient ensemble stochastic algorithms for agent-based models with spatial predator–prey dynamics.
- Author
-
Albi, Giacomo, Chignola, Roberto, and Ferrarese, Federica
- Subjects
PREDATION, LOTKA-Volterra equations, ALGORITHMS, SAMPLE size (Statistics), POPULATION dynamics, OSCILLATIONS - Abstract
Experiments in predator–prey systems show the emergence of long-term cycles. Deterministic models typically fail to capture these behaviors, which emerge from the microscopic interplay of individual-based dynamics and stochastic effects. However, simulating stochastic individual-based models can be extremely demanding, especially when the sample size is large. Hence, we propose an alternative simulation approach whose computational cost is lower than that of classic stochastic algorithms. First, we describe the agent-based model with predator–prey dynamics and its mean-field approximation. Then, we provide a consistency result for the novel stochastic algorithm at the microscopic and mesoscopic scales. Finally, we perform different numerical experiments to test the efficiency of the proposed algorithm, focusing also on the different nature of the oscillations in mean-field versus stochastic simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
14. Node Importance Analysis of a Gas Transmission Network with Evaluation of a New Infrastructure by ProGasNet
- Author
-
Praks, Pavel and Kopustinskas, Vytis; Luiijf, Eric, Žutautaitė, Inga, and Hämmerli, Bernhard M., editors
- Published
- 2019
- Full Text
- View/download PDF
15. Anisotropy of the Runaway Electron Generation Process in Strongly Inhomogeneous Electric Fields.
- Author
-
Mamontov, Yuriy I., Zubarev, Nikolay M., and Uimanov, Igor V.
- Subjects
ELECTRIC fields, ELECTRIC field effects, ENERGY dissipation, ELECTRONS, ELECTRON emission, ANISOTROPY, ELECTRON scattering, ELECTRON field emission - Abstract
An investigation of runaway electron (RE) kinetics under the influence of a strongly inhomogeneous electric field was carried out by means of 2-D analytical and 2-D numerical Monte–Carlo approaches. Several shapes of cathodes generating an inhomogeneous electric field were studied. Within the developed 2-D analytical model, blade-shaped and cone-shaped cathodes were considered. The voltage values which allow electrons to scatter at large angles close to π/2 near a blade-shaped cathode were determined. Also, it was shown that the RE scatter angles are larger for the blade (in comparison with the ones for the cone) due to the electric field defocusing effect. Within the framework of the numerical 2-D Monte–Carlo model, a cathode was assumed to have a parabolic shape. Inapplicability of the classical runaway criterion in strongly inhomogeneous electric fields was demonstrated. Also, it was shown that the cathode surface domain of electron emission and an initial direction of electron motion have a strong influence on their energy balance and, hence, on the probability for electrons to transit into the continuous accelerating regime. Energy losses of REs were estimated. A correlation between the full-width at half-maximum of the RE energy spectrum and an inelastic energy loss value was proposed. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
16. Monte-Carlo Sampling Applied to Multiple Instance Learning for Histological Image Classification
- Author
-
Combalia, Marc and Vilaplana, Verónica; Stoyanov, Danail, Taylor, Zeike, Carneiro, Gustavo, Syeda-Mahmood, Tanveer, Martel, Anne, Maier-Hein, Lena, Tavares, João Manuel R.S., Bradley, Andrew, Papa, João Paulo, Belagiannis, Vasileios, Nascimento, Jacinto C., Lu, Zhi, Conjeti, Sailesh, Moradi, Mehdi, Greenspan, Hayit, and Madabhushi, Anant, editors
- Published
- 2018
- Full Text
- View/download PDF
17. Closed-form Approximations in Multi-asset Market Making.
- Author
-
Bergault, Philippe, Evangelista, David, Guéant, Olivier, and Vieira, Douglas
- Subjects
REINFORCEMENT learning, MACHINE learning, STOCHASTIC control theory, MARKETING models, HEURISTIC - Abstract
A large proportion of market making models derive from the seminal model of Avellaneda and Stoikov. The numerical approximation of the value function and the optimal quotes in these models remains a challenge when the number of assets is large. In this article, we propose closed-form approximations for the value functions of many multi-asset extensions of the Avellaneda–Stoikov model. These approximations or proxies can be used (i) as heuristic evaluation functions, (ii) as initial value functions in reinforcement learning algorithms, and/or (iii) directly to design quoting strategies through a greedy approach. Regarding the latter, our results lead to new and easily interpretable closed-form approximations for the optimal quotes, both in the finite-horizon case and in the asymptotic (ergodic) regime. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
18. Higher-order time integration of Coulomb collisions in a plasma using Langevin equations
- Author
-
Ricketson, L. [Univ. of California, Los Angeles, CA (United States). Dept. of Mathematics]
- Published
- 2013
- Full Text
- View/download PDF
19. Least-squares Monte-Carlo methods for optimal stopping investment under CEV models.
- Author
-
Ma, Jingtang, Lu, Zhengyang, Li, Wenyuan, and Xing, Jie
- Subjects
EXPECTED utility, CONTROL theory (Engineering), ELECTRIC utilities, INVESTMENTS, COMPUTER simulation - Abstract
The optimal stopping investment is a kind of mixed expected utility maximization problem with an optimal stopping time. The aim of this paper is to develop least-squares Monte-Carlo methods to solve the optimal stopping investment under the constant elasticity of variance (CEV) model. Such a problem has no closed-form solutions for the value functions, optimal strategies, and optimal exercise boundaries due to the early-exercise feature. The dual optimal stopping problem is first derived and then the strong duality between the dual and primal problems is established. The least-squares Monte-Carlo methods based on dual control theory are developed and numerical simulations are provided. Both power and non-HARA utilities are studied. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
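Result 19's regression-based approach builds on the classical Longstaff-Schwartz least-squares Monte-Carlo method. A generic sketch for a Bermudan put under GBM follows, as a simplified stand-in for the paper's dual problem under the CEV model; all market parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Longstaff-Schwartz for a Bermudan put under GBM (hypothetical parameters).
S0, K, r, sigma, T, steps, n = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 50_000
dt = T / steps
z = rng.standard_normal((n, steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1))

payoff = np.maximum(K - S[:, -1], 0.0)           # exercise value at maturity
for t in range(steps - 2, -1, -1):
    payoff *= np.exp(-r * dt)                    # discount one step back
    itm = K - S[:, t] > 0.0                      # regress on in-the-money paths
    x = S[itm, t]
    coeffs = np.polyfit(x, payoff[itm], deg=2)   # continuation value ~ poly(S)
    cont = np.polyval(coeffs, x)
    payoff[itm] = np.where(K - x > cont, K - x, payoff[itm])

price = np.exp(-r * dt) * payoff.mean()
print(f"Bermudan put price ~ {price:.2f}")
```

The backward regression estimates the continuation value at each exercise date; the paper applies the same machinery to the dual control problem, where closed-form solutions are unavailable.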
20. Sampling from Large Matrices: An Approach through Geometric Functional Analysis.
- Author
-
Rudelson, Mark and Vershynin, Roman
- Subjects
MATRICES (Mathematics), FUNCTIONAL analysis, GEOMETRY, APPROXIMATION theory, BANACH spaces, ALGORITHMS - Abstract
We study random submatrices of a large matrix A. We show how to approximately compute A from its random submatrix of the smallest possible size O(r log r) with a small error in the spectral norm, where r = ‖A‖_F² / ‖A‖₂² is the numerical rank of A. The numerical rank is always bounded by, and is a stable relaxation of, the rank of A. This yields an asymptotically optimal guarantee in an algorithm for computing low-rank approximations of A. We also prove asymptotically optimal estimates on the spectral norm and the cut-norm of random submatrices of A. The result for the cut-norm yields a slight improvement on the best-known sample complexity for an approximation algorithm for MAX-2CSP problems. We use methods of Probability in Banach spaces, in particular the law of large numbers for operator-valued random variables. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
21. SoftCorner: Relaxation of Corner Values for Deterministic Static Timing Analysis of VLSI Systems
- Author
-
Hyunjeong Kwon, Jae Hoon Kim, Seokhyeong Kang, and Young Hwan Kim
- Subjects
Statistical analysis, computer aided analysis, analysis of variance, circuit analysis, Monte-Carlo methods, deterministic static timing analysis, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
We propose SoftCorner, a novel approach to estimating the worst-case delay (3σ delay) of a VLSI system. SoftCorner modifies deterministic static timing analysis to overcome its tendency to give pessimistic estimates of system delays when 3σ corner delays are used for logic gates. The basic idea of SoftCorner is to use corner delays that are relaxed to the 95th and 85th percentiles, achieving a small average error while running 4 times faster than the MC simulation on average.
- Published
- 2018
- Full Text
- View/download PDF
22. QED.jl - First-Principle Description of QED-Processes in x-ray laser fields
- Author
-
(0000-0002-6182-1481) Hernandez Acosta, U., (0000-0001-8965-1149) Steiniger, K., Jungnickel, T., and (0000-0002-8258-3881) Bussmann, M.
- Abstract
We present a novel approach for an event generator inherently using exact QED descriptions to predict the results of high-energy electron-photon scattering experiments that can be performed at modern X-ray free-electron laser facilities. Future experiments taking place at HIBEF, LCLS, and other facilities targeting this regime will encounter processes in x-ray scattering from (laser-driven) relativistic plasmas, where the effects of the energy spectrum of the laser field as well as multi-photon interactions can no longer be neglected. In contrast to the application window of existing QED-PIC codes, our event generator makes use of the fact that the classical nonlinearity parameter barely approaches unity in high-frequency regimes, which allows taking the finite bandwidth of the x-ray laser into account in the description of the QED-like multi-photon interaction. Consequently, we exploit these effects in Compton scattering, Breit-Wheeler pair production, and trident pair production in x-ray laser fields as one of the driving forces of electromagnetic cascades and plasma formation.
- Published
- 2023
23. Goodness-of-Fit Tests for Birnbaum-Saunders Distributions.
- Author
-
Darijani, Saeed, Zakerzadeh, Hojatollah, and Torabi, Hamzeh
- Subjects
GOODNESS-of-fit tests, MONTE Carlo method, DISTRIBUTION (Probability theory), PARAMETER estimation, DATA analysis - Abstract
Goodness-of-fit tests are constructed for the two-parameter Birnbaum-Saunders distribution in the case where the parameters are unknown and therefore are estimated from the data. In each test, the procedure starts by computing efficient estimators of the parameters. Then the data are transformed by a normal transformation and normality tests are applied on the transformed data, thereby avoiding reliance on parametric asymptotic critical values or the need for bootstrap computations. Three classes of tests are considered, the first class being the classical tests based on the empirical distribution function, while the other class utilizes the empirical characteristic function and the final class utilizes the Kullback-Leibler information function. All methods are extended to cover the case of generalized three-parameter Birnbaum-Saunders distributions. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
24. Machine Learning for Semi Linear PDEs.
- Author
-
Chan-Wai-Nam, Quentin, Mikael, Joseph, and Warin, Xavier
- Abstract
Recent machine learning algorithms dedicated to solving semi-linear PDEs are improved by using different neural network architectures and different parameterizations. These algorithms are compared to a new one that solves a fixed point problem by using deep learning techniques. This new algorithm appears to be competitive in terms of accuracy with the best existing algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
25. A cluster controller for transition matrix calculations.
- Author
-
Yevick, David and Lee, Yong Hwan
- Subjects
MATRICES (Mathematics), STATISTICAL physics - Abstract
We demonstrate that a temperature schedule for single-spin flip transition matrix calculations can be simply and rapidly generated by monitoring the average size of the Wolff clusters at a set of discrete temperatures. Optimizing this schedule yields a potentially interesting quantity related to the fractal structure of Ising clusters. We also introduce a technique in which the transition matrix is constructed at a sequence of discrete temperatures at which Wolff cluster reversals are alternated with certain series of single-spin flip steps. The single spin-flip transitions are then employed to construct a single transition matrix. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
26. Monte-Carlo Based Sensitivity Analysis of Acoustic Sorting Methods.
- Author
-
Simon, Gergely, Hantos, Gergely B., Andrade, Marco A. B., Desmulliez, Marc P. Y., Riehle, Mathis O., and Bernassau, Anne L.
- Subjects
SENSITIVITY analysis, MICROFLUIDIC devices, ENERGY density, NUMERICAL analysis - Abstract
Separation in microfluidic devices is a crucial enabling step for many industrial, biomedical, clinical, and chemical applications. Acoustic methods offer contactless, biocompatible, scalable sorting with a high degree of reconfigurability and are therefore favored techniques. The literature reports various techniques to achieve particle separation, but these reports do not investigate the sensitivity of the methods, and they are difficult to compare due to the lack of figures of merit. In this paper, we present analytical and numerical sensitivity analyses of the time-of-flight and phase-modulated sorting schemes against various extrinsic and intrinsic properties. The results reveal the great robustness of the phase-modulated sorting method against variations in flow rate or acoustic energy density, while the time-of-flight method shows a lower efficiency drop against size and density variations. The results presented in this paper provide a better understanding of the two sorting methods and offer advice on selecting the right technique for a given sorting application. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
27. The Amnesiac Lookback Option: Selectively Monitored Lookback Options and Cryptocurrencies
- Author
-
Ho-Chun Herbert Chang and Kevin Li
- Subjects
options pricing ,lookback options ,path-dependent options ,Monte-Carlo methods ,cryptocurrency ,smart contracts ,Applied mathematics. Quantitative methods ,T57-57.97 ,Probabilities. Mathematical statistics ,QA273-280 - Abstract
This study proposes a strategy to make the lookback option cheaper and more practical, and suggests using its properties to reduce risk exposure in cryptocurrency markets through blockchain-enforced smart contracts and to correct for informational inefficiencies surrounding prices and volatility. This paper generalizes partial, discretely-monitored lookback options that dilute premiums by selecting a subset of specified periods to determine payoff, which we call amnesiac lookback options. Prior literature on discretely-monitored lookback options considers the number of periods and assumes equidistant lookback periods in pricing partial lookback options. This study, by contrast, considers random sampling of lookback periods and compares the resulting payoffs of the call, put and spread options under floating and fixed strikes. Amnesiac lookbacks are priced with Monte Carlo simulations of Gaussian random walks under equidistant and random periods. Results are compared to analytic and binomial pricing models for the same derivatives. Simulations show diminishing marginal increases to the fair price as the number of selected periods is increased. The returns correspond to a Hill curve whose parameters are set by interest rate and volatility. We demonstrate over-pricing under equidistant monitoring assumptions, with error increasing as the lookback periods decrease. A direct implication for event trading: when a shock is forecast but its timing is uncertain, equidistant sampling produces a lower error on the true maximum than random choice. We conclude that the instrument provides an ideal space for investors to balance their risk, and is a prime candidate for hedging extreme volatility. We discuss the application of the amnesiac lookback option and path-dependent options to cryptocurrencies and blockchain commodities in the context of smart contracts.
- Published
- 2018
- Full Text
- View/download PDF
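A minimal sketch of the pricing idea above: Monte-Carlo valuation of a floating-strike amnesiac lookback call under geometric Brownian motion, with either equidistant or randomly sampled monitoring dates. The function name, parameter defaults, and the discretization below are our illustrative assumptions, not the authors' implementation.

```python
import math
import random

def amnesiac_lookback_call(S0, r, sigma, T, n_steps, n_lookbacks,
                           n_paths=20000, equidistant=True, seed=1):
    """Monte-Carlo price of a floating-strike 'amnesiac' lookback call:
    payoff S_T - min(S over a monitored subset of dates), under GBM.
    Equidistant monitoring assumes n_steps is a multiple of n_lookbacks."""
    rng = random.Random(seed)
    dt = T / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    all_idx = list(range(1, n_steps + 1))
    total = 0.0
    for _ in range(n_paths):
        if equidistant:
            stride = n_steps // n_lookbacks
            idx = set(all_idx[stride - 1::stride][:n_lookbacks])
        else:
            idx = set(rng.sample(all_idx, n_lookbacks))   # random monitoring dates
        S, monitored_min = S0, float("inf")
        for k in all_idx:
            S *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if k in idx:
                monitored_min = min(monitored_min, S)
        total += S - monitored_min                        # floating-strike call payoff
    return math.exp(-r * T) * total / n_paths
```

Monitoring more dates can only lower the monitored minimum, so the fair price rises with diminishing increments as periods are added, consistent with the Hill-curve behavior reported above.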
28. Stochastic models driven by a Lévy noise: Application to rods orientation in turbulence
- Author
-
Maurer, Paul, Stochastic Approaches for Complex Flows and Environment (CALISTO), Centre de Mise en Forme des Matériaux (CEMEF), Mines Paris - PSL (École nationale supérieure des mines de Paris), Université Paris sciences et lettres (PSL)-Université Paris sciences et lettres (PSL)-Centre National de la Recherche Scientifique (CNRS)-Mines Paris - PSL (École nationale supérieure des mines de Paris), Université Paris sciences et lettres (PSL)-Université Paris sciences et lettres (PSL)-Centre National de la Recherche Scientifique (CNRS)-Inria Sophia Antipolis - Méditerranée (CRISAM), Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria), Académie Systèmes Complexes, Complex Systems Academy, COMPLEX SYSTEMS ACADEMY, and ANR-15-IDEX-0001,UCA JEDI,Idex UCA JEDI(2015)
- Subjects
[PHYS]Physics [physics] ,Stochastic models ,Lévy processes ,Stochastic differential equations ,Monte-Carlo methods ,Turbulent flows ,[MATH]Mathematics [math] - Abstract
National audience
- Published
- 2023
29. Pattern Formation and Dynamics in Bacterial Cells
- Author
-
Subramanian, Srikanth and Murray, Séan (Dr.)
- Subjects
chromosome dynamics ,polymer simulations ,Non-Linear Dynamics ,microsco ,Pattern selection ,Biophysics ,partial differential equations ,Turing Patterns ,monte-carlo methods ,Reaction-diffusion systems ,Statistical Mechanics ,Pattern Formation ,Physics ,Physik ,ddc:530 - Abstract
Spatio-temporal organisation plays a critical role in all life. More specifically in biological cells, the spatial organisation of key proteins and the chromosome is essential for their function, segregation and faithful inheritance. Within bacterial cells pattern formation appears to play an essential role at different levels. Examples of pattern formation in proteins include pole-to-pole oscillations, self-positioning clusters and protein gradients. Chromosomes on the other hand display an ordered structure with individual domains exhibiting specific spatio-temporal organisation. This work examines the processes determining dynamics and organisation within bacterial cells by combining analytical, computational and experimental approaches. The thesis is split into two distinct parts, one providing new physical insights into pattern formation in general and the other detailing the dynamics of chromosomes. Reaction-diffusion systems are helpful models in order to study pattern formation in chemical, physical and biological systems. A pattern or Turing state emerges when the spatially homogeneous state becomes unstable to small perturbations. While initially intended for describing pattern formation in biological systems (for example embryogenesis, scale patterning etc.), their practical application has been notoriously difficult. The biggest challenge is our inability to predict in general the steady-state patterns obtained from a given set of parameters. While much is known near the onset (when the system is marginally unstable) of the spatial instability, the mechanisms underlying pattern selection and dynamics away from the onset are much less understood. In the first part of this thesis, we provide physical insight into the dynamics of these patterns and their selection at steady state. We find that peaks in a Turing pattern behave as point sinks, the dynamics of which are determined by the diffusive fluxes into them. 
As a result, peaks move toward a periodic steady-state configuration that minimizes the mass of the diffusive species. Importantly, we also show that the preferred number of peaks at the final steady state is such that this mass is minimized. Our work presents mass minimization as a general principle for understanding pattern formation in reaction-diffusion systems. In the second part, we discuss a more biological problem that involves the study of bacterial DNA loci dynamics at short time scales, where we perform polymer simulations, modelling and fluorescent tracking experiments in conjunction. Chromosomal loci in bacterial cells show a robust sub-diffusive scaling of the mean square displacement, $\textrm{MSD}(\tau) \sim \tau^{\alpha}$, with $\alpha < 0.5$. This is in contrast to scaling predictions from simple polymer models ($\alpha \geq 0.5$). While the motion of the chromosome in a viscoelastic cytoplasm has been proposed as a possible explanation for the difference, recent experiments in compressed cells question this hypothesis. On the other hand, recent experiments have shown that DNA-bridging Nucleoid Associated Proteins (NAPs) play an important role in chromosome organisation and compaction. Here, using polymer simulations we investigate the role of DNA bridging in determining the dynamics of chromosomal loci. We find that bridging compacts the polymer and reproduces the sub-diffusive elastic dynamics of monomers at timescales shorter than the bridge lifetime. Consistent with this prediction, we measure a higher exponent in a NAP mutant ($\Delta$H-NS) compared to wild-type \textit{E. coli}. Furthermore, bridging can reproduce the rare but ubiquitous rapid movements of chromosomal loci that have been observed in experiments. In our model, the scaling exponent defines a relationship between the abundance of bridges and their lifetime.
Using this and the observed mobility of chromosomal loci, we predict a lower bound on the average bridge lifetime of around $5$ seconds. We hope that this framework will help guide future model development and understanding of chromosome dynamics.
- Published
- 2023
29. QED.jl - First-Principle Description of QED-Processes in x-ray laser fields
- Author
-
Hernandez Acosta, U., Steiniger, K., Jungnickel, T., and Bussmann, M.
- Subjects
Monte-Carlo methods ,Strong-field QED ,Simulation - Abstract
We present a novel approach for an event generator inherently using exact QED descriptions to predict the results of high-energy electron-photon scattering experiments that can be performed at modern X-ray free-electron laser facilities. Future experiments taking place at HIBEF, LCLS, and other facilities targeting this regime will encounter processes in x-ray scattering from (laser-driven) relativistic plasmas, where the effects of the energy spectrum of the laser field as well as multi-photon interactions can no longer be neglected. In contrast to the application window of existing QED-PIC codes, our event generator makes use of the fact that the classical nonlinearity parameter barely approaches unity in high-frequency regimes, which allows the finite bandwidth of the x-ray laser to be taken into account in the description of the QED-like multi-photon interaction. Consequently, we exploit these effects in Compton scattering, Breit-Wheeler pair production and trident pair production in x-ray laser fields as one of the driving forces of electromagnetic cascades and plasma formation.
- Published
- 2023
31. Statistical Estimations of Lattice-Valued Possibilistic Distributions
- Author
-
Kramosil, Ivan, Daniel, Milan, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Goebel, Randy, editor, Siekmann, Jörg, editor, Wahlster, Wolfgang, editor, and Liu, Weiru, editor
- Published
- 2011
- Full Text
- View/download PDF
32. Transition matrix cluster algorithms.
- Author
-
Yevick, David and Lee, Yong Hwan
- Subjects
- *
SPIN-spin interactions , *SPIN-orbit interactions , *ALGORITHMS , *MATRICES (Mathematics) - Abstract
We demonstrate that a series of procedures for increasing the efficiency of transition matrix calculations can be realized by integrating the standard single-spin flip transition matrix method with global cluster flipping techniques. Our calculations employ a simple and accurate method based on detailed balance for computing the density of states from the Ising model transition matrix. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
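The detailed-balance route from a single-spin-flip transition matrix to the density of states can be sketched on a lattice small enough to check by enumeration. The sketch below is a generic textbook-style illustration under our own conventions (a 3x3 periodic Ising model, a Metropolis walk to drive the sampling, infinite-temperature transition-matrix tallies), not the authors' method or code.

```python
import math
import random
from collections import defaultdict

L = 3                 # lattice side; small enough to verify against exact enumeration
N = L * L

def energy(s):
    """Energy of a periodic L x L Ising configuration stored as a flat list (J = 1)."""
    return -sum(s[i * L + j] * (s[((i + 1) % L) * L + j] + s[i * L + (j + 1) % L])
                for i in range(L) for j in range(L))

def delta_e(s, k):
    """Energy change from flipping spin k."""
    i, j = divmod(k, L)
    nb = (s[((i + 1) % L) * L + j] + s[((i - 1) % L) * L + j]
          + s[i * L + (j + 1) % L] + s[i * L + (j - 1) % L])
    return 2 * s[k] * nb

def transition_matrix_ln_dos(beta=0.3, sweeps=40000, seed=2):
    """Accumulate the single-spin-flip transition matrix during a Metropolis walk,
    then recover ln g(E) (up to a constant) from detailed balance:
    g(E')/g(E) = T(E->E') / T(E'->E)."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(N)]
    E = energy(s)
    moves = defaultdict(float)
    visits = defaultdict(int)
    for _ in range(sweeps):
        visits[E] += 1
        for k in range(N):                  # tally all N proposed flips from this state
            moves[(E, E + delta_e(s, k))] += 1.0
        k = rng.randrange(N)                # one ordinary Metropolis update drives the walk
        dE = delta_e(s, k)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[k] = -s[k]
            E += dE
    T = {ab: c / (N * visits[ab[0]]) for ab, c in moves.items()}
    ln_g, frontier = {min(visits): 0.0}, [min(visits)]
    while frontier:                         # chain ratios outward over connected levels
        a = frontier.pop()
        for (x, b), t in T.items():
            if x == a and b != a and b not in ln_g and T.get((b, a), 0.0) > 0.0:
                ln_g[b] = ln_g[a] + math.log(t / T[(b, a)])
                frontier.append(b)
    return ln_g
```

For the 3x3 lattice the exact density of states is available by brute-force enumeration of all 512 configurations, which gives a direct check on the recovered log-ratios.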
33. Information-Theoretic Model Predictive Control: Theory and Applications to Autonomous Driving.
- Author
-
Williams, Grady, Drews, Paul, Goldfain, Brian, Rehg, James M., and Theodorou, Evangelos A.
- Subjects
- *
OPTIMAL control theory , *AUTONOMOUS vehicles , *TRAFFIC safety , *FEEDBACK control systems , *HAMILTON-Jacobi equations , *GAUSSIAN processes , *NONLINEAR systems - Abstract
We present an information-theoretic approach to stochastic optimal control problems that can be used to derive general sampling-based optimization schemes. This new mathematical method is used to develop a sampling-based model predictive control algorithm. We apply this information-theoretic model predictive control scheme to the task of aggressive autonomous driving around a dirt test track, and compare its performance with a model predictive control version of the cross-entropy method. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
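The sampling-based update at the heart of information-theoretic MPC can be sketched in a few lines: perturb a nominal control sequence, roll out the dynamics, and re-average the perturbations with weights exponential in the negative trajectory cost. This is a toy MPPI-style step with hypothetical names and defaults, far from the paper's real-time autonomous-driving implementation.

```python
import math
import random

def mppi_step(dynamics, cost, u_nom, x0, horizon,
              n_samples=200, sigma=0.5, lam=1.0, rng=None):
    """One MPPI-style update: sample perturbed control sequences, roll out the
    dynamics, and re-weight the perturbations by the exponentiated negative cost."""
    rng = rng or random.Random(0)
    noises, costs = [], []
    for _ in range(n_samples):
        eps = [rng.gauss(0.0, sigma) for _ in range(horizon)]
        x, c = x0, 0.0
        for t in range(horizon):
            x = dynamics(x, u_nom[t] + eps[t])
            c += cost(x)
        noises.append(eps)
        costs.append(c)
    best = min(costs)                  # subtract the best cost for numerical stability
    w = [math.exp(-(c - best) / lam) for c in costs]
    wsum = sum(w)
    return [u_nom[t] + sum(w[k] * noises[k][t] for k in range(n_samples)) / wsum
            for t in range(horizon)]
```

On a trivial 1D plant driven toward a setpoint, repeated updates steadily reduce the rollout cost of the nominal sequence.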
34. Optimal investment strategies for general utilities under dynamic elasticity of variance models.
- Author
-
WENYUAN LI and JINGTANG MA
- Subjects
- *
FINANCIAL markets , *MARKET volatility , *STOCHASTIC processes , *MONTE Carlo method , *STOCK prices - Abstract
This paper studies the optimal investment strategies under the dynamic elasticity of variance (DEV) model which maximize the expected utility of terminal wealth. The DEV model is an extension of the constant elasticity of variance model, in which the volatility term is a power function of stock prices with the power being a nonparametric time function. It is not possible to find the explicit solution to the utility maximization problem under the DEV model. In this paper, a dual-control Monte-Carlo method is developed to compute the optimal investment strategies for a variety of utility functions, including power, non-hyperbolic absolute risk aversion and symmetric asymptotic hyperbolic absolute risk aversion utilities. Numerical examples show that this dual-control Monte-Carlo method is quite efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
35. A mesh-free Monte-Carlo method for simulation of three-dimensional transient heat conduction in a composite layered material with temperature dependent thermal properties.
- Author
-
Bahadori, Reza, Gutierrez, Hector, Manikonda, Shashikant, and Meinke, Rainer
- Subjects
- *
HEAT conduction , *HEAT transfer in composite materials , *THERMAL properties , *MONTE Carlo method , *THERMAL diffusivity - Abstract
A new solution for the three-dimensional transient heat conduction from a homogeneous medium to a non-homogeneous multi-layered composite material with temperature dependent thermal properties using a mesh-free Monte-Carlo method is proposed. The novel contributions include a new algorithm to account for the impact of thermal diffusivities from source to sink in the calculation of the particles’ step length (particles are represented as bundles of energy emitted from each source), and a derivation of the three-dimensional peripheral integration to account for the influence of material properties around the sink on its temperature. Simulations developed using the proposed method are compared against both experimental measurements and results from a finite element simulation. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
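A much-simplified illustration of the mesh-free Monte-Carlo flavor of heat conduction: walk-on-spheres for steady-state conduction in a homogeneous disk, where each "particle" jumps on the largest sphere fitting inside the domain until it is absorbed at the boundary. The paper's method (transient, three-dimensional, layered media with temperature-dependent properties) is substantially more involved; everything below is our own toy.

```python
import math
import random

def walk_on_spheres_temperature(p, boundary_temp, R=1.0, eps=1e-3,
                                n_walks=4000, seed=4):
    """Mesh-free estimate of the steady-state temperature at point p inside a disk
    of radius R: jump to a uniform point on the largest circle inside the domain
    until the walker lands within eps of the boundary, then sample the boundary
    temperature there."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x, y = p
        while True:
            d = R - math.hypot(x, y)      # distance to the boundary
            if d < eps:                   # absorbed: tally the boundary temperature
                total += boundary_temp(math.atan2(y, x))
                break
            phi = rng.uniform(0.0, 2.0 * math.pi)
            x += d * math.cos(phi)
            y += d * math.sin(phi)
    return total / n_walks
```

For the harmonic boundary data T(θ) = 100 + 50 cos θ the exact interior solution is T(r, θ) = 100 + 50 (r/R) cos θ, so the estimate at (0.5, 0) should be close to 125.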
36. Bit-wise Pseudo-Bayes genetic algorithms to model data distributions.
- Author
-
Aguilar-Rivera, Anton
- Subjects
GENETIC algorithms ,BAYESIAN analysis ,STATISTICAL correlation ,DISCRETIZATION methods ,DOW Jones industrial average - Abstract
This work introduces a method to generate implicit models from data. The algorithm is based on Bayesian networks, discrete codification and genetic algorithms. The concept of bit-wise Pseudo-Bayes networks is introduced in this work. It refers to Bayesian networks generated from discretized data. The network describes the model as a function of the correlations between bits. The model is considered implicit in the sense that the meaning of the original variables is lost during discretization, but it can provide random samples with a distribution similar to that of the original data. These samples can be the input of other algorithms that rely on samples of data in a transparent fashion. Moreover, this approach alleviates the problem of storing and handling large volumes of data, a common occurrence in modern data science, and circumvents the problems of the identification process. The algorithm to generate bit-wise Pseudo-Bayes models is described in detail, and introduces innovations to the representation of Bayesian networks based on extended chain structures. It also introduces new discretization methods and compares them to others reported in the literature, which have mainly been used for evolutionary continuous optimization. The performance of the proposed algorithm is studied using two data sets: price data from the stocks of the Dow Jones industrial average, and price data from the stocks of the Mexican Índice de precios y cotizaciones. The proposed method is compared against other discrete modeling techniques reported in the literature, attaining a higher performance. The results indicate that direct discretization methods are effective when coupled with Bayesian networks, but further research is needed to guarantee scalability. The study indicated that the Dow Jones data set was more difficult than the Mexican index data set, which was attributed to the 2008 American financial crisis. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
37. Recent improvements for the lepton propagator PROPOSAL.
- Author
-
Dunsch, Mario, Soedingrekso, Jan, Sandrock, Alexander, Meier, Maximilian, Menne, Thorben, and Rhode, Wolfgang
- Subjects
- *
MONTE Carlo method , *MUONS , *BREMSSTRAHLUNG , *PAIR production , *PARTICLE decays , *MODULAR construction , *PARTICLE tracks (Nuclear physics) , *PROGRAMMING languages - Abstract
The lepton propagator PROPOSAL is a Monte-Carlo Simulation library written in C++, propagating high energy muons and other charged particles over large distances in media. In this article, a restructuring of the code is described, which yields a performance improvement of up to 30%. For an improved accuracy of the propagation processes, more exact calculations of the leptonic and hadronic decay process and more precise parametrizations for the interaction cross sections are now available. The new modular structure allows a more flexible and customized usage, which is further facilitated with a Python interface. Program Title: PROPOSAL Program Files doi: http://dx.doi.org/10.17632/g478pjdcxy.1 Licensing provisions: LGPL Programming language: C++ Nature of problem: Propagation of charged particles over large distances in three dimensions through different kinds of media. These particles lose their energy stochastically via the processes of ionization, pair production, bremsstrahlung and inelastic nuclear interaction and eventually decay, producing secondary particles along the trajectory. Solution method: Monte-Carlo simulation. The program samples the next stochastic interaction point, the type of interaction and the amount of energy lost in the interaction until either the particle decays, its energy is below a certain threshold or it reaches a given distance. To improve the performance and to deal with the bremsstrahlung divergence at small energy losses, an adaptable energy cut is used below which all losses are treated continuously. The use of interpolation tables further reduces computation time. The sampled energy till the next stochastic loss is smeared out with a Gaussian randomization inside the physically allowed limits of the continuous losses using the second moment of the summed processes to avoid artifacts introduced by the energy cut. 
The deviation from a straight trajectory is evaluated using the multiple scattering calculation by Molière or the Highland parametrization, a Gaussian approximation to Molière's theory. Multiple kinds of parametrizations are also available for bremsstrahlung, pair production and inelastic nuclear interaction to study the effects of the uncertainty of the cross sections on the propagation and further simulation steps. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
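The solution method described above (sample the next stochastic interaction point, choose the interaction type, draw the energy lost, repeat until a threshold) can be caricatured in a few lines. This is a generic toy propagation loop under our own assumptions (made-up interaction rates and loss fractions), not PROPOSAL's physics or API.

```python
import math
import random

def propagate(E0, e_min=1.0, mfp=1.0, seed=9):
    """Toy propagation loop: sample the distance to the next stochastic interaction
    from an exponential law, pick an interaction type by its relative rate, and
    draw a fractional energy loss, until the energy falls below a threshold."""
    rng = random.Random(seed)
    rates = {"ionization": 0.6, "bremsstrahlung": 0.3, "pair_production": 0.1}
    total_rate = sum(rates.values())
    E, x, losses = E0, 0.0, []
    while E > e_min:
        x += -math.log(1.0 - rng.random()) * mfp / total_rate  # exponential free path
        u, acc = rng.random() * total_rate, 0.0
        for kind, rate in rates.items():                       # choose interaction type
            acc += rate
            if u <= acc:
                break
        dE = E * rng.uniform(0.05, 0.5)                        # toy fractional loss
        E -= dE
        losses.append((x, kind, dE))
    return x, losses
```

The loop terminates because each step removes at least 5% of the current energy, and the recorded positions grow monotonically along the track.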
38. Monte Carlo Integration for Quasi–linear Models
- Author
-
Gundlich, B., Kusche, J., Sansò, Fernando, editor, Xu, Peiliang, editor, Liu, Jingnan, editor, and Dermanis, Athanasios, editor
- Published
- 2008
- Full Text
- View/download PDF
39. Modelling the Bivariate Spatial Distribution of Amacrine Cells
- Author
-
Diggle, Peter J., Eglen, Stephen J., Troy, John B., Bickel, P., editor, Diggle, P., editor, Fienberg, S., editor, Gather, U., editor, Olkin, I., editor, Zeger, S., editor, Baddeley, Adrian, editor, Gregori, Pablo, editor, Mateu, Jorge, editor, Stoica, Radu, editor, and Stoyan, Dietrich, editor
- Published
- 2006
- Full Text
- View/download PDF
40. Exploring the in-equilibrium and out-of-equilibrium learning regimes of restricted Boltzmann machines
- Author
-
Navas Gómez, Alfonso de Jesús, Giraldo Gallo, José Jairo, and Seoane Bartolomé, Beatriz
- Subjects
Sistemas expertos ,Computational complexity ,Artificial intelligence ,Statistical physics of disordered systems ,Restricted Boltzmann machines ,Complejidad computacional ,Machine learning ,Monte-Carlo methods ,Física estadística de sistemas desordenados ,Métodos de Monte-Carlo ,Aprendizaje automatizado ,Máquinas de Boltzmann restringidas ,Inteligencia artificial - Abstract
Although machine-learning-based artificial intelligence is considered one of the most disruptive technologies of our age, the understanding of many of these methods lags far behind their practical success. The statistical physics of disordered systems has a long history of studying inference problems and learning processes with its own tools, shedding light on the underlying mechanisms of many machine learning models. Following this tradition, in this master's thesis we studied how the training protocol affects the model and the features extracted by an unsupervised machine learning method called the Restricted Boltzmann Machine. In particular, we trained machines in in-equilibrium and out-of-equilibrium learning regimes with samples of the Ising model in one and two dimensions and then, using a novel pattern-extraction protocol developed in this work, inferred the coupling matrix of the effective Ising model learned in each case. This experiment allowed us to elucidate some consequences of equilibrium and non-equilibrium training regimes. Additionally, we explored the potential use of Restricted Boltzmann Machines as an inference tool for Ising-model-like sample data, a first step toward tackling more complex problems. Master's thesis (Magíster en Ciencias - Física).
- Published
- 2022
41. A full-angle Monte-Carlo scattering technique including cumulative and single-event Rutherford scattering in plasmas.
- Author
-
Higginson, Drew P.
- Subjects
- *
MONTE Carlo method , *SCATTERING (Physics) , *RUTHERFORD'S backscattering , *COLLISIONAL plasma , *INERTIAL confinement fusion - Abstract
We describe and justify a full-angle scattering (FAS) method to faithfully reproduce the accumulated differential angular Rutherford scattering probability distribution function (pdf) of particles in a plasma. The FAS method splits the scattering events into two regions. At small angles it is described by cumulative scattering events resulting, via the central limit theorem, in a Gaussian-like pdf; at larger angles it is described by single-event scatters and retains a pdf that follows the form of the Rutherford differential cross-section. The FAS method is verified using discrete Monte-Carlo scattering simulations run at small timesteps to include each individual scattering event. We identify the FAS regime of interest as where the ratio of temporal/spatial scale-of-interest to slowing-down time/length is from 10⁻³ to 0.3–0.7; the upper limit corresponds to Coulomb logarithm of 20–2, respectively. Two test problems, high-velocity interpenetrating plasma flows and keV-temperature ion equilibration, are used to highlight systems where including FAS is important to capture relevant physics. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
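The two-region idea can be caricatured as follows: each step takes a Gaussian cumulative small-angle kick and, with some probability, one single-event deflection drawn from the small-angle Rutherford tail (pdf proportional to 1/θ³ on a truncated interval, sampled by inverse CDF). The 1D treatment, names and parameters below are illustrative assumptions, not the paper's calibrated FAS method.

```python
import math
import random

def sample_deflection(theta_gauss, theta_tail_min, theta_max, p_single, rng):
    """Toy two-region scattering sample: a Gaussian cumulative small-angle kick,
    plus (with probability p_single) one large-angle 'single-event' deflection
    drawn from the small-angle Rutherford tail, pdf ~ 1/theta^3 on
    [theta_tail_min, theta_max]."""
    theta = rng.gauss(0.0, theta_gauss)
    if rng.random() < p_single:
        u = rng.random()
        a2 = (theta_tail_min / theta_max) ** 2
        # inverse CDF of f ~ 1/theta^3: theta = m / sqrt(1 - u (1 - (m/M)^2))
        tail = theta_tail_min / math.sqrt(1.0 - u * (1.0 - a2))
        theta += tail if rng.random() < 0.5 else -tail   # random sign
    return theta
```

Forcing the single-event branch (p_single = 1) and shrinking the Gaussian width isolates the tail, whose median can be checked against the analytic inverse CDF.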
42. Weighted negative cumulative extropy with application in testing uniformity.
- Author
-
Chakraborty, Siddhartha, Das, Oindrali, and Pradhan, Biswabrata
- Subjects
- *
UNIFORMITY , *GOODNESS-of-fit tests , *CUMULATIVE distribution function , *INFORMATION measurement - Abstract
In this paper, we propose a new information measure, related to cumulative extropy, called the weighted negative cumulative extropy measure. Various properties of the proposed measure are obtained, and it is shown that the weighted negative cumulative extropy measure is related to both the weighted mean residual and mean past lifetimes of the associated random variable. A non-parametric estimator of the proposed measure is provided and, using this estimator, a goodness-of-fit test for the uniform distribution is developed. The proposed test performs reasonably well. • A new weighted information measure is proposed. • Various properties of the proposed measure are studied. • A non-parametric estimator of the measure is provided. • Application in testing uniformity is considered. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. Dual-ratio approach for detection of point fluorophores in biological tissue.
- Author
-
Blaney, Giles, Ivich, Fernando, Sassaroli, Angelo, Niedre, Mark, and Fantini, Sergio
- Subjects
- *
FLUOROPHORES , *OPTICAL measurements , *SIGNAL-to-noise ratio , *BIOFLUORESCENCE , *FLOW cytometry , *TISSUES , *PULMONARY veins - Abstract
Diffuse in vivo flow cytometry (DiFC) is an emerging fluorescence sensing method to non-invasively detect labeled circulating cells in vivo. However, due to signal-to-noise ratio (SNR) constraints largely attributed to background tissue autofluorescence (AF), DiFC's measurement depth is limited. The dual ratio (DR)/dual slope is an optical measurement method that aims to suppress noise and enhance SNR in deep tissue regions. We aim to investigate the combination of DR and near-infrared (NIR) DiFC to improve the maximum detectable depth and SNR of circulating cells. Phantom experiments were used to estimate the key parameters in a diffuse fluorescence excitation and emission model. This model and parameters were implemented in Monte Carlo to simulate DR DiFC while varying noise and AF parameters to identify the advantages and limitations of the proposed technique. Two key factors must hold to give DR DiFC an advantage over traditional DiFC: first, the fraction of noise that DR methods cannot cancel must not exceed the order of 10% for acceptable SNR; second, DR DiFC has an advantage, in terms of SNR, if the distribution of tissue AF contributors is surface-weighted. DR-cancelable noise may be designed (e.g., through the use of source multiplexing), and indications point to the AF contributors' distribution being truly surface-weighted in vivo. Successful and worthwhile implementation of DR DiFC depends on these considerations, but results point to DR DiFC having possible advantages over traditional DiFC. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
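One way to see why DR methods cancel certain noise: in a symmetric two-source, two-detector arrangement, any multiplicative coupling gain attached to a single source or detector divides out of the ratio. The labels (S1, S2, DA, DB) and the exact ratio arrangement below are our toy construction of the idea, not necessarily the paper's definition of the dual ratio.

```python
import random

def dual_ratio(I):
    """Symmetric two-source / two-detector ratio: any multiplicative coupling
    factor attached to a single source or detector cancels exactly."""
    return ((I[('S1', 'DB')] * I[('S2', 'DA')])
            / (I[('S1', 'DA')] * I[('S2', 'DB')])) ** 0.5

# ideal (gain-free) intensities for a symmetric arrangement -- arbitrary toy values
ideal = {('S1', 'DA'): 1.00, ('S1', 'DB'): 0.31,
         ('S2', 'DA'): 0.29, ('S2', 'DB'): 0.95}

rng = random.Random(8)
g = {k: rng.uniform(0.5, 2.0) for k in ('S1', 'S2', 'DA', 'DB')}  # random coupling gains
measured = {(s, d): g[s] * g[d] * v for (s, d), v in ideal.items()}
```

Because each gain appears once in the numerator and once in the denominator, the measured dual ratio equals the ideal one regardless of the gains.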
44. INFERRING OF REGULATORY NETWORKS FROM EXPRESSION DATA USING BAYESIAN NETWORKS
- Author
-
Alexander A. Loboda and Alexey A. Sergushichev
- Subjects
markov chains ,Markov chain ,Discretization ,Computer science ,Mechanical Engineering ,Monte Carlo method ,Bayesian network ,Atomic and Molecular Physics, and Optics ,lcsh:QA75.5-76.95 ,Computer Science Applications ,Electronic, Optical and Magnetic Materials ,monte-carlo methods ,bayesian networks ,lcsh:QC350-467 ,discretization ,lcsh:Electronic computers. Computer science ,Algorithm ,gene regulatory networks ,lcsh:Optics. Light ,Information Systems - Abstract
Subject of Research. The paper considers the inference of gene regulatory networks in the form of Bayesian networks from gene expression data. We formulate this problem as estimating the marginal probability of each edge's appearance in the true Bayesian network given the observed gene expression levels. A Monte Carlo approach based on Markov chains is proposed. Method. The proposed method samples pairs of a Bayesian network and a discretization policy, which provides a way for the network to be applied to continuous gene expression data, according to the posterior distribution. The Markov chain Monte Carlo approach was used for sampling, implemented via the Metropolis-Hastings algorithm. The desired probabilities were then estimated from the obtained sample. Main Results. The proposed method is tested on simulated data from the DREAM4 Challenges. Comparison shows that the developed method surpasses the leading existing method, regularized gradient boosting machines (RGBM), on some tests and is comparable on the others. At the same time, the proposed method is flexible and can be adapted to other types of experimental data. Practical Relevance. The method is applicable in computational biology for research into gene regulation mechanisms in various processes, including tumor growth and immune system operation.
- Published
- 2020
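A minimal structure-MCMC sketch of the edge-marginal idea above: Metropolis-Hastings toggling one directed edge at a time over DAGs on binary data, scored by the closed-form marginal likelihood with a uniform Beta prior. The paper's method, which jointly samples a network and a discretization policy for continuous expression data, is substantially more involved; names and defaults here are our assumptions.

```python
import math
import random
from itertools import product

def local_score(child, parents, data):
    """Log marginal likelihood of one binary node given its parents
    (uniform Beta prior, standard closed form)."""
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        c = counts.setdefault(key, [0, 0])
        c[row[child]] += 1
    return sum(math.lgamma(2) - math.lgamma(2 + n0 + n1)
               + math.lgamma(1 + n0) + math.lgamma(1 + n1)
               for n0, n1 in counts.values())

def edge_marginals(data, n_vars, iters=2000, seed=5):
    """Metropolis-Hastings over DAG structures: toggle one directed edge per step
    and tally the marginal probability of each edge's appearance."""
    rng = random.Random(seed)
    edges = set()
    candidates = [(u, v) for u, v in product(range(n_vars), repeat=2) if u != v]
    tally = {e: 0 for e in candidates}

    def parents(i):
        return sorted(u for (u, v) in edges if v == i)

    def reaches(a, b):                       # is b reachable from a along edges?
        stack, vis = [a], {a}
        while stack:
            x = stack.pop()
            if x == b:
                return True
            for (u, v) in edges:
                if u == x and v not in vis:
                    vis.add(v)
                    stack.append(v)
        return False

    def total_score():
        return sum(local_score(i, parents(i), data) for i in range(n_vars))

    cur = total_score()
    for _ in range(iters):
        e = rng.choice(candidates)
        if e in edges:
            edges.discard(e)
        elif reaches(e[1], e[0]):            # adding would create a cycle: stay put
            for k in edges:
                tally[k] += 1
            continue
        else:
            edges.add(e)
        new = total_score()
        if rng.random() < math.exp(min(0.0, new - cur)):
            cur = new                        # accept the toggled structure
        elif e in edges:                     # reject: undo the toggle
            edges.discard(e)
        else:
            edges.add(e)
        for k in edges:
            tally[k] += 1
    return {e: c / iters for e, c in tally.items()}
```

On toy data where X strongly predicts Y while Z is independent, the X-Y edge (in one direction or the other) should dominate the tallies.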
45. On Generating Optimum Configurations of Commuter Aircraft using Stochastic Optimisation
- Author
-
Pant, R., Kalker-Kalkman, C. M., Chawdhry, P. K., editor, Roy, R., editor, and Pant, R. K., editor
- Published
- 1998
- Full Text
- View/download PDF
46. The confining baryonic Y-strings on the lattice.
- Author
-
Bakry, Ahmed S., Xurong Chen, and Peng-Ming Zhang
- Subjects
- *
BARYONICHIDAE , *LATTICE dynamics , *STRING theory , *QUARKS , *GAUSSIAN beams - Abstract
In a string picture, the nucleon is conjectured to consist of a Y-shaped gluonic string ended by constituent quarks. In this proceeding, we summarize our results on revealing the signature of the confining Y-bosonic string in the gluonic profile due to a system of three static quarks on the lattice at finite temperature. The analysis of the action density unveils a background of a filled-Δ distribution. However, we found that these Δ-shaped profiles are comprised of three Y-shaped Gaussian-like flux tubes. The length of the revealed Y-string-like distribution is maximal near the deconfinement point and approaches the geometrical minimum near the end of the QCD plateau. The action density width profile returns good fits to a baryonic string model for the junction fluctuations at large quark source separation. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
47. The confining baryonic Y-strings on the lattice.
- Author
-
Bakry, Ahmed S., Xurong Chen, and Peng-Ming Zhang
- Subjects
QUANTUM confinement effects ,BARYONS ,LATTICE gauge theories ,QUARKS ,GLUONS - Abstract
In a string picture, the nucleon is conjectured to consist of a Y-shaped gluonic string ended by constituent quarks. In this proceeding, we summarize our results on revealing the signature of the confining Y-bosonic string in the gluonic profile due to a system of three static quarks on the lattice at finite temperature. The analysis of the action density unveils a background of a filled-Δ distribution. However, we found that these Δ-shaped profiles are comprised of three Y-shaped Gaussian-like flux tubes. The length of the revealed Y-string-like distribution is maximal near the deconfinement point and approaches the geometrical minimum near the end of the QCD plateau. The action density width profile returns good fits to a baryonic string model for the junction fluctuations at large quark source separation. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
48. Accelerated rare event sampling: Refinement and Ising model analysis.
- Author
-
Yevick, David and Lee, Yong Hwan
- Subjects
- *
MAGNETIZATION , *ISING model , *COMPUTATIONAL physics , *MONTE Carlo method , *STATISTICAL mechanics - Abstract
In this paper, a recently introduced accelerated sampling technique [D. Yevick, Int. J. Mod. Phys. C 27, 1650041 (2016)] for constructing transition matrices is further developed and applied to a two-dimensional Ising spin system. By permitting backward displacements up to a certain limit for each forward step while evolving the system to first higher and then lower energies within a restricted interval that is steadily displaced toward zero temperature as the computation proceeds, accuracy can be greatly enhanced. Simultaneously, the elements obtained from numerous independent calculations are collected in a single transition matrix. The relative accuracy of this novel method is established through a comparison to a transition matrix procedure based on the Metropolis algorithm in which the temperature is appropriately varied during the calculation and the results interpreted in terms of the distribution of realizations over both energy and magnetization. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
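The transition-matrix construction summarized in the abstract above pairs naturally with a standard Metropolis simulation of the 2D Ising model: every attempted spin flip records an observed transition between energy levels. The following is a minimal sketch of that idea, not the paper's accelerated scheme; the lattice size, temperature, and step count are arbitrary illustrative choices.

```python
import random, math

random.seed(1)

L = 8                      # lattice size (illustrative, not from the paper)
T = 2.5                    # temperature in units of J/k_B
spins = [[1] * L for _ in range(L)]

def local_field(s, i, j):
    # Sum of the four nearest neighbours with periodic boundaries.
    return (s[(i + 1) % L][j] + s[(i - 1) % L][j]
            + s[i][(j + 1) % L] + s[i][(j - 1) % L])

def total_energy(s):
    # E = -J * sum over nearest-neighbour pairs, J = 1; count each bond once.
    e = 0
    for i in range(L):
        for j in range(L):
            e -= s[i][j] * (s[(i + 1) % L][j] + s[i][(j + 1) % L])
    return e

energy = total_energy(spins)
counts = {}                 # (E_before, E_after) -> number of observed moves

for step in range(20000):
    i, j = random.randrange(L), random.randrange(L)
    dE = 2 * spins[i][j] * local_field(spins, i, j)
    new_energy = energy + dE
    # Metropolis acceptance; every attempted move feeds the transition counts.
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i][j] = -spins[i][j]
        counts[(energy, new_energy)] = counts.get((energy, new_energy), 0) + 1
        energy = new_energy
    else:
        counts[(energy, energy)] = counts.get((energy, energy), 0) + 1

# Row-normalise the counts into estimated transition probabilities
# between energy levels.
rows = {}
for (e0, e1), n in counts.items():
    rows.setdefault(e0, {})[e1] = n
transition = {e0: {e1: n / sum(r.values()) for e1, n in r.items()}
              for e0, r in rows.items()}
```

The paper's refinement (limited backward displacements within a restricted, steadily displaced energy interval, and merging counts from many independent runs) would replace the plain Metropolis loop above.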
49. Bayesian analysis of data from segmented super-resolution images for quantifying protein clustering
- Author
-
Kosuta, Tina, Cullell-Dalmau, Marta, Cella Zanacchi, Francesca, Manzo, and Carlo
- Subjects
FOS: Computer and information sciences ,Computer science ,Bayesian probability ,Population ,FOS: Physical sciences ,General Physics and Astronomy ,MOLECULE LOCALIZATION MICROSCOPY ,MONTE-CARLO METHODS ,Quantitative Biology - Quantitative Methods (q-bio.QM) ,Statistics - Applications (stat.AP) ,Bayes Theorem ,Models, Chemical ,Proteins ,Molecular Imaging ,Physics - Biological Physics (physics.bio-ph) ,Physical and Theoretical Chemistry ,Cluster analysis ,Nested sampling algorithm ,Model selection ,Experimental data ,Mixture model ,Data point ,FOS: Biological sciences ,Physics - Data Analysis, Statistics and Probability (physics.data-an) ,INFERENCE ,Biological system - Abstract
Super-resolution imaging techniques have largely improved our capabilities to visualize nanometric structures in biological systems. Their application further enables one to potentially quantitate relevant parameters to determine the molecular organization and stoichiometry in cells. However, the inherently stochastic nature of the fluorescence emission and labeling strategies imposes the use of dedicated methods to accurately measure these parameters. Here, we describe a Bayesian approach to precisely quantitate the relative abundance of molecular oligomers from segmented images. The distribution of proxies for the number of molecules in a cluster -- such as the number of localizations or the fluorescence intensity -- is fitted via a nested sampling algorithm to compare mixture models of increasing complexity and determine the optimal number of mixture components and their weights. We test the performance of the algorithm on in silico data as a function of the number of data points, threshold, and distribution shape. We compare these results to those obtained with other statistical methods, showing the improved performance of our approach. Our method provides a robust tool for model selection in fitting data extracted from fluorescence imaging, thus improving the precision of parameter determination. Importantly, the largest benefit of this method occurs for small-statistics or incomplete datasets, enabling accurate analysis at the single image level. We further present the results of its application to experimental data obtained from the super-resolution imaging of dynein in HeLa cells, confirming the presence of a mixed population of cytoplasmatic single motors and higher-order structures.
- Published
- 2020
- Full Text
- View/download PDF
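The model-comparison step in the abstract above (fitting mixture models of increasing order and selecting the best one) can be illustrated with a toy version. The sketch below fits only the mixture weights by EM, with component shapes fixed to a monomer template rescaled to higher oligomeric orders, and uses BIC as a crude stand-in for the nested-sampling evidence the paper computes; all numbers are synthetic.

```python
import random, math

random.seed(7)

# Synthetic "localizations per cluster": 70% monomers, 30% dimers.
data = ([random.gauss(10, 2) for _ in range(700)]
        + [random.gauss(20, 3) for _ in range(300)])

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_weights(data, comps, iters=50):
    # EM for the mixture weights only; component locations/widths stay fixed,
    # mirroring a monomer template rescaled to higher oligomeric orders.
    k = len(comps)
    w = [1.0 / k] * k
    for _ in range(iters):
        totals = [0.0] * k
        for x in data:
            p = [w[m] * norm_pdf(x, *comps[m]) for m in range(k)]
            s = sum(p)
            for m in range(k):
                totals[m] += p[m] / s
        w = [t / len(data) for t in totals]
    return w

def log_like(data, comps, w):
    return sum(math.log(sum(wm * norm_pdf(x, *c) for wm, c in zip(w, comps)))
               for x in data)

# Compare mixtures of increasing order; lower BIC wins. The free parameters
# are the k - 1 independent weights.
best = None
for k in (1, 2, 3):
    comps = [(10 * (m + 1), 2 * math.sqrt(m + 1)) for m in range(k)]
    w = fit_weights(data, comps)
    bic = (k - 1) * math.log(len(data)) - 2 * log_like(data, comps, w)
    if best is None or bic < best[1]:
        best = (k, bic, w)
```

On this synthetic sample the two-component mixture is selected, with a monomer weight close to the generating 0.7; the paper's nested-sampling evidence plays the role BIC plays here, with the added benefit of full posterior weights.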
50. Incorporating Metadata Into the Active Learning Cycle for 2D Object Detection
- Author
-
Stadler, Karsten
- Abstract
In recent years, Deep Convolutional Neural Networks have proven to be very useful for 2D Object Detection in many applications. These types of networks require large amounts of labeled data, which can be increasingly costly for companies deploying these detectors in practice if the data quality is lacking. Pool-based Active Learning is an iterative process of collecting subsets of data to be labeled by a human annotator and used for training to optimize performance per labeled image. The detectors used in Active Learning cycles are conventionally pre-trained with a small subset, approximately 2% of available data, labeled uniformly at random. This is something I challenged in this thesis by using image metadata. Because many Machine Learning models are a "jack of all trades, master of none", and it is hard to train models that generalize to the entire data domain, it can be interesting to develop a detector for a specific target metadata domain. A simple Monte Carlo method, Rejection Sampling, can be implemented to sample according to a metadata target domain. This requires a target and a proposal metadata distribution. The proposal metadata distribution would be a parametric model in the form of a Gaussian Mixture Model learned from the training metadata. The parametric model for the target distribution could be learned in a similar manner, but from a target dataset. In this way, only the training images with metadata most similar to the target metadata distribution can be sampled. This sampling approach was employed and tested with a 2D Object Detector: Faster R-CNN with a ResNet-50 backbone. The Rejection Sampling approach was tested against conventional random uniform sampling and a classical Active Learning baseline: Min Entropy Sampling. The performance was measured and compared on two different target metadata distributions that were inferred from a specific target dataset. 
With a labeling budget of 2% for each cycle, the max M, In recent years, Deep Convolutional Neural Networks have proven to be very useful for 2D Object Detection in many applications. These types of networks require large amounts of labeled data, which can mean increased costs for the companies deploying them if the label quality is lacking. Pool-based Active Learning is an iterative process of collecting subsets of data to be labeled by a human and used for training, in order to optimize performance per labeled datum. The detectors used in Active Learning are conventionally pre-trained with a small subset, approximately 2% of all available data, labeled at random. This is something I challenged in this thesis by using image metadata. Since many Machine Learning models perform worse on larger data domains, because it can be difficult to train detectors on large data domains, it can be interesting to develop a detector for a specific target metadata domain. To collect data according to a metadata target domain, a simple Monte Carlo method, Rejection Sampling, can be implemented. This would require a target metadata distribution and a proposal metadata distribution. The proposal metadata distribution would be a parametric model in the form of a Gaussian Mixture Model trained on the training data. The parametric model for the target metadata distribution could be trained in a similar way, but from the target dataset. In this way, only training images with metadata most similar to the target data distribution could be collected. This sampling method was developed and tested with a 2D object detector: Faster R-CNN with a ResNet-50 feature extractor. The Rejection Sampling method was tested against conventional uniform random sampling of data and a classical Active Learning method: Minimum Entropy sampling. 
Performance was measured and compared between two different target metadata distributions derived from specific target metadat
- Published
- 2021
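The rejection-sampling step described in the thesis abstract above can be sketched in a one-dimensional toy setting: a Gaussian fitted to the pool metadata plays the role of the proposal, and a narrower Gaussian plays the target domain. The thesis uses Gaussian Mixture Models; the single-Gaussian distributions, metadata values, and envelope grid here are purely illustrative assumptions.

```python
import random, math

random.seed(3)

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical scalar metadata value (e.g. a normalised sun elevation)
# for every image in the labeled pool.
pool = [random.gauss(0.0, 1.0) for _ in range(20000)]

# Proposal q: a Gaussian fitted to the pool metadata.
mu_q = sum(pool) / len(pool)
sd_q = math.sqrt(sum((x - mu_q) ** 2 for x in pool) / len(pool))

# Target p: the metadata domain we want the detector to specialise on.
mu_p, sd_p = 1.0, 0.5

# Envelope constant M with p(x) <= M * q(x), estimated on a grid over the
# region of interest (the ratio decays to zero in both tails here because
# the target is narrower than the proposal).
M = max(norm_pdf(x / 100, mu_p, sd_p) / norm_pdf(x / 100, mu_q, sd_q)
        for x in range(-300, 301))

# Accept each pool item with probability p(x) / (M * q(x)); the accepted
# subset is then approximately distributed according to the target domain.
selected = [x for x in pool
            if random.random() < norm_pdf(x, mu_p, sd_p)
            / (M * norm_pdf(x, mu_q, sd_q))]

mean_sel = sum(selected) / len(selected)
```

In the Active Learning cycle, `selected` would be the candidate images sent to the annotator; the expected acceptance rate is roughly 1/M, so a target domain far from the pool distribution (large M) makes this sampling expensive.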