58 results
Search Results
2. Transitioning GlideinWMS, a multi domain distributed workload manager, from GSI proxies to tokens and other granular credentials.
- Author
-
Mambelli, Marco, Coimbra, Bruno, and Box, Dennis
- Subjects
TOKENS, JETTONS, COMPUTER systems, ALGORITHMS, COMPUTING platforms - Abstract
GlideinWMS is a distributed workload manager that has been used in production for many years to provision resources for experiments like CERN's CMS, many neutrino experiments, and the OSG. Its security model was based mainly on GSI (Grid Security Infrastructure), using X.509 certificate proxies and VOMS (Virtual Organization Membership Service) extensions. Even when other credentials, like SSH keys, were used to authenticate with resources, proxies were always added as well, to establish the identity of the requestor and the associated memberships or privileges. This single credential was used for everything and was, often implicitly, forwarded wherever needed. The addition of identity and access tokens and the phase-out of GSI forced us to reconsider the security model of GlideinWMS to handle multiple credentials that can differ in type, technology, and functionality. Both identity tokens and access tokens are supported. GSI proxies, even if no longer mandatory, are still used, together with various JWT (JSON Web Token) based tokens and other certificates. The functionality of the credentials, defined by issuer, audience, and scope, also differs: a credential can allow access to a computing resource, protect the GlideinWMS framework from tampering, grant read or write access to storage, provide an identity for accounting or auditing, or provide any combination of the former. Furthermore, the tools in use do not include automatic forwarding and renewal of the new credentials, so credential lifetime and renewal requirements became part of the discussion as well. In this paper, we present how GlideinWMS was able to change its design and code to respond to all these changes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
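The abstract above notes that a credential's function is defined by its issuer, audience, and scope. A minimal sketch of inspecting those claims in a JWT, using only the standard library; the token, the claim values, and the `credential_function` classifier are all illustrative assumptions, not GlideinWMS code, and a real system must verify the signature before trusting any claim:

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode a JWT's payload WITHOUT verifying its signature.

    Illustration only: production code must verify the signature
    against the issuer's keys before trusting any claim.
    """
    payload_b64 = token.split(".")[1]
    # JWTs use URL-safe base64 without padding; restore the padding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def credential_function(claims: dict) -> str:
    """Classify a token by its scope, mirroring the idea that issuer,
    audience, and scope together define what a credential can do.
    The scope strings here are hypothetical."""
    scope = claims.get("scope", "")
    if "compute" in scope:
        return "compute-access"
    if "storage.read" in scope or "storage.write" in scope:
        return "storage-access"
    return "identity-only"

# Build a toy (unsigned) token just to demonstrate the decoding path.
header = base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).rstrip(b"=")
payload = base64.urlsafe_b64encode(json.dumps(
    {"iss": "https://example.org", "aud": "factory", "scope": "compute"}
).encode()).rstrip(b"=")
token = b".".join([header, payload, b""]).decode()

claims = decode_jwt_claims(token)
```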
3. Updates to the ATLAS Data Carousel Project.
- Author
-
Borodin, Mikhail, Cameron, David, Klimentov, Alexei, Korchuganova, Tatiana, Lassnig, Mario, Maeno, Tadashi, Musheghyan, Haykuhi, South, David, and Zhao, Xin
- Subjects
LUMINOSITY, OPTICAL properties, WORKFLOW, ALGORITHMS, ALGEBRA - Abstract
The High Luminosity upgrade to the LHC (HL-LHC) is expected to deliver scientific data at the multi-exabyte scale. In order to address this unprecedented data storage challenge, the ATLAS experiment launched the Data Carousel project in 2018. Data Carousel is a tape-driven workflow whereby bulk production campaigns with input data resident on tape are executed by staging and promptly processing a sliding window of data on a disk buffer, such that only a small fraction of the inputs is pinned on disk at any one time. Data Carousel is now in production for ATLAS in Run 3. In this paper, we provide updates on recent Data Carousel R&D projects, including data-on-demand and tape smart writing. Data-on-demand removes from disk data that has not been accessed for a predefined period; when users request such data, it is either staged from tape or recreated by following the original production steps. Tape smart writing employs intelligent algorithms for file placement on tape in order to retrieve data more efficiently, which is our long-term strategy to achieve optimal tape usage in Data Carousel. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
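The sliding-window idea in the Data Carousel abstract above can be sketched as a small simulation; the function and its bookkeeping are illustrative assumptions, not the ATLAS implementation:

```python
from collections import deque

def run_carousel(tape_files, window=3):
    """Process tape-resident inputs through a bounded disk buffer.

    Only up to `window` files are pinned on disk at any time: stage
    from tape until the window is full, process the oldest staged
    file, release its disk space, and stage more. Hypothetical sketch.
    """
    pending = deque(tape_files)   # inputs still resident on tape
    staged = deque()              # inputs currently pinned on disk
    processed, peak_pinned = [], 0
    while pending or staged:
        # Stage from tape until the disk window is full.
        while pending and len(staged) < window:
            staged.append(pending.popleft())
        peak_pinned = max(peak_pinned, len(staged))
        # Process the oldest staged file, then free its disk space.
        processed.append(staged.popleft())
    return processed, peak_pinned

processed, peak = run_carousel(list(range(10)), window=3)
```

Whatever the campaign size, `peak` stays bounded by the window, which is the point of the scheme.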
4. Potentiality of automatic parameter tuning suite available in ACTS track reconstruction software framework.
- Author
-
Garg, Rocky Bala, Allaire, Corentin, Salzburger, Andreas, Grasland, Hadrien, Tompkins, Lauren, and Hofgard, Elyssa
- Subjects
COMPUTER software, ALGORITHMS, COST analysis, COST accounting, ACCOUNTING - Abstract
Particle tracking is among the most sophisticated and complex parts of the full event reconstruction chain. A number of reconstruction algorithms work in sequence to build these trajectories from detector hits. Each of these algorithms uses many configuration parameters that need to be fine-tuned to properly account for the detector/experimental setup, the available CPU budget, and the desired physics performance. A few examples of such parameters include the cut values limiting the search space of the algorithm, the approximations accounting for complex phenomena, and the parameters controlling algorithm performance. The most popular method to tune these parameters is hand-tuning using brute-force techniques. These techniques can be inefficient and raise issues for the long-term maintainability of such algorithms. The open-source track reconstruction software framework "A Common Tracking Software (ACTS)" offers an alternative to these parameter tuning techniques through the use of automatic parameter optimization algorithms. ACTS comes equipped with an auto-tuning suite that provides the necessary setup for optimizing the input parameters of track reconstruction algorithms. The user can choose the tunable parameters in a flexible way and define a cost/benefit function for optimizing the full reconstruction chain. The fast execution speed of ACTS allows the user to run several iterations of optimization within a reasonable time. The performance of these optimizers has been demonstrated on different track reconstruction algorithms, such as trajectory seed reconstruction and selection, particle vertex reconstruction, and generation of simplified material maps, and on different detector geometries, such as the Generic Detector and the Open Data Detector (ODD). We aim to bring this approach to all aspects of trajectory reconstruction by having a more flexible integration of tunable parameters within ACTS. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
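The loop shape of the auto-tuning described above (pick tunable parameters, define a cost function, iterate) can be sketched with a plain random-search tuner; the ACTS suite uses dedicated optimizers, and the parameter names and toy cost below are invented for illustration:

```python
import random

def tune(cost_fn, space, n_trials=200, seed=42):
    """Random-search tuner: sample each parameter uniformly from its
    range and keep the configuration with the lowest cost. This only
    shows the structure of an auto-tuning loop, not the ACTS optimizers."""
    rng = random.Random(seed)
    best_params, best_cost = None, float("inf")
    for _ in range(n_trials):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        c = cost_fn(params)
        if c < best_cost:
            best_params, best_cost = params, c
    return best_params, best_cost

# Hypothetical cost balancing two tracking cuts around a known optimum.
def toy_cost(p):
    return (p["pt_cut"] - 0.9) ** 2 + (p["chi2_cut"] - 15.0) ** 2 / 100.0

best, cost = tune(toy_cost, {"pt_cut": (0.1, 2.0), "chi2_cut": (5.0, 30.0)})
```

In practice the cost function would run (part of) the reconstruction chain and combine efficiency, fake rate, and CPU time, as the abstract describes.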
5. The U.S. CMS HL-LHC R&D Strategic Plan.
- Author
-
Gutsche, Oliver, Bose, Tulika, Votava, Margaret, Mason, David, Melo, Andrew, Liu, Mia, Hufnagel, Dirk, Gray, Lindsey, Hildreth, Mike, Holzman, Burt, Lannon, Kevin, Sehrish, Saba, Sperka, David, Letts, James, Bauerdick, Lothar, and Bloom, Kenneth
- Subjects
COMPUTER software, COMPUTER systems, STRATEGIC planning, ALGORITHMS, ALGEBRA - Abstract
The HL-LHC run is anticipated to start at the end of this decade and will pose a significant challenge for the scale of the HEP software and computing infrastructure. The mission of the U.S. CMS Software & Computing Operations Program is to develop and operate the software and computing resources necessary to process CMS data expeditiously and to enable U.S. physicists to fully participate in the physics of CMS. We have developed a strategic plan to prioritize R&D efforts to reach this goal for the HL-LHC. This plan includes four grand challenges: modernizing physics software and improving algorithms, building infrastructure for exabyte-scale datasets, transforming the scientific data analysis process, and transitioning from R&D to operations. We are involved in a variety of R&D projects that fall within these grand challenges. In this talk, we introduce our four grand challenges and outline the R&D program of the U.S. CMS Software & Computing Operations Program. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Ranking-based neural network for ambiguity resolution in ACTS.
- Author
-
Allaire, Corentin, Bouvet, Françoise, Grasland, Hadrien, and Rousseau, David
- Subjects
NEURAL circuitry, INSTITUTIONAL repositories, MOTHERBOARDS, ALGORITHMS, ALGEBRA - Abstract
The reconstruction of particle trajectories is a key challenge of particle physics experiments, as it directly impacts particle identification and physics performance while also representing one of the main CPU consumers of many high-energy physics experiments. As the luminosity of particle colliders increases, this reconstruction will become more challenging and resource-intensive. New algorithms are thus needed to address these challenges efficiently. One potential step of track reconstruction is ambiguity resolution. In this step, performed at the end of the tracking chain, we select which track candidates should be kept and which must be discarded. The speed of this algorithm is directly driven by the number of track candidates, which can be reduced at the cost of some physics performance. Since this problem is fundamentally one of comparison and classification, we propose a machine-learning-based approach to ambiguity resolution. Using a shared-hits-based clustering algorithm, we can efficiently determine which candidates belong to the same truth particle. Afterwards, we can apply a neural network (NN) to compare those tracks and decide which ones are duplicates and which ones should be kept. This approach is implemented within the A Common Tracking Software (ACTS) framework and tested on the Open Data Detector (ODD), a realistic virtual detector similar to a future ATLAS one. The new approach was shown to be 15 times faster than the default ACTS algorithm while removing 32 times more duplicates, down to less than one duplicated track per event. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
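The shared-hits clustering plus best-candidate selection described above can be sketched as follows; `score_fn` stands in for the neural network's track score, and the track/hit representation is an assumption for illustration, not the ACTS implementation:

```python
def resolve_ambiguity(tracks, score_fn):
    """Greedy shared-hits ambiguity resolution: tracks that share any
    hit are treated as candidates for the same particle, and only the
    best-scoring candidate per cluster is kept. Each track is a list
    of hit IDs; score_fn plays the role of the paper's NN score."""
    n = len(tracks)
    # Union-find over track indices, joined when two tracks share a hit.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if set(tracks[i]) & set(tracks[j]):
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    # Keep the highest-scoring candidate in each cluster.
    return sorted(max(c, key=lambda i: score_fn(tracks[i]))
                  for c in clusters.values())

tracks = [[1, 2, 3], [2, 3, 4, 5], [10, 11, 12]]  # hit IDs per candidate
kept = resolve_ambiguity(tracks, score_fn=len)    # toy score: longer is better
```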
7. Overview of the distributed image processing infrastructure to produce the Legacy Survey of Space and Time.
- Author
-
Hernandez, Fabio, Beckett, George, Clark, Peter, Doidge, Matt, Jenness, Tim, Karavakis, Edward, Le Boulc'h, Quentin, Love, Peter, Mainetti, Gabriele, Noble, Timothy, White, Brandon, and Yang, Wei
- Subjects
IMAGE processing, SERVER farms (Computer network management), ALGORITHMS, ASTRONOMICAL observatories, ELECTRONIC data processing - Abstract
The Vera C. Rubin Observatory is preparing to execute the most ambitious astronomical survey ever attempted, the Legacy Survey of Space and Time (LSST). Currently the final phase of construction is under way in the Chilean Andes, with the Observatory's ten-year science mission scheduled to begin in 2025. Rubin's 8.4-meter telescope will nightly scan the southern hemisphere, collecting imagery in the 320–1050 nm wavelength range and covering the entire observable sky every 4 nights with a 3.2-gigapixel camera, the largest imaging device ever built for astronomy. Automated detection and classification of celestial objects will be performed by sophisticated algorithms on high-resolution images to progressively produce an astronomical catalog eventually composed of 20 billion galaxies and 17 billion stars and their associated physical properties. In this article we present an overview of the system currently being constructed to perform data distribution as well as to execute the annual campaigns that reprocess the entire image dataset collected since the beginning of the survey. These processing campaigns will utilize computing and storage resources provided by three Rubin data facilities (one in the US and two in Europe). Each year a Data Release will be produced and disseminated to science collaborations for use in studies comprising four main science pillars: probing dark matter and dark energy, taking inventory of solar system objects, exploring the transient optical sky, and mapping the Milky Way. Also presented is the way we leverage common tools and best practices used for the management of large-scale distributed data processing projects in the high energy physics and astronomy communities, and how these tools and practices are utilized within the Rubin project to overcome the specific challenges faced by the Observatory. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. The ν-Ball campaign at ALTO: Study for neural network based trigger for fission process.
- Author
-
Lebois, Matthieu, Jovančević, Nikola, Thisse, Damien, Wilson, Jonathan, Canavan, Rhiann, and Rudigier, Mathias
- Subjects
NUCLEAR fission, NEURAL circuitry, DATA analysis, ALGORITHMS, RECONSTRUCTION (Psychoanalysis) - Abstract
A γ-spectroscopy campaign named "ν-Ball" was performed at the ALTO facility. A large fraction of the beam time was dedicated to the fast-neutron-induced fission of two fissioning systems: ²³²Th and ²³⁸U. During the data analysis, it was noticed that the high activity of the natural Th (natTh) target was heavily contaminating any coincidence matrices (or cubes) built, which made the identification of weakly produced fission fragments almost impossible. It was decided to explore the opportunity opened by new analysis methods based on neural network algorithms. In this paper, the method used to build an adequate neural network and the results obtained for fission event reconstruction are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
9. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications.
- Author
-
Khairi, Nor Asilah and Jambek, Asral Bahari
- Subjects
DATA compression, ELECTRONIC equipment, INTERNET of things, ALGORITHMS, WIRELESS sensor networks - Abstract
An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is the primary cause of power consumption, some researchers have proposed several compression algorithms with the purpose of overcoming this problem. Several data compression algorithms from previous reference papers are discussed in this paper. The descriptions of the compression algorithms in the reference papers were collected and summarized in table form. From the analysis, the MAS compression algorithm was selected as a project prototype due to its high potential for meeting the project requirements, as well as its better performance regarding energy saving, memory usage, and data transmission efficiency. This method is also suitable for implementation in wireless sensor networks (WSNs). The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
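The paper above does not spell out the MAS algorithm, but the family of lightweight sensor-data compression it belongs to can be illustrated with generic delta encoding, a hedged stand-in rather than the MAS scheme itself:

```python
def delta_encode(samples):
    """Store the first sample, then only successive differences.
    Slowly varying sensor readings yield many small deltas, which a
    variable-length code could then pack into fewer transmitted bits.
    Generic sketch; NOT the MAS algorithm from the paper."""
    if not samples:
        return []
    out = [samples[0]]
    out.extend(samples[i] - samples[i - 1] for i in range(1, len(samples)))
    return out

def delta_decode(deltas):
    """Invert delta_encode by cumulative summation."""
    acc, out = 0, []
    for d in deltas:
        acc += d
        out.append(acc)
    return out

readings = [20, 21, 21, 22, 30]  # e.g. temperature samples
encoded = delta_encode(readings)
```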
10. Two Parameter-Retrieval Algorithms of Aircraft Wake Vortex with Doppler Lidar in Clear Air.
- Author
-
Liu, D., Wang, Y., Wu, Y., Gross, B., Moshary, F., Shen, Chun, Li, Jianbing, and Gao, Hang
- Subjects
ALGORITHMS, DOPPLER lidar, CLEAR air turbulence, AERONAUTICAL safety measures, TOOLBOXES - Abstract
An aircraft wake is a pair of strong counter-rotating vortices generated behind an aircraft, which can be very hazardous to a following aircraft; its detection has attracted much attention in the aviation safety field. This conference paper introduces two parameter-retrieval algorithms, an optimization method and a max-min method. They have been integrated into a toolbox and can retrieve the parameters of wake vortices efficiently and robustly. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
11. Mathematical Modeling of Production Processes of Discrete Machine-Building Enterprises Based on the Interaction of Simulation Systems and Operational Planning Systems.
- Author
-
Nadykto, A., Aleksic, N., Lima, P., Pivkin, P., Uvarova, L., Jiang, X., Zelensky, A., Dolgov, Vitalii A., Nikishechkin, Petr A., Leonov, Aleksandr A., Ivashin, Sergey S., and Dolgov, Nikita V.
- Subjects
MATHEMATICAL models, MACHINERY, SIMULATION methods & models, ALGORITHMS, DECISION making - Abstract
Analysis of the production systems (PS) of discrete multi-nomenclature machine-building enterprises is a complex task; its solution is necessary to support decision-making during technical re-equipment, modernization, or technological preparation of production. The paper presents a concept for the joint use of operational scheduling systems and simulation modeling systems to improve the efficiency and adequacy of PS analysis. The problem of determining the deviation of the planned state of the PS from the simulated state, and of evaluating the stability of the PS behaviour on that basis, is considered. It is shown that the proposed approach makes it possible to determine the timing of the production program more adequately, to assess the stability of the PS behaviour under various planning logics and algorithms, and to choose the best one for subsequent use in a real PS. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
12. Different-Scale Simulation of Flows in Porous Media.
- Author
-
Nadykto, A., Aleksic, N., Lima, P., Pivkin, P., Uvarova, L., Jiang, X., Zelensky, A., Trapeznikova, Marina, Churbanova, Natalia, and Chechina, Antonina
- Subjects
POROUS materials, DARCY'S law, GAS dynamics, HYPERBOLIC functions, ALGORITHMS - Abstract
The paper considers the development of algorithms for an adequate description of processes at different scales in porous media. The choice of computational technique is determined by the reference size of the problem being solved. Models of porous-medium flow under Darcy's law, neglecting the medium microstructure, are used for simulation at the macro-scale, while at the micro-scale a direct description of fluid flow in porous channels with complex geometry by means of gas-dynamic equations is used. In the first case, the proposed model of non-isothermal multiphase multicomponent flow in a porous medium includes the mass balance and total energy conservation equations, modified by analogy with the known quasi-gas-dynamic equations. The model's distinctive features are the introduction of minimal reference scales in space and time and the change of the system type from parabolic to hyperbolic, which increases the stability of the explicit difference schemes applied for approximation. In the second case, the dimensionless form of the quasi-gas-dynamic system with pressure decomposition, developed by the authors earlier, is adapted to the simulation of flows in the pore space. The fictitious domain method is proposed to reproduce the core microstructure. The developed approaches have been verified by test predictions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
13. Gradient-Based Algorithm for Tracking the Activity of Neural Network Weights Changing.
- Author
-
Nadykto, A., Aleksic, N., Lima, P., Pivkin, P., Uvarova, L., Jiang, X., Zelensky, A., Starodub, Anton, Eliseeva, Natalia, and Georgiev, Milen
- Subjects
MACHINE learning, ARTIFICIAL neural networks, ALGORITHMS, WEIGHTS & measures, DATA analysis - Abstract
The research conducted in this paper is in the field of machine learning. The main object of the research is the training process of an artificial neural network, with the aim of increasing its efficiency. The proposed algorithm is based on the analysis of retrospective training data. The dynamics of changes in the values of the weights of an artificial neural network during training is an important indicator of training efficiency. The algorithm proposed in this work tracks changes in the values of the weight gradients, which makes it possible to understand how actively the network weights change during training. This knowledge helps to diagnose the training process and to adjust the training parameters. The results of the algorithm can be used during the training of an artificial neural network to determine the set of measures (actions) needed to optimize the learning process. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
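The gradient-activity tracking idea above can be sketched framework-agnostically: record per-layer gradient norms at each step and flag layers whose gradients have gone to zero. The class, its method names, and the stall criterion are assumptions for illustration, not the paper's algorithm:

```python
import math

class GradientActivityTracker:
    """Record the L2 norm of each layer's weight gradients per training
    step, so stalled (or exploding) layers can be spotted during training."""

    def __init__(self):
        self.history = {}  # layer name -> list of gradient norms

    def record(self, grads_by_layer):
        """grads_by_layer: dict mapping layer name -> flat list of gradients."""
        for name, grads in grads_by_layer.items():
            norm = math.sqrt(sum(g * g for g in grads))
            self.history.setdefault(name, []).append(norm)

    def stalled(self, name, tol=1e-6):
        """A layer whose recent gradient norms are ~0 has stopped learning."""
        recent = self.history.get(name, [])[-3:]
        return bool(recent) and all(n < tol for n in recent)

tracker = GradientActivityTracker()
for _ in range(3):  # three fake training steps
    tracker.record({"fc1": [0.0, 0.0], "fc2": [0.1, -0.2]})
```

A training loop could react to `stalled(...)` by, for example, adjusting the learning rate of the affected layer.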
14. Evaluating Kubernetes as an orchestrator of the Event Filter computing farm of the Trigger and Data Acquisition system of the ATLAS experiment at the Large Hadron Collider.
- Author
-
Avolio, Giuseppe, Cadeddu, Mattia, Hauser, Reiner, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
DATA acquisition systems, ACQUISITION of data, ALGORITHMS, CLUSTER analysis (Statistics), COMPUTING platforms - Abstract
The ATLAS experiment at the LHC relies on a complex and distributed Trigger and Data Acquisition (TDAQ) system to gather and select particle collision data. The Event Filter (EF) component of the TDAQ system is responsible for executing advanced selection algorithms, reducing the data rate to a level suitable for recording to permanent storage. The EF functionality is provided by a computing farm made up of thousands of commodity servers, each executing one or more processes. Moving the EF farm management towards a solution based on software containers is one of the main themes of the ATLAS TDAQ Phase-II upgrades in the area of the online software; it would open new possibilities for fault tolerance, reliability, and scalability. This paper presents the results of an evaluation of Kubernetes as a possible orchestrator of the ATLAS TDAQ EF computing farm. Kubernetes is a system for advanced management of containerized applications in large clusters. The paper first highlights some of the technical solutions adopted to run the offline version of today's EF software in a Docker container. It then focuses on scaling performance measurements executed with a cluster of 1000 CPU cores. In particular, it reports on how Kubernetes scales in deploying containers as a function of the cluster size and shows how proper tuning of the Queries Per Second (QPS) Kubernetes parameter set can improve the scaling of applications in terms of running replicas. Finally, an assessment is given of the possibility of using Kubernetes as an orchestrator of the EF computing farm in LHC's Run 4. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
15. Development of ATLID Retrieval Algorithms.
- Author
-
Liu, D., Wang, Y., Wu, Y., Gross, B., Moshary, F., Donovan, D.P., van Zadelhoff, G-J, Williams, J. E., Wandinger, U., Haarig, M., and Qu, Z.
- Subjects
AEROSOL industry, CLOUD computing, RADIATION, ALGORITHMS, OPTICAL properties - Abstract
ATLID ("ATmospheric LIDar") is the lidar to be flown on the multi-instrument Earth Clouds and Radiation Explorer (EarthCARE or ECARE) joint ESA/JAXA mission, now scheduled for launch in 2022. ATLID is a 3-channel, linearly polarized, high-spectral-resolution lidar (HSRL) system operating at 355 nm. Cloud and aerosol optical properties are key ECARE products. This paper provides an overview of the ATLID L2a (i.e., single-instrument) retrieval algorithms being developed and implemented in order to derive cloud and aerosol optical properties. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
16. Designing an Expert System for Recognizing the Emotional State of an Enterprise Employee.
- Author
-
Nadykto, A., Aleksic, N., Lima, P., Pivkin, P., Uvarova, L., Jiang, X., Zelensky, A., Sekin, A.A., and Bychkova, N.A.
- Subjects
EMOTIONS, BUSINESS enterprises, EMPLOYEES, ALGORITHMS, ACCURACY - Abstract
The emotional state of an employee of any enterprise influences both the efficiency of their work and the quality and stability of the final result. Managing production processes while monitoring the emotional state of the employee is an urgent task that allows minimizing the risks of deviations from the specified level of product quality and production safety. However, the quality of the assessment of this influence is currently subjective, based both on the personal opinion and competence of the expert conducting the monitoring and on the tools used for assessing the emotional state. The use of modern intelligent automated methods and tracking systems would reduce the distortion of expert judgment. An expert system for analyzing the emotional state of the employees of an enterprise would make it possible to recognize the emotions of a particular employee with a fairly high degree of accuracy, accumulate a knowledge base and generate analytical conclusions and behavioral predictions from it, compile an emotional portfolio of each employee, and draw conclusions about their ability to perform a certain type of work and their current state. This paper presents the concept of an algorithm for an expert system (hereinafter referred to as ES) which, on the basis of data about the individual ways in which an employee expresses emotions non-verbally, can assess the influence of those emotions on the quality of their work. The article reflects the results obtained in the framework of the implementation of the Agreement on research No. 05.601.21.0019 dated November 29, 2019. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
17. Application of Artificial Neural Networks and Singular-Spectral Analysis in Forecasting the Daily Traffic in the Moscow Metro.
- Author
-
Ivanov, Victor and Osetrov, Evgenii
- Subjects
ARTIFICIAL neural networks, PASSENGER traffic, PUBLIC transit, TRAFFIC estimation, ALGORITHMS, SPECTRUM analysis, HIGH speed trains - Abstract
In this paper, we investigate the possibility of applying various approaches to the problem of medium-term forecasting of daily passenger traffic volumes in the Moscow metro (MM): 1) on the basis of artificial neural networks (ANN); 2) using the singular-spectrum analysis implemented in the "Caterpillar"-SSA package; 3) combining the ANN and "Caterpillar"-SSA approaches. We demonstrate that the developed methods and algorithms allow us to conduct medium-term forecasting of passenger traffic in the MM with reasonable accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
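The "Caterpillar"-SSA approach mentioned above starts from an embedding step: sliding a window over the series to build the trajectory (Hankel) matrix, whose SVD then separates trend and seasonal components. A minimal sketch of that embedding step only, with illustrative data (the decomposition itself would need an SVD routine):

```python
def trajectory_matrix(series, window):
    """Embedding step of singular-spectrum analysis ("Caterpillar"-SSA):
    slide a window of length `window` over the series to form the
    trajectory (Hankel) matrix. Each row is one lagged vector; the
    anti-diagonals of the result are constant by construction."""
    k = len(series) - window + 1  # number of lagged vectors
    if k < 1:
        raise ValueError("window longer than series")
    return [series[i:i + window] for i in range(k)]

# Toy daily-traffic-like series, purely illustrative.
m = trajectory_matrix([1, 2, 3, 4, 5], window=3)
```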
18. Active power measurement verification for electric power systems with battery energy storage.
- Author
-
Rehtanz, C., Voropai, N., Glazunova, Anna, and Aksaeva, Elena
- Subjects
POWER measurement (Electricity), ENERGY storage, ELECTRIC power systems, ALGORITHMS, ELECTRIC power consumption - Abstract
The paper presents a method developed for detecting rough errors in measurements related to batteries in the part of an electric power system characterised by low data redundancy. Since batteries either produce or consume power, not all bad data detection methods can be used to detect erroneous measurements of a battery's active power under low measurement redundancy. Because the active power that a battery produces or consumes may differ across several snapshots in a row, dynamic algorithms cannot be used. In this study, a new method of bad data detection is developed, based on analysis of the battery control strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
19. Computing Gröbner and Involutive Bases for Linear Systems of Difference Equations.
- Author
-
Yanovich, Denis
- Subjects
GROBNER bases, DIFFERENCE equations, LINEAR systems, ALGORITHMS, SCALABILITY - Abstract
The problem of computing involutive bases and Gröbner bases for linear systems of difference equations is solved, and its importance for physical and mathematical problems is discussed. The algorithm and issues concerning its implementation in C are presented, and calculation times are compared with competing programs. The paper ends with considerations on the parallel version of this implementation and its scalability. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
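For the *linear* systems the paper above treats, a reduced Gröbner basis of linear polynomials coincides with the reduced row echelon form of their coefficient matrix, so Buchberger's algorithm degenerates to Gaussian elimination over exact rationals. A sketch of that special case (the representation as coefficient rows under a fixed variable order is an assumption for illustration, not the paper's C implementation):

```python
from fractions import Fraction

def reduced_basis(rows):
    """Reduced row echelon form over exact rationals. For linear
    polynomials this IS the reduced Groebner basis: each row is the
    coefficient list of one polynomial under a fixed variable order."""
    if not rows:
        return []
    rows = [[Fraction(x) for x in r] for r in rows]
    pivot_row = 0
    for col in range(len(rows[0])):
        # Find a row with a nonzero entry in this column.
        piv = next((r for r in range(pivot_row, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[pivot_row], rows[piv] = rows[piv], rows[pivot_row]
        inv = rows[pivot_row][col]
        rows[pivot_row] = [x / inv for x in rows[pivot_row]]  # monic leading term
        for r in range(len(rows)):
            if r != pivot_row and rows[r][col]:
                factor = rows[r][col]
                rows[r] = [a - factor * b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    return [r for r in rows if any(r)]  # drop zero rows

# x + y - 3 = 0 and x - y - 1 = 0, as rows (coeff x, coeff y, constant).
basis = reduced_basis([[1, 1, -3], [1, -1, -1]])
```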
20. Towards J/ψ e+e- Decays Triggering with TRD in CBM Experiment.
- Author
-
Derenovskaya, Olga, Ablyazimov, Timur, and Ivanov, Victor
- Subjects
CELLULAR automata, ALGORITHMS, RADIOACTIVE decay, TRANSITION radiation detector, BARYONS, ELECTRON-electron interactions - Abstract
The paper presents an efficient Cellular Automaton-based algorithm for trajectory reconstruction in the Transition Radiation Detector of the CBM experiment. A comparison of different electron identification methods is also given. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
21. The Automation of Stochastization Algorithm with Use of SymPy Computer Algebra Library.
- Author
-
Demidova, Anastasya, Gevorkyan, Migran, Kulyabov, Dmitry, Korolkova, Anna, and Sevastianov, Leonid
- Subjects
AUTOMATION, ALGORITHMS, ALGEBRA software, STOCHASTIC systems, DIFFERENTIAL equations - Abstract
The SymPy computer algebra library is used for the automatic generation of ordinary and stochastic systems of differential equations from schemes of kinetic interaction. Schemes of this type are used not only in chemical kinetics but also in biological, ecological, and technical models. This paper describes the automatic generation algorithm with an emphasis on application details. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
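The generation idea above (turn a kinetic interaction scheme into a system of ODEs) can be sketched numerically with the mass-action rule, without a computer algebra system; the data layout and function names are assumptions for illustration, not the paper's SymPy code:

```python
def make_rhs(species, reactions):
    """Generate the deterministic ODE right-hand side from a kinetic
    scheme by mass action: each reaction (reactants, products, rate)
    contributes flux = rate * product(reactant concentrations),
    subtracted from each reactant and added to each product."""
    index = {s: i for i, s in enumerate(species)}

    def rhs(conc):
        dydt = [0.0] * len(species)
        for reactants, products, rate in reactions:
            flux = rate
            for s in reactants:
                flux *= conc[index[s]]
            for s in reactants:
                dydt[index[s]] -= flux
            for s in products:
                dydt[index[s]] += flux
        return dydt

    return rhs

# Scheme A -> B with rate k = 2: dA/dt = -2*A, dB/dt = +2*A.
rhs = make_rhs(["A", "B"], [(["A"], ["B"], 2.0)])
```

The stochastic counterpart in the paper would use the same scheme to build drift and diffusion terms instead of a single deterministic right-hand side.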
22. Validation of the MCNP6 electron-photon transport algorithm: multiple-scattering of 13- and 20-MeV electrons in thin foils.
- Author
-
Dixon, David A. and Hughes, H. Grady
- Subjects
MONTE Carlo method, ELECTRON scattering, ALGORITHMS, ALUMINUM foil, PHOTON scattering - Abstract
This paper presents a validation test comparing angular distributions from an electron multiple-scattering experiment with those generated using the MCNP6 Monte Carlo code system. In this experiment, 13- and 20-MeV electron pencil beams are deflected by thin foils with atomic numbers from 4 to 79. To determine the angular distribution, the fluence is measured downrange of the scattering foil at various radii orthogonal to the beam line. The characteristic angle (the angle at which the distribution falls to 1/e of its maximum) is then determined from the angular distribution and compared with experiment. The multiple-scattering foils tested herein include beryllium, carbon, aluminum, copper, and gold. For the default electron-photon transport settings, the calculated characteristic angle was statistically distinguishable from measurement, and the calculated distributions were generally broader than the measured ones. The average relative difference ranged from 5.8% to 12.2% over all of the foils, source energies, and physics settings tested. This validation illuminated a deficiency in the computation of the underlying angular distributions that is well understood. As a result, code enhancements were made to stabilize the angular distributions in the presence of very small substeps. However, the enhancement only marginally improved results, indicating that additional algorithmic details should be studied. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
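The characteristic-angle definition quoted above (the angle where the distribution falls to 1/e of its maximum) translates directly into a small numerical routine; the sampled Gaussian-like profile below is an illustrative assumption, not the measured MCNP6 data:

```python
import math

def characteristic_angle(angles, values):
    """Angle at which the angular distribution falls to 1/e of its
    maximum, found by linear interpolation between the two samples
    that bracket the 1/e level. Assumes `values` decreases
    monotonically after its peak; returns None if the level is
    never reached."""
    target = max(values) / math.e
    peak = values.index(max(values))
    for i in range(peak, len(values) - 1):
        if values[i] >= target >= values[i + 1]:
            # Linear interpolation between the bracketing samples.
            frac = (values[i] - target) / (values[i] - values[i + 1])
            return angles[i] + frac * (angles[i + 1] - angles[i])
    return None

# Toy multiple-scattering-like profile f(theta) = exp(-(theta/w)^2),
# whose 1/e point is exactly theta = w.
w = 2.0
thetas = [0.1 * i for i in range(100)]
f = [math.exp(-(t / w) ** 2) for t in thetas]
theta_c = characteristic_angle(thetas, f)
```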
23. Consistent neutron-physical and thermal-physical calculations of fuel rods of VVER type reactors.
- Author
-
Tikhomirov, Georgy, Saldikov, Ivan, Ternovykh, Mikhail, and Gerasimov, Alexander
- Subjects
NUCLEAR reactors, ISOTOPIC analysis, NUCLEAR fuel rods, THERMAL conductivity, ALGORITHMS - Abstract
For modeling the isotopic composition of fuel and the maximum temperatures at different moments in time, different algorithms and codes can be used. In connection with the development of new types of fuel assemblies and progress in computer technology, it becomes important to increase the accuracy of modeling these characteristics of fuel assemblies during operation. Calculations of the neutron-physical characteristics of fuel rods are mainly based on models using averaged temperature, thermal conductivity factors, and heat power density. In this paper, a complex approach is presented, based on modern algorithms, methods, and codes, to solve the separate tasks of thermal conductivity, neutron transport, and nuclide transformation kinetics. It makes it possible to perform neutron-physical and thermal-physical calculations of the reactor with a detailed temperature distribution, taking into account temperature-dependent thermal conductivity and other characteristics. It was applied to studies of a fuel cell of the VVER-1000 reactor. When developing new algorithms and programs intended to improve the accuracy of modeling the isotopic composition and the maximum temperature in the fuel rod, it is necessary to have a set of test tasks for verification. The proposed approach can be used to develop such a verification base for testing calculations of the fuel rods of VVER-type reactors. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
24. Computational methods of Gaussian Particle Swarm Optimization (GPSO) and Lagrange Multiplier on economic dispatch issues (case study on electrical system of Java-Bali IV area).
- Author
-
Komsiyah, S.
- Subjects
ELECTRIC power production ,PARTICLE swarm optimization ,LAGRANGE multiplier ,ALGORITHMS ,STOCHASTIC processes ,PROBABILITY theory - Abstract
This paper addresses the economic dispatch problem of electric power generation: scheduling the outputs of the committed generating units so as to meet the required load demand at minimum operating cost, while satisfying all unit and system equality and inequality constraints. In the operation of an electric power system, economic planning must be considered, since economical planning yields greater efficiency in operational cost. In this paper, the economic dispatch problem, which has a nonlinear cost function, is solved using a swarm intelligence method, Gaussian Particle Swarm Optimization (GPSO), and the Lagrange multiplier method. GPSO is a population-based stochastic algorithm whose particle movement is inspired by swarm intelligence and probability theory. To analyze its accuracy, the economic dispatch solution obtained by the GPSO method is compared with the Lagrange multiplier method. The test runs show that the GPSO method gives a more economical planning calculation than the Lagrange multiplier method and reaches error convergence faster. The GPSO method therefore has better performance in finding the global best solution than the Lagrange method. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
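A minimal sketch of a Gaussian PSO step for a toy two-unit economic dispatch may clarify the method named in the abstract above. The cost coefficients, demand, unit limits, and penalty weight are invented for the example; the GPSO-specific detail is that the acceleration factors are absolute values of Gaussian draws rather than the uniform factors of classical PSO.

```python
import numpy as np

rng = np.random.default_rng(0)

a = np.array([0.008, 0.009])   # quadratic cost coefficients ($/MW^2 h), illustrative
b = np.array([7.0, 6.3])       # linear cost coefficients ($/MWh), illustrative
demand = 500.0                 # required load (MW)

def cost(p):
    # Fuel cost plus a penalty enforcing the power-balance equality constraint.
    return np.sum(a * p**2 + b * p) + 1e3 * abs(p.sum() - demand)

n, dim = 30, 2
pos = rng.uniform(100.0, 400.0, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    # Gaussian-distributed acceleration factors |N(0,1)| replace the
    # uniform random factors of standard PSO.
    g1 = np.abs(rng.standard_normal((n, dim)))
    g2 = np.abs(rng.standard_normal((n, dim)))
    vel = g1 * (pbest - pos) + g2 * (gbest - pos)
    pos = np.clip(pos + vel, 50.0, 450.0)  # unit output limits (MW)
    f = np.array([cost(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(gbest.round(1), round(pbest_f.min(), 1))
```

For this convex two-unit case the Lagrange multiplier method gives the exact dispatch by equating marginal costs, which is what makes it a natural accuracy benchmark for the stochastic GPSO result.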
25. Application of Algorithms for Placement of Orthogonal Polyhedrons for Solving the Problems of Packing Objects of Complex Geometric Shape.
- Author
-
Nadykto, A., Aleksic, N., Lima, P., Pivkin, P., Uvarova, L., Jiang, X., Zelensky, A., Chekanin, Vladislav, and Chekanin, Alexander
- Subjects
ALGORITHMS ,POLYHEDRA ,GEOMETRIC shapes ,NONLINEAR programming ,ORTHOGONAL polynomials - Abstract
The article is devoted to the development and study of algorithms for placing objects of complex geometric shape. To solve the placement problem, an approach is proposed that transforms the shape of all objects and then applies the developed algorithm for placing orthogonal polyhedrons of arbitrary dimension to the resulting transformed objects. In the shape-transformation step, the objects being placed are first voxelized, after which the developed decomposition algorithm is applied to the voxelized objects, forming orthogonal polyhedrons composed of the largest possible orthogonal objects. The proposed model of potential containers is used to describe the free space of containers as a set of orthogonal areas. The developed algorithm for the placement of orthogonal polyhedrons provides a fast solution to NP-hard problems of placing objects of complex geometric shape without resorting to time-consuming nonlinear programming methods. Examples of the practical application of the developed algorithms for modeling the dense layout of parts of complex geometric shape on the platform of a 3D printer are given. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
26. INDEPENDENT RETRIEVAL OF AEROSOL TYPE FROM LIDAR.
- Author
-
Nicolae, Doina, Vasilescu, Jeni, Talianu, Camelia, and Dandocsi, Alexandru
- Subjects
ATMOSPHERIC aerosols ,LIDAR ,ALGORITHMS ,ARTIFICIAL neural networks ,OPTICAL properties - Abstract
This paper presents an algorithm for aerosol typing from multiwavelength lidar data, based on Artificial Neural Networks. The aerosol model used to simulate optical properties for the training of the network is described. The algorithm is tested on real observations from ESA-CALIPSO database. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
27. New Algorithm of Seed Finding for Track Reconstruction.
- Author
-
Baranov, Dmitry, Merts, Sergei, Ososkov, Gennady, and Rogachevsky, Oleg
- Subjects
ALGORITHMS ,ENERGY measurement ,CHARGED particle accelerators ,KALMAN filtering ,CONTROL theory (Engineering) ,MAGNETIC fields ,MATHEMATICAL models ,EDUCATION - Abstract
Event reconstruction is a fundamental problem in high energy physics experiments. It consists of track finding and track fitting procedures in the experiment's tracking detectors. This requires a tremendous search through detector responses belonging to each track, aimed at obtaining so-called "seeds", i.e. initial approximations of the track parameters of charged particles. In this paper we propose a new algorithm for the seed-finding procedure of the BM@N experiment. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
28. New Approach to the Simulation of Complex Systems.
- Author
-
Bogdanov, Alexander, Degtyarev, Alexander, and Korkhov, Vladimir
- Subjects
MATHEMATICAL programming ,ALGORITHMS ,COMPUTER engineering ,NUMERICAL calculations ,VECTOR algebra ,EDUCATION - Abstract
The paper analyzes the problems of scalability of modern computational systems, and offers a new paradigm for solving complex problems on them. It implies (1) Creating a virtual computing cluster with shared virtual memory, (2) Selecting a representation for the problem that minimizes the interaction between computing threads and (3) Configuring the virtual computer system for optimal mapping of the pertinent algorithm on it. Arguments for optimizing virtual clusters are given and test results on them are shown. We discuss the challenges that can be addressed most effectively within the framework of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
29. Recent Results from the CMS Experiment at the LHC.
- Author
-
Isildak, Bora
- Subjects
QUANTUM chromodynamics ,HIGGS bosons ,PHYSICS research ,COMPACT muon solenoid experiment ,STANDARD model (Nuclear physics) ,ALGORITHMS - Abstract
Numerous studies on Jet Production, Vector Boson Production, V+Jets Production and Multi-Boson Production have been carried out by the Compact Muon Solenoid (CMS) Collaboration to test perturbative quantum chromodynamics (QCD) predictions, and to put more stringent constraints on PDFs (Parton Distribution Functions). In this paper, some of these experimental results will be presented, and their possible impacts on Higgs physics and new physics searches will be discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
30. HelioTrope: An innovative and efficient prototype for solar power production.
- Author
-
Papageorgiou, George, Maimaris, Athanasios, Hadjixenophontos, Savvas, and Ioannou, Petros
- Subjects
SOLAR energy ,PHOTOVOLTAIC power generation ,SOLAR technology ,PROTOTYPES ,STIRLING engines ,ALGORITHMS ,CIRCULAR motion - Abstract
Solar energy could provide us with all the energy we need, as it exists in vast quantities all around us; we only need to be innovative enough to improve the efficiency of our systems in capturing and converting solar energy into usable forms of power. By making a case for the solar energy alternative, we identify areas where efficiency can be improved so that solar energy can become a competitive energy source. This paper suggests an innovative approach to solar energy power production, manifested in a prototype named HelioTrope. The HelioTrope solar energy production prototype is tested on its capability to efficiently convert solar energy into electricity and other forms of energy for storage or direct use. HelioTrope involves an innovative Stirling engine design and a parabolic concentrating dish with a sun-tracking system implementing a control algorithm to maximize the capture of solar energy. Further, it utilizes a patent developed by the authors in which a mechanism is designed for the transmission of reciprocating motion of variable amplitude into unidirectional circular motion. This is employed in our prototype to convert linear reciprocating motion into circular motion for electricity production, which gives a significant increase in efficiency and reduces maintenance costs. Preliminary calculations indicate that the HelioTrope approach constitutes a competitive solution for solar power production. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
31. Retrieval of Aerosol Optical Properties Based on High Spectral Resolution Lidar.
- Author
-
Liu, D., Wang, Y., Wu, Y., Gross, B., Moshary, F., Xiao, Da, Zhong, Tianfen, Shen, Xue, Wang, Nanchao, Rong, Yuhang, Liu, Chong, Zhang, Yupeng, and Liu, Dong
- Subjects
AEROSOLS ,OPTICAL properties ,DOPPLER lidar ,CLIMATE research ,REMOTE sensing ,ALGORITHMS - Abstract
The detection of clouds and aerosols is important for climate research. Lidar has been widely used in atmospheric remote sensing research because of its high spatial and temporal resolution and its ability to detect profiles. High spectral resolution lidar (HSRL) accurately calculates the optical properties of aerosols and clouds without relying on any assumptions. Based on the 532 nm iodine HSRL system, the lidar ratio of the urban aerosol in Hangzhou is 40–50 sr, and the average lidar ratio of the cirrus is 24.79 sr, demonstrating that the HSRL system and retrieval algorithms accurately obtain the optical properties of clouds and aerosols. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
32. Long-Term Analyses of Aerosol Optical Thickness Using Caliop.
- Author
-
Liu, D., Wang, Y., Wu, Y., Gross, B., Moshary, F., Fujikawa, Masahiro, Kudo, Rei, Nishizawa, Tomoaki, Oikawa, Eiji, Higurashi, Akiko, and Okamoto, Hajime
- Subjects
AEROSOLS ,ALGORITHMS ,SPECTRORADIOMETER ,OPTICAL properties ,SOOT - Abstract
We developed an algorithm to derive extinction coefficients for four aerosol components (water-soluble, dust, sea salt, black carbon) from Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) data. The algorithm was applied to the nine-year data for 2007–2015 and the results were compared to CALIOP standard product (CALIOP-ST) and MODerate resolution Imaging Spectroradiometer (MODIS) standard product (MODIS-ST). Comparisons of the total aerosol optical thickness (AOT) showed that MODIS-ST was the largest, followed by CALIOP-ST (Ver.4), and our product. CALIOP-ST (Ver.3) showed a similar magnitude to ours. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
33. Renormalization Approach to the Gribov Process: Numerical Evaluation of Critical Exponents in Two Subtraction Schemes.
- Author
-
Adam, Gh., Buša, J., Hnatič, M., Adzhemyan, Loran Ts., Hnatič, Michal, Ivanova, Ella, Kompaniets, Mikhail V., Lučivjanský, Tomáš, and Mižišin, Lukáš
- Subjects
RENORMALIZATION (Physics) ,NUMERICAL calculations ,INTEGRALS ,DOCUMENT clustering ,ALGORITHMS - Abstract
We study universal quantities characterizing the second order phase transition in the Gribov process. To this end, we use numerical methods for the calculation of the renormalization group functions up to two-loop order in perturbation theory in the famous ε-expansion. Within this procedure the anomalous dimensions are evaluated using two different subtraction schemes: the minimal subtraction scheme and the null-momentum scheme. Numerical calculation of integrals was done on the HybriLIT cluster using the Vegas algorithm from the CUBA library. The comparison with existing analytic calculations shows that the minimal subtraction scheme yields more precise results. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
34. Study of Neural Network Size Requirements for Approximating Functions Relevant to HEP.
- Author
-
Stietzel, Jessica, Lannon, Kevin, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
PARTICLE physics ,ARTIFICIAL neural networks ,DATA analysis ,INFORMATION retrieval ,ALGORITHMS - Abstract
A new event data format has been designed and prototyped by the CMS collaboration to satisfy the needs of a large fraction of physics analyses (at least 50%) with a per event size of order 1 kB. This new format is more than a factor of 20 smaller than the MINIAOD format and contains only top level information typically used in the last steps of the analysis. The talk will review the current analysis strategy from the point of view of event format in CMS (both skims and formats such as RECO, AOD, MINIAOD, NANOAOD) and will describe the design guidelines for the new NANOAOD format. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
35. Belle II Track Reconstruction and Results from first Collisions.
- Author
-
Hauth, Thomas, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
COLLISIONS (Nuclear physics) ,PARTICLE detectors ,LUMINOSITY ,IMAGE reconstruction ,ALGORITHMS - Abstract
In April 2018, e+e− collisions of the SuperKEKB B-Factory were recorded by the Belle II detector in Tsukuba (Japan) for the first time. The new accelerator and detector represent a major upgrade from the previous Belle experiment and will achieve a 40 times higher instantaneous luminosity. Special considerations and challenges arise for track reconstruction at Belle II due to multiple factors. The high-luminosity configuration of the collider increases the beam-induced background many times compared to Belle, and new track reconstruction software has been developed from scratch to achieve excellent physics performance in this busy environment. Even though on average only eleven signal tracks are present in one event, all of them need to be reconstructed down to a transverse momentum of 50 MeV and no fake tracks should be present in the event. Many analyses at Belle II rely on the advantage that the initial state at B-factories is well known, and a clean event reconstruction is possible if no tracks are left after assigning all tracks to particle hypotheses. This contribution will introduce the concepts and algorithms of the Belle II tracking software. Special emphasis will be put on the mitigation techniques developed to perform track reconstruction in high-occupancy events. First results from the data-taking with the Belle II detector will be presented. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
36. Multi-turn track fitting for the COMET experiment.
- Author
-
Zhang, Yao, Nakatsugawa, Yohei, Li, Haibo, Yuan, Ye, Xing, Tianyu, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
COMETS ,DRIFT chambers ,DETECTORS ,ARITHMETIC mean ,ALGORITHMS - Abstract
The reconstruction of multi-turn curling tracks is a big challenge for the drift chamber of the COMET [1] experiment. A method of deterministic annealing filter that implements a global competition between hits from different turns is introduced. This method assigns the detector measurements to the candidate track based on the weighted mean of fitting quality on different turns. We present here the study of multi-turn track fitting based on the simulated tracks. We found that the algorithm can successfully distinguish the hits from different turns. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
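The abstract above describes a deterministic annealing filter in which hits from different turns compete via weights derived from fit quality. A toy sketch of such a weight assignment, with invented residual values and a hypothetical function name, could look like this:

```python
import math

def annealing_weights(chi2_list, temperature):
    """Soft assignment weights for competing hits at a given temperature."""
    w = [math.exp(-c / (2.0 * temperature)) for c in chi2_list]
    total = sum(w)
    return [wi / total for wi in w]

# Two hits competing for the same measurement: at high temperature the
# assignment is soft; as the temperature is annealed down, the weights
# sharpen toward the hit with the best fit quality.
chi2 = [1.0, 4.0]
soft = annealing_weights(chi2, temperature=10.0)
hard = annealing_weights(chi2, temperature=0.1)
print([round(w, 3) for w in soft], [round(w, 3) for w in hard])
```

In a full tracker, these weights would multiply each measurement's contribution inside the track fit, and the annealing schedule would be interleaved with refits.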
37. Polarization of Radiation by the Aerosol-Gas Component of the Atmosphere for Lidar Wavelengths.
- Author
-
Liu, D., Wang, Y., Wu, Y., Gross, B., Moshary, F., Zimovaya, Anna, and Konoshonkin, Alexander
- Subjects
POLARIZATION (Nuclear physics) ,RADIATION ,AEROSOLS ,ALGORITHMS ,BACKSCATTERING - Abstract
The report presents an original algorithm for solving the radiation transfer equation based on the Monte Carlo method for problems of atmospheric laser sensing, taking into account the polarization of radiation. A number of test calculations were performed to analyze the data of the polarization scanning lidar of the V.E. Zuev Institute of Atmospheric Optics SB RAS. It is shown that the proposed algorithm, taking into account the polarization, makes it possible to reduce the discrepancy in interpreting lidar data in comparison with the algorithm without taking into account polarization. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
38. Studies on Aerosol Optical Properties at High Altitude Station in Western Himalayas Using Raman Lidar.
- Author
-
Liu, D., Wang, Y., Wu, Y., Gross, B., Moshary, F., Singh, Shishir Kumar, Jaswant, Radhakrishnan, S.R., Sethi, Davender, and Sharma, Chhemendra
- Subjects
AEROSOLS ,OPTICAL properties ,LIDAR ,BACKSCATTERING ,ALGORITHMS - Abstract
The aerosol optical properties have been investigated using the Raman lidar system for the month of November 2018 at the western Himalayan station of Palampur. Before deriving the optical properties, the lidar data were subjected to initial pre-processing such as dead-time correction, atmospheric noise correction, temporal and spatial averaging, range correction, gluing, etc. The optical properties, such as the backscatter coefficient, extinction coefficient, and linear depolarization ratio, were derived using the inversion algorithm proposed by Fernald. The results show that the backscatter coefficient was in the range from 9.00E-9 m−1 sr−1 to 4.97E-6 m−1 sr−1 and the extinction coefficient was in the range from 3.16E-7 m−1 to 1.74E-4 m−1. The linear depolarization ratio was in the range from 0.0179 to 0.621, with lower values at lower heights suggesting the dominance of spherical particles there. We also observed a cloud layer at a height of 9.5 km to 12.1 km with a high depolarization ratio during the observation period on 22/11/2018. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
39. Short-Range Interaction Impact on Two-Dimensional Dipolar Scattering.
- Author
-
Koval, Eugene A. and Koval, Oksana A.
- Subjects
SHORT-range order (Magnetic structure) ,QUANTUM scattering ,POLAR molecules ,COLLISIONS (Physics) ,ALGORITHMS - Abstract
We report a numerical investigation of the influence of the short-range interaction on the two-dimensional quantum scattering of two dipoles. The model simulates collisions of two ultracold polar molecules in two spatial dimensions. The algorithm used allows us to quantitatively analyse the scattering of two polarized dipoles, taking into account the strongly anisotropic nature of the dipolar interaction. A strong dependence of the total scattering cross section on the short-range interaction radius was discovered for threshold collision energies. We also discuss differences in the calculated scattering cross section for different tilt angles of the polarization axis. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
40. Numerical Solution of the Time Dependent 3D Schrödinger Equation Describing Tunneling of Atoms from Anharmonic Traps.
- Author
-
Ishmukhamedov, Ilyas, Ishmukhamedov, Altay, and Melezhik, Vladimir
- Subjects
SCHRODINGER equation ,QUANTUM tunneling ,BOSONS ,ALGORITHMS ,CRANK-nicolson method ,FINITE difference method - Abstract
We present an efficient numerical method for the integration of the 3D Schrödinger equation. A tunneling problem of two interacting bosonic atoms confined in a 1D anharmonic trap has been successfully solved by means of this method. We demonstrate fast convergence of the final results with respect to the spatial and temporal grid steps. The computational scheme is based on the operator-splitting technique with the implicit Crank-Nicolson algorithm on sixth-order spatial finite differences. The computational time is proportional to the number of spatial grid points. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
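The Crank-Nicolson scheme named in the abstract above can be illustrated in one dimension (not the authors' 3D, sixth-order code). This sketch uses second-order finite differences and an invented harmonic stand-in potential; the key property it demonstrates is that the implicit Crank-Nicolson step conserves the wave-function norm.

```python
import numpy as np

nx, dx, dt = 200, 0.1, 0.01
x = (np.arange(nx) - nx // 2) * dx
V = 0.5 * x**2                      # harmonic trap as a stand-in potential

# Hamiltonian H = -1/2 d^2/dx^2 + V as a matrix (tridiagonal interior).
H = np.zeros((nx, nx), dtype=complex)
inv = 1.0 / (2 * dx * dx)
for i in range(nx):
    H[i, i] = 2 * inv + V[i]
    if i > 0:
        H[i, i - 1] = -inv
    if i < nx - 1:
        H[i, i + 1] = -inv

I = np.eye(nx)
A = I + 0.5j * dt * H               # implicit half of the step
B = I - 0.5j * dt * H               # explicit half of the step

psi = np.exp(-x**2 / 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

for _ in range(100):
    # One Crank-Nicolson step: solve (I + i dt H/2) psi_new = (I - i dt H/2) psi.
    psi = np.linalg.solve(A, B @ psi)

# Crank-Nicolson is unitary for Hermitian H, so the norm stays at 1
# up to rounding error.
print(round(np.sum(np.abs(psi)**2) * dx, 6))
```

In a production code the dense solve would be replaced by a tridiagonal (Thomas) solver, which is what makes the cost linear in the number of grid points, as the abstract notes.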
41. Interpolation Hermite Polynomials For Finite Element Method.
- Author
-
Gusev, Alexander, Vinitsky, Sergue, Chuluunbaatar, Ochbadrakh, Chuluunbaatar, Galmandakh, Gerdt, Vladimir, Derbov, Vladimir, Góźdź, Andrzej, and Krassovitskiy, Pavel
- Subjects
HERMITE polynomials ,INTERPOLATION ,FINITE element method ,ALGORITHMS ,NUMERICAL calculations - Abstract
We describe a new algorithm for analytic calculation of high-order Hermite interpolation polynomials of the simplex and give their classification. A typical example of triangle element, to be built in high accuracy finite element schemes, is given. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
42. Linear Approximation of Volume Integral Equations for the Problem of Magnetostatics.
- Author
-
Akishin, Pavel and Sapozhnikov, Andrey
- Subjects
INTEGRAL equations ,APPROXIMATION theory ,MAGNETOSTATICS ,ALGORITHMS ,FINITE element method ,MAGNETS - Abstract
The volume integral equation method is considered for magnetic systems. New modeling results are reported. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
43. Determination of equilibrium fuel composition for fast reactor in closed fuel cycle.
- Author
-
Ternovykh, Mikhail, Tikhomirov, Georgy, Khomyakov, Yury, and Suslov, Igor
- Subjects
NUCLEAR reactors ,PLUTONIUM as fuel ,REFUELING of fusion reactors ,ISOTOPIC analysis ,ALGORITHMS - Abstract
A technique for evaluating the multiplication and reactivity characteristics of a fast reactor operating in the multiple-refueling mode is presented. We describe the calculation model of the vertical section of the reactor. Calculations validating the correct applicability of the methods and models are given. Results on the isotopic composition, the mass feed, and the changes in reactivity of the reactor in a closed fuel cycle are obtained. Recommendations for choosing promising fuel compositions for further research are proposed. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
44. Calibration of DEM parameters on shear test experiments using Kriging method.
- Author
-
Xavier, Bednarek, Sylvain, Martin, Abibatou, Ndiaye, Véronique, Peres, and Olivier, Bonnefoy
- Subjects
DISCRETE element method ,KRIGING ,ALGORITHMS ,GLOBAL optimization ,POWDERS ,MIXING ,SHEAR (Mechanics) - Abstract
Calibration of powder-mixing simulations using the Discrete Element Method (DEM) is still an open issue. Achieving good agreement with experimental results is difficult because time-efficient use of DEM involves strong assumptions. This work presents a methodology to calibrate DEM parameters using the Efficient Global Optimization (EGO) algorithm, based on the Kriging interpolation method. Classical shear test experiments are used as calibration experiments. The calibration is made on two parameters: the Young modulus and the friction coefficient. Determining the minimal number of grains that has to be used is a critical step: simulating too few grains would not represent the realistic behavior of the powder, while using a huge number of grains would be strongly time-consuming. The optimization goal is the minimization of the objective function, which is the distance between the simulated and measured behaviors. The EGO algorithm maximizes the Expected Improvement criterion to find the next point to be simulated. This stochastic criterion combines the two quantities provided by the Kriging method: the prediction of the objective function and the estimate of the error made. It is thus able to quantify the improvement in the minimization that new simulations at specified DEM parameters would lead to. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
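The Expected Improvement criterion described in the abstract above has a closed form for a Gaussian (Kriging) predictor: given the predicted mean mu, predicted standard deviation sigma, and the best objective value found so far, it weighs predicted improvement against model uncertainty. The numbers in the usage lines are illustrative.

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimization under a Gaussian predictor."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    # First term rewards predicted improvement, second rewards uncertainty.
    return (f_best - mu) * cdf + sigma * pdf

# A candidate predicted slightly worse than the incumbent but very
# uncertain can outscore a confident, marginal improvement:
print(expected_improvement(mu=1.1, sigma=0.5, f_best=1.0) >
      expected_improvement(mu=0.99, sigma=0.01, f_best=1.0))
```

This trade-off is what lets EGO balance exploration (high sigma regions) against exploitation (low mu regions) when choosing the next DEM simulation to run.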
45. Drift chambers in BM@N Experiment.
- Author
-
Fedorišin, Ján
- Subjects
DRIFT chambers ,DEUTERONS ,GAUSSIAN beams ,ALGORITHMS ,BARYONS ,NUCLEAR energy - Abstract
Drift chambers (DCH) constitute an important part of the tracking system of the BM@N experiment, designed to study the production of baryonic matter at Nuclotron energies. A method of particle hit and track reconstruction in the drift chambers is proposed and tested on the BM@N deuteron beam data. In the first step, the radius vs. drift time calibration curve is estimated and applied to calculate the closest-approach coordinates of DCH hits. These coordinates are used to construct hits in each DCH under the assumption of track linearity. Hits in both DCHs are subsequently aligned and fitted to produce global linear track candidates. Finally, the hit and track reconstruction is optimized by the autocalibration method. The coordinate resolutions are estimated from Gaussian fits of the DCH hit residual spectra for different DCH planes. Furthermore, the deuteron beam momentum is reconstructed in order to check the reliability of the employed track reconstruction algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
46. A SENSITIVITY STUDY OF LIRIC ALGORITHM TO USER-DEFINED INPUT PARAMETERS, USING SELECTED CASES FROM THESSALONIKI'S MEASUREMENTS.
- Author
-
M., Filioglou, D., Balis, Siomos, N., Poupkou, A., Dimopoulos, S., and Chaikovsky, A.
- Subjects
LIDAR ,ALGORITHMS ,DUST ,ATMOSPHERIC aerosols ,ATMOSPHERIC sciences - Abstract
A targeted sensitivity study of the LIRIC algorithm was considered necessary to estimate the uncertainty introduced into the volume concentration profiles by the arbitrary selection of user-defined input parameters. For this purpose, three different tests were performed using Thessaloniki's lidar data: a test of the selection of the regularization parameters, an upper-limit test, and a lower-limit test. The sensitivity tests were applied to two cases with different predominant aerosol types: a dust episode and a typical urban case. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
47. ARRANGE AND AVERAGE ALGORITHM FOR MICROPHYSICAL RETRIEVALS WITH A "3β+3α" LIDAR CONFIGURATION.
- Author
-
Chemyakin, Eduard, Müller, Detlef, Burton, Sharon, Hostetler, Chris, and Ferrare, Richard
- Subjects
ALGORITHMS ,MICROPHYSICS ,LIDAR ,ATMOSPHERIC aerosols ,REFRACTIVE index - Abstract
We present the results of a comparison study in which a simple, automated, and unsupervised algorithm, which we call the arrange and average algorithm, was used to infer microphysical parameters (complex refractive index (CRI), effective radius, total number, surface area, and volume concentrations) of atmospheric aerosol particles. The algorithm normally uses backscatter coefficients (β) at 355, 532, and 1064 nm and extinction coefficients (α) at 355 and 532 nm as input information. We compared the performance of the algorithm for the existing "3β+2α" and potential "3β+3α" configurations of a multiwavelength aerosol Raman lidar or high-spectral-resolution lidar (HSRL). The "3β+3α" configuration uses an extra extinction coefficient at 1064 nm. Testing of the algorithm is based on synthetic optical data that are computed from prescribed CRIs and monomodal logarithmically normal particle size distributions that represent spherical, primarily fine mode aerosols. We investigated the degree to which the microphysical results retrieved by this algorithm benefit from the increased number of input extinction coefficients. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
48. REAL TIME TURBULENCE ESTIMATION USING DOPPLER LIDAR MEASUREMENTS.
- Author
-
Rottner, Lucie and Baehr, Christophe
- Subjects
ATMOSPHERIC turbulence ,DOPPLER lidar ,ALGORITHMS ,PARAMETER estimation ,ATMOSPHERE - Abstract
A preliminary work on a new way to estimate atmospheric turbulence using high-frequency Doppler lidar measurements is presented. The turbulence estimates are based on wind reconstruction using 3D Doppler lidar observations and a particle filter. The proposed reconstruction algorithm links the lidar observations to numerical particles to obtain turbulence estimates every time new observations become available. The high frequency of the estimates is a new feature, which is detailed and discussed. Moreover, the presented algorithm is able to reconstruct the wind in three dimensions in the observed volume, giving local access to the spatial variability of the turbulent atmosphere. The algorithm is applied to a set of real observations. The obtained results are very encouraging: they show significant improvements in turbulence parameter estimation. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
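The particle-filter idea in the abstract above can be illustrated with a toy 1D bootstrap filter: numerical particles carry wind values, are reweighted by each new (here simulated) Doppler observation, and are resampled. The noise levels, true wind value, and ensemble size are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
obs_sigma, proc_sigma = 0.5, 0.2    # observation / process noise (m/s)

particles = rng.normal(0.0, 5.0, n)  # initial wind guesses (m/s)
true_wind = 3.0
estimates = []
for _ in range(50):
    # Propagate: a small random walk stands in for unresolved turbulence.
    particles = particles + rng.normal(0.0, proc_sigma, n)
    # A new noisy radial-velocity observation arrives.
    obs = true_wind + rng.normal(0.0, obs_sigma)
    # Reweight particles by the Gaussian observation likelihood.
    w = np.exp(-0.5 * ((obs - particles) / obs_sigma) ** 2)
    w /= w.sum()
    estimates.append(np.sum(w * particles))  # weighted-mean wind estimate
    # Multinomial resampling keeps the ensemble concentrated.
    particles = rng.choice(particles, size=n, p=w)

print(round(estimates[-1], 1))
```

A new estimate is produced at every observation, which mirrors the paper's point that the reconstruction updates at the full measurement frequency; the 3D case extends the state and likelihood to vector winds in the observed volume.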
49. A New Segment Building Algorithm for the Cathode Strip Chambers in the CMS Experiment.
- Author
-
Golutvin, I., Karjavin, V., Palichik, V., Voytishin, N., and Zarubin, A.
- Subjects
CATHODES ,ELECTRODES ,MUONS ,MONTE Carlo method ,ALGORITHMS - Abstract
A new segment building algorithm for the Cathode Strip Chambers in the CMS experiment is presented. A detailed description of the new algorithm is given along with a comparison with the algorithm used in the CMS software. The new segment builder was tested with different Monte-Carlo data samples. The new algorithm is meant to be robust and effective for hard muons and the higher luminosity that is expected in the future at the LHC. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
50. Numerical Solution of a Nonlinear Integro-Differential Equation.
- Author
-
Buša, Ján, Hnatič, Michal, Honkonen, Juha, and Lučivjanský, Tomáš
- Subjects
CONSERVED quantity ,NUMERICAL analysis ,ANNIHILATION reactions ,CHAOS theory ,ALGORITHMS ,EDUCATION - Abstract
A discretization algorithm for the numerical solution of a nonlinear integro-differential equation modeling the temporal variation of the mean number density a(t) in the single-species annihilation reaction A + A → θ is discussed. The proposed solution for the two-dimensional case (where the integral entering the equation is divergent) uses regularization and then finite differences for the approximation of the differential operator, together with a piecewise linear approximation of a(t) under the integral. The presented numerical results point to basic features of the behavior of the number density function a(t) and suggest further improvements of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF