76 results
Search Results
2. Transitioning GlideinWMS, a multi domain distributed workload manager, from GSI proxies to tokens and other granular credentials.
- Author
- Mambelli, Marco, Coimbra, Bruno, and Box, Dennis
- Subjects
- TOKENS, JETTONS, COMPUTER systems, ALGORITHMS, COMPUTING platforms
- Abstract
GlideinWMS is a distributed workload manager that has been used in production for many years to provision resources for experiments like CERN's CMS, many neutrino experiments, and the OSG. Its security model was based mainly on GSI (Grid Security Infrastructure), using X.509 certificate proxies and VOMS (Virtual Organization Membership Service) extensions. Even when other credentials, like SSH keys, were used to authenticate with resources, proxies were still always added to establish the identity of the requestor and the associated memberships or privileges. This single credential was used for everything and was, often implicitly, forwarded wherever needed. The addition of identity and access tokens and the phase-out of GSI forced us to reconsider the security model of GlideinWMS to handle multiple credentials, which can differ in type, technology, and functionality. Both identity tokens and access tokens are supported. GSI proxies, even if no longer mandatory, are still used, together with various JWT (JSON Web Token) based tokens and other certificates. The functionality of the credentials, defined by issuer, audience, and scope, also differs: a credential can allow access to a computing resource, protect the GlideinWMS framework from tampering, grant read or write access to storage, provide an identity for accounting or auditing, or provide a combination of any of the former. Furthermore, the tools in use do not include automatic forwarding and renewal of the new credentials, so credential lifetime and renewal requirements became part of the discussion as well. In this paper, we present how GlideinWMS was able to change its design and code to respond to all these changes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Updates to the ATLAS Data Carousel Project.
- Author
- Borodin, Mikhail, Cameron, David, Klimentov, Alexei, Korchuganova, Tatiana, Lassnig, Mario, Maeno, Tadashi, Musheghyan, Haykuhi, South, David, and Zhao, Xin
- Subjects
- LUMINOSITY, OPTICAL properties, WORKFLOW, ALGORITHMS, ALGEBRA
- Abstract
The High Luminosity upgrade to the LHC (HL-LHC) is expected to deliver scientific data at the multi-exabyte scale. In order to address this unprecedented data storage challenge, the ATLAS experiment launched the Data Carousel project in 2018. Data Carousel is a tape-driven workflow whereby bulk production campaigns with input data resident on tape are executed by staging and promptly processing a sliding window to disk buffer, such that only a small fraction of inputs are pinned on disk at any one time. Data Carousel is now in production for ATLAS in Run 3. In this paper, we provide updates on recent Data Carousel R&D projects, including data-on-demand and tape smart writing. Data-on-demand removes from disk data that has not been accessed for a predefined period; when users request such data, they are either staged from tape or recreated by following the original production steps. Tape smart writing employs intelligent algorithms for file placement on tape in order to retrieve data back more efficiently, which is our long-term strategy to achieve optimal tape usage in Data Carousel. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
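The sliding-window staging scheme described in the Data Carousel abstract above can be sketched in a few lines of Python. This is a hypothetical simulation, not ATLAS code: the window size, file names, and the processing step are illustrative assumptions; the point is only that at most `window_size` inputs are ever pinned on disk at once.

```python
from collections import deque

def carousel(tape_files, window_size, process):
    """Process tape-resident inputs by staging at most `window_size`
    files to a disk buffer at a time (a sliding window)."""
    disk_buffer = deque()
    results = []
    for name in tape_files:
        disk_buffer.append(name)               # stage one file from tape to disk
        if len(disk_buffer) == window_size:
            # process the oldest staged file, then release its disk space
            results.append(process(disk_buffer.popleft()))
    while disk_buffer:                         # drain the remaining window
        results.append(process(disk_buffer.popleft()))
    return results

# Toy run: three input files, a two-file disk window, a trivial "processing" step
staged = carousel(["a.root", "b.root", "c.root"], window_size=2, process=str.upper)
```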
4. Potentiality of automatic parameter tuning suite available in ACTS track reconstruction software framework.
- Author
- Garg, Rocky Bala, Allaire, Corentin, Salzburger, Andreas, Grasland, Hadrien, Tompkins, Lauren, and Hofgard, Elyssa
- Subjects
- COMPUTER software, ALGORITHMS, COST analysis, COST accounting, ACCOUNTING
- Abstract
Particle tracking is among the most sophisticated and complex parts of the full event reconstruction chain. A number of reconstruction algorithms work in a sequence to build these trajectories from detector hits. Each of these algorithms uses many configuration parameters that need to be fine-tuned to properly account for the detector/experimental setup, the available CPU budget and the desired physics performance. A few examples of such parameters include the cut values limiting the search space of the algorithm, the approximations accounting for complex phenomena, or the parameters controlling algorithm performance. The most popular method of tuning these parameters is hand-tuning using brute-force techniques. These techniques can be inefficient and raise issues for the long-term maintainability of such algorithms. The open-source track reconstruction software framework known as "A Common Tracking Software (ACTS)" offers an alternative solution to these parameter tuning techniques through the use of automatic parameter optimization algorithms. ACTS comes equipped with an auto-tuning suite that provides the necessary setup for performing optimization of input parameters belonging to track reconstruction algorithms. The user can choose the tunable parameters in a flexible way and define a cost/benefit function for optimizing the full reconstruction chain. The fast execution speed of ACTS allows the user to run several iterations of optimization within a reasonable time bracket. The performance of these optimizers has been demonstrated on different track reconstruction algorithms such as trajectory seed reconstruction and selection, particle vertex reconstruction and generation of simplified material maps, and on different detector geometries such as the Generic Detector and the Open Data Detector (ODD). We aim to bring this approach to all aspects of trajectory reconstruction by having a more flexible integration of tunable parameters within ACTS. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
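The auto-tuning workflow in the abstract above — user-chosen tunable parameters plus a user-defined cost function for the reconstruction chain — can be illustrated with a generic random-search loop. This is not the ACTS suite's actual API (which relies on dedicated optimization libraries); the parameter names `pt_cut` and `z_window` and the toy cost surface are made-up assumptions:

```python
import random

def auto_tune(cost, bounds, n_trials=200, seed=42):
    """Random-search minimization of a user-defined cost function.
    `bounds` maps parameter name -> (lo, hi); `cost` takes a dict of
    parameter values and returns a scalar to minimize."""
    rng = random.Random(seed)
    best_params, best_cost = None, float("inf")
    for _ in range(n_trials):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        c = cost(params)
        if c < best_cost:
            best_params, best_cost = params, c
    return best_params, best_cost

# Toy cost surface: pretend performance peaks at pt_cut=1.0, z_window=3.0
cost = lambda p: (p["pt_cut"] - 1.0) ** 2 + (p["z_window"] - 3.0) ** 2
best, best_cost = auto_tune(cost, {"pt_cut": (0.1, 5.0), "z_window": (0.5, 10.0)})
```

The same loop works for any black-box cost, which is why the abstract stresses that ACTS's fast execution makes many optimization iterations affordable.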
5. The U.S. CMS HL-LHC R&D Strategic Plan.
- Author
- Gutsche, Oliver, Bose, Tulika, Votava, Margaret, Mason, David, Melo, Andrew, Liu, Mia, Hufnagel, Dirk, Gray, Lindsey, Hildreth, Mike, Holzman, Burt, Lannon, Kevin, Sehrish, Saba, Sperka, David, Letts, James, Bauerdick, Lothar, and Bloom, Kenneth
- Subjects
- COMPUTER software, COMPUTER systems, STRATEGIC planning, ALGORITHMS, ALGEBRA
- Abstract
The HL-LHC run is anticipated to start at the end of this decade and will pose a significant challenge for the scale of the HEP software and computing infrastructure. The mission of the U.S. CMS Software & Computing Operations Program is to develop and operate the software and computing resources necessary to process CMS data expeditiously and to enable U.S. physicists to fully participate in the physics of CMS. We have developed a strategic plan to prioritize R&D efforts to reach this goal for the HL-LHC. This plan includes four grand challenges: modernizing physics software and improving algorithms, building infrastructure for exabyte-scale datasets, transforming the scientific data analysis process and transitioning from R&D to operations. We are involved in a variety of R&D projects that fall within these grand challenges. In this talk, we will introduce our four grand challenges and outline the R&D program of the U.S. CMS Software & Computing Operations Program. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Ranking-based neural network for ambiguity resolution in ACTS.
- Author
- Allaire, Corentin, Bouvet, Françoise, Grasland, Hadrien, and Rousseau, David
- Subjects
- NEURAL circuitry, INSTITUTIONAL repositories, MOTHERBOARDS, ALGORITHMS, ALGEBRA
- Abstract
The reconstruction of particle trajectories is a key challenge of particle physics experiments, as it directly impacts particle identification and physics performance while also representing one of the main CPU consumers of many high-energy physics experiments. As the luminosity of particle colliders increases, this reconstruction will become more challenging and resource-intensive. New algorithms are thus needed to address these challenges efficiently. One potential step of track reconstruction is ambiguity resolution. In this step, performed at the end of the tracking chain, we select which track candidates should be kept and which must be discarded. The speed of this algorithm is directly driven by the number of track candidates, which can be reduced at the cost of some physics performance. Since this problem is fundamentally an issue of comparison and classification, we propose to use a machine learning-based approach to ambiguity resolution. Using a shared-hits-based clustering algorithm, we can efficiently determine which candidates belong to the same truth particle. Afterwards, we can apply a Neural Network (NN) to compare those tracks and decide which ones are duplicates and which ones should be kept. This approach is implemented within the A Common Tracking Software (ACTS) framework and tested on the Open Data Detector (ODD), a realistic virtual detector similar to a future ATLAS one. This new approach was shown to be 15 times faster than the default ACTS algorithm while removing 32 times more duplicates, down to less than one duplicated track per event. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
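The two-stage scheme in the abstract above — cluster candidates by shared hits, then keep the best of each cluster — can be sketched as follows. This is a minimal stand-in, not the paper's implementation: the clusters are built with a simple union-find over shared hits, and a plain scoring function replaces the neural network that ranks the candidates.

```python
def resolve_ambiguity(tracks, score):
    """Group track candidates that share any hit, then keep only the
    best-scored candidate index from each group.
    `tracks` is a list of hit-id sets; `score` ranks a candidate."""
    # union-find over candidates linked by shared hits
    parent = list(range(len(tracks)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i
    for i in range(len(tracks)):
        for j in range(i + 1, len(tracks)):
            if tracks[i] & tracks[j]:          # shared hit -> same cluster
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(len(tracks)):
        clusters.setdefault(find(i), []).append(i)
    # keep the highest-scoring candidate per cluster, drop the duplicates
    return sorted(max(members, key=lambda i: score(tracks[i]))
                  for members in clusters.values())

tracks = [{1, 2, 3}, {2, 3, 4}, {7, 8, 9}]     # first two candidates share hits
kept = resolve_ambiguity(tracks, score=len)     # stand-in score: number of hits
```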
7. Overview of the distributed image processing infrastructure to produce the Legacy Survey of Space and Time.
- Author
- Hernandez, Fabio, Beckett, George, Clark, Peter, Doidge, Matt, Jenness, Tim, Karavakis, Edward, Le Boulc'h, Quentin, Love, Peter, Mainetti, Gabriele, Noble, Timothy, White, Brandon, and Yang, Wei
- Subjects
- IMAGE processing, SERVER farms (Computer network management), ALGORITHMS, ASTRONOMICAL observatories, ELECTRONIC data processing
- Abstract
The Vera C. Rubin Observatory is preparing to execute the most ambitious astronomical survey ever attempted, the Legacy Survey of Space and Time (LSST). Currently the final phase of construction is under way in the Chilean Andes, with the Observatory's ten-year science mission scheduled to begin in 2025. Rubin's 8.4-meter telescope will nightly scan the southern hemisphere collecting imagery in the wavelength range 320–1050 nm covering the entire observable sky every 4 nights using a 3.2 gigapixel camera, the largest imaging device ever built for astronomy. Automated detection and classification of celestial objects will be performed by sophisticated algorithms on high-resolution images to progressively produce an astronomical catalog eventually composed of 20 billion galaxies and 17 billion stars and their associated physical properties. In this article we present an overview of the system currently being constructed to perform data distribution as well as the annual campaigns which reprocess the entire image dataset collected since the beginning of the survey. These processing campaigns will utilize computing and storage resources provided by three Rubin data facilities (one in the US and two in Europe). Each year a Data Release will be produced and disseminated to science collaborations for use in studies comprising four main science pillars: probing dark matter and dark energy, taking inventory of solar system objects, exploring the transient optical sky and mapping the Milky Way. Also presented is the method by which we leverage some of the common tools and best practices used for management of large-scale distributed data processing projects in the high energy physics and astronomy communities. We also demonstrate how these tools and practices are utilized within the Rubin project in order to overcome the specific challenges faced by the Observatory. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. The ν-Ball campaign at ALTO: Study for neural network based trigger for fission process.
- Author
- Lebois, Matthieu, Jovančević, Nikola, Thisse, Damien, Wilson, Jonathan, Canavan, Rhiann, and Rudigier, Mathias
- Subjects
- NUCLEAR fission, NEURAL circuitry, DATA analysis, ALGORITHMS, RECONSTRUCTION (Psychoanalysis)
- Abstract
A γ-spectroscopy campaign named "ν-Ball" was performed at the ALTO facility. A large fraction of the beam time was dedicated to the fast-neutron-induced fission of two fissioning systems: ²³²Th and ²³⁸U. During the data analysis, it was noticed that the high activity of the natural Th target was heavily contaminating any coincidence matrices (or cubes) built, making the identification of weakly produced fission fragments almost impossible. It was decided to explore the opportunity opened by new analysis methods based on neural network algorithms. In this paper, the methods to build an adequate neural network and the results obtained for fission event reconstruction are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
9. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications.
- Author
- Khairi, Nor Asilah and Jambek, Asral Bahari
- Subjects
- DATA compression, ELECTRONIC equipment, INTERNET of things, ALGORITHMS, WIRELESS sensor networks
- Abstract
An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is the primary cause of power consumption, some researchers have proposed several compression algorithms with the purpose of overcoming this particular problem. Several data compression algorithms from previous reference papers are discussed in this paper. The descriptions of the compression algorithms in the reference papers were collected and summarized in table form. From the analysis, the MAS compression algorithm was selected as a project prototype due to its high potential for meeting the project requirements. Besides that, it also produced better performance regarding energy saving, memory usage, and data transmission efficiency. This method is also suitable for implementation in wireless sensor networks (WSN). The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
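The abstract above does not detail the MAS algorithm itself, but the energy argument — fewer transmitted bits means less radio time — is easy to illustrate with a generic lossless delta encoder of the kind commonly used on slowly varying sensor streams. This sketch is an assumed, generic technique standing in for MAS, not a description of it:

```python
def delta_encode(samples):
    """Store the first sample, then only successive differences;
    small deltas can be packed into fewer bits on the radio link."""
    if not samples:
        return []
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    """Invert delta_encode by accumulating the differences."""
    out, acc = [], 0
    for i, d in enumerate(deltas):
        acc = d if i == 0 else acc + d
        out.append(acc)
    return out

readings = [1000, 1002, 1001, 1005]   # e.g. raw ADC temperature readings
encoded = delta_encode(readings)       # deltas after the first value
```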
10. Two Parameter-Retrieval Algorithms of Aircraft Wake Vortex with Doppler Lidar in Clear Air.
- Author
- Liu, D., Wang, Y., Wu, Y., Gross, B., Moshary, F., Shen, Chun, Li, Jianbing, and Gao, Hang
- Subjects
- ALGORITHMS, DOPPLER lidar, CLEAR air turbulence, AERONAUTICAL safety measures, TOOLBOXES
- Abstract
An aircraft wake is a pair of strong counter-rotating vortices generated behind an aircraft, which can be very hazardous to a following aircraft; its detection has therefore attracted much attention in the aviation safety field. This conference paper introduces two parameter-retrieval algorithms, i.e., the Optimization method and the Max-min method. They have been integrated into a toolbox and can retrieve the parameters of wake vortices efficiently and robustly. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
11. Mathematical Modeling of Production Processes of Discrete Machine-Building Enterprises Based on the Interaction of Simulation Systems and Operational Planning Systems.
- Author
- Nadykto, A., Aleksic, N., Lima, P., Pivkin, P., Uvarova, L., Jiang, X., Zelensky, A., Dolgov, Vitalii A., Nikishechkin, Petr A., Leonov, Aleksandr A., Ivashin, Sergey S., and Dolgov, Nikita V.
- Subjects
- MATHEMATICAL models, MACHINERY, SIMULATION methods & models, ALGORITHMS, DECISION making
- Abstract
Analysis of the production systems (PS) of discrete multi-nomenclature machine-building enterprises is a complex task whose solution is necessary to support decision-making during technical re-equipment, modernization or technological preparation of production. The paper presents a concept for the joint use of operational scheduling systems and simulation modeling systems to improve the efficiency and adequacy of PS analysis. The problem of determining the deviation of the planned state of the PS from the simulated state, and of evaluating the stability of the PS behaviour on that basis, is considered. It is revealed that the proposed approach allows us to more adequately determine the timing of the production program, assess the stability of the PS behaviour when using various planning logics and algorithms, and choose the best one for subsequent use in a real PS. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
12. Different-Scale Simulation of Flows in Porous Media.
- Author
- Nadykto, A., Aleksic, N., Lima, P., Pivkin, P., Uvarova, L., Jiang, X., Zelensky, A., Trapeznikova, Marina, Churbanova, Natalia, and Chechina, Antonina
- Subjects
- POROUS materials, DARCY'S law, GAS dynamics, HYPERBOLIC functions, ALGORITHMS
- Abstract
The paper considers the development of algorithms for an adequate description of processes of different scales in porous media. The choice of a computational technique is determined by the reference size of the problem being solved. Models of porous medium flow under Darcy's law, neglecting the medium microstructure, are used for the simulation at macro-scale. While at micro-scale, a direct description of fluid flow in porous channels with complex geometry by means of gas dynamic equations is used. In the first case the proposed model of non-isothermal multiphase multicomponent flow in a porous medium includes the mass balance and total energy conservation equations modified by analogy to the known quasi-gas dynamic equations. The model features are the introduction of minimal reference scales in space and in time and the change of the system type from parabolic to hyperbolic to increase the stability of explicit difference schemes applied for approximation. In the second case the dimensionless form of the quasi-gas dynamic system with pressure decomposition, developed by the authors earlier, is adapted to the simulation of flows in the pore space. The fictitious domain method is proposed to reproduce the core microstructure. The developed approaches have been verified by test predictions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
13. Gradient-Based Algorithm for Tracking the Activity of Neural Network Weights Changing.
- Author
- Nadykto, A., Aleksic, N., Lima, P., Pivkin, P., Uvarova, L., Jiang, X., Zelensky, A., Starodub, Anton, Eliseeva, Natalia, and Georgiev, Milen
- Subjects
- MACHINE learning, ARTIFICIAL neural networks, ALGORITHMS, WEIGHTS & measures, DATA analysis
- Abstract
The research conducted in this paper is in the field of machine learning. The main object of the research is the learning process of an artificial neural network, with the aim of increasing its efficiency. The algorithm is based on the analysis of retrospective learning data. The dynamics of changes in the values of the weights of an artificial neural network during training is an important indicator of training efficiency. The algorithm proposed in this work is based on tracking changes in the values of the weight gradients. Tracking how the gradients change makes it possible to understand how actively the network weights change during training. This knowledge helps to diagnose the training process and to adjust the training parameters. The results of the algorithm can be used when training an artificial neural network: they help to determine the set of measures (actions) needed to optimize the learning process. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
14. Evaluating Kubernetes as an orchestrator of the Event Filter computing farm of the Trigger and Data Acquisition system of the ATLAS experiment at the Large Hadron Collider.
- Author
- Avolio, Giuseppe, Cadeddu, Mattia, Hauser, Reiner, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
- DATA acquisition systems, ACQUISITION of data, ALGORITHMS, CLUSTER analysis (Statistics), COMPUTING platforms
- Abstract
The ATLAS experiment at the LHC relies on a complex and distributed Trigger and Data Acquisition (TDAQ) system to gather and select particle collision data. The Event Filter (EF) component of the TDAQ system is responsible for executing advanced selection algorithms, reducing the data rate to a level suitable for recording to permanent storage. The EF functionality is provided by a computing farm made up of thousands of commodity servers, each executing one or more processes. Moving the EF farm management towards a solution based on software containers is one of the main themes of the ATLAS TDAQ Phase-II upgrades in the area of the online software; it would open new possibilities for fault tolerance, reliability and scalability. This paper presents the results of an evaluation of Kubernetes as a possible orchestrator of the ATLAS TDAQ EF computing farm. Kubernetes is a system for advanced management of containerized applications in large clusters. This paper will first highlight some of the technical solutions adopted to run the offline version of today's EF software in a Docker container. Then it will focus on some scaling performance measurements executed with a cluster of 1000 CPU cores. In particular, this paper will report on the way Kubernetes scales in deploying containers as a function of the cluster size and show how proper tuning of the Queries per Second (QPS) Kubernetes parameter set can improve the scaling of applications in terms of running replicas. Finally, an assessment will be given of the possibility of using Kubernetes as an orchestrator of the EF computing farm in LHC's Run 4. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
15. Development of ATLID Retrieval Algorithms.
- Author
- Liu, D., Wang, Y., Wu, Y., Gross, B., Moshary, F., Donovan, D.P., van Zadelhoff, G-J, Williams, J. E., Wandinger, U., Haarig, M., and Qu, Z.
- Subjects
- AEROSOL industry, CLOUD computing, RADIATION, ALGORITHMS, OPTICAL properties
- Abstract
ATLID ("ATmospheric LIDar") is the lidar to be flown on the multi-instrument Earth Clouds and Radiation Explorer (EarthCARE or ECARE) joint ESA/JAXA mission, now scheduled for launch in 2022. ATLID is a 3-channel linearly polarized High Spectral Resolution Lidar (HSRL) system operating at 355 nm. Cloud and aerosol optical properties are key ECARE products. This paper provides an overview of the ATLID L2a (i.e. single instrument) retrieval algorithms being developed and implemented in order to derive cloud and aerosol optical properties. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
16. Land subsidence modelling using a long short-term memory algorithm based on time-series datasets.
- Author
- Li, Huijun, Zhu, Lin, Gong, Huili, Sun, Hanrui, and Yu, Jie
- Subjects
- LAND subsidence, ALGORITHMS, WATER table, ARTIFICIAL intelligence
- Abstract
With the rapid growth of data volume and the development of artificial intelligence technology, deep-learning methods are a new way to model land subsidence. We utilized a long short-term memory (LSTM) model, a deep-learning-based time-series processing method to model the land subsidence under multiple influencing factors. Land subsidence has non-linear and time dependency characteristics, which the LSTM model takes into account. This paper modelled the time variation in land subsidence for 38 months from 2011 to 2015. The input variables included the change in land subsidence detected by InSAR technology, the change in confined groundwater level, the thickness of the compressible layer and the permeability coefficient. The results show that the LSTM model performed well in areas where the subsidence is slight but poorly in places with severe subsidence. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
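Before a time series like the subsidence record in the abstract above can be fed to an LSTM, it must be reshaped into supervised (past window → next value) pairs. The sketch below shows only that preparation step in plain Python; the window length and the toy values are illustrative assumptions, and the actual model fitting (not shown) would use a deep-learning library.

```python
def make_windows(series, n_steps):
    """Turn a monthly series into (input window, next value) pairs —
    the supervised form an LSTM is trained on."""
    X, y = [], []
    for i in range(len(series) - n_steps):
        X.append(series[i:i + n_steps])   # n_steps past months as input
        y.append(series[i + n_steps])     # the following month as target
    return X, y

subsidence = [0.0, 1.2, 2.1, 3.5, 4.0, 4.4]  # toy cumulative subsidence values
X, y = make_windows(subsidence, n_steps=3)
```

Other influencing factors from the paper (groundwater level, compressible-layer thickness, permeability) would enter as extra features per time step in the same windowed layout.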
17. Track prediction of moving target for interference fringes without nearby points.
- Author
- Pengfei Song, Juan Hui, Rongrong Zhu, and Kaiyu Tang
- Subjects
- ACOUSTIC field, WAVEGUIDES, ALGORITHMS, TIME-frequency analysis, MOTION
- Abstract
The interference fringes produced by the target movement contain information such as the motion parameters of the moving target and the waveguide invariant of the sound field. However, most existing algorithms for estimating target motion parameters and the waveguide invariant from interference fringes require data that include the closest point of approach. In order to overcome this shortcoming, this paper proposes a new fitting method. The main process is to extract the interference fringes of the target motion from the time-frequency diagram and use the differences between the interference fringes at different frequencies to predict and fit the fringes over the entire motion process of the target, so as to provide the fringe information required by the estimation algorithms for the target motion parameters and the waveguide invariant. The simulation results show that when the interference fringes contain no closest-point-of-approach information, the method proposed in this paper can fit the entire fringe pattern more accurately and can still obtain the target motion parameters and the waveguide invariant. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
18. Designing an Expert System for Recognizing the Emotional State of an Enterprise Employee.
- Author
- Nadykto, A., Aleksic, N., Lima, P., Pivkin, P., Uvarova, L., Jiang, X., Zelensky, A., Sekin, A.A., and Bychkova, N.A.
- Subjects
- EMOTIONS, BUSINESS enterprises, EMPLOYEES, ALGORITHMS, ACCURACY
- Abstract
The emotional state of an employee of any enterprise influences both the efficiency of work performance and the quality and stability of the final result. Managing production processes while monitoring the emotional state of employees is a rather urgent task that allows minimizing the risks of deviations from the specified level of product quality and production safety. However, the assessment of this influence is currently subjective, based both on the personal opinion and competences of the expert conducting the monitoring and on the tools used for assessing the emotional state. The use of modern intelligent automated methods and tracking systems would reduce the distortion of expert judgment. Creating an expert system for analyzing the emotional state of an enterprise's employees will make it possible to recognize the emotions of a particular employee with a fairly high degree of accuracy, accumulate a system of knowledge, generate analytical conclusions and behavioral predictions on its basis, compile an emotional profile of each employee, and draw conclusions about the employee's ability to perform a certain type of work and their current state. This paper presents the concept of an algorithm of an expert system (hereinafter referred to as ES) which is able, on the basis of data obtained on an employee's individual modes of non-verbal expression of emotions, to assess the influence of those emotions on the quality of the employee's work. The article reflects the results obtained in the framework of the implementation of the Agreement on research No. 05.601.21.0019 dated November 29, 2019. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
19. Application of Artificial Neural Networks and Singular-Spectral Analysis in Forecasting the Daily Traffic in the Moscow Metro.
- Author
- Ivanov, Victor and Osetrov, Evgenii
- Subjects
- ARTIFICIAL neural networks, PASSENGER traffic, PUBLIC transit, TRAFFIC estimation, ALGORITHMS, SPECTRUM analysis, HIGH speed trains
- Abstract
In this paper, we investigate the possibility of applying various approaches to solving the problem of medium-term forecasting of daily passenger traffic volumes in the Moscow metro (MM): 1) on the basis of artificial neural networks (ANN); 2) using the singular-spectral analysis implemented in the package “Caterpillar”-SSA; 3) combining the ANN and the “Caterpillar”-SSA approaches. We demonstrate that the developed methods and algorithms allow us to conduct medium-term forecasting of passenger traffic in the MM with reasonable accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
20. Truncated Control Variates for Weak Approximation Schemes.
- Author
- Belomestny, Denis, Häfner, Stefan, and Urusov, Mikhail
- Subjects
- ALGORITHMS, REGRESSION analysis, VARIATE difference method, MATHEMATICAL models, VARIANCES
- Abstract
In this paper we present an enhancement of the regression-based variance reduction approaches recently proposed in Belomestny et al. [1] and [4]. This enhancement is based on a truncation of the control variate and allows for a significant reduction of the computing time, while the complexity stays of the same order. The performance of the proposed truncated algorithms is illustrated by a numerical example. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
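The abstract above builds its control variates by regression and then truncates them; as a generic stand-in (not the paper's scheme), the sketch below shows the basic idea of a truncated control variate on a toy problem — estimating E[exp(Z)] for standard normal Z, whose true value is e^{1/2}. The coefficient `beta` and the cut level are illustrative assumptions; the truncated variate max(-cut, min(Z, cut)) still has mean zero by symmetry, so the estimator stays unbiased while cheap to evaluate.

```python
import math
import random

def cv_estimate(n=20000, cut=2.0, seed=0):
    """Monte Carlo estimate of E[exp(Z)], Z ~ N(0,1), using the
    truncated control variate max(-cut, min(Z, cut)), which has
    mean 0 by symmetry. beta ~ Cov(exp(Z), Z) = sqrt(e)."""
    rng = random.Random(seed)
    beta = math.e ** 0.5
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        cv = max(-cut, min(z, cut))      # truncated control variate, E[cv] = 0
        total += math.exp(z) - beta * cv  # subtract the correlated zero-mean term
    return total / n
```

Because exp(Z) and the truncated Z are strongly correlated, subtracting beta·cv removes much of the sampling variance at negligible extra cost per sample.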
21. Nationwide deformation monitoring with SqueeSAR® using Sentinel-1 data.
- Author
- Bischoff, Christine A., Ferretti, Alessandro, Novali, Fabrizio, Uttini, Andrea, Giannico, Chiara, and Meloni, Francesco
- Subjects
- ALGORITHMS, LAND subsidence, SCALABILITY, CALIBRATION
- Abstract
Subsidence can now be routinely mapped on a national scale thanks to ESA's Sentinel-1 sensors and advanced scalable SqueeSAR® processing. In order to be integrated into existing monitoring programmes, the SqueeSAR® datasets can be calibrated with GNSS measurements. The dense spatial coverage of SqueeSAR® deformation maps captures local deformation phenomena, and with appropriate calibration, can advance the understanding of regional deformation trends. The regular and reliable SAR image acquisitions by Sentinel-1, as well as significant improvements in the scalability of SqueeSAR® processing allow regular updates of deformation maps on a national scale. Filtering the large amount of data for relevant information is achieved by using an algorithm to detect changes in displacement trends. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
22. Application of autoencoder to traffic noise analysis.
- Author
- Czyżewski, Andrzej, Kurowski, Adam, and Zaporowski, Szymon
- Subjects
- TRAFFIC noise, NEURAL circuitry, ALGORITHMS, RADAR, SOUND recordings
- Abstract
The aim of an autoencoder neural network is to transform the input data into a lower-dimensional code and then to reconstruct the output from this representation. Applications of autoencoders to classifying sound events in road traffic have not been found in the literature. The presented research aims to determine whether such an unsupervised learning method may be used for deploying classification algorithms applied to the automatic annotation of road traffic-related events based on noise analysis. A two-dimensional representation of traffic sounds based on 1D convolution was fed to the core of the autoencoder neural network and then classified with seven feed-forward classification subnetworks. The obtained results show that sound recordings can help determine the number of vehicles passing on the road. However, instead of being treated as independent, this method's output should be combined with another source of data, e.g., video processing results or microwave radar readings. Results of vehicle type and occupied-lane classification obtained with the use of the autoencoder are shown in the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
23. Active power measurement verification for electric power systems with battery energy storage.
- Author
- Rehtanz, C., Voropai, N., Glazunova, Anna, and Aksaeva, Elena
- Subjects
- POWER measurement (Electricity), ENERGY storage, ELECTRIC power systems, ALGORITHMS, ELECTRIC power consumption
- Abstract
The paper presents a method developed for detecting rough errors in measurements related to batteries in a part of the electric power system characterised by low data redundancy. Since batteries either produce or consume power, not all bad data detection methods can be used to detect erroneous measurements of the battery's active power in the case of low measurement redundancy. Because a battery may produce or consume different values of active power at several snapshots in a row, dynamic algorithms cannot be used. In this study, a new method of bad data detection is developed, based on an analysis of the battery control strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
24. Computing Gröbner and Involutive Bases for Linear Systems of Difference Equations.
- Author
-
Yanovich, Denis
- Subjects
GROBNER bases ,DIFFERENCE equations ,LINEAR systems ,ALGORITHMS ,SCALABILITY - Abstract
The problem of computing involutive bases and Gröbner bases for linear systems of difference equations is solved, and its importance for physical and mathematical problems is discussed. The algorithm and issues concerning its implementation in C are presented, and calculation times are compared with competing programs. The paper ends with considerations on the parallel version of this implementation and its scalability. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
25. Towards J/ψ e+e- Decays Triggering with TRD in CBM Experiment.
- Author
-
Derenovskaya, Olga, Ablyazimov, Timur, and Ivanov, Victor
- Subjects
CELLULAR automata ,ALGORITHMS ,RADIOACTIVE decay ,TRANSITION radiation detector ,BARYONS ,ELECTRON-electron interactions - Abstract
The paper presents an efficient Cellular Automaton based algorithm for trajectory reconstruction in the Transition Radiation Detector of the CBM experiment. The comparison of the different electron identification methods is also given. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
26. The Automation of Stochastization Algorithm with Use of SymPy Computer Algebra Library.
- Author
-
Demidova, Anastasya, Gevorkyan, Migran, Kulyabov, Dmitry, Korolkova, Anna, and Sevastianov, Leonid
- Subjects
AUTOMATION ,ALGORITHMS ,ALGEBRA software ,STOCHASTIC systems ,DIFFERENTIAL equations - Abstract
The SymPy computer algebra library is used for automatic generation of ordinary and stochastic systems of differential equations from schemes of kinetic interaction. Schemes of this type are used not only in chemical kinetics but also in biological, ecological, and technical models. This paper describes the automatic generation algorithm with an emphasis on application details. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
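The entry above concerns generating differential equation systems from kinetic interaction schemes with SymPy. As a hedged sketch of the general idea (the function name and scheme encoding here are illustrative, not the authors' implementation), mass-action ODE right-hand sides can be built symbolically like this:

```python
import sympy as sp

def kinetic_to_odes(species, reactions):
    """Build mass-action ODE right-hand sides from a kinetic scheme.

    `reactions` is a list of (reactants, products, rate_constant), where
    reactants/products map species symbols to stoichiometric coefficients.
    """
    rhs = {s: sp.Integer(0) for s in species}
    for reactants, products, k in reactions:
        # Mass-action propensity: k * product(concentration**coefficient).
        rate = k * sp.Mul(*[s**n for s, n in reactants.items()])
        for s, n in reactants.items():
            rhs[s] -= n * rate   # reactants are consumed
        for s, n in products.items():
            rhs[s] += n * rate   # products are produced
    return rhs

A, B, C, k1 = sp.symbols('A B C k1')
# Scheme: A + B -> C with rate constant k1.
odes = kinetic_to_odes([A, B, C], [({A: 1, B: 1}, {C: 1}, k1)])
```

For this scheme the generator yields dA/dt = dB/dt = -k1*A*B and dC/dt = +k1*A*B, exactly the system one would write down by hand.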
27. A CLT FOR INFINITELY STRATIFIED ESTIMATORS, WITH APPLICATIONS TO DEBIASED MLMC.
- Author
-
ZHENG, ZEYU and GLYNN, PETER W.
- Subjects
CENTRAL limit theorem ,MONTE Carlo method ,ALGORITHMS ,ESTIMATION theory ,CONFIDENCE intervals - Abstract
This paper develops a general central limit theorem (CLT) for post-stratified Monte Carlo estimators with an associated infinite number of strata. In addition, consistency of the corresponding variance estimator is established in the same setting. With these results in hand, one can then construct asymptotically valid confidence interval procedures for such infinitely stratified estimators. We then illustrate our general theory by applying it to the specific case of debiased multi-level Monte Carlo (MLMC) algorithms. This leads to the first asymptotically valid confidence interval procedure for such stratified debiased MLMC procedures. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
28. Validation of the MCNP6 electron-photon transport algorithm: multiple-scattering of 13- and 20-MeV electrons in thin foils.
- Author
-
Dixon, David A. and Hughes, H. Grady
- Subjects
MONTE Carlo method ,ELECTRON scattering ,ALGORITHMS ,ALUMINUM foil ,PHOTON scattering - Abstract
This paper presents a validation test comparing angular distributions from an electron multiple-scattering experiment with those generated using the MCNP6 Monte Carlo code system. In this experiment, 13- and 20-MeV electron pencil beams are deflected by thin foils with atomic numbers from 4 to 79. To determine the angular distribution, the fluence is measured down range of the scattering foil at various radii orthogonal to the beam line. The characteristic angle (the angle at which the maximum of the distribution is reduced by a factor of 1/e) is then determined from the angular distribution and compared with experiment. Multiple-scattering foils tested herein include beryllium, carbon, aluminum, copper, and gold. For the default electron-photon transport settings, the calculated characteristic angle was statistically distinguishable from measurement and generally broader than the measured distributions. The average relative difference ranged from 5.8% to 12.2% over all of the foils, source energies, and physics settings tested. This validation illuminated a deficiency in the computation of the underlying angular distributions that is well understood. As a result, code enhancements were made to stabilize the angular distributions in the presence of very small substeps. However, the enhancement only marginally improved results, indicating that additional algorithmic details should be studied. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
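The characteristic-angle definition quoted above (the angle at which the distribution falls to 1/e of its maximum) is easy to make concrete. This small numpy sketch (my own illustration, not MCNP6 code) extracts it from a sampled angular distribution by linear interpolation; for a Gaussian profile of width σ the answer should be √2·σ:

```python
import numpy as np

def characteristic_angle(theta, f):
    """Angle at which the angular distribution falls to max/e.

    `theta` (degrees) and `f` (fluence) sample the distribution from the
    beam axis outward; linear interpolation locates the 1/e crossing.
    """
    target = f.max() / np.e
    # First index where the distribution has dropped below max/e.
    i = int(np.argmax(f < target))
    # Linear interpolation between the bracketing samples.
    t0, t1, f0, f1 = theta[i - 1], theta[i], f[i - 1], f[i]
    return t0 + (f0 - target) * (t1 - t0) / (f0 - f1)

# Gaussian multiple-scattering-like profile with sigma = 2 degrees:
# the 1/e half-width should be sqrt(2) * sigma ~ 2.83 degrees.
theta = np.linspace(0.0, 10.0, 1001)
f = np.exp(-theta**2 / (2 * 2.0**2))
angle = characteristic_angle(theta, f)
```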
29. Consistent neutron-physical and thermal-physical calculations of fuel rods of VVER type reactors.
- Author
-
Tikhomirov, Georgy, Saldikov, Ivan, Ternovykh, Mikhail, and Gerasimov, Alexander
- Subjects
NUCLEAR reactors ,ISOTOPIC analysis ,NUCLEAR fuel rods ,THERMAL conductivity ,ALGORITHMS - Abstract
Different algorithms and codes can be used to model the isotopic composition of fuel and the maximum temperatures at different moments in time. With the development of new types of fuel assemblies and progress in computer technology, it becomes important to increase the accuracy of modeling these characteristics of fuel assemblies during operation. Calculations of the neutron-physical characteristics of fuel rods are mainly based on models using averaged temperature, thermal conductivity factors, and heat power density. In this paper, a complex approach is presented, based on modern algorithms, methods, and codes, to solve the separate tasks of thermal conductivity, neutron transport, and nuclide transformation kinetics. It makes it possible to perform neutron-physical and thermal-physical calculations of the reactor with a detailed temperature distribution, taking into account temperature-dependent thermal conductivity and other characteristics. It was applied to studies of a fuel cell of the VVER-1000 reactor. When developing new algorithms and programs intended to improve the accuracy of modeling the isotopic composition and maximum temperature in the fuel rod, it is necessary to have a set of test tasks for verification. The proposed approach can be used to develop such a verification base for testing calculations of fuel rods of VVER-type reactors. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
30. Computational methods of Gaussian Particle Swarm Optimization (GPSO) and Lagrange Multiplier on economic dispatch issues (case study on electrical system of Java-Bali IV area).
- Author
-
Komsiyah, S.
- Subjects
ELECTRIC power production ,PARTICLE swarm optimization ,LAGRANGE multiplier ,ALGORITHMS ,STOCHASTIC processes ,PROBABILITY theory - Abstract
This paper addresses the economic dispatch problem of electric power generation: scheduling the outputs of the committed generating units so as to meet the required load demand at minimum operating cost while satisfying all unit and system equality and inequality constraints. In the operation of an electric power system, economic planning is one of the variables that must be considered, since economical planning yields greater efficiency in operational cost. In this paper, the economic dispatch problem, which has a nonlinear cost function, is solved using a swarm intelligence method, Gaussian Particle Swarm Optimization (GPSO), and the Lagrange multiplier method. GPSO is a population-based stochastic algorithm whose movement is inspired by swarm intelligence and probability theory. To analyze its accuracy, the economic dispatch solution obtained by the GPSO method is compared with the Lagrange multiplier method. The running tests show that the GPSO method gives a more economical planning calculation than the Lagrange multiplier method and reaches error convergence faster. Therefore, the GPSO method has better performance in finding the global best solution than the Lagrange method. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
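For quadratic generator cost curves, the Lagrange multiplier method that GPSO is benchmarked against has a closed form: set every unit's incremental cost dC_i/dP_i equal to a common λ and enforce the demand balance. A minimal sketch with hypothetical unit data (generator limits, which the full problem includes, are ignored here):

```python
def lagrange_dispatch(units, demand):
    """Equal-incremental-cost dispatch for quadratic cost curves.

    Each unit is (a, b, c) in C(P) = a + b*P + c*P**2. Setting
    dC_i/dP_i = b_i + 2*c_i*P_i = lambda for all units and enforcing
    sum(P_i) = demand gives a closed-form solution for lambda.
    """
    inv = [1.0 / (2.0 * c) for _, _, c in units]
    lam = (demand + sum(b / (2.0 * c) for _, b, c in units)) / sum(inv)
    return lam, [(lam - b) / (2.0 * c) for _, b, c in units]

# Two hypothetical units; demand of 300 MW.
units = [(100.0, 8.0, 0.01), (120.0, 9.0, 0.02)]
lam, P = lagrange_dispatch(units, 300.0)
```

The cheaper unit (lower incremental cost coefficient) takes the larger share of the load, and both units end up at the same incremental cost λ, which is exactly the optimality condition the abstract's Lagrange multiplier method enforces.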
31. Computationally efficient algorithm for sound speed imaging in pulse-echo ultrasound.
- Author
-
Karwat, Piotr
- Subjects
SPEED of sound ,COMPUTER simulation ,ALGORITHMS ,TISSUES ,ULTRASONIC imaging - Abstract
Precise information about the spatial distribution of sound speed in tissue has diagnostic value in itself, and also enables effective aberration correction in standard ultrasonic imaging. An algorithm called Computed Ultrasound Tomography in Echo mode (CUTE) makes it possible to reconstruct quantitative sound speed images. However, the computational cost is high, which is an obstacle to CUTE implementation in real-time imaging systems. This paper presents an improved version of the CUTE algorithm called Quick-CUTE (Q-CUTE). The CUTE algorithm uses the inverse transformation matrix to reconstruct the sound speed spatial distribution. The Q-CUTE algorithm is based on a simplified model with unified integration paths, which enables solving the inverse problem without the use of a large transformation matrix. The Q-CUTE algorithm was verified through numerical simulations. The obtained results differ from those of the CUTE algorithm but maintain the quantitative character of sound speed imaging. The computational complexity of the Q-CUTE algorithm is proportional to N, while in the case of CUTE it is proportional to N squared (where N is the number of pixels in the sound speed image). This means that the Q-CUTE algorithm allows quantitative sound speed imaging to operate in real time. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
32. Online watershed boundary delineation: sharing models through Spatial Data Infrastructures.
- Author
-
SQUIVIDANT, H., BERA, R., AUROUSSEAU, P., and CUDENNEC, C.
- Subjects
SPATIAL data infrastructures ,CARTOGRAPHIC services ,HYDROLOGICAL databases ,ALGORITHMS ,WATERSHED management - Abstract
The proposal in this paper is to make the hydrology analysis tools developed by our research team in past years accessible through an interoperable Spatial Data Infrastructure. To this aim we chose to develop add-ons for the geOrchestra OGC-compliant platform. Such add-ons trigger algorithms and retrieve their output in real time through the OGC standard WPS. We then introduce a watershed WPS add-on and its functioning modes. In so doing we exemplify the fact that the use of OGC standards makes it straightforward (and transparent to the user operating a common web browser) to remotely trigger a process on a distant server, then apply it to distant data present on a remote cartographic server, and drop the outcome onto a third-party cartographic server while visualizing it all in a browser. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
33. Adaptive wavelet schemes and finite volumes.
- Author
-
Del Sarto, Daniele, Deriaz, Erwan, Lhebrard, Xavier, and Rigal, Mathieu
- Subjects
WAVELETS (Mathematics) ,FINITE volume method ,COMMUNICATION ,ALGORITHMS ,NON-uniform flows (Fluid dynamics) - Abstract
Copyright of ESAIM: Proceedings & Surveys is the property of EDP Sciences and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2021
- Full Text
- View/download PDF
34. Simulation of complex free surface flows.
- Author
-
Benyo, Krisztian, Charhabil, Ayoub, Debyaoui, Mohamed-Ali, and Penel, Yohan
- Subjects
SHALLOW-water equations ,HYDROSTATIC pressure ,ALGORITHMS ,SHEAR waves ,NONLINEAR equations - Abstract
- Published
- 2021
- Full Text
- View/download PDF
35. Application of Algorithms for Placement of Orthogonal Polyhedrons for Solving the Problems of Packing Objects of Complex Geometric Shape.
- Author
-
Nadykto, A., Aleksic, N., Lima, P., Pivkin, P., Uvarova, L., Jiang, X., Zelensky, A., Chekanin, Vladislav, and Chekanin, Alexander
- Subjects
ALGORITHMS ,POLYHEDRA ,GEOMETRIC shapes ,NONLINEAR programming ,ORTHOGONAL polynomials - Abstract
The article is devoted to the development and study of algorithms for placing objects of complex geometric shape. To solve the placement problem, an approach is proposed that consists of transforming the shape of all objects and then applying the developed algorithm for placing orthogonal polyhedrons of arbitrary dimension to the resulting transformed objects. In the process of transforming the shape of the objects being placed, they are first voxelized, after which the developed decomposition algorithm is applied to the resulting voxelized objects, forming orthogonal polyhedrons consisting of the largest possible orthogonal objects. The proposed model of potential containers is used to describe the free space of containers as a set of orthogonal areas. The developed algorithm for the placement of orthogonal polyhedrons provides a fast solution to NP-hard problems of placing objects of complex geometric shape without resorting to time-consuming nonlinear programming methods. Examples of the practical application of the developed algorithms for modeling the dense layout of parts of complex geometric shape on the platform of a 3D printer are given. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
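The decomposition step described above, covering a voxelized object with the largest possible orthogonal pieces, can be illustrated in two dimensions. This greedy maximal-rectangle sketch is my own simplification, not the authors' algorithm, and makes no optimality guarantee:

```python
def decompose(grid):
    """Greedily cover a 2D occupancy grid with axis-aligned rectangles.

    A simplified stand-in for the decomposition step: repeatedly take
    the largest all-occupied, not-yet-covered rectangle anchored at the
    first uncovered cell (largest-area greedy, not guaranteed minimal).
    """
    rows, cols = len(grid), len(grid[0])
    covered = [[not cell for cell in row] for row in grid]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if covered[r][c]:
                continue
            # Grow the largest uncovered rectangle anchored at (r, c).
            best = (1, 1)
            width = cols - c
            for h in range(1, rows - r + 1):
                # Shrink width to the run of uncovered cells in this row.
                w = 0
                while w < width and not covered[r + h - 1][c + w]:
                    w += 1
                width = w
                if width == 0:
                    break
                if h * width > best[0] * best[1]:
                    best = (h, width)
            h, w = best
            for i in range(r, r + h):
                for j in range(c, c + w):
                    covered[i][j] = True
            boxes.append((r, c, h, w))  # (row, col, height, width)
    return boxes

# An L-shaped voxelized object decomposes into two rectangles.
L_shape = [[1, 0, 0],
           [1, 0, 0],
           [1, 1, 1]]
boxes = decompose(L_shape)
```

The L-shape here splits into a 3x1 column and a 1x2 bar; in 3D the same idea yields the boxes that make up an orthogonal polyhedron.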
36. Parallel in Time Algorithms for Nonlinear Iterative Methods.
- Author
-
Grigori, L., Japhet, C., Moireau, P., Gaja, Mustafa, and Gorynina, Olga
- Subjects
ALGORITHMS ,ITERATIVE methods (Mathematics) ,QUASISTATIC processes ,STRUCTURAL analysis (Engineering) ,HIGH performance computing - Abstract
- Published
- 2018
- Full Text
- View/download PDF
37. INDEPENDENT RETRIEVAL OF AEROSOL TYPE FROM LIDAR.
- Author
-
Nicolae, Doina, Vasilescu, Jeni, Talianu, Camelia, and Dandocsi, Alexandru
- Subjects
ATMOSPHERIC aerosols ,LIDAR ,ALGORITHMS ,ARTIFICIAL neural networks ,OPTICAL properties - Abstract
This paper presents an algorithm for aerosol typing from multiwavelength lidar data, based on Artificial Neural Networks. The aerosol model used to simulate optical properties for the training of the network is described. The algorithm is tested on real observations from ESA-CALIPSO database. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
38. New Algorithm of Seed Finding for Track Reconstruction.
- Author
-
Baranov, Dmitry, Merts, Sergei, Ososkov, Gennady, and Rogachevsky, Oleg
- Subjects
ALGORITHMS ,ENERGY measurement ,CHARGED particle accelerators ,KALMAN filtering ,CONTROL theory (Engineering) ,MAGNETIC fields ,MATHEMATICAL models ,EDUCATION - Abstract
Event reconstruction is a fundamental problem in high-energy physics experiments. It consists of track finding and track fitting procedures in the experiment's tracking detectors. This requires a tremendous search among detector responses belonging to each track, aimed at obtaining so-called "seeds", i.e. initial approximations of the track parameters of charged particles. In the paper we propose a new algorithm for the seed-finding procedure for the BM@N experiment. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
39. New Approach to the Simulation of Complex Systems.
- Author
-
Bogdanov, Alexander, Degtyarev, Alexander, and Korkhov, Vladimir
- Subjects
MATHEMATICAL programming ,ALGORITHMS ,COMPUTER engineering ,NUMERICAL calculations ,VECTOR algebra ,EDUCATION - Abstract
The paper analyzes the problems of scalability of modern computational systems, and offers a new paradigm for solving complex problems on them. It implies (1) Creating a virtual computing cluster with shared virtual memory, (2) Selecting a representation for the problem that minimizes the interaction between computing threads and (3) Configuring the virtual computer system for optimal mapping of the pertinent algorithm on it. Arguments for optimizing virtual clusters are given and test results on them are shown. We discuss the challenges that can be addressed most effectively within the framework of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
40. MINI-SYMPOSIUM ON AUTOMATIC DIFFERENTIATION AND ITS APPLICATIONS IN THE FINANCIAL INDUSTRY.
- Author
-
GEERAERT, SÉBASTIEN, LEHALLE, CHARLES-ALBERT, PEARLMUTTER, BARAK A., PIRONNEAU, OLIVIER, and REGHAI, ADIL
- Subjects
AUTOMATIC differentiation ,APPLIED mathematics ,MONTE Carlo method ,ALGORITHMS ,ALGEBRA software - Abstract
Automatic differentiation has long been used in applied mathematics as an alternative to finite differences to improve the accuracy of numerical computation of derivatives. Each time a numerical minimization is involved, automatic differentiation can be used. In between formal derivation and standard numerical schemes, this approach is based on software solutions that apply the chain rule mechanically to obtain an exact value for the desired derivative. It has a cost in memory and CPU consumption. For participants in financial markets (banks, insurers, financial intermediaries, etc.), computing derivatives is needed to obtain the sensitivity of their exposure to well-defined potential market moves. It is a way to understand variations of their balance sheets in specific cases. Since the 2008 crisis, regulation demands computing this kind of exposure for many different cases, to be sure that market participants are aware of and ready to face a wide spectrum of market configurations. This paper shows how automatic differentiation provides a partial answer to this recent explosion of computations to be performed. One part of the answer is a straightforward application of Adjoint Algorithmic Differentiation (AAD), but it is not enough. Since financial sensitivities involve specific functions and mix differentiation with Monte Carlo simulations, dedicated tools and associated theoretical results are needed. We give here short introductions to typical cases arising when one uses AAD on financial markets. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
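The mechanical chain-rule application described above is easy to demonstrate. The sketch below shows forward-mode automatic differentiation with dual numbers (the adjoint/reverse mode the paper focuses on propagates sensitivities backwards instead, but the chain-rule mechanics are the same in spirit); class and function names are illustrative:

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers.

    Carries (value, derivative) pairs and applies the chain rule
    mechanically, giving exact derivatives, unlike finite differences.
    """
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'.
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__

def derivative(f, x):
    # Seed the input with derivative 1 and read the derivative back out.
    return f(Dual(x, 1.0)).dot

# d/dx (x*x + 3*x) at x = 2 is 2*2 + 3 = 7.
d = derivative(lambda x: x * x + 3 * x, 2.0)
```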
41. Recent Results from the CMS Experiment at the LHC.
- Author
-
Isildak, Bora
- Subjects
QUANTUM chromodynamics ,HIGGS bosons ,PHYSICS research ,COMPACT muon solenoid experiment ,STANDARD model (Nuclear physics) ,ALGORITHMS - Abstract
Numerous studies on Jet Production, Vector Boson Production, V+Jets Production and Multi-Boson Production have been carried out by the Compact Muon Solenoid (CMS) Collaboration to test perturbative quantum chromodynamics (QCD) predictions, and to put more stringent constraints on PDFs (Parton Distribution Functions). In this paper, some of these experimental results will be presented, and their possible impacts on Higgs physics and new physics searches will be discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
42. HelioTrope: An innovative and efficient prototype for solar power production.
- Author
-
Papageorgiou, George, Maimaris, Athanasios, Hadjixenophontos, Savvas, and Ioannou, Petros
- Subjects
SOLAR energy ,PHOTOVOLTAIC power generation ,SOLAR technology ,PROTOTYPES ,STIRLING engines ,ALGORITHMS ,CIRCULAR motion - Abstract
The solar energy alternative could provide us with all the energy we need, as it exists in vast quantities all around us. We need only be innovative enough to improve the efficiency of our systems in capturing and converting solar energy into usable forms of power. By making a case for the solar energy alternative, we identify areas where efficiency can be improved and where solar energy can thereby become a competitive energy source. This paper suggests an innovative approach to solar energy power production, manifested in a prototype named HelioTrope. The HelioTrope solar energy production prototype is tested on its capability to efficiently convert solar energy into electricity and other forms of energy for storage or direct use. HelioTrope involves an innovative Stirling engine design and a parabolic concentrating dish with a sun-tracking system implementing a control algorithm to maximize the capture of solar energy. Further, it utilizes a patent developed by the authors in which a mechanism is designed for the transmission of reciprocating motion of variable amplitude into unidirectional circular motion. This is employed in our prototype for converting linear reciprocating motion into circular motion for electricity production, which gives a significant increase in efficiency and reduces maintenance costs. Preliminary calculations indicate that the HelioTrope approach constitutes a competitive solution for solar power production. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
43. New target detection algorithms for volumetric synthetic aperture sonar data.
- Author
-
Williams, David P. and Brown, Daniel C.
- Subjects
ALGORITHMS ,SYNTHETIC aperture sonar ,SEDIMENTS ,ACOUSTICS ,PHYSICS - Abstract
A fast algorithm for the automated detection of buried and proud objects in three-dimensional (3-d) volumetric synthetic aperture sonar (SAS) imagery is proposed. The method establishes the positions of underwater targets by finding localized volumes of strong acoustic returns on or within the sediment. The algorithm relies on an important data-normalization step that is grounded in principled physics-based arguments, and it greatly reduces the amount of data that must be passed to a follow-on classification stage. The promise of the approach is demonstrated for man-made objects present in real, measured SAS data cubes collected at multiple aquatic sites by an experimental volumetric sonar system. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
44. Retrieval of Aerosol Optical Properties Based on High Spectral Resolution Lidar.
- Author
-
Liu, D., Wang, Y., Wu, Y., Gross, B., Moshary, F., Xiao, Da, Zhong, Tianfen, Shen, Xue, Wang, Nanchao, Rong, Yuhang, Liu, Chong, Zhang, Yupeng, and Liu, Dong
- Subjects
AEROSOLS ,OPTICAL properties ,DOPPLER lidar ,CLIMATE research ,REMOTE sensing ,ALGORITHMS - Abstract
The detection of clouds and aerosols is important for climate research. Lidar has been widely used in atmospheric remote sensing research because of its high spatial and temporal resolution and ability to detect profiles. High spectral resolution lidar (HSRL) accurately calculates the optical properties of aerosols and clouds without relying on any assumptions. Based on the 532 nm iodine HSRL system, the lidar ratio of the urban aerosol in Hangzhou is 40–50 sr, and the average lidar ratio of the cirrus is 24.79 sr, demonstrating that the HSRL system and retrieval algorithms accurately obtain the optical properties of clouds and aerosols. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
45. Long-Term Analyses of Aerosol Optical Thickness Using Caliop.
- Author
-
Liu, D., Wang, Y., Wu, Y., Gross, B., Moshary, F., Fujikawa, Masahiro, Kudo, Rei, Nishizawa, Tomoaki, Oikawa, Eiji, Higurashi, Akiko, and Okamoto, Hajime
- Subjects
AEROSOLS ,ALGORITHMS ,SPECTRORADIOMETER ,OPTICAL properties ,SOOT - Abstract
We developed an algorithm to derive extinction coefficients for four aerosol components (water-soluble, dust, sea salt, black carbon) from Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) data. The algorithm was applied to the nine-year data for 2007–2015 and the results were compared to CALIOP standard product (CALIOP-ST) and MODerate resolution Imaging Spectroradiometer (MODIS) standard product (MODIS-ST). Comparisons of the total aerosol optical thickness (AOT) showed that MODIS-ST was the largest, followed by CALIOP-ST (Ver.4), and our product. CALIOP-ST (Ver.3) showed a similar magnitude to ours. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
46. Renormalization Approach to the Gribov Process: Numerical Evaluation of Critical Exponents in Two Subtraction Schemes.
- Author
-
Adam, Gh., Buša, J., Hnatič, M., Adzhemyan, Loran Ts., Hnatič, Michal, Ivanova, Ella, Kompaniets, Mikhail V., Lučivjanský, Tomáš, and Mižišin, Lukáš
- Subjects
RENORMALIZATION (Physics) ,NUMERICAL calculations ,INTEGRALS ,DOCUMENT clustering ,ALGORITHMS - Abstract
We study universal quantities characterizing the second order phase transition in the Gribov process. To this end, we use numerical methods for the calculation of the renormalization group functions up to two-loop order in perturbation theory in the famous ε-expansion. Within this procedure the anomalous dimensions are evaluated using two different subtraction schemes: the minimal subtraction scheme and the null-momentum scheme. Numerical calculation of integrals was done on the HybriLIT cluster using the Vegas algorithm from the CUBA library. The comparison with existing analytic calculations shows that the minimal subtraction scheme yields more precise results. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
47. Geometry description and mesh construction from medical imaging.
- Author
-
Calvez, Vincent, Grandmont, Céline, Lochërbach, Eva, Poignard, Clair, Ribot, Magali, Vauchelet, Nicolas, Carlino, Michele Giuliano, Ricka, Philippe, Phan, Minh, Bertoluzza, Silvia, Pennacchio, Micol, Patanè, Giuseppe, and Spagnuolo, Michela
- Subjects
IMAGE segmentation ,MESH networks ,ALGORITHMS ,MEDICAL imaging systems ,COMPUTER simulation - Abstract
- Published
- 2020
- Full Text
- View/download PDF
48. Updating the subsidence map of Emilia-Romagna region (Italy) by integration of SAR interferometry and GNSS time series: the 2011–2016 period.
- Author
-
Bitelli, Gabriele, Bonsignore, Flavio, Del Conte, Sara, Franci, Francesca, Lambertini, Alessandro, Novali, Fabrizio, Severi, Paolo, and Vittuari, Luca
- Subjects
TIME series analysis ,INTERFEROMETRY ,LAND subsidence ,ALGORITHMS ,IMAGE processing ,SODIC soils - Abstract
The analysis of the vertical movements of the soil in the Po River plain of the Emilia-Romagna Region (Italy) was updated through an interferometric analysis referred to the 2011–2016 time span. This activity is a continuation of previous studies on the state of knowledge of vertical soil movements in the same area, analyzed first by levelling and GNSS and more recently by SAR interferometry for the periods 1992–2000, 2002–2006, and 2006–2011, on behalf of the Emilia-Romagna Region. The survey area analysed was approximately 13 300 km², which corresponds to the territory of the regional plain. The interferometric dataset was calibrated through the use of velocity time series of several permanent GNSS stations. Among the 36 stations analysed, 22 were included in the study area (16 were used for the calibration and 6 as check points). The velocities required for the calibration of the SAR analysis were calculated in the period following the important seismic events that struck the territory of the Emilia-Romagna Region in May 2012. The interferometric analysis was carried out by TRE ALTAMIRA using the SqueeSAR™ technology. In particular, in order to update the interferometric dataset to 2016, it was necessary to perform a joint processing of the available RADARSAT-1 data and of the data acquired by the RADARSAT-2 satellite using a specific operating mode of the SqueeSAR™ algorithm known as stitching; this approach allowed the joint processing of images acquired in the same geometry by these two satellites. The study of the time series of the GNSS permanent stations used to provide the velocity datum to the interferometric analysis is described, and the results of the SqueeSAR™ interferometric processing are reported. Statistical analyses on the spatial distribution and the type of scatterers were performed during the screening and validation procedures of the dataset, and for the identification and removal of outliers.
Finally, the resulting map is described in order to analyse the measured soil movements with respect to the results obtained in past analyses, and the possible geological and human-induced causes, which could have produced them. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
49. Study of Neural Network Size Requirements for Approximating Functions Relevant to HEP.
- Author
-
Stietzel, Jessica, Lannon, Kevin, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
PARTICLE physics ,ARTIFICIAL neural networks ,DATA analysis ,INFORMATION retrieval ,ALGORITHMS - Abstract
A new event data format has been designed and prototyped by the CMS collaboration to satisfy the needs of a large fraction of physics analyses (at least 50%) with a per event size of order 1 kB. This new format is more than a factor of 20 smaller than the MINIAOD format and contains only top level information typically used in the last steps of the analysis. The talk will review the current analysis strategy from the point of view of event format in CMS (both skims and formats such as RECO, AOD, MINIAOD, NANOAOD) and will describe the design guidelines for the new NANOAOD format. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
50. Belle II Track Reconstruction and Results from first Collisions.
- Author
-
Hauth, Thomas, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
COLLISIONS (Nuclear physics) ,PARTICLE detectors ,LUMINOSITY ,IMAGE reconstruction ,ALGORITHMS - Abstract
In April 2018, e+e- collisions of the SuperKEKB B-Factory were recorded by the Belle II detector in Tsukuba (Japan) for the first time. The new accelerator and detector represent a major upgrade from the previous Belle experiment and will achieve a 40 times higher instantaneous luminosity. Special considerations and challenges arise for track reconstruction at Belle II due to multiple factors. The high-luminosity configuration of the collider increases the beam-induced background many-fold compared to Belle, and a new track reconstruction software has been developed from scratch to achieve excellent physics performance in this busy environment. Even though on average only eleven signal tracks are present in one event, all of them need to be reconstructed down to a transverse momentum of 50 MeV, and no fake tracks should be present in the event. Many analyses at Belle II rely on the advantage that the initial state in B-factories is well known, and a clean event reconstruction is possible if no tracks are left after assigning all tracks to particle hypotheses. This contribution will introduce the concepts and algorithms of the Belle II tracking software. Special emphasis will be put on the mitigation techniques developed to perform track reconstruction in high-occupancy events. First results from data-taking with the Belle II detector will be presented. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF