42,210 results for "smoothing"
Search Results
2. Prediction of macroeconomic variables of Pakistan: Combining classic and artificial network smoothing methods
- Author
-
Sabri, Rabia, Tabash, Mosab I., Rahrouh, Maha, Alnaimat, Bayan Habis, Ayubi, Sharique, and AsadUllah, Muhammad
- Published
- 2023
- Full Text
- View/download PDF
3. Data modelling recipes for SARS-CoV-2 wastewater-based epidemiology
- Author
-
Rauch, Wolfgang, Schenk, Hannes, Insam, Heribert, Markt, Rudolf, and Kreuzinger, Norbert
- Published
- 2022
- Full Text
- View/download PDF
4. Simultaneous Clustering and Dimension Reduction of Functional Data
- Author
-
Rocci, Roberto, Gattone, S. Antonio, Pollice, Alessio, editor, and Mariani, Paolo, editor
- Published
- 2025
- Full Text
- View/download PDF
5. Efficient Parametric Tests in Semiparametric Regression with Differential Regularization
- Author
-
Cavazzutti, Michele, Arnone, Eleonora, Ferraccioli, Federico, Finos, Livio, Sangalli, Laura M., Pollice, Alessio, editor, and Mariani, Paolo, editor
- Published
- 2025
- Full Text
- View/download PDF
6. Uncertainty Reduction in a Class of Dependent Dirichlet Processes
- Author
-
Ascolani, Filippo, Damato, Stefano, Ruggiero, Matteo, Pollice, Alessio, editor, and Mariani, Paolo, editor
- Published
- 2025
- Full Text
- View/download PDF
7. Constrained Global Optimization by Smoothing
- Author
-
Norkin, Vladimir, Pichler, Alois, Kozyriev, Anton, Sergeyev, Yaroslav D., editor, Kvasov, Dmitri E., editor, and Astorino, Annabella, editor
- Published
- 2025
- Full Text
- View/download PDF
8. Theory of Nonsmooth Optimization
- Author
-
Bagirov, Adil, Karmitsa, Napsu, and Taheri, Sona
- Published
- 2025
- Full Text
- View/download PDF
9. Combining Frequency-Based Smoothing and Salient Masking for Performant and Imperceptible Adversarial Samples
- Author
-
Soares de Souza, Amon, Meißner, Andreas, Geierhos, Michaela, Antonacopoulos, Apostolos, editor, Chaudhuri, Subhasis, editor, Chellappa, Rama, editor, Liu, Cheng-Lin, editor, Bhattacharya, Saumik, editor, and Pal, Umapada, editor
- Published
- 2025
- Full Text
- View/download PDF
10. Leveraging Exponential Smoothing for Time Series Analysis of Wireless Sensor Networks
- Author
-
Alam, Intekhab, Ojha, Ananta, Verma, Tushar K., Venkatachalam, Amirtha Preeya, Kumar, Amit, editor, Gunjan, Vinit Kumar, editor, Senatore, Sabrina, editor, and Hu, Yu-Chen, editor
- Published
- 2025
- Full Text
- View/download PDF
11. Identifying the Roles of Accounting Accruals in Corporate Financial Reporting.
- Author
-
Dutta, Sunil, Patatoukas, Panos N., and Wang, Annika Yu
- Subjects
CORPORATE accounting, ACCRUAL basis accounting, CASH flow, AMORTIZATION, DEPRECIATION - Abstract
Research in corporate financial reporting identifies two important roles of accounting accruals. First, accruals smooth fluctuations in operating cash flows. Second, accruals allow recognition of losses in an asymmetric timely manner. While these two roles imply different relations between individual accrual components and operating cash flow news, prior research often focuses on the properties of aggregate accruals. We investigate the role of individual accrual components and identify asymmetry in the relation of investment with operating cash flow news as a confounding factor. We show that this investment factor operates through depreciation and amortization accruals, which typically account for the bulk of aggregate accruals. Overall, our article demonstrates the importance of adopting a granular approach to identifying the different roles of individual accrual components. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Right-censored nonparametric regression with measurement error
- Author
-
Aydın, Dursun, Yılmaz, Ersin, Chamidah, Nur, Lestari, Budi, and Budiantara, I. Nyoman
- Subjects
PARAMETER estimation, STATISTICAL smoothing, PROBLEM solving, RESEARCH personnel, REGRESSION analysis, MEASUREMENT errors - Abstract
This study focuses on estimating a nonparametric regression model with right-censored data when the covariate is subject to measurement error. To achieve this goal, it is necessary to solve the problems of censorship and measurement error, which are ignored by many researchers. Note that the presence of measurement errors causes biased and inconsistent parameter estimates. Moreover, nonparametric regression techniques cannot be applied directly to right-censored observations. In this context, we consider an updated response variable using the Buckley–James method (BJM), which is essentially based on the Kaplan–Meier estimator, to solve the censorship problem. Then the measurement error problem is handled using the kernel deconvolution method, which is a specialized tool for this problem. Accordingly, three deconvoluted estimators based on BJM are introduced using kernel smoothing, local polynomial smoothing, and B-spline techniques that incorporate both the updated response variable and kernel deconvolution. The performances of these estimators are compared in a detailed simulation study. In addition, a real-world data example is presented using the Covid-19 dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
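Of the three smoothing techniques named in this abstract, kernel smoothing is the simplest to illustrate. The sketch below is a plain Nadaraya–Watson kernel regression with a Gaussian kernel, not the deconvoluted Buckley–James estimator of the paper; the function name and toy data are hypothetical.

```python
import math

def nadaraya_watson(x_train, y_train, x, bandwidth=1.0):
    """Nadaraya-Watson kernel regression at point x with a
    Gaussian kernel: a weighted average of the responses, with
    weights decaying in the distance from x."""
    weights = [math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in x_train]
    total = sum(weights)
    if total == 0.0:
        raise ValueError("all kernel weights vanished; widen the bandwidth")
    return sum(w * yi for w, yi in zip(weights, y_train)) / total

# Toy data on the line y = 2x; the smoother recovers the line's
# value at an interior point.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x for x in xs]
est = nadaraya_watson(xs, ys, 2.0, bandwidth=0.5)
```

The bandwidth plays the same role as the smoothing parameter in the other two estimators: smaller values track the data, larger values flatten it.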
13. Pitfalls of the E‐Ref Procedure: Tie Values and the Proportion of the Abnormal Data.
- Author
-
Tachiyama, Keisuke, Kanbayashi, Takamichi, Kawabata, Akiko, Hoshino, Satoshi, Miyaji, Yosuke, Kobayashi, Shunsuke, Maruyama, Hirofumi, and Sonoo, Masahiro
- Subjects
CARPAL tunnel syndrome, REFERENCE values, MEDIAN nerve, DATABASES, PROBLEM solving - Abstract
Introduction: The extrapolated reference values (E‐Ref) procedure is a new method for determining a cutoff value without collecting control data. We tried to apply this method to determine the cutoff value for the distal motor latency of the median nerve (median DML). During this process, we found two pitfalls of the E‐Ref method. First, the E‐Ref procedure did not work correctly when the DML values, measured with 0.1 ms accuracy, frequently took on tie values. Second, the result was influenced by the proportion of abnormal values. This study investigated these issues. Methods: Data of the median DML were extracted from our laboratory database. To solve the problem of tie values, we tried a wider post‐smoothing window in the original E‐Ref method. We also devised a modified method conducting pre‐smoothing. To see the effect of the proportion of abnormal data, we simulated many datasets having different proportions of abnormal data. Results: In total, 1016 DML values were identified. False deflections due to tie values were often identified as the E‐Ref point using the original methods, even with a wider window, resulting in unrealistically low values. The modified method was free from this drawback. For all methods, the E‐Ref value increased as the proportion of abnormal values increased. Discussion: The problem of tie values, a pitfall of the E‐Ref method, might be solved by pre‐smoothing the data. The E‐Ref value is influenced by the proportion of normal data, and datasets containing less than 20% abnormal data may achieve appropriate results. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
14. A Martingale-Free Introduction to Conditional Gaussian Nonlinear Systems.
- Author
-
Andreou, Marios and Chen, Nan
- Subjects
DATA assimilation, MARGINAL distributions, GAUSSIAN distribution, NONLINEAR systems, ATMOSPHERIC models - Abstract
The conditional Gaussian nonlinear system (CGNS) is a broad class of nonlinear stochastic dynamical systems. Given the trajectories for a subset of state variables, the remaining follow a Gaussian distribution. Despite the conditionally linear structure, the CGNS exhibits strong nonlinearity, thus capturing many non-Gaussian characteristics observed in nature through its joint and marginal distributions. Desirably, it enjoys closed analytic formulae for the time evolution of its conditional Gaussian statistics, which facilitate the study of data assimilation and other related topics. In this paper, we develop a martingale-free approach to improve the understanding of CGNSs. This methodology provides a tractable approach to proving the time evolution of the conditional statistics by deriving results through time discretization schemes, with the continuous-time regime obtained via a formal limiting process as the discretization time-step vanishes. This discretized approach further allows for developing analytic formulae for optimal posterior sampling of unobserved state variables with correlated noise. These tools are particularly valuable for studying extreme events and intermittency and apply to high-dimensional systems. Moreover, the approach improves the understanding of different sampling methods in characterizing uncertainty. The effectiveness of the framework is demonstrated through a physics-constrained, triad-interaction climate model with cubic nonlinearity and state-dependent cross-interacting noise. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
15. A generalized least-squares filter designed for GNSS data processing.
- Author
-
Hou, Pengyu and Zhang, Baocheng
- Abstract
The Kalman filter stands as one of the most widely used methods for recursive parameter estimation. However, its standard formulation typically assumes that all state parameters have initial values and dynamic models, an assumption that may not always hold in certain applications, particularly in global navigation satellite system (GNSS) data processing. To address this issue, Teunissen et al. (2021) introduced a generalized Kalman filter that eliminates the need for initial values and allows linear functions of parameters to have dynamic models. This work proposes a least-squares approach to reformulate the generalized Kalman filter, enhancing its applicability to GNSS data processing when the parameter dimension varies with satellite visibility changes. The reformulated filter, named the generalized least-squares filter, is equivalent to the generalized Kalman filter when all state parameters are recursively estimated. In this case, we demonstrate how both the generalized Kalman filter and the generalized least-squares filter adaptively manage newly introduced or removed parameters. Specifically, when estimation is limited to parameters with dynamic models, the generalized least-squares filter extends the generalized Kalman filter by allowing the dimension of estimated parameters to vary over time. Moreover, we introduce a new element of least-squares smoothing, creating a comprehensive system for prediction, filtering, and smoothing. To verify, we conduct simulated kinematic and vehicle-borne kinematic GNSS positioning using the proposed generalized least-squares filter and compare the results with those from the standard Kalman filter. Our findings show that the generalized least-squares filter delivers better results, maintaining positioning errors at the centimeter level, whereas the Kalman-filter-based positioning errors exceed several decimeters in some epochs due to improper initial values and dynamic models. Moreover, the normal equation reduction strategy employed in the generalized least-squares filter improves computational efficiency by 23% and 32% in simulated kinematic and vehicle-borne kinematic positioning, respectively. The generalized least-squares filter also allows for flexible adjustment of smoothing window lengths, facilitating successful ambiguity resolution in several epochs. In conclusion, the proposed generalized least-squares filter offers flexibility for various GNSS data processing scenarios, ensuring both theoretical rigor and computational efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
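The prediction–filtering–smoothing pipeline this abstract refers to can be illustrated with the textbook scalar case. The sketch below runs a random-walk Kalman filter followed by a Rauch–Tung–Striebel (RTS) smoothing pass. It is not the generalized least-squares filter of the paper; in fact it requires exactly the initial values (x0, p0) and dynamic model (q) whose absence that filter is designed to handle. All names and values are illustrative.

```python
def kalman_rts(zs, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter followed by a
    Rauch-Tung-Striebel (RTS) smoothing pass.  zs are the
    measurements; q and r are process and measurement variance."""
    xs, ps, x_pred, p_pred = [], [], [], []
    x, p = x0, p0
    for z in zs:
        # predict: random walk carries the state, variance grows by q
        xp, pp = x, p + q
        # update: blend prediction and measurement via the Kalman gain
        gain = pp / (pp + r)
        x = xp + gain * (z - xp)
        p = (1.0 - gain) * pp
        xs.append(x); ps.append(p)
        x_pred.append(xp); p_pred.append(pp)
    # backward pass: revise each estimate using the whole record
    xs_smooth = xs[:]
    for i in range(len(zs) - 2, -1, -1):
        g = ps[i] / p_pred[i + 1]
        xs_smooth[i] = xs[i] + g * (xs_smooth[i + 1] - x_pred[i + 1])
    return xs_smooth

smoothed = kalman_rts([1.0, 1.2, 0.9, 1.1, 1.0])
```

The forward pass conditions each estimate on past data only; the backward pass conditions on the entire record, which is what "smoothing" means in this filtering context.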
16. Almost sure asymptotic representation for the conditional copula estimator.
- Author
-
Veraverbeke, Noël
- Subjects
DESIGN - Abstract
This note establishes the almost sure asymptotic representation for the conditional copula estimator based on multivariate observations and a random covariate. Some auxiliary results on almost sure convergence are of independent interest. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Pre-processing Applied to Instrumental Data in Analytical Chemistry: A Brief Review of the Methods and Examples.
- Author
-
Dayananda, B., Owen, S., Kolobaric, A., Chapman, J., and Cozzolino, D.
- Subjects
ANALYTICAL chemistry techniques, NEAR infrared spectroscopy, ANALYTICAL chemistry, CHEMOMETRICS, CHROMATOGRAPHIC analysis - Abstract
The field of analytical chemistry has been significantly advanced by the availability of state-of-the-art instrumentation, allowing for the development of novel applications in this field. However, in many cases, the direct interpretation of the recorded data is often not straightforward, hence some level of pre-processing is required (e.g., baseline correction, derivatives, normalization, smoothing). These techniques have become a critical first step for the successful analysis of the data recorded, and it is recommended to use them before the application of chemometrics (e.g., classification, calibration development). The aim of this paper is to provide with an overview of the most used pre-processing methods applied to instrumental analytical methods (e.g., spectroscopy, chromatography). Examples of their application in near infrared and UV-VIS spectroscopy as well as in gas chromatography will be also discussed. Overall, this paper provides with a comprehensive understanding of pre-processing techniques in analytical chemistry, highlighting their importance during the analysis and interpretation of data, as well as during the development of accurate and reliable chemometric models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
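Among the pre-processing steps this review lists (baseline correction, derivatives, normalization, smoothing), smoothing is commonly done in spectroscopy with a Savitzky–Golay filter. The sketch below hard-codes the classic 5-point quadratic/cubic kernel (-3, 12, 17, 12, -3)/35; it is a generic illustration, not a method taken from the paper.

```python
def savgol5(y):
    """Savitzky-Golay smoothing with the classic 5-point
    quadratic/cubic kernel (-3, 12, 17, 12, -3)/35.
    The first and last two samples are left untouched."""
    c = (-3, 12, 17, 12, -3)
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(ci * y[i + j - 2] for j, ci in enumerate(c)) / 35.0
    return out

# The filter reproduces polynomials up to degree 3 exactly in the
# interior, so a straight line passes through unchanged.
line = [float(i) for i in range(7)]
smoothed_line = savgol5(line)
```

Unlike a plain moving average, the polynomial fit preserves peak heights better, which is why this filter is the default choice for spectra.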
18. Filtering coupled Wright–Fisher diffusions.
- Author
-
Boetti, Chiara and Ruggiero, Matteo
- Abstract
Coupled Wright–Fisher diffusions have been recently introduced to model the temporal evolution of finitely-many allele frequencies at several loci. These are vectors of multidimensional diffusions whose dynamics are weakly coupled among loci through interaction coefficients, which make the reproductive rates for each allele depend on its frequencies at several loci. Here we consider the problem of filtering a coupled Wright–Fisher diffusion with parent-independent mutation, when this is seen as an unobserved signal in a hidden Markov model. We assume individuals are sampled multinomially at discrete times from the underlying population, whose type configuration at the loci is described by the diffusion states, and adapt recently introduced duality methods to derive the filtering and smoothing distributions. These respectively provide the conditional distribution of the diffusion states given past data, and that conditional on the entire dataset, and are key to be able to perform parameter inference on models of this type. We show that for this model these distributions are countable mixtures of tilted products of Dirichlet kernels, and describe their mixing weights and how these can be updated sequentially. The evaluation of the weights involves the transition probabilities of the dual process, which are not available in closed form. We lay out pseudo codes for the implementation of the algorithms, discuss how to handle the unavailable quantities, and briefly illustrate the procedure with synthetic data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Pre-r/l breaking in English and the diphthongal bias.
- Author
-
Starčević, Attila
- Subjects
EARLY modern history, HISTORICAL linguistics, ENGLISH language, PHONOLOGY, VOWELS - Abstract
We analyse j/w + r sequences in the history of Early Modern English (EMoE), the predecessor of Standard (Reference) British English (SSBE) and its most current version, Current British English (CUBE), to arrive at the nature of historical pre-r (and also pre-l) breaking, a process (together with r-deletion and smoothing) responsible for the phonemic contrast between schwa-final and any other diphthongs (and long vowels): bear ɛə vs bay ej (or eɪ), lore ɔə vs law ɔː vs low ow at the beginning of the twentieth century, found as bɛː vs bɛj, loː law/lore vs ləw low in CUBE. The main thrust of the argument presented here is that (i) (historical) diphthongs and the long high monophthongs are uniformly represented as vowel+glide sequences, giving 'bias' to the title, and (ii) breaking is consonant prevocalisation (CP) of r/ɫ in j/w + r/ɫ sequences (lejr > lejər lair, fajɫ > fajəɫ file). This is followed by r-deletion and smoothing (leər > lɛə > lɛː lair), which are unrelated to breaking 'proper'. The analysis of breaking as consonant prevocalisation builds on the framework developed by Operstein (2010), with earlier precursors in articulatory phonology, historical linguistics (e.g. Howell 1991a, 1991b), and frameworks using privative melodic features, such as government phonology (e.g. Kaye, Lowenstamm & Vergnaud 1985), dependency phonology (Anderson & Ewen 1987), particle phonology (Schane 1984), element theory (Backley 2011), etc. The article introduces the theoretical background in section 1, followed in section 2 by a discussion of breaking in the history of English. We then look at a classical interpretation of breaking and discuss its shortcomings in section 3. Section 4 shows why the long high monophthongs are better analysed as diphthongs, and section 5 how these vowels fit into the bigger canvas of the diphthongs. Sections 6 and 7 take us to the analysis of breaking as CP happening in jr, wr and jɫ sequences. In section 8 we look at CP undone in the later history of the language and some of the consequences for earlier English that follow from the modern distribution of j/w + r sequences. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Development of a Comprehensive Model of Assessment of Surface Roughness after Surface Plastic Deformation.
- Author
-
Rastorguev, D. A., Bobrovskii, I. N., and Bobrovskii, N. M.
- Abstract
The article discusses the methodology for constructing a mathematical model based on machine learning methods for predicting roughness of the machined surface after smoothing and rolling. The proposed approach enables adjustment for various machining process conditions (workpiece material, tool type and parameters, machining modes). The technique also provides the option to retrain the model on new data for different machining conditions, which ensures the maximum degree of generalization of the predictive model. A technique for preprocessing input parameters depending on their type is presented. Two approaches to compilation of the training dataset are considered: (1) based on experimental data and (2) based on theoretical dependencies. Several different approaches to generation of the generalized model are investigated: individual models, including bootstrap-based models (linear regression, support vector machines, decision trees, Gaussian process regression), and ensemble methods based on bagging (boosted trees). The results confirm the applicability of the proposed approach to creation of generalized models that simplify design planning of technological processes in the conditions of multi-product manufacturing, which entails high variety of both input process conditions and machined workpieces. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. PM2.5 Time Series Imputation with Moving Averages, Smoothing, and Linear Interpolation.
- Author
-
Flores, Anibal, Tito-Chura, Hugo, Cuentas-Toledo, Osmar, Yana-Mamani, Victor, and Centty-Villafuerte, Deymor
- Subjects
LONG short-term memory, MOVING average process, TIME series analysis, PARTICULATE matter, STATISTICAL models, MISSING data (Statistics) - Abstract
In this work, a novel model for hourly PM2.5 time series imputation is proposed for the estimation of missing values in different gap sizes, including 1, 3, 6, 12, and 24 h. The proposed model is based on statistical techniques such as moving averages, linear interpolation smoothing, and linear interpolation. For the experimentation stage, two datasets were selected in Ilo City in southern Peru. Also, five benchmark models were implemented to compare the proposed model results; the benchmark models include exponential weighted moving average (EWMA), autoregressive integrated moving average (ARIMA), long short-term memory (LSTM), gated recurrent unit (GRU), and bidirectional GRU (BiGRU). The results show that, in terms of average MAPEs, the proposed model outperforms the best deep learning model (GRU) between 26.61% and 90.69%, and the best statistical model (ARIMA) between 2.33% and 6.67%. So, the proposed model is a good alternative for the estimation of missing values in PM2.5 time series. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
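The linear-interpolation component of such an imputation model can be sketched as follows. The function fills internal gaps (marked None) by interpolating between the nearest observed neighbours; it is a minimal stand-in, not the authors' combined moving-average model, and the sample readings are invented.

```python
def fill_gaps(series):
    """Fill internal runs of None by linear interpolation between
    the nearest observed neighbours on either side of the gap."""
    filled = list(series)
    n = len(filled)
    i = 0
    while i < n:
        if filled[i] is None:
            j = i
            while j < n and filled[j] is None:
                j += 1  # find the right edge of the gap
            if i == 0 or j == n:
                raise ValueError("cannot interpolate a boundary gap")
            left, right = filled[i - 1], filled[j]
            span = j - (i - 1)
            for k in range(i, j):
                filled[k] = left + (right - left) * (k - (i - 1)) / span
            i = j
        else:
            i += 1
    return filled

# A 2-hour gap in hypothetical hourly PM2.5 readings.
hourly = [10.0, None, None, 16.0, 15.0]
filled = fill_gaps(hourly)
```

The abstract's gap sizes (1 to 24 h) correspond to the length of the None run; pure interpolation degrades as gaps grow, which is where the moving-average components of the proposed model come in.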
22. Supervised learning via ensembles of diverse functional representations: the functional voting classifier.
- Author
-
Riccio, Donato, Maturo, Fabrizio, and Romano, Elvira
- Abstract
Many conventional statistical and machine learning methods face challenges when applied directly to high dimensional temporal observations. In recent decades, Functional Data Analysis (FDA) has gained widespread popularity as a framework for modeling and analyzing data that are, by their nature, functions in the domain of time. Although supervised classification has been extensively explored in recent decades within the FDA literature, ensemble learning of functional classifiers has only recently emerged as a topic of significant interest. Thus, the latter subject presents unexplored facets and challenges from various statistical perspectives. The focal point of this paper lies in the realm of ensemble learning for functional data and aims to show how different functional data representations can be used to train ensemble members and how base model predictions can be combined through majority voting. The so-called Functional Voting Classifier (FVC) is proposed to demonstrate how different functional representations leading to augmented diversity can increase predictive accuracy. Many real-world datasets from several domains are used to display that the FVC can significantly enhance performance compared to individual models. The framework presented provides a foundation for voting ensembles with functional data and can stimulate a highly encouraging line of research in the FDA context. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Smooth digital terrain modeling in irregular domains using finite element thin plate splines and adaptive refinement
- Author
-
Lishan Fang
- Subjects
smoothing, thin plate spline, mixed finite element, adaptive mesh refinement, digital terrain model, Mathematics, QA1-939 - Abstract
Digital terrain models (DTMs) are created using elevation data collected in geological surveys using varied sampling techniques like airborne lidar and depth soundings. This often leads to large data sets with different distribution patterns, which may require smooth data approximations in irregular domains with complex boundaries. The thin plate spline (TPS) interpolates scattered data and produces visually pleasing surfaces, but it is too computationally expensive for large data sizes. The finite element thin plate spline (TPSFEM) possesses smoothing properties similar to those of the TPS and interpolates large data sets efficiently. This article investigated the performance of the TPSFEM and adaptive mesh refinement in irregular domains. Boundary conditions are critical for the accuracy of the solution in domains with arbitrary-shaped boundaries and were approximated using the TPS with a subset of sampled points. Numerical experiments were conducted on aerial, terrestrial, and bathymetric surveys. It was shown that the TPSFEM works well in square and irregular domains for modeling terrain surfaces and adaptive refinement significantly improves its efficiency. A comparison of the TPSFEM, TPS, and compactly supported radial basis functions indicates its competitiveness in terms of accuracy and cost.
- Published
- 2024
- Full Text
- View/download PDF
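For reference, the smoothing functional that the thin plate spline minimizes in two dimensions (standard form; the TPSFEM of this article discretizes the same functional with finite elements) is

```latex
\min_{f} \; \sum_{i=1}^{n} \bigl( z_i - f(x_i, y_i) \bigr)^2
  \;+\; \alpha \iint \left( f_{xx}^2 + 2 f_{xy}^2 + f_{yy}^2 \right) \, dx \, dy
```

where the z_i are the sampled elevations at (x_i, y_i) and the parameter α > 0 trades data fidelity against bending energy, which is what gives the surface its visually pleasing smoothness.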
24. Design and Optimization of Non-Coplanar Orbits for Orbital Photovoltaic Panel Cleaning Robots.
- Author
-
Zhao, Yingjie, Qi, Yuming, and Xie, Bing
- Subjects
ROBOT motion, TRAJECTORY optimization, ORBITS (Astronomy), NONLINEAR programming, LEGAL motions - Abstract
Aiming at the problem that it is difficult for an orbital photovoltaic panel cleaning robot to span a large distance between photovoltaic panels, a method of designing and optimizing a non-coplanar orbit based on Bezier curves is proposed. Firstly, the robot's motion law is analyzed to obtain trajectory data for a single work cycle. Then, Bezier curves are utilized for trajectory design to ensure a smooth transition during the spanning motion phase. Thirdly, with the average value of the minimum distance between the Bezier curve and the point set data of the spanning motion phase as the optimization objective function, the nonlinear planning based on the SQP algorithm was adopted for the optimization of the upper and lower trajectories. Finally, the results of the case calculations indicate that the standard deviation of the optimized upper and lower trajectories was reduced by 35.63% and 40.57%, respectively. Additionally, the ADAMS simulation validation demonstrates that the trajectory errors of the four wheels decreased by a maximum of 8.79 mm, 23.78 mm, 10.11 mm, and 14.97 mm, respectively, thereby confirming the effectiveness of the trajectory optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
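The Bezier-curve trajectory design mentioned in this abstract rests on evaluating the curve from its control polygon. The sketch below uses De Casteljau's algorithm (repeated linear interpolation); the control points are hypothetical, not taken from the paper.

```python
def bezier_point(control, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] with
    De Casteljau's algorithm: repeatedly interpolate between
    neighbouring points until a single point remains."""
    pts = [tuple(p) for p in control]
    while len(pts) > 1:
        pts = [tuple((1.0 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical control polygon for a smooth panel-to-panel transition.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
mid = bezier_point(ctrl, 0.5)
```

Optimizing a trajectory of this kind, as the paper does with SQP, amounts to adjusting the interior control points while the endpoints stay pinned to the two panels.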
25. Modification of wavelet-based downsampling method to reduce computational error in nonlinear dynamic analysis.
- Author
-
Majidi, Noorollah, Riahi, Hossein Tajmir, and Zandi, Sayed Mahdi
- Subjects
DISCRETE wavelet transforms, NONLINEAR analysis, NONLINEAR dynamical systems, STRUCTURAL frames, DEGREES of freedom - Abstract
One of the significant obstacles in conducting linear and non-linear time history analysis is its time-consuming nature. In this article, a new downsampling method based on the discrete wavelet transform (DWT) and smoothing is proposed to overcome this problem. To assess the precision of this approach, 50,000 linear and non-linear dynamic analyses of single degree of freedom (SDOF) systems and 300 nonlinear dynamic analyses of frame structures were performed. One hundred Fema440 records were utilized to generate approximate waves up to the third level, and the outcomes of this method were then contrasted with those of DWT. It is demonstrated that the third-level approximate wave produced by DWT, previously considered dependable in other research, generates significant errors: the average error (absolute error percentage of the acceleration spectrum) of its third-level approximate wave is approximately 17.5%. In contrast, the proposed method generated approximate waves with an average error of less than 4.5% across all behavior coefficients and periods, and the error rate decreases as the period and behavior coefficient increase. Analysis of steel moment-resisting frames indicated that the lowest error in both methods is achieved for the base shear, and across different engineering demand parameters, the average error rate for the proposed method was below 7.5%. Furthermore, caution must be exercised when employing the proposed method for structures with periods shorter than 0.5 s. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. A Ball-IDW Mesh Deformation Method Based on Point-by-Point Smoothing.
- Author
-
黄浩宇, 昌继海, 曹杰, 付营建, and 关振群
- Abstract
Copyright of Journal of Computer-Aided Design & Computer Graphics / Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao is the property of Gai Kan Bian Wei Hui. Users should refer to the original published version of the material for the full abstract.
- Published
- 2024
- Full Text
- View/download PDF
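Ball-IDW builds on inverse distance weighting (IDW). The sketch below is plain Shepard-style IDW interpolation, not the point-by-point-smoothing Ball-IDW variant of the paper; the sample points are invented.

```python
def idw(samples, x, y, power=2.0):
    """Shepard inverse-distance-weighted interpolation at (x, y)
    from (xi, yi, zi) samples; a sample at the query point is
    returned exactly."""
    num = den = 0.0
    for xi, yi, zi in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return zi  # query coincides with a sample point
        w = d2 ** (-power / 2.0)  # weight = 1 / distance**power
        num += w * zi
        den += w
    return num / den

# Two invented samples; their midpoint gets the plain average.
pts = [(0.0, 0.0, 1.0), (2.0, 0.0, 3.0)]
center = idw(pts, 1.0, 0.0)
```

In mesh deformation, the interpolated quantity is the displacement of boundary nodes propagated to interior nodes, with the weights decaying in distance from the moved boundary.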
27. Smoothness and monotonicity constraints for neural networks using ICEnet.
- Author
-
Richman, Ronald and Wüthrich, Mario V.
- Subjects
ARTIFICIAL neural networks, CONDITIONAL expectations, ACTUARIAL risk, ACTUARIES, GRADUATION (Education) - Abstract
Deep neural networks have become an important tool for use in actuarial tasks, due to the significant gains in accuracy provided by these techniques compared to traditional methods, but also due to the close connection of these models to the generalized linear models (GLMs) currently used in industry. Although constraining GLM parameters relating to insurance risk factors to be smooth or exhibit monotonicity is trivial, methods to incorporate such constraints into deep neural networks have not yet been developed. This is a barrier for the adoption of neural networks in insurance practice since actuaries often impose these constraints for commercial or statistical reasons. In this work, we present a novel method for enforcing constraints within deep neural network models, and we show how these models can be trained. Moreover, we provide example applications using real-world datasets. We call our proposed method ICEnet to emphasize the close link of our proposal to the individual conditional expectation model interpretability technique. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Smooth digital terrain modeling in irregular domains using finite element thin plate splines and adaptive refinement.
- Author
-
Fang, Lishan
- Subjects
DIGITAL elevation models, RADIAL basis functions, BIG data, FINITE element method, STATISTICAL smoothing - Abstract
Digital terrain models (DTMs) are created using elevation data collected in geological surveys using varied sampling techniques like airborne lidar and depth soundings. This often leads to large data sets with different distribution patterns, which may require smooth data approximations in irregular domains with complex boundaries. The thin plate spline (TPS) interpolates scattered data and produces visually pleasing surfaces, but it is too computationally expensive for large data sizes. The finite element thin plate spline (TPSFEM) possesses smoothing properties similar to those of the TPS and interpolates large data sets efficiently. This article investigated the performance of the TPSFEM and adaptive mesh refinement in irregular domains. Boundary conditions are critical for the accuracy of the solution in domains with arbitrary-shaped boundaries and were approximated using the TPS with a subset of sampled points. Numerical experiments were conducted on aerial, terrestrial, and bathymetric surveys. It was shown that the TPSFEM works well in square and irregular domains for modeling terrain surfaces and adaptive refinement significantly improves its efficiency. A comparison of the TPSFEM, TPS, and compactly supported radial basis functions indicates its competitiveness in terms of accuracy and cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
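The classic thin plate spline that the TPSFEM approximates can be sketched with SciPy's `RBFInterpolator` (a generic illustration of TPS smoothing of scattered elevation data, not the finite element variant from the article; the synthetic surface, noise level, and smoothing value are assumptions):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(200, 2))        # scattered survey sites
z = np.sin(2*np.pi*pts[:, 0]) * np.cos(2*np.pi*pts[:, 1])
z_noisy = z + rng.normal(scale=0.05, size=z.shape)

# smoothing > 0 relaxes exact interpolation toward a smoother terrain surface
tps = RBFInterpolator(pts, z_noisy, kernel='thin_plate_spline', smoothing=1e-3)

xi = np.linspace(0.0, 1.0, 50)
grid = np.stack(np.meshgrid(xi, xi), axis=-1).reshape(-1, 2)
surface = tps(grid)                               # evaluate the DTM on a grid
```

As the abstract notes, this dense formulation scales poorly: the TPS system is solved against all data points at once, which is exactly the cost the TPSFEM avoids.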
29. Feature-based deformation for flow visualization.
- Author
-
Straub, Alexander, Sadlo, Filip, and Ertl, Thomas
- Abstract
We present an approach that supports the analysis of flow dynamics in the neighborhood of curved line-type features, such as vortex core lines, attachment lines, and trajectories. We achieve this with continuous deformation of the flow field to straighten such features. This provides "deformed frames of reference", within which qualitative flow dynamics are better observable with respect to the feature. Our approach operates at interactive rates on graphics hardware, and supports exploration of large and complex datasets by continuously navigating the additional degree of freedom of deformation. We demonstrate the properties and the utility of our approach using synthetic and simulated flow fields, with a focus on the application to vortex core lines. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Smoothing Estimates for Weakly Nonlinear Internal Waves in a Rotating Ocean.
- Author
-
Erdoğan, M. Burak and Tzirakis, Nikolaos
- Subjects
KORTEWEG-de Vries equation ,INTERNAL waves ,PERTURBATION theory ,NONLINEAR waves ,NONLINEAR equations
- Published
- 2024
- Full Text
- View/download PDF
31. Preprocessing of spectroscopic data to highlight spectral features of materials.
- Author
-
Esquivel, Francisco Javier, Romero‐Béjar, José Luis, and Esquivel, José Antonio
- Subjects
MATHEMATICAL transformations ,PRINCIPAL components analysis ,AFFINE transformations ,CLUSTER analysis (Statistics) ,SOUND recordings ,BIG data - Abstract
The extensive data sets generated by spectrometers, commonly referred to as big data, play a crucial role in extracting valuable information on mineral composition in fields such as chemistry, geology, archaeology, pharmacy and anthropology. Analysing such data requires advanced statistical methods such as principal component analysis and cluster analysis; however, the sheer volume of data recorded by spectrometers makes it very difficult to obtain reliable results from raw data, so different mathematical transformations of the raw data are usually applied first. Here, we propose to use the affine transformation to highlight the underlying features of each sample. Finally, an application to spectroscopic data from minerals and rocks recorded by NASA's Jet Propulsion Laboratory is performed. An illustrative example analyses three mineral samples with different diageneses and parageneses, belonging to different mineralogical groups. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. A DC programming to two-level hierarchical clustering with ℓ1 norm.
- Author
-
Gabissa, Adugna Fita and Obsu, Legesse Lemecha
- Subjects
HIERARCHICAL clustering (Cluster analysis) ,MATHEMATICAL optimization ,PROBLEM solving ,ALGORITHMS - Abstract
The main challenge in solving clustering problems using mathematical optimization techniques is the non-smoothness of the distance measure used. To overcome this challenge, we used Nesterov's smoothing technique to find a smooth approximation of the ℓ1 norm. In this study, we consider a bi-level hierarchical clustering problem where the similarity distance measure is induced from the ℓ1 norm. As a result, we are able to design algorithms that provide optimal cluster centers and headquarter (HQ) locations that minimize the total cost, as evidenced by the obtained numerical results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
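Nesterov smoothing of the ℓ1 norm yields the familiar Huber-type approximation, which is differentiable everywhere. A minimal sketch (the smoothing parameter μ is an illustrative choice, not taken from the paper):

```python
import numpy as np

def smoothed_l1(x, mu=0.1):
    """Nesterov smoothing of the l1 norm.

    Each coordinate |t| is replaced by t**2/(2*mu) when |t| <= mu and
    by |t| - mu/2 otherwise; the result is C^1 and converges to the
    l1 norm as mu -> 0."""
    t = np.abs(x)
    return np.sum(np.where(t <= mu, t**2 / (2*mu), t - mu/2))

def smoothed_l1_grad(x, mu=0.1):
    # gradient is the coordinate-wise clipped scaling clip(x/mu, -1, 1)
    return np.clip(x / mu, -1.0, 1.0)
```

With this surrogate, the non-smooth clustering objective becomes amenable to standard gradient-based (and DC programming) methods.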
33. Spatial Confounding and Spatial+ for Nonlinear Covariate Effects.
- Author
-
Dupont, Emiko and Augustin, Nicole H.
- Subjects
- *
FOREST health , *REGRESSION analysis , *FORESTS & forestry , *TOPOGRAPHY , *ADDITIVES - Abstract
Regression models for spatially varying data use spatial random effects to reflect spatial correlation structure. Such random effects, however, may interfere with the covariate effect estimates and make them unreliable. This problem, known as spatial confounding, is complex and has only been studied for models with linear covariate effects. However, as illustrated by a forestry example in which we assess the effect of soil, climate, and topography variables on tree health, the covariate effects of interest are in practice often unknown and nonlinear. We consider, for the first time, spatial confounding in spatial models with nonlinear effects implemented in the generalised additive models (GAMs) framework. We show that spatial+, a recently developed method for alleviating confounding in the linear case, can be adapted to this setting. In practice, spatial+ can then be used both as a diagnostic tool for investigating whether covariate effect estimates are affected by spatial confounding and for correcting the estimates for the resulting bias when it is present. Supplementary materials accompanying this paper appear online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. A spectral approach for the dynamic Bradley–Terry model.
- Author
-
Tian, Xinyu, Shi, Jian, Shen, Xiaotong, and Song, Kai
- Subjects
- *
BASKETBALL , *ASYMPTOTIC distribution , *SPORTS team ranking , *MARKOV processes , *DYNAMIC models - Abstract
Summary: Dynamic ranking is becoming crucial in many applications, especially with the collection of voluminous time-dependent data. One such application is sports statistics, where dynamic ranking aids in forecasting the performance of competitive teams, drawing on historical and current data. Despite its usefulness, predicting and inferring rankings pose challenges in environments necessitating time-dependent modelling. This paper introduces a spectral ranker called Kernel Rank Centrality, designed to rank items based on pairwise comparisons over time. The ranker operates via kernel smoothing in the Bradley–Terry model, utilising a Markov chain model. Unlike the maximum likelihood approach, the spectral ranker is nonparametric, demands fewer model assumptions and computations and allows for real-time ranking. We establish the asymptotic distribution of the ranker by applying an innovative group inverse technique, resulting in a uniform and precise entrywise expansion. This result allows us to devise a new inferential method for predictive inference, previously unavailable in existing approaches. Our numerical examples showcase the ranker's utility in predictive accuracy and constructing an uncertainty measure for prediction, leveraging data from the National Basketball Association (NBA). The results underscore our method's potential compared with the gold standard in sports, the Arpad Elo rating system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
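The spectral ranking idea can be illustrated with the static Rank Centrality construction underlying the paper's kernel-smoothed ranker. Below is a simplified sketch (the win matrix, normalisation constant, and iteration count are assumptions, and no kernel smoothing over time is performed):

```python
import numpy as np

def rank_centrality(wins):
    """Spectral ranking from a pairwise win-count matrix.

    wins[i, j] = number of times item i beat item j.  A random walk
    moves from i to j with probability proportional to the fraction of
    comparisons j won against i; its stationary distribution recovers
    Bradley-Terry-type scores."""
    n = wins.shape[0]
    total = wins + wins.T
    with np.errstate(divide='ignore', invalid='ignore'):
        frac = np.where(total > 0, wins / total, 0.0)   # P(i beats j)
    P = frac.T / n                # off-diagonal transition probabilities
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))            # keep P stochastic
    pi = np.full(n, 1.0 / n)
    for _ in range(1000):         # power iteration to the stationary law
        pi = pi @ P
    return pi / pi.sum()
```

Stronger items accumulate more stationary mass, so sorting `pi` in descending order gives the ranking.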
35. Sign‐flip inference for spatial regression with differential regularisation.
- Author
-
Cavazzutti, Michele, Arnone, Eleonora, Ferraccioli, Federico, Galimberti, Cristina, Finos, Livio, and Sangalli, Laura M.
- Subjects
- *
REGRESSION analysis , *RESPECT - Abstract
Summary: We address the problem of performing inference on the linear and nonlinear terms of a semiparametric spatial regression model with differential regularisation. For the linear term, we propose a new resampling procedure, based on (partial) sign‐flipping of an appropriate transformation of the residuals of the model. The proposed resampling scheme can mitigate the bias effect induced by the differential regularisation. We prove that the proposed test is asymptotically exact. Moreover, we show, by simulation studies, that it enjoys very good control of Type‐I error also in small sample scenarios, differently from parametric alternatives. Additionally, we show that the proposed test has higher power than recently proposed nonparametric tests on the linear term of semiparametric regression models with differential regularisation. Concerning the nonlinear term, we develop three different inference approaches: a parametric one and two nonparametric alternatives. The nonparametric tests are based on a sign‐flip approach. One of these is proved to be asymptotically exact, while the other is proved to be exact also for finite samples. Simulation studies highlight the good control of Type‐I error of the nonparametric approaches with respect to the parametric test, while retaining high power. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
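The sign-flip idea can be sketched in its simplest form as a permutation-style test for a regression slope (a generic illustration of sign-flipping residuals, not the paper's spatial procedure with differential regularisation; the statistic and flip count are assumptions):

```python
import numpy as np

def sign_flip_test(x, y, n_flips=5000, seed=0):
    """Sign-flip test for the slope in simple linear regression.

    Under H0 (slope = 0) the residuals of the null model are symmetric
    about zero, so randomly flipping their signs regenerates valid null
    datasets against which the observed score statistic is compared."""
    rng = np.random.default_rng(seed)
    r = y - y.mean()                  # residuals under the null model
    stat = abs(x @ r)                 # observed score statistic
    null = np.array([abs(x @ (r * rng.choice([-1.0, 1.0], r.size)))
                     for _ in range(n_flips)])
    return (1 + np.sum(null >= stat)) / (1 + n_flips)
```

The paper's contribution is, in essence, choosing the residual transformation so that this scheme remains valid despite the regularisation-induced bias.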
36. Virtual Critical Regularity of Mapping Class Group Actions on the Circle.
- Author
-
Kim, Sang-hyun, Koberda, Thomas, and Rivas, Cristóbal
- Abstract
We show that if G₁ and G₂ are non-solvable groups, then no C^{1,τ} action of (G₁ × G₂) ∗ ℤ on S¹ is faithful for τ > 0. As a corollary, if S is an orientable surface of complexity at least three, then the critical regularity of an arbitrary finite index subgroup of the mapping class group Mod(S) with respect to the circle is at most one, thus strengthening a result of the first two authors with Baik. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Assessing the 3D Parameter Sdr of a Digital Surface Model after Surface Plastic Deformation.
- Author
-
Popov, A. N., Grechnikova, A. F., and Bobrovskii, N. M.
- Abstract
An algorithm is developed for assessing the developed interfacial area ratio S_dr for a scale-limited microrelief surface after smoothing and 3D modeling. The results of numerical modeling are compared with experimental data. A correlation is found between the results given by the algorithm and the experimental changes in the surface. On that basis, the quality of the machined surfaces may be predicted, and the smoothing process may be optimized at the design stage. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Device for Ultrasonic Smoothing of Workpiece Surfaces and Information Collection.
- Author
-
Shuvaev, V. G. and Tkachev, R. A.
- Abstract
Ultrasonic smoothing of workpiece surfaces is considered. Additional sensors improve the informational efficiency of the tool. A device is proposed for experimental determination of informative parameters, providing the basis for assessment of the surface quality in the course of smoothing. Measurements of parameters such as the mechanical power and the resistance of the load permit assessment of the effectiveness of smoothing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. On the Affinity of Image Smoothing and Segmentation Problems
- Author
-
Chochia, P. A.
- Published
- 2025
- Full Text
- View/download PDF
40. Return smoothing in pooled annuity products
- Author
-
Kabuche, Doreen, Sherris, Michael, Villegas, Andrés M., and Ziveyi, Jonathan
- Published
- 2025
- Full Text
- View/download PDF
41. Application of noise-cancelling and smoothing techniques in road pavement vibration monitoring data
- Author
-
Amir Shtayat, Sara Moridpour, Berthold Best, and Hussein Daoud
- Subjects
Noise-cancelling ,Smoothing ,Road pavement ,Degradation ,Vibration ,Transportation engineering ,TA1001-1280 - Abstract
Road pavement surfaces need routine and regular monitoring and inspection to keep the surface layers in high-quality condition. However, population growth and the increases in the number of vehicles and the length of road networks worldwide have led researchers to seek appropriate and accurate road pavement monitoring techniques. The vibration-based technique is one of the effective techniques used to measure the condition of pavement degradation and the level of pavement roughness. The consistency of pavement vibration data is directly proportional to the intensity of surface roughness: intense fluctuations in vibration signals indicate possible defects at certain points of the road pavement. However, vibration signals typically need a series of pre-processing steps, such as filtering, smoothing, segmentation, and labelling, before being used in advanced processing and analyses. This research reports the use of noise-cancelling and data-smoothing techniques, including the high-pass filter, moving average method, median filter, Savitzky-Golay filter, and peak envelope extraction method, to enhance raw vibration signals for further processing and classification. The results show significant variations in the impact of noise-cancelling and data-smoothing techniques on raw pavement vibration signals. According to the results, the high-pass filter is the most accurate noise-cancelling and data-smoothing technique for road pavement vibration data among the methods compared.
- Published
- 2024
- Full Text
- View/download PDF
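Two of the techniques compared, a high-pass Butterworth filter followed by Savitzky-Golay smoothing, can be sketched with SciPy (the synthetic signal, sampling rate, cutoff, and window settings are illustrative assumptions, not the study's values):

```python
import numpy as np
from scipy.signal import butter, filtfilt, savgol_filter

fs = 100.0                                   # assumed sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0/fs)
# synthetic "pavement vibration": slow drift + 5 Hz content + noise
raw = (0.5*np.sin(2*np.pi*0.1*t) + np.sin(2*np.pi*5*t)
       + np.random.default_rng(1).normal(0.0, 0.3, t.size))

# high-pass filter removes the low-frequency drift (1 Hz cutoff assumed)
b, a = butter(4, 1.0, btype='highpass', fs=fs)
highpassed = filtfilt(b, a, raw)

# Savitzky-Golay smoothing suppresses residual noise while keeping peaks
smoothed = savgol_filter(highpassed, window_length=11, polyorder=3)
```

`filtfilt` applies the filter forward and backward, so the cleaned signal has no phase lag relative to the raw trace.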
42. Approximation of Bivariate Functions by Generalized Wendland Radial Basis Functions.
- Author
-
Kouibia, Abdelouahed, González, Pedro, Pasadas, Miguel, Mustafa, Bassim, Yakhlef, Hossain Oulad, and Omri, Loubna
- Subjects
- *
SMOOTHNESS of functions , *GENERALIZED spaces , *INTERPOLATION - Abstract
In this work, we deal with two approximation problems in a finite-dimensional generalized Wendland space of compactly supported radial basis functions. Namely, we present an interpolation method and a smoothing variational method in this space. Next, the theory of the presented method is justified by proving the corresponding convergence result. Likewise, to illustrate this method, some graphical and numerical examples are presented in ℝ², and a comparison with another work is analyzed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
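The classical Wendland C² kernel is easy to write down explicitly. Here is a minimal interpolation sketch in ℝ² (the support radius is an illustrative choice, and this shows plain interpolation rather than the paper's generalized Wendland space or its smoothing variational method):

```python
import numpy as np

def wendland_c2(r):
    """Wendland's C2 compactly supported RBF: (1 - r)_+^4 (4r + 1)."""
    r = np.asarray(r)
    return np.where(r < 1.0, (1.0 - r)**4 * (4.0*r + 1.0), 0.0)

def rbf_interpolate(centers, values, query, support=0.8):
    """Solve for RBF coefficients at the centers, then evaluate at query.

    `support` rescales the compact support radius (an assumed choice)."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    A = wendland_c2(d / support)          # positive definite kernel matrix
    coef = np.linalg.solve(A, values)
    dq = np.linalg.norm(query[:, None, :] - centers[None, :, :], axis=-1)
    return wendland_c2(dq / support) @ coef
```

Because the kernel vanishes beyond the support radius, the kernel matrix is sparse for small supports, which is the main computational appeal of compactly supported RBFs.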
43. An Efficient Algorithm to Identify Strong Pulse-Like Ground Motions Based on the Smoothed Significant Velocity Half-Cycles.
- Author
-
Li, Cuihua, Meng, Ke, and Guo, Yu
- Subjects
- *
GROUND motion , *VELOCITY , *EARTHQUAKE engineering , *ALGORITHMS , *DATABASES , *MOTION - Abstract
This study proposes a new algorithm based on the smoothed Significant Velocity Half-Cycles for efficient and accurate identification of pulse-like velocity ground motions. The empirical-mode decomposition (EMD) based algorithm is first examined, providing some valuable insights into its failure modes. Next, a concept of riding waves is presented and two additional criteria are established. The proposed algorithm is demonstrated to successfully exclude ambiguous records with insignificant pulse characteristics when compared with six benchmark identification methods. This article also provides a dataset of 325 strong pulse-like records identified from the NGA-West2 database, which is of practical importance in the earthquake engineering field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Locally Self-Adjustive Smoothing for Measurement Noise Reduction with Application to Automated Peak Detection.
- Author
-
Ozawa, Keisuke, Itakura, Tomoya, and Ono, Taisuke
- Subjects
- *
NOISE control , *SIGNAL processing - Abstract
Smoothing is a widely used approach for measurement noise reduction in spectral analysis. However, it suffers from signal distortion caused by peak suppression. A locally self-adjustive smoothing method is developed that retains sharp peaks and less distorted signals. The proposed method uses only one parameter that determines the global smoothness of data, while balancing the local smoothness using the data itself. Simulation and real experiments in comparison with existing convolution-based smoothing methods indicate both qualitatively and quantitatively improved noise reduction performance in practical scenarios. We also discuss parameter selection and demonstrate an application for the automated smoothing and detection of a given number of peaks from noisy measurement data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
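A generic smooth-then-detect baseline of the kind this paper improves on can be sketched with SciPy's convolution-based Savitzky-Golay filter and `find_peaks` (the synthetic spectrum and all parameter values are assumptions, not the paper's locally self-adjustive method):

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 1000)
# synthetic spectrum: three Gaussian peaks plus measurement noise
signal = (np.exp(-(x - 2.0)**2 / 0.02)
          + 0.8*np.exp(-(x - 5.0)**2 / 0.05)
          + 0.6*np.exp(-(x - 8.0)**2 / 0.03))
noisy = signal + rng.normal(0.0, 0.03, x.size)

# a single global window: the fixed choice the paper's local method avoids
smoothed = savgol_filter(noisy, window_length=21, polyorder=3)
peaks, _ = find_peaks(smoothed, height=0.3, distance=50)
```

The fixed `window_length` illustrates the trade-off the abstract describes: a window wide enough to suppress noise also suppresses the sharpest peaks, which motivates adjusting smoothness locally from the data.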
45. Optimization algorithm for minimizing railway energy consumption in hybrid powertrain architectures: A direct method approach using a novel two-dimensional efficiency map approximation.
- Author
-
Radhakrishnan, Rahul and Schenker, Moritz
- Abstract
SEnSOR (Smart Energy Speed Optimizer Rail) is a direct-method-based optimization algorithm developed at DLR for determining minimum-energy speed trajectories for railway vehicles. This paper aims to reduce model error and improve this algorithm for any alternative powertrain architecture. Model simplifications, such as projecting the efficiency maps of different train components onto one-dimensional space, can lead to inaccuracies and non-optimalities in reality. In this work, a 2D section-wise Chua functional representation was used to capture the complete behavior of efficiency maps, and its benefits are discussed. For this purpose, a new smoothing method was developed. An average error of 6% was observed in the energy calculation when the 1D and 2D models are compared against each other. Previously, solving for different powertrain architectures was time-consuming, requiring manual modifications to the optimization problem. With a modular approach, the algorithm was modified to flexibly adapt the problem formulation and automatically take into account any changes in powertrain architectures with minimal user input. The benefit is demonstrated by performing optimization on a bi-mode train with three different power sources as developed within the EU project FCH2RAIL. The advanced algorithm is now capable of adapting to such complex architectures and provides feasible optimization results within a reasonable time. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Automated detection of microfilariae parasite in blood smear using OCR-NURBS image segmentation.
- Author
-
Kumar, Priyanka and Babulal, Kanojia Sindhuben
- Subjects
BLOOD parasites ,IMAGE analysis ,FILARIASIS ,SYMPTOMS - Abstract
Lymphatic Filariasis (LF) is a vector-borne ailment that causes disfigurement and disability in those affected. This gruesome disease is generally acquired in childhood through the introduction of a microfilariae (mf) parasite into the host's bloodstream, but its clinical manifestation arises later in life. The conventional approach to detecting mf within a blood smear relies on a pathologist examining it through a microscope. Because such inspection is manual, it might lead to inconsistent outcomes. This research study proposes a new OCR-NURBS image segmentation technique employing a curve-fitting algorithm for the microscopic image of the mf parasite. To achieve the aforementioned objective, five different procedures have been applied: (i) filtering of the image for removal of artifacts; (ii) application of dilation and erosion morphological operations for image enrichment near the edges of mf; (iii) a pixel clustering segmentation technique to segregate the parasite from the image backdrop; (iv) smoothing and discretization of the boundary curve to preserve the edge points; (v) a NURBS functional network for the construction of curves of mf from the border points. The results of the proposed method demonstrate phenomenal performance in terms of visual interpretation of segmented images. Additionally, the efficacy of the proposed model is assessed through the evaluation of the image quality index and morphological parameters of the image. Furthermore, a comparison is made between the parameters of manually segmented images and those generated by the proposed model in order to assess the error incurred by the model. The results demonstrate that the proposed model exhibits robustness in performing segmentation of mf parasites. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Model-Based Smoothing with Integrated Wiener Processes and Overlapping Splines.
- Author
-
Zhang, Ziang, Stringer, Alex, Brown, Patrick, and Stafford, Jamie
- Subjects
- *
WIENER processes , *GAUSSIAN processes , *DERIVATIVES (Mathematics) , *FINITE element method , *SMOOTHNESS of functions - Abstract
In many applications that involve the inference of an unknown smooth function, the inference of its derivatives is also important. To make joint inferences of the function and its derivatives, a class of Gaussian processes called the pth order Integrated Wiener Process (IWP) is considered. Methods for constructing a finite element (FEM) approximation of an IWP exist but only focus on the case p = 2 and do not allow appropriate inference for derivatives. In this article, we propose an alternative FEM approximation with overlapping splines (O-spline). The O-spline approximation applies for any order p ∈ ℤ+, and provides consistent and efficient inference for all derivatives up to order p − 1. It is shown both theoretically and empirically that the O-spline approximation converges to the IWP as the number of knots increases. We further provide a unified and interpretable way to define priors for the smoothing parameter based on the notion of predictive standard deviation, which is invariant to the order p and the knot placement. Finally, we demonstrate the practical use of the O-spline approximation through an analysis of COVID death rates, where the inference of derivatives has an important interpretation in terms of the course of the pandemic. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Smooth Multi-Period Forecasting With Application to Prediction of COVID-19 Cases.
- Author
-
Tuzhilina, Elena, Hastie, Trevor J., McDonald, Daniel J., Tay, J. Kenneth, and Tibshirani, Robert
- Subjects
- *
FORECASTING methodology , *FIX-point estimation , *COVID-19 pandemic , *FORECASTING , *COVID-19 , *QUANTILE regression - Abstract
Forecasting methodologies have always attracted a lot of attention and have become an especially hot topic since the beginning of the COVID-19 pandemic. In this article we consider the problem of multi-period forecasting, which aims to predict several horizons at once. We propose a novel approach that forces the prediction to be "smooth" across horizons and apply it to two tasks: point estimation via regression and interval prediction via quantile regression. This methodology was developed for real-time distributed COVID-19 forecasting. We illustrate the proposed technique with the COVIDcast dataset as well as a small simulation example. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
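One simple way to force forecasts to be smooth across horizons is to project them onto a low-dimensional smooth basis in the horizon index. Below is a minimal sketch of that idea (the paper builds the basis into the regression itself; the polynomial basis used here is an assumed simplification):

```python
import numpy as np

def smooth_across_horizons(forecasts, degree=3):
    """Project per-horizon forecasts onto a low-degree polynomial in the
    horizon index, forcing the trajectory to be smooth across horizons.

    `forecasts` holds the horizons along the last axis."""
    h = np.arange(forecasts.shape[-1])
    basis = np.vander(h / h.max(), degree + 1)    # polynomial basis columns
    coef, *_ = np.linalg.lstsq(basis, forecasts.T, rcond=None)
    return (basis @ coef).T                        # smoothed trajectory
```

Any trajectory already in the span of the basis passes through unchanged, while jagged horizon-to-horizon jumps are averaged out.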
49. Effect of Data-Processing Methods on Acceleration Summary Metrics of GNSS Devices in Elite Australian Football.
- Author
-
Ellens, Susanne, Carey, David L., Gastin, Paul B., and Varley, Matthew C.
- Subjects
- *
ACCELERATION (Mechanics) , *AUSTRALIAN football , *GLOBAL Positioning System , *AUSTRALIAN football players , *ELITE athletes - Abstract
This study aimed to measure the differences in commonly used summary acceleration metrics during elite Australian football games under three different data-processing protocols (raw, custom-processed, manufacturer-processed). Estimates of distance, speed and acceleration were collected with 10-Hz GNSS tracking devices from 38 elite Australian football players from one team across fourteen matches. Raw and manufacturer-processed data were exported from the respective proprietary software, and two common summary acceleration metrics (number of efforts and distance within the medium/high-intensity zone) were calculated for the three processing methods. To estimate the effect of the three different data-processing methods on the summary metrics, linear mixed models were used. The main findings demonstrated substantial differences between the three processing methods: the manufacturer-processed acceleration data had the lowest reported distance (up to 184 times lower) and efforts (up to 89 times lower), followed by the custom-processed distance (up to 3.3 times lower) and efforts (up to 4.3 times lower), while raw data had the highest reported distance and efforts. The results indicated that different processing methods changed the metric output and in turn altered the quantification of the demands of a sport (volume, intensity and frequency of the metrics). Coaches, practitioners and researchers need to understand that various processing methods alter the summary metrics of acceleration data. By being informed about how these metrics are affected by processing methods, they can better interpret the data available and effectively tailor their training programs to match the demands of competition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
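Counting acceleration "efforts" from a raw 10-Hz speed trace can be sketched as follows (the threshold and minimum-duration values are illustrative assumptions, not the zones used in the study):

```python
import numpy as np

def acceleration_efforts(speed, fs=10.0, threshold=2.78, min_duration=0.1):
    """Count acceleration efforts in a speed trace (m/s, sampled at fs Hz).

    An effort is a contiguous run where finite-difference acceleration
    meets or exceeds `threshold` (m/s^2) for at least `min_duration` s."""
    accel = np.gradient(speed, 1.0 / fs)        # finite-difference acceleration
    above = accel >= threshold
    edges = np.diff(above.astype(int))          # locate run starts and ends
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if above.size and above[0]:
        starts = np.r_[0, starts]
    if above.size and above[-1]:
        ends = np.r_[ends, above.size]
    durations = (ends - starts) / fs
    return int(np.sum(durations >= min_duration))
```

A custom protocol of this kind makes every step explicit (differentiation scheme, thresholds, minimum dwell time), which is exactly what proprietary manufacturer processing hides.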
50. Divergence Parametric Smoothing in Image Compression Algorithms.
- Author
-
Gashnikov, M. V.
- Abstract
The paper elaborates on methods of digital image compression. The focus is on a compression method that represents a raster image as a set of multiply thinned sub-images. Sub-images are processed consecutively to generate special reference images. The difference between the synthesized reference image and the original sub-image forms a divergence array. The algorithm introduces a discrete error into the divergence array to provide the actual bit-depth reduction. However, the introduction of the error inevitably impairs the quality of the decompressed image. The aim is to show that parametric smoothing of divergence arrays can lessen this quality impairment without changing the bit-depth reduction originally provided by the method. Numerical experiments on real digital images are carried out to prove that the use of parametric smoothing noticeably improves the efficiency of the image compression method under discussion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF