953 results for "penalization"
Search Results
2. Application of explicit energy bounds in optimization of 3D elastic structures.
- Author
-
Burazin, Krešimir and Crnjac, Ivana
- Abstract
We present a novel numerical method for calculating optimal designs in topology optimization problems for 3D linear elastic structures. The algorithm is based on necessary conditions of optimality for the problem obtained by relaxing the original one via the homogenization method in the sense of operators (G- or H-convergence), and can be implemented for self-adjoint problems. The method relies on recently obtained explicit expressions for the lower Hashin–Shtrikman bound on complementary energy and information on the microstructure that saturates the bound. We tested the algorithm on two benchmark examples, namely the cantilever and the bridge problem. The algorithm reaches the solution within the first few iterations, and true composites appear in the optimal design. We also implement a penalization procedure to obtain a classical design with only a slight increase of the cost functional. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Cartografía de lo invisible: cuestiones metodológicas sobre deuda, inclusión y violencia [Mapping the invisible: methodological questions on debt, inclusion, and violence].
- Author
-
Cavallero, Luci, Gago, Verónica, and Perosino, Celeste
- Subjects
FINANCIAL inclusion, GENDER-based violence, DOMESTIC violence, DEBT relief, INFORMATION policy, SOCIAL reproduction
- Published
- 2024
4. Sensitivity Analysis and Filtering of Machinable Parts Using Density-Based Topology Optimization.
- Author
-
Vadillo Morillas, Abraham, Meneses Alonso, Jesús, Bustos Caballero, Alejandro, and Castejón Sisamón, Cristina
- Subjects
STRUCTURAL optimization, SENSITIVITY analysis, TOPOLOGY, POPULARITY, MACHINING
- Abstract
Topology optimization has become a popular tool for designing optimal shapes while meeting specific objectives and restrictions. However, the shape resulting from the optimization process may not be easy to manufacture using typical methods like machining and may require interpretation and validation. Additionally, the final shape depends on the chosen parameters. In this study, we conduct a sensitivity analysis of the main parameters involved in 3D topology optimization—penalization and filter radius—focusing on the density-based method. We analyze the features and characteristics of the results, concluding that a machinable part requiring little interpretation is not attainable with by-default topology optimization. Therefore, we propose a new method for obtaining more manufacturable and easily interpretable parts. The main goal is to assist designers in choosing appropriate parameters and understanding what to consider when seeking optimized shapes, giving them a new plug-and-play tool for manufacturable designs. We chose the density-based topology optimization method due to its popularity in commercial packages, so the conclusions may directly influence designers' work. Finally, we verify the study results through different cases to ensure the validity of the conclusions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
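The two parameters analyzed in the entry above, the penalization power and the filter radius, are the knobs of the standard density-based (SIMP) scheme. The following is a minimal sketch of that scheme, not the authors' code; all function names are illustrative.

```python
import numpy as np

def simp_modulus(rho, p=3.0, E0=1.0, Emin=1e-9):
    """SIMP interpolation: E(rho) = Emin + rho**p * (E0 - Emin).
    A penalization power p > 1 makes intermediate densities
    structurally inefficient, pushing elements toward void or solid."""
    return Emin + rho**p * (E0 - Emin)

def filter_weights(coords, rmin):
    """Linear 'hat' weights w_ij = max(0, rmin - ||x_i - x_j||),
    where rmin is the filter radius studied in the paper."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.maximum(0.0, rmin - d)

def density_filter(rho, W):
    """Each element density becomes a weighted average of its
    neighbours, which regularizes the design and sets feature size."""
    return (W @ rho) / W.sum(axis=1)
```

Raising p sharpens the 0/1 contrast while raising rmin smooths the design; that trade-off is exactly what the sensitivity analysis above explores.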
5. An Adaptive Neural Network Regression Method for Structure Identification.
- Author
-
Shin, Jae-Kyung, Bak, Kwan-Young, and Koo, Ja-Yong
- Subjects
MULTIVARIATE analysis, FUNCTIONAL analysis, ANALYSIS of variance, ACCOUNTING methods, HOMOGENEITY
- Abstract
This article reports a study on a flexible neural network regression method within the functional analysis of variance framework that aims to adapt to the underlying structure of the target function. We develop a novel penalization scheme in which a concept of node impurity is introduced in the neural network framework. The node impurity in neural networks represents the homogeneity of the effects of the inputs on the node. We first define the effect of an individual input on a node and, in turn, measure the node impurity based on the effects of the inputs on that node. We adopt the sum of node impurities as a penalty function, whose use makes the connections from inputs to nodes sparse; this improves estimation accuracy by reducing unnecessary complexity and enables data-adaptive structure identification. Our method takes into account a large parameter space of networks, ranging from a fully connected structure to sparsely connected structures. Among possible node connection structures, an optimal model is selected based purely on observed data. Numerical studies based on simulated and real datasets show that the proposed method performs well in identifying the inherent structure of the regression function and produces good estimation accuracy. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
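The abstract above does not give the exact formula for node impurity, so the following is only a hypothetical surrogate: a group-type norm on each hidden node's incoming weights, which produces the sparse input-to-node connectivity the abstract describes. The paper's actual impurity measure, based on the homogeneity of per-input effects, may differ.

```python
import numpy as np

def node_penalty_surrogate(W):
    """Hypothetical stand-in for a 'sum of node impurities' penalty.
    W[j, k] is the weight from input j to hidden node k.  Summing a
    group norm over nodes shrinks each node's bundle of input weights
    jointly, sparsifying input-to-node connections as described."""
    return np.sum(np.sqrt(np.sum(W**2, axis=0)))

# Usage sketch: penalized objective for a one-hidden-layer network,
#   loss(theta) + lam * node_penalty_surrogate(W_input_to_hidden)
```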
6. Comparing Bayesian Variable Selection to Lasso Approaches for Applications in Psychology.
- Author
-
Bainter, Sierra A, McCauley, Thomas G, Fahmy, Mahmoud M, Goodman, Zachary T, Kupis, Lauren B, and Rao, J Sunil
- Subjects
Bayesian, lasso, penalization, regression, shrinkage priors, stochastic search variable selection, variable selection, Applied Mathematics, Psychology, Social Sciences Methods
- Abstract
In the current paper, we review existing tools for solving variable selection problems in psychology. Modern regularization methods such as lasso regression have recently been introduced in the field and are incorporated into popular methodologies, such as network analysis. However, several recognized limitations of lasso regularization may limit its suitability for psychological research. In this paper, we compare the properties of lasso approaches used for variable selection to Bayesian variable selection approaches. In particular, we highlight advantages of stochastic search variable selection (SSVS) that make it well suited for variable selection applications in psychology. We demonstrate these advantages and contrast SSVS with lasso-type penalization in an application to predict depression symptoms in a large sample and an accompanying simulation study. We investigate the effects of sample size, effect size, and patterns of correlation among predictors on rates of correct and false inclusion and bias in the estimates. SSVS as investigated here is reasonably computationally efficient and has power to detect moderate effects in small sample sizes (or small effects in moderate sample sizes), while protecting against false inclusion and without over-penalizing true effects. We recommend SSVS as a flexible framework that is well-suited for the field, discuss limitations, and suggest directions for future development.
- Published
- 2023
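For readers unfamiliar with the lasso side of this comparison, here is a minimal self-contained sketch (not from the paper) of lasso-based variable selection with cross-validated tuning; SSVS instead places spike-and-slab priors on the coefficients and selects via posterior inclusion probabilities.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [1.0, -0.8, 0.5]          # three true effects, rest are noise
y = X @ beta + rng.normal(size=n)

# Standardize so the single penalty acts evenly across predictors.
Xs = StandardScaler().fit_transform(X)
fit = LassoCV(cv=5).fit(Xs, y)

# The lasso 'selects' by zeroing coefficients; over-penalization of
# true effects is one limitation the paper contrasts with SSVS.
print("selected predictors:", np.flatnonzero(fit.coef_ != 0))
```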
7. Penalization of stationary Navier–Stokes equations and applications in topology optimization
- Author
-
Murea, Cornel Marius and Tiba, Dan
- Published
- 2024
- Full Text
- View/download PDF
8. A bias-reduced generalized estimating equation approach for proportional odds models with small-sample longitudinal ordinal data
- Author
-
Yukio Tada and Tosiya Sato
- Subjects
Bias reduction, Marginal model, Categorical data, Penalization, Firth’s adjustment, Medicine (General), R5-920
- Abstract
Background: Longitudinal ordinal data are commonly analyzed using a marginal proportional odds model for relating ordinal outcomes to covariates in the biomedical and health sciences. The generalized estimating equation (GEE) consistently estimates the regression parameters of marginal models even if the working covariance structure is misspecified. For small-sample longitudinal binary data, recent studies have shown that the bias of regression parameters may result from the GEE and have addressed the issue by applying Firth's adjustment for the likelihood score equation to the GEE as if generalized estimating functions were likelihood score functions. In this manuscript, for the proportional odds model for longitudinal ordinal data, the small-sample properties of the GEE were investigated, and a bias-reduced GEE (BR-GEE) was derived. Methods: By applying the adjusted function originally derived for the likelihood score function of the proportional odds model to the GEE, we produced the BR-GEE. We investigated the small-sample properties of both GEE and BR-GEE through simulation and applied them to a clinical study dataset. Results: In simulation studies, the BR-GEE had a bias closer to zero and a smaller root mean square error than the GEE, with confidence-interval coverage probability near or above the nominal level. The simulation also showed that BR-GEE maintained a type I error rate near or below the nominal level. Conclusions: For the analysis of longitudinal ordinal data involving a small number of subjects, the BR-GEE is advantageous for obtaining estimates of the regression parameters of marginal proportional odds models.
- Published
- 2024
- Full Text
- View/download PDF
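For background, Firth's adjustment in the binary-logistic case (which the entry above transplants, via the version derived for the proportional odds likelihood, into the GEE) replaces the score $U(\beta)$ with a bias-reducing adjusted score:

```latex
U^{*}(\beta) = X^{\top}\bigl\{\, y - \mu + h \circ (\tfrac{1}{2} - \mu) \bigr\},
\qquad
h_i = \Bigl[\, W^{1/2} X \bigl(X^{\top} W X\bigr)^{-1} X^{\top} W^{1/2} \Bigr]_{ii},
\qquad
W = \operatorname{diag}\{\mu_i(1-\mu_i)\}.
```

Solving $U^{*}(\beta)=0$ removes the leading-order bias of maximum likelihood; the BR-GEE applies the analogous correction to the generalized estimating function.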
9. A bias-reduced generalized estimating equation approach for proportional odds models with small-sample longitudinal ordinal data.
- Author
-
Tada, Yukio and Sato, Tosiya
- Subjects
GENERALIZED estimating equations, STANDARD deviations, FALSE positive error
- Abstract
Background: Longitudinal ordinal data are commonly analyzed using a marginal proportional odds model for relating ordinal outcomes to covariates in the biomedical and health sciences. The generalized estimating equation (GEE) consistently estimates the regression parameters of marginal models even if the working covariance structure is misspecified. For small-sample longitudinal binary data, recent studies have shown that the bias of regression parameters may result from the GEE and have addressed the issue by applying Firth's adjustment for the likelihood score equation to the GEE as if generalized estimating functions were likelihood score functions. In this manuscript, for the proportional odds model for longitudinal ordinal data, the small-sample properties of the GEE were investigated, and a bias-reduced GEE (BR-GEE) was derived. Methods: By applying the adjusted function originally derived for the likelihood score function of the proportional odds model to the GEE, we produced the BR-GEE. We investigated the small-sample properties of both GEE and BR-GEE through simulation and applied them to a clinical study dataset. Results: In simulation studies, the BR-GEE had a bias closer to zero and a smaller root mean square error than the GEE, with confidence-interval coverage probability near or above the nominal level. The simulation also showed that BR-GEE maintained a type I error rate near or below the nominal level. Conclusions: For the analysis of longitudinal ordinal data involving a small number of subjects, the BR-GEE is advantageous for obtaining estimates of the regression parameters of marginal proportional odds models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Utilizing latent connectivity among mediators in high‐dimensional mediation analysis.
- Author
-
Hu, Jia Yuan, DeSimone, Marley, and Wang, Qing
- Subjects
GESTATIONAL age, STATISTICAL hypothesis testing, REGRESSION analysis, POISONS, BIOMARKERS
- Abstract
Mediation analysis intends to unveil the underlying relationship between an outcome variable and an exposure variable through one or more intermediate variables called mediators. In recent decades, research on mediation analysis has been focusing on multivariate mediation models, where the number of mediating variables is possibly of high dimension. This paper concerns high‐dimensional mediation analysis and proposes a three‐step algorithm that extracts and utilizes inter‐connectivity among candidate mediators. More specifically, the proposed methodology starts with a screening procedure to reduce the dimensionality of the initial set of candidate mediators, followed by a penalized regression model that incorporates both parameter‐ and group‐wise regularization, and ends with fitting a multivariate mediation model and identifying active mediating variables through a joint significance test. To showcase the performance of the proposed algorithm, we conducted two simulation studies in high‐dimensional and ultra‐high‐dimensional settings, respectively. Furthermore, we demonstrate the practical applications of the proposal using a real data set that uncovers the possible impact of environmental toxicants on women's gestational age at delivery through 61 biomarkers that belong to 7 biological pathways. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
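The joint significance test in the final step of the entry above is, in its usual form, a max-P rule: mediator $k$, with exposure-to-mediator path $\alpha_k$ and mediator-to-outcome path $\beta_k$, is declared active only if both paths are significant, i.e.

```latex
p_k = \max\bigl(p_{\alpha_k},\, p_{\beta_k}\bigr) \le \alpha^{*},
```

so a mediator passes only when the evidence for both of its path coefficients is strong; the screening and doubly regularized regression steps reduce the candidate set before this test is applied.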
11. Minimizing Compositions of Differences-of-Convex Functions with Smooth Mappings.
- Author
-
Le Thi, Hoai An, Huynh, Van Ngai, and Dinh, Tao Pham
- Subjects
SMOOTHNESS of functions, DIFFERENTIABLE functions, CONVEX sets
- Abstract
We address the so-called DC (difference-of-convex functions) composite minimization problems (or DC composite programs), whose objective function is a composition of a DC function with a continuously differentiable mapping. We first develop an algorithm named the DC composite algorithm (DCCA for short) for unconstrained DC composite programs and further extend it to DC composite programs with inclusion constraints associated with a smooth mapping and a closed convex set. The convergence analysis of the proposed algorithms is investigated. Applications of DCCA to two different problems, computation of the numerical radius of a square matrix and minimization of composite energies, are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
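For orientation, the classical DCA that DCCA generalizes handles $f = g - h$ with $g, h$ convex by linearizing the concave part at each step; under the composite objective of the entry above, the same two-step pattern is applied through the smooth inner mapping.

```latex
y^{k} \in \partial h\!\left(x^{k}\right),
\qquad
x^{k+1} \in \operatorname*{arg\,min}_{x}\;\bigl\{\, g(x) - \langle y^{k}, x \rangle \,\bigr\}.
```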
12. Coherent extrapolation of mortality rates at old ages applied to long term care.
- Author
-
Le Bastard, Léonie
- Abstract
In an insurance context, Long-Term Care (LTC) products cover the risk of permanent loss of autonomy, which is defined by the impossibility or difficulty of performing alone all or part of the activities of daily living (ADL). From an actuarial point of view, knowledge of the risk depends on knowledge of the underlying biometric laws, including the mortality of autonomous insureds and the mortality of disabled insureds. Due to the relatively short history of LTC products and the age limit imposed at underwriting, insurers lack information at advanced ages. This represents a challenge for actuaries, making it difficult to estimate those biometric laws. In this paper, we propose to fill in the missing information at advanced ages on the mortality of the autonomous and disabled insured populations using information on the global mortality of the portfolio. In fact, the three mortality laws are linked, since the portfolio is composed only of autonomous and disabled policyholders. We model the two mortality laws (deaths in autonomy and deaths in LTC) in a Poisson Generalized Linear Model framework, additionally using the P-splines smoothing method. A constraint is then included to link the mortality laws of the two groups to the global mortality of the portfolio. This new method allows for estimating and extrapolating both mortality laws simultaneously in a consistent manner. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
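The modeling backbone described above, a Poisson GLM for death counts with P-spline smoothing, typically amounts to maximizing a difference-penalized log-likelihood of the following form (a sketch, up to additive constants, with $d_x$ deaths, $e_x$ exposures, $B$ the B-spline basis, and $D_2$ a second-order difference matrix):

```latex
\ell_{\lambda}(\beta) = \sum_{x}\Bigl\{ d_{x}\,\bigl(B_{x}\beta\bigr) - e_{x}\exp\bigl(B_{x}\beta\bigr) \Bigr\} - \frac{\lambda}{2}\,\lVert D_{2}\beta \rVert^{2}.
```

The paper's contribution is the extra constraint tying the autonomous and disabled mortality laws to the portfolio's global mortality, so both can be extrapolated coherently.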
13. The use of information and telecommunication networks for criminal purposes: regulatory accounting and prospects for expanding criminal law authority
- Author
-
Nina Yu. Skripchenko
- Subjects
method of committing a crime, differentiation of criminal liability, corpus delicti, public danger, rules of legal drafting technique, penalization, Law
- Abstract
The rapid digital transformation of crime makes criminal law regulation highly important; according to some scholars, it requires modernization to ensure stricter state censure of persons encroaching on security in the information and communication space. The author critically assesses this proposal, whose implementation would entail numerous norms that are structurally inconsistent with the requirement that a qualifying feature be uncharacteristic of the act, threatening to artificially inflate the perceived danger of crime, and instead proposes amending the law to ensure a unified definition of the relevant objective feature. The variety of ways in which information and telecommunication networks are used for criminal purposes raises the question of the boundaries within which the use of such technologies forms part of the mechanism of a criminal act. This question has gained significance after the Plenum of the Supreme Court of the Russian Federation broadly interpreted the relevant objective feature in relation to paragraph "b" part 2 of Article 228.1 of the Criminal Code of the Russian Federation. Systematic application of the law may lead to a similarly broad reading of this feature in other offenses, posing a threat of judicial penalization of acts. The repressive nature of criminal law regulation precludes hasty reform of the law devoid of criminological substantiation. A decision to expand the differentiating role of the analyzed objective feature should be dictated either by the emergence of socially dangerous conduct not yet regulated by law or by the need for a unified normative definition of related criminal acts. The methodological basis comprises general scientific methods (analysis and synthesis, dialectics) and specific research methods (criminal-statistical, systemic-structural, formal-legal).
- Published
- 2024
- Full Text
- View/download PDF
14. Robust data integration from multiple external sources for generalized linear models with binary outcomes.
- Author
-
Choi, Kyuseong, Taylor, Jeremy M G, and Han, Peisong
- Subjects
DATA integration, MAXIMUM likelihood statistics, LOGISTIC regression analysis, REGRESSION analysis, PROSTATE cancer
- Abstract
We aim to estimate parameters in a generalized linear model (GLM) for a binary outcome when, in addition to the raw data from the internal study, more than 1 external study provides summary information in the form of parameter estimates from fitting GLMs with varying subsets of the internal study covariates. We propose an adaptive penalization method that exploits the external summary information and gains efficiency for estimation, and that is both robust and computationally efficient. The robust property comes from exploiting the relationship between parameters of a GLM and parameters of a GLM with omitted covariates and from downweighting external summary information that is less compatible with the internal data through a penalization. The computational burden associated with searching for the optimal tuning parameter for the penalization is reduced by using adaptive weights and by using an information criterion when searching for the optimal tuning parameter. Simulation studies show that the proposed estimator is robust against various types of population distribution heterogeneity and also gains efficiency compared to direct maximum likelihood estimation. The method is applied to improve a logistic regression model that predicts high-grade prostate cancer making use of parameter estimates from 2 external models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Homogeneity pursuit and variable selection in regression models for multivariate abundance data.
- Author
-
Hui, Francis K C, Maestrini, Luca, and Welsh, Alan H
- Subjects
REGRESSION analysis, HOMOGENEITY, GENERALIZED estimating equations, PARSIMONIOUS models, OCEAN bottom, RECORD collecting
- Abstract
When building regression models for multivariate abundance data in ecology, it is important to allow for the fact that the species are correlated with each other. Moreover, there is often evidence that species exhibit some degree of homogeneity in their responses to each environmental predictor, and that most species are informed by only a subset of predictors. We propose a generalized estimating equation (GEE) approach for simultaneous homogeneity pursuit (i.e., grouping species with similar coefficient values while allowing differing groups for different covariates) and variable selection in regression models for multivariate abundance data. Using GEEs allows us to straightforwardly account for between-response correlations through a (reduced-rank) working correlation matrix. We augment the GEE with both adaptive fused lasso- and adaptive lasso-type penalties, which aim to cluster the species-specific coefficients within each covariate and encourage differing levels of sparsity across the covariates, respectively. Numerical studies demonstrate the strong finite sample performance of the proposed method relative to several existing approaches for modeling multivariate abundance data. Applying the proposed method to presence–absence records collected along the Great Barrier Reef in Australia reveals both a substantial degree of homogeneity and sparsity in species-environmental relationships. We show that this leads to a more parsimonious model for understanding the environmental drivers of seabed biodiversity, and results in stronger out-of-sample predictive performance relative to methods that do not accommodate such features. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
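The two penalties described above can be written, in adaptive-weight form, as follows (a sketch consistent with the abstract, not the paper's exact notation; $\beta_{jk}$ is the coefficient of covariate $k$ for species $j$):

```latex
\lambda_{1}\sum_{k}\sum_{j<j'} w_{jj'k}\,\bigl|\beta_{jk}-\beta_{j'k}\bigr|
\;+\;
\lambda_{2}\sum_{j,k} v_{jk}\,\bigl|\beta_{jk}\bigr|.
```

The fused term pulls species into groups sharing a common coefficient per covariate (homogeneity pursuit), while the lasso term zeroes species-covariate effects entirely (variable selection).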
16. Robust Inference and Modeling of Mean and Dispersion for Generalized Linear Models.
- Author
-
Ponnet, Jolien, Segaert, Pieter, Van Aelst, Stefan, and Verdonck, Tim
- Subjects
DISTRIBUTION (Probability theory), INFERENCE (Logic), DISPERSION (Chemistry), EXPONENTIAL families (Statistics), LIKELIHOOD ratio tests
- Abstract
Generalized Linear Models (GLMs) are a popular class of regression models when the responses follow a distribution in the exponential family. In real data the variability often deviates from the relation imposed by the exponential family distribution, which results in over- or underdispersion. Dispersion effects may even vary in the data. Such datasets do not follow the traditional GLM distributional assumptions, leading to unreliable inference. Therefore, the family of double exponential distributions has been proposed, which models both the mean and the dispersion as a function of covariates in the GLM framework. Since standard maximum likelihood inference is highly susceptible to the possible presence of outliers, we propose the robust double exponential (RDE) estimator. Asymptotic properties and robustness of the RDE estimator are discussed. A generalized robust quasi-deviance measure is introduced which constitutes the basis for a stable robust test. Simulations for binomial and Poisson models show the excellent performance of the RDE estimator and corresponding robust tests. Penalized versions of the RDE estimator are developed for sparse estimation with high-dimensional data and for flexible estimation via generalized additive models (GAMs). Real data applications illustrate the relevance of robust inference for dispersion effects in GLMs and GAMs. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Pepal: Penalizing multimedia breaches and partial leakages.
- Author
-
Mangipudi, Easwar Vivek, Rao, Krutarth, Clark, Jeremy, and Kate, Aniket
- Subjects
LEAKAGE, BLOCKCHAINS, CRYPTOCURRENCIES, WATERMARKS
- Abstract
Storage of media files by users at a third party, like cloud services or escrows, is increasing every day, along with the risk of stored files being leaked through breaches from third parties. In this article, we study the problem of handling either intentional or unintentional multimedia storage breaches by the entity hosting the data. To address the problem, we design the Pepal protocol, in which a sender forwarding multimedia data to a receiver can penalize the receiver through loss of cryptocurrency even for partial data leakage. Pepal achieves this by augmenting a blockchain on-chain smart contract between the two parties with an off-chain cryptographic protocol. The protocol involves a new primitive, doubly oblivious transfer (DOT), which, when combined with robust watermarking and a claim-or-refund blockchain contract, provides the necessary framework for a provably secure protocol. Any public data leakage by the receiver leads to the sender learning the receiver's cryptocurrency secret key, which allows the sender to transfer the receiver's claim-or-refund deposit. The Pepal protocol also ensures that a malicious sender cannot steal the deposit, even by leaking the original multimedia document in any form. We analyze our DOT-based design against partial adversarial leakages and show it to be robust even against small leakages. The prototype implementation of our Pepal protocol shows our system to be efficient and easy to deploy in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Sensitivity Analysis and Filtering of Machinable Parts Using Density-Based Topology Optimization
- Author
-
Abraham Vadillo Morillas, Jesús Meneses Alonso, Alejandro Bustos Caballero, and Cristina Castejón Sisamón
- Subjects
topology optimization, manufacture filter, penalization, filter radius, machining, Technology, Engineering (General). Civil engineering (General), TA1-2040, Biology (General), QH301-705.5, Physics, QC1-999, Chemistry, QD1-999
- Abstract
Topology optimization has become a popular tool for designing optimal shapes while meeting specific objectives and restrictions. However, the shape resulting from the optimization process may not be easy to manufacture using typical methods like machining and may require interpretation and validation. Additionally, the final shape depends on the chosen parameters. In this study, we conduct a sensitivity analysis of the main parameters involved in 3D topology optimization—penalization and filter radius—focusing on the density-based method. We analyze the features and characteristics of the results, concluding that a machinable part requiring little interpretation is not attainable with by-default topology optimization. Therefore, we propose a new method for obtaining more manufacturable and easily interpretable parts. The main goal is to assist designers in choosing appropriate parameters and understanding what to consider when seeking optimized shapes, giving them a new plug-and-play tool for manufacturable designs. We chose the density-based topology optimization method due to its popularity in commercial packages, so the conclusions may directly influence designers' work. Finally, we verify the study results through different cases to ensure the validity of the conclusions.
- Published
- 2024
- Full Text
- View/download PDF
19. On the Bayesian Interpretation of Penalized Statistical Estimators
- Author
-
Kalina, Jan, Peštová, Barbora, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Rutkowski, Leszek, editor, Scherer, Rafał, editor, Korytkowski, Marcin, editor, Pedrycz, Witold, editor, Tadeusiewicz, Ryszard, editor, and Zurada, Jacek M., editor
- Published
- 2023
- Full Text
- View/download PDF
20. Amblyopia Management
- Author
-
Özkan, Seyhan B., Özdek, Şengül, editor, Berrocal, Audina, editor, and Spandau, Ulrich, editor
- Published
- 2023
- Full Text
- View/download PDF
21. Lewy-Stampacchia inequality for noncoercive parabolic obstacle problems
- Author
-
Fernando Farroni, Gioconda Moscariello, and Gabriella Zecca
- Subjects
noncoercive evolution problems, obstacle problems, penalization, Lewy–Stampacchia inequality, Marcinkiewicz spaces, Applied mathematics. Quantitative methods, T57-57.97
- Abstract
We investigate the obstacle problem for a class of nonlinear and noncoercive parabolic variational inequalities whose model is a Leray–Lions type operator having singularities in the coefficients of the lower order terms. We prove the existence of a solution to the obstacle problem satisfying a Lewy-Stampacchia type inequality.
- Published
- 2023
- Full Text
- View/download PDF
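The penalization method referred to above replaces the obstacle constraint $u \ge \psi$ by a penalty term that switches on only where the obstacle is violated; schematically, for a parabolic operator $A$:

```latex
\partial_{t}u_{\varepsilon} + A\bigl(u_{\varepsilon}\bigr) = f + \frac{1}{\varepsilon}\,\bigl(\psi - u_{\varepsilon}\bigr)^{+}.
```

One then passes to the limit $\varepsilon \to 0$ to recover a solution of the variational inequality; the noncoercive, singular-coefficient setting of the paper is what makes this limit delicate.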
22. Bayesian Regularized SEM: Current Capabilities and Constraints
- Author
-
Sara van Erp
- Subjects
structural equation modeling, Bayesian, regularization, penalization, shrinkage prior, Psychology, BF1-990
- Abstract
An important challenge in statistical modeling is to balance how well our model explains the phenomenon under investigation with the parsimony of this explanation. In structural equation modeling (SEM), penalization approaches that add a penalty term to the estimation procedure have been proposed to achieve this balance. An alternative to the classical penalization approach is Bayesian regularized SEM in which the prior distribution serves as the penalty function. Many different shrinkage priors exist, enabling great flexibility in terms of shrinkage behavior. As a result, different types of shrinkage priors have been proposed for use in a wide variety of SEMs. However, the lack of a general framework and the technical details of these shrinkage methods can make it difficult for researchers outside the field of (Bayesian) regularized SEM to understand and apply these methods in their own work. Therefore, the aim of this paper is to provide an overview of Bayesian regularized SEM, with a focus on the types of SEMs in which Bayesian regularization has been applied as well as available software implementations. Through an empirical example, various open-source software packages for (Bayesian) regularized SEM are illustrated and all code is made available online to aid researchers in applying these methods. Finally, reviewing the current capabilities and constraints of Bayesian regularized SEM identifies several directions for future research.
- Published
- 2023
- Full Text
- View/download PDF
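The "prior as penalty" correspondence at the heart of this entry is easiest to see in the lasso case: the penalized estimate coincides with the posterior mode under independent Laplace (double-exponential) shrinkage priors,

```latex
\hat{\theta} = \operatorname*{arg\,max}_{\theta}\;\Bigl\{\ell(\theta) - \lambda\sum_{j}\lvert\theta_{j}\rvert\Bigr\}
\quad\Longleftrightarrow\quad
p(\theta_{j}) \propto \exp\bigl(-\lambda\lvert\theta_{j}\rvert\bigr).
```

Other shrinkage priors (horseshoe, spike-and-slab, and variants) swap in different implicit penalties, which is the flexibility the review surveys.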
23. Space-Time Mixed System Formulation of Phase-Field Fracture Optimal Control Problems.
- Author
-
Khimin, Denis, Steinbach, Marc Christian, and Wick, Thomas
- Subjects
SPACETIME, NEWTON-Raphson method, FUNCTION spaces, CRACK propagation (Fracture mechanics)
- Abstract
In this work, space-time formulations and Galerkin discretizations for phase-field fracture optimal control problems are considered. The fracture irreversibility constraint is formulated on the time-continuous level and is regularized by means of penalization. The optimization scheme is formulated in terms of the reduced approach and then solved with a Newton method. To this end, the state, adjoint, tangent, and adjoint Hessian equations are derived. The key focus is on the design of appropriate function spaces and the rigorous justification of all Fréchet derivatives that require fourth-order regularizations. Therein, a second-order time derivative on the phase-field variable appears, which is reformulated as a mixed first-order-in-time system. These derivations are carefully established for all four equations. Finally, the corresponding time-stepping schemes are derived by employing a dG(r) discretization in time. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
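One common form of the penalization mentioned above (a sketch; sign conventions for the phase field $\varphi$ vary) enforces the irreversibility constraint $\partial_t \varphi \le 0$, i.e. the crack may only grow, by adding a quadratic penalty on the violating part of the time derivative:

```latex
\frac{\gamma}{2}\int_{0}^{T}\!\!\int_{\Omega}\Bigl(\bigl[\partial_{t}\varphi\bigr]^{+}\Bigr)^{2}\,\mathrm{d}x\,\mathrm{d}t,
\qquad
[s]^{+} = \max(s, 0).
```

Differentiating such a term in the adjoint calculus plausibly brings in the second-order time derivative on the phase field that the paper reformulates as a mixed first-order-in-time system.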
24. A joint estimation for the high-dimensional regression modeling on stratified data.
- Author
-
Yimiao Gao and Yuehan Yang
- Subjects
REGRESSION analysis, DATA modeling, GENE expression
- Abstract
This paper considers the estimation of regression models when data is collected in a stratified mode using a categorical variable. Such data appears frequently across fields, since data is often collected from various sources. Most of the literature analyzes the data assuming that the stratification information is known, yet this information is not always available. In this paper, we assume the stratification information is unknown. The proposed joint estimation combines a clustering technique with penalized regression modeling, so that it can be applied to high-dimensional stratified data without this specific information. We show that the proposed method enjoys desirable asymptotic properties. Simulations and empirical studies confirm that our method outperforms methods without stratification. We apply the proposed method to gene expression data and temperature data, obtaining some meaningful results. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. Composing Diverse Ensembles of Convolutional Neural Networks by Penalization.
- Author
-
Harangi, Balazs, Baran, Agnes, Beregi-Kovacs, Marcell, and Hajdu, Andras
- Subjects
CONVOLUTIONAL neural networks, IMAGE recognition (Computer vision), PUNISHMENT, ERROR functions, IMAGE analysis
- Abstract
Ensemble-based systems are well known to have the capacity to outperform individual approaches if the ensemble members are sufficiently accurate and diverse. This paper investigates how an efficient ensemble of deep convolutional neural networks (CNNs) can be created by forcing them to adjust their parameters during the training process to increase diversity in their decisions. As a new theoretical approach to reach this aim, we join the member neural architectures via a fully connected layer and insert a new correlation penalty term in the loss function to discourage them from operating similarly. With this complementary term, we implement the standard guideline of ensemble creation, increasing the members' diversity, for CNNs in a more detailed and flexible way than similar existing techniques. As for applicability, we show that our approach can be efficiently used in various classification tasks. More specifically, we demonstrate its performance on challenging medical image analysis and natural image classification problems. Besides the theoretical considerations and foundations, our experimental findings suggest that the proposed technique is competitive. On the one hand, the classification rate of the ensemble trained in this way outperformed all the individual accuracies of the state-of-the-art member CNNs according to the standard error functions of these application domains. On the other hand, we also validate that adding the penalization term makes the ensemble members more diverse and raises their accuracies. Moreover, we performed a full comparative analysis, including other state-of-the-art ensemble-based approaches recommended for the same classification tasks. This comparative study also confirmed the superiority of our method, as it outperformed the current solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
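A minimal sketch of a diversity-promoting training loss in this spirit (a negative-correlation-style penalty on member errors; the paper's exact correlation penalty term and fully connected joining layer are not reproduced here):

```python
import torch
import torch.nn.functional as F

def ensemble_loss(member_logits, target, lam=0.1):
    """Average cross-entropy across members plus a penalty that grows
    when members make correlated errors.
    member_logits: list of (batch, classes) tensors; target: (batch,) labels."""
    ce = sum(F.cross_entropy(z, target) for z in member_logits) / len(member_logits)
    probs = [torch.softmax(z, dim=1) for z in member_logits]
    onehot = F.one_hot(target, probs[0].shape[1]).float()
    errors = [p - onehot for p in probs]          # each member's error vector
    penalty = 0.0
    m = len(errors)
    for i in range(m):
        for j in range(i + 1, m):
            # Inner product of error vectors: positive when members err together.
            penalty = penalty + (errors[i] * errors[j]).sum(dim=1).mean()
    return ce + lam * penalty
```

Minimizing the cross-product of errors pushes members toward complementary mistakes, which is the diversity effect the abstract describes.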
26. Directional and approximate efficiency in set optimization.
- Author
-
Durea, Marius and Florea, Elena-Andreea
- Abstract
We investigate, in the framework of set optimization, some issues that are well studied in the vectorial setting, namely penalization procedures, properness of solutions, and optimality conditions on primal spaces. With this study we therefore aim to complement the literature dedicated to set optimization with results that have a well-established correspondence in classical vector optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
27. Backward doubly-stochastic differential equations with mean reflection.
- Author
-
Hongchao Qian and Jun Peng
- Subjects
DIFFERENTIAL equations, STOCHASTIC differential equations, STOCHASTIC partial differential equations
- Abstract
This article studies a class of mean-reflected backward doubly stochastic differential equations (MR-BDSDEs). The authors establish the existence and uniqueness of solutions for these equations, with a focus on the penalization method. They also provide a probabilistic interpretation of the classical solutions of mean-reflected stochastic partial differential equations (MR-SPDEs) in terms of MR-BDSDEs. The article contributes to the broader field of probability, uncertainty, and quantitative risk. [Extracted from the article]
- Published
- 2023
- Full Text
- View/download PDF
28. Shrinkage estimators of the spatial relative risk function.
- Author
-
Hazelton, Martin L.
- Subjects
DENSITY
- Abstract
The spatial relative risk function describes differences in the geographical distribution of two types of points, such as locations of cases and controls in an epidemiological study. It is defined as the ratio of the two underlying densities. Estimation of spatial relative risk is typically done using kernel estimates of these densities, but this procedure is often challenging in practice because of the high degree of spatial inhomogeneity in the distributions. This makes it difficult to obtain estimates of the relative risk that are stable in areas of sparse data while retaining necessary detail elsewhere, and consequently difficult to distinguish true risk hotspots from stochastic bumps in the risk function. We study shrinkage estimators of the spatial relative risk function to address these problems. In particular, we propose a new lasso‐type estimator that shrinks a standard kernel estimator of the log‐relative risk function towards zero. The shrinkage tuning parameter can be adjusted to help quantify the degree of evidence for the existence of risk hotspots, or selected to optimize a cross‐validation criterion. The performance of the lasso estimator is encouraging both on a simulation study and on real‐world examples. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
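A natural lasso-type shrinkage of the kernel log-relative-risk estimate, consistent with the description above, is elementwise soft-thresholding toward zero:

```latex
\tilde{\rho}_{\lambda}(x) = \operatorname{sign}\bigl(\hat{\rho}(x)\bigr)\,\bigl(\lvert\hat{\rho}(x)\rvert - \lambda\bigr)^{+},
\qquad
\hat{\rho}(x) = \log \hat{f}(x) - \log \hat{g}(x).
```

Regions where the estimated log-risk does not exceed $\lambda$ in magnitude are flattened to zero, so the bumps that survive are easier to read as genuine hotspots.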
29. Variable selection through adaptive elastic net for proportional odds model
- Author
-
Wang, Chunxiang, Li, Nan, Diao, Hongbin, and Lu, Lanqing
- Published
- 2024
- Full Text
- View/download PDF
30. Sparse-penalized deep neural networks estimator under weak dependence
- Author
-
Kengne, William and Wade, Modou
- Published
- 2024
- Full Text
- View/download PDF
31. A novel penalized inverse‐variance weighted estimator for Mendelian randomization with applications to COVID‐19 outcomes.
- Author
-
Xu, Siqi, Wang, Peng, Fung, Wing Kam, and Liu, Zhonghua
- Subjects
RANDOMIZATION (Statistics), COVID-19, CONFOUNDING variables, PERIPHERAL vascular diseases, BODY mass index, GENETIC variation
- Abstract
Mendelian randomization (MR) utilizes genetic variants as instrumental variables (IVs) to estimate the causal effect of an exposure variable on an outcome of interest even in the presence of unmeasured confounders. However, the popular inverse‐variance weighted (IVW) estimator could be biased in the presence of weak IVs, a common challenge in MR studies. In this article, we develop a novel penalized inverse‐variance weighted (pIVW) estimator, which adjusts the original IVW estimator to account for the weak IV issue by using a penalization approach to prevent the denominator of the pIVW estimator from being close to zero. Moreover, we adjust the variance estimation of the pIVW estimator to account for the presence of balanced horizontal pleiotropy. We show that the recently proposed debiased IVW (dIVW) estimator is a special case of our proposed pIVW estimator. We further prove that the pIVW estimator has smaller bias and variance than the dIVW estimator under some regularity conditions. We also conduct extensive simulation studies to demonstrate the performance of the proposed pIVW estimator. Furthermore, we apply the pIVW estimator to estimate the causal effects of five obesity‐related exposures on three coronavirus disease 2019 (COVID‐19) outcomes. Notably, we find that hypertensive disease is associated with an increased risk of hospitalized COVID‐19; and peripheral vascular disease and higher body mass index are associated with increased risks of COVID‐19 infection, hospitalized COVID‐19, and critically ill COVID‐19. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
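For context, the standard IVW estimator that pIVW adjusts combines per-variant ratio estimates with inverse-variance weights ($\hat{\gamma}_j$: SNP-exposure estimate; $\hat{\Gamma}_j$: SNP-outcome estimate with standard error $\sigma_{\Gamma_j}$):

```latex
\hat{\beta}_{\mathrm{IVW}}
=
\frac{\sum_{j}\hat{\gamma}_{j}\hat{\Gamma}_{j}\big/\sigma_{\Gamma_{j}}^{2}}
     {\sum_{j}\hat{\gamma}_{j}^{2}\big/\sigma_{\Gamma_{j}}^{2}} .
```

With weak instruments the denominator can be near zero, which is the instability the pIVW estimator addresses by penalizing the denominator away from zero.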
32. Using information criteria to select smoothing parameters when analyzing survival data with time-varying coefficient hazard models.
- Author
-
Luo, Lingfeng, He, Kevin, Wu, Wenbo, and Taylor, Jeremy MG
- Subjects
AKAIKE information criterion, PANCREATIC cancer, CONFIDENCE intervals, HAZARDS
- Abstract
Analyzing the large-scale survival data from the National Cancer Institute's Surveillance, Epidemiology, and End Results (SEER) Program may help guide the management of cancer. Detecting and characterizing the time-varying effects of factors collected at the time of diagnosis could reveal important and useful patterns. However, fitting a time-varying effect model by maximizing the partial likelihood with such large-scale survival data is not feasible with most existing software. Moreover, estimating time-varying coefficients using spline-based approaches requires a moderate number of knots, which may lead to unstable estimation and over-fitting issues. To resolve these issues, adding a penalty term greatly aids estimation. The selection of penalty smoothing parameters is difficult in this time-varying setting, as traditional approaches such as the Akaike information criterion do not work, while cross-validation methods carry a heavy computational burden, leading to unstable selections. We propose modified information criteria to determine the smoothing parameter and a parallelized Newton-based algorithm for estimation. We conduct simulations to evaluate the performance of the proposed method. We find that penalization with the smoothing parameter chosen by a modified information criterion is effective at reducing the mean squared error of the estimated time-varying coefficients. Compared to a number of alternatives, we find that the estimates of the variance derived from Bayesian considerations have the best coverage rates of confidence intervals. We apply the method to SEER head-and-neck, colon, prostate, and pancreatic cancer data and detect the time-varying nature of various risk factors. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
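The selection rule described above is, schematically, an AIC-type criterion with the model dimension replaced by an effective degrees of freedom for the penalized fit (a sketch of the general form, not the paper's exact modification):

```latex
\mathrm{IC}(\lambda) = -2\,\ell\bigl(\hat{\beta}_{\lambda}\bigr) + \kappa \cdot \mathrm{df}(\lambda),
```

where $\mathrm{df}(\lambda)$ is typically the trace of the smoother (hat-type) matrix of the penalized problem and $\kappa$ calibrates the complexity charge; the modified criteria adjust this trade-off for the time-varying-coefficient setting.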
33. Bayesian Regularized SEM: Current Capabilities and Constraints.
- Author
-
van Erp, Sara
- Subjects
BAYESIAN analysis, SCANNING electron microscopy, EXPANSION & contraction of concrete, INFECTION prevention, MENTAL health
- Abstract
An important challenge in statistical modeling is to balance how well our model explains the phenomenon under investigation with the parsimony of this explanation. In structural equation modeling (SEM), penalization approaches that add a penalty term to the estimation procedure have been proposed to achieve this balance. An alternative to the classical penalization approach is Bayesian regularized SEM in which the prior distribution serves as the penalty function. Many different shrinkage priors exist, enabling great flexibility in terms of shrinkage behavior. As a result, different types of shrinkage priors have been proposed for use in a wide variety of SEMs. However, the lack of a general framework and the technical details of these shrinkage methods can make it difficult for researchers outside the field of (Bayesian) regularized SEM to understand and apply these methods in their own work. Therefore, the aim of this paper is to provide an overview of Bayesian regularized SEM, with a focus on the types of SEMs in which Bayesian regularization has been applied as well as available software implementations. Through an empirical example, various open-source software packages for (Bayesian) regularized SEM are illustrated and all code is made available online to aid researchers in applying these methods. Finally, reviewing the current capabilities and constraints of Bayesian regularized SEM identifies several directions for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. Robust estimation in regression and classification methods for large dimensional data.
- Author
-
Zhang, Chunming, Zhu, Lixing, and Shen, Yanbo
- Subjects
OUTLIER detection, MACHINE learning, REGRESSION analysis, STATISTICS, DATA analysis, CLASSIFICATION
- Abstract
Statistical data analysis and machine learning heavily rely on error measures for regression, classification, and forecasting. Bregman divergence (BD) is a widely used family of error measures, but it is not robust to outlying observations or high leverage points in large- and high-dimensional datasets. In this paper, we propose a new family of robust Bregman divergences called "robust-BD" that are less sensitive to data outliers. We explore their suitability for sparse large-dimensional regression models with incompletely specified response variable distributions and propose a new estimate called the "penalized robust-BD estimate" that achieves the same oracle property as ordinary non-robust penalized least-squares and penalized-likelihood estimates. We conduct extensive numerical experiments to evaluate the performance of the proposed penalized robust-BD estimate and compare it with classical approaches, showing that our proposed method improves on existing approaches. Finally, we analyze a real dataset to illustrate the practicality of our proposed method. Our findings suggest that the proposed method can be a useful tool for robust statistical data analysis and machine learning in the presence of outliers and large-dimensional data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
35. Comparing Bayesian Variable Selection to Lasso Approaches for Applications in Psychology.
- Author
-
Bainter, Sierra A., McCauley, Thomas G., Fahmy, Mahmoud M., Goodman, Zachary T., Kupis, Lauren B., and Rao, J. Sunil
- Subjects
RANDOM variables, PSYCHOLOGICAL research, PSYCHOLOGY, SAMPLE size (Statistics), MENTAL depression, FALSE memory syndrome
- Abstract
In the current paper, we review existing tools for solving variable selection problems in psychology. Modern regularization methods such as lasso regression have recently been introduced in the field and are incorporated into popular methodologies, such as network analysis. However, several recognized limitations of lasso regularization may limit its suitability for psychological research. In this paper, we compare the properties of lasso approaches used for variable selection to Bayesian variable selection approaches. In particular, we highlight advantages of stochastic search variable selection (SSVS) that make it well suited for variable selection applications in psychology. We demonstrate these advantages and contrast SSVS with lasso-type penalization in an application to predict depression symptoms in a large sample and an accompanying simulation study. We investigate the effects of sample size, effect size, and patterns of correlation among predictors on rates of correct and false inclusion and bias in the estimates. SSVS as investigated here is reasonably computationally efficient and has power to detect moderate effects in small sample sizes (or small effects in moderate sample sizes), while protecting against false inclusion and without over-penalizing true effects. We recommend SSVS as a flexible framework that is well-suited for the field, discuss limitations, and suggest directions for future development. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
36. Lewy-Stampacchia inequality for noncoercive parabolic obstacle problems.
- Author
-
Farroni, Fernando, Moscariello, Gioconda, and Zecca, Gabriella
- Subjects
MATHEMATICAL singularities, NONLINEAR analysis, PROBLEM solving, COEFFICIENTS (Statistics), MATHEMATICAL formulas
- Abstract
We investigate the obstacle problem for a class of nonlinear and noncoercive parabolic variational inequalities whose model is a Leray-Lions type operator having singularities in the coefficients of the lower order terms. We prove the existence of a solution to the obstacle problem satisfying a Lewy-Stampacchia type inequality. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
37. Fixed Effects Testing in High-Dimensional Linear Mixed Models
- Author
-
Bradic, Jelena, Claeskens, Gerda, and Gueuning, Thomas
- Subjects
Misspecification, Penalization, p-Values, Random effects, Robustness, stat.ME, cs.LG, math.ST, stat.ML, stat.TH, Statistics & Probability, Statistics, Econometrics, Demography
- Abstract
Many scientific and engineering challenges -- ranging from pharmacokinetic drug dosage allocation and personalized medicine to marketing mix (4Ps) recommendations -- require an understanding of the unobserved heterogeneity in order to develop the best decision-making processes. In this paper, we develop a hypothesis test and the corresponding p-value for testing for the significance of the homogeneous structure in linear mixed models. A robust matching moment construction is used for creating a test that adapts to the size of the model sparsity. When unobserved heterogeneity at a cluster level is constant, we show that our test is both consistent and unbiased even when the dimension of the model is extremely high. Our theoretical results rely on a new family of adaptive sparse estimators of the fixed effects that do not require consistent estimation of the random effects. Moreover, our inference results do not require consistent model selection. We showcase that moment matching can be extended to nonlinear mixed effects models and to generalized linear mixed effects models. In numerical and real data experiments, we find that the developed method is extremely accurate, that it adapts to the size of the underlying model and is decidedly powerful in the presence of irrelevant covariates.
- Published
- 2020
38. Treatment of amblyopia: an update
- Author
-
Swarna Biseria Gupta, Yuri Kashiv, and Himanshu Gaikwad
- Subjects
amblyopia, binocular vision, monocular vision, newer strategies, occlusion, penalization, Medicine
- Abstract
Amblyopia, the primary cause of one-sided vision impairment among children globally, occurs at a rate of 3.7%. It stems from early visual deprivation or inadequate focusing in one eye, creating an unevenness in visual information sent to the brain’s visual cortex. Consequently, vision diminishes in the affected eye, impacting the coordination between both eyes. When signals from one eye are unclear, the brain inhibits input from that eye, disrupting the visual pathway. Apart from impacting visual coordination, amblyopia can influence tasks like eye-hand coordination, reading, and an individual’s self-perception. Several therapies have emerged for treating amblyopia in both children and adults. Traditional treatments mainly penalized the stronger eye through patching or medicinal penalization. Yet, these methods have limitations concerning their effectiveness and patient comfort, affecting individuals and their families. Recent studies indicate that people with amblyopia retain binocular cortical mechanisms responsive to varying visual stimuli levels. As a result, a more practical approach might involve simultaneously stimulating both eyes to enhance vision in the weaker eye, diminish suppression, and bolster binocular vision.
- Published
- 2023
- Full Text
- View/download PDF
39. Variable selection in regression‐based estimation of dynamic treatment regimes.
- Author
-
Bian, Zeyu, Moodie, Erica E. M., Shortreed, Susan M., and Bhatnagar, Sahir
- Subjects
LEAST squares, PROGNOSIS, THERAPEUTICS, INDIVIDUALIZED medicine
- Abstract
Dynamic treatment regimes (DTRs) consist of a sequence of decision rules, one per stage of intervention, that aim to recommend effective treatments for individual patients according to patient information history. DTRs can be estimated from models which include interactions between treatment and a (typically small) number of covariates which are often chosen a priori. However, with increasingly large and complex data being collected, it can be difficult to know which prognostic factors might be relevant in the treatment rule. Therefore, a more data‐driven approach to select these covariates might improve the estimated decision rules and simplify models to make them easier to interpret. We propose a variable selection method for DTR estimation using penalized dynamic weighted least squares. Our method has the strong heredity property, that is, an interaction term can be included in the model only if the corresponding main terms have also been selected. We show our method has both the double robustness property and the oracle property theoretically; and the newly proposed method compares favorably with other variable selection approaches in numerical studies. We further illustrate the proposed method on data from the Sequenced Treatment Alternatives to Relieve Depression study. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
40. 15-M Mobilizations and the penalization of counter-hegemonic protest in contemporary Spain.
- Author
-
Calvo, Kerman and Romeo Echeverría, Aitor
- Subjects
POLITICAL participation, AUSTERITY, CRIMINOLOGY, PUBLIC demonstrations, ANXIETY, POLICE
- Abstract
This article discusses 15-M and anti-austerity mobilizations in Spain from the perspective of repression and penalization. The literature has paid a great deal of attention to the consequences of this cycle of protest in relation to the quality of democratic participation and governance; it could be argued that the 15-M movement has raised the standards for key aspects of Spanish democracy. In articulating new counter-hegemonic claims, however, 15-M mobilizations have created an opportunity for new forms of repression. Drawing on criminology, socio-legal studies, and mobilization literature, we argue that this cycle of protest has been penalized. This involves a combination of technologies of repression that include invasive policing, securitization, and criminalization. Penalization needs to be seen as a dissent-suppressing mechanism, a negative response by political authorities and private actors that thrives when societies suffer from widespread anxieties about insecurity and crime. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
41. The extent of partially resolving uncertainty in assessing coherent conditional plausibilities.
- Author
-
Petturiti, Davide and Vantaggi, Barbara
- Subjects
CONDITIONAL probability
- Abstract
Handling uncertainty and reasoning under partial knowledge are challenging tasks that require dealing with coherent assessments and their extensions. Plausibility theory is shown to rest upon the principle of partially resolving uncertainty due to Jaffray, together with a systematically optimistic behavior. This means that we allow situations in which the agent may only acquire the information that a non-impossible event occurs, without knowing which is the true state of the world. This leads to assuming that a target event is plausibly true if it is compatible with the acquired piece of information. The aim of the paper is to provide coherence conditions for a conditional plausibility assessment (namely, Pl-coherence), by referring to a suitable axiomatic definition based on Dempster's rule of conditioning. We provide different equivalent notions of Pl-coherence in terms of consistency, betting scheme, and penalization that, as a by-product, highlight different interpretations. We then specialize the Pl-coherence conditions to the subclasses of (finitely additive) conditional probabilities and (finitely maxitive) conditional possibilities. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
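The axiomatic definition referred to above is based on Dempster's rule of conditioning, which for a plausibility function Pl reads:

```latex
\mathrm{Pl}(A \mid B) = \frac{\mathrm{Pl}(A \cap B)}{\mathrm{Pl}(B)},
\qquad \mathrm{Pl}(B) > 0 .
```

A target event is thus judged plausible exactly when it is compatible with the conditioning information, matching the "partially resolving uncertainty" reading given in the abstract.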
42. SPARSE AND LOW-RANK MATRIX QUANTILE ESTIMATION WITH APPLICATION TO QUADRATIC REGRESSION.
- Author
-
Wenqi Lu, Zhongyi Zhu, and Heng Lian
- Subjects
QUANTILE regression, LOW-rank matrices, SPARSE matrices
- Abstract
This study examines matrix quantile regression where the covariate is a matrix and the response is a scalar. Although the statistical estimation of matrix regression is an active field of research, few studies examine quantile regression with matrix covariates. We propose an estimation procedure based on convex regularizations in a high-dimensional setting. In order to reduce the dimensionality, the coefficient matrix is assumed to be low rank and/or sparse. Thus, we impose two regularizers to encourage different low-dimensional structures. We develop the asymptotic properties and an implementation based on the incremental proximal gradient algorithm. We then apply the proposed estimator to quadratic quantile regression, and demonstrate its advantages using simulations and a real-data analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
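To make the estimation procedure of entry 42 above concrete, here is a minimal numpy sketch: a (sub)gradient step on the quantile check loss followed by the two proximal maps, one encouraging sparsity and one encouraging low rank. This is a hedged illustration of the general technique, not the authors' incremental proximal gradient implementation; all names, shapes, and step sizes are placeholders:

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding: prox of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def svt(B, t):
    """Singular value thresholding: prox of t * nuclear norm."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def quantile_subgrad(r, tau):
    """Subgradient of the check loss rho_tau at residuals r."""
    return np.where(r > 0, -tau, 1.0 - tau)

def fit(X, y, tau=0.5, lam_l1=0.1, lam_nuc=0.1, step=1e-2, iters=500):
    """Quantile loss with L1 + nuclear-norm penalties on the coefficient
    matrix B. X: (n, p, q) array of matrix covariates; y: (n,) responses."""
    n, p, q = X.shape
    B = np.zeros((p, q))
    for _ in range(iters):
        r = y - np.einsum('ipq,pq->i', X, B)       # residuals
        g = np.einsum('i,ipq->pq', quantile_subgrad(r, tau), X) / n
        B = B - step * g                            # (sub)gradient step
        B = soft(B, step * lam_l1)                  # prox for sparsity
        B = svt(B, step * lam_nuc)                  # prox for low rank
    return B
```

Applying the two proximal maps back to back is a common heuristic when a composite penalty has no closed-form joint prox; it is one plausible reading of the "two regularizers" in the abstract, not a claim about the paper's exact scheme.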
43. Kinetic plasma-wall interaction using immersed boundary conditions
- Author
-
Yann Munschy, Emily Bourne, Guilhem Dif-Pradalier, Peter Donnel, Philippe Ghendrih, Virginie Grandgirard, and Yanick Sarazin
- Subjects
Vlasov-Poisson system ,immersed boundary conditions ,penalization ,gyrokinetics ,kinetic sheath ,kinetic plasma wall interaction ,Nuclear and particle physics. Atomic energy. Radioactivity ,QC770-798 - Abstract
The interaction between a plasma and a solid surface is studied in a (1D-1V) kinetic approach using immersed boundary conditions and penalization to model the wall. Two treatments of the penalized wall region are investigated, which either allow currents to flow within the material boundary or do not. Essential kinetic aspects of sheath physics are recovered in both cases and their parametric dependencies investigated. Importantly, we show how the two approaches can be reconciled when accounting for relevant kinetic effects. Non-Maxwellian features of the ion and electron distribution functions are essential to capture the value of the potential drop in the sheath. These features lead to a sheath heat-transmission factor that is 60% larger than usually predicted for ions and 35% larger for electrons. The role of collisions is discussed, as are means of incorporating minimal yet relevant kinetic sheath physics into the gyrokinetic framework.
- Published
- 2024
- Full Text
- View/download PDF
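For readers unfamiliar with the technique in entry 43 above, volume penalization typically imposes the wall by adding a masked relaxation term to the kinetic equation. A generic form, with notation assumed rather than taken from the paper, is:

```latex
% Generic volume-penalized Vlasov equation (1D-1V): chi(x) is the wall
% mask (1 inside the material, 0 in the plasma), eta << 1 the penalization
% parameter, f_w a prescribed wall distribution, C(f) the collision operator.
\[
  \partial_t f + v\,\partial_x f + \frac{q}{m}\,E\,\partial_v f
  \;=\; \mathcal{C}(f) \;-\; \frac{\chi(x)}{\eta}\,\bigl(f - f_w\bigr)
\]
```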
44. The Role of Power in the Process of Criminalization-Penalization
- Author
-
Taher Tohidi and Mohammad Ashouri
- Subjects
power ,public policy ,criminalization ,penalization ,Law ,Criminal law and procedure ,K5000-5582 - Abstract
Reflection on social relations reveals the footprint of power; in other words, power has a fluid presence in all spheres of human life. Human societies have accepted the power of Mehr by establishing a political system to order their affairs, and they have sought to manage those affairs through various institutions. Once the principle of separation of powers is accepted in a society or sovereign territory, the legislative, executive, and judicial institutions work together, and it is evident that these institutions are also shaped by the ruling political context. The institution of criminal legislation in every society articulates legislative policy, draws normative boundaries, and protects citizens' value models; without doubt, the demarcation of this value territory is likewise a function of considerations of power whose foundations are laid in the country's general policy. Fluid power within a country's public policy sets the direction of the criminalization and penalization processes in the context of legislative criminal policy, and in this way the influence of power on those processes becomes visible. Power manifests itself in different forms, and its degree of influence over the various aspects of administering a society differs accordingly. Political, military, royal or religious, and media power, among other forms, can, depending on the type and nature of a society's ruling regime, affect the legislative framework and the regime of crimes and punishments for unlawful behavior. It is therefore reasonable to maintain that the legislative system is affected by the context of ruling power. Power, whether obtained through legitimate means or through force and domination over subordinates, ultimately shapes the legislative system and the processes of criminalization and the determination of punishments; recognizing this opens the way to further research into how it affects the institutions responsible for defining crimes and punishments. On this basis, and in order to expose the discourse of power's influence within the hidden layers of countries' legislative policies, which in the hands of governments becomes an instrument for controlling and restraining the very subjects who constitute the discourse of power, the present article addresses the question: how does power influence the processes of criminalization and penalization? Today, the ever-expanding reach of the "government" institution in its many forms has made the influence of economic and military power at the national and international levels more visible. In some cases, the influence of political power in the adoption or rejection of punitive laws is so obvious that the role of expediency is plain to see. Expediency in protecting the interests of a limited number of people, or of a specific group, leads to the adoption of laws that run entirely contrary to the criterion of the "public interest" and makes the character of some laws so transparent that they serve no purpose other than the protection of special group interests.
It should not be forgotten that in such cases the law is passed in the name of protecting the interests of the general public. In many cases, especially in authoritarian systems of government, the public is not at all aware of the mass of laws adopted, and the people's representatives are themselves under the direct influence of economic, military, and media power. The question of how power affects the processes of criminalization and penalization calls for analysis, given the scarcity of research on the topic in the country's scientific literature; we have tried to uncover its hidden layers using an analytical-descriptive method and theoretical sources. Uncovering the role of power in the process of criminalization, and showing that the system of crimes and punishments is itself a function of the foundations of ruling power, constitutes the article's contribution, because it confirms that the type and even the severity of crimes rest on the ruling powers and their beliefs. It is therefore not always the interest of the individual that grounds criminalization; the appeal to the protection of the "public good" is itself a sign of the ascendancy of political power and of its evident influence in shaping the system of crimes and punishments. In other words, in many cases the political governments of countries use the criminalization-penalization system to maintain their power and thereby keep citizens within its orbit.
- Published
- 2023
- Full Text
- View/download PDF
45. An iterative optimization scheme to accommodate inequality constraints in air quality geostatistical estimation of multivariate PM
- Author
-
Maxime Beauchamp and Bertrand Bessagnet
- Subjects
Air quality ,Cokriging ,Optimization ,Penalization ,Multivariate particulate matter ,Chemistry-transport model ,Science (General) ,Q1-390 ,Social sciences (General) ,H1-99 - Abstract
The kriging-based estimation of the different types of atmospheric particulate matter (PM) pollution defined in air quality regulation raises some operational problems, because the (co)kriging equations are obtained by minimizing a linear combination of the estimation variances subject to unbiasedness constraints. As a consequence, the estimation process can result in total PM10 concentrations that are less than the PM2.5 concentrations, which is physically impossible. In a previous publication, it was shown that a convenient external-drift modelling can reduce the number of spatial locations where the inequality constraint is not satisfied, without completely solving the problem. In this work, the formulation of the cokriging system is modified, inspired by previous works focusing on positive kriging. Additional constraints on the cokriging weights are introduced, leading to a unique and optimal solution to the problem of cokriging under inequality constraints between two variables. Some computational and algorithmic details are given. An evaluation of the penalized cokriging is provided using the European PM monitoring sites dataset: maps and performance scores are given to assess the relevance of our iterative optimization scheme.
- Published
- 2023
- Full Text
- View/download PDF
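The general mechanics of accommodating inequality constraints by penalization, as in entry 45 above, can be sketched on a generic quadratic objective. The loop below is a standard quadratic-penalty iteration, not the paper's cokriging system; Q, c, A, and b are stand-ins for a positive-definite objective and linear inequality constraints:

```python
import numpy as np

def penalized_qp(Q, c, A, b, rho0=1.0, growth=10.0, outer=6, tol=1e-8):
    """Minimize 0.5 x'Qx - c'x subject to A x >= b via an iterative
    quadratic-penalty scheme: violated constraints are penalized and
    the penalty weight rho is increased until feasibility is reached."""
    x = np.linalg.solve(Q, c)          # unconstrained minimizer
    rho = rho0
    for _ in range(outer):
        viol = b - A @ x               # positive entries are violations
        active = viol > tol
        if not active.any():
            break                      # all inequality constraints satisfied
        Aa, ba = A[active], b[active]
        # penalized normal equations: (Q + rho Aa'Aa) x = c + rho Aa' ba
        x = np.linalg.solve(Q + rho * Aa.T @ Aa, c + rho * Aa.T @ ba)
        rho *= growth                  # tighten the penalty
    return x
```

In the cokriging setting of the paper, the analogue of x would be the vector of cokriging weights and the inequality would encode PM10 estimates staying above PM2.5 estimates; that mapping is an interpretation of the abstract, not a statement of the authors' equations.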
46. Online adaptive group-wise sparse Penalized Recursive Exponentially Weighted N-way Partial Least Square for epidural intracranial BCI.
- Author
-
Moly, Alexandre, Aksenov, Alexandre, Martel, Félix, and Aksenova, Tetiana
- Subjects
LEAST squares ,ONLINE algorithms ,COMPUTER interfaces ,ROBOTIC exoskeletons ,ONLINE education ,TRANSFER of training - Abstract
Introduction: Motor Brain-Computer Interfaces (BCIs) create new communication pathways between the brain and external effectors for patients with severe motor impairments. Control of complex effectors such as robotic arms or exoskeletons is generally based on the real-time decoding of high-resolution neural signals. However, high-dimensional and noisy brain signals pose challenges, such as limitations in the generalization ability of the decoding model and increased computational demands. Methods: The use of sparse decoders may offer a way to address these challenges. A sparsity-promoting penalization is a common approach to obtaining a sparse solution. BCI features are naturally structured and grouped according to spatial (electrodes), frequency, and temporal dimensions. Applying group-wise sparsity, where the coefficients of a group are set to zero simultaneously, has the potential to decrease computational time and memory usage, as well as to simplify data transfer. Additionally, online closed-loop decoder adaptation (CLDA) is known to be an efficient procedure for BCI decoder training, taking neuronal feedback into account. In this study, we propose a new algorithm for online closed-loop training of group-wise sparse multilinear decoders using Lp-Penalized Recursive Exponentially Weighted N-way Partial Least Square (PREW-NPLS). Three types of sparsity-promoting penalization were explored using Lp with p = 0, 0.5, and 1. Results: The algorithms were tested offline in a pseudo-online manner for features grouped by spatial dimension. A comparison study was conducted using an epidural ECoG dataset recorded from a tetraplegic individual during long-term BCI experiments for controlling a virtual avatar (left/right-hand 3D translation). The novel algorithms showed comparable or better decoding performance than conventional REW-NPLS, achieved with sparse models. The proposed algorithms are compatible with real-time CLDA. Discussion: The proposed algorithm demonstrated good performance while drastically reducing the computational load and memory consumption. However, the current study is limited to offline computation on data recorded from a single patient, with penalization restricted to the spatial domain. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
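The group-wise sparsity ingredient of entry 46 above can be illustrated with the proximal operator of the group (L2,1) penalty, here applied to decoder coefficients grouped by electrode. This is a sketch of the generic operator under assumed array shapes, not the PREW-NPLS recursion itself:

```python
import numpy as np

def group_soft_threshold(W, lam):
    """Prox of lam * sum_g ||W[g]||_2 for coefficients W of shape
    (n_electrodes, n_features): each electrode's row is shrunk as a
    block and zeroed entirely when its norm falls below lam."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)      # one norm per group
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return W * scale                                      # whole rows vanish

# Toy usage: electrodes whose contribution is weak are removed entirely,
# which is what shrinks computation and data transfer in a sparse decoder.
W = np.random.randn(8, 16) * np.array([[2.0]] * 4 + [[0.05]] * 4)
W_sparse = group_soft_threshold(W, lam=1.0)
print((np.linalg.norm(W_sparse, axis=1) == 0).sum(), "electrodes pruned")
```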
47. Penalization approaches in the conditional maximum likelihood and Rasch modelling context.
- Author
-
Gürer, Can and Draxler, Clemens
- Subjects
- *
RASCH models , *PUNISHMENT , *SAMPLE size (Statistics) , *POSSIBILITY - Abstract
Recent detection methods for Differential Item Functioning (DIF) include approaches like Rasch Trees, DIFlasso, GPCMlasso and Item Focussed Trees, all of which, in contrast to well-established methods, can handle metric covariates inducing DIF. A new estimation method shall address their downsides by combining three central virtues: the use of the conditional likelihood for estimation, the incorporation of linear influence of metric covariates on item difficulty, and the possibility to detect different DIF types: certain items showing DIF, certain covariates inducing DIF, or certain covariates inducing DIF in certain items. Each of the approaches mentioned lacks two of these aspects. We introduce a method for DIF detection which, firstly, utilizes the conditional likelihood for estimation combined with group-lasso penalization for item or variable selection and L1-penalization for interaction selection; secondly, incorporates linear effects instead of approximation through step functions; and thirdly, provides the possibility to investigate any of the three DIF types. The method is described theoretically, and challenges in implementation are discussed. A dataset is analysed for all DIF types and shows comparable results between methods. Simulation studies per DIF type reveal competitive performance of cmlDIFlasso, particularly when selecting interactions in the case of large sample sizes and numbers of parameters. Coupled with low computation times, cmlDIFlasso seems a worthwhile option for applied DIF detection. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
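The penalized objective described in entry 47 above plausibly has the following shape; this is a reconstruction from the abstract in generic notation, not the authors' exact formulation:

```latex
% ell_cond: conditional log-likelihood (Rasch model, sum scores conditioned
% out); delta_g: group of DIF parameters for item or covariate g, selected
% as a block by the group-lasso term; gamma_jk: item-by-covariate DIF
% interactions, selected individually by the L1 term.
\[
  \hat{\theta} \;=\; \arg\max_{\theta}\;
  \ell_{\mathrm{cond}}(\theta)
  \;-\; \lambda_1 \sum_{g} \lVert \delta_g \rVert_2
  \;-\; \lambda_2 \sum_{j,k} \lvert \gamma_{jk} \rvert
\]
```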
48. Regularization approaches in clinical biostatistics: A review of methods and their applications.
- Author
-
Friedrich, Sarah, Groll, Andreas, Ickstadt, Katja, Kneib, Thomas, Pauly, Markus, Rahnenführer, Jörg, and Friede, Tim
- Subjects
- *
TIKHONOV regularization , *RANDOM effects model , *BIOMETRY , *OPTIMAL stopping (Mathematical statistics) , *COMPUTERS in education , *DATA science - Abstract
A range of regularization approaches have been proposed in the data sciences to overcome overfitting, to exploit sparsity, or to improve prediction. Using a broad definition of regularization, namely controlling model complexity by adding information in order to solve ill-posed problems or to prevent overfitting, we review a range of approaches within this framework, including penalization, early stopping, ensembling, and model averaging. Aspects of their practical implementation are discussed, including available R packages, and examples are provided. To assess the extent to which these approaches are used in medicine, we conducted a review of three general medical journals. It revealed that regularization approaches are rarely applied in practical clinical settings, with the exception of random effects models. Hence, we suggest a more frequent use of regularization approaches in medical research. In situations where other approaches also work well, the only downside of the regularization approaches is the increased complexity in conducting the analyses, which can pose challenges in terms of computational resources and expertise on the side of the data analyst. In our view, both can and should be overcome by investments in appropriate computing facilities and educational resources. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
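As a concrete instance of the penalization family reviewed in entry 48 above, an L1-penalized regression with a cross-validated penalty weight takes only a few lines; the scikit-learn sketch below is illustrative and not drawn from the review's own examples:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

# Simulated high-dimensional data: many candidate predictors, few informative.
X, y = make_regression(n_samples=100, n_features=200, n_informative=5,
                       noise=5.0, random_state=0)

# LassoCV picks the penalty weight alpha by cross-validation, controlling
# model complexity in exactly the sense of regularization discussed above.
model = LassoCV(cv=5, random_state=0).fit(X, y)
print("chosen alpha:", model.alpha_)
print("nonzero coefficients:", np.sum(model.coef_ != 0))
```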
49. Exploring dimension learning via a penalized probabilistic principal component analysis.
- Author
-
Deng, Wei Q. and Craiu, Radu V.
- Subjects
- *
PRINCIPAL components analysis , *CONSTRAINED optimization , *LEARNING strategies , *GENE expression - Abstract
Establishing a low-dimensional representation of the data leads to efficient data-learning strategies. In many cases, the reduced dimension needs to be explicitly stated and estimated from the data. We explore the estimation of dimension in finite samples as a constrained optimization problem, where the estimated dimension is a maximizer of a penalized profile likelihood criterion within the framework of probabilistic principal components analysis. Unlike other penalized maximization problems that require an 'optimal' penalty tuning parameter, we propose a data-averaging procedure whereby the estimated dimension emerges as the most favourable choice over a range of plausible penalty parameters. The proposed heuristic is compared to a large number of alternative criteria in simulations and in an application to gene expression data. Extensive simulation studies reveal that none of the methods uniformly dominates the others and highlight the importance of subject-specific knowledge in choosing statistical methods for dimension learning. Our application results also suggest that gene expression data have a higher intrinsic dimension than previously thought. Overall, our proposed heuristic strikes a good balance and is the method of choice when model assumptions deviate moderately. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
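The data-averaging idea of entry 49 above can be sketched as follows: compute the PPCA profile log-likelihood from the sample eigenvalues, penalize it over a grid of penalty weights, and return the dimension chosen most often. The penalty form alpha * k * log(n) below is an assumption made for illustration, not the paper's criterion:

```python
import numpy as np

def ppca_profile_loglik(evals, k, n):
    """Profile log-likelihood of a PPCA model with k components, given
    eigenvalues `evals` (descending, all positive; assumes n > d) of the
    sample covariance; the noise variance is profiled out at its MLE."""
    d = len(evals)
    sigma2 = evals[k:].mean()                 # ML noise variance
    return -0.5 * n * (np.sum(np.log(evals[:k])) + (d - k) * np.log(sigma2))

def estimate_dim(X, alphas=np.linspace(0.5, 5.0, 20)):
    """Pick the dimension that wins most often across a grid of penalty
    weights alpha: a sketch of the data-averaging heuristic."""
    n, d = X.shape
    evals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]
    ks = np.arange(1, d)                      # candidate dimensions
    votes = []
    for a in alphas:
        scores = [ppca_profile_loglik(evals, k, n) - a * k * np.log(n)
                  for k in ks]
        votes.append(ks[int(np.argmax(scores))])
    return np.bincount(votes).argmax()        # most favoured dimension
```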
50. Distributed Sparse Regression via Penalization.
- Author
-
Yao Ji, Scutari, Gesualdo, Ying Sun, and Honnappa, Harsha
- Subjects
- *
STATISTICAL errors , *UNDIRECTED graphs , *DISTRIBUTED algorithms , *SAMPLE size (Statistics) , *DILEMMA - Abstract
We study sparse linear regression over a network of agents, modeled as an undirected graph with no centralized node. The estimation problem is formulated as the minimization of the sum of the local LASSO loss functions plus a quadratic penalty on the consensus constraint, the latter being instrumental in obtaining distributed solution methods. While penalty-based consensus methods have been extensively studied in the optimization literature, their statistical and computational guarantees in the high-dimensional setting remain unclear. This work provides an answer to this open problem. Our contribution is two-fold. First, we establish statistical consistency of the estimator: under a suitable choice of the penalty parameter, the optimal solution of the penalized problem achieves the near-optimal minimax rate O(s log d/N) in l2-loss, where s is the sparsity value, d is the ambient dimension, and N is the total sample size in the network; this matches centralized sample rates. Second, we show that the proximal-gradient algorithm applied to the penalized problem, which naturally leads to distributed implementations, converges linearly up to a tolerance of the order of the centralized statistical error; the rate scales as O(d), revealing an unavoidable speed-accuracy dilemma. Numerical results demonstrate the tightness of the derived sample-rate and convergence-rate scalings. [ABSTRACT FROM AUTHOR]
- Published
- 2023
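The penalized consensus formulation of entry 50 above lends itself to a compact proximal-gradient sketch: each agent takes a gradient step on its local least-squares loss plus the quadratic consensus penalty, whose gradient involves the graph Laplacian, and then applies the LASSO soft-threshold. The toy numpy version below is a hedged illustration with made-up parameters, not the paper's algorithm or tuning:

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding: prox of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def distributed_lasso(As, bs, L, lam=0.1, gamma=1.0, step=1e-2, iters=2000):
    """Proximal gradient on the penalized consensus formulation: agent i
    holds (A_i, b_i) and a local copy x_i; the quadratic penalty couples
    neighbours through the graph Laplacian L, so each update needs only
    neighbour exchanges and has a natural distributed implementation."""
    m, d = len(As), As[0].shape[1]
    X = np.zeros((m, d))                       # row i = agent i's estimate
    for _ in range(iters):
        grad = np.stack([A.T @ (A @ x - b) / len(b)
                         for A, x, b in zip(As, X, bs)])
        grad += gamma * (L @ X)                # consensus-penalty gradient
        X = soft(X - step * grad, step * lam)  # local LASSO prox
    return X

# Toy run: 4 agents on a ring graph estimating a common sparse signal.
rng = np.random.default_rng(0)
x_true = np.zeros(20); x_true[:3] = 1.0
As = [rng.standard_normal((15, 20)) for _ in range(4)]
bs = [A @ x_true + 0.01 * rng.standard_normal(15) for A in As]
L = 2 * np.eye(4) - np.roll(np.eye(4), 1, 0) - np.roll(np.eye(4), -1, 0)
X = distributed_lasso(As, bs, L)
print("max disagreement across agents:", np.abs(X - X.mean(0)).max())
```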