Search Results
2,241 results for "extrapolation"
2. A new map-polynomial fitting extrapolation method of data in the low scattering vector region for dilute polydisperse spherical systems in SAXS
- Author
- Chen, Rongchao, Li, Zhihong, and He, Jianhua
- Published
- 2025
- Full Text
- View/download PDF
3. Latency correction in sparse neuronal spike trains with overlapping global events
- Author
- Mariani, Arturo, Senocrate, Federico, Mikiel-Hunter, Jason, McAlpine, David, Beiderbeck, Barbara, Pecka, Michael, Lin, Kevin, and Kreuz, Thomas
- Published
- 2025
- Full Text
- View/download PDF
4. Optimal policy for behavioral financial crises
- Author
- Fontanier, Paul
- Published
- 2025
- Full Text
- View/download PDF
5. Seismic data extrapolation based on multi-scale dynamic time warping
- Author
- Li, Jie-Li, Huang, Wei-Lin, and Zhang, Rui-Xiang
- Published
- 2024
- Full Text
- View/download PDF
6. The need for standardization in ecological modeling for decision support: Lessons from ecological risk assessment
- Author
- Forbes, Valery E.
- Published
- 2024
- Full Text
- View/download PDF
7. Choice of Gaussian Process kernels used in LSG models for flood inundation predictions
- Author
- Lu, Jiabo, Wang, Quan J., Fraehr, Niels, Xiang, Xiaohua, and Wu, Xiaoling
- Published
- 2025
- Full Text
- View/download PDF
8. Unconditionally energy stable and second-order accurate one-parameter ESAV schemes with non-uniform time stepsizes for the functionalized Cahn-Hilliard equation.
- Author
- Tan, Zengqiang
- Subjects
- SEPARATION of variables, EXPONENTIAL stability, ENERGY consumption, EXTRAPOLATION, EQUATIONS
- Abstract
This paper studies linear and unconditionally energy stable schemes for the functionalized Cahn-Hilliard (FCH) equation. Such schemes are built on the exponential scalar auxiliary variable (ESAV) approach and the one-parameter time discretizations as well as the extrapolation for the nonlinear term, and can arrive at second-order accuracy in time. It is shown that the derived schemes are uniquely solvable and unconditionally energy stable by using an algebraic identity derived by the method of undetermined coefficients. Importantly, such one-parameter ESAV schemes are extended to those with non-uniform time stepsizes, which are also shown to be unconditionally energy stable by an analogous algebraic identity. The energy stability results can be easily extended to the fully discrete schemes, where the Fourier pseudo-spectral method is employed in space. Moreover, based on the derived schemes with non-uniform time stepsizes, an adaptive time-stepping strategy is introduced to improve the computational efficiency for the long time simulations of the FCH equation. Several numerical examples are conducted to validate the computational accuracy and energy stability of our schemes as well as the effectiveness and computational efficiency of the derived adaptive time-stepping algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
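The extrapolation device in entry 8 generalizes: semi-implicit two-step schemes reach second-order accuracy in time by evaluating the nonlinear term at the extrapolated state 2uⁿ − uⁿ⁻¹ instead of the unknown uⁿ⁺¹, so each step stays linear. Below is a minimal sketch of that idea on an assumed toy problem (uₜ = uₓₓ − f(u), periodic, backward Euler diffusion); it is not the FCH equation or the paper's ESAV scheme, and every numerical value is illustrative.

```python
import numpy as np

# Second-order extrapolation of a nonlinear term: f(u^{n+1}) is replaced by
# f(2u^n - u^{n-1}), accurate to O(dt^2), so the update is linear in u^{n+1}.
# Toy problem (an assumption, not the FCH equation): u_t = u_xx - f(u), periodic.

n, dt, steps = 128, 1e-3, 200
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
f = lambda u: u**3 - u                       # double-well nonlinearity

# Periodic second-difference matrix for the implicit diffusion term.
I = np.eye(n)
L = (np.roll(I, 1, 0) - 2 * I + np.roll(I, -1, 0)) / dx**2
A = I - dt * L                               # (I - dt*Lap) u^{n+1} = rhs

u_prev = np.cos(x)                           # u^{n-1}
u_curr = u_prev.copy()                       # u^n (constant start)
for _ in range(steps):
    u_star = 2 * u_curr - u_prev             # extrapolated state
    rhs = u_curr - dt * f(u_star)            # nonlinear term treated explicitly
    u_prev, u_curr = u_curr, np.linalg.solve(A, rhs)

print("max |u| after", steps, "steps:", float(np.abs(u_curr).max()))
```

The time stepping here is only first-order backward Euler for brevity; the paper pairs the same extrapolation with genuinely second-order one-parameter formulas.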
9. Combined high order compact schemes for non-self-adjoint nonlinear Schrödinger equations.
- Author
- Kong, Linghua, Ouyang, Songpei, Gao, Rong, and Liang, Haiyan
- Subjects
- NONLINEAR Schrodinger equation, MATRIX multiplications, TWO-dimensional bar codes, EXTRAPOLATION, BANDWIDTHS
- Abstract
Some combined high order compact (CHOC) schemes are proposed for the non-self-adjoint and nonlinear Schrödinger equation (NSANLSE). The NSANLSE contains first order and second order spatial derivatives $\overline{u_x}$ and $u_{xx}$. If one uses classical high order compact schemes to approximate $u_{xx}$ and $\overline{u_x}$ separately, the bandwidth widens in practical coding due to matrix multiplication, which partly counteracts the advantages of high order compactness. To overcome this deficiency, the spatial derivatives are solved simultaneously by combining them; that is, $(u_x)_j^n$ and $(u_{xx})_j^n$ are solved simultaneously in terms of $u_j$. The idea is applied to discretize the NSANLSE in space, and two efficient numerical schemes are proposed. The stability and convergence of the new schemes are analyzed theoretically, and numerical experiments are reported to verify them. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
10. An inertial hybrid DFPM-based algorithm for constrained nonlinear equations with applications.
- Author
- Ma, Guodong, Zhang, Wei, Jian, Jinbao, Huang, Zefeng, and Mo, Jingyi
- Subjects
- NONLINEAR equations, LIPSCHITZ continuity, COMPRESSED sensing, MATHEMATICAL optimization, EXTRAPOLATION, CONJUGATE gradient methods
- Abstract
The derivative-free projection method (DFPM) is an effective and classic approach for solving systems of nonlinear monotone equations with convex constraints, but the global convergence or convergence rate of the DFPM is typically analyzed under Lipschitz continuity. This observation motivates us to propose an inertial hybrid DFPM-based algorithm, which incorporates a modified conjugate parameter via a hybridized technique, to weaken the convergence assumption. By integrating an improved inertial extrapolation step and a restart procedure into the search direction, the resulting direction satisfies the sufficient descent and trust region properties independently of the line search choice. Under weaker conditions, we establish the global convergence and Q-linear convergence rate of the proposed algorithm. To the best of our knowledge, this is the first analysis of the Q-linear convergence rate under the condition that the mapping is locally Lipschitz continuous. Finally, applying Bayesian hyperparameter optimization, a series of numerical experiments demonstrates that the new algorithm has advantages in solving nonlinear monotone equation systems with convex constraints and in handling compressed sensing problems. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
11. A derivative-free projection method with double inertial effects for solving nonlinear equations.
- Author
- Ibrahim, Abdulkarim Hassan and Al-Homidan, Suliman
- Subjects
- NONLINEAR equations, EXTRAPOLATION
- Abstract
Recent research has highlighted the significant performance of multi-step inertial extrapolation in a wide range of algorithmic applications. This paper introduces a derivative-free projection method (DFPM) with a double-inertial extrapolation step for solving large-scale systems of nonlinear equations. The proposed method's global convergence is established under the assumption that the underlying mapping is Lipschitz continuous and satisfies a certain generalized monotonicity assumption (e.g., it can be pseudo-monotone). This is the first convergence result for a DFPM with double inertial step to solve nonlinear equations. Numerical experiments are conducted using well-known test problems to show the proposed method's effectiveness and robustness compared to two existing methods in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
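The double-inertial step named in entry 11 is a two-memory extrapolation: before each projection, the iterate is pushed along the two previous displacements. A hedged sketch follows, built on a classical hyperplane-projection template with an assumed monotone mapping F(x) = x + sin(x) over the nonnegative orthant; the inertial weights and line-search constants are illustrative, not the paper's tuning.

```python
import numpy as np

# Derivative-free projection with a double-inertial extrapolation step:
# w_k leans on the last two displacements before the projection machinery.
# F, the constraint set C, and all constants are illustrative assumptions.

def P_C(x):                      # projection onto C = nonnegative orthant
    return np.maximum(x, 0.0)

def F(x):                        # monotone: Jacobian I + diag(cos x) is PSD
    return x + np.sin(x)

alpha, beta = 0.3, -0.1          # inertial weights (assumed)
sigma, rho = 1e-4, 0.5           # line-search constants (assumed)

x_pprev = x_prev = x = np.ones(50)
for k in range(200):
    w = x + alpha * (x - x_prev) + beta * (x_prev - x_pprev)  # double inertia
    d = -F(w)                                                 # derivative-free direction
    t = 1.0
    while -F(w + t * d) @ d < sigma * t * (d @ d):            # backtracking search
        t *= rho
    z = w + t * d
    Fz = F(z)
    if Fz @ Fz == 0.0:                                        # z solves F(x) = 0
        x = z
        break
    x_pprev, x_prev = x_prev, x
    # Project w onto the hyperplane separating the solutions, then onto C.
    x = P_C(w - (Fz @ (w - z)) / (Fz @ Fz) * Fz)
    if np.linalg.norm(F(x)) < 1e-8:
        break

print(f"stopped at iteration {k}, residual {np.linalg.norm(F(x)):.2e}")
```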
12. Convergence analysis of higher-order approximation of singularly perturbed 2D semilinear parabolic PDEs with non-homogeneous boundary conditions.
- Author
- Yadav, Narendra Singh and Mukherjee, Kaushik
- Subjects
- NONLINEAR equations, BOUNDARY layer (Aerodynamics), FINITE differences, EXTRAPOLATION, TRANSPORT equation
- Abstract
This article focuses on developing and analyzing an efficient higher-order numerical approximation of singularly perturbed two-dimensional semilinear parabolic convection-diffusion problems with time-dependent boundary conditions. We approximate the governing nonlinear problem by an implicit fitted mesh method (FMM), which combines an alternating direction implicit scheme in the temporal direction together with a higher-order finite difference scheme in the spatial directions. Since the solution possesses exponential boundary layers, a Cartesian product of piecewise-uniform Shishkin meshes is used to discretize in space. To begin our analysis, we establish the stability corresponding to the continuous nonlinear problem, and obtain a priori bounds for the solution derivatives. Thereafter, we pursue the stability analysis of the discrete problem, and prove ε-uniform convergence in the maximum-norm. Next, for enhancement of the temporal accuracy, we use the Richardson extrapolation technique solely in the temporal direction. In addition, we investigate the order reduction phenomenon naturally occurring due to the time-dependent boundary data and propose a suitable approximation to tackle this effect. Finally, we present the computational results to validate the theoretical estimates. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
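Temporal Richardson extrapolation, used in entry 12 to lift the time accuracy, has a compact generic form: if a scheme's error is C·τᵖ + o(τᵖ), the combination (2ᵖ·u_{τ/2} − u_τ)/(2ᵖ − 1) cancels the leading term. A minimal sketch on an assumed scalar test problem (backward Euler, p = 1), not the paper's ADI scheme on Shishkin meshes:

```python
import numpy as np

# Richardson extrapolation in time: combine solutions with steps tau and
# tau/2 to cancel the leading O(tau^p) error. Demo on u' = -u, u(0) = 1,
# integrated to T = 1 with backward Euler (p = 1); exact value is e^{-1}.

def backward_euler(tau, T=1.0):
    u, n = 1.0, round(T / tau)
    for _ in range(n):
        u = u / (1.0 + tau)          # implicit step: u_new = u - tau*u_new
    return u

tau, p = 0.1, 1
u_h = backward_euler(tau)
u_h2 = backward_euler(tau / 2)
u_rich = (2**p * u_h2 - u_h) / (2**p - 1)

exact = np.exp(-1.0)
print(f"error with step tau   : {abs(u_h - exact):.2e}")
print(f"error after Richardson: {abs(u_rich - exact):.2e}")   # ~O(tau^2)
```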
13. Unconditionally energy stable high-order BDF schemes for the molecular beam epitaxial model without slope selection.
- Author
- Kang, Yuanyuan, Wang, Jindi, and Yang, Yin
- Subjects
- MOLECULAR beams, ENERGY dissipation, ORDER picking systems, EXTRAPOLATION
- Abstract
In this paper, we consider a class of k-order (3 ≤ k ≤ 5) backward differentiation formulas (BDF-k) for the molecular beam epitaxial (MBE) model without slope selection. A convex splitting technique, along with a k-th order Douglas-Dupont regularization term $\tau_n^k(-\Delta)^k D_k\phi^n$ (where $D_k$ denotes a truncated BDF-k formula), is added to the numerical schemes to ensure unconditional energy stability. The stabilized convex splitting BDF-k (3 ≤ k ≤ 5) methods are unconditionally uniquely solvable. The modified discrete energy dissipation laws are then established by using the discrete gradient structures of the BDF-k (3 ≤ k ≤ 5) formulas and k-th order explicit extrapolations of the concave term. In addition, based on the discrete energy technique, the $L^2$ norm stability and convergence of the stabilized BDF-k (3 ≤ k ≤ 5) schemes are obtained by means of discrete orthogonal convolution kernels and convolution-type Young inequalities. Numerical results verify our theory and illustrate the validity of the proposed schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Balancing validity and reliability as a function of sampling variability in forensic voice comparison.
- Author
- Wang, Bruce Xiao and Hughes, Vincent
- Subjects
- SCIENCE journalism, FORENSIC scientists, FORENSIC sciences, EXTRAPOLATION, CONSULTANTS
- Abstract
• Three generations of ASR systems evaluated for validity and reliability. • Advanced systems yield better validity but not necessarily higher reliability. • Forensic ASR system validation should focus on both discrimination and reliability. • Forensic scientists need to develop measurement processes. • Forensic scientists need to establish tolerable variation. In forensic comparison sciences, experts are required to compare samples of known and unknown origin to evaluate the strength of the evidence assuming they came from the same and different sources. The application of valid (whether the method measures what it is intended to) and reliable (whether that method produces consistent results) forensic methods is required across many jurisdictions, such as the England & Wales Criminal Practice Directions 19A and the UK Crown Prosecution Service, and is highlighted in the 2009 National Academy of Sciences report and by the President's Council of Advisors on Science and Technology in 2016. The current study uses simulation to examine the effect of the number of speakers and of sampling variability on the evaluation of validity and reliability using different generations of automatic speaker recognition (ASR) systems in forensic voice comparison (FVC). The results show that the state-of-the-art system had better overall validity compared with less advanced systems. However, better validity does not necessarily lead to high reliability, and very often the opposite is true. Better system validity and higher discriminability have the potential of leading to a higher degree of uncertainty and inconsistency in the output (i.e. poorer reliability). This is particularly the case when dealing with small numbers of speakers, where the observed data do not adequately support density estimation, resulting in extrapolation, as is commonly expected in FVC casework. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Performance extrapolation of an ultraviolet-based photocatalytic air purifier against near-ambient-level formaldehyde.
- Author
- Ha, Seung-Ho, Szulejko, Jan E., Ahmadi, Younes, Shin, Hye-Jin, and Kim, Ki-Hyun
- Subjects
- VOLATILE organic compounds, FORMALDEHYDE, TITANIUM dioxide, EXTRAPOLATION, HONEYCOMB structures
- Abstract
• The feasibility of a commercial air purifier (AP)-based filtration system was assessed. • Removal kinetics of formaldehyde were monitored using a near real-time sensor. • The photocatalytic performance of the AP was compared between formulated and commercial filters. • A faster decrease in formaldehyde concentration was observed with the adsorption/catalytic dual system. • Clean air delivery rate was used as a key metric for performance evaluation. The practical utility of an air purifier (AP) with built-in adsorbent/catalyst-based filtration systems is assessed for the treatment of indoor volatile organic compounds (VOCs) through extrapolation of its performance based on a lab-scale chamber study. The feasibility of a commercial prototype AP unit with TiO₂-based filters is tested in this work against 0.5–5 ppm formaldehyde (FA) in a 17 L chamber with an air recirculation rate of 565 h⁻¹ under the control of key process variables (e.g., initial FA concentration, flow rate, and dark/light conditions). The performance of the AP unit is evaluated by the clean air delivery rate (CADR), quantum yield, and space time yield in diverse operation settings (e.g., photocatalysis only or along with adsorption). The effects of ultraviolet (UV) irradiation are evident as the CADR value for 5 ppm FA sharply increases from 5.67 (UV-off) to 15 L/min (UV-on); however, the CADR values increase only slightly (12 to 15 L/min) as the recirculation flow rate changes from 100 to 160 L/min. Further, the FA concentration vs. time relationship exhibits an apparent bimodal exponential decay, with the fastest and near 100% removal at [FA] < 0.5 ppm. The performance of the proposed AP platform in a relatively large real-world volume (e.g., 4 m³, 2.4 h⁻¹ air recirculation rate) is further estimated through extrapolation to offer valuable guidelines for the construction of AP systems in real-world applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
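The clean air delivery rate quoted in entry 15 comes from first-order decay kinetics: fit C(t) = C₀e^{−kt} with the device on and off, then CADR = V(k_on − k_off). A sketch with synthetic data; the 17 L chamber volume echoes the abstract, but the decay constants and noise level are invented.

```python
import numpy as np

# CADR from a chamber decay test: fit first-order decay constants with the
# purifier on and off, then CADR = V * (k_on - k_off). Synthetic data only.

V = 17.0                                   # chamber volume [L]
t = np.linspace(0.0, 30.0, 31)             # time [min]
rng = np.random.default_rng(0)

def trace(k, c0=5.0):                      # noisy exponential decay [ppm]
    return c0 * np.exp(-k * t) * (1 + 0.01 * rng.standard_normal(t.size))

def fit_k(c):                              # decay constant from log-linear fit
    return -np.polyfit(t, np.log(c), 1)[0]

k_on = fit_k(trace(0.90))                  # purifier + UV on (assumed)
k_off = fit_k(trace(0.02))                 # natural losses only (assumed)
print(f"CADR ≈ {V * (k_on - k_off):.1f} L/min")
```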
16. A reduced-dimension extrapolation two-grid Crank-Nicolson finite element method of unknown solution coefficient vectors for spatial fractional nonlinear Allen-Cahn equations.
- Author
- Li, Huanrong, Li, Yuejie, Zeng, Yihui, and Luo, Zhendong
- Subjects
- FINITE element method, NONLINEAR equations, EXTRAPOLATION, PROPER orthogonal decomposition
- Abstract
This paper mainly focuses on the dimensionality reduction of unknown finite element (FE) solution coefficient vectors in two-grid Crank-Nicolson FE (TGCNFE) method for the spatial fractional nonlinear Allen-Cahn (SFNAC) equations. For this reason, a new TGCNFE method for the SFNAC equations is first established and the unconditional stability and convergence (errors) of TGCNFE solutions are demonstrated. Subsequently, the most important thing is to use a proper orthogonal decomposition to reduce the dimension of the unknown FE solution coefficient vectors of the TGCNFE method for the SFNAC equations and to construct a new reduced-dimension extrapolation TGCNFE (RDETGCNFE) method, and demonstrate the unconditional stability and errors of RDETGCNFE solutions. Finally, the correctness of the obtained theoretical results of unconditional stability and errors is validated by some numerical experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Extrapolation methods as nonlinear Krylov subspace methods.
- Author
- McCoid, Conor and Gander, Martin J.
- Subjects
- EXTRAPOLATION, KRYLOV subspace, QUASI-Newton methods, EQUATIONS
- Abstract
When applied to linear vector sequences, extrapolation methods are equivalent to Krylov subspace methods. Both types of methods can be expressed as particular cases of the multisecant equations, the secant method generalized to higher dimensions. Through these equations, there is also equivalence with a variety of quasi-Newton methods. This paper presents a framework to connect these various methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
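The equivalence stated in entry 17 can be made concrete with minimal polynomial extrapolation (MPE): fed the iterates of a linear fixed-point iteration x_{m+1} = Mx_m + b, MPE solves a small least-squares problem for the minimal-polynomial coefficients and recovers the fixed point, which is the Krylov-type answer. A sketch with an assumed random contraction:

```python
import numpy as np

# Minimal polynomial extrapolation (MPE) on a linear vector sequence.
# With enough iterates of x_{m+1} = M x_m + b, the weighted combination
# below recovers the fixed point (I - M)^{-1} b, the Krylov-type answer.

rng = np.random.default_rng(1)
n = 8
M = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)   # contraction (assumed)
b = rng.standard_normal(n)
x_star = np.linalg.solve(np.eye(n) - M, b)           # exact fixed point

k = n                                                # differences used
xs = [np.zeros(n)]
for _ in range(k + 1):
    xs.append(M @ xs[-1] + b)                        # x_0 ... x_{k+1}

U = np.column_stack([xs[j + 1] - xs[j] for j in range(k)])   # u_0..u_{k-1}
u_k = xs[k + 1] - xs[k]
c, *_ = np.linalg.lstsq(U, -u_k, rcond=None)         # minimal-poly coefficients
gamma = np.append(c, 1.0)
w = gamma / gamma.sum()                              # extrapolation weights
s = sum(wj * xj for wj, xj in zip(w, xs[: k + 1]))

print("plain iterate error:", np.linalg.norm(xs[-1] - x_star))
print("MPE estimate error :", np.linalg.norm(s - x_star))
```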
18. SGML: A Python library for solution-guided machine learning
- Author
- Wang, Ruijin, Du, Yuchen, Dai, Chunchun, Deng, Yang, Leng, Jiantao, and Chang, Tienchong
- Published
- 2025
- Full Text
- View/download PDF
19. Assessing the Performance of Alternative Methods for Estimating Long-Term Survival Benefit of Immuno-oncology Therapies.
- Author
- Monnickendam, Giles
- Subjects
- TECHNOLOGY assessment, SURVIVAL analysis (Biometry), SURVIVAL rate, IMMUNE checkpoint inhibitors, MEDICAL technology, PARAMETRIC modeling, EXTRAPOLATION
- Abstract
This study aimed to determine the accuracy and consistency of established methods of extrapolating mean survival for immuno-oncology (IO) therapies, the extent of any systematic biases in estimating long-term clinical benefit, what influences the magnitude of any bias, and the potential implications for health technology assessment. A targeted literature search was conducted to identify published long-term follow-up from clinical trials of immune-checkpoint inhibitors. Earlier published results were identified and Kaplan-Meier estimates for short- and long-term follow-up were digitized and converted to pseudo–individual patient data using an established algorithm. Six standard parametric, 5 flexible parametric, and 2 mixture-cure models (MCMs) were used to extrapolate long-term survival. Mean and restricted mean survival time (RMST) were estimated and compared between short- and long-term follow-up. Predicted RMST from extrapolation of early data underestimated observed RMST in long-term follow-up for 184 of 271 extrapolations. All models except the MCMs frequently underestimated observed RMST. Mean survival estimates increased with longer follow-up in 196 of 270 extrapolations. The increase exceeded 20% in 122 extrapolations. Log-logistic and log-normal models showed the smallest change with additional follow-up. MCM performance varied substantially with functional form. Standard and flexible parametric models frequently underestimate mean survival for IO treatments. Log-logistic and log-normal models may be the most pragmatic and parsimonious solutions for estimating IO mean survival from immature data. Flexible parametric models may be preferred when the data used in health technology assessment are more mature. MCMs fitted to immature data produce unreliable results and are not recommended. • Treatment with immune-checkpoint inhibitors can result in a proportion of patients achieving durable response, generating a long tail to the survival distribution. Conventional extrapolation methods applied to short-term data may not perform well in estimating mean survival in these circumstances. • Both standard parametric and spline-based flexible parametric models (FPMs) tend to underestimate survival benefit for immuno-oncology therapies. Performance seems to worsen as the proportion of durable responders increases. More flexible models perform better than restrictive models, but with some risk of overfitting to short-term data. Mixture-cure models fitted to early data cuts produce unreliable results and may substantially overestimate mean survival. • Log-logistic and log-normal models may be the most pragmatic and parsimonious solutions for estimating mean survival from short-term data when the proportion of durable responders is expected to be low. FPMs using odds or normal scale may be better options when the data used in health technology assessment are more mature. When the expected proportion of durable responders is high, neither standard nor FPMs are expected to perform well and methods integrating external information should be considered. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
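Entry 19's headline metric, restricted mean survival time, is simply the area under the survival curve up to a horizon τ, so the tail assumption drives it directly. A hedged sketch comparing RMST under two parametric tails; the distributions and parameters are assumed for illustration, not fitted to any trial.

```python
from scipy.integrate import quad
from scipy.stats import weibull_min, lognorm

# RMST(tau) = integral of S(t) from 0 to tau. A heavier right tail (here
# log-normal vs Weibull) yields a larger restricted mean survival time,
# which is why tail choice dominates immuno-oncology extrapolations.

def rmst(surv, tau):
    area, _ = quad(surv, 0.0, tau)
    return area

tau = 60.0                                   # horizon [months]
weib = weibull_min(c=1.2, scale=24.0)        # standard parametric shape (assumed)
logn = lognorm(s=1.1, scale=20.0)            # heavier tail (assumed)

print(f"Weibull    RMST to {tau:.0f} mo: {rmst(weib.sf, tau):.1f}")
print(f"log-normal RMST to {tau:.0f} mo: {rmst(logn.sf, tau):.1f}")
```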
20. Error analysis of vector penalty-projection method with second order accuracy for incompressible magnetohydrodynamic system.
- Author
- Du, Zijun, Su, Haiyan, and Feng, Xinlong
- Subjects
- VECTOR analysis, MAGNETIC fields, FINITE element method, EXTRAPOLATION
- Abstract
The article introduces a new method, called the second-order vector penalty-projection (VPP) method, for solving the incompressible magnetohydrodynamic (MHD) equations. The method utilizes both semi-implicit and extrapolation techniques to handle the nonlinear terms in the hydrodynamic and magnetic equations. Compared to the standard projection method, the VPP method offers a second-order rate of accuracy for pressure convergence and enables the velocity and magnetic fields to approximately satisfy the divergence-free conditions. Through rigorous analysis, we demonstrate that our method exhibits stability and optimal error estimates in its semi-discrete form. Furthermore, we confirm the precision and stability of our method through numerical simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Extrapolating animal consciousness.
- Author
- Baetu, Tudor M.
- Subjects
- CONSCIOUSNESS, EXTRAPOLATION
- Abstract
I argue that the question of animal consciousness is an extrapolation problem and, as such, is best tackled by deploying currently accepted methodology for validating experimental models of a phenomenon of interest. This methodology relies on an assessment of similarities and dissimilarities between experimental models, the partial replication of findings across complementary models, and evidence from the successes and failures of explanations, technologies and medical applications developed by extrapolating and aggregating findings from multiple models. Crucially important, this methodology does not require a commitment to any particular theory or construct of consciousness, thus avoiding theory-biased reinterpretations of empirical findings rampant in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Option pricing under multifactor Black–Scholes model using orthogonal spline wavelets.
- Author
- Černá, Dana and Fiňková, Kateřina
- Subjects
- BLACK-Scholes model, NUMERICAL solutions to equations, SPLINES, CRANK-Nicolson method, PRICES, PARTIAL differential equations, SPLINE theory, EXTRAPOLATION
- Abstract
The paper focuses on pricing European-style options on multiple underlying assets under the Black–Scholes model represented by a nonstationary partial differential equation. The numerical solution of such equations is challenging in dimensions exceeding three, primarily due to the so-called curse of dimensionality. The main contribution of the paper is the design and analysis of the method based on combining the sparse wavelet-Galerkin method and the Crank–Nicolson scheme with Rannacher time-stepping enhanced by Richardson extrapolation, which helps overcome the curse of dimensionality. The next contribution is constructing a new orthogonal cubic spline wavelet basis on the interval and a sparse tensor product wavelet basis on the unit cube, which is suitable for the proposed method. The resulting method brings the following important advantages. The method is higher-order convergent with respect to both temporal and spatial variables, and the number of basis functions is significantly reduced compared to a full grid. Furthermore, many matrices involved in the computation are identity matrices, which results in a considerable simplification of the algorithm. Moreover, we prove that the condition numbers of discretization matrices are uniformly bounded and do not depend on the dimension, even without preconditioning, which leads to a small number of iterations when solving the resulting linear system. Numerical experiments are presented for several types of European-style options. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Richardson extrapolation and strain energy based partition of unity method for analysis of composite FG plates.
- Author
- Jeyakarthikeyan, P.V., Subramaniam, Siddarth, Charuasia, Vikalp, Vengatesan, S., and Bui, Tinh Quoc
- Subjects
- PARTITION of unity method, STRAIN energy, COMPOSITE plates, FREE vibration, EXTRAPOLATION, NUMERICAL analysis
- Abstract
This work focuses on numerical analysis of the static bending and free vibration of Functionally Graded plates (FGPs) with interior holes using Reissner–Mindlin theory, employing an effective Richardson extrapolation-based reduced integration (REQ) approach and strain energy. The partition of unity method is used to articulate a stabilizing function locally, which possesses the local stabilizing ability to avoid zero-energy or hourglass effects, for each sub-quadrilateral or sub-cell (SC) in the REQ scheme. The technique computes element stiffness and mass matrices for the bending and shear energies of the FGPs at the location (0,0) in the standard mapped (ξ, η) plane using reduced integration. It identifies the required local stability and refinement of the field variable function from sub-cells to parent quadrilaterals, enhances computational accuracy at low cost, possesses a better convergence rate, and avoids the shear-locking phenomenon in the analysis. Numerous benchmark numerical examples are considered for static bending and free vibration analysis with different types of boundary conditions and cutouts. The calculated normalized central deflection and frequency values are compared with reference data available in the literature, which confirms the accuracy, rate of convergence, shear-locking behavior, and computational performance of the developed REQ method. Investigations are conducted on FGPs with internal holes using various material gradient indices, boundary conditions, and geometric aspect ratios. • The Richardson extrapolation-based integration scheme simplifies the computation process using reduced integration. • Computational time is significantly reduced compared to conventional quadrature. • The method analyzes plate elements efficiently without much burden on the number of integration points per element. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Perils of Randomized Controlled Trial Survival Extrapolation Assuming Treatment Effect Waning: Why the Distinction Between Marginal and Conditional Estimates Matters.
- Author
- Jennings, Angus C., Rutherford, Mark J., Latimer, Nicholas R., Sweeting, Michael J., and Lambert, Paul C.
- Subjects
- EXTRAPOLATION, RANDOMIZED controlled trials, TREATMENT effectiveness, TREATMENT effect heterogeneity, SURVIVAL rate, NOMOGRAPHY (Mathematics), TECHNOLOGY assessment
- Abstract
A long-term, constant, protective treatment effect is a strong assumption when extrapolating survival beyond clinical trial follow-up; hence, sensitivity to treatment effect waning is commonly assessed for economic evaluations. Forcing a hazard ratio (HR) to 1 does not necessarily estimate loss of individual-level treatment effect accurately because of HR selection bias. A simulation study was designed to explore the behavior of marginal HRs under a waning conditional (individual-level) treatment effect and demonstrate bias in forcing a marginal HR to 1 when the estimand is "survival difference with individual-level waning". Data were simulated under 4 parameter combinations (varying prognostic strength of heterogeneity and treatment effect). Time-varying marginal HRs were estimated in scenarios where the true conditional HR attenuated to 1. Restricted mean survival time differences, estimated having constrained the marginal HR to 1, were compared with true values to assess bias induced by marginal constraints. Under loss of conditional treatment effect, the marginal HR took a value >1 because of covariate imbalances. Constraining this value to 1 led to restricted mean survival time difference bias of up to 0.8 years (a 57% increase). Inflation of effect size estimates also increased with the magnitude of the initial protective treatment effect. Important differences exist between survival extrapolations assuming marginal versus conditional treatment effect waning. When a marginal HR is constrained to 1 to assess efficacy under individual-level treatment effect waning, the survival benefits associated with the new treatment will be overestimated, and incremental cost-effectiveness ratios will be underestimated. • Noncollapsibility of odds/hazard ratios (HRs) has been widely discussed in the causal inference literature. Discussion of the impact in randomized controlled trial extrapolations is scarcer. This article explores the importance of understanding the difference between marginal and conditional HRs, specifically in the context of an economic evaluation/health technology assessment using randomized controlled trial data, carrying out an extrapolation assuming treatment effect waning. • If the individual-level treatment effect truly wanes after a period of protective treatment effect, a marginal (e.g., unadjusted) HR > 1 from this point is to be expected. This reflects covariate imbalances between arms induced by the treatment's effect and demonstrates the difficulty in interpreting an HR that is dependent on both participant-level hazards and trial-level covariate distributions (and how these shift over time). • Forcing a marginal HR to 1 from the end of trial follow-up is not equivalent to individual-level treatment effect loss (inducing bias in restricted mean survival time difference of up to 0.8 years, increasing with treatment strength and prognostic power of marginal covariates). To most accurately assess sensitivity to treatment effect waning, constraints on the HR should be applied to an estimate that is conditional on all feasible prognostic factors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. L-stable spectral deferred correction methods and applications to phase field models.
- Author
- Yao, Lin, Xia, Yinhua, and Xu, Yan
- Subjects
- MOLECULAR beam epitaxy, CRANK-Nicolson method, LINEAR operators, EXTRAPOLATION
- Abstract
This paper presents the L-stable spectral deferred correction (SDC) methods with low stages. These schemes are initiated by the Crank-Nicolson method. We adopt the linear stabilization approach for the phase field models to obtain the linear implicit SDC scheme. This is done by adding and subtracting the linear stabilization operators that are provided for the different phase field problems. Without loss of the low-stage property, the extrapolation technique is also used in the prediction step of the semi-implicit SDC method. Numerical experiments are given to validate the high-order accuracy and the energy decay property of the proposed semi-implicit SDC methods for the Allen-Cahn, Cahn-Hilliard, and molecular beam epitaxy equations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Evidence of mechanisms in evidence-based policy.
- Author
- Pérez-González, Saúl
- Subjects
- EVIDENCE-based medicine, EXTRAPOLATION, SOCIAL science research
- Abstract
Evidence-based policy has achieved great relevance in policy-making and social research. Nonetheless, over the past few years, several problematic aspects of this approach have been identified. This paper discusses whether, and to what extent, evidence of mechanisms could contribute to addressing certain difficulties faced by evidence-based policy. I argue that it could play a crucial role in the assessment of the efficacy of interventions, the extrapolation of interventions to target populations, and the identification of side effects. For analysing the potential contribution of evidence of mechanisms, the previous debate on the pluralist approach to evidence-based medicine is taken as reference. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Variable sample-size operator extrapolation algorithm for stochastic mixed variational inequalities.
- Author
- Yang, Zhen-Ping, Xie, Shuilian, Zhao, Yong, and Lin, Gui-Hua
- Subjects
- TRAFFIC assignment, ASSIGNMENT problems (Programming), ALGORITHMS, VARIATIONAL inequalities (Mathematics), EXTRAPOLATION, SAMPLE size (Statistics)
- Abstract
In this paper, we present a variable sample-size operator extrapolation algorithm for solving a class of stochastic mixed variational inequalities. One distinctive feature of our algorithm is that it updates a single search sequence by solving one prox-mapping subproblem and computing one evaluation of the expected mapping at each iteration, and hence may significantly reduce the computational load. In particular, the iteration sequence generated by our algorithm always belongs to the feasible region. We show that, under some moderate conditions, the proposed algorithm achieves an O(1/T) ergodic convergence rate in terms of the expected restricted gap function, where T denotes the number of iterations. We derive some results related to the convergence rate of the Bregman distance between iterates and solutions, the iteration complexity, and the oracle complexity for the proposed algorithm when the sample size increases at a geometric rate. We also investigate the sublinear convergence rate in terms of the residual function under the generalized monotonicity condition. Numerical experiments on stochastic network games, stochastic sparse traffic assignment problems, and a sparse classification problem indicate that the proposed algorithm is promising compared with some existing algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Evaluation of Flexible Parametric Relative Survival Approaches for Enforcing Long-Term Constraints When Extrapolating All-Cause Survival.
- Author
- Lee, Sangyu, Lambert, Paul C., Sweeting, Michael J., Latimer, Nicholas R., and Rutherford, Mark J.
- Subjects
- TECHNOLOGY assessment, SURVIVAL analysis (Biometry), RANDOMIZED controlled trials, PARAMETER estimation, DEATH rate, EXTRAPOLATION, NATURAL selection
- Abstract
Parametric models are used to estimate the lifetime benefit of an intervention beyond the range of trial follow-up. Recent recommendations have suggested more flexible survival approaches and the use of external data when extrapolating. Both of these can be realized by using flexible parametric relative survival modeling. The overall aim of this article is to introduce and contrast various approaches for applying constraints on the long-term disease-related (excess) mortality including cure models and evaluate the consequent implications for extrapolation. We describe flexible parametric relative survival modeling approaches. We then introduce various options for constraining the long-term excess mortality and compare the performance of each method in simulated data. These methods include fitting a standard flexible parametric relative survival model, enforcing statistical cure, and forcing the long-term excess mortality to converge to a constant. We simulate various scenarios, including where statistical cure is reasonable and where the long-term excess mortality persists. The compared approaches showed similar survival fits within the follow-up period. However, when extrapolating the all-cause survival beyond trial follow-up, there is variation depending on the assumption made about the long-term excess mortality. Altering the time point from which the excess mortality is constrained enables further flexibility. The various constraints can lead to applying explicit assumptions when extrapolating, which could lead to more plausible survival extrapolations. The inclusion of general population mortality directly into the model-building process, which is possible for all considered approaches, should be adopted more widely in survival extrapolation in health technology assessment. • Extrapolation of all-cause survival from randomized controlled trial data is often required in the context of economic evaluation of novel interventions. External information can be used to guide long-term survival trends when extrapolating all-cause survival. This article investigates relative survival modeling approaches to incorporate general population mortality rates into long-term extrapolation. In doing so, we expand on existing approaches by clearly detailing how to apply various constraints on the long-term disease-specific mortality in this modeling framework. • Flexible parametric survival models can apply various extrapolation approaches in the framework. We compare models by imposing constraints on parameter estimation and applying those constraints at specific points of follow-up, including beyond the range of the data. We compare these methods in scenarios where cure is simulated to be reasonable and in cases where it is not. The result shows that different assumptions on the excess hazards can result in different extrapolations of all-cause survival. The various constraints can lead to more plausible survival extrapolation depending on the disease or treatment characteristics. • The approaches for all-cause survival extrapolation outlined offer a suite of possibilities to clearly describe an approach to extrapolation that automatically incorporates general population mortality rates while making explicit choices on how long disease-related mortality will affect the cohort. The choice of approach can be dictated by the clinical context in the decision-making process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. A robust navigation filter fusing delayed measurements from multiple sensors and its application to spacecraft rendezvous.
- Author
- Frei, Heike, Burri, Matthias, Rems, Florian, and Risse, Eicke-Alexander
- Subjects
- DETECTORS, KALMAN filtering, NAVIGATION, TESTING laboratories, EXTRAPOLATION
- Abstract
A filter is an essential part of many control systems. For example, guidance, navigation and control systems for spacecraft rendezvous require a robust navigation filter that generates state estimates in a smooth and stable way, which is important for safe spacecraft navigation in rendezvous missions. Delayed, asynchronous measurements from possibly different sensors require a new filter technique that can handle these challenges. A new method is developed based on an Extended Kalman Filter with several adaptations in the prediction and correction steps. Two key aspects are the extrapolation of delayed measurements and sensor fusion in the filter correction. The new filter technique is applied to different close-range rendezvous examples and tested at the hardware-in-the-loop facility EPOS 2.0 (European Proximity Operations Simulator) with two different rendezvous sensors. Even with the realistic delays introduced by using an ARM-based on-board computer in the hardware-in-the-loop tests, the filter provides accurate, stable, and smooth state estimates in all test scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. Consensus document on biosimilar medicines in immune-mediated diseases in Spain [Documento de consenso sobre los medicamentos biosimilares en enfermedades inmunomediadas en España].
- Author
- Monte-Boquet, Emilio, Florez, Ángeles, Alcaín Martínez, Guillermo José, and Sellas, Agustí
- Subjects
- MEDICAL personnel, BIOTHERAPY, IMMUNE response, RHEUMATOLOGISTS
- Abstract
To improve knowledge about biosimilar medicines and to generate a consensus framework on their use. Qualitative study. A multidisciplinary group of experts in biosimilar medicines was established (1 dermatologist, 1 hospital pharmacist, 1 rheumatologist, and 1 gastroenterologist) who defined the sections and topics of the document. A narrative literature review was performed in Medline to identify articles on biosimilar medicines. Systematic reviews, controlled, pre-clinical, clinical, and real-life studies were selected. Based on the results of the review, several general principles and recommendations were generated. The level of agreement was tested in a Delphi that was extended to 66 health professionals who voted from 1 (totally disagree) to 10 (totally agree). Agreement was defined if at least 70% of the participants voted ≥ 7. The literature review included 555 articles. A total of 10 general principles and recommendations were voted upon. All reached the level of agreement established. The document includes data on the main characteristics of biosimilar medicines (definition, development, approval, indication extrapolation, interchangeability, financing, and traceability); published evidence (biosimilarity, efficacy, effectiveness, safety, immunogenicity, efficiency, switch); barriers and facilitators to their use; and data on information for patients. Authorized biosimilar medicines meet all the characteristics of quality, efficacy, and safety. They also significantly help improve patient access to biological therapies and contribute to health system sustainability. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
31. First-principle calculation of the $\eta_c \to 2\gamma$ decay width from lattice QCD.
- Author
- Meng, Yu, Feng, Xu, Liu, Chuan, Wang, Teng, and Zou, Zuoheng
- Subjects
- QUANTUM chromodynamics, QUARKS, EXTRAPOLATION, FERMIONS, CHARMONIUM, LATTICE field theory
- Abstract
We perform a lattice QCD calculation of the $\eta_c \to 2\gamma$ decay width using a model-independent method that requires no momentum extrapolation of the off-shell form factors. This method also provides a straightforward and simple way to examine the finite-volume effects. The calculation is accomplished using $N_f = 2$ twisted mass fermion ensembles. Statistically significant excited-state effects are observed and eliminated using a multi-state fit. The impact of fine-tuning the charm quark mass is also examined and confirmed to be well-controlled. Finally, using three lattice spacings for the continuum extrapolation, we obtain the decay width $\Gamma_{\eta_c\gamma\gamma} = 6.67(16)_{\mathrm{stat}}(6)_{\mathrm{syst}}$ keV, which differs significantly from the Particle Data Group's reported value of $\Gamma_{\eta_c\gamma\gamma} = 5.4(4)$ keV (a $2.9\sigma$ tension). We provide insight into the comparison between our findings, previous theoretical predictions, and experimental measurements. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
32. Dynamic Mortality Modeling: Incorporating Predictions of Future General Population Mortality Into Cost-Effectiveness Analysis.
- Author
- Lee, Dawn and McNamara, Simon
- Subjects
- DIFFUSE large B-cell lymphomas, DYNAMIC models, DEATH rate, AGE distribution, ECONOMIC models
- Abstract
Health economic models commonly apply observed general population mortality rates to simulate future deaths in a cohort. This is potentially problematic, because mortality statistics are records of the past, not predictions for the future. We propose a new dynamic general population mortality modeling approach, which enables analysts to implement predictions of future changes in mortality rates. The potential implications of moving from a conventional static approach to a dynamic approach are illustrated using a case study. The model utilized in National Institute for Health and Care Excellence appraisal TA559 (axicabtagene ciloleucel for diffuse large B-cell lymphoma) was replicated. National mortality projections were taken from the UK Office for National Statistics. Mortality rates by age and sex were updated each modeled year, with the first modeled year using 2022 rates, the second 2023, and so on. A total of 4 different assumptions were made around age distribution: fixed mean age, lognormal, normal, and gamma. The dynamic model outcomes were compared with those from a conventional static approach. Including dynamic calculations increased the undiscounted life-years attributed to general population mortality by 2.4 to 3.3 years. This led to an increase in discounted incremental life-years within the case study of 0.38 to 0.45 years (8.1%-8.9%), and a commensurate impact on the economically justifiable price of £14 456 to £17 097. The application of a dynamic approach is technically simple and has the potential to meaningfully affect estimates of cost-effectiveness analysis. Therefore, we call on health economists and health technology assessment bodies to move toward use of dynamic mortality modeling in future. • Health economic models commonly apply observed general population mortality rates to simulate future deaths in a cohort. This is potentially problematic, because mortality statistics are records of the past, not predictors of the future. • We propose a new dynamic general population mortality modeling approach, which enables analysts to implement predictions of future changes in mortality rates. We demonstrate the potential impact of this approach using a replication of the axicabtagene ciloleucel model from National Institute for Health and Care Excellence appraisal TA559. • Including dynamic calculations increased the undiscounted life-years attributed to general population mortality by 2.4 to 3.3 years, with a potentially meaningful impact on the economically justifiable price of £14 456 to £17 097. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
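The dynamic approach in entry 32 is, computationally, an indexing change: instead of freezing one year's general-population rates, the cohort model looks up the projected rates for each future calendar year. A minimal sketch with a synthetic rate table standing in for ONS projections (all numbers assumed):

```python
import numpy as np

# Static vs dynamic general-population mortality in a cohort model. The
# static run reuses the 2022 column forever; the dynamic run walks across
# calendar years. Rates are a synthetic Gompertz-like table, not ONS data.

ages = np.arange(60, 111)
years = np.arange(2022, 2073)
qx = np.minimum(1.0, 0.0005 * np.exp(0.095 * (ages[:, None] - 30))
                * 0.985 ** (years[None, :] - 2022))   # ~1.5%/yr improvement

def life_years(dynamic):
    alive, total = 1.0, 0.0
    for i, age in enumerate(range(60, 111)):
        q = qx[age - 60, i if dynamic else 0]         # calendar lookup vs frozen
        total += alive * (1 - 0.5 * q)                # half-cycle correction
        alive *= 1 - q
    return total

print(f"static  life-years from 60: {life_years(False):.2f}")
print(f"dynamic life-years from 60: {life_years(True):.2f}")
```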
33. Density-extrapolation Global Variance Reduction (DeGVR) method for large-scale radiation field calculation.
- Author
- Pan, Qingquan, Wang, Lianjie, Cai, Yun, Liu, Xiaojing, and Xiong, Jinbiao
- Subjects
- RADIATION, GEOMETRIC modeling, MONTE Carlo method, EXTRAPOLATION, NUCLEAR facilities
- Abstract
The lack of a universal and effective Global Variance Reduction (GVR) method for deep-penetration Monte Carlo calculations makes it difficult or impossible to calculate the radiation field of large nuclear facilities. We propose a Density-extrapolation Global Variance Reduction (DeGVR) method for large-scale radiation field calculation. After reducing the density of all materials to avoid the deep penetration problem, two global flux distributions at the two reduced material densities are obtained with two fixed-source calculations, and the global flux distribution at the original material density is then obtained by extrapolating between them. That extrapolated distribution is then used for global variance reduction. In a large-scale model with a flux attenuation of 10⁸⁰, the DeGVR method takes only 40.3 minutes to build the radiation field, and improves the Average Figure-of-Merit (AV.FOM) by up to 85652 times and the counting rate per time (CRPT) by up to 88107 times compared with the standard Monte Carlo method. In addition to its high efficiency, the DeGVR method has high universality and robustness because it constructs the global information in a way that does not rely on complex mathematical derivation or geometric modeling. The DeGVR method shows excellent application potential in large-scale radiation analysis. • Scientific importance: provides a clever solution to the deep penetration problem. • Effectiveness: greatly improves the Average Figure-of-Merit (AV.FOM). • Robustness: requires no complex mathematical derivation or geometric modeling. • Engineering application: helpful for large-scale radiation field calculation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
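The extrapolation step behind entry 33 can be sketched without a transport code: deep-penetration flux decays roughly exponentially with material density, so two runs at reduced densities define a line in log-flux that is extended to the full density. In the sketch below an analytic attenuation formula stands in for the Monte Carlo solver, and all material numbers are assumptions:

```python
import numpy as np

# Density extrapolation: ln(flux) is ~linear in material density for a
# fixed geometry, so two cheap reduced-density solutions extrapolate to
# the intractable full-density flux used to build the variance-reduction map.

def flux(rho, depth_cm=300.0, sigma_per_rho=0.06):
    return np.exp(-sigma_per_rho * rho * depth_cm)     # toy attenuation model

rho_full = 7.8                                         # e.g. steel [g/cm^3]
rho1, rho2 = 0.2 * rho_full, 0.4 * rho_full            # tractable densities

f1, f2 = flux(rho1), flux(rho2)
slope = (np.log(f2) - np.log(f1)) / (rho2 - rho1)      # d ln(flux) / d rho
log_f_full = np.log(f1) + slope * (rho_full - rho1)

print(f"extrapolated flux: {np.exp(log_f_full):.3e}")
print(f"direct toy value : {flux(rho_full):.3e}")      # exact for this stand-in
```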
34. Smoothing fast proximal gradient algorithm for the relaxation of matrix rank regularization problem.
- Author
- Zhang, Jie and Yang, Xinmin
- Subjects
- ALGORITHMS, MATRICES (Mathematics), EXTRAPOLATION, SMOOTHING (Numerical analysis)
- Abstract
This paper proposes a general inertial smoothing proximal gradient algorithm for solving the Capped-$\ell_1$ exact continuous relaxation regularization model proposed by Yu and Zhang (2022) [29]. The proposed algorithm incorporates different extrapolations into the gradient and proximal steps. It is proved that, under some general parameter constraints, the singular values of any accumulation point of the sequence generated by the proposed algorithm have a common support set, and the zero singular values can be achieved in a finite number of iterations. Furthermore, any accumulation point is a lifted stationary point of the relaxation model. Numerical experiments illustrate the efficiency of the proposed algorithm on synthetic and real data, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
35. Approximation of the Tikhonov regularization parameter through Aitken's extrapolation.
- Author
- Fika, Paraskevi
- Subjects
- TIKHONOV regularization, REGULARIZATION parameter, EXTRAPOLATION, QUADRATIC forms
- Abstract
In the present work, we study the determination of the regularization parameter and the computation of the regularized solution in Tikhonov regularization by Aitken's extrapolation method. In particular, this convergence acceleration method is adjusted for the approximation of quadratic forms that appear in regularization methods, such as the generalized cross-validation method, the quasi-optimality criterion, the Gfrerer/Raus method, and Morozov's discrepancy principle. We present several numerical examples to illustrate the effectiveness of the derived estimates for approximating the regularization parameter for several linear discrete ill-posed problems, and we compare the described method with further existing methods for the determination of the regularized solution. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
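Aitken's Δ² process, the accelerator behind entry 35, maps a linearly convergent sequence sₙ to sₙ − (Δsₙ)²/(Δ²sₙ). A minimal sketch on an assumed geometric sequence; the paper applies the same device to quadratic forms arising in Tikhonov parameter selection.

```python
import numpy as np

# Aitken's Delta-squared acceleration of a linearly convergent sequence.
# For s_n = s + c*r^n the transformed sequence hits the limit immediately
# (up to floating point); in general it upgrades the convergence rate.

def aitken(s):
    s = np.asarray(s, dtype=float)
    d1 = s[1:-1] - s[:-2]                  # forward differences
    d2 = s[2:] - 2 * s[1:-1] + s[:-2]      # second differences
    return s[:-2] - d1**2 / d2

n = np.arange(1, 12)
s = np.pi + 0.8 * (-0.6) ** n              # assumed demo sequence, limit pi

acc = aitken(s)
print("last plain term error :", abs(s[-1] - np.pi))
print("last Aitken term error:", abs(acc[-1] - np.pi))
```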
36. Improvement of satellite-derived surface solar irradiance estimations using spatio-temporal extrapolation with statistical learning.
- Author
- Verbois, Hadrien, Saint-Drenan, Yves-Marie, Libois, Quentin, Michel, Yann, Cassas, Marie, Dubus, Laurent, and Blanc, Philippe
- Subjects
- STATISTICAL learning, SOLAR surface, METEOROLOGICAL satellites, SOLAR energy industries, BOOSTING algorithms, EXTRAPOLATION, ORBITS of artificial satellites
- Abstract
Estimations of solar surface irradiance (SSI) derived from meteorological satellites are widely used by various actors in the solar industry. However, even state-of-the-art empirical and physical SSI retrieval models exhibit significant errors; the estimations provided by these models are thus traditionally corrected using ground-based measurements of SSI as references. The literature is rich with such correction methods, often called adaptation techniques. Most of the proposed models, however, are local or site-specific, i.e., they do not extrapolate the correction in space and are only applicable to the location of the ground-based measurements. In this work, we propose a novel global adaptation technique, that can extrapolate the correction in both space and time. To that end, we leverage (1) a dense network of measurement stations across France, (2) a relatively large number of predictors, and (3) a non-linear, sophisticated regression algorithm, the Extreme Gradient Boosting. The model is applied to the HelioClim3 database; its performance is benchmarked against raw HelioClim3 estimations, and alternative, simpler adaptation techniques. Our analysis shows that this global model significantly improves satellite-derived SSI estimations from the HelioClim3 database, even when the evaluation is carried out on measurement stations that were not part of the training set of the algorithm. Our proposed model also outperforms all tested alternative global adaptation techniques. These results suggest that global adaptation techniques leveraging advanced machine learning and high dimensionality have the potential to significantly improve satellite-derived SSI estimations, notably more than traditional adaptation approaches. There is certainly room for improvement, but the development of such techniques is a promising research topic. • A novel, non-linear and high-dimensional global adaptation model was developed. • The proposed model significantly and systematically outperforms HelioClim3. • Several benchmarks were developed to justify the design choices. • The benefits of non-linearity and high dimensionality were demonstrated. • The analysis of the dynamic of the spatial field of SSI revealed limitations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
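The global adaptation of entry 36 reduces to regression on residuals: learn the ground-minus-satellite error from many predictors at station locations, then apply the learned correction anywhere. A hedged sketch on synthetic data; scikit-learn's gradient boosting stands in for the paper's Extreme Gradient Boosting, and every predictor and coefficient below is invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Adaptation by residual regression: fit ground-minus-satellite SSI errors
# against auxiliary predictors, then correct satellite estimates elsewhere.
# All arrays are synthetic stand-ins for HelioClim3 and station records.

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(0.0, 1.0, n),      # clear-sky index (assumed predictor)
    rng.uniform(0.0, 85.0, n),     # solar zenith angle [deg] (assumed)
    rng.uniform(42.0, 51.0, n),    # latitude [deg] (assumed)
])
sat_ssi = 1000.0 * X[:, 0] * np.cos(np.radians(X[:, 1]))
bias = 30.0 * np.sin(np.radians(4.0 * X[:, 2])) - 0.05 * sat_ssi
ground_ssi = sat_ssi + bias + rng.normal(0.0, 10.0, n)

train = rng.random(n) < 0.5        # the paper holds out whole stations
model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
model.fit(X[train], (ground_ssi - sat_ssi)[train])

test = ~train
corrected = sat_ssi[test] + model.predict(X[test])
rmse = lambda e: float(np.sqrt(np.mean(e ** 2)))
print("RMSE raw      :", rmse(sat_ssi[test] - ground_ssi[test]))
print("RMSE corrected:", rmse(corrected - ground_ssi[test]))
```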
37. Control Dosimetric Outcome of AI-Generated Plans through OAR Prioritization for Head-and-Neck Cancer: An Investigation on Input Sensitivity.
- Author
- Li, X., Yang, D., Sheng, Y., Ge, Y., Wu, Q., and Wu, Q.J.J.
- Subjects
- MEDICAL dosimetry, ARTIFICIAL intelligence, CLINICAL medicine, PREDICTION models, EXTRAPOLATION
- Abstract
To examine the response of an Artificial Intelligence (AI) model for Head-and-Neck (HN) IMRT treatment planning to various organs-at-risk (OAR) priorities as model inputs. Our lab has developed an AI system that predicts an IMRT plan's optimal fluence maps based on the patient's anatomical information and user-defined OAR priorities. This AI system was specialized for HN primary targets with bilateral segments and showed promising dosimetric outcomes using preset OAR priorities as inputs. This study examines the response of the AI system to an extensive range of OAR priorities as inputs, to investigate whether the model prediction breaks under certain conditions and whether the AI behaves as expected in the extrapolation region. During training of the AI system, two IMRT plans were generated for each of the 200 training cases: one used a set of fixed priorities with balanced dose tradeoffs among OARs and PTV; the other used the same set of objectives with randomly altered priorities. To examine the changes in response, the priorities of 4 OARs (the left and right (L&R) parotids, oral cavity, and cord + 5 mm) were altered as inputs. Each priority was altered by -80%, -40%, 0%, 40%, and +80% from the balanced values, and a total of 5^4 = 625 AI plans were generated for a test case. The AI plans had full dose calculated by a treatment planning system. All plans were normalized so that 100% of the prescription dose covers 90% of the PTV. To avoid extreme cases, AI plans with conformity index > 1.1, heterogeneity index > 30, or max dose > 125% were excluded from the statistics calculations. The median dose of the L&R parotids, the median dose of the oral cavity, and the D1cc of cord + 5 mm were evaluated. These metrics were linearly fit using OAR priorities as inputs and 5-fold cross validation. 232 OAR priority sets were included in the statistics. These sets do not include any -80% priority change, indicating that the AI model breaks when users heavily prioritize the PTV over OARs. Among these 232 AI plans, the L&R parotid median dose varied by almost ±10 Gy. The ground truth plan with balanced tradeoffs was used as reference. The linear fitting coefficients averaged over the 5 folds were reported in the table in the format y = c1 + c2·ΔP(parotidR) + c3·ΔP(parotidL) + c4·ΔP(oralcavity) + c5·ΔP(cord5mm), where ΔP(OAR) is the ratio of priority variation. The R-squared of the linear fitting was ~0.95 except for cord + 5 mm. The AI system could generate substantially different plans for the same patient, and the response was essentially linear in the input priorities. This study demonstrated the promising performance of the AI system and potentially enhances trust in AI's clinical applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Evaluation and quantification of compressor model predictive capabilities under modulation and extrapolation scenarios.
- Author
-
Gabel, Kalen S. and Bradshaw, Craig R.
- Subjects
- *
COMPRESSORS , *EXTRAPOLATION , *PREDICTION models , *ELECTRICAL load - Abstract
Testing and evaluation of selected semi-empirical and black-box compressor models is carried out to quantify performance under modulation (variable speed), extrapolation, and, additionally, variable-superheat scenarios. Three representative literature models and an artificial neural network (ANN) model are benchmarked against the industry-standard AHRI model. A methodology for quantifying model performance against experimental data in these scenarios is presented. High-fidelity test data were taken from either a hot-gas bypass load stand or a compressor calorimeter. Data for scroll, screw, reciprocating, and spool compressor technologies were collected with R410A, R1234ze(E), R134a, and R32 refrigerants, totaling 434 experimental points. The data are divided into training, extrapolation, variable-speed, and variable-superheat splits to examine model performance. Mean Absolute Percentage Error (MAPE) is computed for mass flow rate and power after training the models on the training split and evaluating them against the other splits. Two of the literature models are true semi-empirical formulations, while the third, the ANN, and the AHRI model are more empirical in nature. Neither semi-empirical model could predict all compressor types. When the compressor type is predicted, the semi-empirical models yield MAPEs of less than 8%, 5%, and 4% for mass flow rate and power prediction in the extrapolation, modulation, and variable-superheat scenarios, respectively. The exception is the Popovic and Shapiro model, which performs at 21% MAPE for variable-superheat power prediction on the spool compressor with R1234ze(E). The ANN showed highest errors of 9.3%, 12%, and 17% in the extrapolation, modulation, and variable-superheat scenarios, respectively. All models outperformed the AHRI model by several orders of magnitude in these scenarios. [ABSTRACT FROM AUTHOR]
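For reference, a minimal sketch of the benchmark metric; the arrays below are hypothetical stand-ins, not the 434 experimental points.

```python
# Mean Absolute Percentage Error (MAPE), as used to score mass flow rate
# and power predictions on each held-out data split.
import numpy as np

def mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """MAPE in percent; assumes y_true has no zeros (mass flow, power > 0)."""
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))

y = np.array([0.10, 0.12, 0.15])     # measured mass flow (kg/s), hypothetical
yhat = np.array([0.11, 0.12, 0.14])  # model prediction
print(f"MAPE: {mape(y, yhat):.1f}%")
```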
- Published
- 2023
- Full Text
- View/download PDF
39. Efficient numerical simulation of Cahn-Hilliard type models by a dimension splitting method.
- Author
-
Xiao, Xufeng, Feng, Xinlong, and Shi, Zuoqiang
- Subjects
- *
DIBLOCK copolymers , *CONSERVATION of mass , *COMPUTER simulation , *COMPUTATIONAL complexity , *PHASE separation , *TUMOR growth , *EXTRAPOLATION - Abstract
In this paper, an efficient dimension splitting method is applied to the two-dimensional (2D) and three-dimensional (3D) Cahn-Hilliard type equations, which have significant applications in the physical, biological and computer sciences. The proposed method can greatly reduce the storage requirements and computational complexity in two and three dimensions while preserving high precision and mass conservation. The dimension splitting method is based on auxiliary variables for the spatial derivative terms and the operator splitting approach. It converts the 2D or 3D problem into a series of one-dimensional (1D) problems which can be solved in parallel using multiple threads. The stability and accuracy of the proposed method are improved by a local stabilizing approach and extrapolation. Analyses of the discrete energy and the discrete mass conservation property are given. Numerical examples confirm the advantages of the proposed method, and a variety of simulations, such as phase separation, curvature-driven flows, diblock copolymers, tumor growth and volume restoration, are performed with respect to practical applications. • Dimension splitting method for Cahn-Hilliard type equations. • The global stability is achieved by the local stability of solving each sub-problem. • Fourth-order mass conservative scheme based on compact differencing and extrapolation. • A large number of numerical examples for practical applications. [ABSTRACT FROM AUTHOR]
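A hedged illustration of the splitting idea on a plain 2D diffusion step (deliberately not the Cahn-Hilliard discretization itself, which involves fourth-order terms and stabilization): one multi-dimensional update is replaced by a sweep of cheap 1D updates along each axis.

```python
# Operator/dimension splitting sketch: a 2D diffusion step becomes a series
# of 1D problems, one sweep per axis, each independently parallelizable.
import numpy as np

def step_1d(u: np.ndarray, dt: float, dx: float, axis: int) -> np.ndarray:
    """Explicit 1D diffusion update along one axis with periodic boundaries."""
    lap = (np.roll(u, 1, axis) - 2 * u + np.roll(u, -1, axis)) / dx**2
    return u + dt * lap

n, dx, dt = 128, 1.0 / 128, 1e-6        # dt/dx^2 ~ 0.016, explicitly stable
u = np.random.default_rng(2).random((n, n))
for _ in range(100):
    u = step_1d(u, dt, dx, axis=0)      # series of 1D problems along x
    u = step_1d(u, dt, dx, axis=1)      # then along y
print(f"mass after splitting steps: {u.mean():.6f}")  # conserved by periodic stencil
```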
- Published
- 2023
- Full Text
- View/download PDF
40. Incidence, mortality and trends of cutaneous squamous cell carcinoma in Germany, the Netherlands, and Scotland.
- Author
-
Keim, Ulrike, Katalinic, Alexander, Holleczek, Bernd, Wakkee, Marlies, Garbe, Claus, and Leiter, Ulrike
- Subjects
- *
REPORTING of diseases , *MATHEMATICAL models , *MORTALITY , *DISEASE incidence , *SKIN tumors , *THEORY , *SQUAMOUS cell carcinoma - Abstract
Cutaneous squamous cell carcinoma (cSCC) incidence is increasing, but separately reported cSCC data are scarcely available. We analysed incidence rates of cSCC over three decades with an extrapolation to 2040. Cancer registries from the Netherlands, Scotland and two federal states of Germany (Saarland/Schleswig-Holstein) were sourced for separate cSCC incidence data. Incidence and mortality trends between 1989/90 and 2020 were assessed using Joinpoint regression models. Modified age-period-cohort models were applied to predict incidence rates up to 2044. Rates were age-standardised using the new European standard population (2013). Age-standardised incidence rates (ASIR, per 100,000 persons per year) increased in all populations. The annual percent increase ranged between 2.4% and 5.7%. The highest increase occurred in the age groups ≥60 years, especially in men aged ≥80 years, with a three- to five-fold increase. Extrapolations up to 2044 showed an unrestrained increase in incidence rates in all countries investigated. Age-standardised mortality rates (ASMR) showed slight increases of between 1.4% and 3.2% per year in Saarland and Schleswig-Holstein for both sexes and for men in Scotland. For the Netherlands, ASMRs remained stable for women but declined for men. There was a continuous increase in cSCC incidence over three decades with no tendency towards levelling off, especially in the older populations, such as males ≥80 years. Extrapolations point to a further increasing number of cSCC cases up to 2044, especially among those ≥60 years. This will have a significant impact on the current and future burden on dermatologic health care, which will be faced with major challenges. • Separate age-standardised incidence rates of cutaneous squamous cell carcinoma from four European registries were analysed. • An annual increase between +2.4% and +5.7% was observed over three decades. • The highest increases occurred in ages ≥60, especially in men ≥80. • Age-standardised mortality rates are rising in Germany and Scotland. • In extrapolations until 2044, an increase of up to 5-fold in ages ≥60 is expected. [ABSTRACT FROM AUTHOR]
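A small sketch of direct age standardisation, the computation behind the ASIR/ASMR figures; the case counts, person-years and weights below are invented, and a real analysis would use the full European standard population (2013) weight table.

```python
# Direct age standardisation: weight age-group-specific crude rates by a
# fixed standard population so rates are comparable across countries/years.
import numpy as np

cases = np.array([5, 40, 200, 600])               # cases per age group (hypothetical)
person_years = np.array([4e5, 3e5, 2e5, 1e5])     # population at risk (hypothetical)
std_weights = np.array([0.40, 0.30, 0.20, 0.10])  # placeholder ESP-2013-style weights

crude_rates = cases / person_years * 1e5          # per 100,000 person-years
asir = np.sum(crude_rates * std_weights)
print(f"age-standardised rate: {asir:.1f} per 100,000")
```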
- Published
- 2023
- Full Text
- View/download PDF
41. A new approach on data extrapolation for mortar dating in the Zagreb Radiocarbon Laboratory.
- Author
-
Sironić, Andreja, Cherkinsky, Alexander, Borković, Damir, Damiani, Suzana, Barešić, Jadranka, Visković, Eduard, and Bronić, Ines Krajcar
- Subjects
- *
EXTRAPOLATION , *MORTAR , *CARBON dioxide , *RADIOCARBON dating , *HYDROLYSIS kinetics , *GRAIN size , *PHOSPHORIC acid - Abstract
There is no unique way of mortar sample preparation that would always provide the true date. We propose a procedure for 14C age extrapolation from CO2 fractions obtained by sequential dissolution of mortar of grain size 32–63 μm in phosphoric acid. The collection of CO2 fractions is deduced from the kinetics of the mortar-hydrolysis curve. The procedure was designed using data obtained from mortars prepared in the laboratory and was tested on mortar/plaster from three archaeological cases with confirmed ages. The first fractions gave the true result for the historical mortars, but for the laboratory ones only the extrapolated values were true. When certain conditions regarding CO2 fraction collection were met, the extrapolated values agreed with the first fractions for the historical mortars. Although the extrapolation procedure appears to eliminate the effect of dead-carbon contamination, it is not effective against the influence of delayed hardening or restoration. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
42. Incorporating NODE with pre-trained neural differential operator for learning dynamics.
- Author
-
Gong, Shiqi, Meng, Qi, Wang, Yue, Wu, Lijun, Chen, Wei, Ma, Zhiming, and Liu, Tie-Yan
- Subjects
- *
DIFFERENTIAL operators , *DERIVATIVES (Mathematics) , *ORDINARY differential equations , *SYSTEMS theory , *DIFFERENTIAL equations , *EXTRAPOLATION - Abstract
• We propose a new approach named NDO-NODE to stabilize the training and improve the generalization of NODE, which injects the inductive bias of a function library on derivative calculation. • We prove the generalization ability of the learned NDO: it can output derivatives for functions belonging to the library with small errors. • Experiments show that NDO-NODE can consistently improve both interpolation and extrapolation accuracy with one pre-trained NDO, and can stabilize the training for stiff ODEs. Learning dynamics governed by differential equations is crucial for predicting and controlling systems in science and engineering. The Neural Ordinary Differential Equation (NODE), a deep learning model integrated with differential equations, has recently become popular for learning dynamics due to its robustness to irregular samples and its flexibility with high-dimensional input. However, the training of NODE is sensitive to the precision of the numerical solver, which makes the convergence of NODE unstable, especially for ill-conditioned dynamical systems. In this paper, to reduce the reliance on the numerical solver, we propose to enhance the supervised signal in the training of NODE. Specifically, we pre-train a neural differential operator (NDO) to output an estimation of the derivatives to serve as an additional supervised signal. The NDO is pre-trained on a class of basis functions and learns the mapping from the trajectory samples of these functions to their derivatives. To leverage both the trajectory signal and the estimated derivatives from the NDO, we propose an algorithm called NDO-NODE, in which the loss function contains two terms: the fitness on the true trajectory samples and the fitness on the estimated derivatives output by the pre-trained NDO. Experiments on various kinds of dynamics show that the proposed NDO-NODE can consistently improve forecasting accuracy with one pre-trained NDO. Especially for stiff ODEs, we observe that NDO-NODE can capture the transitions in the dynamics more accurately compared with other regularization methods. [ABSTRACT FROM AUTHOR]
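A schematic of the two-term NDO-NODE loss described above, under assumed interfaces: `f_theta` is the NODE dynamics network, `ndo` the frozen pre-trained derivative estimator, and `lam` a weighting hyperparameter. All names are illustrative rather than taken from the authors' code.

```python
# Two-term loss: fit the observed trajectory, and fit the dynamics network
# against derivative estimates produced by the frozen pre-trained NDO.
import torch

def ndo_node_loss(f_theta, ndo, t, x_obs, x_pred, lam=1.0):
    # fitness on the true trajectory samples
    traj_loss = torch.mean((x_pred - x_obs) ** 2)
    # fitness on derivatives estimated by the pre-trained NDO (no grads into it)
    with torch.no_grad():
        dx_est = ndo(t, x_obs)
    deriv_loss = torch.mean((f_theta(t, x_obs) - dx_est) ** 2)
    return traj_loss + lam * deriv_loss
```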
- Published
- 2023
- Full Text
- View/download PDF
43. Learning the physics-consistent material behavior from measurable data via PDE-constrained optimization.
- Author
-
Wu, Xinxin, Zhang, Yin, and Mao, Sheng
- Subjects
- *
MATHEMATICAL forms , *MATERIALS science , *TEST methods , *EXTRAPOLATION , *INTERPOLATION - Abstract
Constitutive models play a crucial role in materials science, as they describe the behavior of materials in mathematical form. Over the last few decades, the rapid development of manufacturing technologies has led to the discovery of many advanced materials with complex and novel behavior, which has also posed great challenges for constructing accurate and reliable constitutive models of these materials. In this work, we propose a data-driven approach to construct physics-consistent constitutive models for hyperelastic materials from measurable data, with the help of PDE-constrained optimization methods. Specifically, our constitutive models are based on physically augmented neural networks (PANNs), which have been shown to be both physically consistent and mathematically well-posed by construction. Specimens with deliberately introduced inhomogeneity are used to generate the data, i.e., the full-field displacement data and the total external load, for training the model. Using this approach, a considerably diverse set of stress–strain states can be explored with a limited number of simple tests, such as uniaxial tension. A loss function is defined to measure the difference between the data and the model prediction, which is obtained by numerically solving the governing PDEs under the same geometry and loading conditions. With the help of the adjoint method, we can iteratively optimize the parameters of our NN-based constitutive models through gradient descent. We test our method on a wide range of hyperelastic materials, and in all cases it captures the constitutive model efficiently and accurately. The trained models are also tested against unseen geometries and unseen loading conditions, exhibiting good interpolation and extrapolation capabilities. [ABSTRACT FROM AUTHOR]
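A loose sketch of the training loop in the spirit of PDE-constrained fitting, with a toy differentiable stand-in for the forward solve; reverse-mode autodiff plays the role the adjoint method plays in the paper. Nothing here reflects the authors' actual discretization or PANN architecture.

```python
# Generic PDE-constrained fitting loop: a differentiable forward solve maps
# constitutive parameters to a displacement field; gradient descent minimizes
# the misfit against (hypothetical) measured full-field data.
import torch

theta = torch.randn(8, requires_grad=True)        # stand-in model parameters
opt = torch.optim.Adam([theta], lr=1e-2)

def forward_solve(theta):
    """Placeholder for solving the governing PDE; must be differentiable."""
    x = torch.linspace(0, 1, 50)
    return torch.tanh(theta[0]) * x + 0.1 * torch.sin(theta[1] * x)

u_meas = 0.8 * torch.linspace(0, 1, 50)           # hypothetical measured field
for _ in range(200):
    opt.zero_grad()
    loss = torch.mean((forward_solve(theta) - u_meas) ** 2)
    loss.backward()                                # autodiff stands in for the adjoint
    opt.step()
print(f"final misfit: {loss.item():.2e}")
```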
- Published
- 2025
- Full Text
- View/download PDF
44. More consistent learnt relationships in regression neural networks using Fit to Median error measure.
- Author
-
Parkes, Amy I., Sobey, Adam J., and Hudson, Dominic A.
- Subjects
- *
SHIP models , *MACHINE learning , *CARBON dioxide mitigation , *EXTRAPOLATION , *GENERALIZATION - Abstract
Machine learning is increasingly used to optimise complex systems. The quality of this optimisation depends on the models having a reasonable physical representation of the system. However, standard error metrics are pointwise, providing a reasonable physical representation only under certain constraints. As an example, data models of ship powering have low error values (< 2%) but fail to consistently approximate the input–output relationships and cannot be used to optimise performance. This paper illustrates that the Fit to Median error measure can be used to assess how well the ground truth is modelled. It provides more consistent learnt relationships and improves the extrapolation accuracy of neural networks. This is illustrated on real-world data used for ship power prediction. • Explains why modern machine learning regression methods fail to model the ground truth. • Derives a new error measure to ensure the ground truth is modelled. • Illustrates that this new error measure produces improved extrapolation. [ABSTRACT FROM AUTHOR]
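The abstract does not spell out the measure's formula, so the following is only one plausible reading, clearly labeled as an assumption: score a model against the median trend of the data per input bin rather than against pointwise targets.

```python
# Hypothetical median-based error (the paper's exact Fit to Median definition
# may differ): bin an input feature, take the median observed output per bin,
# and measure how far predictions stray from that median trend.
import numpy as np

def fit_to_median_error(x, y, y_pred, n_bins=10):
    bins = np.quantile(x, np.linspace(0, 1, n_bins + 1))   # equal-count bins
    idx = np.clip(np.digitize(x, bins) - 1, 0, n_bins - 1)
    med = np.array([np.median(y[idx == b]) for b in range(n_bins)])
    return float(np.mean(np.abs(y_pred - med[idx])))
```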
- Published
- 2025
- Full Text
- View/download PDF
45. A novel dual-stage grey-box stacking method for significantly improving the extrapolation performance of ship fuel consumption prediction models.
- Author
-
Ruan, Zhang, Huang, Lianzhong, Li, Daize, Ma, Ranqi, Wang, Kai, Zhang, Rui, Zhao, Haoyang, Wu, Jianyi, and Li, Xiaowu
- Subjects
- *
MACHINE learning , *SHIP fuel , *ENERGY consumption , *CONSUMPTION (Economics) , *EXTRAPOLATION - Abstract
Ship Fuel Consumption Prediction (SFCP) is the foundation of ship energy efficiency assessment and optimization. However, existing research neglects model extrapolation performance, leading to significant degradation in predictive accuracy when models face dataset shift. To address this, a novel dual-stage grey-box stacking (DSGBS) model is proposed. First, based on the traditional grey-box model (GBM), a light grey-box model (LGBM) is proposed to enhance extrapolation ability by incorporating more prior knowledge. Then, an improved stacking framework is used to fuse multiple GBMs to build the DSGBS model. Finally, a physics-based white-box model (WBM) is established, along with black-box models (BBMs), traditional GBMs, and LGBMs based on nine machine learning algorithms. The extrapolation performance of these models is compared using data from three independent voyages. Results show that the DSGBS model has a significant advantage in extrapolation performance, reducing its RMSE by about 63.51%, 10.91%, and 52.52%, respectively, compared to the best model among the BBMs, the best model among the GBMs and LGBMs, and the WBM. The DSGBS model therefore mitigates the prediction accuracy loss caused by dataset shift, significantly improves extrapolation performance, and supports the practical application of ship energy efficiency management, with great significance for reducing operating costs and emissions. • A DSGBS model with high extrapolation performance is proposed for ship fuel consumption prediction. • Reveals that BBM predictive performance significantly degrades in extrapolation. • An LGBM is proposed that relies only on the physical model to generate input features. • First full revelation of the differences in extrapolation performance across models. • The DSGBS model improves extrapolation performance by 63.51%, 52.52% and 10.91% compared to the BBM, WBM and GBM. [ABSTRACT FROM AUTHOR]
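A generic grey-box sketch under invented physics and data: a white-box cubic speed law supplies a physics feature on which a boosted black-box model is trained, and extrapolation is tested on speeds outside the training range. The DSGBS model itself is a richer, dual-stage fusion of several grey-box models.

```python
# Grey-box idea in miniature: feed a physics-based prediction to a
# machine-learning model as an extra feature, then test out-of-range.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
speed = rng.uniform(8, 18, 2000)                   # knots (synthetic)
draft = rng.uniform(8, 12, 2000)                   # m (synthetic)
fuel = 0.02 * speed**3 + 0.5 * draft + rng.normal(0, 1, 2000)  # t/day, synthetic

wbm = 0.02 * speed**3                              # white-box prior (cubic law)
X = np.column_stack([speed, draft, wbm])           # grey-box: physics as feature

train = speed < 15                                 # extrapolate to higher speeds
model = GradientBoostingRegressor().fit(X[train], fuel[train])
err = np.sqrt(np.mean((model.predict(X[~train]) - fuel[~train]) ** 2))
print(f"extrapolation RMSE: {err:.2f} t/day")
```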
- Published
- 2025
- Full Text
- View/download PDF
46. An extrapolation-driven network architecture for physics-informed deep learning.
- Author
-
Wang, Yong, Yao, Yanzhong, and Gao, Zhiming
- Subjects
- *
SEQUENTIAL learning , *DEEP learning , *EVOLUTION equations , *CHARACTERISTIC functions , *LEARNING strategies - Abstract
Current physics-informed neural network (PINN) implementations with sequential learning strategies often exhibit weaknesses, such as the failure to reproduce previous training results when using a single network, the difficulty of strictly ensuring continuity and smoothness at the time-interval nodes when using multiple networks, and increased complexity and computational overhead. To overcome these shortcomings, we first investigate the extrapolation capability of the PINN method for time-dependent PDEs. Taking advantage of this extrapolation property, we generalize the training result obtained on a specific time subinterval to larger intervals by adding a correction term to the network parameters of the subinterval. The correction term is determined by further training with the sample points in the added subinterval. Secondly, by designing an extrapolation control function with special characteristics and combining it with the correction term, we construct a new neural network architecture whose network parameters are coupled with the time variable, which we call the extrapolation-driven network architecture. Based on this architecture, using a single neural network, we can obtain an overall PINN solution on the whole domain with the following two characteristics: (1) it completely inherits the local solution of the interval obtained from the previous training, and (2) at the interval node, it strictly maintains the continuity and smoothness that the true solution has. The extrapolation-driven network architecture allows us to divide a large time domain into multiple subintervals and solve the time-dependent PDEs one by one in chronological order. This training scheme respects the causality principle and effectively overcomes the difficulties of the conventional PINN method in solving evolution equations on a large time domain. Numerical experiments verify the performance of our method. The data and code accompanying this paper are available at https://github.com/wangyong1301108/E-DNN. [ABSTRACT FROM AUTHOR]
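A simplified, output-space variant of the parameter-correction idea (the paper couples the correction to the network parameters themselves): a control function whose value and first derivative vanish for t ≤ t1 keeps the inherited solution intact and makes the joint C1-continuous. All class and variable names are illustrative, not from the E-DNN repository.

```python
# Sketch: freeze the subinterval-1 solution and switch on a correction
# network smoothly only beyond the interval node t1.
import torch

class ExtrapolationDrivenNet(torch.nn.Module):
    def __init__(self, base: torch.nn.Module, correction: torch.nn.Module, t1: float):
        super().__init__()
        self.base, self.correction, self.t1 = base, correction, t1

    def forward(self, x, t):
        # control function: relu(t - t1)^2 and its derivative are both zero at
        # t1, so value and slope of the inherited solution are preserved there
        s = torch.relu(t - self.t1) ** 2
        inp = torch.cat([x, t], dim=-1)
        return self.base(inp) + s * self.correction(inp)
```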
- Published
- 2025
- Full Text
- View/download PDF
47. Moving boundary truncated grid method: Application to activated barrier crossing with the Klein–Kramers and Boltzmann–BGK models.
- Author
-
Li, Ming-Yu, Lu, Chun-Yaung, and Chou, Chia-Chun
- Subjects
- *
PADE approximant , *CORRECTION factors , *ANHARMONIC motion , *ASYMPTOTES , *EXTRAPOLATION - Abstract
We exploit the moving boundary truncated grid method for the Klein–Kramers and Boltzmann–BGK kinetic equations to approach the problem of thermally activated barrier crossing over non-parabolic barriers with reduced computational effort. The grid truncation algorithm dynamically deactivates the insignificant grid points, while the boundary extrapolation procedure explores potentially important portions of phase space. An economized Eulerian framework is established to integrate the kinetic equations efficiently in the tailored phase space. The effects of coupling strength, kinetic model, and potential shape on the escape rate are assessed through direct numerical simulations. In addition, we adapt the Padé approximant approach to non-parabolic barriers by introducing a correction factor into the spatial diffusion asymptote to account for the anharmonicity. The modified Padé approximants are remarkably consistent with the numerical results obtained from the conventional full grid method in the underdamped and overdamped regimes, while overestimating the rates in the turnover region, even exceeding the upper bound given by transition-state theory. By contrast, the truncated grid method provides accurate rate estimates in excellent agreement with the full grid benchmarks globally, with negligible relative errors below 1.02% for the BGK model and below 0.56% for the Kramers model, while substantially reducing the computational cost. Overall, the truncated grid method shows great promise as a high-performance scheme for the escape problem. • Efficient truncated grid method for kinetic equations reduces computational effort. • Dynamic boundary extrapolation explores critical phase space areas efficiently. • Modified Padé approximants account for anharmonicity in non-parabolic barriers. • Truncated grid method shows excellent consistency with full grid benchmarks. • Truncated grid method achieves precise escape rate estimates. [ABSTRACT FROM AUTHOR]
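A toy illustration of the truncation bookkeeping only (none of the kinetics): deactivate grid points where the distribution is negligible, and dilate the active set by one cell so the moving boundary can expand into newly important regions.

```python
# Grid truncation sketch: keep a boolean mask of "active" phase-space cells
# and grow it by one cell per step so the boundary can probe outward.
import numpy as np
from scipy.ndimage import binary_dilation

f = np.random.default_rng(4).random((64, 64))     # stand-in distribution
f[f < 0.9] = 0                                    # most cells are negligible

active = f > 1e-6                                 # deactivate insignificant points
active = binary_dilation(active)                  # boundary cells to probe next step
print(f"active fraction: {active.mean():.2%}")    # the source of the cost savings
```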
- Published
- 2025
- Full Text
- View/download PDF
48. Improving extrapolation capabilities of a data-driven prediction model for control of an air separation unit.
- Author
-
Krespach, Valentin, Blum, Nicolas, Pottmann, Martin, Rehfeldt, Sebastian, and Klein, Harald
- Subjects
- *
SEPARATION of gases , *DIGITAL twin , *PREDICTION models , *DATA augmentation , *EXTRAPOLATION - Abstract
In model predictive control, fully data-driven prediction models can be used alongside common (non-)linear prediction models based on first principles. Although such models require no process knowledge and rely only on sufficient data, they suffer from limited extrapolation capability, which is shown in the present work for the control of an air separation unit. To compensate for these deficits in extrapolation behavior, a further data source, here a digital twin, is deployed for additional data generation. The plant data set is augmented with the artificially generated data, giving rise to a hybrid model in terms of data generation. It is shown that this model can significantly improve the prediction quality in former extrapolation areas of the plant data set. Conclusions can even be drawn about the uncertainty behavior of the prediction model. • A data-driven prediction model is deployed in model predictive control of an air separation unit. • The extrapolation capability of the prediction model is investigated. • Three different prediction models are trained and compared in terms of their prediction quality. • A digital twin is used as an additional data source. • A model trained with the hybrid combined data set can improve extrapolation capabilities. [ABSTRACT FROM AUTHOR]
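A simple sketch of the hybrid data-generation idea with synthetic stand-ins for both plant and twin: the twin covers operating points the plant history lacks, and one regressor is trained on the combined set.

```python
# Augment scarce plant data with digital-twin samples from a wider operating
# range, then train a single data-driven prediction model on the hybrid set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

def plant(load):                                   # unknown true behavior
    return 0.8 * load + 5 + rng.normal(0, 0.2, load.shape)

def twin(load):                                    # imperfect first-principles twin
    return 0.78 * load + 5.3

load_plant = rng.uniform(60, 80, 300)              # narrow historical range
load_twin = rng.uniform(40, 100, 300)              # twin explores a wider range
X = np.concatenate([load_plant, load_twin])[:, None]
y = np.concatenate([plant(load_plant), twin(load_twin)])

model = RandomForestRegressor().fit(X, y)          # hybrid-data model
print(model.predict([[90.0]]))                     # former extrapolation region
```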
- Published
- 2025
- Full Text
- View/download PDF
49. An efficient kernel learning-based constitutive model for cyclic plasticity in nonlinear finite element analysis.
- Author
-
Liao, Yue and Luo, Huan
- Subjects
- *
FINITE element method , *EXTRAPOLATION , *HYSTERESIS , *INTERPOLATION , *STEEL - Abstract
Machine learning-based data-driven constitutive models have shown promise in capturing the behavior of elastoplastic materials. However, training these models is usually computationally expensive and becomes even more time-consuming as the size of the training data set increases. To address this challenge, this paper proposes a novel efficient kernel learning-based constitutive (EKLC) model to learn constitutive relations for elastoplastic materials directly from stress–strain data. The proposed EKLC model, from a neural network (NN) perspective, consists of six layers: input, hysteresis, feature, basis, kernel and output. The hysteresis layer enables the learning of the path-dependent behavior of elastoplastic materials, while the basis layer ensures computational efficiency by allowing for nonlinear mappings between hysteresis neurons and stress. The proposed EKLC model outperforms NN-based models in terms of computational efficiency and interpolation and extrapolation capabilities, as shown by thorough comparisons with numerical results obtained from learning two widely used elastoplastic constitutive models for steel and with experimental data sets. Furthermore, the EKLC model is successfully applied to nonlinear finite element analysis with cyclic plasticity and to learning multi-dimensional stress–strain relationships. Notably, training the proposed EKLC model runs much faster than training NN-based models, with a maximum speedup of about 496,000 times. [ABSTRACT FROM AUTHOR]
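As a heavily simplified stand-in for the kernel-learning ingredient (the EKLC's six-layer structure is not reproduced here), one can regress stress on the current strain plus a short strain-history window, which is the generic way path dependence enters such models; the toy material law below is invented.

```python
# Kernel regression of stress on (strain, strain history) features, as a
# generic sketch of learning a constitutive relation from stress-strain data.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(6)
strain = np.cumsum(rng.normal(0, 1e-3, 2000))           # random loading path
stress = 200e3 * np.tanh(50 * strain) + rng.normal(0, 5, 2000)  # toy law (MPa-ish)

H = 3                                                   # history window length
X = np.column_stack([strain[i:i + len(strain) - H + 1] for i in range(H)])
y = stress[H - 1:]                                      # stress at window end

model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=50.0).fit(X[:1500], y[:1500])
rmse = np.sqrt(np.mean((model.predict(X[1500:]) - y[1500:]) ** 2))
print(f"held-out RMSE: {rmse:.1f}")
```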
- Published
- 2025
- Full Text
- View/download PDF
50. Prior information assisted multi-scale network for precipitation nowcasting.
- Author
-
Song, Dan, Wang, Yu, Li, Wenhui, Liu, Wen, Wei, Zhiqiang, and Liu, An-An
- Subjects
- *
CONVOLUTIONAL neural networks , *RADAR , *EXTRAPOLATION , *DEEP learning , *PREDICTION models , *EVERYDAY life - Abstract
Accurate precipitation nowcasting holds great significance for daily life. In recent years, deep learning networks have demonstrated excellent performance in the field of precipitation nowcasting. However, they do not fully harness important prior information, such as the experience acquired by pre-trained models and the effects caused by terrain. In this paper, we propose a prior-information-assisted multi-scale network for precipitation nowcasting. Firstly, we employ a cross-attention mechanism to model the correlation between terrain elevation and radar echoes, enhancing the feature representation of the input. Subsequently, we introduce a teacher–student network, leveraging the pre-trained model's capability in modeling echo movement as prior information to assist the prediction. Finally, a multi-scale UNet is proposed to cross-fuse large-scale and small-scale features so that the predicted images retain global information and more local details. We conduct precipitation nowcasting tests on real radar echo datasets within the 0–2 h range. Compared with the second-best results (i.e., REMNet (Jing et al., 2022) for Probability of Detection (POD) and RainNet (Ayzel et al., 2020) for Critical Success Index (CSI)), our method improves the POD and CSI by 15.4% and 27.7%, respectively, demonstrating its superiority. • Incorporates terrain as prior information to facilitate precipitation nowcasting. • Leverages the prior information of the pre-trained model to assist predictions. • Proposes a multi-scale UNet to retain detailed convection information. [ABSTRACT FROM AUTHOR]
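A minimal sketch of the cross-attention step with hypothetical shapes: radar-echo features supply the queries and terrain-elevation features the keys and values, so echo representations attend to terrain.

```python
# Cross-attention between radar-echo and terrain features (shapes invented):
# queries come from radar, keys/values from terrain elevation.
import torch

B, N, D = 2, 256, 64                       # batch, tokens (flattened patches), dim
radar_feat = torch.randn(B, N, D)
terrain_feat = torch.randn(B, N, D)

attn = torch.nn.MultiheadAttention(embed_dim=D, num_heads=4, batch_first=True)
fused, _ = attn(query=radar_feat, key=terrain_feat, value=terrain_feat)
print(fused.shape)                         # torch.Size([2, 256, 64])
```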
- Published
- 2025
- Full Text
- View/download PDF