We introduce DrYFiT (drying data fitting tool), a Microsoft Excel freeware tool for modeling thin‐layer drying of foods, which is available at https://drive.google.com/drive/folders/1ouompmNkmdmw1KMTJUnY0t8dJqSJ9iKv?usp=drive%5flink. There are 12 models in the tool, and it can be used without any modeling or programming skills. Time and moisture ratio data are entered, and one of the available models (one at a time) can be selected to describe the drying data. Parameter values, the standard error of each parameter, the p value, and a statement indicating whether the parameter is statistically significant (α = 0.05) are reported. Moreover, R2, adjusted R2 and root mean square error values are calculated for each model. Users can instantaneously observe the experimental data and the model fit on the same graph. A residual plot is given next to this graph. Users can obtain the results of all models applied to their drying data within a couple of minutes. The results of DrYFiT were compared with some popular software programs used for nonlinear regression, and identical values (parameters, standard errors, p values, goodness‐of‐fit statistics) were obtained for 40 datasets. [ABSTRACT FROM AUTHOR]
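As a rough illustration of what such a tool computes, the sketch below fits one common thin-layer model (the Page model, MR = exp(-k t^n), chosen here only as an example) to time/moisture-ratio data with SciPy and reports parameter estimates, standard errors, p-values, R2, adjusted R2 and RMSE. It is not the DrYFiT spreadsheet itself, and the data are made up.

```python
# Minimal sketch (not the DrYFiT implementation): fit the Page drying model
# MR = exp(-k * t**n) and report the same kinds of statistics the tool lists.
import numpy as np
from scipy.optimize import curve_fit
from scipy import stats

t = np.array([0, 30, 60, 90, 120, 180, 240, 300], dtype=float)   # time, min (example data)
mr = np.array([1.0, 0.82, 0.66, 0.52, 0.41, 0.26, 0.16, 0.10])   # moisture ratio

def page(t, k, n):
    return np.exp(-k * t**n)

popt, pcov = curve_fit(page, t, mr, p0=[0.01, 1.0])
se = np.sqrt(np.diag(pcov))                     # standard errors of the parameters
dof = len(t) - len(popt)
tvals = popt / se
pvals = 2 * stats.t.sf(np.abs(tvals), dof)      # p-values (H0: parameter = 0)

resid = mr - page(t, *popt)
ss_res, ss_tot = np.sum(resid**2), np.sum((mr - mr.mean())**2)
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (len(t) - 1) / dof
rmse = np.sqrt(ss_res / len(t))

for name, val, s_, pv in zip(["k", "n"], popt, se, pvals):
    print(f"{name} = {val:.5f}  SE = {s_:.5f}  p = {pv:.4f}  significant: {pv < 0.05}")
print(f"R2 = {r2:.4f}, adj. R2 = {adj_r2:.4f}, RMSE = {rmse:.4f}")
```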
Purpose: To build, train, and assess an artificial neural network (ANN) system for estimating the residual valve rate after endoscopic valve ablation and to compare the results with conventional analysis. Methods: In a retrospective cross-sectional study between June 2010 and December 2020, 144 children with a history of posterior urethral valve (PUV) who underwent endoscopic valve ablation were enrolled. MATLAB software was used to design and train the network in a feed-forward backpropagation error adjustment scheme. Preoperative and postoperative data from 101 patients (70%) (training set) were utilized to assess the impact and relative significance of the variables determining the necessity for repeated ablation. The suitably trained and validated ANN was used to predict repeated ablation in the next 33 patients (22.9%) (test set), whose preoperative data were serially input into the system. To assess system accuracy in forecasting the requirement for repeat ablation, projected values were compared to actual outcomes. The likelihood of predicting the residual valve was calculated using a three-layered backpropagating deep ANN using preoperative and postoperative information. Results: Of 144 operated cases, 33 (22.9%) had residual valves and needed repeated ablation. The ANN accuracy, sensitivity, and specificity for predicting the residual valve were 90.75%, 92.73%, and 73.19%, respectively. Younger age at surgery, hyperechogenicity of the renal parenchyma, presence of vesicoureteral reflux (VUR), and grade of reflux before surgery were among the most significant characteristics affecting postoperative outcome variables and the need for repeated ablation, and were given the highest relative weight by the ANN system. Conclusions: The ANN is an integrated data-gathering tool for analyzing and finding relationships among variables as a complex non-linear statistical model. The results indicate that the ANN is a valuable tool for outcome prediction of the residual valve after endoscopic valve ablation in patients with PUV. [ABSTRACT FROM AUTHOR]
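For readers who want a concrete picture of the workflow, the following Python sketch trains a small feed-forward classifier on a train/test split and derives accuracy, sensitivity and specificity from the confusion matrix. It uses scikit-learn with placeholder data, not the study's MATLAB network or patient records.

```python
# Illustrative sketch only: a small feed-forward network with a 70/30-style split
# and confusion-matrix metrics. Features and labels below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(144, 6))                  # preoperative predictors (placeholder)
y = (rng.random(144) < 0.23).astype(int)       # 1 = residual valve / repeat ablation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
scaler = StandardScaler().fit(X_tr)

clf = MLPClassifier(hidden_layer_sizes=(10, 5), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(scaler.transform(X_te))).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                   # true-positive rate
specificity = tn / (tn + fp)                   # true-negative rate
print(accuracy, sensitivity, specificity)
```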
Spinach (Spinacia oleracea L.), a valuable crop, suffers greatly from salt stress. This study investigates the potential of green silver nanoparticles (Ag-NPs) synthesized using Aloe vera extract to relieve the harmful effects of salt stress on spinach seed germination and growth. The experiment evaluated various seed germination and plant growth characteristics of spinach cultivar Viroflay RZ under five Ag-NP concentrations (0, 20, 40, 80, and 100 ppm) and four salinity levels (0, 50, 100, and 150 mM NaCl) in a controlled laboratory setting. This study measured seed germination percentage, germination rate, relative germination, vigor indices, plant height, root length, shoot and root dry weight, and chlorophyll content. Data analysis revealed that salinity stress significantly inhibited seed germination and all other studied parameters, especially at higher salt concentrations. The impact of green Ag-NPs on these traits varied considerably under salt stress. A complex statistical model showed a non-linear relationship between Ag-NP concentration and its effect, with an optimal concentration potentially alleviating the negative effects of salt stress. The study suggests that pre-treating spinach seeds with green Ag-NPs at an optimized concentration might enhance their tolerance to salt stress, potentially improving germination and growth under saline conditions. This research promotes using eco-friendly nanotechnology to mitigate the detrimental effects of salinity on agricultural productivity. [ABSTRACT FROM AUTHOR]
FALSE alarms, STRUCTURAL health monitoring, MONTE Carlo method, DUFFING equations, NONLINEAR statistical models, NONLINEAR analysis, EIGENFREQUENCIES
Abstract
In recent years, the development of quick and streamlined methods for the detection and localization of structural damage has been achieved by analysing key dynamic parameters before and after significant events or as a result of aging. Many Structural Health Monitoring (SHM) systems rely on the relationship between occurred damage and variations in eigenfrequencies. While it is acknowledged that damage can affect eigenfrequencies, the reverse is not necessarily true, particularly for minor frequency variations. Thus, reducing false positives is essential for the effectiveness of SHM systems. The aim of this paper is to identify scenarios where observed changes in eigenfrequencies are not caused by structural damage, but rather by non-stationary combinations of input and system response (e.g., wind effects, traffic vibrations), or by stochastic variations in mass, damping, and stiffness (e.g., environmental variations). To achieve this, statistical variations of thresholds were established to separate linear non-stationary behaviour from nonlinear structural behaviour. The Duffing oscillator was employed in this study to perform various nonlinear analyses via Monte Carlo simulations. [ABSTRACT FROM AUTHOR]
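A minimal sketch of this kind of study is given below: a Duffing oscillator is simulated repeatedly with randomly perturbed damping and stiffness, the dominant response frequency is identified from each run, and its spread is used as a threshold band for "undamaged" variability. All parameter values are illustrative, not those of the paper.

```python
# Sketch of a Monte Carlo threshold study on a Duffing oscillator
# x'' + 2*zeta*w0*x' + w0^2*x + beta*x^3 = A*cos(w*t); values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def duffing(t, y, zeta, w0, beta, A, w):
    x, v = y
    return [v, -2 * zeta * w0 * v - w0**2 * x - beta * x**3 + A * np.cos(w * t)]

def dominant_frequency(zeta, w0, beta, A=0.5, w=1.2, T=200.0, fs=20.0):
    t_eval = np.arange(0.0, T, 1.0 / fs)
    sol = solve_ivp(duffing, (0.0, T), [0.0, 0.0], t_eval=t_eval,
                    args=(zeta, w0, beta, A, w), rtol=1e-8)
    x = sol.y[0][len(t_eval) // 2:]                    # discard transient
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spec)]

rng = np.random.default_rng(1)
f_id = [dominant_frequency(zeta=0.02 * (1 + 0.1 * rng.standard_normal()),
                           w0=2 * np.pi * (1 + 0.02 * rng.standard_normal()),
                           beta=50.0) for _ in range(100)]
threshold = np.percentile(f_id, [2.5, 97.5])           # band of "undamaged" variability
print(threshold)
```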
Boudreault, Jérémie, Campagna, Céline, and Chebana, Fateh
Subjects
AIR pollution, NONLINEAR statistical models, MACHINE learning, ATMOSPHERIC temperature, WEATHER
Abstract
Extreme heat events have significant health impacts that need to be adequately quantified in the context of climate change. Traditionally, heat-health association methods have relied on statistical models using a single air temperature index, without considering other heat-related variables that may influence the relationship and their potentially complex interactions. This study aims to introduce and compare different machine learning (ML) models, which naturally consider interactions between predictors and non-linearities, to re-examine the importance of temperature, weather and air pollution predictors in modeling the heat-mortality relationship. ML approaches based on tree ensembles and neural networks, as well as non-linear statistical models, were used to model the heat-mortality relationship in the two most populated metropolitan areas of the province of Quebec, Canada. The models were calibrated using a comprehensive database of heat-related predictors including various lagged temperature indices, temperature variations, meteorological and air pollution variables. Performance was evaluated based on out-of-sample summer mortality predictions. For the two studied regions, models relying only on lagged temperature indices performed better, or equally well, than models considering more heat-related predictors such as temperature variations, weather and air pollution variables. The temperature index with the best performance differed by region, but both mean temperature and humidex were among the best indices. In terms of modeling approaches, non-linear statistical models were as competent as more advanced ML models for predicting out-of-sample summer mortality. This research validated the current use of non-linear statistical models with the appropriate lagged temperature index to model the heat-mortality relationship. Although ML models have not improved the performance of all-cause mortality modeling, these approaches should continue to be explored, particularly for other health effects that may be more directly linked to heat exposure and, in the future, when more data become available. [ABSTRACT FROM AUTHOR]
In this paper, we compared the linear and nonlinear motion prediction models of a long combination vehicle (LCV). We designed a nonlinear model predictive control (NMPC) for trajectory-following and off-tracking minimisation of the LCV. The prediction model used allows for coupled longitudinal and lateral dynamics, together with the possibility of combined steering, propulsion and braking control of those vehicles over long prediction horizons and across the whole range of forward velocities. For LCVs, where the vehicle model is highly nonlinear, we showed that the control actions calculated by a linear time-varying model predictive control (LTV-MPC) are relatively close to those obtained by the NMPC if the guess linearisation trajectory is sufficiently close to the nonlinear solution, in contrast to linearising for specific operating conditions, which limits the generality of the designed function. We discussed how those guess trajectories can be obtained, allowing off-line, fixed, time-varying model linearisation that is beneficial for real-time implementation of MPC in LCVs with long prediction horizons. The long prediction horizons are necessary for motion planning and trajectory-following of LCVs to maintain stability and tracking quality, e.g. by optimally reducing the speed prior to reaching a curve, and by generating control actions within the actuator limits. [ABSTRACT FROM AUTHOR]
The article discusses the implementation of independent component analysis (ICA) in the brain and its potential for modeling brain computations. The author suggests that while modeling the brain on statistical and objective levels is promising, modeling the algorithmic level is more challenging due to the difficulty of measuring single neurons and their learning. The author also discusses the interpretation of nonlinear ICA as defining an exponential model and the use of self-supervised learning methods in machine learning. Overall, the article highlights the complexities and challenges of nonlinear ICA as a statistical model and its potential applications in understanding brain computations. [Extracted from the article]
In this paper, we present a novel nonlinear model predictive control (NMPC) algorithm based on the Laguerre function for dynamic positioning ships to solve the problems of input saturation, unknown time-varying disturbances, and heavy computation. The nonlinear model of a dynamic positioning ship is presented as a linear model, transformed from a standard affine nonlinear state-space model by precise feedback linearization. The environmental disturbance is overcome using an integrator. The time cost of the proposed nonlinear control algorithm is decreased by introducing the Laguerre function to describe the feedback-linearization system input increments. The Laguerre function reduces the matrix dimensions of the nonlinear optimization problem. The simulation results for a DP supply vessel showed that the novel algorithm maintained the effective control performance of the original nonlinear model predictive control algorithm and had a reduced computation load to satisfy the requirements of real-time operation. [ABSTRACT FROM AUTHOR]
A stochastic model predictive control (MPC) framework is presented in this paper for nonlinear affine systems with stability and feasibility guarantee. We first introduce the concept of stochastic control Lyapunov–Barrier function (CLBF) and provide a method to construct CLBF by combining an unconstrained control Lyapunov function (CLF) and control barrier functions. The unconstrained CLF is obtained from its corresponding semi‐linear system through dynamic feedback linearization. Based on the constructed CLBF, we utilize sampled‐data MPC framework to deal with states and inputs constraints, and to analyze stability of closed‐loop systems. Moreover, event‐triggering mechanisms are integrated into MPC framework to improve performance during sampling intervals. The proposed CLBF based stochastic MPC is validated via an obstacle avoidance example. [ABSTRACT FROM AUTHOR]
We construct a new class of nonlinear coherent states for the isotonic oscillator by replacing the factorial in the coefficients $z^{n}/n!$ of the canonical coherent states by the generalized factorial $x_{n}^{\gamma}! = x_{1}^{\gamma}\,x_{2}^{\gamma}\cdots x_{n}^{\gamma}$ with $x_{0}^{\gamma} = 0$, where $(x_{n}^{\gamma})$ is a sequence of positive numbers and $\gamma$ is a positive real parameter. This also leads to the construction of a Bargmann-type integral transform, which allows us to find some integral transforms for orthogonal polynomials. The statistics of our coherent states are also considered through the calculation of the so-called Mandel parameter. The squeezing phenomenon is also discussed. [ABSTRACT FROM AUTHOR]
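For orientation, the generic form of such nonlinear coherent states is sketched below, written in the square-root normalization convention commonly used for this construction (some authors write the coefficient without the square root; the paper's own convention, and its specific sequence x_n^γ for the isotonic oscillator, should be taken from the paper itself).

```latex
% Generic nonlinear-coherent-state construction (sketch, standard conventions assumed)
\[
  x_n^{\gamma}! \;=\; x_1^{\gamma}\, x_2^{\gamma}\cdots x_n^{\gamma},
  \qquad x_0^{\gamma}! \;:=\; 1,
\]
\[
  \lvert z,\gamma\rangle \;=\; \mathcal{N}_{\gamma}(\lvert z\rvert^{2})^{-1/2}
  \sum_{n\ge 0}\frac{z^{n}}{\sqrt{x_n^{\gamma}!}}\,\lvert n\rangle,
  \qquad
  \mathcal{N}_{\gamma}(\lvert z\rvert^{2})=\sum_{n\ge 0}\frac{\lvert z\rvert^{2n}}{x_n^{\gamma}!},
\]
% with photon statistics characterized through the Mandel parameter
\[
  Q \;=\; \frac{\langle \hat{n}^{2}\rangle-\langle \hat{n}\rangle^{2}}
               {\langle \hat{n}\rangle}\;-\;1,
\]
% Q<0, Q=0, Q>0 corresponding to sub-Poissonian, Poissonian and super-Poissonian statistics.
```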
Accurately determining hydrodynamic force statistics is crucial for designing offshore engineering structures, including offshore wind turbine foundations, due to the significant impact of nonlinear wave–structure interactions. However, obtaining precise load statistics often involves computationally intensive simulations. Furthermore, the estimation of statistics using current practices is subject to ongoing discussion due to the inherent uncertainty involved. To address these challenges, we present a novel machine learning framework that leverages data‐driven surrogate modeling to predict hydrodynamic loads on monopile foundations while reducing reliance on costly simulations and facilitate the load statistics reconstruction. The primary advantage of our approach is the significant reduction in evaluation time compared to traditional modeling methods. The novelty of our framework lies in its efficient construction of the surrogate model, utilizing the Gaussian process regression machine learning technique and a Bayesian active learning method to sequentially sample wave episodes that contribute to accurate predictions of extreme hydrodynamic forces. Additionally, a spectrum transfer technique combines computational fluid dynamics (CFD) results from both quiescent and extreme waves, further reducing data requirements. This study focuses on reducing the dimensionality of stochastic irregular wave episodes and their associated hydrodynamic force time series. Although the dimensionality reduction is linear, Gaussian process regression successfully captures high‐order correlations. Furthermore, our framework incorporates built‐in uncertainty quantification capabilities, facilitating efficient parameter sampling using traditional CFD tools. This paper provides comprehensive implementation details and demonstrates the effectiveness of our approach in delivering reliable statistics for hydrodynamic loads while overcoming the computational cost constraints associated with classical modeling methods. [ABSTRACT FROM AUTHOR]
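The core surrogate-plus-active-learning loop can be sketched in a few lines: fit a Gaussian process to a handful of evaluations of an expensive load function, then repeatedly add the candidate with the largest predictive standard deviation. The stand-in function and parameters below are assumptions for illustration, not the paper's CFD setup.

```python
# Condensed sketch of GP surrogate modeling with Bayesian active learning.
# "expensive_load" stands in for a costly CFD force evaluation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def expensive_load(x):                       # placeholder for a CFD evaluation
    return np.sin(3 * x) + 0.5 * x**2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(5, 1))          # initial wave-episode parameters
y = expensive_load(X).ravel()
candidates = np.linspace(-2, 2, 400).reshape(-1, 1)

gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-4), normalize_y=True)
for _ in range(15):                          # active-learning loop
    gp.fit(X, y)
    mu, std = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)]       # query the most uncertain episode
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_load(x_new))

print(gp.kernel_)                            # fitted surrogate hyperparameters
```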
Sen, Sweta, Nayak, Narayan Chandra, and Mohanty, William Kumar
Subjects
NONLINEAR statistical models, TROPICAL cyclones, LINEAR statistical models, CYCLONE forecasting, CYCLONES, SEVERE storms, BOX-Jenkins forecasting
Abstract
Forecasting tropical cyclones with climate and physical variability and observed cyclonic disturbances has been developed successfully over the years for all the ocean basins and is still one of the priorities for disaster risk reduction policymaking. This study attempts to forecast seasonal cyclonic disturbances and severe cyclonic storms over the Bay of Bengal, where about 80% of the tropical cyclones of the North Indian Ocean are formed. We have used three time-series models, namely, the seasonal autoregressive integrated moving average with exogenous variables (SARIMAX) model, the artificial neural network-nonlinear autoregressive with exogenous variables (ANN-NARX) model, and the hybrid model. The basic purpose of considering three different models is to improve the forecasting accuracy of tropical cyclones. We have shown that the intensification rate of the severe cyclonic storms over the Bay of Bengal has been significant and increasing over the years. Results show that the ANN-NARX model with sea surface temperature and near-surface wind speed as predictors is the best-performing model for long-term forecasting of cyclonic disturbances. Hence, the distribution of cyclonic disturbances is non-linear. The correlations between observed and predicted occurrences are 0.80 and 0.85 for cyclonic disturbances and severe cyclonic storms, respectively, corroborating, by and large, the forecasting accuracies of some previous studies. The forecasting of cyclonic disturbances indicates that they will vary from 5 to 13 annually and there will be, on average, one severe cyclonic storm per year. The likelihood of occurrence of severe cyclonic storms is most significant in the post-monsoon season. This forecast till 2050 would help the scientific community and policymakers significantly for applications and good disaster risk governance. [ABSTRACT FROM AUTHOR]
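As a pointer to how the SARIMAX component of such a comparison is typically set up, the sketch below fits a seasonal ARIMA model with exogenous predictors (sea surface temperature and wind speed stand-ins) using statsmodels; the orders and data are illustrative only.

```python
# Minimal SARIMAX sketch with synthetic seasonal count-like data and exogenous predictors.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 160                                            # e.g. 40 years x 4 seasons
exog = pd.DataFrame({"sst": 28 + rng.normal(0, 0.5, n),
                     "wind": 5 + rng.normal(0, 1.0, n)})
y = 6 + 0.8 * (exog["sst"] - 28) + rng.poisson(2, n)   # synthetic disturbance counts

model = SARIMAX(y, exog=exog, order=(1, 0, 1), seasonal_order=(1, 0, 1, 4))
fit = model.fit(disp=False)
print(fit.summary().tables[1])

# Forecast the next 8 seasons, given assumed future exogenous values
future_exog = exog.iloc[-8:].reset_index(drop=True)
print(fit.get_forecast(steps=8, exog=future_exog).predicted_mean)
```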
Due to the effects of global climate change and altered human land-use patterns, typical shrub encroachment in grasslands has become one of the most prominent ecological problems in grassland ecosystems. Shrub coverage can quantitatively indicate the degree of shrub encroachment in grasslands; therefore, real-time and accurate monitoring of shrub coverage in large areas has important scientific significance for the protection and restoration of grassland ecosystems. As shrub-encroached grasslands (SEGs) are a type of grassland with continuous and alternating growth of shrubs and grasses, estimating shrub coverage is different from estimating vegetation coverage. It is necessary to consider not only the differences in the characteristics of vegetation and non-vegetation variables but also the differences in the characteristics of shrubs and herbs, which can make the estimation challenging. There is a scientific need to estimate shrub coverage in SEGs to improve our understanding of the process of shrub encroachment in grasslands. This article discusses the spectral differences between herbs and shrubs and further points out the possibility of distinguishing between herbs and shrubs. We use Sentinel-2 and Gao Fen-6 (GF-6) Wide Field of View (WFV) as data sources to build a linear spectral mixture model and a random forest (RF) model via space–air–ground collaboration and investigate the effectiveness of different data sources, features and methods in estimating shrub coverage in SEGs, which provides promising ways to monitor the dynamics of SEGs. The results showed that (1) the linear spectral mixture model can hardly distinguish between shrubs and herbs from medium-resolution images in the SEG. (2) The RF model showed high estimation accuracy for shrub coverage in the SEG; the estimation accuracy (R2) of the Sentinel-2 image was 0.81, and the root-mean-square error (RMSE) was 0.03. The R2 of the GF6-WFV image was 0.72, and the RMSE was 0.03. (3) Texture features introduced in the RF models are helpful for estimating shrub coverage in SEGs. (4) Whether the linear spectral mixture model or the RF model is employed, the Sentinel-2 image gives a better estimation than the GF6-WFV image; thus, these data have great potential to monitor shrub encroachment in grasslands. This research aims to provide a scientific basis and reference for remote sensing-based monitoring of SEGs. [ABSTRACT FROM AUTHOR]
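A stripped-down version of the random-forest coverage model looks as follows; the band and texture features and the coverage target are synthetic placeholders rather than the Sentinel-2/GF6-WFV data used in the study.

```python
# Sketch of a random-forest shrub-coverage model with hypothetical features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n = 300
bands = rng.uniform(0.02, 0.45, size=(n, 8))            # spectral band reflectances
texture = rng.uniform(0.0, 1.0, size=(n, 4))            # GLCM-style texture features
X = np.hstack([bands, texture])
coverage = np.clip(0.4 * bands[:, 6] - 0.2 * bands[:, 2] + 0.1 * texture[:, 0]
                   + rng.normal(0, 0.02, n), 0, 1)      # synthetic shrub coverage

X_tr, X_te, y_tr, y_te = train_test_split(X, coverage, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("R2 =", r2_score(y_te, pred),
      "RMSE =", mean_squared_error(y_te, pred) ** 0.5)
```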
Photonic Orbital Angular Momentum (OAM) is becoming a pertinent quantum variable for atom-light interaction, in particular for non-linear interaction, which leads to photon entanglement and OAM-entanglement. With two 4-level atomic schemes, we show that Four Wave Mixing addressed by vortex beams leads to very different OAM-entanglement, especially for large OAM values. [ABSTRACT FROM AUTHOR]
Nguyen, Dac Cong Tai, Benameur, Said, Mignotte, Max, and Lavoie, Frédéric
Abstract
Three-dimensional (3D) reconstruction of lower limbs is of great interest in surgical planning, computer-assisted surgery, and biomechanical applications. The use of 3D imaging modalities such as computed tomography (CT) scans and magnetic resonance imaging (MRI) has limitations such as high radiation and expense. Therefore, three-dimensional reconstruction methods from biplanar X-ray images represent an attractive alternative. In this paper, we present a new unsupervised 3D reconstruction method for the patella, talus, and pelvis using calibrated biplanar (45- and 135-degree oblique) radiographic images and prior information on the geometric/anatomical structure of these complex bones. A multidimensional scaling (MDS)-based nonlinear dimensionality reduction algorithm is applied to exploit this prior geometric/anatomical information; it captures the relevant deformations present in the training set. Our method is based on a hybrid likelihood using regions and contours. The edge-based notion represents the relation between the external contours of the bone projections and an edge potential field estimated on the radiographic images. The region-based notion is the non-overlapping ratio between segmented and projected bone regions of interest (RoIs). Our automatic 3D reconstruction model entails stochastically minimizing an energy function, allowing an estimation of the deformation parameters of the bone shape. This 3D reconstruction method has been successfully tested on 13 biplanar radiographic image pairs, yielding very promising results. [ABSTRACT FROM AUTHOR]
Khather, Salam Ibrahim, Ibrahim, Muhammed A., and Abdullah, Abdullah I.
Subjects
NONLINEAR analysis, LINEAR control systems, PREDICTION models, ENERGY conservation, CONSERVATION laws (Physics), NONLINEAR statistical models
Abstract
Nonlinear model predictive control (NMPC) has been recognized as an influential control strategy for intricate dynamical systems due to its superior performance over conventional linear control systems. The complexity associated with nonlinear dynamics is a recurring issue in a multitude of engineering applications, rendering the development of nonlinear models a challenging endeavor. The construction of such models, either through correlating input and output data or applying fundamental energy conservation laws, presents considerable difficulties. The absence of an effective model suitable for fundamental nonlinear processes is a marked deficiency, one that NMPCs are poised to address. NMPCs demonstrate a pronounced advantage over linear MPCs, particularly in managing the complexities and nonlinearities inherent in various systems. They exhibit efficacy in controlling nonlinear dynamics, including input/output constraints, objective functions, and computationally demanding optimization problems integral to real-time applications in process industries, power systems, and autonomous vehicular systems. This capability has prompted extensive research into nonlinear dynamics, thereby diminishing the disparity between the analysis of linear and nonlinear MPCs. This review provides a thorough examination of NMPCs, encompassing the fundamental principle, mathematical formulation, and various algorithms associated with NMPCs. A concise overview of NMPC applications, along with the challenges they pose, is also discussed. [ABSTRACT FROM AUTHOR]
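For reference, the finite-horizon optimization that an NMPC controller solves at every sampling instant is usually written as follows (a generic textbook formulation, not one specific to this review):

```latex
\[
\begin{aligned}
  \min_{u_0,\dots,u_{N-1}} \quad
     & \sum_{k=0}^{N-1} \ell\bigl(x_k,u_k\bigr) \;+\; V_f\bigl(x_N\bigr) \\
  \text{s.t.} \quad
     & x_{k+1} = f\bigl(x_k,u_k\bigr), \qquad x_0 = x(t),\\
     & x_k \in \mathcal{X},\quad u_k \in \mathcal{U},\quad x_N \in \mathcal{X}_f,
\end{aligned}
\]
```

Here f is the (generally nonlinear) prediction model, ℓ the stage cost, V_f and X_f the terminal cost and terminal set, and only the first optimized input is applied before the horizon is shifted forward and the problem is solved again.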
Seepage prediction is a vital part of the dam safety monitoring system. Traditional statistical models ignore the nonlinear characteristics of the measured variables, resulting in poor accuracy and stability. In this study, an optimized neural network model is constructed to predict the seepage extent of hydropower station dams. An improved gradient-based optimizer (IGBO) is proposed to increase the accuracy and reliability of extreme learning machine (ELM) model predictions. The IGBO introduces an initialization method with elite opposition-based learning to improve population diversity. A crossover operator and a nonlinear parameter are used in the IGBO to enhance the local search ability and the probability of avoiding local optima. The performance of the IGBO-optimized ELM network (IGBO-ELM) was evaluated on 12 datasets. In addition, comparative experiments with actual monitoring data of concrete dams show that IGBO-ELM has stronger generalization performance and accuracy than the other four optimization models. [ABSTRACT FROM AUTHOR]
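To make the ELM part concrete, here is a minimal extreme learning machine regressor: hidden-layer weights are drawn at random and only the output weights are solved in closed form. The IGBO metaheuristic that the paper wraps around this learner is not reproduced, and the data are synthetic.

```python
# Minimal extreme learning machine (ELM) regressor; the IGBO tuning of the random
# hidden weights/biases described in the paper is not reproduced here.
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))   # input weights
        self.b = self.rng.normal(size=self.n_hidden)                  # hidden biases
        H = np.tanh(X @ self.W + self.b)                              # hidden outputs
        self.beta = np.linalg.pinv(H) @ y                             # output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Usage on synthetic seepage-like data (placeholder for dam monitoring series)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + rng.normal(0, 0.05, 200)
model = ELMRegressor(n_hidden=80).fit(X[:150], y[:150])
rmse = np.sqrt(np.mean((model.predict(X[150:]) - y[150:]) ** 2))
print("test RMSE:", rmse)
```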
• NSDAHU is proposed to solve nonlinear structural dynamics problems with hybrid uncertainties.
• NSDAHU easily transmits response distribution of nonlinear structural dynamic systems directly using control equations.
• NSDAHU solves pure interval, pure random, and linear structural dynamics problems.
• Three numerical examples verify the applicability, accuracy, and efficiency of NSDAHU.

The physical systems of nonlinear structural dynamics are usually expressed mathematically as ordinary differential equations. When hybrid uncertainties are in the system, it is necessary to understand the effects of uncertainties in the system response to accurately predict its behavior using mathematical models. To address the hybrid uncertainty of random and interval parameters often encountered in engineering applications, we propose the nonlinear structural dynamics analysis with hybrid uncertainty (NSDAHU) method. The expressions for calculating the statistical characteristics (i.e., mean and variance) of nonlinear structural dynamic responses with hybrid uncertainties are derived using random interval moment and random interval perturbation methods. The applicability, accuracy, and efficiency of the NSDAHU method are verified using Monte Carlo simulation method (MCSM). The NSDAHU method can not only solve hybrid random interval problems but also solve single interval, single random, and linear structural dynamics problems. [ABSTRACT FROM AUTHOR]
Given the highly nonlinear and strongly constrained nature of the electro-hydraulic system, we proposed an observer-based approximate nonlinear model predictive controller (ANMPC) for the trajectory tracking control of robotic excavators. A nonlinear non-affine state space equation with identified parameters is employed to describe the dynamics of the electro-hydraulic system. Then, to mitigate the plant-model mismatch caused by the first-order linearization, an approximate affine nonlinear state space model is utilized to represent the explicit relationship between the output and input and an ANMPC is designed based on the approximate nonlinear model. Meanwhile, the Extended Kalman Filter was introduced for state observation to deal with the unmeasurable velocity information and heavy measurement noises. Comparative experiments are conducted on a 1.7-ton hydraulic robotic excavator, where ANMPC and linear model predictive control are used to track a typical excavation trajectory. The experimental results provide evidence of convincing trajectory tracking performance. [ABSTRACT FROM AUTHOR]
In this paper, the problem of a fully actuated hexarotor performing a physical interaction with the environment through a rigidly attached tool is considered. A nonlinear model predictive impedance control (NMPIC) method is proposed to achieve the goal in which the controller is able to simultaneously handle the constraints and maintain the compliant behavior. The design of NMPIC is the combination of a nonlinear model predictive control and impedance control based on the dynamics of the system. A disturbance observer is exploited to estimate the external wrench and then provide compensation for the model which was employed in the controller. Moreover, a weight adaptive strategy is proposed to perform the online tuning of the weighting matrix of the cost function within the optimal problem of NMPIC to improve the performance and stability. The effectiveness and advantages of the proposed method are validated by several simulations in different scenarios compared with the general impedance controller. The results also indicate that the proposed method opens a novel way for interaction force regulation. [ABSTRACT FROM AUTHOR]
Antonesi, Gabriel, Cioara, Tudor, Toderean, Liana, Anghel, Ionut, and De Mulder, Chaim
Subjects
MULTILAYER perceptrons, ELECTRIC power consumption, DEMAND forecasting, MACHINE learning, CITIES & towns, NONLINEAR statistical models, ENERGY consumption
Abstract
The shift towards renewable energy integration into smart grids has led to complex management processes, which require finer-grained energy and heat generation/demand forecasting while considering data from monitoring devices and the integration of smaller multi-energy sub-systems at the community, district, or building level. However, energy prediction is challenging due to the high variability in the electrical and thermal energy demands of building occupants, the heterogeneous characteristics of the energy assets or buildings in a district, and the length of the forecasting horizon. In this paper, we define a data-driven machine-learning pipeline to predict the electricity and thermal consumption of buildings and energy assets from a city district in 24 h intervals. The pipeline's steps cover sensor data processing and model integration, data enrichment and feature engineering, and multilayer perceptron model training. To address some of the drawbacks of the multilayer perceptron model, such as a slow convergence rate and the risk of overfitting, and to ensure a lower error in the energy prediction process, a feature engineering technique was employed. We incorporated weather data features and interaction features derived from fusing the energy data with statistical models to capture the nonlinear patterns of the electrical and heat demands. The proposed approach was successfully validated in a real-world environment, a city district in Gent, Belgium. It featured good prediction results for electricity and heat production and consumption of various assets without considering their physical characteristics, making it viable and easily applicable in broader urban areas. The evaluation of energy prediction accuracy yielded good results, with a Mean Absolute Error (MAE) falling within the range of 0.003 to 3.27, and a Mean Absolute Scaled Error (MASE) ranging from 7 × 10⁻⁵ to 2.57 × 10⁻³. [ABSTRACT FROM AUTHOR]
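One step of such a pipeline can be sketched as below: scaling plus a multilayer perceptron regressor trained on weather and interaction features, evaluated with MAE and MASE on a held-out day. Everything here (features, horizon, data) is a simplified assumption, not the Gent district pipeline.

```python
# Sketch of one forecasting step: scaler + MLP regressor, evaluated with MAE and MASE.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 24 * 120                                                  # 120 days of hourly data
hour = np.arange(n) % 24
temp = 10 + 8 * np.sin(2 * np.pi * np.arange(n) / (24 * 365)) + rng.normal(0, 1, n)
X = np.column_stack([np.sin(2*np.pi*hour/24), np.cos(2*np.pi*hour/24),
                     temp, temp * (hour > 17)])               # weather + interaction features
y = 50 + 20*np.sin(2*np.pi*hour/24) - 0.8*temp + rng.normal(0, 2, n)   # consumption (kWh)

split = n - 24                                                # hold out the last 24 h
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000,
                                   random_state=0))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])

mae = np.mean(np.abs(y[split:] - pred))
naive_mae = np.mean(np.abs(np.diff(y[:split])))               # in-sample naive forecast error
mase = mae / naive_mae                                        # mean absolute scaled error
print("MAE:", mae, "MASE:", mase)
```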
Insight problems are likely to trigger an initial, incorrect mental representation, which needs to be restructured in order to find the solution. Despite the widespread theoretical assumption that this restructuring process happens suddenly, leading to the typical "Aha!" experience, the evidence is inconclusive. Among the reasons for this lack of clarity is that many measures of insight rely solely on the solvers' subjective experience of the solution process. In our previous paper, we used matchstick arithmetic problems to demonstrate that it is possible to objectively trace problem-solving processes by combining eye movements with new analytical and statistical approaches. Specifically, we divided the problem-solving process into ten (relative) temporal phases to better capture possible small changes in problem representation. Here, we go a step further to demonstrate that classical statistical procedures, such as ANOVA, cannot capture sudden representational change processes, which are typical for insight problems. Only nonlinear statistical models, such as generalized additive (mixed) models (GAMs) and change points analysis, correctly identified the abrupt representational change. Additionally, we demonstrate that explicit hints reorient participants' focus in a qualitatively different manner, changing the dynamics of restructuring in insight problem solving. While insight problems may indeed require a sudden restructuring of the initial mental representation, more sophisticated analytical and statistical approaches are necessary to uncover their true nature. [ABSTRACT FROM AUTHOR]
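To illustrate the change-point idea in a self-contained way (without reproducing the paper's GAM/change-point pipeline), the sketch below finds the single split of a ten-phase eye-movement measure that minimizes the pooled squared error around the two segment means; the signal is synthetic.

```python
# Self-contained single change-point sketch over ten relative temporal phases.
import numpy as np

rng = np.random.default_rng(0)
phases = np.arange(1, 11)
# synthetic fixation proportions: flat until phase 7, then an abrupt jump
signal = np.r_[rng.normal(0.20, 0.03, 6), rng.normal(0.45, 0.03, 4)]

def best_change_point(x):
    costs = []
    for k in range(2, len(x) - 1):                 # candidate split after index k-1
        left, right = x[:k], x[k:]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        costs.append((sse, k))
    return min(costs)[1]

k = best_change_point(signal)
print("abrupt representational change detected between phases",
      phases[k - 1], "and", phases[k])
```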
The identification of nonlinear terms existing in the dynamic model of real-world mechanical systems such as robotic manipulators is a challenging modeling problem. The main aim of this research is not only to identify the unknown parameters of the nonlinear terms but also to verify their existence in the model. Generally, if the structure of the model is provided, the parameters of the nonlinear terms can be identified using different numerical approaches or evolutionary algorithms. However, finding a non-zero coefficient does not guarantee the existence of the nonlinear term or vice versa. Therefore, in this study, a meticulous investigation and statistical verification are carried out to ensure the reliability of the identification process. First, the simulation data are generated using the white-box model of a direct current motor that includes some of the nonlinear terms. Second, the particle swarm optimization (PSO) algorithm is applied to identify the unknown parameters of the model among many possible configurations. Then, to evaluate the results of the algorithm, statistical hypothesis and confidence interval tests are implemented. Finally, the reliability of the PSO algorithm is investigated using experimental data acquired from the UR5 manipulator. To compare the results of the PSO algorithm, the nonlinear least squares errors (NLSE) estimation algorithm is applied to identify the unknown parameters of the nonlinear models. The result shows that the PSO algorithm has higher identification accuracy than the NLSE estimation algorithm, and the model with identified parameters using the PSO algorithm accurately calculates the output torques of the joints of the manipulator. [ABSTRACT FROM AUTHOR]
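A compact PSO sketch of the identification step is shown below: the swarm searches for the coefficients of a nonlinear friction-type torque model so that simulated and "measured" torques agree in a least-squares sense. The model form and data are illustrative, not the DC-motor or UR5 models of the paper.

```python
# Minimal particle swarm optimization for nonlinear parameter identification
# on a synthetic friction-type torque model.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([0.8, 0.15, 0.05])                   # viscous, Coulomb, quadratic terms
vel = rng.uniform(-2, 2, 300)
torque = (w_true[0]*vel + w_true[1]*np.sign(vel) + w_true[2]*vel*np.abs(vel)
          + rng.normal(0, 0.01, vel.size))

def sse(theta):
    pred = theta[0]*vel + theta[1]*np.sign(vel) + theta[2]*vel*np.abs(vel)
    return np.sum((torque - pred)**2)

n_particles, dim, iters = 30, 3, 200
pos = rng.uniform(-1, 1, (n_particles, dim))
vel_pso = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([sse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel_pso = 0.7*vel_pso + 1.5*r1*(pbest - pos) + 1.5*r2*(gbest - pos)
    pos += vel_pso
    vals = np.array([sse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("identified parameters:", gbest)                 # close to w_true for this synthetic data
```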
Different from mainstream nonlinear multivariate statistical process monitoring approaches, which usually implement offline feature extraction for a given dataset sampled from the normal operating condition, the proposed method analyzes, in a timely manner, the inconsistency inherent in each specific online monitored sample of current interest so that a novel monitoring statistic called the locality constrained index (LCI) can be simultaneously calculated. By taking advantage of an explicit nonlinear mapping (ENM), the sampled data are first transformed into a higher-dimensional space so as to explicitly reflect the inherent nonlinearity between measured variables. With the online monitored sample involved in designing the corresponding objective function, the calculation of the LCI is then targeted at revealing deviations within the neighbourhood. In comparisons with counterpart methods, the proposed ENM-LCI-based method demonstrated salient superiority and consistent effectiveness in nonlinear process monitoring. [ABSTRACT FROM AUTHOR]
Ma, Jiangtao, Song, Yan, Niu, Yin, and Dong, Yuying
Subjects
*PREDICTIVE control systems, *DECEPTION, *SINGULAR value decomposition, *FUZZY systems, *DYNAMIC models, *NONLINEAR statistical models, *PSYCHOLOGICAL feedback
Abstract
This paper is concerned with the security-based fuzzy model predictive control (FMPC) problem for a class of discrete-time Takagi-Sugeno (T-S) fuzzy systems with deception attacks on the measured outputs. In view of the unmeasurable system states, the nonlinearity of the system and the destructiveness of deception attacks, dynamic output-feedback control in the framework of FMPC is adopted; meanwhile, the worst-case optimization problem over the infinite moving horizon is formulated for performance analysis and control synthesis. By means of the quadratic function approach and the singular value decomposition technique, the non-convexity caused by couplings between decision variables is handled, and conditions are derived that guarantee satisfaction of the terminal constraint set. In addition, to mitigate the damage of attacks to recursive feasibility, an inequality analysis technique is applied with the aid of specially introduced scalars. Based on these results, an auxiliary optimization problem with guaranteed solvability is put forward to find the desired controllers, and sufficient conditions are obtained to guarantee that the underlying system subject to deception attacks under the proposed FMPC-based controllers is mean-square secure in the H₂ sense. Finally, an illustrative example is used to demonstrate the validity of the proposed methods. [ABSTRACT FROM AUTHOR]
The article is aimed at the mathematical and optimization modeling of technological processes of surface treatments, specifically the zincing process. In surface engineering, it is necessary to eliminate the risk that the resulting product quality will not be in line with the reliability requirements or needs of customers. To date, a number of research studies deal with the applications of mathematical modeling and optimization methods to control technological processes and eliminate uncertainties in the technological response variables. The situation is somewhat different with the acid zinc plating process, where such studies are notably lacking. This article reacts to specific requirements from practice for the prescribed thickness and quality of the zinc layer deposited in the acid electrolyte, which stimulated our interest in creating a statistical nonlinear model predicting the thickness of the resulting zinc coating (ZC). The determination of optimal process conditions for acid galvanizing is a complex problem; therefore, we propose an effective solving strategy based on (i) an experiment performed using the design of experiments (DOE) approach; (ii) exploratory and confirmatory statistical analysis of the experimentally obtained data; (iii) nonlinear regression model development; and (iv) implementation of nonlinear programming (NLP) methods by means of MATLAB toolboxes. The main goal is achieved: a regression model for eight input variables, including their interactions, is developed (the coefficient of determination reaches R2 = 0.959403), and the optimal values of the factors acting during the zincing process to achieve the maximum thickness of the resulting protective zinc layer (the achieved optimum value th* = 12.7036 μm) are determined. [ABSTRACT FROM AUTHOR]
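A Python analogue of this workflow (the authors worked in MATLAB) is sketched below: a second-order polynomial response surface is fit to DOE data and then maximized over the coded factor ranges with a bounded optimizer. Factors, ranges and data are placeholders, not the paper's dataset.

```python
# Illustrative response-surface fit and constrained maximization of predicted thickness.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, n_factors = 60, 3                                   # e.g. current density, time, temperature
X = rng.uniform(-1, 1, size=(n, n_factors))            # coded factor levels from the DOE
th = (10 + 1.5*X[:, 0] + 0.8*X[:, 1] - 1.2*X[:, 0]**2
      + 0.6*X[:, 0]*X[:, 1] + rng.normal(0, 0.1, n))   # synthetic coating thickness (um)

poly = PolynomialFeatures(degree=2, include_bias=False)
reg = LinearRegression().fit(poly.fit_transform(X), th)
print("R2 =", reg.score(poly.transform(X), th))

def neg_pred(x):                                       # negative predicted thickness
    return -reg.predict(poly.transform(x.reshape(1, -1)))[0]

res = minimize(neg_pred, x0=np.zeros(n_factors),
               bounds=[(-1, 1)] * n_factors, method="L-BFGS-B")
print("optimal coded factor settings:", res.x, "predicted max thickness:", -res.fun)
```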
Perovskite materials have a variety of crystal structures, and the properties of crystalline materials are greatly influenced by geometric information such as the space group, crystal system, and lattice constant. This information has traditionally been obtained from density functional theory (DFT) calculations and from fitting experimental X-ray diffraction (XRD) curves. Neither technique can be used to identify materials at industrial scale, since both require expensive equipment and are time-consuming. Machine learning (ML), which is based on big data statistics and nonlinear modeling, has advanced significantly in recent years and is now capable of swiftly and reliably predicting the structures of materials with known chemical ratios based on a few key material-specific factors. A dataset encompassing 1647 perovskite compounds in seven crystal systems was obtained from the Materials Project database for this study, which used the ABX3 perovskite system as its research object. A descriptor called the bond-valence vector sum (BVVS) is presented to describe the intricate geometry of perovskites in addition to information on the usual chemical composition of the elements. Additionally, a model for the automatic identification of perovskite structures was built through a comparison of various ML techniques. It is possible to identify the space group and crystal system using just a small dataset of 10 feature descriptors. The highest accuracies are 0.955 and 0.974, and the highest correlation coefficient (R2) value for the lattice constant can reach 0.887, making this a quick and efficient method for determining the crystal structure. [ABSTRACT FROM AUTHOR]
The self-similar gravitational collapse solutions of the Einstein-axion–dilaton system have already been discovered. These solutions are invariant under the combination of spacetime dilation with internal SL(2, R) transformations. We apply nonlinear statistical models to estimate the functions that appear in the physics of black holes of the axion–dilaton system in four dimensions. These statistical models include parametric polynomial regression, nonparametric kernel regression and semi-parametric local polynomial regression. Through various numerical studies, we obtained accurate numerical and closed-form continuously differentiable estimates for the functions appearing in the metric and equations of motion. [ABSTRACT FROM AUTHOR]
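Of the three approaches listed, the nonparametric one is easy to sketch: a Nadaraya-Watson kernel regression estimate of E[y|x] with a Gaussian kernel, shown here on generic noisy samples rather than the actual critical-collapse profiles.

```python
# Nadaraya-Watson kernel regression sketch on generic (x, y) samples.
import numpy as np

def nw_kernel_regression(x_query, x_data, y_data, bandwidth=0.2):
    """Gaussian-kernel Nadaraya-Watson estimate of E[y | x] at x_query."""
    u = (x_query[:, None] - x_data[None, :]) / bandwidth
    w = np.exp(-0.5 * u**2)                       # Gaussian kernel weights
    return (w @ y_data) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + rng.normal(0, 0.1, x.size)        # noisy samples of an unknown profile
xq = np.linspace(0, 2 * np.pi, 100)
print(nw_kernel_regression(xq, x, y)[:5])         # smooth estimate at query points
```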
NONLINEAR statistical models, GAUSSIAN processes, TIME series analysis, HYPERGEOMETRIC series, GRANGER causality test, HILBERT-Huang transform, MARKOV processes
Abstract
The ability to test for statistical causality in linear and nonlinear contexts, in stationary or non-stationary settings, and to identify whether statistical causality influences trend of volatility forms a particularly important class of problems to explore in multi-modal and multivariate processes. In this paper, we develop novel testing frameworks for statistical causality in general classes of multivariate nonlinear time series models. Our framework accommodates flexible features where causality may be present in either: trend, volatility or both structural components of the general multivariate Markov processes under study. In addition, we accommodate the added possibilities of flexible structural features such as long memory and persistence in the multivariate processes when applying our semi-parametric approach to causality detection. We design a calibration procedure and formal testing procedure to detect these relationships through classes of Gaussian process models. We provide a generic framework which can be applied to a wide range of problems, including partially observed generalised diffusions or general multivariate linear or nonlinear time series models. We demonstrate several illustrative examples of features that are easily testable under our framework to study the properties of the inference procedure developed including the power of the test, sensitivity and robustness. We then illustrate our method on an interesting real data example from commodity modelling. [ABSTRACT FROM AUTHOR]
The article reports that okra, Abelmoschus esculentus (L.) Moench, a representative of the family Malvaceae, is an important vegetable crop cultivated in the tropical and subtropical parts of the world. Topics include India ranking first in terms of area and production and occupying a sizeable percentage share of the world's okra production.
Ortíz-Lozano, José Ángel, Rodríguez-Narciso, Silvia, Carrillo, Julián, Hernández-Andrade, Juan Antonio, Pacheco-Martínez, Jesús, Hernández-Marín, Martín, and de la Fuente-Antequera, Albert
Concrete is one of the most commonly used construction materials in the world due to its versatility. There are different types of concrete according to the required mechanical responses, and these will depend on the composition of the elements. Therefore, additional elements have been developed to improve the properties and conditions of concrete. Among these elements are reinforcing fibers made of steel, polypropylene, glass, and so on, which, according to the base material, geometry, and dosage, improve the mechanical and workability properties and decrease and/or prevent the generation of cracks, which are some of the most common problems in industrial slabs. This study performs an analysis of the changes in the mechanical properties of concrete (compressive strength, rupture modulus, modulus of elasticity, Poisson's ratio, and residual stress) due to the addition of fiber-reinforced concrete (FRC) to determine the physical and mechanical conditions of the fibers that improve the concrete and its application in industrial concrete. Due to the large number of samples and variables, advanced statistical methods (analysis of variance and comparative index) were used in the numerical study, which allowed several results to be analyzed and compared at the same time. This research is divided into two stages. In the first stage, six steel fibers (with a dosage of 2.7, 6, and 11 and three of 28 kg/m3) and five polypropylene fibers (with a dosage of 0.6, 2.15, and 2.7 and two of 3 kg/m3) were used in the study, and compression and bending tests (ASTM C39 and C78, respectively) were performed on 35 cylinders and 45 beams. Improvements were identified in several fiber-reinforced concrete samples in terms of compressive strength: 67% of the steel fiber samples and 100% of the polypropylene fiber samples had values above the average value of the simple concrete; in terms of the modulus of rupture, 83% of the steel fiber samples and 80% of the polypropylene fiber samples had values above the average value of the simple concrete. In the second stage, one type of steel fiber and one type of polypropylene fiber were selected for a second mechanical analysis (64 cylinders, 72 beams, and 15 slabs) with dosages of 20, 30, and 40 kg/m3 and 2.13, 4.25, and 6.38 kg/m3, respectively. In the second stage, statistical analysis and modeling with nonlinear analysis were used to evaluate the results, where residual strength improved but Poisson's ratio decreased when the dosage of fibers was increased. [ABSTRACT FROM AUTHOR]
Bolmin, Ophelia, McElrath, Thomas C, Wissa, Aimy, and Alleyne, Marianne
Subjects
*NONLINEAR statistical models, *LINEAR statistical models, *CENTER of mass
Abstract
Click beetles (Coleoptera: Elateridae) are known for their unique clicking mechanism that generates a powerful legless jump. From an inverted position, click beetles jump by rapidly accelerating their center of mass (COM) upwards. Prior studies on the click beetle jump have focused on relatively small species (body length ranging from 7 to 24 mm) and have assumed that the COM follows a ballistics trajectory during the airborne phase. In this study, we record the jump and the morphology of 38 specimens from diverse click beetle genera (body length varying from 7 to 37 mm) to investigate how body length and jumping performance scale across the mass range. The experimental results are used to test the ballistics motion assumption. We derive the first morphometric scaling laws for click beetles and provide evidence that the click beetle body scales isometrically with increasing body mass. Linear and nonlinear statistical models are developed to study the jumping kinematics. Modeling results show that mass is not a predictor of jump height, take-off angle, velocity at take-off, and maximum acceleration. The ballistics motion assumption is strongly supported. This work provides a modeling framework to reconstruct complete morphological data sets and predict the jumping performance of click beetles from various shapes and sizes. [ABSTRACT FROM AUTHOR]
CUTTING machines, ROCK music, NONLINEAR statistical models, LUBRICATION & lubricants, MANUFACTURING processes, ROCK deformation, FLUIDS
In this brief, a precise angular tracking control strategy using a nonlinear predictive optimization control (POC) approach is addressed. To deal with model uncertainty and noise interference, an online Hammerstein-model-based POC is designed using online estimated parameters and the model residual. First, a rate-dependent Duhem model is used to describe the nonlinear sub-model of the whole Hammerstein architecture, depicting its multi-valued mapping nonlinear characteristic. Then, the predictive output of the angular deflection is obtained via a Diophantine function based on the linear sub-model. Subsequently, the iterative control value, which depends on the parameters estimated in a data-driven manner, is acquired. Later, based on the cost function, the iteratively optimized control quantity is fed back to the electromagnetic driven deflection micromirror (EDDM) system on the basis of the Hammerstein architecture. It should be stressed that the control value is determined by the real-time updated model residual and the defined cost function. Moreover, the stability of the POC strategy is analyzed. In addition, experimental results are presented to validate the effectiveness of the control technique adopted in this paper. [ABSTRACT FROM AUTHOR]
Lai Wei, McCloy, Ryan, Jie Bao, and Cranney, Jesse
Subjects
PREDICTION models, PROCESS control systems, COST functions, TECHNICAL specifications, CHEMICAL processes, ECONOMIC convergence, NONLINEAR statistical models, GEODESICS
Abstract
Modern chemical processes need to operate around time-varying operating conditions to optimize plant economy, in response to dynamic supply chains (e.g., time-varying specifications of product and energy costs). As such, the process control system needs to handle a wide range of operating conditions whilst optimizing system performance and ensuring stability during transitions. This article presents a reference-flexible nonlinear model predictive control approach using contraction based constraints. Firstly, a contraction condition that ensures convergence to any feasible state trajectories or set-points is constructed. This condition is then imposed as a constraint on the optimization problem for model predictive control with a general (typically economic) cost function, utilizing Riemannian weighted graphs and shortest path techniques. The result is a reference flexible and fast optimal controller that can trade-off between the rate of target trajectory convergence and economic benefit (away from the desired process objective). The proposed approach is illustrated by a simulation study on a CSTR control problem. [ABSTRACT FROM AUTHOR]
During 2019-20, export of cotton yarn, cotton fabrics, cotton made-ups and handloom products reached US$ 10.01 billion (Anonymous-1). The article forecasts the area under cotton (in lakh ha), cotton production (in lakh bales of 170 kg each) and cotton productivity (in kg/ha) of India, year by year. Cotton is an important fibre and cash crop of India and plays a dominant role in the industrial and agricultural economy of the country. [Extracted from the article]
In this paper, we have compared some linear and nonlinear models for explaining and forecasting the production of two different crops, mango and sugarcane, using the area of fields as the auxiliary variable. The models under consideration are compared on the basis of different fitting measures, namely the Coefficient of Determination (R²), Residual Mean Square (s²), Mean Absolute Error (MAE) and Akaike Information Criterion (A.I.C.). Two primary data sets have been collected, the first for sugarcane production from the Sitapur district of Uttar Pradesh state in India and the second for mango production from the Lucknow district of Uttar Pradesh state in India. The fitting measures are calculated for the collected primary data sets, and the best fitted models are selected and recommended for further use on the basis of these measures. From the data analysis for both data sets, it is found that the Compound, Growth, Exponential and Logistic models are equally good for practical applications. [ABSTRACT FROM AUTHOR]
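The model-comparison step can be illustrated for one candidate, the logistic model y = a / (1 + b e^(-ct)); the same loop would be repeated for the compound, growth and exponential forms. The data below are synthetic, not the Sitapur or Lucknow records, and the AIC is computed in a common least-squares form.

```python
# Fit one candidate growth model and compute R2, residual mean square, MAE and AIC.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(1, 16, dtype=float)                          # years (example)
y = 120 / (1 + 8 * np.exp(-0.45 * t)) + np.random.default_rng(0).normal(0, 2, t.size)

def logistic(t, a, b, c):
    return a / (1 + b * np.exp(-c * t))

popt, _ = curve_fit(logistic, t, y, p0=[100, 5, 0.3], maxfev=10000)
resid = y - logistic(t, *popt)
n, k = len(y), len(popt)
rss = np.sum(resid**2)
r2 = 1 - rss / np.sum((y - y.mean())**2)
s2 = rss / (n - k)                                         # residual mean square
mae = np.mean(np.abs(resid))
aic = n * np.log(rss / n) + 2 * k                          # common least-squares AIC form
print(f"R2={r2:.4f}  s2={s2:.3f}  MAE={mae:.3f}  AIC={aic:.2f}")
```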
Optimal design ideas are increasingly used in different disciplines to rein in experimental costs. Given a nonlinear statistical model and a design criterion, optimal designs determine the number of experimental points to observe the responses, the design points and the number of replications at each design point. Currently, there are very few free and effective computing tools for finding different types of optimal designs for a general nonlinear model, especially when the criterion is not differentiable. We introduce an R package ICAOD to find various types of optimal designs and they include locally, minimax and Bayesian optimal designs for different nonlinear statistical models. Our main computational tool is a novel metaheuristic algorithm called imperialist competitive algorithm (ICA) and inspired by socio-political behavior of humans and colonialism. We demonstrate its capability and effectiveness using several applications. The package also includes several theory-based tools to assess optimality of a generated design when the criterion is a convex function of the design. [ABSTRACT FROM AUTHOR]
NONLINEAR statistical models, FOURIER analysis, BAYESIAN field theory, DIAGNOSIS methods, TEST design, FAULT diagnosis
This study presents a repetitive group sampling plan and a multiple dependent state sampling plan based on the EWMA (exponentially weighted moving average) yield index for product acceptance. The proposed plans utilize the current and previous information through EWMA statistic to reach a decision of lot sentencing. A non-linear optimization model is developed to determine the plan parameters of the proposed plans for various specified conditions. The performance of the proposed plans over several existing sampling plans is analyzed, showing that the proposed plans are efficient in reducing the sample size for lot sentencing. For industrial application, a real example is given to demonstrate the implementation of the proposed plans. [ABSTRACT FROM AUTHOR]
Research conducted at the School of Medicine focused on utilizing artificial neural networks to predict the residual valve after endoscopic posterior urethral valve ablation in children with a history of posterior urethral valve. The study involved 144 patients and found that the artificial neural network system had an accuracy of 90.75% in predicting the need for repeated ablation. Factors such as younger age at surgery, hyperechogenicity of the renal parenchyma, presence of vesicoureteral reflux, and grade of reflux before surgery were identified as significant variables affecting postoperative outcomes. The research concluded that artificial neural networks are valuable tools for predicting outcomes in patients undergoing endoscopic valve ablation. [Extracted from the article]
Many problems in computational materials science and chemistry require the evaluation of expensive functions with locally rapid changes, such as the turn-over frequency of first principles kinetic Monte Carlo models for heterogeneous catalysis. Because of the high computational cost, it is often desirable to replace the original with a surrogate model, e.g., for use in coupled multiscale simulations. The construction of surrogates becomes particularly challenging in high-dimensions. Here, we present a novel version of the modified Shepard interpolation method which can overcome the curse of dimensionality for such functions to give faithful reconstructions even from very modest numbers of function evaluations. The introduction of local metrics allows us to take advantage of the fact that, on a local scale, rapid variation often occurs only across a small number of directions. Furthermore, we use local error estimates to weigh different local approximations, which helps avoid artificial oscillations. Finally, we test our approach on a number of challenging analytic functions as well as a realistic kinetic Monte Carlo model. Our method not only outperforms existing isotropic metric Shepard methods but also state-of-the-art Gaussian process regression. [ABSTRACT FROM AUTHOR]
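For contrast with the modified method, a bare-bones (isotropic, global) Shepard interpolator is only a few lines; the paper's contribution adds local metrics, local models and error-based weights on top of this basic inverse-distance idea.

```python
# Basic inverse-distance-weighted (Shepard) interpolation for reference.
import numpy as np

def shepard_interpolate(x_query, x_data, y_data, p=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation at the rows of x_query."""
    d = np.linalg.norm(x_query[:, None, :] - x_data[None, :, :], axis=-1)
    w = 1.0 / (d**p + eps)                      # weights dominate near data points
    w /= w.sum(axis=1, keepdims=True)
    return w @ y_data

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 4))            # sparse samples in 4-D
y = np.exp(-np.sum(X**2, axis=1))               # stand-in for an expensive function
Xq = rng.uniform(-1, 1, size=(5, 4))
print(shepard_interpolate(Xq, X, y))
```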
In a variety of discrete manufacturing environments, it is common to experience a nonlinear production rate. In particular, our interest is in the case of an increasing production rate, where learning creates efficiencies. This leads to greater output per unit time as the process continues. However, the advantages of an increasing production rate may be offset by other factors. For example, JIT policies typically lead to smaller lot sizes, where the value of an increasing production rate is largely lost. We develop a general model that balances the impact of various competing effects. Our research focuses on determining lot sizes that satisfy demand requirements while minimising production and holding costs. We extend our prior work by developing a multi-product, multi-machine method for modelling and solving this class of production problems. The solution method is demonstrated using the production function from the PR#2 grinding process for a production plant in Carlisle, PA. The heuristic provides solutions that are on average only 0.22 to 0.55% above optimum as the solution parameters are varied, and the ratio of heuristic solution times to optimal solution times varies from 18.16 to 14.15%. [ABSTRACT FROM AUTHOR]
The present study aimed to apply a sinusoidal model to duck body weight records in order to introduce it to the field of poultry science. Using 8 traditional growth functions as benchmarks (Bridges, Janoschek, logistic, Gompertz, Von Bertalanffy, Richards, Schumacher, and Morgan), this study examined how well the sinusoidal equation described the growth patterns of ducks. The models were compared by evaluating statistical performance and by examining model behavior during nonlinear regression curve fitting. The data used in this study came from 3 published articles reporting 1) body weight records of Kuzi ducks aged 1 to 70 d, 2) body weight records of Polish Peking ducks aged 1 to 70 d, and 3) average body weights of Peking ducks aged 1 to 42 d belonging to 5 different breeds. The general goodness-of-fit of each model to the various data profiles was assessed using the adjusted coefficient of determination, root mean square error, Akaike's information criterion (AIC), and the Bayesian information criterion. All of the models had generally high adjusted coefficients of determination, indicating that they fit the data well. The chosen sinusoidal equation accurately described duck growth dynamics and, when the growth functions were compared using the goodness-of-fit criteria, it was found to be one of the best functions for describing age-related changes in body weight in ducks. To date, no research has been conducted on the use of sinusoidal equations to describe duck growth. To describe the growth curves of a variety of duck strains/lines, the sinusoidal function employed in this study serves as a suitable substitute for conventional growth functions. [ABSTRACT FROM AUTHOR]
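Since the sinusoidal equation itself is not given in the abstract, the sketch below illustrates the general workflow with one of the benchmark functions, the Gompertz curve: nonlinear least-squares fitting followed by computation of the goodness-of-fit statistics named above. The body-weight values are made-up placeholders, not the published duck records.

```python
# Minimal sketch of nonlinear growth-curve fitting with goodness-of-fit
# statistics, using the Gompertz function as one benchmark model.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, k, ti):
    """Asymptotic weight A, rate constant k, inflection age ti."""
    return A * np.exp(-np.exp(-k * (t - ti)))

age = np.arange(1, 71, 7)                                                 # days (placeholder)
bw = np.array([60, 300, 800, 1400, 1950, 2350, 2600, 2750, 2830, 2870])   # g (placeholder)

popt, _ = curve_fit(gompertz, age, bw, p0=[3000, 0.1, 20])
resid = bw - gompertz(age, *popt)
n, k_par = len(bw), len(popt)
rss = np.sum(resid**2)
rmse = np.sqrt(rss / n)
r2 = 1 - rss / np.sum((bw - bw.mean())**2)
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k_par - 1)
aic = n * np.log(rss / n) + 2 * k_par          # up to an additive constant
bic = n * np.log(rss / n) + k_par * np.log(n)
print(popt, rmse, adj_r2, aic, bic)
```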
The optimal milling parameters of a brewers' grains hammer mill were determined so that the mill operates at its best performance and minimum cost while ensuring the quality of the milled brewers' grains. An experiment-planning method with a "black box" model and statistical analysis by nonlinear regression were used. The study derived two second-order polynomial regression equations describing the influence of the operating parameters on the quality of the milled brewers' grains (Cb) and on the specific energy consumption (Se). Using a directed search algorithm and a random-search algorithm, the solution of the optimization problems gave the following optimal milling regime: Cbmax = 87.4% and Semin = 0.15 kWh/kg at a hammer velocity of 32 m/s, a material feed rate of 18.3 kg/h, and a clearance between the hammer and the milling chamber of 3.2 mm. [ABSTRACT FROM AUTHOR]
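The sketch below illustrates the general approach of fitting a second-order polynomial response surface and searching it for an optimum; the factor ranges, the synthetic response, and the grid-search optimization are all illustrative assumptions rather than the paper's directed/random algorithms.

```python
# Minimal sketch: fit a quadratic response surface to (placeholder) data in the
# three factors named in the abstract (hammer velocity v, feed rate q, clearance c)
# and locate the settings that maximize the predicted quality response Cb.
import numpy as np
from itertools import product

def quad_design(X):
    """Columns: 1, x1, x2, x3, x1^2, x2^2, x3^2, x1x2, x1x3, x2x3."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2, x1*x2, x1*x3, x2*x3])

rng = np.random.default_rng(1)
X = rng.uniform([20, 10, 2], [40, 25, 5], size=(30, 3))     # v, q, c (placeholder ranges)
cb = 90 - 0.02*(X[:, 0]-32)**2 - 0.1*(X[:, 1]-18)**2 - 1.5*(X[:, 2]-3)**2 \
     + rng.normal(0, 0.3, 30)                               # synthetic quality response

beta, *_ = np.linalg.lstsq(quad_design(X), cb, rcond=None)

# Grid search for the settings that maximize predicted Cb.
grid = np.array(list(product(np.linspace(20, 40, 41),
                             np.linspace(10, 25, 31),
                             np.linspace(2, 5, 31))))
pred = quad_design(grid) @ beta
print("optimum near:", grid[np.argmax(pred)], "predicted Cb:", pred.max())
```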
Subject terms: Nonlinear statistical models; Entropy (information theory); Time series analysis; Cerebral cortex; Interest (finance)
Abstract
The Hurst exponent estimates the degree of self-similarity and predictability of a time series, which, under this nonlinear statistical model, can show one of two opposing tendencies in the way the series evolves over time. Series are thus described as persistent or anti-persistent, depending on whether the Hurst value describing them is greater or less than 0.5 on a scale from 0 to 1. Series that show the "Hurst effect" (persistent series with H > 0.5) are potentially predictable in the short or medium term, which makes them especially interesting for any predictive purpose in science or economics. In the brain, when the oscillatory electrical activity of an EEG is analyzed, the time series that make up a complete recording come from a variable number of sources, electrodes or channels, which detect the electrical activity of millions of pyramidal neurons at the level of the cerebral cortex. When this signal is decomposed into its frequency components, its spectrum ranges from the slowest waves, Delta (0.1-4 Hz), to the fastest, Gamma (>30 Hz). In the intermediate range, the Theta (4-8 Hz), Alpha (8-12 Hz), and Beta (13-30 Hz) oscillations complete the EEG spectrum. When the Hurst exponent is estimated over filtered intervals of the total frequency range, only the Delta band yields values above H = 0.5. The Hurst values for Delta were close to H = 0.85, while the rest of the spectrum falls to values less than 0.5, in a range from 1.5
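The abstract does not state which estimator was used, so the sketch below shows one standard choice, rescaled-range (R/S) analysis, applied here to synthetic white noise for which H should come out near 0.5.

```python
# Minimal sketch of Hurst-exponent estimation by rescaled-range (R/S) analysis:
# slope of log(R/S) versus log(window size) over dyadic window lengths.
import numpy as np

def hurst_rs(x, min_window=8):
    x = np.asarray(x, dtype=float)
    sizes, rs_vals = [], []
    n = min_window
    while n <= len(x) // 2:
        rs = []
        for start in range(0, len(x) - n + 1, n):
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())      # cumulative deviation from the mean
            r = dev.max() - dev.min()              # range
            s = seg.std(ddof=1)                    # standard deviation
            if s > 0:
                rs.append(r / s)
        sizes.append(n)
        rs_vals.append(np.mean(rs))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

rng = np.random.default_rng(0)
white_noise = rng.normal(size=4096)
print(hurst_rs(white_noise))  # roughly 0.5 for uncorrelated noise
```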
Conventional statistical models provide inaccurate predictions of concrete crack openings because they do not consider the nonlinear temperature response and the residual characteristics of concrete. To address this problem, this study introduces a nonlinear temperature factor and develops an improved statistical model of crack openings. The chaotic characteristics of the residual time series of the improved statistical model are analyzed based on chaos theory and phase-space reconstruction theory. These theories are integrated with back-propagation (BP) artificial neural networks and genetic algorithms (GAs) to establish a GA-BP neural network model for predicting the residuals. Finally, a hybrid model is developed for predicting concrete crack opening behavior. The predictions of the conventional statistical model, the statistical model considering the nonlinear temperature component, and the hybrid model are compared using a case study on the crack openings of a regulating sluice. The results show that the proposed hybrid model is significantly more accurate than both the conventional statistical model and the statistical model considering the nonlinear temperature component. [ABSTRACT FROM AUTHOR]
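A minimal sketch of the hybrid idea, fit a statistical model first, then train a neural network on its residual series and add the two predictions, is given below; scikit-learn's MLPRegressor stands in for the GA-optimized BP network, the chaos/phase-space analysis is omitted, and all data are synthetic placeholders.

```python
# Hybrid sketch: statistical model + neural-network correction of its residuals.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(400)
temp = 15 + 10 * np.sin(2 * np.pi * t / 365)          # placeholder temperature series
crack = 0.8 + 0.02 * temp + 0.05 * np.sin(0.3 * t) + rng.normal(0, 0.01, t.size)

X = np.column_stack([temp, temp**2])                  # crude "nonlinear temperature factor"
stat_model = LinearRegression().fit(X, crack)
resid = crack - stat_model.predict(X)

# Lagged residuals as inputs for the residual predictor (embedding dimension 3).
lags = np.column_stack([resid[i:-3 + i or None] for i in range(3)])
target = resid[3:]
nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(lags, target)

hybrid_pred = stat_model.predict(X)[3:] + nn.predict(lags)
print("hybrid RMSE:", np.sqrt(np.mean((crack[3:] - hybrid_pred) ** 2)))
```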
The influence of air resistance on a body moving along a slanted cable was studied using an integrated approach, including modelling, analytical solutions to ordinary differential equations, experimental practice, and the estimation of error margins for the experimental data. The proposed setup allows three different configurations for the moving body: one without coating, with a small transverse cross-sectional area, and two with coating, with larger cross-sectional areas. This allows us to demonstrate experimentally the dependence of air resistance not only on the transverse area but also on the shape of the moving body. Using these methods, we were able to determine properties such as the friction coefficient, the air-resistance coefficient per unit mass, and the limiting speed. The proposed experiment involves only low-cost materials and can be adapted for use as an activity in college education. [ABSTRACT FROM AUTHOR]
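The abstract does not state the drag law, but under a common linear-drag assumption the setup and the limiting speed can be written as follows (θ: cable inclination, μ: friction coefficient, b: air-resistance coefficient per unit mass; all symbols here are illustrative, not the paper's notation):

```latex
% Equation of motion for a body sliding down a cable inclined at angle \theta,
% with kinetic friction coefficient \mu and an assumed linear drag term with
% per-unit-mass coefficient b:
\[
  \frac{dv}{dt} = g\left(\sin\theta - \mu\cos\theta\right) - b\,v .
\]
% The limiting speed follows from dv/dt = 0, and the analytic solution for
% v(0) = 0 approaches it exponentially:
\[
  v_{\lim} = \frac{g\left(\sin\theta - \mu\cos\theta\right)}{b},
  \qquad
  v(t) = v_{\lim}\left(1 - e^{-b t}\right).
\]
```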
Wireless ranging measurements have been proposed for enabling multiple Micro Air Vehicles (MAVs) to localize with respect to each other. However, the high-dimensional relative states are weakly observable because the distance measurement is scalar. Hence, the MAVs exhibit degraded relative localization and control performance under unobservable conditions, as can be deduced from the Lie derivatives. This paper presents a nonlinear model predictive control (NMPC) scheme that maximizes the determinant of the observability matrix to generate optimal control inputs while satisfying constraints including multi-robot tasks, input limitations, and state bounds. Simulation results validate the localization and control efficacy of the proposed MPC method for range-based multi-MAV systems with weak observability, showing faster convergence and more accurate localization than previously proposed random motions. A real-world experiment on two Crazyflies demonstrates the optimal states and control behaviours generated by the proposed NMPC. [ABSTRACT FROM AUTHOR]
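A minimal sketch of the kind of observability metric being maximized: for a linearized range-only measurement on 2-D double-integrator relative dynamics (an illustrative model, not the paper's exact MAV dynamics or NMPC formulation), stack the measurement Jacobians along a predicted trajectory and score the trajectory by det(OᵀO). Transverse relative motion raises the score, while purely radial motion leaves the state weakly observable.

```python
# Observability score for a candidate relative motion under range-only sensing.
import numpy as np

def observability_score(p0, v_rel, horizon=4, dt=0.5):
    """det(O^T O), where O stacks C_k A^k and C_k is the range Jacobian
    evaluated at the predicted relative position p_k = p0 + k*dt*v_rel."""
    A = np.block([[np.eye(2), dt * np.eye(2)],
                  [np.zeros((2, 2)), np.eye(2)]])       # relative state [px, py, vx, vy]
    rows = []
    for k in range(horizon):
        p_k = p0 + k * dt * v_rel                       # predicted relative position
        c_k = p_k / np.linalg.norm(p_k)                 # gradient of h(x) = ||p||
        C_k = np.hstack([c_k, np.zeros(2)])
        rows.append(C_k @ np.linalg.matrix_power(A, k))
    O = np.vstack(rows)
    return np.linalg.det(O.T @ O)

# Candidate relative motions can be ranked by this score inside an MPC loop.
p0 = np.array([2.0, 0.0])
print(observability_score(p0, np.array([0.0, 1.0])))    # > 0 (transverse motion)
print(observability_score(p0, np.array([1.0, 0.0])))    # ~ 0 (purely radial motion)
```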
Subject terms: Fluorescence resonance energy transfer; Nonlinear statistical models; Nonlinear regression; Monte Carlo method
Abstract
The article aimed to outline a statistical model based on nonlinear regressions for the efficiency of the Fluorescence Resonance Energy Transfer (FRET) phenomenon and to select a general simulation method for the FRET phenomenon, independent of the nature of the active substance used as a donor. For this purpose, several polynomial and nonlinear Gaussian regression models were first tested to fit, as well as possible, the variation of efficiency over time and its dependence on the concentrations of the active substances. The regression models were implemented and tested in the R language. In the second part of the study, the problem of simulating FRET efficiency was approached by adapting a Monte Carlo method, using the previously determined regression models as a starting point. [ABSTRACT FROM AUTHOR]
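The paper implemented and tested its regressions in R; purely as an analogous illustration, the Python sketch below fits a Gaussian-shaped nonlinear regression to a made-up FRET-efficiency time course and then resamples the fitted parameters in a crude Monte Carlo step. All data, parameter names (a: amplitude, b: peak time, c: width), and values are placeholders.

```python
# Gaussian nonlinear regression of a (synthetic) FRET-efficiency time course,
# followed by a simple Monte Carlo propagation of the parameter uncertainty.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, b, c):
    return a * np.exp(-((t - b) ** 2) / (2 * c ** 2))

t = np.linspace(0, 60, 40)                                   # minutes (placeholder)
rng = np.random.default_rng(0)
efficiency = gaussian(t, 0.7, 25, 8) + rng.normal(0, 0.02, t.size)

popt, pcov = curve_fit(gaussian, t, efficiency, p0=[0.5, 20, 10])
perr = np.sqrt(np.diag(pcov))                                # parameter standard errors
print("a, b, c =", popt, "+/-", perr)

# Resample parameters from the fitted covariance and evaluate the curves.
samples = rng.multivariate_normal(popt, pcov, size=1000)
curves = np.array([gaussian(t, *s) for s in samples])
print("95% band near the peak:", np.percentile(curves[:, 16], [2.5, 97.5]))
```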