3,353 results for "Estimation theory"
Search Results
102. A new faulty section identification and fault localization technique for three-terminal transmission line.
- Author
- Gaur, Vishal Kumar and Bhalja, Bhavesh
- Subjects
- ELECTRIC fault location, ELECTRIC lines, ESTIMATION theory, ELECTRIC potential, ELECTRIC currents, SIMULATION methods & models, MATHEMATICAL models
- Abstract
Owing to the mal-operation of the conventional scheme during high-resistance ground faults near the tap point, a new faulty-section identification and fault localization technique for three-terminal transmission lines is presented in this paper. The proposed technique utilizes time-synchronized voltage and current signals from all three terminals. Initially, fault detection is carried out based on estimating the superimposed voltage of the tap point with reference to each of the three terminals. Subsequently, using these three estimated superimposed voltages, a faulty-section identification criterion is formed. Finally, fault localization, i.e., estimation of the fault distance and fault resistance, is performed. The validity of the proposed technique has been verified by simulating an existing 400 kV Indian three-terminal transmission network in PSCAD/EMTDC software. The simulation results show that the proposed technique identifies the faulty section correctly. Moreover, it precisely estimates the fault distance and fault resistance, with percentage errors remaining within ±1.5% and ±3.5%, respectively. Likewise, its performance remains unaffected under wide variations in fault and system parameters. Finally, a comparative evaluation of the proposed technique against the existing protection scheme clearly shows its superiority. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
103. System wide MV distribution network technical losses estimation based on reference feeder and energy flow model.
- Author
- Au, Mau Teng, Ibrahim, Khairul Anwar, Gan, Chin Kim, and Tang, Jun Huat
- Subjects
- ELECTRIC loss in electric power systems, ESTIMATION theory, POWER distribution networks, ELECTRIC potential, ELECTRIC power consumption, ELECTRICAL load, MATHEMATICAL models
- Abstract
This paper presents an integrated analytical approach to estimating the technical losses (TL) of medium voltage (MV) distribution networks. The concept of energy flow in a radial MV distribution network is modelled using representative feeders (RF), characterized by feeder peak power demand, feeder length, load distribution, and load factor, to develop generic analytical TL equations. The TL estimation approach is applied to a typical utility MV distribution network equipped with energy meters at the transmission/distribution interface substation (TDIS), which register the monthly inflow energy and peak power demand to the distribution networks. Additional input parameters for the TL estimation come from the feeder ammeters of the outgoing primary and secondary MV feeders. The developed models are demonstrated through a case study of a utility MV distribution network supplied from a grid source through a TDIS with a registered total maximum demand of 44.9 MW, connected to four (4) 33 kV feeders, four (4) 33/11 kV 30 MVA transformers, and twelve (12) 11 kV feeders. The results show close agreement with the TL figures provided by the local power utility company. With RF, the approach can be extended to estimate the TL of any radial MV distribution network, regardless of size and demography. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
104. Travel time estimation from sparse floating car data with consistent path inference: A fixed point approach.
- Author
- Rahmani, Mahmood, Jenelius, Erik, and Koutsopoulos, Haris N.
- Subjects
- TRAVEL time (Traffic engineering), FIXED point theory, ESTIMATION theory, URBAN transportation, SENSITIVITY analysis, MATHEMATICAL models
- Abstract
Estimation of urban network link travel times from sparse floating car data (FCD) usually needs pre-processing, mainly map-matching and path inference for finding the most likely vehicle paths that are consistent with reported locations. Path inference requires a priori assumptions about link travel times; using unrealistic initial link travel times can bias the travel time estimation and subsequent identification of shortest paths. Thus, the combination of path inference and travel time estimation is a joint problem. This paper investigates the sensitivity of estimated travel times, and proposes a fixed point formulation of the simultaneous path inference and travel time estimation problem. The methodology is applied in a case study to estimate travel times from taxi FCD in Stockholm, Sweden. The results show that standard fixed point iterations converge quickly to a solution where input and output travel times are consistent. The solution is robust under different initial travel times assumptions and data sizes. Validation against actual path travel time measurements from the Google API and an instrumented vehicle deployed for this purpose shows that the fixed point algorithm improves shortest path finding. The results highlight the importance of the joint solution of the path inference and travel time estimation problem, in particular for accurate path finding and route optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
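The joint path-inference and travel-time estimation loop described in this abstract can be illustrated as a fixed-point iteration. The sketch below is a toy example on a hypothetical three-link network; the network, observations, and update rule are invented for illustration and are not the authors' Stockholm implementation:

```python
# Toy network: three links, two candidate paths from A to C.
links = [("A", "B"), ("B", "C"), ("A", "C")]
paths = [[("A", "C")], [("A", "B"), ("B", "C")]]

# FCD-style observations: total A-to-C trip times, in seconds.
observed_trip_times = [95.0, 105.0, 100.0]

times = {link: 60.0 for link in links}  # initial link travel time guess

for _ in range(50):
    # Path inference: the most likely path under the current link times.
    best = min(paths, key=lambda p: sum(times[l] for l in p))
    # Travel time estimation: allocate each observed trip time evenly
    # over the links of its inferred path, then average per link.
    allocations = {l: [] for l in links}
    for trip in observed_trip_times:
        for l in best:
            allocations[l].append(trip / len(best))
    new_times = {l: sum(v) / len(v) if v else times[l]
                 for l, v in allocations.items()}
    converged = all(abs(new_times[l] - times[l]) < 1e-6 for l in links)
    times = new_times
    if converged:
        break
```

Here the inferred fastest path and the link times it implies stop changing after a couple of iterations, i.e., input and output travel times become mutually consistent, which is the fixed-point property the paper exploits.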
105. A phase-based smoothing method for accurate traffic speed estimation with floating car data.
- Author
- Franeck, Philipp, Fastenrath, Ulrich, Rempe, Felix, and Bogenberger, Klaus
- Subjects
- TRAFFIC speed, STATISTICAL smoothing, ESTIMATION theory, TRAFFIC flow, VELOCITY measurements, DATA analysis, MATHEMATICAL models
- Abstract
In this paper, a novel freeway traffic speed estimation method based on probe data is presented. In contrast to other traffic speed estimators, it requires only velocity data from probes and does not depend on additional inputs such as density or flow information. In the first step, the method determines, in space and time, the three traffic phases described by Kerner et al.: free flow, synchronized flow, and Wide Moving Jam (WMJ). Subsequently, the reported data are processed with respect to the prevailing traffic phase in order to estimate traffic velocities. This two-step approach allows empirical features of phase fronts to be incorporated into the estimation procedure. For instance, downstream fronts of WMJs always propagate upstream with approximately constant velocity, and downstream fronts of synchronized flow phases usually stick to bottlenecks. The second step ensures that the validity of each measured velocity is limited to the extent of its assigned phase. Effectively, velocity information in space-time can be estimated more distinctly, and the result is therefore more accurate even when the input data density is low. The accuracy of the proposed Phase-Based Smoothing Method (PSM) is evaluated using real floating car data collected during two congestion events on the German freeway A99 and compared against the Generalized Adaptive Smoothing Method (GASM) as well as a naive algorithm. The quantitative and qualitative results show that the PSM reconstructs the congestion pattern more accurately than the other two. A subsequent analysis of computational efficiency and sensitivity demonstrates its practical suitability. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
106. Median bias reduction of maximum likelihood estimates.
- Author
- PAGUI, E. C. KENNE, SALVAN, A., and SARTORI, N.
- Subjects
- ESTIMATION theory, LEAST squares, MATHEMATICAL statistics, MATHEMATICAL models, SIMULATION methods & models
- Abstract
For regular parametric problems, we show how median centring of the maximum likelihood estimate can be achieved by a simple modification of the score equation. For a scalar parameter of interest, the estimator is equivariant under interest-respecting reparameterizations and is third-order median unbiased. With a vector parameter of interest, componentwise equivariance and third-order median centring are obtained. Like the implicit method of Firth (1993) for bias reduction, the new method does not require finiteness of the maximum likelihood estimate and is effective in preventing infinite estimates. Simulation results for continuous and discrete models, including binary and beta regression, confirm that the method succeeds in achieving componentwise median centring and in solving the boundary estimate problem, while keeping comparable dispersion and the same approximate distribution as its main competitors. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
107. Further model development for prediction of viscosity of mixed oils.
- Author
- Ashtari Larki, Saeed and Banashooshtari, Hooman
- Subjects
- VISCOSITY of petroleum, FUZZY systems, ESTIMATION theory, MIXING, MATHEMATICAL models
- Abstract
In the present study, a new model named the adaptive neuro-fuzzy inference system optimized by a hybrid method (Hybrid-ANFIS) is developed for estimating the viscosity of mixed oils. The experimental data for developing the new model were gathered from the literature. Various methods were used to examine the accuracy and precision of the model, and its outcomes were compared with four well-known literature models. Results show that the Hybrid-ANFIS model provides acceptable, precise estimations and more satisfactory predictions than the literature models, with an overall AARD% of 2.19%. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
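For reference, the AARD% figure quoted above is conventionally computed as the average absolute relative deviation between predicted and experimental values, in percent. A minimal sketch (function name and data are illustrative):

```python
def aard_percent(predicted, experimental):
    """Average absolute relative deviation (AARD), in percent."""
    assert len(predicted) == len(experimental)
    total = sum(abs(p - e) / abs(e) for p, e in zip(predicted, experimental))
    return 100.0 * total / len(predicted)

# Toy viscosity predictions within a few percent of the measured values.
score = aard_percent([10.2, 20.5], [10.0, 20.0])
```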
108. Parameter estimation of shallow wave equation via cuckoo search.
- Author
- Zhang, Xin-Ming
- Subjects
- MATHEMATICAL optimization, NUMERICAL analysis, PARAMETER estimation, ESTIMATION theory, MATHEMATICAL models, APPROXIMATION theory, STOCHASTIC convergence, MATHEMATICAL analysis
- Abstract
In this study, cuckoo search is introduced for the parameter estimation of the shallow wave equation for the first time. Cuckoo search (CS) is inspired by the brood-parasitic behavior of some cuckoo species, combined with Lévy flight behavior. This metaheuristic has been used successfully to solve a number of optimization problems with promising results; however, it had not yet been applied to parameter inversion problems. This study reports a CS-based parameter estimation method for inverting the roughness coefficient and the eddy viscosity coefficient under specific conditions. Simulation results and experimental data show that cuckoo search offers reliable performance on these parameter estimation problems. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
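A minimal sketch of cuckoo search applied to a one-dimensional parameter-inversion toy problem is given below. The Lévy steps use Mantegna's algorithm; all constants (step scale, nest count, abandonment fraction pa) and the misfit function are illustrative choices, not those of the paper:

```python
import math
import random

random.seed(0)

def levy_step(beta=1.5):
    # Mantegna's algorithm for generating Levy-flight step lengths.
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(objective, lo, hi, n_nests=15, pa=0.25, iters=200):
    nests = [random.uniform(lo, hi) for _ in range(n_nests)]
    best = min(nests, key=objective)
    for _ in range(iters):
        # Lay a new egg: Levy flight around a randomly chosen nest.
        i = random.randrange(n_nests)
        step = 0.05 * levy_step() * (nests[i] - best)
        cand = min(hi, max(lo, nests[i] + step))
        if objective(cand) < objective(nests[i]):
            nests[i] = cand
        # A fraction pa of nests is abandoned (host discovers the egg)
        # and rebuilt at random locations.
        for j in range(n_nests):
            if random.random() < pa:
                nests[j] = random.uniform(lo, hi)
        cur = min(nests, key=objective)
        if objective(cur) < objective(best):
            best = cur
    return best

# Toy inversion: recover a single coefficient by minimizing data misfit.
true_value = 2.0
est = cuckoo_search(lambda c: (c - true_value) ** 2, lo=-10.0, hi=10.0)
```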
109. Spatial quantile estimation of multivariate threshold time series models.
- Author
- Jiang, Jiancheng, Jiang, Xuejun, Li, Jingzhi, Liu, Yi, and Yan, Wanfeng
- Subjects
- QUANTILE regression, MULTIVARIATE analysis, ESTIMATION theory, TIME series analysis, MATHEMATICAL models, COMPUTER simulation
- Abstract
In this paper we study spatial quantile regression estimation of multivariate threshold time series models. Bahadur’s representations for our estimators are established, which naturally lead to asymptotic normality of the estimators. Simulations and a real example are used to evaluate the performance of the proposed estimators. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
110. How an added mass matrix estimation may dramatically improve FSI calculations for moving foils.
- Author
- Lefrançois, Emmanuel
- Subjects
- METAL foils, FLUID-structure interaction, ESTIMATION theory, OSCILLATIONS, LIQUIDS, MATHEMATICAL models
- Abstract
This paper presents a corrected partitioned scheme for investigating the fluid-structure interaction (FSI) encountered by lifting devices immersed in heavy fluids such as liquids. The purpose of this model is to counteract the penalizing impact of the added mass effect on the classical partitioned FSI coupling scheme. This work is based on an added-mass-corrected version of the classical strongly coupled partitioned scheme presented in Song et al. (2013). Results show that this corrected version systematically allows convergence to the coupled solution. The fluid flow model considered here uses a non-stationary potential approach, commonly termed the panel method. The advantage of this kind of approach is twofold: first, it restricts itself to a boundary method, and second, it allows an added mass matrix to be estimated as a post-processing phase. Whereas the classical scheme reaches an acceptable (oscillation-free) convergence limit for fluid densities no higher than 8 kg/m^3 in the considered case, the corrected scheme is independent of fluid density and converges in only 6 iterations. This makes it possible to investigate the dynamic behavior of a 2D foil immersed in heavy fluids such as water; for example, it shows that frequency shifting may occur as a consequence of a strong added mass effect. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
111. A single spike deteriorates synaptic conductance estimation.
- Author
- Kobayashi, Ryota, Nishimaru, Hiroshi, Nishijo, Hisao, and Lansky, Petr
- Subjects
- POSTSYNAPTIC potential, NEURAL conduction, NEURAL transmission, SYNAPTIC vesicles, GALVANIC skin response, COMPUTER simulation, ESTIMATION theory, MATHEMATICAL models
- Abstract
We investigated the estimation accuracy of synaptic conductances by analyzing simulated voltage traces generated by a Hodgkin–Huxley type model. We show that even a single spike substantially deteriorates the estimation. We also demonstrate that two approaches, namely, negative current injection and spike removal, can ameliorate this deterioration. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
112. Feature-based estimation of preliminary costs in shipbuilding.
- Author
- Lin, Cheng-Kuan and Shaw, Heiu-Jou
- Subjects
- SHIPBUILDING costs, PARAMETRIC modeling, ESTIMATION theory, SHIPBUILDING equipment, TOPOLOGY, MATHEMATICAL models
- Abstract
Accurate cost estimation is crucial for shipyards to win ship owners' orders. Classic preliminary ship cost estimation methods provide only rough estimates of labor, materials, and equipment based on overall ship parameters and do not reflect further specifications. This study develops an innovative cost estimation method, called the feature-based estimation method, that uses the preliminary specifications to estimate ship costs, including steel, other main materials, the engine, the power generator, other core equipment, and labor hours. The method establishes a topology of relationships between features by linking the general dimensional parameters and detailed features of the design specifications with cost information to estimate the main cost items of the ship. The features are extracted and transformed into a quantifiable structure, and their definitions contain the core context needed for preliminary estimation from only a small amount of information. Empirical formulas are derived based on the cost items configured in the preliminary design stage. The errors of the estimated total costs are within ±7%; hence, the estimation model is suitable for modern ships. Making the model more robust for new ship types is left to future work. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
113. IMPROVED FAMILY OF RATIO TYPE ESTIMATORS FOR ESTIMATING POPULATION MEAN USING CONVENTIONAL AND NON CONVENTIONAL LOCATION PARAMETERS.
- Author
- Subzar, Mir, Maqbool, S., Raja, T. A., Mir, S. A., Jeelani, M. I., and Bhat, M. A.
- Subjects
- ESTIMATION theory, MEAN square algorithms, MATHEMATICAL models, ANALYSIS of variance, PARAMETER estimation
- Abstract
The present paper deals with estimating the population mean with greater precision by incorporating auxiliary information, since it is well recognised that incorporating auxiliary information at the design stage, the estimation stage, or both yields more precise estimates of the population parameters of interest. We propose an improved family of estimators that incorporates auxiliary information on conventional and non-conventional location parameters and their linear combinations with the correlation coefficient and the coefficient of variation. The properties of the proposed estimators are assessed via mean square error and bias and compared with those of classical and existing estimators. This comparison shows that the proposed estimators are more efficient than the classical and existing ones. A numerical illustration is also given in support of the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2017
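The classical ratio estimator that this family generalizes can be sketched as follows; the constant c stands in for a location parameter of the auxiliary variable (the estimators proposed in the paper differ in how c and the scaling are chosen, so this is illustrative only):

```python
def ratio_estimator(y_sample, x_sample, x_pop_mean, c=0.0):
    """Ratio-type estimator of the population mean of y using auxiliary x.

    c = 0 gives the classical ratio estimator; a nonzero c (e.g. a
    location parameter of x such as its median) gives an adjusted
    variant in the spirit of the proposed family.
    """
    n = len(y_sample)
    y_bar = sum(y_sample) / n
    x_bar = sum(x_sample) / n
    return y_bar * (x_pop_mean + c) / (x_bar + c)

# Toy data: y roughly proportional to x, so the estimator corrects the
# sample mean of y using the known population mean of x.
estimate = ratio_estimator([2.1, 4.0, 5.9], [1.0, 2.0, 3.0], x_pop_mean=2.5)
```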
114. The analog of the Erdös distance problem in finite fields.
- Author
- Adhikari, S. D., Mukhopadhyay, Anirban, and Ram Murty, M.
- Subjects
- MATHEMATICAL proofs, ESTIMATION theory, MATHEMATICAL models, PRIME numbers, PRIME number theorem
- Abstract
In this paper, we give a proof of the result of Iosevich and Rudnev [Erdös distance problem in vector spaces over finite fields, Trans. Amer. Math. Soc. 359 (2007) 6127-6142] on the analog of the Erdös-Falconer distance problem in the case of a finite field of characteristic p, where p is an odd prime, without using estimates for Kloosterman sums. We also address the case of characteristic 2. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
115. Sparse estimation of gene-gene interactions in prediction models.
- Author
- Lee, Sangin, Pawitan, Yudi, Ingelsson, Erik, and Lee, Youngjo
- Subjects
- GENES, SPARSE matrices, PREDICTION models, RANDOM effects model, ESTIMATION theory, MATHEMATICAL models, ALGORITHMS, DISEASE susceptibility, PROBABILITY theory, BODY mass index, STATISTICAL models
- Abstract
Current assessment of gene-gene interactions is typically based on separate parallel analyses in which each interaction term is tested separately, while less attention has been paid to the simultaneous estimation of interaction terms in a prediction model. As the number of interaction terms grows quickly, sparse estimation is desirable for statistical and interpretability reasons. There is a large literature on sparse estimation, but the natural hierarchy between an interaction and its corresponding main effects requires special consideration. We describe random-effect models that impose sparse estimation of interactions under both strong- and weak-hierarchy constraints. We develop an estimation procedure based on the hierarchical-likelihood argument and show that the modelling approach is equivalent to a penalty-based method, with the advantage that the models are more transparent and flexible. We compare the procedure with some standard methods in a simulation study and illustrate its application in an analysis of a gene-gene interaction model for predicting body mass index. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
116. Data-driven fuel consumption estimation: A multivariate adaptive regression spline approach.
- Author
- Chen, Yuche, Zhu, Lei, Gonder, Jeffrey, Young, Stanley, and Walkowicz, Kevin
- Subjects
- AUTOMOTIVE fuel consumption, ESTIMATION theory, REGRESSION analysis, SPLINES, TRAFFIC speed, TRAFFIC flow, CLUSTER analysis (Statistics), AUTOMOBILE driving in cities, MATHEMATICAL models
- Abstract
Providing guidance and information to help drivers make fuel-efficient route choices remains an important and effective near-term strategy for reducing fuel consumption in the transportation sector. One key component in implementing this strategy is a fuel-consumption estimation model. In this paper, we develop a mesoscopic fuel consumption estimation model that can be implemented in an eco-routing system. The proposed framework utilizes large-scale, real-world driving data, clusters road links by free-flow speed, and fits one statistical model for each cluster. The model includes predictor variables that were rarely or never considered before, such as free-flow speed and number of lanes. We applied the model to a real-world driving data set based on a global positioning system travel survey in the Philadelphia-Camden-Trenton metropolitan area. Results from the statistical analyses indicate that the chosen independent variables influence the fuel consumption rates of vehicles, but the magnitude and direction of these influences depend on the type of road link, specifically its free-flow speed. A statistical diagnostic is conducted to ensure the validity of the models and results. Although the real-world driving data used to develop the statistical relationships are specific to one region, the framework can easily be adjusted to explore fuel consumption relationships in other regions. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
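The cluster-then-fit framework described above can be sketched with synthetic data. For simplicity this sketch substitutes ordinary least squares for the paper's multivariate adaptive regression splines; the variable names, coefficients, and data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic link records: free-flow speed (km/h), observed speed, lanes,
# with a fuel rate (L/km) that varies by road class.
n = 300
ffs = rng.choice([50.0, 90.0, 120.0], size=n)      # free-flow speed
speed = ffs * rng.uniform(0.4, 1.0, size=n)        # observed link speed
lanes = rng.integers(1, 4, size=n).astype(float)
fuel = 0.05 + 0.002 * (ffs - speed) + 0.01 * lanes + rng.normal(0, 0.002, n)

# Cluster links by free-flow speed, then fit one linear model per cluster.
models = {}
for klass in np.unique(ffs):
    mask = ffs == klass
    X = np.column_stack([np.ones(mask.sum()), speed[mask], lanes[mask]])
    coef, *_ = np.linalg.lstsq(X, fuel[mask], rcond=None)
    models[klass] = coef

# Predict the fuel rate for a new link in the 90 km/h cluster.
b0, b1, b2 = models[90.0]
pred = b0 + b1 * 70.0 + b2 * 2.0
```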
117. Approximate confidence intervals for a linear combination of binomial proportions: A new variant.
- Author
- Escudeiro, Sara, Freitas, Adelaide, and Afreixo, Vera
- Subjects
- CONFIDENCE intervals, ESTIMATION theory, PROBABILITY theory, MATHEMATICAL models, SIMULATION methods & models
- Abstract
We propose a new adjustment for constructing an improved version of the Wald interval for linear combinations of binomial proportions, which addresses the presence of extremal samples. A comparative simulation study was carried out to investigate the performance of this new variant with respect to the exact coverage probability, expected interval length, and mesial and distal noncoverage probabilities. Additionally, we discuss the application of a criterion for interpreting interval location in the case of small samples and/or in situations in which extremal observations exist. The confidence intervals obtained from the new variant performed better for some evaluation measures. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
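The baseline this variant improves on, the Wald interval for a linear combination of binomial proportions, can be written in a few lines (the paper's adjustment for extremal samples is not reproduced here):

```python
import math

def wald_ci_lincomb(counts, trials, coeffs, z=1.96):
    """Wald interval for sum_i coeffs[i] * p_i, with X_i ~ Binomial(n_i, p_i).

    z = 1.96 corresponds to a nominal 95% confidence level.
    """
    p_hat = [x / n for x, n in zip(counts, trials)]
    est = sum(c * p for c, p in zip(coeffs, p_hat))
    var = sum(c * c * p * (1 - p) / n
              for c, p, n in zip(coeffs, p_hat, trials))
    half = z * math.sqrt(var)
    return est - half, est + half

# Difference of two proportions, p1 - p2, from 40/100 and 30/100 successes.
lo, hi = wald_ci_lincomb(counts=[40, 30], trials=[100, 100], coeffs=[1, -1])
```

Note that with extremal samples (all successes or all failures) the plugged-in variance collapses to zero, which is precisely the situation adjustments of this kind target.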
118. Feasible robust estimator in restricted semiparametric regression models based on the LTS approach.
- Author
- Roozbeh, Mahdi and Hamzah, Nor Aishah
- Subjects
- REGRESSION analysis, ESTIMATION theory, MONTE Carlo method, MATHEMATICAL models, COVARIANCE matrices
- Abstract
Under some nonstochastic linear restrictions based on either additional information or prior knowledge in a semiparametric regression model, a family of feasible generalized robust estimators for the regression parameter is proposed. The least trimmed squares (LTS) method, proposed by Rousseeuw as a highly robust regression estimator, is a statistical technique for fitting a regression model based on the subset of h observations (out of n) whose least-squares fit possesses the smallest sum of squared residuals. The coverage h may be set between n/2 and n. The LTS estimator involves computing the hyperplane that minimizes the sum of the smallest h squared residuals. For practical purposes, the covariance matrix of the error term is assumed unknown, and feasible estimators are substituted for it. We then develop an algorithm for the LTS estimator based on feasible methods. Through Monte Carlo simulation studies and a real data example, the performance of the feasible robust estimators is compared with that of the classical ones in restricted semiparametric regression models. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
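The LTS principle referred to above (fit on the h of n observations whose least-squares fit has the smallest sum of squared residuals) can be sketched with random elemental starts and concentration steps, in the spirit of Rousseeuw and Van Driessen's FAST-LTS. This toy version is not the feasible semiparametric estimator developed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def lts_fit(X, y, h, n_starts=20, n_csteps=10):
    """Least trimmed squares via random starts and concentration steps."""
    n, p = X.shape
    best_coef, best_obj = None, np.inf
    for _ in range(n_starts):
        # Elemental start: exact fit through p randomly chosen points.
        idx = rng.choice(n, size=p, replace=False)
        coef, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        for _ in range(n_csteps):
            # Concentration step: refit on the h smallest squared residuals.
            keep = np.argsort((y - X @ coef) ** 2)[:h]
            coef, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        obj = np.sort((y - X @ coef) ** 2)[:h].sum()
        if obj < best_obj:
            best_coef, best_obj = coef, obj
    return best_coef

# Line y = 1 + 2x with 30% gross outliers; LTS ignores the contamination.
n = 100
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, n)
y[:30] += 50.0                      # contaminated observations
X = np.column_stack([np.ones(n), x])
coef = lts_fit(X, y, h=60)
```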
119. Adaptive photovoltaic array reconfiguration based on real cloud patterns to mitigate effects of non-uniform spatial irradiance profiles.
- Author
- Jazayeri, Moein, Jazayeri, Kian, and Uysal, Sener
- Subjects
- PHOTOVOLTAIC power system design & construction, SPECTRAL irradiance, SEMICONDUCTOR diodes, SOLAR energy, ESTIMATION theory, MATHEMATICAL models
- Abstract
This paper proposes a simple, dynamic reconfiguration algorithm for photovoltaic (PV) arrays to mitigate the negative effects of non-uniform spatial irradiance profiles on PV power production. Spatially dispersed irradiance profiles incident on inclined PV module surfaces at each application site are generated based on real sky images. Models of the PV modules are constructed in MATLAB/Simulink based on the one-diode mathematical model of a PV cell. The proposed dynamic reconfiguration algorithm operates on the irradiance equalization principle, aiming to create series-connected rows of PV modules with balanced irradiance. The algorithm utilizes an irradiance threshold to obtain near-optimal configurations, in terms of irradiance equalization and number of switching actions, under any type of non-uniform spatial irradiance profile, and it imposes no limit on the number of PV modules in the array. The reconfiguration algorithm is examined with different irradiance profiles, and significant improvements, almost equivalent to the ideal case of equal irradiance on all panels, are achieved for each shading pattern. The advantages of the algorithm are its simplicity and the significant improvement in the array's power generation alongside a reduced number of switching actions. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
120. The equation of state of hard hyperspheres in four and five dimensions.
- Author
- Bishop, Marvin and Whitlock, Paula A.
- Subjects
- MOLECULAR dynamics, ESTIMATION theory, MONTE Carlo method, NUMERICAL analysis, MATHEMATICAL models, NUMERICAL calculations
- Abstract
The equation of state of hard hyperspheres in four and five dimensions is calculated from the value of the pair correlation function at contact, as determined by Monte Carlo simulations. These results are compared to equations of state obtained by molecular dynamics and theoretical approaches. In all cases the agreement is excellent. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
121. M-PCM-OFFD: An effective output statistics estimation method for systems of high dimensional uncertainties subject to low-order parameter interactions.
- Author
- Xie, Junfei, Wan, Yan, Mills, Kevin, Filliben, James J., Lei, Yu, and Lin, Zongli
- Subjects
- FACTORIAL experiment designs, MATHEMATICAL models, MATHEMATICAL constants, ESTIMATION theory, REAL-time computing
- Abstract
The evaluation of output performance statistics for systems with high-dimensional uncertain input parameters is crucial for robust real-time decision-making in large-scale complex systems that operate in uncertain environments. We develop a framework that integrates the Multivariate Probabilistic Collocation Method (M-PCM) and Orthogonal Fractional Factorial Design (OFFD) to achieve effective and scalable output statistics estimation. In this paper, we prove that when the degree of each uncertain parameter does not exceed 3, and under the widely held assumption for high-dimensional systems that interactions among uncertain input parameters are negligible beyond a certain order, the integrated M-PCM-OFFD method breaks the curse of dimensionality for correct output mean estimation by maximally reducing the number of simulations from 2^(2m) to 2^⌈log2(m+1)⌉ for a system mapping of m uncertain input parameters. In addition, the resulting reduced-size simulation set is the most robust to numerical truncation errors of simulators among all subsets of the same size in the M-PCM simulation set. The analysis also provides new, insightful formal interpretations of the optimality of OFFDs. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
122. A New Approach to Linear/Nonlinear Distributed Fusion Estimation Problem.
- Author
- Chen, Bo, Hu, Guoqiang, Ho, Daniel W.C., and Yu, Li
- Subjects
- ESTIMATION theory, MATHEMATICAL models, LINEAR systems, NONLINEAR systems, MOBILE robots
- Abstract
In this paper, we study the distributed fusion estimation problem for linear time-varying systems and nonlinear systems with bounded noises, where the addressed noises do not provide any statistical information and are unknown but bounded. For linear time-varying fusion systems with bounded noises, a new local Kalman-like estimator is designed such that the squared error of the estimator remains bounded as time goes to infinity. A novel constructive method is proposed to find an upper bound on the fusion estimation error; a convex optimization problem for the design of an optimal weighting fusion criterion is then established in terms of linear matrix inequalities, which can be solved by standard software packages. Furthermore, following the design method for linear time-varying fusion systems, each local nonlinear estimator is derived for nonlinear systems with bounded noises using Taylor series expansion, and a corresponding distributed fusion criterion is obtained by solving a convex optimization problem. Finally, a target tracking system and the localization of a mobile robot are used to show the advantages and effectiveness of the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
123. Estimating across-trial variability parameters of the Diffusion Decision Model: Expert advice and recommendations.
- Author
- Boehm, Udo, Annis, Jeffrey, Frank, Michael J., Hawkins, Guy E., Heathcote, Andrew, Kellen, David, Krypotos, Angelos-Miltiadis, Lerche, Veronika, Logan, Gordon D., Palmeri, Thomas J., van Ravenzwaaij, Don, Servant, Mathieu, Singmann, Henrik, Starns, Jeffrey J., Voss, Andreas, Wiecki, Thomas V., Matzke, Dora, and Wagenmakers, Eric-Jan
- Subjects
- DIFFUSION, DISTRIBUTION (Probability theory), DIFFERENCES, ESTIMATION theory, MATHEMATICAL models
- Abstract
For many years the Diffusion Decision Model (DDM) has successfully accounted for behavioral data from a wide range of domains. Important contributors to the DDM's success are the across-trial variability parameters, which allow the model to account for the various shapes of response time distributions encountered in practice. However, several researchers have pointed out that estimating the variability parameters can be a challenging task. Moreover, the numerous fitting methods for the DDM each come with their own associated problems and solutions. This often leaves users in a difficult position. In this collaborative project we invited researchers from the DDM community to apply their various fitting methods to simulated data and provide advice and expert guidance on estimating the DDM's across-trial variability parameters using these methods. Our study establishes a comprehensive reference resource and describes methods that can help to overcome the challenges associated with estimating the DDM's across-trial variability parameters. Highlights: the main DDM parameters are estimated with high precision by all methods; across-trial variability in non-decision time is recovered accurately; there is large uncertainty for across-trial variability in drift rate and starting point; and prior restrictions on parameters improve estimation. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
124. Estimates of the eigenvalues of operator arising in swelling pressure model.
- Author
- Kanguzhin, Baltabek and Zhapsarbayeva, Lyailya
- Subjects
- ESTIMATION theory, EIGENVALUES, OPERATOR theory, PRESSURE, COMPUTATIONAL complexity, SOLID-liquid equilibrium, MATHEMATICAL models
- Abstract
Swelling pressures from materials confined by structures can cause structural deformations and instability. Due to the complexity of interactions between the expansive solid and the solid-liquid equilibrium, the forces exerted on retaining structures by swelling are highly nonlinear. This work is our initial attempt to study a simplified spectral problem based on Euler-elastic beam theory and a simplified swelling pressure model. In this work, estimates of the eigenvalues of an initial/boundary value problem for the nonlinear Euler-elastic beam equation are obtained. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
125. Estimators for Variance Components in Structured Stair Nesting Models.
- Author
-
Monteiro, Sandra, Fonseca, Miguel, and Carvalho, Francisco
- Subjects
- *
ESTIMATION theory , *ANALYSIS of variance , *MATHEMATICAL models , *NUMERICAL analysis , *COMPUTER simulation - Abstract
The purpose of this paper is to present the estimation of the components of variance in structured stair nesting models. The relationship between the canonical variance components and the original ones will be very important in obtaining those estimators. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
126. Modeling the Philippines' Real Gross Domestic Product: A Normal Estimation Equation for Multiple Linear Regression.
- Author
-
Urrutia, Jackie D., Tampis, Razzcelle L., Mercado, Joseph, Baygan, Aaron Vito M., and Baccay, Edcon B.
- Subjects
- *
GROSS domestic product , *MATHEMATICAL models , *ESTIMATION theory , *CONSUMERS , *REGRESSION analysis - Abstract
The objective of this research is to formulate a mathematical model for the Philippines' Real Gross Domestic Product (Real GDP). The following factors are considered: Consumers' Spending (x1), Government's Spending (x2), Capital Formation (x3) and Imports (x4) as the Independent Variables that can influence the Real GDP of the Philippines (y). The researchers used a Normal Estimation Equation using Matrices to create the model for Real GDP, with α = 0.01. The researchers analyzed quarterly data from 1990 to 2013. The data were acquired from the National Statistical Coordination Board (NSCB), resulting in a total of 96 observations for each variable. The data underwent a logarithmic transformation, particularly the Dependent Variable (y), to satisfy all the assumptions of Multiple Linear Regression Analysis. The mathematical model for Real GDP was formulated using Matrices through MATLAB. Based on the results, only three of the Independent Variables are significant to the Dependent Variable, namely Consumers' Spending (x1), Capital Formation (x3) and Imports (x4), and hence can predict Real GDP (y). The regression analysis shows a coefficient of determination of 98.7%, indicating how much of the variation in the Dependent Variable is explained by the Independent Variables. At 97.6% in a Paired T-Test, the Predicted Values obtained from the model showed no significant difference from the Actual Values of Real GDP. This research will be essential in appraising forthcoming changes and in aiding the Government in implementing policies for the development of the economy. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
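The "normal estimation equation using matrices" described in this abstract is the standard least-squares solution β̂ = (XᵀX)⁻¹Xᵀy. A minimal sketch on synthetic data (all numbers and variable names are illustrative, not the study's actual NSCB data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 96                                              # 96 quarterly observations, as in the abstract
X = np.column_stack([np.ones(n),                    # intercept column
                     rng.uniform(1.0, 10.0, size=(n, 4))])  # four predictors (x1..x4, synthetic)
true_beta = np.array([2.0, 0.6, 0.1, 0.3, -0.2])    # illustrative coefficients
y = X @ true_beta + rng.normal(0.0, 0.01, size=n)   # response with small noise

# Normal estimation equation: solve (X'X) beta = X'y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
```

Solving the linear system directly is numerically preferable to forming the explicit inverse (XᵀX)⁻¹.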
127. Automated Transition State Theory Calculations for High-Throughput Kinetics.
- Author
-
Bhoorasingh, Pierre L., Slakman, Belinda L., Khanshan, Fariba Seyedzadeh, Cain, Jason Y., and West, Richard H.
- Subjects
- *
TRANSITION state theory (Chemistry) , *CHEMICAL kinetics , *ALGORITHMS , *SYMMETRY (Physics) , *ESTIMATION theory , *MATHEMATICAL models - Abstract
A scarcity of known chemical kinetic parameters leads to the use of many reaction rate estimates, which are not always sufficiently accurate, in the construction of detailed kinetic models. To reduce the reliance on these estimates and improve the accuracy of predictive kinetic models, we have developed a high-throughput, fully automated, reaction rate calculation method, AutoTST. The algorithm integrates automated saddle-point geometry search methods and a canonical transition state theory kinetics calculator. The automatically calculated reaction rates compare favorably to existing estimated rates. Comparison against high-level theoretical calculations shows that the new automated method performs better than rate estimates when the estimate is made by a poor analogy. The method will improve by accounting for internal rotor contributions and by improving methods to determine molecular symmetry. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
128. Extension of Wolfe Method for Solving Quadratic Programming with Interval Coefficients.
- Author
-
Syaripuddin, Suprajitno, Herry, and Fatmawati
- Subjects
- *
QUADRATIC programming , *COEFFICIENTS (Statistics) , *PROBLEM solving , *LINEAR programming , *MATHEMATICAL models , *ESTIMATION theory - Abstract
Quadratic programming with interval coefficients was developed to overcome cases in classic quadratic programming where the coefficient values are unknown and must be estimated. This paper discusses an extension of the Wolfe method. The extended Wolfe method can be used to solve quadratic programming with interval coefficients. The extension process of the Wolfe method involves transforming the quadratic programming with interval coefficients model into a linear programming with interval coefficients model. The next step is transforming the linear programming with interval coefficients model into two classic linear programming models with special characteristics, namely, the best optimum and the worst optimum problems. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
129. Optimum designs for non-linear mixed effects models in the presence of covariates.
- Author
-
Bogacka, Barbara, Latif, Mahbub A. H. M., Gilmour, Steven G., and Youdim, Kuresh
- Subjects
- *
NONLINEAR analysis , *DYNAMICS , *STRUCTURAL optimization , *ESTIMATION theory , *MATHEMATICAL models - Abstract
In this article, we present a new method for optimizing designs of experiments for non-linear mixed effects models, where a categorical factor with covariate information is a design variable combined with another design factor. The work is motivated by the need to efficiently design preclinical experiments in enzyme kinetics for a set of Human Liver Microsomes. However, the results are general and can be applied to other experimental situations where the variation in the response due to a categorical factor can be partially accounted for by a covariate. The covariate included in the model explains some systematic variability in a random model parameter. This approach allows better understanding of the population variation as well as estimation of the model parameters with higher precision. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
130. Estimation of the relatedness coefficients from biallelic markers, application in plant mating designs.
- Author
-
Laporte, Fabien, Charcosset, Alain, and Mary‐Huard, Tristan
- Subjects
- *
COMPLEMENTATION (Genetics) , *ESTIMATION theory , *ALGORITHMS , *MATHEMATICAL models , *DATA analysis - Abstract
The problem of inferring the relatedness distribution between two individuals from biallelic marker data is considered. This problem can be cast as an estimation task in a mixture model: at each marker the latent variable is the relatedness state, and the observed variable is the genotype of the two individuals. In this model, only the prior proportions are unknown, and can be obtained via ML estimation using the EM algorithm. When the markers are biallelic and the data unphased, the identifiability of the model is known not to be guaranteed. In this article, model identifiability is investigated in the case of phased data generated from a crossing design, a classical situation in plant genetics. It is shown that identifiability can be guaranteed under some conditions on the crossing design. The adapted ML estimator is implemented in an R package called Relatedness. The performance of the ML estimator is evaluated and compared to that of the benchmark moment estimator, both on simulated and real data. Compared to its competitor, the ML estimator is shown to be more robust and to provide more realistic estimates. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
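The setup in this abstract, a mixture model where the component distributions are fully known at each marker and only the prior proportions are estimated by maximum likelihood via EM, can be sketched in miniature with a two-state toy example (the state likelihoods `P0`, `P1` and the true proportion are assumed for illustration, not taken from the paper or its Relatedness package):

```python
import random

def em_mixture_weight(data, comp_probs, n_iter=100):
    """EM for the prior proportion pi in a two-component mixture whose
    component distributions are known; only pi is estimated.
    comp_probs maps an observation to (p0, p1), its likelihood under
    each latent state."""
    pi = 0.5
    for _ in range(n_iter):
        # E-step: posterior responsibility of state 1 for each observation
        resp = []
        for x in data:
            p0, p1 = comp_probs(x)
            w1 = pi * p1
            resp.append(w1 / (w1 + (1 - pi) * p0))
        # M-step: the mixture weight is the mean responsibility
        pi = sum(resp) / len(resp)
    return pi

# Toy biallelic "markers": P(x = 1) differs between the two latent states.
random.seed(0)
P0, P1 = 0.2, 0.8
TRUE_PI = 0.7
data = [(1 if random.random() < P1 else 0) if random.random() < TRUE_PI
        else (1 if random.random() < P0 else 0) for _ in range(5000)]
pi_hat = em_mixture_weight(data, lambda x: (P0 if x else 1 - P0,
                                            P1 if x else 1 - P1))
```

The identifiability caveat in the abstract matters here: with a single biallelic observation per marker this one-parameter case is identifiable, but richer relatedness-state models need the crossing-design conditions the paper derives.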
131. Maximum empirical likelihood estimation for abundance in a closed population from capture-recapture data.
- Author
-
Liu, Yukun, Li, Pengfei, and Qin, Jing
- Subjects
- *
MATHEMATICAL models of population , *PROBABILITY theory , *EMPIRICAL Bayes methods , *ESTIMATION theory , *DISTRIBUTION (Probability theory) , *CONFIDENCE intervals , *MATHEMATICAL models - Abstract
Capture-recapture experiments are widely used to collect data needed for estimating the abundance of a closed population. To account for heterogeneity in the capture probabilities, Huggins (1989) and Alho (1990) proposed a semiparametric model in which the capture probabilities are modelled parametrically and the distribution of individual characteristics is left unspecified. A conditional likelihood method was then proposed to obtain point estimates and Wald-type confidence intervals for the abundance. Empirical studies show that the small-sample distribution of the maximum conditional likelihood estimator is strongly skewed to the right, which may produce Wald-type confidence intervals with lower limits that are less than the number of captured individuals or even are negative. In this paper, we propose a full empirical likelihood approach based on Huggins and Alho's model. We show that the null distribution of the empirical likelihood ratio for the abundance is asymptotically chi-squared with one degree of freedom, and that the maximum empirical likelihood estimator achieves semiparametric efficiency. Simulation studies show that the empirical likelihood-based method is superior to the conditional likelihood-based method: its confidence interval has much better coverage, and the maximum empirical likelihood estimator has a smaller mean square error. We analyse three datasets to illustrate the advantages of our empirical likelihood approach. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
132. Weighted envelope estimation to handle variability in model selection.
- Author
-
Eck, D. J. and Cook, R. D.
- Subjects
- *
ESTIMATION theory , *STATISTICAL bootstrapping , *ANALYSIS of variance , *ASYMPTOTIC efficiencies , *DATA analysis , *MATHEMATICAL models - Abstract
Envelope methodology can provide substantial efficiency gains in multivariate statistical problems, but in some applications the estimation of the envelope dimension can induce selection volatility that may mitigate those gains. Current envelope methodology does not account for the added variance that can result from this selection. In this article, we circumvent dimension selection volatility through the development of a weighted envelope estimator. Theoretical justification is given for our estimator, and the validity of the residual bootstrap for estimating its asymptotic variance is established. A simulation study and real-data analysis illustrate the utility of our weighted envelope estimator. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
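The residual bootstrap whose validity this paper establishes for its weighted envelope estimator can be illustrated on a plain least-squares slope (a generic sketch of the residual bootstrap only, not of envelope methodology; all data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.uniform(-1.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)   # true slope = 2.0

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat

# Residual bootstrap: resample centered residuals, refit, and use the
# spread of refitted slopes to estimate the estimator's variance.
B = 500
slopes = np.empty(B)
centered = resid - resid.mean()
for b in range(B):
    y_star = X @ beta_hat + rng.choice(centered, size=n, replace=True)
    slopes[b] = np.linalg.lstsq(X, y_star, rcond=None)[0][1]
boot_se = slopes.std(ddof=1)
```

In the paper's setting the refitted quantity would be the weighted envelope estimator itself, so the bootstrap spread also absorbs dimension-selection variability.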
133. Soil Moisture Estimation by SAR in Alpine Fields Using Gaussian Process Regressor Trained by Model Simulations.
- Author
-
Stamenkovic, Jelena, Thiran, Jean-Philippe, Guerriero, Leila, Ferrazzoli, Paolo, Notarnicola, Claudia, and Greifeneder, Felix
- Subjects
- *
SOIL moisture , *SYNTHETIC aperture radar , *ESTIMATION theory , *GAUSSIAN processes , *REGRESSION analysis , *ALPINE regions , *MATHEMATICAL models - Abstract
In this paper, we address the problem of retrieving soil moisture over a grassland alpine area from Synthetic Aperture Radar (SAR) data using a statistical algorithm trained by simulations of a physical model. A time series of C-band VV-polarized Wide Swath images acquired by Envisat Advanced SAR (ASAR) in the snow-free periods of 2010 and 2011 was simulated using a discrete radiative transfer model (RTM). The test area was located in the Mazia valley, South Tyrol (Italy), where the main land types are meadows and pastures. Soil moisture was collected from five meteorological stations, two of which situated in meadows and the rest in pastures. The smallest and the highest RMSEs of the RTM simulations were 0.78 dB and 1.91 dB, respectively. After backscattering simulation, the top soil moisture was estimated using Gaussian Process Regression (GPR). GPR was trained with the backscatter model simulations (including terrain features) for 2010, and then used to predict moisture from radar observations acquired in 2011. The relative importance of different input features was also assessed. The RMSE of the predicted soil moisture for the largest training data set (including aspect as a terrain feature) was 5.6% Vol. and the corresponding correlation coefficient was 0.84. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
134. Discriminative Robust Deep Dictionary Learning for Hyperspectral Image Classification.
- Author
-
Singhal, Vanika, Aggarwal, Hemant K., Tariyal, Snigdha, and Majumdar, Angshul
- Subjects
- *
HYPERSPECTRAL imaging systems , *DEEP learning , *IMAGE analysis , *ESTIMATION theory , *ROBUST control , *RANDOM noise theory , *MATHEMATICAL models - Abstract
This paper proposes a new framework for deep learning that has been particularly tailored for hyperspectral image classification. We learn multiple levels of dictionaries in a robust fashion. The last layer is discriminative that learns a linear classifier. The training proceeds greedily; at a time, a single level of dictionary is learned and the coefficients used to train the next level. The coefficients from the final level are used for classification. Robustness is incorporated by minimizing the absolute deviations instead of the more popular Euclidean norm. The inbuilt robustness helps combat mixed noise (Gaussian and sparse) present in hyperspectral images. Results show that our proposed techniques outperform all other deep learning methods—deep belief network, stacked autoencoder, and convolutional neural network. The experiments have been carried out on both benchmark deep learning data sets (MNIST, CIFAR-10, and Street View House Numbers) as well as on real hyperspectral imaging data sets. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
135. Soil Moisture Estimation Using Differential Radar Interferometry: Toward Separating Soil Moisture and Displacements.
- Author
-
Zwieback, Simon, Hensley, Scott, and Hajnsek, Irena
- Subjects
- *
SOIL moisture , *ESTIMATION theory , *RADAR interferometry , *TIME series analysis , *DECORRELATION (Signal processing) , *ELECTROMAGNETIC wave scattering , *MATHEMATICAL models - Abstract
Differential interferometric synthetic aperture radar (DInSAR) measurements are sensitive to displacements, but also to soil moisture mv changes. Here, we analyze whether soil moisture can be estimated from three DInSAR observables without making any assumptions about its complex spatio-temporal dynamics, with the goal of removing its contribution from the displacement estimates. We find that the referenced DInSAR phase can be a suitable means to estimate mv time series up to an overall offset, as indicated by correlations with in situ measurements of 0.75–0.90 in two campaigns. However, the phase can only be referenced when no displacements (and atmospheric delays) occur or when they can be estimated reliably. We study the separability of displacements and mv using two additional DInSAR observables (closure phase and coherence magnitude) that are sensitive to mv but insensitive to displacements. However, our analyses show that neither contains enough information for this purpose, i.e., it is not possible to estimate mv uniquely. The soil moisture correction of the displacement estimates is hence ambiguous too. Their applicability is furthermore limited by their proneness to model misspecifications and decorrelation. Consequently, the separation of soil moisture changes and displacements using DInSAR observations alone is difficult in practice, and—like for mitigating tropospheric errors—additional data (e.g., external mv estimates) or assumptions (e.g., spatio-temporal patterns) are required when the mv effects on the displacement estimates are comparable to the magnitude of the movements. This will be critical when soil moisture changes are correlated with the actual displacements. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
136. Using n-dimensional hypervolumes for species distribution modelling: A response to Qiao et al. (.
- Author
-
Blonder, Benjamin, Lamanna, Christine, Violle, Cyrille, Enquist, Brian J., and Peres‐Neto, Pedro
- Subjects
- *
SPECIES distribution , *ECOLOGICAL niche , *SPECIES diversity , *ESTIMATION theory , *SIMULATION methods & models , *MATHEMATICAL models - Abstract
Hypervolume approaches are used to quantify functional diversity and quantify environmental niches for species distribution modelling. Recently, Qiao et al. () criticized our geometrical kernel density estimation (KDE) method for measuring hypervolumes. They used a simulation analysis to argue that the method yields high error rates and makes biased estimates of fundamental niches. Here, we show that (a) KDE output depends in useful ways on dataset size and bias, (b) other species distribution modelling methods make equally stringent but different assumptions about dataset bias, (c) simulation results presented by Qiao et al. () were incorrect, with revised analyses showing performance comparable to other methods, and (d) hypervolume methods are more general than KDE and have other benefits for niche modelling. As a result, our KDE method remains a promising tool for species distribution modelling. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
137. Estimation of added resistance and ship speed loss in a seaway.
- Author
-
Kim, Mingyu, Turan, Osman, Day, Sandy, Incecik, Atilla, and Hizir, Olgun
- Subjects
- *
SHIP resistance , *SPEED , *ESTIMATION theory , *POTENTIAL flow , *MOTION analysis , *CLIMATOLOGY , *NAVIER-Stokes equations , *COMPUTATIONAL fluid dynamics , *MATHEMATICAL models - Abstract
The prediction of the added resistance and attainable ship speed under actual weather conditions is essential to evaluate true ship performance in operating conditions and to assess environmental impact. In this study, a reliable methodology is proposed to estimate the ship speed loss of the S175 container ship in specific sea conditions of wind and waves. Firstly, numerical simulations are performed to predict the added resistance and ship motions in regular head and oblique seas using three different methods: a 2-D and a 3-D potential flow method, and Computational Fluid Dynamics (CFD) with an Unsteady Reynolds-Averaged Navier-Stokes (URANS) approach. Simulations of various wave conditions are compared with the available experimental data in a validation study. Secondly, following the validation study in regular waves, the ship speed loss is estimated using the developed methodology by calculating the resistance in calm water and the added resistance due to wind and irregular waves, taking into account relevant wave parameters and wind speed corresponding to the Beaufort scale; results are compared with simulation results obtained by other researchers. Finally, the effect of the variation in ship speed, and therefore the ship speed loss, is investigated. This study shows the capabilities of the 2-D and 3-D potential methods and CFD to calculate the added resistance and ship motions in regular waves at various wave headings. It also demonstrates that the proposed methodology can estimate the impacts on the ship operating speed and the required sea margin in irregular seas. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
138. Uncertainty of the magnetic flux linkage measurements performed by modified current decay test.
- Author
-
Kowalik, Zygmunt
- Subjects
- *
MAGNETIC flux , *MATHEMATICAL models , *INTEGRAL calculus , *APPROXIMATION theory , *ESTIMATION theory - Abstract
The paper presents the estimation methodology for uncertainties of magnetic flux linkage measurements, when the flux linkage and current functions with respect to time are obtained instead of single values of these quantities. The computed uncertainties are then used to estimate the quality of an approximation of a current-flux characteristic in the mathematical model of an electrical machine when the approximation is based on the results of measurements. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
139. A class of box-cox transformation models for recurrent event data with a terminal event.
- Author
-
Zhou, Jie, Zhu, Jun, and Sun, Liu
- Subjects
- *
MATHEMATICAL transformations , *MATHEMATICAL models , *GENERALIZED estimating equations , *ESTIMATION theory , *ALGORITHMS - Abstract
In this article, we study a class of Box-Cox transformation models for recurrent event data in the presence of a terminal event, which includes the proportional means models as special cases. Estimating equation approaches and the inverse probability weighting technique are used for estimation of the regression parameters. The asymptotic properties of the resulting estimators are established. The finite sample behavior of the proposed methods is examined through simulation studies, and an application to a heart failure study is presented to illustrate the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
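For reference, the Box-Cox power transform underlying this class of models, with the log link as its λ → 0 limit (a generic sketch; the paper applies the transform within mean models for recurrent events, not to raw values as here):

```python
import math

def box_cox(y, lam):
    """Box-Cox power transform (y^lam - 1) / lam; the lam -> 0 limit is log(y)."""
    if y <= 0:
        raise ValueError("Box-Cox requires y > 0")
    if abs(lam) < 1e-12:
        return math.log(y)
    return (y ** lam - 1.0) / lam
```

Setting λ = 1 gives the identity up to a shift (so the proportional means model is recovered as a special case), while λ = 0 gives the log transform.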
140. Robust inference in a linear functional model with replications using the [formula omitted] distribution.
- Author
-
Galea, Manuel and de Castro, Mário
- Subjects
- *
MATHEMATICAL models , *ESTIMATION theory , *LINEAR statistical models , *EQUATIONS , *VECTOR analysis - Abstract
In this paper, we investigate model assessment, estimation and hypothesis testing in a linear functional relationship for replicated data when the distribution of the measurement errors is a multivariate Student t distribution. For statistical inference, we adopt the unbiased estimating equations approach. The resulting estimator is consistent and asymptotically normal; a closed form expression is also given for its asymptotic covariance matrix. A simple graphical device for model checking is proposed. We also describe how to test some hypotheses of interest on the parameter vector using the Wald statistic. A simulation study is performed to gauge the performance of the estimators and of the Wald statistic. The methodology developed in the paper is illustrated with a real data set. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
141. Estimation of optimum binder content of recycled asphalt incorporating a wax warm additive using response surface method.
- Author
-
Hamzah, Meor Othman, Gungat, Lillian, and Golchin, Babak
- Subjects
- *
ASPHALT , *RESPONSE surfaces (Statistics) , *VOLUMETRIC analysis , *ADDITIVES , *ESTIMATION theory , *COMPACTING , *STATISTICS , *MATHEMATICAL models - Abstract
This paper presents a new approach to estimate the optimum binder content (OBC) of recycled asphalts (RAs) incorporating a warm mix additive based on the interaction effects of compaction temperature, RA content and binder content using volumetric and strength characterisation. The experimental design was developed using response surface method (RSM) based on central composite design for various compaction temperatures (130–160 °C), RA contents (30–50%) and binder contents (4.9–6.0%). Laboratory tests were performed and analysed to meet the desired volumetric and strength properties according to the Malaysian specifications for the design of dense asphalt mixtures. Statistical analysis and mathematical models proposed by RSM were used to determine the OBC. The results showed that compaction temperature is the most significant factor in determining the OBC. There are minimal differences in the OBC variation of samples incorporating different dosages of RA. The developed model can be used for quick estimation of the OBC at various levels of compaction temperature and RA content. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
142. Discriminative Scale Space Tracking.
- Author
-
Danelljan, Martin, Hager, Gustav, Khan, Fahad Shahbaz, and Felsberg, Michael
- Subjects
- *
ESTIMATION theory , *FILTERS & filtration , *BIG data , *STATISTICAL correlation , *TRACKING & trailing , *MATHEMATICAL models - Abstract
Accurate scale estimation of a target is a challenging research problem in visual object tracking. Most state-of-the-art methods employ an exhaustive scale search to estimate the target size. The exhaustive search strategy is computationally expensive and struggles when faced with large scale variations. This paper investigates the problem of accurate and robust scale estimation in a tracking-by-detection framework. We propose a novel scale adaptive tracking approach by learning separate discriminative correlation filters for translation and scale estimation. The explicit scale filter is learned online using the target appearance sampled at a set of different scales. Contrary to standard approaches, our method directly learns the appearance change induced by variations in the target scale. Additionally, we investigate strategies to reduce the computational cost of our approach. Extensive experiments are performed on the OTB and the VOT2014 datasets. Compared to the standard exhaustive scale search, our approach achieves a gain of 2.5 percent in average overlap precision on the OTB dataset. Additionally, our method is computationally efficient, operating at a 50 percent higher frame rate compared to the exhaustive scale search. Our method obtains the top rank in performance by outperforming 19 state-of-the-art trackers on OTB and 37 state-of-the-art trackers on VOT2014. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
143. Mini-max-risk and mini-mean-risk inferences for a partially piecewise regression.
- Author
-
Lin, Lu, Liu, Yongxin, and Lin, Chen
- Subjects
- *
REGRESSION analysis , *COEFFICIENTS (Statistics) , *ESTIMATION theory , *SIMULATION methods & models , *ASYMPTOTIC theory in mathematical statistics , *MATHEMATICAL models - Abstract
We consider a partially piecewise regression in which the main regression coefficients are constant in all subdomains, but the extraessential regression function is variable across pieces and is difficult to estimate. Under this situation, two new regression methodologies are proposed under the criteria of mini-max-risk and mini-mean-risk. The resulting models can describe the regression relations in maximum-risk and mean-risk environments, respectively. A two-stage estimation procedure, together with a composite method, is introduced. The asymptotic normality of the estimators is established, and the standard convergence rate and efficiency are achieved. Some unusual features of the new estimators and predictions, and the related variable selection, are discussed for a comprehensive comparison. Simulation studies and a real financial example are given to illustrate the new methodologies. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
144. A View of Information-Estimation Relations in Gaussian Networks.
- Author
-
Dytso, Alex, Bustin, Ronit, Poor, H. Vincent, and Shamai (Shitz), Shlomo
- Subjects
- *
GAUSSIAN distribution , *CONTINUOUS distributions , *INFORMATION theory , *MATHEMATICAL models , *MATHEMATICAL analysis - Abstract
Relations between estimation and information measures have received considerable attention from the information theory community. One of the most notable such relationships is the I-MMSE identity of Guo, Shamai and Verdú that connects the mutual information and the minimum mean square error (MMSE). This paper reviews several applications of the I-MMSE relationship to information theoretic problems arising in connection with multi-user channel coding. The goal of this paper is to review the different techniques used on such problems, as well as to emphasize the added-value obtained from the information-estimation point of view. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
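The I-MMSE identity of Guo, Shamai and Verdú referred to in this abstract states that, for a standard Gaussian channel with noise N ~ N(0, 1) independent of the input X, and mutual information measured in nats,

```latex
\frac{\mathrm{d}}{\mathrm{d}\,\mathrm{snr}}\,
I\bigl(X;\ \sqrt{\mathrm{snr}}\,X + N\bigr)
  = \tfrac{1}{2}\,\mathrm{mmse}(\mathrm{snr}),
\qquad
\mathrm{mmse}(\mathrm{snr})
  = \mathbb{E}\Bigl[\bigl(X - \mathbb{E}\bigl[X \mid \sqrt{\mathrm{snr}}\,X + N\bigr]\bigr)^{2}\Bigr].
```

The mutual information as a function of signal-to-noise ratio is thus the integral of half the minimum mean square error, which is the bridge between estimation and coding exploited in the multi-user applications the paper reviews.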
145. A New Method of Multiattribute Decision-Making Based on Interval-Valued Hesitant Fuzzy Soft Sets and Its Application.
- Author
-
Yang, Yan, Lang, Lei, Lu, Liuli, and Sun, Yangmei
- Subjects
- *
MULTIPLE criteria decision making , *ESTIMATION theory , *MULTIVARIATE analysis , *GAME theory , *MATHEMATICAL models - Abstract
Combining interval-valued hesitant fuzzy soft sets (IVHFSSs) and a new comparative law, we propose a new method, which can effectively solve multiattribute decision-making (MADM) problems. Firstly, a characteristic function of two interval values and a new comparative law of interval-valued hesitant fuzzy elements (IVHFEs) based on the possibility degree are proposed. Then, we define two important definitions of IVHFSSs including the interval-valued hesitant fuzzy soft quasi subset and soft quasi equal based on the new comparative law. Finally, an algorithm is presented to solve MADM problems. We also use the method proposed in this paper to evaluate the importance of major components of the well drilling mud pump. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
146. On the performance of Turing's formula: A simulation study.
- Author
-
Grabchak, Michael and Cosme, Victor
- Subjects
- *
ESTIMATION theory , *SIMULATION methods & models , *POISSON distribution , *GEOMETRIC distribution , *ERROR analysis in mathematics , *SPECIES distribution , *MATHEMATICAL models - Abstract
Turing's formula is an amazing result that allows one to estimate the probability of observing something that has not been observed before. After a brief review of the literature, we perform a simulation study to better understand how well this formula works in a variety of situations. We also compare the performance of Turing's formula with several modifications that have appeared in the literature. We find that these modifications tend to outperform Turing's formula, but usually not by very much. We further find that Turing's formula and its modifications tend to work better for heavy-tailed distributions than for light-tailed ones. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
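Turing's formula itself is compact enough to state in a few lines: the total probability of all unseen species is estimated by N₁/n, the fraction of the sample made up of species observed exactly once (a minimal sketch of the basic formula only, not of the modified estimators compared in the paper):

```python
from collections import Counter

def turing_unseen_probability(sample):
    """Turing's formula: estimate the total probability of species not
    seen in the sample as N1/n, where N1 is the number of species
    observed exactly once and n is the sample size."""
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(sample)

# 2 singleton species ("b" and "c") out of 7 draws -> estimate 2/7
p0_hat = turing_unseen_probability(["a", "a", "a", "b", "c", "d", "d"])
```

The simulation study's finding that the formula favors heavy-tailed distributions is intuitive here: heavy tails produce many singletons, which is exactly the information N₁ carries.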
147. Extreme quantile estimation based on financial time series.
- Author
-
Dutta, Santanu and Biswas, Suparna
- Subjects
- *
ESTIMATION theory , *QUANTILES , *EXTREME value theory , *TIME series analysis , *VALUE at risk , *MONTE Carlo method , *FINANCIAL risk , *MATHEMATICAL models - Abstract
Estimation of market risk is an important problem in finance. Two well-known risk measures, viz., value at risk and median shortfall, turn out to be extreme quantiles of the marginal distribution of asset return. Time series on asset returns are known to exhibit certain stylized facts, such as heavy tails, skewness, volatility clustering, etc. Therefore, estimation of extreme quantiles in the presence of such features in the data seems to be of natural interest. It is difficult to capture most of these stylized facts using one specific time series model. This motivates nonparametric and extreme value theory-based estimation of extreme quantiles that do not require exact specification of the asset return model. We review these quantile estimators and compare their known properties. Their finite sample performance are compared using Monte Carlo simulation. We propose a new estimator that exhibits encouraging finite sample performance while estimating extreme quantile in the right tail region. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
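In its simplest form, the nonparametric route mentioned in the abstract above reduces to reading an extreme quantile off the empirical loss distribution. A minimal numpy sketch (function name and synthetic data are illustrative, not the paper's estimator):

```python
import numpy as np

def empirical_var(returns, alpha=0.01):
    """Nonparametric value at risk at level alpha: the (1 - alpha)-quantile
    of the loss distribution, estimated directly from historical returns."""
    losses = -np.asarray(returns, dtype=float)   # a loss is a negated return
    return float(np.quantile(losses, 1 - alpha))

# Heavy-tailed synthetic returns (Student-t), mimicking a stylized fact of asset returns.
rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_t(df=3, size=10_000)
var_99 = empirical_var(returns, alpha=0.01)      # 1% value at risk
```

The empirical quantile is model-free but deteriorates far in the tail, which is what motivates the extreme value theory-based alternatives the paper reviews.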
148. Reliability inference for a general lower-truncated family of distributions under records.
- Author
- Wang, Liang
- Subjects
- *RELIABILITY in engineering, *DISTRIBUTION (Probability theory), *INTERVAL analysis, *ESTIMATION theory, *STATISTICAL bootstrapping, *MATHEMATICAL models - Abstract
Based on record values, point and interval estimators are proposed in this paper for the parameters of a general lower-truncated family of distributions. Maximum likelihood and bias-corrected estimators are obtained for the unknown model parameters. Based on a sufficient and complete statistic, the bias-corrected estimator is also shown to be the uniformly minimum variance unbiased estimator. Exact confidence intervals and exact confidence regions are constructed for both the model and truncation parameters, and further confidence interval estimates based on asymptotic distribution theory and bootstrap approaches are obtained as well. Finally, two real-life examples and a numerical study are presented to illustrate the performance of our methods. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
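One of the interval methods named in the abstract above, the bootstrap, can be sketched generically. This is a plain percentile bootstrap for an arbitrary scalar statistic, not the paper's record-value procedure; names and data are illustrative:

```python
import numpy as np

def bootstrap_ci(data, estimator, n_boot=2000, level=0.95, seed=0):
    """Percentile-bootstrap confidence interval for a scalar statistic:
    resample the data with replacement, re-estimate, and take empirical
    quantiles of the bootstrap replicates."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    stats = np.array([estimator(rng.choice(data, size=data.size, replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [(1 - level) / 2, (1 + level) / 2])
    return float(lo), float(hi)

# 95% interval for the mean of a small sample.
sample = np.array([1.2, 0.7, 2.1, 1.5, 0.9, 1.8, 1.1, 1.4])
lo, hi = bootstrap_ci(sample, np.mean)
```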
149. Dissipativity-based asynchronous state estimation for Markov jump neural networks with jumping fading channels.
- Author
- Tao, Jie, Lu, Renquan, Su, Hongye, Wu, Zheng-Guang, and Xu, Yong
- Subjects
- *ARTIFICIAL neural networks, *ASYNCHRONOUS learning, *MARKOV processes, *ESTIMATION theory, *COEFFICIENTS (Statistics), *MATHEMATICAL models - Abstract
The problem of asynchronous state estimation for Markov jump neural networks with jumping fading channels is investigated in this article. The channel fading that occurs between the system and the state estimator is considered, and a modified discrete-time Rice fading model with mode-dependent channel coefficients is adopted. Because the modes of the system cannot be completely accessible to the state estimator at all times, an asynchronous state estimator that makes full use of the partial mode information available to it is introduced. Using a mode-dependent Lyapunov functional approach, sufficient conditions for the existence of an asynchronous state estimator for the Markov jump neural network are given that guarantee the stability and dissipativity of the augmented system. The gains of the asynchronous state estimator are obtained by solving a set of linear matrix inequalities. The merits and effectiveness of the developed design scheme are verified by a simulation example. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
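The paper's dissipativity-based LMI design is beyond a short sketch, but the underlying notion of stability for Markov jump systems has a compact second-moment test: the jump-linear system x_{k+1} = A_{theta_k} x_k is mean-square stable iff the spectral radius of the linear operator propagating the mode-conditioned second moments is below one. A numpy illustration of that standard test, not of the paper's estimator design (the matrices are toy examples):

```python
import numpy as np

def mean_square_stable(modes, P):
    """Mean-square stability test for x_{k+1} = A_{theta_k} x_k, where
    theta_k is a Markov chain with transition matrix P
    (P[i, j] = Pr(theta_{k+1} = j | theta_k = i)).

    The second moments Q_j = E[x x^T 1{theta = j}] evolve linearly,
        vec(Q_j) <- sum_i P[i, j] * (A_i kron A_i) vec(Q_i),
    so the system is mean-square stable iff the spectral radius of
    this block operator is below one."""
    A = [np.asarray(Ai, dtype=float) for Ai in modes]
    m, n = len(A), A[0].shape[0]
    L = np.zeros((m * n * n, m * n * n))
    for j in range(m):
        for i in range(m):
            L[j*n*n:(j+1)*n*n, i*n*n:(i+1)*n*n] = P[i, j] * np.kron(A[i], A[i])
    return float(max(abs(np.linalg.eigvals(L)))) < 1.0

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
stable_modes   = [0.5 * np.eye(2), 0.3 * np.eye(2)]   # both modes contractive
unstable_modes = [2.0 * np.eye(2), 0.1 * np.eye(2)]   # one expansive mode dominates
```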
150. A simple method for deriving the confidence regions for the penalized Cox’s model via the minimand perturbation.
- Author
- Lin, Chen-Yen and Halabi, Susan
- Subjects
- *SURVIVAL, *PERTURBATION theory, *ESTIMATION theory, *APPROXIMATION theory, *OPERATOR theory, *MATHEMATICAL models - Abstract
We propose a minimand perturbation method to derive confidence regions for regularized estimators in the Cox proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite-sample performance is not entirely satisfactory. Moreover, the sandwich formula can only provide variance estimates for the nonzero coefficients. In this article, we present a generic description of the perturbation method and then introduce a computational algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method better approximates the limiting distribution of the adaptive LASSO estimator and produces more accurate inference than the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a Phase III clinical trial in prostate cancer. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
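The minimand perturbation idea, refit the penalized criterion many times with i.i.d. mean-one random weights on the per-subject loss terms and take empirical quantiles of the resulting estimates, can be illustrated outside the Cox/adaptive-LASSO setting of the paper. The sketch below uses a ridge-penalized least-squares minimand instead, chosen only because the weighted minimizer has a closed form; all names and data are illustrative assumptions:

```python
import numpy as np

def perturbed_ridge(X, y, lam=0.1, n_perturb=500, level=0.95, seed=0):
    """Minimand perturbation for a ridge-penalized least-squares fit.

    Each replicate minimizes sum_i w_i (y_i - x_i' beta)^2 + lam * ||beta||^2
    with i.i.d. mean-one exponential weights w_i; empirical quantiles of the
    perturbed solutions give percentile confidence intervals."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    I = np.eye(p)

    def fit(w):
        # Weighted normal equations: (X'WX + lam*I) beta = X'Wy.
        Xw = X * w[:, None]
        return np.linalg.solve(X.T @ Xw + lam * I, Xw.T @ y)

    point = fit(np.ones(n))                      # unperturbed point estimate
    draws = np.array([fit(rng.exponential(1.0, size=n)) for _ in range(n_perturb)])
    lo, hi = np.quantile(draws, [(1 - level) / 2, (1 + level) / 2], axis=0)
    return point, lo, hi

# Toy data with one true coefficient equal to 2.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)
beta_hat, lo, hi = perturbed_ridge(X, y)
```

Swapping the closed-form `fit` for an iterative adaptive-LASSO solver recovers the spirit of the paper's algorithm, at the cost of one optimization per perturbation draw.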
Discovery Service for Jio Institute Digital Library