320 results
Search Results
102. Enhancement of fault diagnosis of rolling element bearing using maximum kurtosis fast nonlocal means denoising.
- Author
- Laha, S.K.
- Subjects
- *ROLLING (Metalwork), *KURTOSIS, *FAULT tolerance (Engineering), *SIGNAL processing, *IMAGE processing, *ALGORITHMS, *MINIMUM entropy method
- Abstract
In this paper, a modified nonlocal means denoising (NL-means) algorithm is proposed for rolling element bearing fault diagnosis. Although nonlocal means denoising is widely used in image processing, the algorithm is rarely applied to 1-D signal processing. The present work deals with the application of 1-D nonlocal means denoising to the enhancement of fault signatures in rolling element bearings. The parameters of the NL-means method are obtained by maximizing the kurtosis of the bearing vibration signal. The proposed method is compared with the minimum entropy deconvolution (MED) technique, and the results indicate that the proposed method performs better for bearing fault diagnosis. The method is shown to be robust against various noise levels. Further, the envelope spectrum of the bearing vibration signal is used to obtain the characteristic bearing defect frequencies. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
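The kurtosis-guided NL-means idea summarized in result 102 can be sketched in a few lines. This is a minimal 1-D illustration under assumed patch/search sizes and an assumed bandwidth grid, not the authors' implementation:

```python
import numpy as np

def nlmeans_1d(x, half_patch=3, half_search=10, h=0.5):
    """Denoise a 1-D signal with nonlocal means: each sample is replaced
    by a weighted average of samples whose surrounding patches look
    similar; h controls the filter bandwidth."""
    n = len(x)
    pad = np.pad(x, half_patch, mode='reflect')
    out = np.empty(n)
    for i in range(n):
        p_i = pad[i:i + 2 * half_patch + 1]
        lo, hi = max(0, i - half_search), min(n, i + half_search + 1)
        w = np.empty(hi - lo)
        for k, j in enumerate(range(lo, hi)):
            p_j = pad[j:j + 2 * half_patch + 1]
            w[k] = np.exp(-np.mean((p_i - p_j) ** 2) / h ** 2)
        out[i] = np.dot(w, x[lo:hi]) / w.sum()
    return out

def kurtosis(x):
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

def denoise_max_kurtosis(x, h_grid=(0.1, 0.2, 0.5, 1.0)):
    """Pick the NL-means bandwidth that maximizes output kurtosis,
    favouring impulsive (fault-like) content."""
    candidates = [nlmeans_1d(x, h=h) for h in h_grid]
    return max(candidates, key=kurtosis)
```

Because impulsive fault signatures are heavy-tailed, selecting the bandwidth that maximizes output kurtosis tends to preserve the impacts while smoothing the background noise.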
103. A simple algorithm for amplitude estimation of distorted power system signal.
- Author
- Goswami, Soumyajit, Sarkar, Arghya, and Sengupta, S.
- Subjects
- *DIGITAL signal processing, *ALGORITHMS, *AMPLITUDE estimation, *BANDPASS filters, *DIGITAL filters (Mathematics), *INTEGRATORS
- Abstract
This paper presents the development and implementation of a novel digital signal processing algorithm for on-line estimation of the peak value of the fundamental component of a non-sinusoidal power system signal. The algorithm is derived through rigorous mathematical deduction from stable band-pass first- and second-degree digital integrators and is combined with a zero-crossing avoidance technique. Compared with well-established techniques such as the enhanced phase-locked-loop (EPLL) based system, the proposed algorithm provides a higher degree of immunity and insensitivity to harmonics and noise, and a faster response. Based on simulation studies, the performance of the proposed algorithm under different operating conditions is presented, and its accuracy and response time are compared with those of the EPLL system. Moreover, a simple laboratory setup implemented with MATLAB and dedicated hardware is built to verify the performance of the proposed algorithm in real-time applications. The framework of the laboratory prototype is described, and the modifications to the proposed algorithm necessary for on-line implementation are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
104. Active federated transfer algorithm based on broad learning for fault diagnosis.
- Author
- Liu, Guokai, Shen, Weiming, Gao, Liang, and Kusiak, Andrew
- Subjects
- *MACHINE learning, *ALGORITHMS, *FAULT diagnosis, *LEARNING, *ACTIVE learning
- Abstract
Federated learning (FL), which guarantees data privacy, is of great interest in decentralized fault diagnosis. However, limited research attention has been paid to the dynamic domain-shift issue caused by varying working conditions. This paper proposes an active federated transfer algorithm based on broad learning (AFTBL) to address the domain-shift issue in FL. First, a central server dispatches a global model to the source clients for collaborative modeling. Subsequently, the global model is initialized with a federated averaging strategy. Next, the initialized global model is used to annotate emerging signals from the target clients based on a proposed active sampling strategy. Finally, an asynchronous update scheme is designed to adapt the global model to the target domain. The performance of the AFTBL algorithm is validated with three datasets, including 24 centralized- and decentralized-modeling tasks. The computational results indicate that the proposed algorithm is more accurate and efficient than the prevalent algorithms. • The cross-domain incremental federated learning problem is investigated. • An active federated broad transfer algorithm is proposed for fault diagnosis. • Automated data annotation and selection are proposed for incremental model update. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
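The federated-averaging initialization step mentioned in result 104 can be illustrated in a few lines. Weighting each client by its local sample count is the standard FedAvg convention, not a detail taken from this paper:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate per-client model parameters into a global model.

    Each client's parameter set (a list of arrays, one per layer) is
    weighted by its local sample count, as in standard FedAvg.
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    global_weights = []
    for layer in range(n_layers):
        acc = sum(w[layer] * (s / total)
                  for w, s in zip(client_weights, client_sizes))
        global_weights.append(acc)
    return global_weights
```

The server would dispatch the averaged parameters back to clients between communication rounds.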
105. Robust fault recognition and correction scheme for induction motors using an effective IoT with deep learning approach.
- Author
- Tran, Minh‐Quang, Amer, Mohammed, Dababat, Alya', Abdelaziz, Almoataz Y., Dai, Hong-Jie, Liu, Meng-Kun, and Elsisi, Mahmoud
- Subjects
- *DEEP learning, *MACHINE learning, *INDUCTION motors, *INTERNET of things, *LIFE expectancy, *CYBERTERRORISM, *ALGORITHMS
- Abstract
• Uncertainties due to cyberattacks are tackled with a low-computation-burden deep learning algorithm. • A new IoT structure for monitoring the induction motor is developed. • A graphical visualization is performed to identify the faults of the induction motors. • Parameterizing the DNN of the fault recognition model is investigated. • The results confirm the robustness and resiliency of the designed model. Maintaining electrical machines in good working order and increasing their life expectancy is one of the main challenges, and early and accurate detection of faults is crucial to this process. Induction motors (IMs) are widely utilized in various fields, including industrial and domestic applications, and require effective monitoring of their status. This paper proposes a novel fault recognition and correction (FRC) scheme based on the internet of things (IoT) and deep learning for IMs. In the developed system, vibration signals recorded during motor operation are used to generate bearing fault features, which are fed to the designed deep learning model to identify bearing faults. The robustness of the proposed approach is tested against a false data injection (FDI) attack. Further, the proposed deep learning approach is assessed and compared with other state-of-the-art algorithms from the literature. Experimental testing is carried out on a real IM to confirm the suitability of the developed fault detection and correction scheme. Compared to other fault recognition techniques, the proposed method proves to be more effective. In essence, the results verify the robustness of the proposed strategy against the FDI attack, making it possible to recognize faults with confidence and improve decision-making about the motor's status. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
106. Research on control algorithm of strong magnetic interference compensation for MEMS electronic compass.
- Author
- Fu, Jun, Ning, Zhiwen, Li, Bao, and Lv, Teng
- Subjects
- *CONSTANT current sources, *DATA processing service centers, *COMPUTER software testing, *ALGORITHMS, *AUTOMOTIVE navigation systems
- Abstract
• Modeling of magnetic interference compensation. • A compensation scheme without a heading reference is put forward. • A compensation scheme with a heading reference is designed. • A software and hardware test platform is built. • A vehicle experiment is designed to verify the proposed method. In view of the complicated data acquisition process of traditional MEMS electronic compass compensation methods and their inability to achieve high-precision compensation under strong magnetic interference, this paper studies a compensation method with an external three-axis coil. Firstly, the error modeling of the magnetometer is carried out, and two methods are introduced: compensation without a heading reference based on M-estimates, and compensation with a heading reference. We built a hardware circuit on the basis of a physical coil, used a PXI industrial computer as the data acquisition and processing center, provided current excitation for the coil through a voltage-controlled constant current source, and used LabVIEW to design a control program. Finally, we comprehensively evaluate and verify the feasibility and compensation accuracy of the proposed scheme through vehicle-mounted tests. The test results show that the heading-reference compensation algorithm has higher compensation accuracy: the compensated heading error is within ±1.5°, and the standard deviation is below 0.8°, which meets the needs of car navigation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
107. Reducing the systematic error of DIC using gradient filtering.
- Author
- Cui, Hengrui, Zeng, Zhoumo, Zhang, Hui, and Yang, Fenglong
- Subjects
- *FILTERS & filtration, *ALGORITHMS
- Abstract
• A gradient operator processing method is proposed, which decomposes the gradient operator into two parts, gradient information acquisition (a one-dimensional gradient operator) and gradient filtering (a filtering operator), facilitating the analysis of the effect of the gradient operator on the error of DIC. • When the decomposed model is used to analyse the effect of different filter operators on the error, the filter operator is no longer required to be perpendicular and equal in size to the gradient operator when constructing a 2-D gradient operator. • The gradient filter DIC (GF-DIC) is constructed based on the results of the above analysis. The filtering of the gradient information is added to the DIC calculation process, and a suitable filtering method is selected through the above analysis to reduce the error of the DIC calculation results. The inverse compositional Gauss-Newton (IC-GN) DIC algorithm is currently the most popular DIC algorithm, and error analysis of the algorithm is necessary. However, the effect of the gradient operator on the error could not previously be analysed systematically due to the different dimensions of gradient operators. In this paper, the 1-D and 2-D gradient operators are incorporated into the same framework by decomposing the gradient operator into two parts: gradient acquisition and gradient filtering. Based on this analysis, a DIC method based on gradient filtering is constructed, and the simulation results show that the systematic error is reduced to less than 10% of that of the original IC-GN DIC algorithm, enhancing the robustness to noise variations. Finally, validation is performed using an open-source dataset, demonstrating that the proposed method can reduce the systematic error to less than 15%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
108. A baseline drift removal algorithm based on cumulative sum and downsampling for hydroacoustic signal.
- Author
- Wu, Daiyue, Zhang, Guojun, Zhu, Shan, Liu, Yan, Liu, Guochang, Jia, Li, Wu, Yuding, and Zhang, Wendong
- Subjects
- *SIGNAL theory, *SIGNAL filtering, *ALGORITHMS, *HIGHPASS electric filters, *SIGNAL processing
- Abstract
• We analyzed the baseline drift problem in hydroacoustic signals, where conventional filtering often causes distortion of the target signal. • We proposed a baseline drift fitting algorithm based on cumulative sum and downsampling. • The spectral variation of the cumulative sum and difference is derived. • The effectiveness and noise robustness of the algorithm on simulated signals are verified in comparison with VMD. • The time complexity of this algorithm is much lower than that of VMD. Baseline drift is a widespread problem in a wide variety of signals and is traditionally handled by high-pass filtering to remove the low-frequency portion. However, the frequency of hydroacoustic signals is also very low, and such filtering can distort the signal. In recent years, with the maturing of signal mode theory, variational mode decomposition (VMD) has become a widely used signal decomposition algorithm. VMD is effective in eliminating baseline drift, but its calculation steps are complicated. In this paper, a baseline drift elimination algorithm based on cumulative sum and downsampling is proposed for the baseline drift characteristics of hydroacoustic signals; the baseline is fitted directly from the signal to be processed. The algorithm is simple, achieves better results than VMD in simulated experiments and actual hydroacoustic signal processing, and has good prospects for application in hydroacoustic signal processing. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
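The downsampling idea behind result 108 can be sketched simply: heavy block-averaged downsampling keeps only the lowest frequencies, and interpolating the coarse signal back to full length yields a baseline estimate to subtract. This is a generic illustration of the approach, not the authors' exact cumulative-sum formulation; the block length is an assumed parameter:

```python
import numpy as np

def remove_baseline(x, block=64):
    """Estimate slow baseline drift by block-averaged downsampling and
    linear interpolation, then subtract it from the signal."""
    n = len(x)
    n_blocks = max(2, n // block)
    trimmed = x[:n_blocks * block].reshape(n_blocks, block)
    coarse = trimmed.mean(axis=1)                  # downsampled signal
    centers = (np.arange(n_blocks) + 0.5) * block  # block-center indices
    baseline = np.interp(np.arange(n), centers, coarse)
    return x - baseline, baseline
```

Unlike a high-pass filter, the block length directly controls which drift scales are removed, which makes it easier to keep a low-frequency target signal intact.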
109. Optimization and experimental characterization of novel measurement methods for wide-band spectrum sensing in cognitive radio applications.
- Author
- Angrisani, Leopoldo, Capriglione, Domenico, Cerro, Gianni, Ferrigno, Luigi, and Miele, Gianfranco
- Subjects
- *COGNITIVE radio, *ALGORITHMS, *FALSE alarms, *RADIO transmitter-receivers, *WIRELESS communications
- Abstract
Spectrum sensing is a fundamental task in the complex field of cognitive radio systems. It allows a cognitive terminal to scan a frequency span of interest and sense the presence of other users transmitting over it. Many spectrum sensing methods are present in the literature and many interesting algorithms have been proposed. Unfortunately, very few methods allow one to determine the exact boundaries of the user signal without knowing the channelization of the spectrum of interest. In this context, the paper proposes two novel algorithms whose purpose is twofold: to keep a low computational burden and to provide information at the most detailed level with respect to the category the algorithms belong to. Tests executed on simulated and emulated signals have demonstrated that both algorithms reach a detection probability greater than 95% and a false alarm probability lower than 5%, even in scenarios characterized by an SNR as low as −10 dB. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
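A baseline energy-detection pass over the spectrum, the generic starting point that wide-band sensing algorithms such as those in result 109 refine, can be sketched as follows. The median-based noise-floor estimate and the 10 dB margin are common conventions assumed here, not details from the paper:

```python
import numpy as np

def detect_occupied_bins(x, noise_floor_db=None, margin_db=10.0):
    """Flag FFT bins whose power exceeds the estimated noise floor by
    `margin_db`; returns a boolean occupancy mask over frequency bins."""
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    psd_db = 10 * np.log10(psd + 1e-30)
    if noise_floor_db is None:
        noise_floor_db = np.median(psd_db)  # robust noise-floor estimate
    return psd_db > noise_floor_db + margin_db
```

Runs of consecutive flagged bins then give a crude estimate of the occupied band's boundaries, the quantity the paper's algorithms aim to locate precisely.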
110. Novel electrochemical impedance simulation design via stochastic algorithms for fitting equivalent circuits.
- Author
- Kappel, Marco A.A., Fabbri, Ricardo, Domingos, Roberto P., and Bastos, Ivan N.
- Subjects
- *SIMULATION methods & models, *ALGORITHMS, *IMPEDANCE spectroscopy, *ELECTROLYTES, *ELECTRIC circuits
- Abstract
Electrochemical impedance spectroscopy (EIS) is of great value to corrosion studies because it is sensitive to transient changes that occur at the metal-electrolyte interface. A useful way to link the results of electrochemical impedance spectroscopy to corrosion phenomena is by simulating equivalent circuits. Equivalent circuit models are very attractive because of their relative simplicity, enabling the monitoring of electrochemical systems that have a complex physical mechanism. In this paper, the stochastic algorithm Differential Evolution is proposed to fit an equivalent circuit to the EIS results over a wide potential range. Although widely used, EIS is often limited to the corrosion potential. This greatly hinders the analysis of the effect of the applied potential, which strongly affects the interface, as shown, for example, in polarization curves. Moreover, the data from both the EIS and the DC values were used in the proposed scheme, allowing the best fit of the model parameters. The approach was compared to the standard Simplex square-residual minimization of EIS data. In order to manage the large amount of generated data, the EIS-Mapper software package, which also plots the 2D/3D diagrams against potential, was used to fit the equivalent circuit of multiple diagrams; EIS-Mapper also computed all simulations. The results of 67 impedance diagrams of stainless steel in a 3.5% NaCl medium at 25 °C, obtained in steps of 10 mV, and the respective values of the fitted parameters of the equivalent circuit are reported. The present approach conveys new insight into the use of electrochemical impedance and bridges the gap between polarization curves and equivalent electrical circuits. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
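Fitting an equivalent circuit with Differential Evolution, as proposed in result 110, can be sketched with SciPy. The circuit below (a solution resistance in series with a parallel R‖C, i.e. a simple Randles cell without diffusion) and the parameter bounds are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.optimize import differential_evolution

def circuit_impedance(params, omega):
    """Rs + (Rct || C): a simple Randles cell without diffusion."""
    rs, rct, c = params
    z_par = rct / (1.0 + 1j * omega * rct * c)
    return rs + z_par

def fit_circuit(omega, z_measured):
    """Fit (Rs, Rct, C) by minimizing the squared residual of the
    complex impedance with Differential Evolution."""
    def cost(p):
        z = circuit_impedance(p, omega)
        return np.sum(np.abs(z - z_measured) ** 2)
    bounds = [(1.0, 100.0), (10.0, 1000.0), (1e-7, 1e-3)]
    result = differential_evolution(cost, bounds, seed=0)
    return result.x
```

A stochastic, population-based optimizer avoids the sensitivity to starting values that local Simplex minimization can show on multi-parameter impedance fits.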
111. Research and design of a novel, low-cost and flexible tactile sensor array.
- Author
- Huang, Yuanyang, Jiang, Qi, Li, Yibin, Zhao, Chenlu, Wang, Junjie, and Liang, Pei
- Subjects
- *SENSOR arrays, *ARRAY processing, *FABRICATION (Manufacturing), *POLYDIMETHYLSILOXANE, *ALGORITHMS
- Abstract
At present, the flexible tactile sensor array is a hot research topic in the field of robotics. This paper introduces the design, fabrication and measurement of a novel tactile sensor array based on soft materials and piezoresistivity. The sensor array is capable of detecting the contact force and contact location via a triangular location algorithm. This algorithm, which uses the responses of different sensing elements to a contact force, reduces the amount of data to process and allows the density of the sensor array to be lower. The flexible pressure sensing elements are integrated in a flexible PDMS (polydimethylsiloxane) film, which avoids the limitations that traditional rigid sensing elements exhibit under bending deformation. The sensor array's performance has been experimentally evaluated. The results show that the proposed sensor array has an accuracy of 88.23% for force measurement, and its spatial resolution can reach 2.5 mm. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
112. Development and testing of an error compensation algorithm for photogrammetry assisted robotic machining.
- Author
- Barnfather, J.D., Goodfellow, M.J., and Abram, T.
- Subjects
- *PHOTOGRAMMETRY, *MACHINING, *INDUSTRIAL robots, *MEASUREMENT, *ALGORITHMS
- Abstract
Robotic machining of relatively small features on large components potentially offers an opportunity to reduce capital expenditure in various industries. A barrier to this is the inability of robotic machine tools to machine to the tolerances of conventional equipment. This paper proposes and tests a photogrammetry-based metrology assistance algorithm to compensate for robotic machining inaccuracy, as measured in the part, and investigates the associated measurement challenges. The algorithm is executed in a two-stage process, whereby the closest point to the nominal cutting coordinates on an aligned inspection surface is used for compensation, creating a penultimate measured cut. Finally, the finishing program coordinates are compensated to correct under-cuts from the measured cut stage. Conceptual tests using simulated measurement data give confidence that the proposed approach works well. In experiments, a key area for further R&D effort is found to be uneven inspection-point coverage, which results in alignment issues and a poor surface finish. Ultimately, direction is given to improve measurement system performance to enable the proposed metrology assistance approach to be implemented and therefore the benefits of "process-to-part" robotic machining to be realised. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
113. Feature selection for machine fault diagnosis using clustering of non-negation matrix factorization.
- Author
- Liang, Lin, Liu, Fei, Li, Maolin, He, Kangkang, and Xu, Guanghua
- Subjects
- *FEATURE selection, *DIMENSIONAL reduction algorithms, *LEAST squares, *ROLLER bearings, *ALGORITHMS
- Abstract
Feature selection has been attracting more attention in recent years for its advantages in improving fault diagnosis efficiency and reducing the cost of feature acquisition. In this paper, we regard feature selection as a clustering process with a data decomposition technique and propose a novel feature selection method based on non-negative matrix factorization (NMF). An Alternating Least Squares (ALS) algorithm with sparsity control and decorrelation constraints is adopted to factorize the original feature space into two low-rank matrices (projection vectors and feature spaces). Considering the clustering distribution of the projection space, the optimal feature vectors are calculated by means of the best update-rule parameters. Besides, the inverse of the feature vectors is further utilized in seeking the feature subset, which ensures high classification performance. Experiments are performed using two standard data sets and a roller bearing fault diagnosis case. The results are compared with those obtained by applying the whole feature set and standard feature selection algorithms, and the comparative analysis confirms the effectiveness of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
114. Computer vision algorithm for measurement and inspection of O-rings.
- Author
- Peng, Gaoliang, Zhang, Zhujun, and Li, Weiquan
- Subjects
- *O-rings, *COMPUTER vision, *SEALING machines, *ALGORITHMS, *SEALING (Technology) equipment & supplies manufacturing
- Abstract
O-rings are one of the most common seals used in industry, and precision measurement and inspection of O-rings play a vital role in seal quality control. Human inspection is the traditional way to remove defective O-rings, but it is unstable and time-consuming. The aim of this paper is to utilize detection algorithms based on computer vision to control the quality of O-rings, including an accurate measurement algorithm for the internal/sectional diameter and a classification algorithm for surface defects. A machine vision system is implemented to analyze the captured images of O-rings and perform the measurement and inspection processes. The proposed system is evaluated by inspecting a series of O-rings. Experimental results show that the proposed vision system is capable of measuring and inspecting O-ring seals with good accuracy and efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
115. Evaluation of composite positional error based on superposition and containment model and geometrical approximation algorithm.
- Author
- He, Gaiyun, Guo, Longzhen, Zhang, Mei, and Liu, Peipei
- Subjects
- *APPROXIMATION algorithms, *MEASUREMENT errors, *SUPERPOSITION (Optics), *MATHEMATICAL models, *ALGORITHMS
- Abstract
Composite positional error is an important parameter in measuring parts with patterns of features. This paper, based on the mathematical definition of spatial straightness in the ASME Y14.5.1M standard, systematically studies the evaluation of composite positional error. Firstly, the mathematical definitions of composite positional error for the two most common types of patterns of features in engineering were derived. Then, an advanced mathematical model based on the superposition and containment method was proposed for the evaluation of composite positional error. In this model, an additional variable representing the relative angle among holes was added to the optimization problem. To solve this model, a geometrical approximation algorithm was also proposed, in which the steepest descent direction is determined by the exact coordinates of the measured points, leading to better convergence and time efficiency than general algorithms for solving the proposed model. Finally, two simulations showed that with the proposed model and algorithm, accuracy was improved by about 8% and efficiency by about 50%. An application showed how the model and algorithm work; owing to their accuracy, fewer misrejections will occur in product examination. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
116. Freeform texture representation and characterisation based on triangular mesh projection techniques.
- Author
- Abdul-Rahman, Hussein S., Lou, Shan, Zeng, Wenhan, Jiang, Xiangqian, and Scott, Paul J.
- Subjects
- *SURFACE analysis, *TEXTURE analysis (Image processing), *NON-Euclidean geometry, *PARAMETERS (Statistics), *ALGORITHMS, *COMPUTER graphics
- Abstract
Texture characterisation for freeform non-Euclidean surfaces is becoming increasingly important due to the widespread use of such surfaces in different applications, e.g. additive manufacturing. Four main steps are required to analyse and characterise those surfaces: surface representation, surface filtration and decomposition, texture representation, and finally the calculation of the surface parameters. Recently, the representation, as well as the filtration and decomposition, of freeform surfaces has been investigated and some algorithms have been proposed. This paper, however, sheds light on how to represent the texture of freeform non-Euclidean surfaces before calculating the parameters. A novel model for freeform surface parameterisation is introduced; this model applies a projection algorithm before the actual calculation of the parameters. Different projection algorithms have been adopted from the mesh projection techniques found in the field of computer graphics. The results of applying those algorithms to represent the texture of both simulated and bio-engineering surfaces are shown, and a comparison between the algorithms is carried out. Furthermore, examples of calculating some of the surface parameters for freeform surfaces are given. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
117. A robust DIBR 3D image watermarking algorithm based on histogram shape.
- Author
- Cui, Chen and Niu, Xia-Mu
- Subjects
- *DIGITAL image watermarking, *RENDERING (Computer graphics), *DIGITAL image processing, *ALGORITHMS, *HISTOGRAMS, *THREE-dimensional imaging
- Abstract
Depth-image-based rendering (DIBR) has become an important technology in 3D displaying. Since either the center image or the generated virtual images might be illegally distributed, both kinds of images need to be protected. In this paper, a histogram-shape-based watermarking algorithm is proposed to protect DIBR 3D images. To make the watermarking method robust to common attacks, a pixel-mean-value-based pixel group selection method is presented to select several suitable pixel groups for watermark embedding. To address the problem that the division of pixel groups can affect the performance of watermark extraction, the width of each pixel group is determined by the maximum difference of the pixel mean value between the original and attacked images. In this way, the watermark can be extracted with a lower bit error rate (BER) from the attacked watermarked image. As the experimental results show, the proposed method is much more robust to geometric attacks and combined attacks than existing methods. In addition, the proposed watermarking method is also robust to baseline-distance adjustment and depth-image blurring. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
118. Ultimate iterative UFIR filtering algorithm.
- Author
- Shmaliy, Yuriy S., Khan, Sanowar, and Zhao, Shunyi
- Subjects
- *FINITE impulse response filters, *ITERATIVE methods (Mathematics), *ALGORITHMS, *KALMAN filtering, *ROBUST statistics
- Abstract
Measurements are often provided in the presence of noise and uncertainties that require optimal filters to estimate processes with the highest accuracy. The ultimate iterative unbiased finite impulse response (UFIR) filtering algorithm presented in this paper is more robust in the real world than the Kalman filter. It completely ignores the noise statistics and initial values while demonstrating better accuracy under mismodeling and temporary uncertainties, and lower sensitivity to errors in the noise statistics. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
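The defining property of the UFIR estimator in result 118, that no noise statistics or initial values are needed, can be illustrated in batch form for a constant-velocity model: over a horizon of the N most recent measurements, the state is simply the least-squares fit, and the horizon length is the only tuning parameter. The model and horizon below are assumptions for illustration, not the paper's iterative formulation:

```python
import numpy as np

def ufir_batch(y, dt=1.0, horizon=20):
    """Batch unbiased FIR state estimate for a constant-velocity model.

    At each step the position/velocity state is the least-squares fit to
    the last `horizon` measurements; no noise covariances are required.
    Entries before the first full horizon are NaN.
    """
    n = len(y)
    est = np.full((n, 2), np.nan)
    t = np.arange(-(horizon - 1), 1) * dt       # times relative to "now"
    H = np.column_stack([np.ones(horizon), t])  # observation model
    for k in range(horizon - 1, n):
        theta, *_ = np.linalg.lstsq(H, y[k - horizon + 1:k + 1], rcond=None)
        est[k] = theta                          # [position_now, velocity]
    return est
```

Because the estimate depends only on a finite window of data, temporary model mismatches and wrong noise assumptions cannot accumulate the way they can in an infinite-impulse-response filter such as the Kalman filter.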
119. Towards an intelligent approach for CMM inspection planning of prismatic parts.
- Author
- Stojadinovic, Slavenko M., Majstorovic, Vidosav D., Durakbasa, Numan M., and Sibalija, Tatjana V.
- Subjects
- *COORDINATE measuring machines, *ROBOTIC path planning, *WORKPIECES, *INTELLIGENT control systems, *ENGINEERING inspection, *ALGORITHMS
- Abstract
This paper presents a model of inspection planning for prismatic parts (PPs) on CMMs, in terms of an intelligent concept of inspection planning. The developed model is composed of Inspection Feature Construction, Sampling Strategy, Probe Accessibility Analysis, Automated Collision-Free Generation, and Probe Path Planning. In this model, the simulation of a measuring probe path is based on three algorithms: the Algorithm for Measurement Points Distribution, the Algorithm for Collision Avoidance, and the Algorithm for Probe Path Planning. The simulation output is a measuring protocol for the CMM UMM500. An experiment was performed on two PPs produced for the purpose of this research. The inspection results show that all tolerances for both PPs are within the specified limits. The proposed model presents a novel approach to automatic inspection and a basis for the development of an integrated, intelligent concept of inspection planning. The advantages of this approach include reduced preparation time due to the automatic generation of a measuring protocol, the possibility of optimising the measuring probe path, i.e. reducing the time needed for the actual measurement and analysis of a workpiece, and the automatic configuration of measuring probes. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
120. Modified kernel density-based algorithm for despiking acoustic Doppler velocimeter data.
- Author
- Chen, Yue, Yang, Wenjun, Lin, Haili, Li, Bin, and Jing, Siyu
- Subjects
- *OPEN-channel flow, *POWER spectra, *POWER density, *ALGORITHMS, *TIME series analysis
- Abstract
• Distribution characteristics are useful in de-spiking velocity data. • The proposed filtering method preserves valid data to the greatest extent. • Power spectra calculated from the altered fluctuating velocity signals change only subtly. • Care should be taken in choosing a suitable spike-replacement strategy. The acoustic Doppler velocimeter (ADV) has attracted significant attention, especially research on its data post-processing, which is a vital step before calculating turbulence statistics. This work processes acoustic correlation velocimeter (ACV) time series, which have characteristics identical to ADV data, sampled in open-channel flows, utilizing current algorithms combined with a proposed modified kernel density-based algorithm (mkde) that preserves valid data to a larger extent. Trials investigating several interpolation strategies among various velocity series with different data quality demonstrate that, without any replacement, the power spectral density of data series with up to 22% spikes satisfies the Kolmogorov −5/3 law in an inertial subrange. Moreover, in highly contaminated velocity series presenting more consecutive spikes, linear interpolation and last-valid-data interpolation turned out to be more robust. This paper sheds light on dealing with outliers in acoustic Doppler velocimeter data. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
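As a simplified stand-in for the despike-and-replace pipeline of result 120 (a median-absolute-deviation threshold rather than the authors' kernel density estimator, with an assumed threshold factor), the flagging and interpolation steps look like this:

```python
import numpy as np

def despike(u, thresh=5.0):
    """Flag samples whose deviation from the median exceeds `thresh`
    robust standard deviations, then fill the gaps by linear
    interpolation between valid neighbours."""
    u = np.asarray(u, dtype=float)
    med = np.median(u)
    mad = np.median(np.abs(u - med))
    sigma = 1.4826 * mad                 # MAD -> std for Gaussian data
    bad = np.abs(u - med) > thresh * sigma
    good = ~bad
    idx = np.arange(len(u))
    cleaned = u.copy()
    cleaned[bad] = np.interp(idx[bad], idx[good], u[good])
    return cleaned, bad
```

Linear interpolation is one of the replacement strategies the paper compares; the bullet points above caution that the choice of replacement strategy matters, especially for consecutive spikes.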
121. Parameter identification of fractional order Hammerstein model with two-stage piecewise nonlinearity based on iterative algorithms.
- Author
- Rui, Jiali, Li, Junhong, Chu, Yunkun, and Lu, Guoping
- Subjects
- *PARAMETER identification, *PARTICLE swarm optimization, *SYSTEM identification, *ALGORITHMS
- Abstract
This paper discusses the identification of the fractional order Hammerstein model with colored noise. The static part of the Hammerstein model is a two-stage piecewise nonlinearity, while the dynamic part is a CARMA structure. We deduce the identification expression of the model through the definition of the Grünwald–Letnikov fractional differential. Then, two iterative algorithms are adopted to identify the unknown parameters: the Levenberg–Marquardt iterative algorithm and the particle swarm optimization iterative algorithm. The two algorithms are extended from the traditional integer order system identification field to the fractional order nonlinear colored-noise system identification field. A numerical example and a case study of servo system identification are presented to demonstrate the feasibility of the identification algorithms. The estimation errors of the two algorithms are relatively small, which reflects their good identification performance. • Identification of the fractional order Hammerstein model is investigated. • The nonlinear block is characterized as a two-stage piecewise nonlinearity. • The L–M iterative algorithm and the PSO iterative algorithm are derived. • The feasibility of the methods is evaluated through two examples. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
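The Grünwald–Letnikov definition the identification expression builds on has a simple numerical form: the order-α derivative of a sampled signal is a weighted sum of past samples, with weights w_j = (−1)^j C(α, j) computed by the standard recurrence. The sketch below is a generic textbook discretization, not the authors' identification code.

```python
def gl_weights(alpha, n):
    """Grünwald–Letnikov weights w_j = (-1)^j * C(alpha, j), via the
    standard recurrence w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def gl_derivative(f, alpha, h):
    """Order-alpha GL fractional derivative of the sampled signal f
    (step h) at every sample, using all available past samples."""
    n = len(f)
    w = gl_weights(alpha, n)
    scale = h ** (-alpha)
    return [scale * sum(w[j] * f[k - j] for j in range(k + 1)) for k in range(n)]

# Sanity check: for alpha = 1 the GL derivative reduces to the
# backward first difference (f[k] - f[k-1]) / h.
f = [x * x for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
d1 = gl_derivative(f, 1.0, 0.5)
```

For α = 1 the weights collapse to [1, −1, 0, …], recovering the ordinary backward difference, which makes the integer-order case a convenient check.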
122. Sparse coefficient fast solution algorithm based on the circulant structure of a shift-invariant dictionary and its applications for machine fault diagnosis.
- Author
-
Liu, Zhongze, Ding, Kang, Lin, Huibin, Deng, Lifa, Chen, Zhuyun, and Li, Weihua
- Subjects
- *
FAULT diagnosis , *ROLLER bearings , *COMPUTATIONAL complexity , *FOURIER transforms , *ALGORITHMS , *CIRCULANT matrices , *ORTHOGONAL matching pursuit - Abstract
• The fault signals are sparsely represented by a shift-invariant dictionary. • The circulant properties of the shift-invariant dictionary are analyzed. • Three sparse coefficient fast solution algorithms are derived. • The calculation efficiency is verified by simulation and experimental data. • The fast algorithms can be applied to mechanical fault diagnosis. Sparse coefficient solutions have received continuous attention as one of the core steps of sparse representation. The computational complexity of almost all sparse coefficient solution algorithms increases exponentially with the signal dimension. In this paper, impact signals caused by localized faults are sparsely represented by a shift-invariant dictionary, which can be Fourier-diagonalized based on its circulant structure. Then, the matrix–matrix and matrix–vector multiplications involved in obtaining sparse coefficients are converted into calculation forms that mainly contain the Fourier transform and its inverse, reducing computational complexity. Based on this result, fast versions of three common sparse coefficient solution algorithms are deduced in detail. Simulation analysis shows that the deduced fast algorithms can significantly shorten running time while maintaining the same accuracy as the corresponding original algorithms. Experiments on localized rolling bearing faults further verify the effectiveness of the proposed sparse coefficient fast solution algorithms for machine fault diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
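The Fourier-diagonalization trick at the heart of this entry rests on a standard identity: a circulant matrix is diagonalized by the DFT, so multiplying by it is a circular convolution, C x = F⁻¹(F(c) ⊙ F(x)) where c is the first column. The sketch below uses a naive O(n²) DFT for clarity; in practice the FFT is what delivers the speedup the abstract describes.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def circulant_matvec(c, x):
    """Multiply the circulant matrix whose first column is c by x using
    Fourier diagonalization: C x = F^{-1} diag(F c) F x."""
    Xc, Xx = dft(c), dft(x)
    return [v.real for v in idft([a * b for a, b in zip(Xc, Xx)])]

# Compare with the direct definition C[i][j] = c[(i - j) mod n]
c = [1.0, 2.0, 0.0, -1.0]
x = [3.0, 1.0, 4.0, 1.0]
fast = circulant_matvec(c, x)
direct = [sum(c[(i - j) % 4] * x[j] for j in range(4)) for i in range(4)]
```

With an FFT the product costs O(n log n) instead of O(n²), which is exactly the saving the fast sparse-coefficient solvers exploit.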
123. Denoising low SNR percussion acoustic signal in the marine environment based on the LMS algorithm.
- Author
-
Yang, Zhuodong, Huo, Linsheng, Wang, Jingkai, and Zhou, Jing
- Subjects
- *
AIDS to navigation , *SIGNAL denoising , *SUBMERGED structures , *SIGNAL-to-noise ratio , *ALGORITHMS , *MODE shapes - Abstract
• A percussion acoustic signal denoising method in the marine environment is proposed. • Compared with other denoising methods, the proposed method can significantly improve the SNR of noisy percussion acoustic signals in the complex marine environment with noise interference. • When the main peak frequency of the percussion acoustic signal denoised with the proposed method is used for damage identification, the error is only about 3%. Percussion-based inspection of structures has attracted widespread attention in recent years. However, the percussion acoustic signals collected in the marine environment usually have a low signal-to-noise ratio (SNR) and are difficult to use directly due to interference from a multitude of marine noises. The frequency contents of the ambient noises usually overlap with those of the percussion acoustic signals, thus limiting denoising with traditional methods. This paper proposes a denoising method using the least mean square (LMS) algorithm to obtain the approximate percussion signal. The noisy percussion signals and marine noise are recorded synchronously by two hydrophones. The LMS algorithm then processes the collected signals and provides the frequency peaks that cannot be extracted with conventional methods. The proposed method is validated by experiments conducted in a noiseless laboratory environment and a noisy, naturally occurring marine environment. The results reveal that the proposed method is excellent in denoising the raw signal, and the error is about 3% in terms of the difference in the estimated value of the primary peak frequency. This study demonstrates the broad potential for the method to be applied to damage detection for underwater structures. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
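The two-hydrophone LMS setup this entry describes is classic adaptive noise cancellation: an FIR filter on the reference (noise-only) channel adapts until its output matches the noise in the primary channel, and the residual approximates the clean signal. The sketch below is a minimal generic LMS canceller on synthetic data, with illustrative tap count and step size; it is not the authors' marine-signal pipeline.

```python
import math

def lms_cancel(primary, reference, taps=4, mu=0.02):
    """LMS adaptive noise canceller: adapt an FIR filter on the reference
    channel so its output matches the noise in the primary channel; the
    residual e approximates the clean signal."""
    w = [0.0] * taps
    buf = [0.0] * taps
    out = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]                               # newest reference sample first
        y = sum(wi * bi for wi, bi in zip(w, buf))         # noise estimate
        e = d - y                                          # denoised output sample
        w = [wi + 2 * mu * e * bi for wi, bi in zip(w, buf)]  # LMS weight update
        out.append(e)
    return out

# Toy two-channel setup: the primary channel holds a slow "percussion"
# signal plus noise that is a scaled copy of the reference channel.
n = 2000
ref = [math.sin(0.9 * k) for k in range(n)]
sig = [math.sin(0.05 * k) for k in range(n)]
prim = [s + 0.8 * r for s, r in zip(sig, ref)]
clean = lms_cancel(prim, ref)
```

After the initial transient, the residual tracks the slow signal even though its spectrum overlaps nothing the filter can exploit from the reference alone.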
124. An optimization-based in-motion fine alignment and positioning algorithm for underwater vehicles.
- Author
-
Jin, Kaidi, Chai, Hongzhou, Su, Chuhan, Xiang, Minzhi, and Hui, Jun
- Subjects
- *
SUBMERSIBLES , *INERTIAL navigation systems , *ALGORITHMS , *NONLINEAR equations , *UNITS of measurement - Abstract
• An optimization-based alignment model considering IMU bias and vehicle movement for the DVL/SINS system. • A positioning method utilizing the vehicle displacement in the inertial frame. • Implement the SQP algorithm to solve the nonlinear alignment equation. • Compare the accuracy of the proposed method and the traditional method using shipborne sea-trial data. Fast and accurate in-motion alignment of a strapdown inertial navigation system (SINS) is still a difficult problem in underwater missions. Conventional optimization-based alignment cannot isolate the inertial measurement unit (IMU) bias, and Kalman-based fine alignment converges slowly and requires accurate prior knowledge of the Doppler velocity logger (DVL)/SINS system. In this paper, a novel optimization-based fine alignment and positioning algorithm is proposed using DVL measurements, in which the IMU bias is considered and the geodetic coordinate of the SINS is updated by the displacement in the inertial frame. In addition, the sequential quadratic programming (SQP) algorithm is applied to the nonlinear optimization problem. Experimental results show that the position accuracy reaches 0.36% relative to the traveled distance, and the proposed method can improve the accuracy of alignment. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
125. Fusion of multi-agent preference orderings in an ordinal semi-democratic decision-making framework.
- Author
-
Franceschini, F., Maisano, D., and Mastrogiacomo, L.
- Subjects
- *
DECISION making , *ALGORITHMS , *MULTIAGENT systems , *MATHEMATICAL analysis , *SYSTEMS theory - Abstract
This paper focuses on the problem of combining multi-agent preference orderings of different alternatives into a single fused ordering, when the agents’ importance is expressed through a rank-ordering and not a set of weights. An enhanced version of the algorithm proposed by Yager (2001) is presented. The main advantages of the new algorithm are that: (i) it better reflects the multi-agent preference orderings and (ii) it is more versatile, since it admits preference orderings with omitted or incomparable alternatives. The description of the new algorithm is supported by a realistic example. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
126. User based Collaborative Filtering using fuzzy C-means.
- Author
-
Koohi, Hamidreza and Kiani, Kourosh
- Subjects
- *
FUZZY clustering technique , *RECOMMENDER systems , *ALGORITHMS , *PEARSON correlation (Statistics) , *CENTER of mass - Abstract
Today, users are surrounded by many items. Recommender Systems are used to help users find items of interest. Collaborative Filtering is one of the most successful techniques of Recommender Systems, which seeks to find the users most similar to the active one in order to recommend items. In Collaborative Filtering, clustering techniques can be used to group the most similar users into clusters. Fuzzy Clustering, one of the most frequently used clustering techniques, has not been used in user-based Collaborative Filtering yet. In this paper, a fuzzy C-means approach is proposed for user-based Collaborative Filtering and its performance against different clustering approaches is assessed. The MovieLens dataset is used to compare different clustering algorithms. They are evaluated in terms of recommendation accuracy, precision and recall. The empirical results indicate that a combination of Center of Gravity defuzzified Fuzzy Clustering and the Pearson correlation coefficient can yield better recommendation results compared to other techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
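The Pearson correlation used in this entry to rank user similarity is computed over co-rated items only. The sketch below shows just that similarity component (the fuzzy C-means clustering step is omitted), with `None` standing in for an unrated item; the rating vectors are invented for illustration.

```python
def pearson(a, b):
    """Pearson correlation between two users' rating vectors over
    co-rated items (the similarity measure in user-based CF)."""
    pairs = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    n = len(pairs)
    if n < 2:
        return 0.0
    ma = sum(x for x, _ in pairs) / n
    mb = sum(y for _, y in pairs) / n
    num = sum((x - ma) * (y - mb) for x, y in pairs)
    da = sum((x - ma) ** 2 for x, _ in pairs) ** 0.5
    db = sum((y - mb) ** 2 for _, y in pairs) ** 0.5
    return num / (da * db) if da and db else 0.0

# Two users with aligned tastes and one with opposite tastes (None = unrated)
u1 = [5, 3, None, 4, 1]
u2 = [4, 2, 5, 5, 1]
u3 = [1, 4, 2, 2, 5]
```

A strongly positive score marks a candidate neighbor for recommendation; a negative one marks a user whose preferences run counter to the active user's.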
127. Modal testing of nanocomposite materials through an optimization algorithm.
- Author
-
Mansour, G., Tsongas, K., and Tzetzis, D.
- Subjects
- *
NANOCOMPOSITE materials , *MATHEMATICAL optimization , *ALGORITHMS , *VISCOELASTIC materials , *TRANSFER functions , *GENETIC algorithms - Abstract
An efficient identification method for modal testing of viscoelastic composite materials is demonstrated in this paper, through the analytical–experimental transfer function method. The identification of analytical–experimental transfer functions is carried out using a genetic algorithm (GA) by minimizing the difference between the measured response from tests and the calculated response, which is a function of the modal parameters. The analytical transfer functions provide a sub-structuring process to identify modes, as a function of the damped natural frequencies and loss factors of a complex structure, and the method is insensitive to experimental noise as well as the modal coupling effect. The proposed method is verified by calculating the elastic modulus and modal properties of a cantilever steel beam with the FEM and comparing them with the identification results of the proposed algorithm. The effectiveness of the proposed method is demonstrated by investigating the static and dynamic behavior of epoxy cantilever beam specimens reinforced with silica nanoparticles. The analytical–experimental transfer functions accurately identified the viscoelastic and dynamic response of the studied specimens, while the results indicated that the inclusion of nanosilica particles increased the stiffness of the epoxy network and improved the damping response of the reinforced specimens. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
128. Non-contact resistance and capacitance on-line measurement of lubrication oil film in rolling element bearing employing an electric field coupling method.
- Author
-
Xie, Kai, Liu, Long-Chao, Li, Xiao-Ping, and Zhang, Han-Lu
- Subjects
- *
LUBRICATING oils , *ELECTRIC capacity , *BEARINGS (Machinery) , *ELECTRIC fields , *ELECTROMECHANICAL devices , *ALGORITHMS - Abstract
The lubrication condition of a radially loaded rolling element bearing mainly depends on the continuity of the oil film between friction pairs, which is critical for highly reliable electromechanical systems. To achieve continuous online monitoring of the lubricating condition of the bearing oil film, a non-contact measuring method for both the oil film capacitance and resistance, which can be used to detect oil deterioration and film damage, is proposed. The method is implemented by measuring the complex impedance of the oil film through two electric field coupling paths instead of slip rings, thus avoiding direct mechanical contact with the rotating parts. This paper provides the equivalent circuit of the lubrication structure in the bearing and the operational principle, algorithm, and circuit implementation of the non-contact measurement. Verification experiments were carried out on a practical rotary mechanical system, obtaining the capacitance in the range of 50–300 pF and the resistance in the range of 50 kΩ–2 MΩ with an error of less than 15%, which is sufficient for practical on-line lubrication monitoring. The proposed method has a wide range of applications in high-reliability, high-speed, and precise suspension systems. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
129. Evaluation of inconsistent data: Comparison of two adjustment algorithms.
- Author
-
Chunovkina, A.G., Stepanov, A.V., and Burmistrova, N.A.
- Subjects
- *
DATA analysis , *ALGORITHMS , *MATHEMATICAL equivalence , *METEOROLOGY , *UNCERTAINTY (Information theory) - Abstract
In this paper, the evaluation of inconsistent key comparison data is considered. The authors share the opinion that the degree of equivalence of national measurement standards can be established only for those Metrology Institutes that provide consistent measurement results. The idea of increasing the measurement uncertainties individually for each measurement result, with the aim of achieving consistency of the data, is advocated. Two algorithms are considered and their properties are investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
130. High resolution melting curve analysis with MATLAB-based program.
- Author
-
Li, Huaizhong, Lan, Ruiting, Peng, Niancai, Sun, Jing, and Zhu, Yong
- Subjects
- *
DNA analysis , *SINGLE nucleotide polymorphisms , *GENETIC mutation , *ALGORITHMS - Abstract
High resolution melting curve analysis (HRM) is an emerging method for interrogating and characterizing DNA samples. It has been used as a powerful tool for gene mutation and single-nucleotide polymorphism (SNP) detection with high throughput and low cost. Commercially available HRM analysis systems are mostly proprietary and expensive, and they lack the flexibility for end users and researchers to incorporate new analysis algorithms into the existing system. This paper presents the development of a MATLAB-based open software program for high resolution melting curve analysis. Key analysis functions, such as obtaining the first derivative curve using a Savitzky–Golay filter, identifying the melt region, subtracting background fluorescence, and normalizing the curve, are introduced, followed by case studies of HRM analysis using the developed software program. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
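The Savitzky–Golay first-derivative step mentioned in this entry can be sketched with the textbook 5-point quadratic filter, whose derivative coefficients are [−2, −1, 0, 1, 2]/10; melt peaks then appear as extrema of −dF/dT. This is a generic illustration of the filter, not the program's MATLAB implementation.

```python
def savgol_derivative5(y, h=1.0):
    """First derivative via a 5-point quadratic Savitzky–Golay filter
    (coefficients [-2, -1, 0, 1, 2] / 10, divided by the step h).
    Returns len(y) - 4 samples (the two edge points on each side are dropped)."""
    c = [-2, -1, 0, 1, 2]
    out = []
    for i in range(2, len(y) - 2):
        out.append(sum(ck * y[i + k - 2] for k, ck in enumerate(c)) / (10.0 * h))
    return out

# On a straight line the smoothed derivative equals the true slope
line = [0.5 * x for x in range(10)]
d = savgol_derivative5(line)
```

Unlike a raw first difference, the least-squares fit inside the window suppresses fluorescence noise while preserving the derivative's peak position.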
131. Comparative study of three algorithms for estimation of echo parameters in UWB radar module for monitoring of human movements.
- Author
-
Mazurek, Paweł, Miękina, Andrzej, and Morawski, Roman Z.
- Subjects
- *
ULTRA-wideband radar , *ECHO , *ALGORITHMS , *ESTIMATION theory , *OLDER people , *PEOPLE with disabilities - Abstract
The research reported in this paper relates to ultra-wide-band radar technology that may be employed in care services for elderly and disabled persons. Three algorithms for preprocessing measurement data from an impulse radar sensor, when applied to monitoring such people, are compared with respect to the uncertainty of estimation of echo parameters. These are: an algorithm based on the maximum envelope of the measured radar data, an algorithm based on curve fitting in the spectrum domain of those data, and a modified CLEAN algorithm. Results of the numerical experiments performed on both semi-synthetic data and real-world data, obtained by means of an impulse-radar sensor, are demonstrated. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
132. A T-wave alternans assessment method based on least squares curve fitting technique.
- Author
-
Wan, Xiangkui, Li, Yan, Xia, Chong, Wu, Minghu, Liang, Jin, and Wang, Na
- Subjects
- *
LEAST squares , *CURVE fitting , *ELECTROCARDIOGRAPHY , *VENTRICULAR arrhythmia , *MOVING average process , *ALGORITHMS , *DISEASE risk factors - Abstract
T-wave alternans (TWA) is hypothesized to be associated with an increased risk for ventricular arrhythmias. A novel TWA assessment method based on the least squares curve fitting technique (LSCF) is presented in this paper. Real clinical ECG recordings and simulated ECG signals containing additive TWAs and noise are used to test the efficiency of the LSCF. The results are also compared with those of the Modified Moving Average (MMA) algorithm. The experimental results demonstrate that the LSCF is efficient and promising for detecting and measuring TWAs. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
133. Methods and algorithms for video-based multi-point frequency measuring and mapping.
- Author
-
Mas, David, Ferrer, Belen, Acevedo, Pablo, and Espinosa, Julian
- Subjects
- *
FREQUENCIES of oscillating systems , *MATHEMATICAL mappings , *VIDEO processing , *ALGORITHMS , *FOURIER analysis , *SCIENTIFIC community - Abstract
Object vibrations and movements can be detected through changes in their luminance. In this paper, we demonstrate that we can obtain the vibration frequency of all vibrating targets in the sequence simultaneously through the analysis of local neighborhoods. The study is completed with a short-time Fourier analysis so that changes in the movement frequencies are also accounted for. We also show that this information can be displayed as a color frequency map that can be superimposed on the video sequence, providing a complete description of the analyzed sequence. The method can be used to analyze complex structures, since their different vibrating parts can be visualized at a glance. The main algorithms, methods and some sequences are freely downloadable so that new applications and procedures can be implemented by the scientific community. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
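The per-pixel principle in this entry reduces to a simple operation: take a pixel's luminance over time, remove the mean, and locate the peak of the DFT magnitude; doing this for every pixel (or neighborhood) yields the frequency map. The sketch below shows the single-pixel version with a naive DFT on a synthetic trace; it is an illustration of the principle, not the authors' released code.

```python
import cmath
import math

def dominant_frequency(samples, fs):
    """Dominant oscillation frequency of one pixel's luminance trace:
    peak of the DFT magnitude, with the DC bin excluded."""
    n = len(samples)
    mean = sum(samples) / n
    x = [s - mean for s in samples]
    mags = [abs(sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n)))
            for j in range(1, n // 2 + 1)]
    peak = mags.index(max(mags)) + 1      # bin index of the strongest component
    return peak * fs / n                  # convert bin to Hz

# A pixel over a 5 Hz vibrating target, sampled at 100 frames per second
fs, n = 100.0, 200
trace = [math.sin(2 * math.pi * 5.0 * k / fs) for k in range(n)]
```

Replacing the full-length DFT with a short-time transform, as the paper does, additionally reveals when the dominant frequency changes during the sequence.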
134. A new approach for image stitching technique using Dynamic Time Warping (DTW) algorithm towards scoliosis X-ray diagnosis.
- Author
-
Adwan, Somaya, Alsaleh, Iqbal, and Majed, Rasha
- Subjects
- *
IMAGE processing , *MEDICAL applications of x-rays , *SCOLIOSIS , *DYNAMICAL systems , *ALGORITHMS , *DIAGNOSIS - Abstract
Consider a set of images of a single object, or scenery, taken from different viewpoints and time. Panorama image creation is the process of stitching such images into a single coordinate system to generate a wider viewing panoramic image. Image stitching consists of two processes which are image registration and image blending. In image registration, parts of two overlapping or consecutive images are considered to find an appropriate merging position and transformation to combine the images. In image blending, the intensities of pixels along the stitching line are modified so that they flow naturally without any noticeable break. In this paper, we propose a novel method that utilizes the Dynamic Time Warping (DTW) algorithm to match pairs of images for image stitching. We also perform a dimension reduction scheme that significantly reduces the computational complexity of the standard DTW without affecting its performance. The effectiveness of the proposed method is demonstrated in stitching 50 pairs of medical X-ray images and its performance is compared to those of normalized cross correlation (NCC), Minimum Average Correlation Energy (MACE) filters, sum-of-square-differences (SSD) and sum-of-absolute-differences (SAD). For the database used, the dimensionally reduced DTW outperforms the NCC, MACE, SSD and SAD methods in accuracy and average execution time. The method also outperforms two widely used stitching programs available on the internet called Hugin and Autostitch. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
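The DTW matching at the core of this entry can be sketched with the classic dynamic-programming recurrence D[i][j] = cost(i, j) + min(D[i−1][j], D[i][j−1], D[i−1][j−1]). The paper applies it (after dimension reduction) to image features; the toy below uses plain 1-D number series to show why warping tolerates local stretching that fixed-offset measures like SSD do not.

```python
def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences,
    with absolute difference as the local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# A time-stretched copy matches perfectly; an unrelated series does not
s = [0, 1, 2, 3, 2, 1, 0]
stretched = [0, 0, 1, 2, 2, 3, 3, 2, 1, 0]
other = [3, 0, 3, 0, 3, 0, 3]
```

The stretched copy reaches distance zero because the warping path is free to dwell on repeated samples, which is the property that makes DTW robust for registering overlapping image regions.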
135. Improved discrete Fourier transform algorithm for harmonic analysis of rotor system.
- Author
-
Yao, Jinbao, Tang, Baoping, and Zhao, Jie
- Subjects
- *
DISCRETE Fourier transforms , *ALGORITHMS , *HARMONIC analysis (Mathematics) , *ROTORS , *FAULT diagnosis , *RESAMPLING (Statistics) - Abstract
Harmonic components are critically important to the fault diagnosis of rotor systems. Various methods have been developed for harmonic component extraction. The challenge, however, is to efficiently and accurately extract rotor system harmonic components. In this paper, a new harmonic analysis approach based on an improved discrete Fourier transform algorithm is proposed. It devises a sampling and resampling scheme for harmonic analysis to obtain synchronously sampled data, which realizes integer-period synchronous sampling and greatly increases the measurement accuracy of harmonic parameters. Then an improved discrete Fourier transform algorithm is proposed to extract harmonic components, which dramatically reduces the computational load by replacing multiplications with shift operations. Finally, the effectiveness of the proposed method is analyzed by means of simulation and practical experiments, for a multifrequency simulated signal and shaft misalignment faults, respectively. Results show that the proposed method is faster and more accurate than both DFT-based and FFT-based methods for extracting harmonic components of rotor systems. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
136. Frequency agility in cognitive radios: A new measurement algorithm for optimal operative frequency selection.
- Author
-
Angrisani, Leopoldo, Capriglione, Domenico, Ferrigno, Luigi, and Miele, Gianfranco
- Subjects
- *
COGNITIVE radio , *FREQUENCY agility , *ALGORITHMS , *RADIO frequency allocation , *SPECTRUM analysis , *RADIO transmitters & transmission - Abstract
The wide diffusion of multimedia services, delivered also on mobile terminals (smartphones, tablets and so on), is causing a fast and continuous increase in spectrum usage demand. Nevertheless, several studies have demonstrated that portions of the radio spectrum are not in use for significant periods of time. This waste of spectrum shows the need to design a more flexible way to manage this resource than the traditional frequency allocation policy. In this context, cognitive radios play a crucial role, because they are designed to enable such flexible spectrum allocation by suitably changing their operating frequency without interfering with other transmitters. As a consequence, they have to implement a method to dynamically select the appropriate operating frequency based on the sensing of signals from other transmitters, a capability usually called frequency agility. Several spectrum sensing methods have been proposed in the literature, whereas few studies have focused on methods for achieving frequency agility. In this framework, the paper proposes a novel measurement algorithm able to meet those requirements. It is based on two sequential steps: the former performs a preliminary spectrum sensing aimed at excluding the frequency ranges surely occupied by primary users, while the latter performs a more refined analysis, restricted to frequency intervals not excluded by the previous stage, with the aim of selecting an operating frequency for the cognitive radio terminal that minimizes potential interference with primary users. It has been designed for scenarios involving OFDM-based signals, or signals whose spectrum shapes and slopes are similar to those of OFDM. A key feature of the proposal is the ability to operate even in scenarios characterized by low signal-to-noise ratios, as confirmed by the experimental campaign. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
137. Research and implementation of visual helicopter coning angle measurement system.
- Author
-
Luo, Yi, Yang, Kun, Shang, Chunxue, and Yang, Chunbao
- Subjects
- *
ROTORS (Helicopters) , *CAMERAS , *LIGHT emitting diodes , *ALGORITHMS , *IMAGE segmentation - Abstract
A visual helicopter coning angle measurement method is presented in this paper. First, a reflective target is pasted onto the helicopter rotor tip. Second, a high-speed industrial camera and an LED strobe light are used to obtain grayscale images of each blade’s reflective target while the rotor is rotating. Third, the target grayscale images are processed using the Otsu threshold segmentation algorithm to obtain binarized images. Next, the eigenvalues of the images are obtained using the Sobel operator edge detection method, so the height difference of the reflective target and the helicopter coning angle can be calculated. A prototype measurement system is designed to prove the feasibility and scientific validity of the method. Experimental results show that the system is easy to operate, and the measurement accuracy is approximately 0.5 mm. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
138. A novel surface recovery algorithm in white light interferometry.
- Author
-
Lei, Zili, Liu, Xiaojun, Chen, Liangzhou, Lu, Wenlong, and Chang, Suping
- Subjects
- *
INTERFEROMETRY , *ALGORITHMS , *VIBRATION (Mechanics) , *COMPUTER simulation , *MEASUREMENT , *STATISTICAL correlation - Abstract
White-light interferometry (WLI) is an established method for surface measurement. In WLI, the surface recovery algorithm, which reconstructs the surface from a series of vertical-scanning white-light interferograms, is a pivotal technique and has been widely researched. However, existing surface recovery algorithms are easily affected by conditions such as mechanical vibrations, low reflectance of the test surface, and phase changes caused by reflection. In this paper, a new recovery algorithm is presented, in which correlation analysis of WLI envelope curves and a multi-reference-position phase solution method are employed for robust and high-precision surface recovery. The mathematical derivation of the algorithm is carried out, and simulations, experimental testing, and comparison experiments are conducted, showing that the new algorithm is effective. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
139. Application of principal components analysis and signal-to-noise ratio for calibration of spectrophotometric analysers of food.
- Author
-
Morawski, Roman Z. and Miękina, Andrzej
- Subjects
- *
SPECTROPHOTOMETRY , *PRINCIPAL components analysis , *SIGNAL-to-noise ratio , *ALGORITHMS , *PARAMETER estimation , *FOOD industry - Abstract
Spectrophotometric analysers of food, being instruments for determination of the composition of food products and ingredients, are today of growing importance for food industry, as well as for food distributors and consumers. Their metrological performance significantly depends on the numerical performance of available means for spectrophotometric data processing; in particular – the means for calibration of analysers. In this paper, a new algorithm for this purpose is proposed, viz. the algorithm using principal components analysis. It is almost as efficient as partial least squares algorithms of calibration, but much simpler. It is fully automatic, viz. the selection of the most informative components is based on the signal-to-noise ratio characterising processed measurement data. The practical effectiveness of the proposed algorithm is demonstrated on a test problem consisting in determination of the concentrations of components of trinary oil mixtures. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
140. Dealing with prior knowledge about the measurand.
- Author
-
Lira, Ignacio
- Subjects
- *
PRIOR learning , *MATHEMATICS , *ALGORITHMS , *PDF (Computer file format) , *ENCODING - Abstract
Suppose a measurand can be computed by two different but consistent measurement models. Then, the output of one of the models would serve as prior knowledge to the other. In this paper, two alternative methods to produce a PDF for the measurand that take into account both models are presented. The first method proceeds by propagating the PDFs for the input quantities through the corresponding models in the usual way and then merging the resulting PDFs using the logarithmic or linear pooling techniques. The result is a kind of ‘compromise distribution’ of the pooled PDFs. The second method starts by propagating the PDFs for all input quantities except one, say X 1 , through the model that relates the former quantities to the latter. In this way the PDF for X 1 is obtained, which is then updated using its likelihood. The resulting PDF, which encodes all information available, is finally propagated through the model that relates X 1 to the measurand. This second method is the preferred way of analysis, because it results in a PDF that is narrower than the one obtained with the first method. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
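The first method in this entry merges two PDFs for the same measurand with linear or logarithmic pooling. On a common discretization grid both pools are one-liners: the linear pool is a weighted average of the densities, the logarithmic pool a normalized weighted geometric mean. The grid values and weights below are invented for illustration.

```python
def linear_pool(pdfs, weights):
    """Linear opinion pool: weighted average of discretized PDFs,
    renormalized to sum to one."""
    pooled = [sum(w * p[k] for w, p in zip(weights, pdfs))
              for k in range(len(pdfs[0]))]
    s = sum(pooled)
    return [v / s for v in pooled]

def log_pool(pdfs, weights):
    """Logarithmic pool: normalized weighted geometric mean of PDFs."""
    pooled = []
    for k in range(len(pdfs[0])):
        v = 1.0
        for w, p in zip(weights, pdfs):
            v *= p[k] ** w
        pooled.append(v)
    s = sum(pooled)
    return [v / s for v in pooled]

# Two discretized PDFs for the same measurand on a common grid
p1 = [0.10, 0.20, 0.40, 0.20, 0.10]
p2 = [0.05, 0.15, 0.30, 0.30, 0.20]
lin = linear_pool([p1, p2], [0.5, 0.5])
log_ = log_pool([p1, p2], [0.5, 0.5])
```

The logarithmic pool down-weights regions where either PDF is small, which is one reason the resulting "compromise distribution" differs from the linear average.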
141. An improved LLE algorithm based on iterative shrinkage for machinery fault diagnosis.
- Author
-
Liu, Yuanhong, Yu, Zhiwei, Zeng, Ming, and Zhang, Yansheng
- Subjects
- *
THRESHOLDING algorithms , *ALGORITHMS , *PERFORMANCE , *COMPUTED tomography , *COEFFICIENTS (Statistics) - Abstract
The local linear embedding (LLE) algorithm is widely utilized for feature extraction in fault diagnosis, but the diagnosis result is sensitive to the reconstruction weight W of LLE. To make W more significant and robust, in this paper the ISLLE algorithm is proposed with the aid of iterative shrinkage technology and the LLE algorithm. In the ISLLE algorithm, a surrogate function is introduced, upon which the high-dimensional optimization problem can be decoupled into a set of one-dimensional equations; W can then be easily computed by the iterative shrinkage method. In each iteration, the small and negative weight coefficients are eliminated, while the large ones are shrunk, which can be regarded as feature extraction and noise reduction. Hence, the signals processed by ISLLE are more beneficial to diagnosis. Three real datasets are used to examine the proposed method. The experimental results demonstrate that the proposed method is valid, and the performance of ISLLE outperforms that of the original LLE. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
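The shrink-and-eliminate step described in this entry (drop small and negative coefficients, shrink the large ones) behaves like a non-negative soft threshold applied repeatedly. The toy below sketches only that step, with renormalization so the weights still sum to one as LLE reconstruction weights must; it is not the paper's surrogate-function derivation, and the threshold and iteration count are illustrative.

```python
def shrink(w, t):
    """Non-negative soft threshold: coefficients below t (including all
    negative ones) are eliminated, the rest are shrunk by t."""
    return [max(wi - t, 0.0) for wi in w]

def iterative_shrinkage(w, t=0.1, iters=3):
    """Apply the shrinkage repeatedly, renormalizing so the surviving
    weights keep summing to one."""
    for _ in range(iters):
        w = shrink(w, t)
        s = sum(w)
        if s > 0:
            w = [wi / s for wi in w]
    return w

w0 = [0.6, 0.35, 0.05, -0.1, 0.1]
w = iterative_shrinkage(w0)
```

The small and negative entries vanish while the dominant weights absorb the freed mass, which is the noise-reduction effect the abstract attributes to the iteration.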
142. Dealing with incomplete datasets with a confidence attribution algorithm.
- Author
-
Horstmann, Leonardo Passig, Wagner, Matheus, Scheffel, Roberto Milton, and Fröhlich, Antônio Augusto
- Subjects
- *
MISSING data (Statistics) , *ALGORITHMS , *CONFIDENCE , *BIG data , *SOLAR power plants , *WIND turbines , *MACHINE learning - Abstract
In this paper, we use multivariate machine learning-based predictors to replace missing data and propose a mechanism to evaluate and track correctness by estimating its confidence level whenever successive missing data points occur. The proposed solution relies on the idea of confidence attribution, which assigns a value to every measurement, indicating how much it is believed to be accurate based on the difference between measured and predicted data. When data is missing, we perform data imputation using the predicted value and estimate confidence. We estimate confidence based solely on parameters used for confidence attribution and information acquired during the predictor's training. We evaluate the solution with two real datasets, one collected from a solar farm and another from a collection of wind turbines. The results show that the accuracy of multivariate models can decrease significantly when input data goes missing, demonstrating the need for the proposed confidence tracking mechanism. • Imputing successive missing data can impact the precision of multivariate predictors. • Multiple missing variables impact the precision of multivariate predictors. • Tracking the prediction error for each imputation based on the model properties. • A mechanism to evaluate the validity of imputing data based on error thresholds. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
143. Pose error identification algorithm based on hemispherical resonant gyroscope assembly capacitance uniformity.
- Author
-
Yu, H., Jin, X., Liu, X.H., Liu, D.P., Li, Z.X., Li, S.L., Duan, J., Zhang, J.C., and Li, C.J.
- Subjects
- *
ELECTRIC capacity , *GYROSCOPES , *UNIFORMITY , *BACK propagation , *POSE estimation (Computer vision) , *CARBON nanofibers , *ALGORITHMS , *MATHEMATICAL models - Abstract
• A novel algorithm was proposed to guide the adjustment of spatial pose, improving the capacitance uniformity of the HRG. • The effect of spatial poses on capacitance was discussed. • A method for predicting the pose of the resonator based on a BP neural network was proposed. The assembly performance of a hemispherical resonant gyroscope (HRG) is directly affected by the spatial pose of the resonator and the electrode carrier, which is reflected in the capacitance uniformity. This paper proposes a fast identification algorithm for pose error based on HRG assembly capacitance uniformity. A forward mathematical model for calculating capacitance from the resonator's pose was established and verified by experiments and simulations. The effect of pose on the capacitance was analyzed using the mathematical model. Based on the pose and capacitance data obtained from the mathematical model, the inverse capacitance–pose model was constructed with a back propagation (BP) neural network. This novel algorithm provides a swift way to identify the pose error of the resonator and achieve capacitance uniformity in HRG assembly, which can significantly improve the assembly quality and efficiency of the HRG. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
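A toy version of the forward model and its inversion can illustrate the structure described in the abstract above. The geometry, constants, and the linear least-squares inverse (standing in here for the paper's BP neural network) are all assumptions for illustration.

```python
import numpy as np

def electrode_capacitances(dx, dy, g0=1e-4, A=1e-5, eps=8.854e-12, n=8):
    """Toy forward model: n electrodes spaced around the ring; a lateral pose
    offset (dx, dy) perturbs each electrode's local gap."""
    theta = np.arange(n) * 2 * np.pi / n
    gap = g0 + dx * np.cos(theta) + dy * np.sin(theta)
    return eps * A / gap

def identify_pose(C, g0=1e-4, A=1e-5, eps=8.854e-12):
    """Invert the model: gap_i = eps*A/C_i = g0 + dx*cos(th_i) + dy*sin(th_i),
    solved by linear least squares (a stand-in for the paper's BP network)."""
    n = len(C)
    theta = np.arange(n) * 2 * np.pi / n
    gap = eps * A / np.asarray(C)
    X = np.column_stack([np.cos(theta), np.sin(theta)])
    (dx, dy), *_ = np.linalg.lstsq(X, gap - g0, rcond=None)
    return dx, dy
```

Because the toy model is exactly linear in the gap, the inversion recovers the pose offset; the attraction of a trained inverse model is that it skips this per-measurement solve.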
144. A method for reducing transient electromagnetic Noise: Combination of variational mode decomposition and wavelet denoising algorithm.
- Author
-
Qi, Tingye, Wei, Xiaoya, Feng, Guorui, Zhang, Fan, Zhao, Dekang, and Guo, Jun
- Subjects
- *
ELECTROMAGNETIC noise , *ELECTRIC transients , *HILBERT-Huang transform , *ALGORITHMS , *MINES & mineral resources , *SIGNAL denoising - Abstract
• A new method for transient electromagnetic noise reduction: VMD-WTD. • GWO is used to optimize the parameters of VMD to realize adaptive signal decomposition. • The VMD-WTD method has good applicability in mined-out area location determination. The transient electromagnetic method (TEM) is used to detect mineral resources and mined-out areas. To eliminate noise interference in the secondary-field signal, this paper proposes an algorithm for denoising the TEM signal based on the Variational Mode Decomposition (VMD) and Wavelet Threshold Denoising (WTD) methods. The Grey Wolf Optimization (GWO) algorithm solves for the parameter combination (K, α) in the VMD, and the K intrinsic mode functions (IMFs) are then obtained from the decomposed TEM signal. Subsequently, all IMFs are analyzed according to their correlation coefficients and divided into signal modes, mixed modes, and invalid modes. Finally, the mixed modes are denoised by WTD, and the signal and denoised modes are combined. To validate the effectiveness of the VMD-WTD algorithm, the method was compared in simulations with the Ensemble Empirical Mode Decomposition (EEMD) algorithm and various other methods previously developed to improve the TEM; its performance was also confirmed in a field experiment. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
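The mode-classification and recombination step described above can be sketched as follows. A real pipeline would first obtain the modes from GWO-tuned VMD; here the modes are given, and the correlation thresholds and the universal soft threshold are assumptions for illustration.

```python
import numpy as np

def soft_threshold(x, t):
    """Wavelet-style soft thresholding: shrink toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def recombine_modes(signal, modes, hi=0.6, lo=0.2):
    """Classify each mode by its correlation coefficient with the raw signal:
    keep signal modes, soft-threshold mixed modes, drop invalid (noise) modes."""
    out = np.zeros_like(signal)
    for m in modes:
        r = abs(np.corrcoef(signal, m)[0, 1])
        if r >= hi:          # signal mode: kept as-is
            out += m
        elif r >= lo:        # mixed mode: universal soft threshold
            t = np.median(np.abs(m)) / 0.6745 * np.sqrt(2 * np.log(len(m)))
            out += soft_threshold(m, t)
        # r < lo: invalid mode, discarded
    return out
```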
145. Predicting electrical power output of combined cycle power plants using a novel artificial neural network optimized by electrostatic discharge algorithm.
- Author
-
Zhao, Yinghao and Kok Foong, Loke
- Subjects
- *
ELECTROSTATIC discharges , *ARTIFICIAL neural networks , *POWER plants , *COMBINED cycle power plants , *METAHEURISTIC algorithms , *ALGORITHMS , *ELECTRICAL energy - Abstract
• Electrical power output of power plants is predicted with high accuracy. • The electrostatic discharge algorithm is used for this aim for the first time. • Regular MLPs are compared to ESDA- and ASO-optimized versions. • ESDA-MLP is the most efficient model. • An explicit predictive formula is derived for convenient early prediction. Combined cycle power plants (CCPP) are among the most sophisticated, yet efficient, systems for producing electrical energy. Hence, simulating their performance has been an engineering hotspot toward sustainable development. This paper employs novel soft computing methods for predicting the electrical power (PE) output of CCPPs. To this end, a metaheuristic technique called the electrostatic discharge algorithm (ESDA) is coupled with an artificial neural network (ANN) to create the proposed hybrid. Its performance is compared to several conventionally trained ANNs to investigate the effect of hybridization. By considering the influence of ambient temperature, exhaust vacuum, atmospheric pressure, and relative humidity, the PE is predicted through a 4 × 9 × 1 network. Among the conventional trainers, Levenberg-Marquardt emerged as the most promising. However, the ESDA outperformed this algorithm in both the training and testing phases. Accordingly, the metaheuristic optimization improved the robustness of the regular ANN in surmounting computational drawbacks. The ESDA-ANN is, therefore, introduced as a reliable predictive tool for PE modeling, and the corresponding predictive formula is presented in the last part of this research. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
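The 4 × 9 × 1 architecture and the idea of metaheuristic weight training can be sketched as below. A simple (1+1) random-perturbation search stands in for ESDA (which is a population-based method); the activation function, step size, and iteration count are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in=4, n_hidden=9, n_out=1):
    """The paper's 4 x 9 x 1 architecture; the random weights are what a
    trainer (ESDA in the paper, a simpler search below) must optimize."""
    return {"W1": rng.standard_normal((n_hidden, n_in)) * 0.1,
            "b1": np.zeros(n_hidden),
            "W2": rng.standard_normal((n_out, n_hidden)) * 0.1,
            "b2": np.zeros(n_out)}

def forward(net, x):
    """x = [ambient temperature, exhaust vacuum, pressure, humidity]."""
    h = np.tanh(net["W1"] @ x + net["b1"])
    return net["W2"] @ h + net["b2"]

def mse(net, X, y):
    preds = np.array([forward(net, x)[0] for x in X])
    return float(np.mean((preds - y) ** 2))

def train_metaheuristic(net, X, y, iters=200, step=0.05):
    """(1+1) random-perturbation search: accept a trial weight set only if
    it lowers the training MSE (a stand-in for the population-based ESDA)."""
    best = mse(net, X, y)
    for _ in range(iters):
        trial = {k: v + step * rng.standard_normal(v.shape)
                 for k, v in net.items()}
        err = mse(trial, X, y)
        if err < best:
            net, best = trial, err
    return net, best
```

The appeal of such derivative-free trainers is that they sidestep local minima that gradient trainers like Levenberg-Marquardt can get stuck in, at the cost of many more function evaluations.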
146. Automatic LiDAR-based lighting inventory in buildings.
- Author
-
Díaz-Vilariño, L., González-Jorge, H., Martínez-Sánchez, J., and Lorenzo, H.
- Subjects
- *
LIDAR , *LIGHTING , *ENERGY consumption , *CONSTRUCTION industry , *AUTOMATIC detection in radar , *ALGORITHMS - Abstract
The construction industry is a large contributor to energy consumption across all stages of the building life-cycle. Among building features, lighting management is a crucial element for energy saving. In this paper, an algorithm for the automatic detection of ceiling lighting is developed and tested. The main sections of the algorithm consist of ceiling extraction, point cloud to image conversion, and luminaire detection. Ceiling extraction is performed using the RANSAC algorithm for plane detection. Point cloud conversion uses nearest-neighbor rasterization and image binarization. The final step deals with luminaire detection and considers the two types of lighting present in the dataset: fluorescent lights are distinguished using a refined Harris corner detector, while a Hough transformation is applied to find circular low-energy bulbs. The algorithm results reflect a completeness of 100% with a geometric accuracy of 5.8 cm in the centroid determination of fluorescent lighting and 3.0 cm for low-energy bulbs. The computing time ranges from 148.8 s for the detection of fluorescent lighting to 105.9 s for low-energy bulbs, with point clouds of 90 and 60 million points, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
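The ceiling-extraction step is RANSAC plane detection; a minimal sketch follows, with the iteration count and inlier tolerance as assumptions (the paper does not publish its parameters here).

```python
import numpy as np

rng = np.random.default_rng(1)

def ransac_plane(points, n_iter=200, tol=0.02):
    """Return a boolean inlier mask for the dominant plane (e.g. the ceiling)
    in an N x 3 point cloud."""
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue                      # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((points - p0) @ normal)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

The inlier mask would then be rasterized to an image for the luminaire-detection stages.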
147. A signal pre-processing algorithm designed for the needs of hardware implementation of neural classifiers used in condition monitoring.
- Author
-
Dabrowski, Dariusz, Hashemiyan, Zahra, and Adamczyk, Jan
- Subjects
- *
ALGORITHMS , *ARTIFICIAL neural networks , *GEARBOXES , *POWER transmission , *EXCAVATING machinery - Abstract
Gearboxes have a significant influence on the durability and reliability of a power transmission system. Currently, extensive research is being carried out to increase the reliability of gearboxes working in the energy industry, especially with a focus on planetary gears in wind turbines and bucket wheel excavators. In this paper, a signal pre-processing algorithm designed for condition monitoring of planetary gears working under non-stationary operation is presented. The algorithm is intended for hardware implementation on Field Programmable Gate Arrays (FPGAs). Its purpose is to estimate the features of a vibration signal that are related to failures, e.g. misalignment and unbalance. These features can serve as the components of an input vector for a neural classifier. The approach proposed here has several important benefits: it is resistant to small speed fluctuations of up to 7%, it can be performed in real-time conditions, and its implementation does not require many FPGA resources. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
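One hedged way to read "features resistant to speed fluctuations up to 7%" is a spectral peak search in a ±7% tolerance band around each shaft-order harmonic. The paper's actual feature set is not specified here, so the following is an illustrative assumption rather than the authors' algorithm.

```python
import numpy as np

def harmonic_features(signal, fs, shaft_hz, n_orders=3, band=0.07):
    """Peak spectral amplitude in a +/-band window around each shaft-speed
    harmonic; the window absorbs speed fluctuations of up to `band` (7%)."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal) * 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    feats = []
    for k in range(1, n_orders + 1):
        f0 = k * shaft_hz
        mask = (freqs >= f0 * (1 - band)) & (freqs <= f0 * (1 + band))
        feats.append(spectrum[mask].max() if mask.any() else 0.0)
    return np.array(feats)
```

The resulting vector is the kind of compact, fixed-length input a neural classifier on an FPGA could consume.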
148. Analysis of changes in coordinate measuring machines accuracy made by different nodes density in geometrical errors correction matrix.
- Author
-
Gąska, A., Sładek, J., Ostrowska, K., Kupiec, R., Krawczyk, M., Harmatys, W., Gąska, P., Gruza, M., Owczarek, D., Knapik, R., and Kmita, A.
- Subjects
- *
COORDINATE measuring machines , *MANUFACTURING processes , *PRODUCTION (Economic theory) , *MACHINE dynamics , *ALGORITHMS , *DENSITY - Abstract
Advances in modern manufacturing techniques increase production efficiency but, at the same time, present new tasks and challenges for coordinate metrology and the manufacturers of Coordinate Measuring Machines (CMMs). The main goal of current research efforts is improving measurement accuracy. Since many of the possible solutions regarding CMM construction have already been explored, there seems to be little left for improvement in that field. Further efforts at accuracy improvement rely mostly on sophisticated mathematical algorithms designed to correct relevant errors. Many types of errors can be compensated using this approach, including probe head errors, machine dynamics errors and, most importantly, machine geometrical errors. Almost all coordinate measuring machines produced nowadays are equipped with a geometrical-error compensation matrix known as the CAA matrix (Computer Aided Accuracy). CAA matrices are based on a grid of reference points (nodes) at which certain values of the components of geometrical errors are determined experimentally. The error values between the nodes are estimated using simple interpolation methods. Theoretically, a higher density of reference points on the grid describing the CAA matrix should improve the accuracy of the machine utilizing the matrix. On the other hand, increasing the number of nodes simultaneously increases the workload, time and money spent on constructing the CAA matrix. This paper presents a number of experiments aimed at creating CAA matrices with different numbers of matrix nodes using the LaserTracer system. The relations between the maximum permissible errors obtained on a machine using matrices with different node densities are also discussed. Additionally, the authors attempt to determine the optimal density of nodes with regard to the ratio of time spent on matrix creation to the effect on accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
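The between-node estimation mentioned in the abstract is plain interpolation; a 1-D linear sketch is below (real CAA matrices interpolate each error component over a 3-D node grid, and the node values here are made up for illustration).

```python
import numpy as np

def interp_error(x, nodes, errors):
    """Estimate a geometrical-error component at position x from the values
    measured at the surrounding CAA grid nodes (linear interpolation)."""
    i = int(np.clip(np.searchsorted(nodes, x) - 1, 0, len(nodes) - 2))
    t = (x - nodes[i]) / (nodes[i + 1] - nodes[i])
    return (1 - t) * errors[i] + t * errors[i + 1]
```

Denser nodes shrink the interpolation error between measured points, which is exactly the accuracy-versus-calibration-time trade-off the paper studies.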
149. Lithium-ion battery remaining useful life estimation with an optimized Relevance Vector Machine algorithm with incremental learning.
- Author
-
Liu, Datong, Zhou, Jianbao, Pan, Dawei, Peng, Yu, and Peng, Xiyuan
- Subjects
- *
LITHIUM-ion batteries , *MACHINE theory , *ALGORITHMS , *INDUSTRIAL engineering , *PREDICTION theory , *DATA analysis - Abstract
Lithium-ion batteries play a key role in most industrial systems and are critical to system availability. It is important to evaluate the performance degradation and estimate the remaining useful life (RUL) of these batteries. With its capability of uncertainty representation and management, the Relevance Vector Machine (RVM) is an effective approach for lithium-ion battery RUL estimation. However, small sample sizes and the low precision of multi-step prediction limit the application of the sparse RVM algorithm in battery RUL estimation. Because monitoring data are continuously updated on-line, dynamic training and on-line learning capability are desirable for improving the prediction performance of a battery RUL model. Another challenge in on-line and real-time processing is operating efficiency and computational complexity. To address these issues, this paper implements a flexible and effective on-line training strategy in the RVM algorithm to enhance its prediction ability, and presents an incrementally optimized RVM algorithm that updates the model via efficient on-line training. The proposed on-line training strategy achieves better prediction precision as well as improved operating efficiency for battery RUL estimation. Experiments based on a NASA battery data set show that the proposed method yields satisfactory performance in RUL estimation of lithium-ion batteries. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
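A much simpler stand-in for the incremental RVM can illustrate the on-line RUL workflow: refit a degradation model on a sliding window whenever a new capacity measurement arrives, then extrapolate to the failure threshold. The log-linear degradation model, window size, and 70% end-of-life threshold are assumptions for illustration.

```python
import numpy as np

def rul_estimate(cycles, capacity, fail_frac=0.7, window=30):
    """Refit a log-linear degradation model on the latest window (on-line
    update) and extrapolate to the end-of-life capacity threshold."""
    c = np.asarray(cycles[-window:], dtype=float)
    q = np.log(np.asarray(capacity[-window:], dtype=float))
    slope, intercept = np.polyfit(c, q, 1)
    if slope >= 0:
        return float("inf")               # no degradation trend yet
    q_fail = np.log(capacity[0] * fail_frac)
    eol_cycle = (q_fail - intercept) / slope
    return max(0.0, eol_cycle - cycles[-1])
```

An RVM would additionally supply a predictive variance around the extrapolation, which is the uncertainty-management capability the abstract highlights.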
150. Image segmentation for laser triangulation based on Chan–Vese model.
- Author
-
Mueller, T. and Reithmeier, E.
- Subjects
- *
IMAGE segmentation , *SIGNAL-to-noise ratio , *ALGORITHMS , *MICROSTRUCTURE , *LASER triangulation , *METROLOGY - Abstract
Laser triangulation is a well-established technique in 3D surface metrology. However, scattering surfaces and reflectivity variations cause measurement uncertainties due to a reduced signal-to-noise ratio. To improve the measurement accuracy on such surfaces, a new laser line detection algorithm is proposed. In this work, the laser line segmentation, and therefore the separation of the laser line from the noisy background in the camera images, is based on the Chan–Vese model. The Chan–Vese model is adapted to reduce the computation time and increase the measurement rate, which is important in industrial applications. This paper gives complete instructions on how to apply the Chan–Vese segmentation algorithm to laser line triangulation measurements, including initialization and a parameter set for the segmentation process. Further, an example of a laser triangulation measurement of a microstructured, highly scattering surface is presented. Finally, the proposed approach is compared with a conventional line detection method. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
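A minimal two-phase Chan–Vese sketch can show how the model separates a bright laser line from a darker, noisy background. Only the region-fitting term is kept here (the curvature/length term is omitted for brevity), so this is a simplification, not the paper's adapted algorithm.

```python
import numpy as np

def chan_vese_region(img, n_iter=20):
    """Two-phase Chan-Vese fitting term only: alternately update the region
    means (c1 inside, c2 outside) and reassign pixels to the closer mean,
    which minimizes (I-c1)^2 inside + (I-c2)^2 outside."""
    phi = img > img.mean()               # initial contour: global-mean split
    for _ in range(n_iter):
        c1 = img[phi].mean() if phi.any() else 0.0
        c2 = img[~phi].mean() if (~phi).any() else 0.0
        new = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new, phi):
            break                        # converged
        phi = new
    return phi
```

The length term in the full model additionally smooths the contour, which matters for speckled laser images; dropping it is what keeps this sketch short.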