17 results for "Yuhu Cheng"
Search Results
2. Improved generative adversarial network for retinal image super-resolution
- Author
-
Defu Qiu, Yuhu Cheng, and Xuesong Wang
- Subjects
Image Processing, Computer-Assisted, Humans, Health Informatics, Signal-To-Noise Ratio, Algorithms, Software, Computer Science Applications
- Abstract
The retina is the only organ in the body that can be observed non-invasively using visible light. By analyzing retinal images, we can achieve early screening, diagnosis and prevention of many ophthalmological and systemic diseases, helping patients avoid the risk of blindness. Owing to their powerful feature extraction capabilities, many deep-learning super-resolution reconstruction networks have been applied to retinal image analysis and have achieved excellent results. Given the lack of high-frequency information and the poor visual quality of current super-resolution reconstruction results under large scale factors, we present an improved generative adversarial network (IGAN) algorithm for retinal image super-resolution reconstruction. Firstly, we construct a novel residual attention block, improving reconstruction results that lack high-frequency information and texture details under large scale factors. Secondly, we remove the Batch Normalization layer, which degrades the quality of image generation in the residual network. Finally, we use the more robust Charbonnier loss function instead of the mean square error loss function, together with a TV regularization term to smooth the training results. Experimental results show that our proposed method significantly improves objective evaluation indicators such as peak signal-to-noise ratio and structural similarity. The reconstructed images have richer texture details and better visual quality than those of state-of-the-art image super-resolution methods. Our proposed method can better learn the mapping between low-resolution and high-resolution retinal images, and can be applied effectively and stably to retinal image analysis, providing an effective basis for early clinical treatment.
- Published
- 2022
- Full Text
- View/download PDF
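The Charbonnier loss named in the record above is a standard robust alternative to MSE in super-resolution work. A minimal PyTorch sketch, assuming the commonly used epsilon of 1e-3 (the paper's exact constant and its TV-term weighting are not given here):

```python
import torch

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier loss: a smooth, outlier-robust blend of L1 and L2.

    eps is a small smoothing constant; 1e-3 is a common choice,
    assumed here rather than taken from the paper.
    """
    return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))
```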
3. End-to-end residual attention mechanism for cataractous retinal image dehazing
- Author
-
Defu Qiu, Yuhu Cheng, and Xuesong Wang
- Subjects
Retinal Diseases, Disease Progression, Image Processing, Computer-Assisted, Humans, Health Informatics, Neural Networks, Computer, Cataract, Retina, Software, Computer Science Applications
- Abstract
Cataract is one of the most common causes of vision loss. Light scattering due to clouding of the lens makes it extremely difficult to image the retina of cataract patients with fundus cameras, seriously degrading the quality of the retinal images obtained. Furthermore, cataract patients are generally elderly and often have other retinal diseases in addition to cataracts, which poses great challenges to experts making clinical diagnoses from their retinal images. In this paper, we present the End-to-End Residual Attention Mechanism (ERAN) for cataractous retinal image dehazing, which includes four modules: an encoding module, a multi-scale feature extraction module, a feature fusion module, and a decoding module. The encoding module encodes the input hazy cataract image into a feature representation, facilitating subsequent feature extraction and reducing memory usage. The multi-scale feature extraction module includes a dilated convolution module, a residual block, and an adaptive skip connection, which can expand the receptive field and extract features at different scales through weighted screening for fusion. The feature fusion module uses adaptive skip connections to enhance the network's ability to extract the haze density image, making haze removal more thorough. Finally, the decoding module performs a non-linear mapping on the fused features to obtain the haze density image, from which the haze-free image is restored. Experimental results show that the proposed method achieves better objective and subjective evaluation results and a better dehazing effect. Our proposed ERAN method not only provides visually better images, but also helps experts better diagnose other retinal diseases in cataract patients, leading to better care and treatment.
- Published
- 2022
- Full Text
- View/download PDF
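A hedged PyTorch sketch of the multi-scale dilated-convolution idea in the record above. The branch count, dilation rates, and 1x1 fusion layer are illustrative assumptions, not the paper's exact ERAN configuration:

```python
import torch
import torch.nn as nn

class MultiScaleDilated(nn.Module):
    """Parallel dilated convolutions fused by a 1x1 conv, with a
    residual (skip) connection; a sketch of multi-scale extraction."""
    def __init__(self, channels, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r)
            for r in rates
        )
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x):
        # Each branch sees a different receptive field; concatenate
        # and let the 1x1 conv weight the scales against each other.
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return x + self.fuse(feats)
```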
4. Dual U-Net residual networks for cardiac magnetic resonance images super-resolution
- Author
-
Defu Qiu, Yuhu Cheng, and Xuesong Wang
- Subjects
Heart Diseases, Disease Progression, Image Processing, Computer-Assisted, Humans, Health Informatics, Signal-To-Noise Ratio, Magnetic Resonance Imaging, Algorithms, Software, Computer Science Applications
- Abstract
Heart disease is a serious threat to human health and a leading cause of death, and its incidence keeps trending upward. Cardiac magnetic resonance (CMR) imaging can provide a full range of structural and functional information about the heart, and has become an important tool for the diagnosis and treatment of heart disease. Therefore, improving the image resolution of CMR has important medical value for the diagnosis and assessment of heart disease. At present, most single-image super-resolution (SISR) reconstruction methods suffer from serious problems, such as insufficient mining of feature information, difficulty in determining the dependence among the channels of a feature map, and reconstruction error when reconstructing high-resolution images. To solve these problems, we propose a dual U-Net residual network (DURN) for super-resolution of CMR images. Specifically, we first propose a U-Net residual network (URN) model, which is divided into an up-branch and a down-branch: the up-branch is composed of residual blocks and up-blocks to extract and upsample deep features, while the down-branch is composed of residual blocks and down-blocks to extract and downsample deep features. Building on the URN model, the DURN model combines the deep features extracted at the same positions of the first and second URN through residual connections, making full use of the features extracted by the first URN to extract deeper features from the low-resolution images. When the scale factors are 2, 3, and 4, DURN obtains 37.86 dB, 33.96 dB, and 31.65 dB on the Set5 dataset, which represents (i) a maximum improvement of 4.17 dB, 3.55 dB, and 3.22 dB over the Bicubic algorithm, and (ii) a minimum improvement of 0.34 dB, 0.14 dB, and 0.11 dB over the LapSRN algorithm. Comprehensive experimental results on benchmark datasets demonstrate that our proposed DURN not only achieves better peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values than other state-of-the-art SR algorithms, but also reconstructs clearer super-resolution CMR images with richer details, edges, and texture.
- Published
- 2022
- Full Text
- View/download PDF
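A minimal PyTorch sketch of the kind of BN-free residual block such SR branches stack; the channel width and activation are assumptions, and the DURN coupling is indicated only in the closing comment:

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """BN-free residual block of the kind SR networks stack per branch."""
    def __init__(self, ch=64):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return x + self.conv2(self.act(self.conv1(x)))

# DURN-style coupling (sketch): feature maps at matching positions of
# the two URNs are combined through residual connections, e.g.
#   f2_i = urn2_block_i(f2_{i-1}) + f1_i
```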
5. Heterogeneous domain adaptation network based on autoencoder
- Author
-
Yuhu Cheng, Yuting Ma, Xuesong Wang, Liang Zou, and Joel J. P. C. Rodrigues
- Subjects
Manifold alignment, Computer Networks and Communications, Computer science, business.industry, Feature vector, Pattern recognition, 02 engineering and technology, Conditional probability distribution, Autoencoder, Theoretical Computer Science, Consistency (database systems), Artificial Intelligence, Hardware and Architecture, Feature (computer vision), 020204 information systems, Metric (mathematics), 0202 electrical engineering, electronic engineering, information engineering, Probability distribution, 020201 artificial intelligence & image processing, Artificial intelligence, business, Software
- Abstract
Heterogeneous domain adaptation is a more challenging problem than homogeneous domain adaptation. With shallow structures, the transfer effect is often not ideal, because such structures cannot adequately describe the probability distributions or extract sufficiently effective features. In this paper, we propose a heterogeneous domain adaptation network based on autoencoders (HDANA), in which two sets of autoencoder networks project the source-domain and target-domain data into a shared feature space to obtain more abstract feature representations. In the final feature and classification layers, the marginal and conditional distributions are matched by an empirical maximum mean discrepancy metric to reduce the distribution difference. To preserve the consistency of geometric structure and label information, a label-based manifold alignment term is introduced; classification performance is further improved by making full use of the label information of both domains. Experimental results on 16 cross-domain transfer tasks verify that HDANA outperforms several state-of-the-art methods.
- Published
- 2018
- Full Text
- View/download PDF
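The "empirical maximum mean discrepancy metric" in the record above compares two feature distributions through their kernel means. A minimal linear-kernel version in PyTorch (the paper's kernel choice is not stated here, so the linear form is an assumption):

```python
import torch

def linear_mmd(source_feats, target_feats):
    """Empirical MMD with a linear kernel: the squared distance
    between the mean source and mean target feature vectors."""
    return ((source_feats.mean(0) - target_feats.mean(0)) ** 2).sum()
```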
6. Computational performance optimization of support vector machine based on support vectors
- Author
-
Yuhu Cheng, Xuesong Wang, and Fei Huang
- Subjects
0209 industrial biotechnology, Structured support vector machine, business.industry, Computer science, Cognitive Neuroscience, Pattern recognition, 02 engineering and technology, computer.software_genre, Computer Science Applications, Support vector machine, Set (abstract data type), ComputingMethodologies_PATTERNRECOGNITION, 020901 industrial engineering & automation, Dimension (vector space), Hyperplane, Artificial Intelligence, Sample size determination, Ranking SVM, 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, Artificial intelligence, Data mining, Intrinsic dimension, business, computer
- Abstract
The computational performance of the support vector machine (SVM) mainly depends on the size and dimension of the training sample set. Because support vectors are decisive in determining the SVM classification hyperplane, a method for optimizing the computational performance of the SVM based on support vectors is proposed. On one hand, during the selection of the SVM's super-parameters, and according to the Karush-Kuhn-Tucker (KKT) conditions, we eliminate non-support vectors from the training sample set without losing potential support vectors, reducing the sample size and thereby the computational complexity of the SVM. On the other hand, we propose a simple intrinsic dimension estimation method for the SVM training sample set by analyzing the correlation between the number of support vectors and the intrinsic dimension. Comparative experimental results indicate that the proposed method can effectively improve computational performance.
- Published
- 2016
- Full Text
- View/download PDF
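A sketch of the pruning idea using scikit-learn as a stand-in: after one training pass, the KKT conditions mark samples with y*f(x) > 1 as non-support vectors, which can be dropped before further training. The tolerance and the paper's safeguard against losing potential support vectors across parameter values are simplified away:

```python
from sklearn.svm import SVC

def prune_non_support_vectors(X, y, C=1.0, gamma="scale", tol=1e-3):
    """Drop samples the KKT conditions mark as non-support vectors
    (y * f(x) > 1), keeping margin and misclassified points.
    Assumes binary labels y in {-1, +1}."""
    svm = SVC(C=C, kernel="rbf", gamma=gamma).fit(X, y)
    margin = y * svm.decision_function(X)
    keep = margin <= 1.0 + tol
    return X[keep], y[keep]
```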
7. Thermal analysis of a canned switched reluctance drive with a novel network
- Author
-
Xuesong Wang, Yuhu Cheng, and Qiang Yu
- Subjects
Engineering, business.industry, 020209 energy, 020208 electrical & electronic engineering, Process (computing), Energy Engineering and Power Technology, Mechanical engineering, 02 engineering and technology, Industrial and Manufacturing Engineering, Finite element method, Switched reluctance motor, Compensation (engineering), Control theory, Thermal, 0202 electrical engineering, electronic engineering, information engineering, Thermal analysis, business, Hydraulic pump, Network model
- Abstract
This paper presents the thermal characteristics of a novel canned switched reluctance machine (SRM) used as a hydraulic pump drive. Because of the considerable ohmic loss in the can shield structure, thermal analysis is essential. A novel lumped-parameter network model featuring compensation elements is proposed; as a result, calculation accuracy is improved by removing the traditional systematic error. The modeling process, including the thermal resistances and compensation elements, is described in detail. The accuracy of the model is validated by both the finite element (FE) method and measurement.
- Published
- 2016
- Full Text
- View/download PDF
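A lumped-parameter thermal network of the kind described reduces to solving a linear system G*T = P over node temperatures, where G collects inverse thermal resistances and P the heat inputs. A toy two-node example with illustrative values, not the paper's machine data:

```python
import numpy as np

# Two internal nodes plus ambient (node 0). Resistances in K/W.
R12, R10, R20 = 0.5, 2.0, 1.0
P = np.array([40.0, 15.0])            # heat input at nodes 1 and 2 (W)
G = np.array([[1/R12 + 1/R10, -1/R12],
              [-1/R12, 1/R12 + 1/R20]])
T = np.linalg.solve(G, P)             # temperature rise above ambient (K)
print(T)
```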
8. A non-negative sparse semi-supervised dimensionality reduction algorithm for hyperspectral data
- Author
-
Yang Gao, Xuesong Wang, and Yuhu Cheng
- Subjects
business.industry, Cognitive Neuroscience, Dimensionality reduction, Hyperspectral imaging, 020206 networking & telecommunications, Pattern recognition, Sample (statistics), 02 engineering and technology, Sparse approximation, Semi-supervised learning, computer.software_genre, Computer Science Applications, Statistics::Machine Learning, ComputingMethodologies_PATTERNRECOGNITION, Artificial Intelligence, Margin (machine learning), 0202 electrical engineering, electronic engineering, information engineering, Graph (abstract data type), Adjacency list, 020201 artificial intelligence & image processing, Data mining, Artificial intelligence, business, computer, Mathematics
- Abstract
A non-negative sparse semi-supervised dimensionality reduction algorithm is proposed for hyperspectral data, making adequate use of a few labeled samples and a large number of unlabeled samples. The objective function of the proposed algorithm consists of two terms: (1) a discriminant term, designed to analyze the few labeled samples from a global viewpoint, which can assess the separability between surface objects; and (2) a regularization term, used to build a non-negative sparse representation graph over the unlabeled samples, which can adaptively find an adjacency graph for each sample and then identify valuable, information-rich samples in the original hyperspectral data. Based on this objective function and the maximum margin criterion, a dimensionality reduction algorithm, the non-negative sparse semi-supervised maximum margin algorithm, is derived. Experimental results on the ROSIS University and AVIRIS 92AV3C hyperspectral data sets show that the proposed algorithm can effectively exploit the unlabeled samples to achieve higher overall classification accuracy and Kappa coefficient than representative supervised, unsupervised and semi-supervised dimensionality reduction algorithms.
- Published
- 2016
- Full Text
- View/download PDF
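The maximum margin criterion mentioned above seeks projection directions that maximize between-class minus within-class scatter. A sketch of that supervised term alone; the non-negative sparse graph regularizer over unlabeled samples is omitted:

```python
import numpy as np

def mmc_projection(X, y, k):
    """Project onto the top-k eigenvectors of S_b - S_w
    (between-class minus within-class scatter)."""
    mean = X.mean(0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        diff = (Xc.mean(0) - mean)[:, None]
        Sb += len(Xc) * diff @ diff.T
        Sw += (Xc - Xc.mean(0)).T @ (Xc - Xc.mean(0))
    vals, vecs = np.linalg.eigh(Sb - Sw)
    top = np.argsort(vals)[::-1][:k]
    return X @ vecs[:, top]
```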
9. Multi-window back-projection residual networks for reconstructing COVID-19 CT super-resolution images
- Author
-
Defu Qiu, Xuesong Wang, Yuhu Cheng, and Xiaoqiang Zhang
- Subjects
Computer science, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Multi-window, Health Informatics, Residual, Convolutional neural network, Article, 030218 nuclear medicine & medical imaging, Convolution, 03 medical and health sciences, Back-projection, Deep Learning, 0302 clinical medicine, Humans, Image resolution, SARS-CoV-2, business.industry, COVID-19, Pattern recognition, Filter (signal processing), Image Enhancement, Coronavirus disease, Computer Science Applications, Residual networks, Feature (computer vision), Super-resolution, Benchmark (computing), Artificial intelligence, Tomography, X-Ray Computed, business, Algorithms, 030217 neurology & neurosurgery, Software, Dilated convolution
- Abstract
Highlights
• An effective multi-window back-projection residual network is developed to reconstruct COVID-19 CT super-resolution images.
• A multi-window back-projection residual network structure is designed.
• Drawing on the advantages of deep super-resolution reconstruction networks, residual blocks are used to deepen the network and effectively improve image quality.
• The lack of correlation among feature information in COVID-19 CT images is addressed.
• The proposed super-resolution method performs better than state-of-the-art methods.
Background and objective: With the worsening of the coronavirus disease 2019 (COVID-19) pandemic worldwide, improving the image resolution of COVID-19 computed tomography (CT) has become a very important task. At present, single-image super-resolution (SISR) models based on convolutional neural networks (CNN) generally suffer from problems such as the loss of high-frequency information and large model size caused by the deep network structure. Methods: In this work, we propose an optimization model based on a multi-window back-projection residual network (MWSR), which outperforms most state-of-the-art methods. Firstly, we use multiple windows to refine the same feature map simultaneously to obtain richer high- and low-frequency information, then fuse and filter out the features needed by the deep network. Then, we develop a back-projection network based on dilated convolution, using up-projection and down-projection modules to extract image features. Finally, we merge several repeated and continuous residual modules with global features, merge the information flow through the network, and feed it to the reconstruction module. Results: The proposed method outperforms state-of-the-art methods on the benchmark dataset and generates clear COVID-19 CT super-resolution images. Conclusion: Both subjective visual quality and objective evaluation indicators are improved, and the model size is optimized. Therefore, the MWSR method can improve the clarity of COVID-19 CT images and effectively assist in the diagnosis and quantitative assessment of COVID-19.
- Published
- 2021
- Full Text
- View/download PDF
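The up-projection and down-projection modules referenced above follow the iterative back-projection pattern: upsample, project back down, and correct with the residual. A PyTorch sketch modeled on that pattern; the kernel sizes are assumptions and the paper's dilated-convolution details are not reproduced:

```python
import torch.nn as nn

class UpProjection(nn.Module):
    """Back-projection-style up-projection unit (sketch)."""
    def __init__(self, ch, scale=2):
        super().__init__()
        k, s, p = 2 * scale, scale, scale // 2
        self.up1 = nn.ConvTranspose2d(ch, ch, k, s, p)
        self.down = nn.Conv2d(ch, ch, k, s, p)
        self.up2 = nn.ConvTranspose2d(ch, ch, k, s, p)

    def forward(self, lr):
        hr0 = self.up1(lr)           # first guess of the HR feature
        err = self.down(hr0) - lr    # back-project and measure residual
        return hr0 + self.up2(err)   # correct the HR estimate
```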
10. Super-parameter selection for Gaussian-Kernel SVM based on outlier-resisting
- Author
-
Yuhu Cheng, Xuesong Wang, and Fei Huang
- Subjects
Computational complexity theory, business.industry, Applied Mathematics, Pattern recognition, Condensed Matter Physics, Support vector machine, Set (abstract data type), symbols.namesake, Kernel (statistics), Outlier, Benchmark (computing), Gaussian function, symbols, Artificial intelligence, Electrical and Electronic Engineering, business, Instrumentation, Selection (genetic algorithm), Mathematics
- Abstract
The learning ability and generalization performance of the support vector machine (SVM) mainly rely on the reasonable selection of super-parameters. When the training sample set is large and the parameter space is huge, existing popular super-parameter selection methods are impractical due to their high computational complexity. In this paper, a novel super-parameter selection method for the SVM with a Gaussian kernel is proposed, which proceeds in two stages. The first stage chooses the kernel parameter so that a sufficiently large number of potential support vectors are retained in the training sample set. The second stage screens outliers out of the training sample set by assigning a special value to the penalty factor, and then trains the optimal penalty factor on the remaining outlier-free training sample set. The whole super-parameter selection process needs only two train-validate cycles, so the computational complexity of our method is low. Comparative experimental results on 8 benchmark datasets show that our method achieves high classification accuracy and desirable training time.
- Published
- 2014
- Full Text
- View/download PDF
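A rough illustration of the two-stage, two-pass structure using scikit-learn; plain cross-validation accuracy stands in for the paper's support-vector-retention and outlier-screening criteria:

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def two_stage_selection(X, y, gammas, Cs):
    """Stage 1 fixes the Gaussian kernel width, stage 2 the penalty
    factor; each stage is a single train-validate pass."""
    best_g = max(gammas, key=lambda g: cross_val_score(
        SVC(kernel="rbf", gamma=g, C=1.0), X, y, cv=3).mean())
    best_C = max(Cs, key=lambda C: cross_val_score(
        SVC(kernel="rbf", gamma=best_g, C=C), X, y, cv=3).mean())
    return best_g, best_C
```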
11. Multi-source transfer ELM-based Q learning
- Author
-
Yuhu Cheng, Xuesong Wang, Ge Cao, and Jie Pan
- Subjects
business.industry, Active learning (machine learning), Computer science, Cognitive Neuroscience, Q-learning, Multi-task learning, Semi-supervised learning, Machine learning, computer.software_genre, Generalization error, Computer Science Applications, Inductive transfer, Artificial Intelligence, Learning disability, medicine, Unsupervised learning, Artificial intelligence, Instance-based learning, medicine.symptom, Transfer of learning, business, computer, Extreme learning machine
- Abstract
The extreme learning machine (ELM) has the advantages of good generalization, a simple structure and convenient computation. Therefore, an ELM-based Q learning algorithm is proposed, using an ELM as the Q-value function approximator, which suits large-scale or continuous-space problems; this is the first contribution of this paper. Because the number of ELM hidden-layer nodes equals the number of training samples, a large sample size seriously slows learning, so a rolling time-window mechanism is introduced into ELM-based Q learning to limit the size of the ELM's training set. In addition, to reduce the difficulty of learning new tasks, transfer learning is introduced into ELM-based Q learning; transfer learning can reuse past experience and knowledge to solve current problems. The second contribution is thus a multi-source transfer ELM-based Q learning algorithm (MST-ELMQ), which takes full advantage of valuable information from multiple source tasks while avoiding the negative transfer caused by irrelevant information. Following Bayesian theory, each source task is assigned a task transfer weight and each source sample a sample transfer weight; these weights determine the number and manner of transferred samples. Samples with large sample transfer weights are selected from each source task and assist the Q learning agent in making quick decisions for the target task. Simulation results on a boat problem show that MST-ELMQ performs better than Q learning without a source task or with a single source task, i.e., it can effectively reduce learning difficulty and find an optimal solution with less training.
- Published
- 2014
- Full Text
- View/download PDF
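An ELM used as a Q-value approximator reduces to a random hidden layer plus a least-squares solve for the output weights. A minimal sketch (hidden size, activation, and the state-action feature encoding are assumptions; the rolling time-window and transfer weighting are omitted):

```python
import numpy as np

class ELMQ:
    """Extreme learning machine as a Q-value approximator (sketch)."""
    def __init__(self, in_dim, hidden=100, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(in_dim, hidden))  # fixed random weights
        self.b = rng.normal(size=hidden)
        self.beta = np.zeros(hidden)

    def _h(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, q_targets):
        # Output weights by least squares (Moore-Penrose pseudoinverse).
        self.beta = np.linalg.pinv(self._h(X)) @ q_targets
        return self

    def predict(self, X):
        return self._h(X) @ self.beta
```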
12. Experimental study on the anti-fouling effects of Ni–Cu–P-PTFE deposit surface of heat exchangers
- Author
-
T. C. Jen, Yuhu Cheng, Zhencai Zhu, Yu Xing Peng, and Chen Hao
- Subjects
Matrix (chemical analysis), Morphology (linguistics), Materials science, Fouling, Heat exchanger, Metallurgy, Energy Engineering and Power Technology, Adhesion, Composite material, Microstructure, Indentation hardness, Industrial and Manufacturing Engineering, Surface energy
- Abstract
The purpose of the present study was to investigate the effect of electroless Ni–Cu–P-PTFE deposit surfaces on the anti-fouling behavior of heat exchangers, considered as a way to mitigate the accumulation of mineral fouling. Electroless Ni–Cu–P-PTFE deposits with various PTFE contents were prepared on mild steel (1015) substrate surfaces under different process parameters. Surface morphology and microhardness were investigated using SEM and an MH-6 Vickers tester, respectively. The results showed that the addition of PTFE particles to the Ni–Cu–P matrix hardly affected the microstructure of the deposits, while microhardness decreased with increasing PTFE content. Moreover, the surface free energy of the Ni–Cu–P-PTFE deposits decreased as the PTFE content increased. Fouling experiments further indicated that the surfaces of Ni–Cu–P-PTFE deposits with different PTFE contents inhibited the adhesion of fouling compared with the mild steel surfaces of the heat exchangers. The adhered fouling weight was approximately in inverse proportion to the PTFE content of the deposits, but not to the surface roughness; the anti-fouling property could not be improved appreciably even by making the Ni–Cu–P-PTFE coatings smoother.
- Published
- 2014
- Full Text
- View/download PDF
13. Efficient data use in incremental actor–critic algorithms
- Author
-
Xuesong Wang, Huan-Ting Feng, and Yuhu Cheng
- Subjects
Mathematical optimization, Markov chain, Computer science, Covariance matrix, Cognitive Neuroscience, Population-based incremental learning, Computer Science Applications, Inverted pendulum, Function approximation, Artificial Intelligence, Reinforcement learning, Temporal difference learning, Algorithm, TRACE (psycholinguistics)
- Abstract
Actor–critic (AC) reinforcement learning methods are on-line approximations to policy iteration and are widely applied to large-scale Markov decision problems and high-dimensional learning control problems. To overcome the data inefficiency of incremental AC algorithms based on temporal difference learning (AC-TD), two new incremental AC algorithms (AC-RLSTD and AC-iLSTD) are proposed, applying a recursive least-squares TD algorithm (RLSTD(λ)) and an incremental least-squares TD algorithm (iLSTD(λ)) to the Critic's evaluation, which makes more efficient use of data than TD. The Critic estimates the value function using the RLSTD(λ) or iLSTD(λ) algorithm, and the Actor updates the policy with a regular gradient obtained from the TD error. The improvement in the Critic's evaluation efficiency contributes to the improvement in the Actor's policy learning performance. Simulation results on the learning control of an inverted pendulum and a mountain-car problem illustrate the effectiveness of the two proposed AC algorithms compared with the AC-TD algorithm. In addition, AC-iLSTD with a greedy selection mechanism performs much better than AC-iLSTD with a random selection mechanism. The simulations also analyze how different settings of the eligibility trace affect the learning performance of the AC algorithms, and show that the initial value of the variance matrix in the AC-RLSTD algorithm should be chosen appropriately for each learning problem to obtain good performance.
- Published
- 2013
- Full Text
- View/download PDF
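A textbook-form RLSTD(λ) critic update of the kind AC-RLSTD builds on; the parameter values are illustrative:

```python
import numpy as np

class RLSTD:
    """Recursive least-squares TD(lambda) value-function estimator."""
    def __init__(self, n_features, gamma=0.99, lam=0.8, p0=1.0):
        self.theta = np.zeros(n_features)      # value-function weights
        self.z = np.zeros(n_features)          # eligibility trace
        self.P = np.eye(n_features) * p0       # variance matrix
        self.gamma, self.lam = gamma, lam

    def update(self, phi, phi_next, reward):
        self.z = self.lam * self.z + phi
        d = phi - self.gamma * phi_next
        Pz = self.P @ self.z
        k = Pz / (1.0 + d @ Pz)
        td_error = reward - d @ self.theta
        self.theta += k * td_error
        self.P -= np.outer(k, d @ self.P)
        return td_error
```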
14. Fault diagnosis using a probability least squares support vector classification machine
- Author
-
Yuhu Cheng, Jie Pan, Xuesong Wang, and Yang Gao
- Subjects
Engineering, Structured support vector machine, Artificial neural network, business.industry, Generalization, Energy Engineering and Power Technology, Pattern recognition, Geotechnical Engineering and Engineering Geology, computer.software_genre, Fault (power engineering), Least squares, Support vector machine, Relevance vector machine, Geochemistry and Petrology, Least squares support vector machine, Data mining, Artificial intelligence, business, computer
- Abstract
Coal mines require various kinds of machinery, and the fault diagnosis of this equipment has a great impact on mine production. The problem of incorrect classification of noisy data by traditional support vector machines is addressed by the proposed Probability Least Squares Support Vector Classification Machine (PLSSVCM). Samples that cannot be definitely assigned to one class are assigned by the PLSSVCM to a class based on a probability value, giving the classification results both a qualitative explanation and a quantitative evaluation. Simulation results on a fault diagnosis task show that the correct rate of the PLSSVCM is 100%; even when samples are noisy, the PLSSVCM can still effectively realize multi-class fault diagnosis of a roller bearing. The generalization of the PLSSVCM is better than that of a neural network and a LSSVCM.
- Published
- 2010
- Full Text
- View/download PDF
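A hedged stand-in for the probability-output idea using Platt scaling on a standard SVC; the paper's formulation is least-squares-based, which scikit-learn does not expose directly:

```python
from sklearn.svm import SVC

def probabilistic_fault_diagnosis(X_train, y_train, X_test):
    """Graded class assignment for ambiguous (noisy) samples:
    probabilities give the quantitative evaluation, the argmax
    the qualitative decision."""
    clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
    proba = clf.predict_proba(X_test)
    labels = clf.classes_[proba.argmax(axis=1)]
    return labels, proba
```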
15. On the use of differential evolution for forward kinematics of parallel manipulators
- Author
-
Yuhu Cheng, Xuesong Wang, and Ming-Lin Hao
- Subjects
Mathematical optimization, Forward kinematics, Optimization problem, Inverse kinematics, Applied Mathematics, Parallel manipulator, Parallel algorithm, Computer Science::Robotics, Computational Mathematics, Kinematics equations, Control theory, Differential evolution, Global optimization, Mathematics
- Abstract
Differential evolution (DE) is a real-valued, number-encoded evolutionary strategy for global optimization. It has been shown to be an efficient, effective and robust optimization algorithm, especially for problems with continuous variables. We apply a DE algorithm to solve the forward kinematics problem of parallel manipulators: the forward kinematics is transformed into an optimization problem by exploiting the fact that the inverse kinematics of a parallel manipulator is easy to obtain, and DE is then used to find a globally optimal solution of the forward kinematics. A comparison of numerical simulation results for a pneumatic 6-SPS parallel manipulator using DE, a genetic algorithm and particle swarm optimization shows that the DE-based method performs well in terms of solution quality, reliability and speed of convergence. Notably, the proposed method is also suitable for various other types of parallel manipulators, providing a new way to solve their forward kinematics.
- Published
- 2008
- Full Text
- View/download PDF
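The reduction described above maps directly to code: minimize the squared mismatch between the measured leg lengths and the lengths predicted by the (easy) inverse kinematics. A sketch with SciPy's DE implementation; `inverse_kinematics` and the pose bounds are placeholders the caller must supply for a specific manipulator:

```python
import numpy as np
from scipy.optimize import differential_evolution

def forward_kinematics(measured_lengths, inverse_kinematics, pose_bounds):
    """Solve FK as global optimization over the pose
    (e.g. x, y, z, roll, pitch, yaw)."""
    def residual(pose):
        return np.sum((inverse_kinematics(pose) - measured_lengths) ** 2)
    result = differential_evolution(residual, pose_bounds, seed=0)
    return result.x
```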
16. Modeling and self-tuning pressure regulator design for pneumatic-pressure–load systems
- Author
-
Xuesong Wang, Yuhu Cheng, and Guangzheng Peng
- Subjects
Engineering, Adaptive control, Pressure control, business.industry, Applied Mathematics, Self-tuning, Orifice plate, Control engineering, Kalman filter, Pressure regulator, Linear-quadratic-Gaussian control, Computer Science Applications, Nonlinear system, Control and Systems Engineering, Control theory, Electrical and Electronic Engineering, business
- Abstract
This paper presents a dynamic model and a design method for an accurate self-tuning pressure regulator for pneumatic pressure–load systems, which are nonlinear and time-varying. A mathematical model is derived, consisting of a chamber continuity equation, an orifice flow equation and a force balance equation for the spool. Based on a theoretical analysis of the system dynamics, a third-order controlled auto-regressive moving average (CARMA) model is used to describe practical pressure–load systems. A linear quadratic Gaussian self-tuning pressure regulator is then designed to realize adaptive control of the chamber pressure. Because the system parameters are time-varying and the system states are difficult to measure, a recursive forgetting-factor least-squares algorithm and Kalman filtering are adopted to estimate the parameters and the states, respectively. Experimental results show that the proposed self-tuning pressure regulator adapts to parameters that vary with factors such as the chamber volume and the pressure set-point, and achieves better dynamic and static performance.
- Published
- 2007
- Full Text
- View/download PDF
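The recursive forgetting-factor least-squares estimator mentioned above, in generic textbook form; building the regressor vector for the third-order CARMA model is left to the caller:

```python
import numpy as np

class ForgettingRLS:
    """Recursive least squares with a forgetting factor, for tracking
    time-varying model parameters."""
    def __init__(self, n_params, lam=0.98, p0=1e3):
        self.theta = np.zeros(n_params)     # parameter estimate
        self.P = np.eye(n_params) * p0      # covariance estimate
        self.lam = lam                      # forgetting factor < 1

    def update(self, phi, y):
        # phi: regressor (past outputs/inputs), y: new measurement.
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)
        self.theta += k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta
```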
17. A fuzzy Actor–Critic reinforcement learning network
- Author
-
Xuesong Wang, Yuhu Cheng, and Jianqiang Yi
- Subjects
Information Systems and Management, Learning classifier system, Artificial neural network, Computer science, business.industry, Q-learning, Machine learning, computer.software_genre, Fuzzy logic, Computer Science Applications, Theoretical Computer Science, Artificial Intelligence, Control and Systems Engineering, Bellman equation, Reinforcement learning, Radial basis function, Artificial intelligence, business, computer, Software
- Abstract
One of the difficulties encountered in applying reinforcement learning methods to real-world problems is their limited ability to cope with large-scale or continuous spaces. To solve the curse-of-dimensionality problem that results from discretizing continuous state or action spaces, a new fuzzy Actor-Critic reinforcement learning network (FACRLN) based on a fuzzy radial basis function (FRBF) neural network is proposed. The architecture of FACRLN is realized by a four-layer FRBF neural network that simultaneously approximates both the action value function of the Actor and the state value function of the Critic. The Actor and Critic networks share the input, rule and normalized layers of the FRBF network, which reduces the learning system's storage requirements and avoids repeated computation of the rule-unit outputs. Moreover, the FRBF network adjusts its structure and parameters adaptively with a novel self-organizing approach, according to the complexity of the task and the progress of learning, which keeps the network economically sized. Experimental studies on cart-pole balancing control illustrate the performance and applicability of the proposed FACRLN.
- Published
- 2007
- Full Text
- View/download PDF
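A minimal sketch of the layer-sharing idea in the record above: one normalized Gaussian rule layer feeding separate actor and critic output weights. FACRLN's self-organizing structure learning is not reproduced; the centers and width are assumed fixed:

```python
import numpy as np

class SharedRBFActorCritic:
    """Shared normalized Gaussian rule layer with separate actor and
    critic heads, so rule activations are computed only once."""
    def __init__(self, centers, sigma, n_actions):
        self.centers, self.sigma = centers, sigma   # centers: (n_rules, dim)
        n_rules = len(centers)
        self.w_actor = np.zeros((n_rules, n_actions))
        self.w_critic = np.zeros(n_rules)

    def rules(self, state):
        # Gaussian firing strengths, normalized (the shared layers).
        g = np.exp(-np.sum((self.centers - state) ** 2, axis=1)
                   / (2 * self.sigma ** 2))
        return g / g.sum()

    def forward(self, state):
        phi = self.rules(state)
        return phi @ self.w_actor, phi @ self.w_critic  # action prefs, value
```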