18 results for "Zou, Yuan"
Search Results
2. Predictive Energy Management of Plug-in Hybrid Electric Vehicles by Real-Time Optimization and Data-Driven Calibration.
- Author
- Guo, Ningyuan, Zhang, Xudong, Zou, Yuan, Du, Guangze, Wang, Chao, and Guo, Lingxiong
- Subjects
- HYBRID electric vehicles, PLUG-in hybrid electric vehicles, ENERGY management, CALIBRATION, HARDWARE-in-the-loop simulation, POLYNOMIAL approximation
- Abstract
This article proposes a predictive energy management strategy for plug-in hybrid electric vehicles based on real-time optimization and data-driven calibration. The powertrain model and physical constraints, including the engine, battery, and generator, are simplified by polynomial fitting approximations, which preserve the system nonlinearities with acceptable accuracy. To mitigate the control complexity, the physical constraints of the engine, generator, and battery are merged into a unified constraint by methodical derivation. A nonlinear model predictive control problem is established, and the continuation/generalized minimal residual (C/GMRES) algorithm is proposed for real-time optimization. Since the original C/GMRES algorithm can only handle equality constraints, the external penalty method is adopted to handle the inequality constraints. To tackle the difficulty of parameter tuning, a Bayesian optimization (BO) algorithm is proposed. Based on prior knowledge from closed-loop experiments, the map between parameters and objective can be described by a Gaussian process, and the control parameters can be optimized with few evaluations in BO. Moreover, owing to the real-time applicability of the C/GMRES algorithm, the duration of the closed-loop experiments is reduced, so the calculation time of the BO calibration is saved, exhibiting a superior design for predictive energy management. Simulation and hardware-in-the-loop validations are carried out and verify the energy-saving effectiveness and real-time applicability of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2022
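The BO calibration step this abstract describes, modeling the map from controller parameters to a closed-loop objective with a Gaussian process and optimizing it with few evaluations, can be illustrated with a minimal sketch. Everything below (the RBF kernel, the one-dimensional toy objective standing in for closed-loop fuel consumption, the grid-based acquisition search) is a generic illustration, not the paper's implementation:

```python
import math
import numpy as np

def rbf_kernel(a, b, length=0.2, var=1.0):
    # squared-exponential kernel between two 1-D input arrays
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # GP posterior mean/std at test points Xs given observations (X, y)
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf_kernel(X, Xs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xs, Xs)) - np.sum(v ** 2, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI for minimization: expected reduction below the best value so far
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (best - mu) * cdf + sigma * pdf

def objective(theta):
    # hypothetical stand-in for closed-loop fuel consumption vs. one tuning knob
    return (theta - 0.6) ** 2 + 0.05 * math.sin(8.0 * theta)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 3)                 # initial closed-loop "experiments"
y = np.array([objective(t) for t in X])
grid = np.linspace(0.0, 1.0, 201)

for _ in range(12):                          # few evaluations, as BO promises
    mu, sigma = gp_posterior(X, y, grid)
    theta = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, theta)
    y = np.append(y, objective(theta))

best = X[np.argmin(y)]                       # calibrated parameter
```

In the paper's setting each call to `objective` would be a whole closed-loop experiment, which is why BO's sample efficiency matters.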
3. An integrated control strategy of path following and lateral motion stabilization for autonomous distributed drive electric vehicles.
- Author
- Zou, Yuan, Guo, Ningyuan, and Zhang, Xudong
- Subjects
- AUTONOMOUS vehicles, ELECTRIC drives, ELECTRIC vehicles, MOTOR vehicle driving, QUADRATIC programming, HYBRID electric vehicles, AUTOMOBILE steering gear
- Abstract
This article proposes an integrated control strategy for autonomous distributed drive electric vehicles. First, to handle the multiple constraints of the integrated path-following and yaw-motion control problem, a model predictive control technique is applied to determine the optimal front-wheel steering angle and external yaw moment synthetically and synchronously. To ensure the desired path-tracking performance and vehicle lateral stability, a series of imperative state constraints and control references are expressed in matrix form and imposed on the rolling optimization mechanism of model predictive control, where the detailed derivation is also illustrated and analyzed. Then, a quadratic programming algorithm is employed to optimize and distribute each in-wheel motor's torque output. Finally, numerical simulation validations are carried out and analyzed in depth by comparison with a linear quadratic regulator-based strategy, proving the effectiveness and control efficacy of the proposed strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2021
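The torque-distribution step described above, a quadratic program that splits a demanded drive force and external yaw moment across four in-wheel motors, reduces in its simplest equality-constrained form to a minimum-norm least-squares problem with a closed-form solution. The vehicle parameters and demands below are hypothetical stand-ins, and the real allocator would add actuator limits and weighting:

```python
import numpy as np

# Hypothetical vehicle parameters (illustrative only)
r = 0.3   # wheel radius [m]
B = 1.6   # track width [m]

def allocate_torques(Fx_total, Mz):
    """Minimum-norm torque split over four in-wheel motors (fl, fr, rl, rr),
    subject to a total longitudinal force and an external yaw moment."""
    # Row 1: sum of wheel drive forces equals Fx_total
    # Row 2: left/right force difference produces the yaw moment Mz
    A = np.array([
        [1.0 / r,      1.0 / r,     1.0 / r,      1.0 / r],
        [-B / (2 * r), B / (2 * r), -B / (2 * r), B / (2 * r)],
    ])
    b = np.array([Fx_total, Mz])
    # pinv yields the minimum-norm solution of A T = b, i.e. the closed-form
    # answer to min ||T||^2 s.t. A T = b -- the simplest version of the QP
    return np.linalg.pinv(A) @ b

T = allocate_torques(Fx_total=2000.0, Mz=300.0)   # demands in N and N*m, invented
```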
4. Implementation of real-time energy management strategy based on reinforcement learning for hybrid electric vehicles and simulation validation.
- Author
- Kong, Zehui, Zou, Yuan, and Liu, Teng
- Subjects
- HYBRID electric vehicles, COMPUTER simulation, AUTOMOTIVE fuel consumption standards, ENERGY management, REINFORCEMENT learning, ENERGY consumption
- Abstract
To further improve the fuel economy of series hybrid electric tracked vehicles, a reinforcement learning (RL)-based real-time energy management strategy is developed in this paper. To effectively utilize the statistical characteristics of the online driving schedule, a recursive algorithm for the transition probability matrix (TPM) of the power request is derived. RL is applied to calculate and update the control policy at regular intervals, adapting to varying driving conditions. A forward-facing powertrain model is built in detail, including the engine-generator model, battery model, and vehicle dynamics model. The robustness and adaptability of the real-time energy management strategy are validated in simulation through comparison with a stationary control strategy based on an initial TPM generated from a long naturalistic driving cycle. Results indicate that the proposed method achieves better fuel economy than the stationary one and is more effective in real-time control. [ABSTRACT FROM AUTHOR]
- Published
- 2017
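The recursive TPM idea from this abstract, counting observed power-request transitions online and renormalizing rows, can be sketched in a few lines. The three discretized power-request states are illustrative, not the paper's discretization:

```python
import numpy as np

def update_tpm(counts, seq):
    """Accumulate transition counts from a newly observed power-request
    sequence and return the row-normalized transition probability matrix."""
    for s, s_next in zip(seq[:-1], seq[1:]):
        counts[s, s_next] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    n = counts.shape[0]
    # rows with no observations yet stay uniform
    return np.where(row_sums > 0, counts / np.maximum(row_sums, 1), 1.0 / n)

# three discretized power-request levels (low / medium / high), illustrative
counts = np.zeros((3, 3))
tpm = update_tpm(counts, [0, 1, 1, 2, 0, 1])
# calling update_tpm again with the next data chunk keeps refining the estimate
```

Because `counts` persists between calls, the estimate adapts recursively as new driving data arrives, which is the property the abstract relies on.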
5. Reinforcement learning-based real-time energy management for a hybrid tracked vehicle.
- Author
- Zou, Yuan, Liu, Teng, Liu, Dexing, and Sun, Fengchun
- Subjects
- ENERGY management, REINFORCEMENT learning, HYBRID electric vehicles, MARKOV processes, ENERGY consumption
- Abstract
To realize the optimal energy allocation between the engine-generator and battery of a hybrid tracked vehicle (HTV), a reinforcement learning-based real-time energy management strategy is proposed. A systematic control-oriented model of the HTV, including the battery pack, the engine-generator set (EGS), and the power request, was built and validated on a test bench. To effectively use the statistical information of the power request online, a Markov chain-based recursive algorithm for learning the transition probabilities in real time was derived and validated. The Kullback-Leibler (KL) divergence rate was adopted to determine when the transition probability matrix and the optimal control strategy are updated in real time. Reinforcement learning (RL) was applied to quantitatively compare the effects of different forgetting factors and KL divergence rates on reducing fuel consumption. RL was also used to optimize the control strategy for the HTV, compared against preliminary and dynamic programming-based control strategies. The real-time and robust performance of the proposed online energy management strategy was verified under two driving schedules collected in field tests. The simulation results indicate that the proposed RL-based energy management strategy can significantly improve fuel efficiency and can be applied in real time. [ABSTRACT FROM AUTHOR]
- Published
- 2016
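The KL divergence rate used here as an update trigger can be computed, for two row-stochastic transition matrices, as a state-distribution-weighted average of per-row KL divergences. The matrices, weights, and threshold below are made up for illustration:

```python
import numpy as np

def kl_divergence_rate(P_new, P_old, weights, eps=1e-12):
    """State-weighted average of per-row KL divergences between two
    row-stochastic transition matrices."""
    P_new = np.clip(P_new, eps, 1.0)
    P_old = np.clip(P_old, eps, 1.0)
    row_kl = np.sum(P_new * np.log(P_new / P_old), axis=1)
    return float(weights @ row_kl)

P_old = np.array([[0.9, 0.1], [0.2, 0.8]])   # matrix behind the current policy
P_new = np.array([[0.6, 0.4], [0.3, 0.7]])   # matrix learned from recent data
pi = np.array([0.5, 0.5])                    # assumed state distribution

rate = kl_divergence_rate(P_new, P_old, pi)
THRESHOLD = 0.05                             # hypothetical trigger level
should_update = rate > THRESHOLD             # re-optimize the policy if True
```

Updating only when the rate exceeds a threshold avoids re-running the optimizer for statistically insignificant changes in the driving conditions.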
6. Reinforcement Learning of Adaptive Energy Management With Transition Probability for a Hybrid Electric Tracked Vehicle.
- Author
- Liu, Teng, Zou, Yuan, Liu, Dexing, and Sun, Fengchun
- Subjects
- ENERGY management, HYBRID electric vehicles, DYNAMIC programming, ELECTRIC generators, INDUSTRIAL electronics
- Abstract
A reinforcement learning-based adaptive energy management (RLAEM) strategy is proposed for a hybrid electric tracked vehicle (HETV) in this paper. A control-oriented model of the HETV is first established, in which the state of charge (SOC) of the battery and the speed of the generator are the state variables, and the engine torque is the control variable. Subsequently, a transition probability matrix is learned from a specific driving schedule of the HETV. The proposed RLAEM decides the appropriate power split between the battery and the engine-generator set (EGS) to minimize fuel consumption over different driving schedules. With the RLAEM, not only is the driver's power requirement guaranteed, but the fuel economy is also improved. Finally, the RLAEM is compared with stochastic dynamic programming (SDP)-based energy management for different driving schedules. The simulation results demonstrate the adaptability, optimality, and learning ability of the RLAEM and its capacity to reduce computation time. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
7. High robustness energy management strategy of hybrid electric vehicle based on improved soft actor-critic deep reinforcement learning.
- Author
- Sun, Wenjing, Zou, Yuan, Zhang, Xudong, Guo, Ningyuan, Zhang, Bin, and Du, Guodong
- Subjects
- REINFORCEMENT learning, ENERGY management, HYBRID electric vehicles, DYNAMIC programming, ENERGY consumption, GREENHOUSE gas mitigation
- Abstract
As a key control technology for hybrid electric vehicles (HEVs), intelligent energy management strategies (EMSs) directly affect fuel consumption. Investigating the robustness of EMSs is necessary to maximize the energy-saving and emission-reduction advantages in different driving environments. This article proposes a soft actor-critic (SAC) deep reinforcement learning (DRL) EMS for hybrid electric tracked vehicles (HETVs). Munchausen reinforcement learning (MRL) is adopted in the SAC algorithm, and the Munchausen SAC (MSAC) algorithm is constructed to achieve lower fuel consumption than the traditional SAC method. Prioritized experience replay (PER) is adopted to achieve more reasonable experience sampling and improve the optimization effect. To enhance the "cold start" performance, a dynamic programming (DP)-assisted training method is proposed that substantially improves the training efficiency. The optimization result of the proposed method is compared with the traditional SAC and deep deterministic policy gradient (DDPG) with PER through simulation. The results show that the proposed strategy reduces fuel consumption and possesses excellent robustness under different driving cycles.
• A robust energy management strategy is established based on the SAC algorithm.
• The Munchausen reinforcement learning method is adopted in the SAC algorithm.
• Prioritized experience replay is applied to improve training efficiency.
• A DP-assisted training method is proposed to enhance the "cold start" performance.
• The proposed framework realizes better performance in fuel saving and robustness. [ABSTRACT FROM AUTHOR]
- Published
- 2022
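Prioritized experience replay, adopted in the abstract above, samples stored transitions in proportion to a power of their TD error. A minimal list-based sketch follows; production implementations use a sum-tree for O(log n) sampling, and the transitions here are placeholders:

```python
import random

class PrioritizedReplay:
    """Proportional prioritized experience replay (list-based sketch)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prios = [], []

    def push(self, transition):
        # new samples get the current max priority so each is seen at least once
        p = max(self.prios, default=1.0)
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.prios.pop(0)
        self.data.append(transition)
        self.prios.append(p)

    def sample(self, k):
        # sample indices in proportion to priority ** alpha
        weights = [p ** self.alpha for p in self.prios]
        idx = random.choices(range(len(self.data)), weights=weights, k=k)
        return idx, [self.data[i] for i in idx]

    def update_priorities(self, idx, td_errors):
        # priorities track the magnitude of each transition's TD error
        for i, e in zip(idx, td_errors):
            self.prios[i] = abs(e) + 1e-6

buf = PrioritizedReplay(capacity=100)
for t in range(10):
    buf.push(("state", t))              # stand-in transitions
idx, batch = buf.sample(4)
buf.update_priorities(idx, [0.5] * 4)   # TD errors from a hypothetical update
```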
8. Comparative Study of Dynamic Programming and Pontryagin's Minimum Principle on Energy Management for a Parallel Hybrid Electric Vehicle.
- Author
- Zou Yuan, Liu Teng, Sun Fengchun, and Huei Peng
- Subjects
- DYNAMIC programming, PONTRYAGIN'S minimum principle, ENERGY management, HYBRID electric vehicles, ENERGY conservation
- Abstract
This paper compares two optimal energy management methods for parallel hybrid electric vehicles using an automated manual transmission (AMT). A control-oriented model of the powertrain and vehicle dynamics is built first. The energy management is formulated as a typical optimal control problem trading off fuel consumption and gear-shifting frequency under admissible constraints. Dynamic Programming (DP) and Pontryagin's Minimum Principle (PMP) are applied to obtain the optimal solutions. With appropriately tuned co-states, the PMP solution is found to be very close to that from DP. The PMP solution for gear shifting has an algebraic expression associated with the vehicle velocity and can be implemented more efficiently in the control algorithm. The computation time of PMP is significantly less than that of DP. [ABSTRACT FROM AUTHOR]
- Published
- 2013
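The backward dynamic-programming recursion behind comparisons like this one can be sketched on a toy series-hybrid problem: a gridded SOC state, a discretized engine-power control, and an assumed convex fuel-rate model. All numbers below are invented for illustration and have nothing to do with the paper's vehicle:

```python
import numpy as np

# Toy backward DP for a series hybrid: state = battery SOC, control = engine
# power; the battery covers whatever the engine does not supply.
demand = [20.0, 35.0, 15.0, 30.0]            # power demand per step [kW], invented
soc_grid = np.linspace(0.3, 0.8, 51)         # admissible SOC window
p_eng = np.linspace(0.0, 40.0, 41)           # engine power candidates [kW]
cap_kj = 3600.0                              # battery capacity [kJ]
dt = 1.0                                     # step length [s]

def fuel_rate(p):
    # assumed convex fuel model [g/s]
    return 0.02 * p + 0.0005 * p * p

# terminal cost pulls the final SOC toward a charge-sustaining target
J = 1000.0 * (soc_grid - 0.55) ** 2
policy = []
for k in reversed(range(len(demand))):
    J_k = np.full_like(soc_grid, np.inf)
    act = np.zeros_like(soc_grid)
    for i, soc in enumerate(soc_grid):
        p_batt = demand[k] - p_eng                       # battery covers the rest
        soc_next = soc - p_batt * dt / cap_kj
        feasible = (soc_next >= soc_grid[0]) & (soc_next <= soc_grid[-1])
        cost = fuel_rate(p_eng) * dt + np.interp(soc_next, soc_grid, J)
        cost[~feasible] = np.inf                          # SOC limits enforced
        j_best = np.argmin(cost)
        J_k[i] = cost[j_best]
        act[i] = p_eng[j_best]
    J = J_k
    policy.append(act)
policy.reverse()   # policy[k][i]: optimal engine power at step k, SOC grid point i
```

The grid-times-control sweep at every step is exactly the cost that makes DP slow, and is what PMP's algebraic shifting rule in the abstract avoids.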
9. Dynamic Programming-based Energy Management Strategy Optimization for Hybrid Electric Commercial Vehicle.
- Author
- Zou Yuan, Hou Shijie, Han Erliang, Liu Lin, and Chen Rui
- Subjects
- HYBRID electric vehicles, DYNAMIC programming, AUTOMOTIVE fuel consumption standards, MATHEMATICAL optimization, ENERGY management
- Abstract
A feed-forward Simulink model of a hybrid electric commercial vehicle is established, considering the basic dynamic features of its components. Dynamic programming (DP) theory is adopted to find the optimal control strategies for vehicle fuel economy, including the strategy for power split between engine and motor and the shift control strategy. Improved control rules are extracted from the DP optimal control results and implemented as the control strategy. The results of simulation over a heavy-duty vehicle cycle derived from natural driving statistics demonstrate the potential of the improved control strategy based on the DP optimization technique to significantly enhance fuel economy. [ABSTRACT FROM AUTHOR]
- Published
- 2012
10. Energy management for a hybrid electric vehicle based on prioritized deep reinforcement learning framework.
- Author
- Du, Guodong, Zou, Yuan, Zhang, Xudong, Guo, Lingxiong, and Guo, Ningyuan
- Subjects
- REINFORCEMENT learning, DEEP learning, HYBRID electric vehicles, ENERGY management, DYNAMIC programming, ENERGY consumption
- Abstract
A novel deep reinforcement learning (DRL) control framework for the energy management strategy of a series hybrid electric tracked vehicle (SHETV) is proposed in this paper. Firstly, the powertrain model of the vehicle is established, and the formulation of the energy management problem is given. Then, an efficient DRL framework based on the double deep Q-learning (DDQL) algorithm is built to solve the optimal control problem; it also incorporates a modified prioritized experience replay (MPER) and an adaptive optimization method for the network weights called AMSGrad. The proposed framework is verified on a realistic driving cycle and compared with the dynamic programming (DP) method and a previous deep reinforcement learning method. Simulation results show that the newly constructed framework achieves higher training efficiency and lower energy consumption than the previous method, and its fuel economy is shown to approach the global optimum. Besides, its adaptability and robustness are validated on different driving schedules.
• The powertrain model of the series hybrid electric tracked vehicle is established.
• A new control framework based on the double deep Q-learning algorithm is constructed.
• Modified prioritized experience replay is designed to improve training efficiency.
• An adaptive optimization method is applied to update the weights of the neural network.
• The proposed deep reinforcement learning framework realizes better performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
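The core of double deep Q-learning referenced above is decoupling action selection (online network) from action evaluation (target network) when forming the bootstrap target, which reduces Q-value overestimation. A network-free numpy sketch with made-up Q tables:

```python
import numpy as np

def double_q_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Double DQN bootstrap target: the online network selects the greedy
    next action, the target network evaluates it."""
    a_star = np.argmax(q_online_next, axis=1)               # selection: online net
    q_eval = q_target_next[np.arange(len(a_star)), a_star]  # evaluation: target net
    return rewards + gamma * (1.0 - dones) * q_eval

# made-up next-state Q-values for a batch of two transitions, two actions
q_online = np.array([[1.0, 2.0], [0.5, 0.1]])
q_target = np.array([[1.5, 0.8], [0.9, 0.3]])
targets = double_q_targets(q_online, q_target,
                           rewards=np.array([1.0, 0.0]),
                           dones=np.array([0.0, 1.0]))      # second one is terminal
```

Note that in the first row the online net picks action 1 but the target net's value 0.8 (not its own maximum 1.5) enters the target, which is exactly the decoupling.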
11. An improved soft actor-critic-based energy management strategy of heavy-duty hybrid electric vehicles with dual-engine system.
- Author
- Zhang, Dongfang, Sun, Wei, Zou, Yuan, Zhang, Xudong, and Zhang, Yiwei
- Subjects
- DEEP reinforcement learning, REINFORCEMENT learning, EXPECTATION-maximization algorithms, ENERGY consumption, ENERGY management, HYBRID electric vehicles
- Abstract
While deep reinforcement learning (DRL)-based energy management strategies (EMSs) have shown potential for optimizing energy utilization in recent years, challenges such as convergence difficulties and suboptimal control still persist. In this research, a novel DRL algorithm, an improved soft actor-critic (ISAC) algorithm, is applied to the EMS of a heavy-duty hybrid electric vehicle (HDHEV) with dual auxiliary power units (APUs), in which prioritized experience replay (PER), emphasizing recent experience (ERE), and Munchausen reinforcement learning (MRL) methods are adopted to improve the convergence performance and the HDHEV fuel economy. Simultaneously, a bus voltage calculation model suitable for dual APUs is proposed and validated using real-world data to ensure the precision of the HDHEV model. Results indicate that the proposed EMS reduces HDHEV fuel consumption by 4.59% and 2.50% compared to deep deterministic policy gradient (DDPG) and twin delayed deep deterministic policy gradient (TD3)-based EMSs respectively, narrowing the gap to the dynamic programming-based EMS to 7.94%. The proposed EMS exhibits superior training performance, with a 91.28% increase in convergence speed compared to other DRL-based EMSs. Furthermore, ablation experiments validate the effectiveness of each method adopted in the proposed EMS for the SAC algorithm, further demonstrating its superiority.
• A bus voltage calculation model for a heavy vehicle with dual engines is proposed.
• A novel DRL algorithm, the ISAC, is newly applied to the EMS of an HDHEV.
• PER, ERE, and MRL methods are applied to enhance the performance of the proposed EMS.
• The proposed EMS performs better in convergence, optimality, and adaptability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
12. Deep reinforcement learning based energy management for a hybrid electric vehicle.
- Author
- Du, Guodong, Zou, Yuan, Zhang, Xudong, Liu, Teng, Wu, Jinlong, and He, Dingbo
- Subjects
- REINFORCEMENT learning, HYBRID electric vehicles, DEEP learning, ENERGY management, MACHINE learning, ENERGY consumption
- Abstract
This research proposes a reinforcement learning-based algorithm and a deep reinforcement learning-based algorithm for energy management of a series hybrid electric tracked vehicle (SHETV). Firstly, the powertrain model of the SHETV is constructed, and the corresponding energy management formulation is established. Subsequently, a new variant of the reinforcement learning (RL) method Dyna, namely Dyna-H, is developed by combining a heuristic planning step with the Dyna agent, and is applied to energy management control for the SHETV. Its rapidity and optimality are validated by comparison with DP and the conventional Dyna method. Facing the "curse of dimensionality" in the reinforcement learning method, a deep reinforcement learning algorithm, deep Q-learning (DQL), is designed for energy management control, which uses a new optimization method (AMSGrad) to update the weights of the neural network. The proposed deep reinforcement learning control system is then trained and verified on a realistic high-precision driving condition, and is compared with the benchmark DP method and the traditional DQL method. Results show that the proposed deep reinforcement learning method realizes faster training speed and lower fuel consumption than the traditional DQL policy, and its fuel economy closely approximates the global optimum. Furthermore, the adaptability of the proposed method is confirmed on another driving schedule.
• The powertrain model of the series hybrid electric tracked vehicle is constructed.
• A novel reinforcement learning-based energy management strategy is proposed.
• The rapidity and optimality of the reinforcement learning method are validated.
• A new optimization method is applied to update the weights of the neural network.
• The proposed deep reinforcement learning method realizes better performance. [ABSTRACT FROM AUTHOR]
- Published
- 2020
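AMSGrad, the weight-update rule named in the abstract above, is Adam with a monotonically non-decreasing second-moment estimate. A plain-numpy sketch (bias correction omitted for brevity; the quadratic smoke test is an illustration, not the paper's network):

```python
import numpy as np

def amsgrad_step(w, grad, state, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One AMSGrad update: identical to Adam except the second-moment
    estimate keeps a running maximum, so the effective step never grows."""
    m, v, v_hat = state
    m = b1 * m + (1 - b1) * grad            # first-moment EMA
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment EMA
    v_hat = np.maximum(v_hat, v)            # the key difference from Adam
    w = w - lr * m / (np.sqrt(v_hat) + eps)
    return w, (m, v, v_hat)

# smoke test: minimize f(w) = ||w||^2, whose gradient is 2w
w = np.array([1.0, -2.0])
state = (np.zeros(2), np.zeros(2), np.zeros(2))
for _ in range(1000):
    w, state = amsgrad_step(w, 2.0 * w, state)
```

Freezing the denominator at its historical maximum is what gives AMSGrad its convergence guarantee in cases where Adam's step size can grow again.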
13. Intelligent energy management for hybrid electric tracked vehicles using online reinforcement learning.
- Author
- Du, Guodong, Zou, Yuan, Zhang, Xudong, Kong, Zehui, Wu, Jinlong, and He, Dingbo
- Subjects
- HYBRID electric vehicles, REINFORCEMENT learning, ONLINE education, ENERGY consumption, SHARING economy, ENERGY shortages, DYNAMIC programming
- Abstract
• The overall model of the hybrid electric tracked vehicle is built in detail.
• A fast Q-learning algorithm is applied to derive the energy management strategy.
• An efficient online energy management strategy update framework is constructed.
• A hardware-in-the-loop simulation experiment is conducted to validate the performance.
• The strategy improves fuel economy and has potential for real-time applications.
The energy management approach of hybrid electric vehicles has the potential to mitigate the increasing energy crisis and environmental pollution by reducing fuel consumption. This paper proposes an online-updating energy management strategy to improve the fuel economy of hybrid electric tracked vehicles. As the basis of the research, the overall model of the hybrid electric tracked vehicle is built in detail and validated through a field experiment. To accelerate the convergence rate of the control policy calculation, a novel reinforcement learning algorithm called fast Q-learning is applied, which improves the computational speed by 16%. Cloud computation is used to bear the main computational burden, realizing the online-updating energy management strategy on a hardware-in-the-loop simulation bench. A Kullback-Leibler divergence rate that triggers the update of the control strategy is designed and realized on the hardware-in-the-loop simulation bench. The simulation results show that the fuel consumption of the fast Q-learning-based online-updating strategy is 4.6% lower than that of the stationary strategy, and is close to that of the dynamic programming strategy. Besides, the computation time of the proposed method is only 1.35 s, which is much shorter than that of the dynamic programming-based method. The results indicate that the proposed energy management strategy can greatly improve fuel economy and has the potential to be applied in real-time applications. Moreover, the adaptability of the online energy management strategy is validated on three realistic driving schedules. [ABSTRACT FROM AUTHOR]
- Published
- 2019
14. Co-optimization strategy of unmanned hybrid electric tracked vehicle combining eco-driving and simultaneous energy management.
- Author
- Guo, Lingxiong, Zhang, Xudong, Zou, Yuan, Han, Lijin, Du, Guodong, Guo, Ningyuan, and Xiang, Changle
- Subjects
- ENERGY management, HYBRID electric vehicles, PREDICTIVE control systems, TRACKING control systems, DYNAMIC programming
- Abstract
Combining eco-driving optimization and simultaneous energy management, this paper proposes an efficient co-optimization strategy for unmanned hybrid electric tracked vehicles (HETVs) based on a hierarchical control framework to achieve accurate path tracking and optimal energy management simultaneously. Constrained by a pre-known reference path, a deep Q-learning (DQL) algorithm with the AMSGrad optimizer is designed in the upper layer to optimize the velocity of both side tracks and find the best trade-off between energy economy and accurate path tracking. Based on the optimal velocity profile obtained from the upper layer, an explicit model predictive control method is designed in the lower layer to distribute the power between the engine-generator and battery in real time, achieving approximately optimal fuel economy. Simulation results verify that the designed DQL method requires only 0.67 s on average for real-time velocity planning, which is markedly lower than the dynamic programming algorithm. In addition, the proposed method exhibits higher rapidity and optimality for velocity planning than the traditional DQL algorithm. Compared with model predictive control, dynamic programming, and a process without velocity planning, the proposed co-optimization strategy achieves good fuel economy, accurate path tracking, and high computational efficiency.
• A co-optimization strategy is designed for path tracking and energy management.
• A deep Q-learning algorithm with the AMSGrad optimizer is developed to plan velocity.
• An EMPC controller is designed to distribute the power in real time.
• The proposed strategy is compared with different methods and analyzed in depth. [ABSTRACT FROM AUTHOR]
- Published
- 2022
15. Adaptive unscented Kalman filtering for state of charge estimation of a lithium-ion battery for electric vehicles
- Author
- Sun, Fengchun, Hu, Xiaosong, Zou, Yuan, and Li, Siguang
- Subjects
- KALMAN filtering, ADAPTIVE filters, ELECTRIC vehicles, LITHIUM-ion batteries, ESTIMATION theory, HYBRID electric vehicles, ALGORITHMS, LOCOMOTIVES, MATCHING theory, PERFORMANCE
- Abstract
An accurate battery state-of-charge estimate is of great significance for battery electric vehicles and hybrid electric vehicles. This paper presents an adaptive unscented Kalman filtering method to estimate the state of charge of a lithium-ion battery for battery electric vehicles. The adaptive adjustment of the noise covariances in the state-of-charge estimation process is implemented through covariance matching in the unscented Kalman filter context. Experimental results indicate that the adaptive unscented Kalman filter-based algorithm performs well in estimating the battery state of charge. A comparison with the adaptive extended Kalman filter, extended Kalman filter, and unscented Kalman filter-based algorithms shows that the proposed state-of-charge estimation method has better accuracy. [Copyright Elsevier]
- Published
- 2011
16. Cost-optimal energy management strategy for plug-in hybrid electric vehicles with variable horizon speed prediction and adaptive state-of-charge reference.
- Author
- Guo, Lingxiong, Zhang, Xudong, Zou, Yuan, Guo, Ningyuan, Li, Jianwei, and Du, Guodong
- Subjects
- HYBRID electric vehicles, PLUG-in hybrid electric vehicles, ENERGY management, K-means clustering, SPEED, ENERGY consumption, TRAFFIC safety
- Abstract
In this paper, an energy management strategy (EMS) based on model predictive control (MPC) is proposed to minimize fuel cost, electricity usage, and battery ageing. To fulfil the MPC framework, a novel variable-horizon speed predictor based on a K-means algorithm and a radial basis function neural network, which contains various predictive submodels, is designed to cope with different input drive states. In addition, a Q-learning algorithm is applied to construct an adaptive multimode state-of-charge (SOC) reference generator, which takes advantage of the velocity forecast for each prediction horizon. The algorithm fully considers the model nonlinearities and physical constraints while requiring less computational effort. Based on the SOC reference and predicted velocity, the MPC problem is formulated to coordinate fuel consumption and battery degradation. Moreover, considering the influence of real-time traffic information, a traffic model that simulates actual road conditions is constructed in VISSIM to evaluate the performance of the proposed EMS. The simulation results show that the proposed speed predictor effectively improves the predictive accuracy, and the multimode control laws based on drive-condition classification present superior adaptability in SOC reference generation compared to a single-mode law. With these two improvements, the proposed EMS achieves desirable performance in fuel economy and battery lifetime extension.
• A cost-optimal problem is built for coordinating fuel economy and battery lifetime.
• A novel speed predictor with a variable horizon is constructed.
• A Q-learning algorithm is applied as the adaptive multimode SOC reference generator.
• A traffic model is constructed in VISSIM to evaluate the performance of the proposed EMS.
• The influences of the SOC reference and predictive speed accuracy are discussed in depth. [ABSTRACT FROM AUTHOR]
- Published
- 2021
17. Real-time predictive energy management of plug-in hybrid electric vehicles for coordination of fuel economy and battery degradation.
- Author
- Guo, Ningyuan, Zhang, Xudong, Zou, Yuan, Guo, Lingxiong, and Du, Guodong
- Subjects
- PLUG-in hybrid electric vehicles, HYBRID electric vehicles, ENERGY management, RADIAL basis functions, ENERGY consumption, ALGORITHMS, QUADRATIC programming
- Abstract
This paper proposes a real-time predictive energy management strategy (PEMS) for plug-in hybrid electric vehicles for coordinated control of fuel economy and battery lifetime, comprising a velocity predictor, a state-of-charge (SOC) reference generator, and online optimization. In the velocity predictor, the radial basis function neural network algorithm is adopted to accurately estimate the future drive velocity. Based on the predicted velocity and the current driven distance, the SOC reference over the prediction horizon can be determined online by the reference generator. To coordinate fuel consumption and battery degradation, a model predictive control problem is formulated to minimize cost, including the fuel consumption cost, the electricity cost of battery charging/discharging, and the equivalent cost of battery degradation. To mitigate the heavy calculation burden in optimization, the continuation/generalized minimal residual (C/GMRES) algorithm is employed to find the expected engine power command in real time. Since the original C/GMRES algorithm cannot directly handle inequality constraints, the external penalty method is employed to meet the physical inequality limits of the powertrain. Numerical simulations are carried out and yield the desirable performance of the proposed PEMS in fuel consumption minimization and battery aging restriction. More importantly, the proposed C/GMRES algorithm shows great solving quality and real-time applicability in the PEMS compared with sequential quadratic programming and genetic algorithms.
• Battery degradation is considered in predictive energy management.
• The RBF-NN algorithm is adopted to accurately predict future velocity.
• The SOC reference over the prediction horizon is obtained by a modified method.
• The C/GMRES algorithm is applied to achieve real-time predictive energy management.
• The performance and computational efficiency of the proposed strategy are verified. [ABSTRACT FROM AUTHOR]
- Published
- 2021
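The external penalty method this abstract mentions replaces an inequality-constrained problem with a sequence of unconstrained ones whose quadratic penalty weight grows. A one-variable sketch, where plain gradient descent stands in for the C/GMRES solver and the cost and constraint are invented:

```python
def solve_penalized(u0, rho_schedule, steps=200):
    """Minimize (u - 3)^2 subject to u <= 1 via an external quadratic
    penalty: the constraint enters the cost as rho * max(0, u - 1)^2 and an
    unconstrained solver is applied while rho grows."""
    u = u0
    for rho in rho_schedule:
        lr = 0.4 / (1.0 + rho)          # step size kept inside the stability limit
        for _ in range(steps):
            grad = 2.0 * (u - 3.0)      # gradient of the original cost
            if u > 1.0:                 # penalty gradient, active when infeasible
                grad += 2.0 * rho * (u - 1.0)
            u -= lr * grad
    return u

# the penalized optimum (3 + rho) / (1 + rho) approaches the constrained
# optimum u* = 1 from outside as rho grows
u_star = solve_penalized(u0=0.0, rho_schedule=[1.0, 10.0, 100.0])
```

Because the penalized cost stays smooth, the same trick lets a solver built for equality-constrained problems, like C/GMRES, respect inequality limits approximately.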
18. Bi-level Energy Management of Plug-in Hybrid Electric Vehicles for Fuel Economy and Battery Lifetime with Intelligent State-of-charge Reference.
- Author
- Zhang, Xudong, Guo, Lingxiong, Guo, Ningyuan, Zou, Yuan, and Du, Guodong
- Subjects
- HYBRID electric vehicles, PLUG-in hybrid electric vehicles, ENERGY management, ENERGY consumption, RADIAL basis functions, FUEL, ELECTRIC batteries
- Abstract
This paper proposes a bi-level energy management strategy for plug-in hybrid electric vehicles with an intelligent state-of-charge (SOC) reference for satisfactory fuel economy and battery lifetime. In the upper layer, a Q-learning algorithm is employed to generate the SOC reference before departure, taking the model nonlinearities and physical constraints into account while requiring less computational effort. In the lower layer, with the short-term drive velocity accurately predicted by a radial basis function neural network, a model predictive control (MPC) controller is designed to distribute the system power flows online and track the SOC reference for superior fuel economy and battery lifetime extension. Moreover, the terminal SOC constraints are converted into soft constraints by relaxation operations to guarantee solving feasibility and smooth tracking. Finally, simulations are carried out to validate the effectiveness of the proposed strategy, which shows considerable improvements in fuel economy and battery lifetime extension compared with the charge-depleting and charge-sustaining method. More importantly, the great robustness of the proposed approach is verified in cases of inaccurately pre-known drive information, indicating favorable adaptability for practical application.
• A Q-learning algorithm is employed to rapidly generate the SOC reference.
• MPC is used in energy management for good fuel economy and battery lifetime.
• The terminal SOC constraint is relaxed to maintain solving feasibility and smooth tracking.
• The good robustness of the proposed method is verified for inaccurate drive information. [ABSTRACT FROM AUTHOR]
- Published
- 2021
Discovery Service for Jio Institute Digital Library