Search

Your search for Author "Meyn, Sean P." returned 365 results.

Search Results

1. Quickest Change Detection Using Mismatched CUSUM

2. Markovian Foundations for Quasi-Stochastic Approximation in Two Timescales: Extended Version

3. Design of Interacting Particle Systems for Fast and Efficient Reinforcement Learning

4. Revisiting Step-Size Assumptions in Stochastic Approximation

5. Dual Ensemble Kalman Filter for Stochastic Optimal Control

6. Reinforcement Learning Design for Quickest Change Detection

7. Convex Q Learning in a Stochastic Environment: Extended Version

8. The Curse of Memory in Stochastic Approximation: Extended Version

9. Stability of Q-Learning Through Design and Optimism

10. High-Impedance Non-Linear Fault Detection via Eigenvalue Analysis with low PMU Sampling Rates

11. High Impedance Fault Detection Through Quasi-Static State Estimation: A Parameter Error Modeling Approach

12. Uncertainty Error Modeling for Non-Linear State Estimation With Unsynchronized SCADA and μPMU Measurements

13. Sufficient Exploration for Convex Q-learning

14. Model-Free Characterizations of the Hamilton-Jacobi-Bellman Equation and Convex Q-Learning in Continuous Time

15. Feature Projection for Optimal Transport

16. Markovian Foundations for Quasi-Stochastic Approximation with Applications to Extremum Seeking Control

17. Extremely Fast Convergence Rates for Extremum Seeking Control with Polyak-Ruppert Averaging

18. The ODE Method for Asymptotic Statistics in Stochastic Approximation and Reinforcement Learning

19. Controlled Interacting Particle Algorithms for Simulation-based Reinforcement Learning

20. The Conditional Poincaré Inequality for Filter Stability

21. Reliable Power Grid: Long Overdue Alternatives to Surge Pricing

22. Accelerating Optimization and Reinforcement Learning with Quasi-Stochastic Approximation

23. Convex Q-Learning, Part 1: Deterministic Optimal Control

24. Lecture Notes on Control System Theory and Design

25. Variance Reduction in Simulation of Multiclass Processing Networks

26. Kullback-Leibler-Quadratic Optimal Control

27. Q-learning with Uniformly Bounded Variance: Large Discounting is Not a Barrier to Fast Learning

28. Explicit Mean-Square Error Bounds for Monte-Carlo and Linear Stochastic Approximation

29. Zap Q-Learning With Nonlinear Function Approximation

30. Model-Free Primal-Dual Methods for Network Optimization with Application to Real-Time Optimal Power Flow

31. Aggregate capacity of TCLs with cycling constraints

32. State Space Collapse in Resource Allocation for Demand Dispatch

33. Zap Q-Learning for Optimal Stopping Time Problems

34. What is the Lagrangian for Nonlinear Filtering?

35. Optimal Rate of Convergence for Quasi-Stochastic Approximation

36. Diffusion map-based algorithm for Gain function approximation in the Feedback Particle Filter

37. Differential Temporal Difference Learning

38. An Approach to Duality in Nonlinear Filtering

39. Optimal Matrix Momentum Stochastic Approximation and Applications to Q-learning

40. Diffusion approximations and control variates for MCMC

41. Action-Constrained Markov Decision Processes With Kullback-Leibler Cost

43. Geometric Ergodicity in a Weighted Sobolev Space

44. Fastest Convergence for Q-learning

45. Error Estimates for the Kernel Gain Function Approximation in the Feedback Particle Filter

47. Demand Dispatch with Heterogeneous Intelligent Loads

48. Estimation and Control of Quality of Service in Demand Dispatch

49. Ordinary Differential Equation Methods For Markov Decision Processes and Application to Kullback-Leibler Control Cost

50. Ergodic Theory for Controlled Markov Chains with Stationary Inputs
