15,259 results for "markov process"
Search Results
2. Stochastic heat equations with Markovian switching
- Author
-
Fan, Qianzhu, Zhang, Tusheng, and Loeffen, Ronnie
- Subjects
519.2, stability, strong Feller property, Feller property, Itô lemma, Markov chain, Markov process - Abstract
This thesis consists of three parts. In the first part, we recall some background theory that will be used throughout the thesis. In the second part, we study the existence and uniqueness of solutions of stochastic heat equations with Markovian switching. In the third part, we investigate properties of the solutions, such as the Feller property, the strong Feller property, and stability.
- Published
- 2017
3. Performance Modelling and Quantitative Analysis of Vehicular Edge Computing With Bursty Task Arrivals
- Author
-
Jia Hu, Geyong Min, Wang Miao, Xu Zhang, and Zhiwei Zhao
- Subjects
Schedule, Mobile edge computing, Markov chain, Computer Networks and Communications, Computer science, Distributed computing, Markov process, Data modeling, Burstiness, Resource allocation, Electrical and Electronic Engineering, Software, Edge computing - Abstract
The quantitative performance analysis plays a critical role in assessing the capability of Vehicular Edge Computing (VEC) systems to meet the requirements of vehicular applications. However, developing accurate analytical models for VEC systems is extremely challenging due to the unique features of intelligent vehicular applications. Specifically, recent work revealed that the tasks generated by intelligent vehicular applications exhibit a high degree of burstiness, rendering existing models designed under the assumption of a non-bursty Poisson process unsuitable for VEC systems. To fill this gap, we developed an original analytical model to investigate the performance of VEC systems with bursty task arrivals. To facilitate vehicle cooperation, a new priority-based resource allocation scheme is employed to schedule the tasks of vehicular applications, which are modelled by a Markov Modulated Poisson Process (MMPP). Next, a multi-state Markov chain is established to investigate the impact of the load sharing strategy on the performance of VEC systems. Then, the end-to-end transmission latency is derived based on the proposed model. Comprehensive experiments are conducted to validate the accuracy of the analytical model under various system configurations. Furthermore, the developed model is used as a cost-effective tool to investigate the performance bottleneck of VEC systems.
- Published
- 2023
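The Markov Modulated Poisson Process used for bursty task arrivals in the entry above is easy to illustrate in a few lines. The following is a minimal simulation sketch (my own illustration, not code from the paper), with all rates chosen arbitrarily:

```python
import random

def simulate_mmpp(q01, q10, lam, t_end, rng):
    """Simulate a 2-state Markov Modulated Poisson Process (MMPP).

    The modulating chain switches 0 -> 1 at rate q01 and 1 -> 0 at rate
    q10; while in state s, arrivals occur at Poisson rate lam[s].
    Returns the list of arrival times in [0, t_end].
    """
    t, state, arrivals = 0.0, 0, []
    while True:
        total = (q01 if state == 0 else q10) + lam[state]
        t += rng.expovariate(total)        # time to the next event
        if t >= t_end:
            return arrivals
        if rng.random() < lam[state] / total:
            arrivals.append(t)             # the event was an arrival
        else:
            state = 1 - state              # the event was a mode switch

# A "bursty" stream: quiet stretches (rate 1) interrupted by bursts (rate 20).
rng = random.Random(7)
arrivals = simulate_mmpp(0.5, 1.0, [1.0, 20.0], 100.0, rng)
```

This is the standard competing-clocks construction: the switching and arrival exponential clocks are merged into one total rate, and each event is classified as an arrival or a mode switch by a Bernoulli draw.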
4. Finite-Time Estimation for Markovian BAM Neural Networks With Asymmetrical Mode-Dependent Delays and Inconstant Measurements
- Author
-
Renquan Lu, Yong Xu, Zhuo Wang, Tingwen Huang, and Chang Liu
- Subjects
Artificial neural network, Markov chain, Computer Networks and Communications, Computer science, Estimator, Markov process, Interval (mathematics), Stability (probability), Computer Science Applications, Artificial Intelligence, Robustness (computer science), Control theory, Bidirectional associative memory, Software - Abstract
The issue of finite-time state estimation is studied for discrete-time Markovian bidirectional associative memory neural networks. The asymmetrical system mode-dependent (SMD) time-varying delays (TVDs) are considered, which means that the interval of TVDs is SMD. Because the sensors are inevitably influenced by the measurement environments and indirectly influenced by the system mode, a Markov chain, whose transition probability matrix is SMD, is used to describe the inconstant measurement. A nonfragile estimator is designed to improve the robustness of the estimator. The stochastically finite-time bounded stability is guaranteed under certain conditions. Finally, an example is used to clarify the effectiveness of the state estimation.
- Published
- 2023
5. Output-Feedback Control for Fuzzy Singularly Perturbed Systems: A Nonhomogeneous Stochastic Communication Protocol Approach
- Author
-
Jun Cheng, Huaicheng Yan, Ju H. Park, and Guangdeng Zong
- Subjects
Singular perturbation, Observer (quantum physics), Markov chain, Computer science, Markov process, Fuzzy logic, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Control theory, Asynchronous communication, Electrical and Electronic Engineering, Hidden Markov model, Software, Information Systems - Abstract
In this study, the output-feedback control (OFC) strategy design problem is explored for a class of Takagi-Sugeno fuzzy singularly perturbed systems. To alleviate the communication load and improve the reliability of signal transmission, a novel stochastic communication protocol (SCP) is proposed. In particular, the SCP is scheduled based on a nonhomogeneous Markov chain, where the time-varying transition probability matrix is characterized by a polytope-structure-based set. Different from the existing homogeneous Markov SCP, a nonhomogeneous Markov SCP depicts the data transmission in a more realistic manner. To detect the actual network mode, a hidden Markov process observer is introduced. By virtue of the hidden Markov model with partly unidentified detection probabilities, an asynchronous OFC law is formulated. By establishing a novel Lyapunov-Krasovskii functional with a singular perturbation parameter and a nonhomogeneous Markov process, a sufficient condition is derived to guarantee the stochastic stability of the resulting system, and the solution for the asynchronous controller is obtained. Finally, the validity of the proposed methodology is demonstrated through a practical example.
- Published
- 2023
6. Methodology for the formation of a special course on applications of Markov processes.
- Author
-
Sukhorukova, I. V. and Chistyakova, N. A.
- Subjects
MARKOV processes, TEACHING methods, CURRICULUM, CULTURAL competence, COLLEGE students - Abstract
Copyright of Revista Espacios is the property of Talleres de Impresos Oma and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2020
7. Discrete-Time Markov Chains
- Author
-
Alfa, Attahiru S.
- Published
- 2016
- Full Text
- View/download PDF
8. Event-Triggered Resilient L∞ Control for Markov Jump Systems Subject to Denial-of-Service Jamming Attacks
- Author
-
Xiaobin Gao, Pengyu Zeng, Xiaohua Liu, and Feiqi Deng
- Subjects
Lyapunov function, Markov chain, Computer science, Markov process, Jamming, Denial-of-service attack, Upper and lower bounds, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Control theory, Jump, Electrical and Electronic Engineering, Software, Information Systems - Abstract
In this article, the event-triggered resilient L∞ control problem is investigated for Markov jump systems in the presence of denial-of-service (DoS) jamming attacks. First, a fixed lower bound-based event-triggering scheme (ETS) is presented in order to avoid the Zeno problem caused by exogenous disturbance. Second, when DoS jamming attacks are involved, the transmitted data are blocked and the old control input is kept by using the zero-order holder (ZOH). On the basis of this process, the effect of DoS attacks on the ETS is further discussed. Next, by utilizing the state-feedback controller and the multiple Lyapunov functions method, some criteria incorporating the restriction of DoS jamming attacks are proposed to guarantee the L∞ control performance of the event-triggered closed-loop Markov jump system. In particular, bounded transition rates rather than exact ones are taken into account, which is appropriate for practical environments in which the transition rates of the Markov process are difficult to measure accurately. Correspondingly, some criteria are proposed to obtain state-feedback gains and event-triggering parameters simultaneously. Finally, we provide two examples to show the effectiveness of the proposed method.
- Published
- 2022
9. Service Availability Analysis in a Virtualized System: A Markov Regenerative Model Approach
- Author
-
Zhenjiang Zhang, Jing Bai, Gaorong Ning, Xiaolin Chang, and Kishor S. Trivedi
- Subjects
Service (systems architecture), Steady state (electronics), Markov chain, Computer Networks and Communications, Computer science, Distributed computing, Markov process, Cloud computing, Petri net, Computer Science Applications, Software, Hardware and Architecture, Attractor, Information Systems - Published
- 2022
10. Precise Approximations of the Probability Distribution of a Markov Process in Time: An Application to Probabilistic Invariance
- Author
-
Esmaeil Zadeh Soudjani, Sadegh, Abate, Alessandro, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Ábrahám, Erika, editor, and Havelund, Klaus, editor
- Published
- 2014
- Full Text
- View/download PDF
11. Solution of Optimal Stopping Problem Based on a Modification of Payoff Function
- Author
-
Presman, Ernst, Kabanov, Yuri, editor, Rutkowski, Marek, editor, and Zariphopoulou, Thaleia, editor
- Published
- 2014
- Full Text
- View/download PDF
12. On the Lyapunov Foster Criterion and Poincaré Inequality for Reversible Markov Chains
- Author
-
Prashant G. Mehta and Amirhossein Taghvaei
- Subjects
Lyapunov function, Markov chain, Poincaré inequality, Markov process, Function (mathematics), Computer Science Applications, Connection (mathematics), Control and Systems Engineering, Elementary proof, Applied mathematics, Spectral gap, Electrical and Electronic Engineering, Mathematics - Abstract
This paper presents an elementary proof of stochastic stability of a discrete-time reversible Markov chain starting from a Foster-Lyapunov drift condition. Besides its relative simplicity, there are two salient features of the proof: (i) it relies entirely on functional-analytic, non-probabilistic arguments; and (ii) it makes explicit the connection between a Foster-Lyapunov function and the Poincaré inequality. The proof is used to derive an explicit bound for the spectral gap. An extension to the non-reversible case is also presented.
- Published
- 2022
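The spectral-gap quantity bounded in the paper above can be computed numerically for a small reversible chain. A minimal sketch (my own illustration, not the paper's method), using the standard similarity transform that turns a reversible transition matrix into a symmetric one:

```python
import numpy as np

def spectral_gap(P, pi):
    """Spectral gap 1 - lambda_2 of a reversible transition matrix P with
    stationary distribution pi. Reversibility makes S = D^(1/2) P D^(-1/2)
    symmetric with the same (real) eigenvalues as P."""
    d = np.sqrt(pi)
    S = d[:, None] * P / d[None, :]
    eigs = np.sort(np.linalg.eigvalsh(S))[::-1]  # descending; eigs[0] == 1
    return 1.0 - eigs[1]

# Lazy random walk on a 3-state path graph: a reversible birth-death chain.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = np.array([0.25, 0.50, 0.25])
gap = spectral_gap(P, pi)   # eigenvalues of P are 1, 0.5, 0, so the gap is 0.5
```

A larger gap means faster mixing; the paper's contribution is an explicit lower bound on this quantity derived from a drift condition rather than from the eigendecomposition.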
13. Using data envelopment analysis in markovian decision making
- Author
-
Emmanuel Thanassoulis, Alexandra K. Papadopoulou, and Andreas C. Georgiou
- Subjects
Information Systems and Management, General Computer Science, Operations research, Markov chain, Computer science, Judgement, Markov process, Context (language use), Time horizon, Management Science and Operations Research, Industrial and Manufacturing Engineering, Modeling and Simulation, Goal programming, Data envelopment analysis, State (computer science) - Abstract
This paper introduces a modelling framework which combines Data Envelopment Analysis and Markov Chains into an integrated decision aid. Markov Chains are typically used in contexts where a system (e.g. staff profile in a large organisation) is at the start of the planning horizon in a given state, and the aim is to transform the system to a new state by the end of the horizon. The planning horizon can involve several steps and the system transits to a new state after each step. The transition probabilities from one step to the next are influenced by both organisational and external (non-organisational) factors. We develop our generic methodology using as a vehicle the homogeneous Markov manpower planning system. The paper recognizes a gap in existing Markovian manpower planning methods to handle stochasticity and optimization in a more tractable manner and puts forward an approach to harness the power of DEA to fill this gap. In this context, the Decision Maker (DM) can specify potential anticipated future outcomes (e.g. personnel flows) and then use DEA to identify additional feasible courses of action through convexity. These feasible strategies can be evaluated according to the DM's judgement over potential future states of nature and then employed to guide the organisation in making interventions that would affect transition probabilities to improve the probability of attaining the ultimate state desired for the system. The paper includes a numerical illustration of the suggested approach, including data from a manpower planning model previously addressed using classical Markov modelling.
- Published
- 2022
14. Predicting the Evolution of Controlled Systems Modeled by Finite Markov Processes
- Author
-
Shuo Li and Matteo Pozzi
- Subjects
Mathematical optimization, Computational complexity theory, Markov chain, Computer science, Time evolution, Markov process, Observable, Markov decision process, State (computer science), Electrical and Electronic Engineering, Safety, Risk, Reliability and Quality, Function (engineering) - Abstract
The operation and maintenance of infrastructure components and systems can be modeled as a Markov process, partially or fully observable. Information about the current condition can be summarized by the “inner” state of a finite state controller. When a control policy is assigned, the stochastic evolution of the system is completely described by a Markov transition function. This article applies finite state Markov chain analyses to identify relevant features of the time evolution of a controlled system. We focus on assessing if some critical conditions are reachable (or if some actions will ever be taken), in identifying the probability of these critical events occurring within a time period, their expected time of occurrence, their long-term frequency, and the probability that some events occur before others. We present analytical methods based on linear algebra to address these questions, discuss their computational complexity and the structure of the solution. The analyses can be performed after a policy is selected for a Markov decision process (MDP) or a partially observable MDP. Their outcomes depend on the selected policy and examining these outcomes can provide the decision makers with deeper understanding of the consequences of following that policy, and may also suggest revising it.
- Published
- 2022
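The reachability questions discussed in the abstract above (whether, and with what probability, a critical condition is reached) reduce to linear systems in the transition matrix. A minimal first-step-analysis sketch, illustrating the general idea rather than the authors' code:

```python
import numpy as np

def hit_probability(P, target, avoid):
    """Probability, from each state, of reaching `target` before any state
    in `avoid`, via first-step analysis: h(target) = 1, h(avoid) = 0, and
    h(i) = sum_j P[i, j] * h(j) on the remaining states."""
    n = len(P)
    free = [i for i in range(n) if i != target and i not in avoid]
    A = np.eye(len(free)) - P[np.ix_(free, free)]  # (I - P_restricted)
    b = P[free, target]                            # one-step jumps to target
    h = np.zeros(n)
    h[target] = 1.0
    h[free] = np.linalg.solve(A, b)
    return h

# Gambler's ruin on {0, 1, 2, 3} with fair coin flips; 0 and 3 absorbing.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
h = hit_probability(P, target=3, avoid={0})  # h[1] = 1/3, h[2] = 2/3
```

Expected hitting times and long-run frequencies follow from similar linear systems in the same matrix, which is why the paper can treat all of these questions with linear algebra once a policy fixes the transition function.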
15. Event-based asynchronous dissipative filtering for fuzzy nonhomogeneous Markov switching systems with variable packet dropouts
- Author
-
Jinde Cao, Jun Cheng, Ju H. Park, and Xia Zhou
- Subjects
industrial biotechnology, Markov chain, Logic, Network packet, Markov process, engineering and technology, Fuzzy logic, Inverted pendulum, industrial engineering & automation, Artificial Intelligence, Control theory, Asynchronous communication, electrical engineering, electronic engineering, information engineering, Filtering problem, artificial intelligence & image processing, Hidden Markov model, Mathematics - Abstract
This work concerns the event-based asynchronous filtering problem for T-S fuzzy nonhomogeneous Markov switching systems with variable packet dropouts. The discrete-time nonhomogeneous Markov process is adopted to depict mode switching among subsystems, in which the time-varying transition probabilities are characterized by a polytope structure. The variable packet dropouts describe randomly occurring packet dropouts whose arrival rate remains variable and uncertain. To save the limited network bandwidth, an event-triggered strategy and a quantization scheme are presented. By establishing a fuzzy-rule-dependent Lyapunov functional and applying a hidden Markov model policy, sufficient criteria are obtained and asynchronous filters are designed by solving linear matrix inequalities (LMIs). Finally, the applicability of the proposed filtering strategy is verified by an inverted pendulum model.
- Published
- 2022
16. Analog Solutions of Discrete Markov Chains via Memristor Crossbars
- Author
-
Fernando Corinto, Anil Korkmaz, Gianluca Zoppo, R. Stanley Williams, Samuel Palermo, and Francesco Marrone
- Subjects
Stationary distribution, Markov chain, Computer science, Markov process, Memristor, Accuracy, analog computation, crossbars, discrete Markov chains, memristor, pagerank, precision, Multiplication, Electrical and Electronic Engineering, Crossbar switch, Algorithm, Eigenvalues and eigenvectors, Electronic circuit - Abstract
Problems involving discrete Markov Chains are solved mathematically using matrix methods. Recently, several research groups have demonstrated that matrix-vector multiplication can be performed in analog in a single time step with an electronic circuit that incorporates an open-loop memristor crossbar that is effectively a resistive random-access memory. Ielmini and co-workers have taken this a step further by demonstrating that linear algebraic systems can also be solved in a single time step using similar hardware with feedback. These two approaches can both be applied to Markov chains, in the first case using matrix-vector multiplication to compute successive updates to a discrete Markov process and in the second directly calculating the stationary distribution by solving a constrained eigenvector problem. We present circuit models for open-loop and feedback configurations, and perform detailed analyses that include memristor programming errors, thermal noise sources and element nonidealities in realistic circuit simulations to determine both the precision and accuracy of the analog solutions. We provide mathematical tools to formally describe the trade-offs in the circuit model between power consumption and the magnitude of errors. We compare the two approaches by analyzing Markov chains that lead to two different types of matrices, essentially random and ill-conditioned, and observe that ill-conditioned matrices suffer from significantly larger errors. We compare our analog results to those from digital computations and find a significant power efficiency advantage for the crossbar approach for similar precision results.
- Published
- 2021
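The two crossbar configurations described above have direct digital analogues: iterated matrix-vector products for successive distribution updates, and a constrained linear solve for the stationary distribution. A floating-point sketch of both (for comparison only; the paper's contribution is the analog hardware, not these routines):

```python
import numpy as np

def stationary_power(P, iters=200):
    """Open-loop analogue: repeatedly apply pi <- pi @ P."""
    pi = np.full(len(P), 1.0 / len(P))
    for _ in range(iters):
        pi = pi @ P
    return pi

def stationary_direct(P):
    """Feedback analogue: solve pi (P - I) = 0 together with the
    normalization sum(pi) = 1 as one overdetermined but consistent
    linear system (the constrained eigenvector problem)."""
    n = len(P)
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])          # stationary distribution is (5/6, 1/6)
pi_iter = stationary_power(P)
pi_dir = stationary_direct(P)
```

In exact arithmetic both agree; the paper's observation that ill-conditioned matrices amplify analog errors mirrors the usual conditioning behavior of the direct solve here.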
17. State Estimation for Networked Systems With Markov Driven Transmission and Buffer Constraint
- Author
-
Lixin Yang, Renquan Lu, Hongxia Rao, Zhuo Wang, and Yong Xu
- Subjects
Mathematical optimization, Markov chain, Computer science, Stability (learning theory), Estimator, Markov process, Computer Science Applications, Exponential function, Human-Computer Interaction, Constraint (information theory), Transmission (telecommunications), Control and Systems Engineering, Electrical and Electronic Engineering, Software, Communication channel - Abstract
This article investigates the problem of state estimation for discrete-time systems with a Markov driven transmission strategy. A buffer with limited capacity is used to store the latest measurements, which are transmitted simultaneously once the system accesses the shared channel. A buffer-dependent smart estimator is then proposed to process the received measurements. A convex sufficient condition concerning the exponential mean-square stability and the l2-l∞ performance is established for the estimation error system to design the estimator gains. Finally, two examples are presented to illustrate the effectiveness of the derived result under different conditions.
- Published
- 2021
18. Stochastic Approximation With Iterate-Dependent Markov Noise Under Verifiable Conditions in Compact State Space With the Stability of Iterates Not Ensured
- Author
-
Prasenjit Karmakar and Shalabh Bhatnagar
- Subjects
Markov chain, Computer science, Markov process, Stochastic approximation, Computer Science Applications, Convergence of random variables, Control and Systems Engineering, Iterated function, Attractor, State space, Applied mathematics, Electrical and Electronic Engineering - Abstract
This paper compiles several aspects of the dynamics of stochastic approximation algorithms with Markov iterate-dependent noise when the iterates are not known to be stable beforehand. We achieve this by extending the lock-in probability framework (i.e., the probability of convergence of the iterates to a specific attractor of the limiting o.d.e., given that the iterates are in its domain of attraction after a sufficiently large number of iterations, say n0) to such recursions. We use these results to prove almost sure convergence of the iterates to the specified attractor when the iterates satisfy an asymptotic tightness condition. The novelty of our approach is that, if the state space of the Markov process is compact, we prove almost sure convergence under much weaker assumptions than the work by Andrieu et al., which solves the general state space case under much more restrictive assumptions. We also extend our single timescale results to the case of two separate recursions over two different timescales. This, in turn, is shown to be useful in analyzing the tracking ability of general adaptive algorithms.
- Published
- 2021
19. On Periodic Scheduling and Control for Networked Systems Under Random Data Loss
- Author
-
Daniel E. Quevedo and Atreyee Kundu
- Subjects
Control and Optimization, Markov chain, Computer Networks and Communications, Computer science, Scheduling (production processes), Markov process, Dynamic priority scheduling, Stability conditions, Control and Systems Engineering, Control theory, Control system, Signal Processing, Numerical stability - Abstract
This paper deals with Networked Control Systems (NCSs) whose shared networks have limited communication capacity and are prone to data losses. Our contributions are twofold. First, we present necessary and sufficient conditions on the plant and controller dynamics and the network parameters such that there exist purely time-dependent periodic scheduling sequences under which stability of each plant is preserved for all admissible data loss signals. Second, given a period for the scheduling sequence, we identify sufficient conditions on the plant dynamics and the network parameters such that the plants admit static state-feedback controllers that are favourable for the stability conditions presented. The main apparatus for our analysis is a switched systems representation of the individual plants in an NCS whose switching signals are time-inhomogeneous Markov chains. Our stability conditions involve the existence of sets of symmetric and positive definite matrices that satisfy certain (in)equalities. We identify the existence of stabilizing periodic scheduling sequences and their corresponding favourable state-feedback controllers under ideal communication between the plants and their controllers as special cases of our results. A numerical experiment is presented to demonstrate the proposed techniques.
- Published
- 2021
20. Solution of the Optimal Stopping Problem for One-Dimensional Diffusion Based on a Modification of the Payoff Function
- Author
-
Presman, Ernst, Shiryaev, Albert N., editor, Varadhan, S. R. S., editor, and Presman, Ernst L., editor
- Published
- 2013
- Full Text
- View/download PDF
21. Maximizing Entropy over Markov Processes
- Author
-
Biondi, Fabrizio, Legay, Axel, Nielsen, Bo Friis, Wąsowski, Andrzej, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Dediu, Adrian-Horia, editor, Martín-Vide, Carlos, editor, and Truthe, Bianca, editor
- Published
- 2013
- Full Text
- View/download PDF
22. Hierarchical Dynamics
- Author
-
Nilsson Jacobi, Martin and Meyers, Robert A., editor
- Published
- 2012
- Full Text
- View/download PDF
23. Control of Inventories with Markov Demand
- Author
-
Bensoussan, Alain, Decreusefond, Laurent, editor, and Najim, Jamal, editor
- Published
- 2012
- Full Text
- View/download PDF
24. Probabilistic Methods for Stochastic Reachability
- Author
-
Bujorianu, Luminita Manuela
- Published
- 2012
- Full Text
- View/download PDF
25. Markov Models
- Author
-
Bujorianu, Luminita Manuela
- Published
- 2012
- Full Text
- View/download PDF
26. General drawdown of general tax model in a time-homogeneous Markov framework
- Author
-
Shu Li, Florin Avram, and Bin Li
- Subjects
Statistics and Probability, Markov chain, General Mathematics, Mathematical finance, Probability (math.PR), Markov process, Regret, CUSUM, Lévy process, FOS: Mathematics, Drawdown (economics), Optimal stopping, Statistics, Probability and Uncertainty, Mathematical economics, Mathematics - Probability, Mathematics - Abstract
Drawdown/regret times feature prominently in optimal stopping problems, in statistics (CUSUM procedure), and in mathematical finance (Russian options). Recently it was discovered that a first passage theory with more general drawdown times, which generalize classic ruin times, may be explicitly developed for spectrally negative Lévy processes [9, 20]. In this paper we further examine the general drawdown-related quantities in the (upward skip-free) time-homogeneous Markov process, and then in its (general) tax process by noticing the pathwise connection between general drawdown and the tax process.
- Published
- 2021
27. Modeling and Performance Optimization of Wireless Sensor Network Based on Markov Chain
- Author
-
Yutang Liu and Qin Zhang
- Subjects
Mathematical optimization, Markov chain, Computer science, Node (networking), Particle swarm optimization, Markov process, Load balancing (computing), Path (graph theory), Convergence (routing), Electrical and Electronic Engineering, Instrumentation, Wireless sensor network - Abstract
Wireless sensor networks are usually deployed in areas with relatively harsh natural environments, and collection nodes transmit data to the destination node through multi-hop routes, so effective planning of the transmission path is an important issue. This paper combines the unbiased grey model with the Markov chain model to establish an unbiased grey Markov chain model, and notes that this model has shortcomings in parameter selection. To address them, the particle swarm optimization algorithm is introduced: its mathematical model, calculation principle, and parameters are described, and an implementation flow chart is given. Combining the two yields the particle swarm unbiased grey Markov chain model. A simulation environment and a training environment for this model were designed in the experiments. A node scheduling optimization experiment shows that scheduling based on the particle swarm unbiased grey Markov chain model achieves better coverage and energy consumption balance than the random and shortest-distance methods. In the routing experiment, analysis of each node's Q value demonstrates the convergence of the algorithm, and comparison with other protocols shows that the routing algorithm can effectively extend the network life cycle and achieve load balancing.
- Published
- 2021
28. Automaton-ABC: A statistical method to estimate the probability of spatio-temporal properties for parametric Markov population models
- Author
-
Mahmoud Bentriou, Paul-Henry Cournède, and Paolo Ballarini
- Subjects
General Computer Science, Markov chain, Computer science, Markov process, Probability density function, Parameter space, Theoretical Computer Science, Trajectory, Temporal logic, Approximate Bayesian computation, Algorithm, Parametric statistics - Abstract
We present an adaptation of the Approximate Bayesian Computation method to estimate the satisfaction probability function of a temporal logic property for Markov Population Models. In this paper, we tackle the problem of estimating the satisfaction probability function of a temporal logic property w.r.t. a parametric Markovian model of a Chemical Reaction Network. We want to assess the probability with which the trajectories generated by a parametric Markov Population Model (MPM) satisfy a logical formula over the whole parameter space. In the first step of the work, we formally define a distance between a trajectory of an MPM and a logical property. If the distance is 0, the trajectory satisfies the property; the larger the distance, the further the trajectory is from satisfying it. In the second step, we adapt the Approximate Bayesian Computation method using this distance. This adaptation yields a new algorithm, called automaton-ABC, whose output is a density function that directly leads to the estimation of the desired satisfaction probability function. We apply our methodology to several examples and models, and we compare it to state-of-the-art techniques. We show that the sequential version of our algorithm, relying on ABC-SMC, leads to an efficient exploration of the parameter space with respect to the formula and gives good approximations of the satisfaction probability function at a reduced computational cost.
- Published
- 2021
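The core idea above, a trajectory-to-property distance whose zero set marks satisfaction, can be illustrated with a toy birth-death chain and plain Monte Carlo. The property, model, and names below are invented for illustration; the paper's automaton-ABC is considerably more sophisticated:

```python
import random

def property_distance(traj, threshold):
    """Distance to the (hypothetical) property 'the population never
    exceeds threshold': 0 iff the trajectory satisfies it."""
    return max(0, max(traj) - threshold)

def birth_death_step(x, theta, rng):
    """Toy parametric chain: birth with probability theta, else death."""
    return x + 1 if rng.random() < theta else max(0, x - 1)

def satisfaction_probability(step, x0, theta, threshold, n_traj, rng, horizon=30):
    """Monte Carlo estimate of the probability that a trajectory of the
    parametric chain has distance 0 to the property."""
    hits = 0
    for _ in range(n_traj):
        x, traj = x0, [x0]
        for _ in range(horizon):
            x = step(x, theta, rng)
            traj.append(x)
        hits += property_distance(traj, threshold) == 0
    return hits / n_traj
```

Evaluating this estimate on a grid of theta values traces out a satisfaction probability function over the parameter space; automaton-ABC instead feeds the distance into an ABC acceptance kernel to obtain a smooth estimate at reduced cost.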
29. An Enhanced Spectrum Reservation Framework for Heterogeneous Users in CR-Enabled IoT Networks
- Author
-
Abd Ullah Khan, Jamel Nebhen, Ming Zeng, Wali Ullah Khan, Octavia A. Dobre, Muhammad Tanveer, and Xingwang Li
- Subjects
Markov chain, Computer science, Quality of service, Reservation, Markov process, Blocking (statistics), Control and Systems Engineering, Key (cryptography), Resource management, Electrical and Electronic Engineering, Computer network, Communication channel - Abstract
Fulfilling the diverse communication requirements of heterogeneous users is envisioned to be one of the key challenges in cognitive radio-enabled IoT networks. To this end, we propose a novel scheme focused on a multi-tier, prioritized, and heterogeneous regime of users and reservation-based resource allocation. The proposed scheme exploits the secondary users' (SUs') heterogeneity based on their priorities to improve their capacities and blocking probabilities. Besides, the scheme systematically handles SU dropping to bring fairness to the service provisioning for SUs. Furthermore, the scheme allows primary users (PUs) to follow an organized approach when accessing channels to mitigate the number of handoffs. Additionally, we propose a dynamic channel reservation algorithm for PUs to enhance spectrum utilization. We leverage the continuous-time Markov chain for system modeling and consider channel failures to make the analysis more realistic. Results confirm the effectiveness of our scheme compared to the state-of-the-art.
- Published
- 2021
30. A Rapid PN Code Acquisition Method for Low Spreading Factor Satellite Communication Systems
- Author
-
Liu Bingkun, Lin Zhiyuan, Chunxiao Jiang, Zuyao Ni, Zhen Huang, and Linling Kuang
- Subjects
Markov chain ,Computer science ,Message passing ,Markov process ,Belief propagation ,Chip ,Computer Science Applications ,symbols.namesake ,Pseudorandom noise ,Modeling and Simulation ,Communications satellite ,symbols ,Electrical and Electronic Engineering ,Algorithm ,Factor graph - Abstract
A novel rapid code acquisition scheme relying on message passing algorithms (MPA) is proposed for low spreading factor satellite communication systems, where the acquisition of pseudonoise (PN) codes is converted into a sequence estimation problem. Due to the presence of data transitions in low spreading factor systems, conventional correlation-based acquisition methods show poor acquisition performance. To overcome the impact of data transitions, this letter proposes a joint data chip and PN sequence estimation technique with the aid of a factor graph and the belief propagation (BP) algorithm. Meanwhile, in view of the short-term invariance of data chips, this letter models data chips as Markov chains to improve the prior knowledge. Simulation results demonstrate that, compared with serial search, the proposed algorithm reduces the mean acquisition time (MAT) by about three orders of magnitude and improves the acquisition performance by 2 dB at a 99% detection probability.
- Published
- 2021
31. Stochastic Computing Max & Min Architectures Using Markov Chains: Design, Analysis, and Implementation
- Author
-
Nikos Temenos and Paul P. Sotiriadis
- Subjects
Very-large-scale integration ,Stochastic computing ,Markov chain ,Computer science ,Markov process ,Image processing ,Parallel computing ,symbols.namesake ,Hardware and Architecture ,Median filter ,symbols ,Electrical and Electronic Engineering ,Accumulator (computing) ,MATLAB ,computer ,Software ,computer.programming_language - Abstract
Max & min architectures for stochastic computing (SC) are introduced. Their key characteristic is the utilization of an accumulator that stores the signed difference between the two inputs, without randomizing sources. This property results in fast-converging and highly accurate computations using short sequence lengths, improving on the latency–accuracy tradeoff of existing SC max–min architectures. The operation of the proposed architectures is modeled using Markov chains, enabling in-depth analysis, the derivation of their statistical properties, and guidelines for selecting the register's size to achieve overall design optimization. The computational accuracy and the hardware requirements of the proposed architectures are compared to those of existing ones in the SC literature, using MATLAB and Synopsys tools. The efficacy of the proposed architectures is demonstrated by realizing a $3 \times 3$ median filter and using it in an image processing application.
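A behavioural sketch of the accumulator idea (one plausible reading of such a design, with register width and all hardware detail abstracted away): keep a signed count of the difference between the two input bitstreams and forward the bit of whichever stream is currently ahead, so that the output rate converges to the maximum of the two input probabilities.

```python
import random

def sc_max(px, py, n=20000, seed=0):
    """Accumulator-based stochastic max sketch: emit the bit of the stream
    whose running ones-count is ahead; output rate approaches max(px, py)."""
    rng = random.Random(seed)
    c = 0          # signed difference accumulator (X ones minus Y ones so far)
    ones = 0
    for _ in range(n):
        x = rng.random() < px
        y = rng.random() < py
        ones += x if c >= 0 else y   # forward the leading stream's bit
        c += int(x) - int(y)
    return ones / n
```

Swapping the selection (emit the trailing stream's bit instead) yields the corresponding min behaviour, since the accumulator then routes through whichever stream has the lower rate.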
- Published
- 2021
32. Markovian descriptors based stochastic analysis of large-scale climate indices
- Author
-
Asif Iqbal and Tanveer Ahmed Siddiqi
- Subjects
Environmental Engineering ,Markov chain ,Stochastic process ,Interdecadal Pacific Oscillation ,Stochastic matrix ,Multivariate ENSO index ,Markov process ,symbols.namesake ,symbols ,Environmental Chemistry ,Entropy (information theory) ,Statistical physics ,Safety, Risk, Reliability and Quality ,Randomness ,General Environmental Science ,Water Science and Technology ,Mathematics - Abstract
The investigation of the interrelationships among different oceanic and atmospheric circulation patterns is crucial for future climate projections in the current century. This paper presents the transition matrix approach of the stochastic Markov chain process to investigate the state/event-based relationship between the new index of the Interdecadal Pacific Oscillation (IPO), named the IPO Tripole Index (TPI), and different sea surface temperature anomaly (SSTA) based El Nino-Southern Oscillation (ENSO) indices, such as Nino 1.2, Nino 3, Nino 3.4, Nino 4 and the Multivariate ENSO Index (MEI), over a period of 120 years (1900–2019). Several Markovian descriptors, such as state dependency, temporal stationarity, expected number of state visits and entropy, are derived from the estimated transition matrix. These descriptors help establish the validity of the Markov chain method and characterize dynamical properties of a time series such as persistence, randomness and the behaviour of cycles. Through the Markov chain analysis and the derived descriptors, this study finds a similar self-communication (periodic) pattern between the transition states, resemblance in the expected number of visits from one transition state to another, an asymmetric and truncated cyclic nature of the data sequence and the existence of randomness in the transition states. Finally, strong 2-dimensional correlation values endorse the existence of strong relations between the selected index datasets. This analysis approach may help in understanding the role of the IPO and ENSO in modulating future climate variability and in formulating effective predictive models of the climatic state.
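The descriptor pipeline described above can be sketched in a few lines: estimate the one-step transition matrix from a discretized index series, then derive the stationary distribution and the entropy rate (a standard randomness measure). The state coding and the toy sequence below are placeholders, not the paper's indices.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Maximum-likelihood one-step transition matrix from a state sequence.
    Assumes every state occurs at least once as a source."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def stationary(P):
    """Stationary distribution: left eigenvector of P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()

def entropy_rate(P, pi):
    """Entropy rate (bits/step) of the chain, quantifying its randomness."""
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)
    return float(-(pi[:, None] * P * logP).sum())
```

The expected numbers of state visits and the stationarity checks mentioned in the abstract follow from the same estimated matrix (e.g., via powers of P or fundamental-matrix calculations).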
- Published
- 2021
33. Hidden Markov Model Based Fault Detection for Networked Singularly Perturbed Systems
- Author
-
Jianqi An, Tizhuang Han, Min Wu, and Xiongbo Wan
- Subjects
0209 industrial biotechnology ,Singular perturbation ,Markov chain ,Computer science ,Network packet ,Mode (statistics) ,Markov process ,02 engineering and technology ,Filter (signal processing) ,Fault detection and isolation ,Computer Science Applications ,Human-Computer Interaction ,symbols.namesake ,020901 industrial engineering & automation ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,Applied mathematics ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Hidden Markov model ,Software - Abstract
Based on a hidden Markov model (HMM), the issue of fault detection (FD) is investigated for singularly perturbed systems whose measurements are transmitted over a bandwidth-limited communication network. A homogeneous Markov chain is adopted to model packet dropouts and time delays simultaneously, with mode transition probabilities assumed to be partially unknown. The discrepancies between the Markov modes and their observed counterparts are captured by a hidden Markov process. An HMM-based FD filter (FDF) is designed such that stochastic stability and a prescribed $H_{\infty }$ performance are ensured for the resulting filtering error dynamics of FD. A new Lyapunov–Krasovskii functional is constructed that depends on the Markov mode and the singular perturbation parameter (SPP). With the aid of up-to-date techniques for handling time delays, a sufficient condition based on linear matrix inequalities (LMIs) is derived which provides a design scheme for such an FDF. The FDF parameters are given and the SPP's admissible bounds are evaluated when the LMIs have feasible solutions. The performance of the designed FDF is demonstrated by two examples.
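The mode mismatch motivating the HMM formulation can be illustrated with a toy simulation (a hypothetical two-mode chain and observation matrix, unrelated to the paper's system): the plant mode evolves as a Markov chain, while the filter only sees an observation drawn from emission probabilities conditioned on the true mode.

```python
import random

def simulate_hidden_modes(Pi, O, n, seed=0):
    """Simulate a plant mode following the transition matrix Pi while the
    filter sees a noisy mode observation drawn from the emission matrix O
    (rows of Pi and O sum to 1). Returns (true mode, observed mode) pairs."""
    rng = random.Random(seed)
    mode, pairs = 0, []
    for _ in range(n):
        obs = rng.choices(range(len(O[mode])), weights=O[mode])[0]
        pairs.append((mode, obs))
        mode = rng.choices(range(len(Pi[mode])), weights=Pi[mode])[0]
    return pairs
```

With an identity observation matrix the synchronous (mode-dependent) case is recovered; any off-diagonal mass in O models the asynchronous filter that the HMM-based design has to tolerate.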
- Published
- 2021
34. Asynchronous Mean Stabilization of Positive Jump Systems With Piecewise-Homogeneous Markov Chain
- Author
-
Liqing Wang, Zheng-Guang Wu, and Ying Shen
- Subjects
symbols.namesake ,Markov chain ,Bernoulli distribution ,Piecewise ,Jump ,symbols ,Applied mathematics ,Probability distribution ,Markov process ,Conditional probability distribution ,Electrical and Electronic Engineering ,Hidden Markov model ,Mathematics - Abstract
In this brief, the mean stabilization of positive Markov jump systems (PMJSs) with a piecewise-homogeneous Markov chain, involving two switching signals simultaneously, is considered. Different from existing results, asynchronous state feedback controllers are considered to achieve the control objective. Based on the hidden Markov model, the closed-loop systems are modeled as hidden Markov jump systems (HMJSs) with a piecewise-homogeneous Markov chain. The definitions of positivity and mean stabilization are introduced for the considered system. Some sufficient conditions are derived to ensure that the HMJSs with a piecewise-homogeneous Markov chain are positive and mean stable when the Markov chain follows a conditional probability distribution and a Bernoulli distribution, respectively. Simulation results are described to demonstrate the effectiveness of our approach.
- Published
- 2021
35. Event-triggered consensus control and fault estimation for time-delayed multi-agent systems with Markov switching topologies
- Author
-
Yangzhou Chen, Jingyuan Zhan, and Shanglin Li
- Subjects
Lyapunov function ,Markov chain ,Computational complexity theory ,Observer (quantum physics) ,Computer science ,Cognitive Neuroscience ,Stability (learning theory) ,Markov process ,Fault (power engineering) ,Computer Science Applications ,symbols.namesake ,Artificial Intelligence ,Control theory ,symbols - Abstract
This paper focuses on the consensus control and fault estimation problems for a class of time-delayed multi-agent systems with Markov switching topologies. Two different event-triggered mechanisms are adopted to reduce the burden on the shared network and improve energy efficiency. Under the Markov process, by establishing the consensus control protocol and designing a novel adaptive fault estimation observer, the consensus control and fault estimation problems are transformed into two stochastic stability problems of different forms. Then, according to the switching Lyapunov function method and the free-weighting matrix technique, two delay-dependent stability criteria for consensus control and fault estimation are derived, respectively. However, the two criteria contain nonlinear coupling terms, are not standard linear matrix inequalities (LMIs), and cannot be solved directly with the LMI toolbox. To eliminate the coupling terms, two improved path-following algorithms are presented. These algorithms depend on the initial conditions, so choosing appropriate preset parameters is crucial. The computational complexity increases with the number of iterations, the system size and the matrix dimension, which poses a new challenge for the study of consensus control and fault estimation of multi-agent systems. Based on the algorithms, the switching consensus controller gains and the model gain matrices of fault estimation can be efficiently solved. Finally, a simulation example of tailless fighter airplanes is given to illustrate the practicality and validity of the theoretical results.
- Published
- 2021
36. Locally interacting diffusions as Markov random fields on path space
- Author
-
Ruoyu Wu, Daniel Lacker, and Kavita Ramanan
- Subjects
Statistics and Probability ,Pure mathematics ,Markov random field ,Random field ,Markov chain ,Euclidean space ,Applied Mathematics ,Markov process ,Vertex (geometry) ,Stochastic differential equation ,symbols.namesake ,Modeling and Simulation ,symbols ,Initial value problem ,Mathematics - Abstract
We consider a countable system of interacting (possibly non-Markovian) stochastic differential equations driven by independent Brownian motions and indexed by the vertices of a locally finite graph $G = (V, E)$. The drift of the process at each vertex is influenced by the states of that vertex and its neighbors, and the diffusion coefficient depends on the state of only that vertex. Such processes arise in a variety of applications including statistical physics, neuroscience, engineering and math finance. Under general conditions on the coefficients, we show that if the initial conditions form a second-order Markov random field on $d$-dimensional Euclidean space, then at any positive time, the collection of histories of the processes at different vertices forms a second-order Markov random field on path space. We also establish a bijection between (second-order) Gibbs measures on $(\mathbb{R}^{d})^{V}$ (with finite second moments) and a set of (second-order) Gibbs measures on path space, corresponding respectively to the initial law and the law of the solution to the stochastic differential equation. As a corollary, we establish a Gibbs uniqueness property that shows that for infinite graphs the joint distribution of the paths is completely determined by the initial condition and the specifications, namely the family of conditional distributions on finite vertex sets given the configuration on the complement. Along the way, we establish approximation and projection results for Markov random fields on locally finite graphs that may be of independent interest.
- Published
- 2021
37. $\mathcal {H}_{\infty }$ Synchronization for Fuzzy Markov Jump Chaotic Systems With Piecewise-Constant Transition Probabilities Subject to PDT Switching Rule
- Author
-
Hao Shen, Mengping Xing, Jing Wang, Ju H. Park, and Jianwei Xia
- Subjects
Lyapunov stability ,Sequence ,Markov chain ,Applied Mathematics ,Markov process ,02 engineering and technology ,Fuzzy logic ,symbols.namesake ,Transformation matrix ,Computational Theory and Mathematics ,Exponential stability ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Piecewise ,symbols ,Applied mathematics ,020201 artificial intelligence & image processing ,Mathematics - Abstract
This article investigates the nonfragile $\mathcal {H}_{\infty }$ synchronization issue for a class of discrete-time Takagi–Sugeno (T–S) fuzzy Markov jump systems. With regard to the T–S fuzzy model, a novel processing method based on a matrix transformation is introduced to deal with the double summation inequality containing fuzzy weighting functions, which may be beneficial for obtaining conditions with less conservatism. In view of the fact that uncertainties may occur randomly in the execution of the actuator, a nonfragile controller design scheme is presented by virtue of a Bernoulli distributed white sequence. The main novelty of this article lies in the fact that the transition probabilities of the Markov chain are considered piecewise time-varying, with variation characteristics described by a persistent dwell-time switching regularity. Then, based on Lyapunov stability theory, it is concluded that the resulting synchronization error system is mean-square exponentially stable with a prescribed $\mathcal {H}_{\infty }$ performance in the presence of actuator gain variations. Finally, an illustrative example about Lorenz chaotic systems is provided to show the effectiveness of the established results.
- Published
- 2021
38. A probabilistic model for risk assessment and predicting the health risk of occupational exposure to pesticides in agriculture
- Subjects
Markov chain ,Operations research ,Computer science ,Stochastic process ,Health, Toxicology and Mutagenesis ,Public Health, Environmental and Occupational Health ,Probabilistic logic ,Stochastic matrix ,Markov process ,Statistical model ,General Medicine ,State (functional analysis) ,Pollution ,symbols.namesake ,Operator (computer programming) ,symbols - Abstract
Introduction. The central issue is the influence of a complex of chemical and physical stressors on agricultural machine operators. The occurrence and interaction of harmful factors are probabilistic processes, and Markov processes are a convenient model for describing physical processes with random dynamics. Purpose of the work: to develop a probabilistic risk-assessment model for agricultural workers during pesticide application based on the theory of Markov processes, and to use the developed model to evaluate the probability of occurrence, the degree of severity, and the predicted influence of adverse factors on the operator. Materials and methods. Mechanized pesticide treatment is represented as a system whose states are ranked by the degree of danger to the operator, from non-dangerous to dangerous. Transitions occur under the influence of negative factors and are characterized by transition probabilities pij. Based on the marked graph of system states, a one-step stochastic matrix P = [pij] of transition probabilities was constructed. Standard formulas give the state of the system after k steps for homogeneous and non-homogeneous Markov chains. Results. Based on the theory of Markov chains, the system's behaviour is modelled for single-component imidacloprid-based preparations applied by boom spraying of field crops. The vector of probabilities of possible hazardous conditions for the employee was obtained after each hour of spraying over 10 hours. After 6 hours of work, the probability that the operator remains in a non-dangerous state is about 50%, and the probability of passing into a dangerous one is 24%. The stationary probability distribution shows that the system inevitably reaches a hazardous state if enough steps are taken. Conclusion.
With this model, the operator's health risk assessment system can be supplemented, and the results of years of research can be analyzed, compared and summarized. The calculated probabilities can be used in the development of new hygiene regulations for pesticide use. Contribution: Rakitskii V.N. — responsibility for the integrity of all parts of the manuscript, approval of the final version of the article; Zavolokina N.G. — concept and design of the study, collection and processing of material, writing the text; Bereznyak I.V. — editing, responsibility for the integrity of all parts of the article. All authors are responsible for the integrity of all parts of the manuscript and approved its final version. Conflict of interest. The authors declare no conflict of interest. Acknowledgment. The study had no sponsorship.
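The hour-by-hour computation described in the Results section is the standard Chapman–Kolmogorov product p_k = p_0 P^k for a homogeneous chain. The sketch below uses an illustrative 3-state matrix with made-up numbers, not the matrix estimated in the paper:

```python
import numpy as np

# Hypothetical 3-state operator-risk chain: 0 = non-dangerous,
# 1 = moderately dangerous, 2 = dangerous (illustrative numbers only).
P = np.array([[0.85, 0.10, 0.05],
              [0.20, 0.60, 0.20],
              [0.05, 0.15, 0.80]])

def state_after(p0, P, k):
    """Distribution over states after k hourly steps of a homogeneous chain."""
    return p0 @ np.linalg.matrix_power(P, k)

p0 = np.array([1.0, 0.0, 0.0])     # the shift starts in the non-dangerous state
p6 = state_after(p0, P, 6)         # risk profile after 6 hours of spraying
```

Raising P to a large power (or solving πP = π directly) gives the stationary distribution whose hazardous-state mass underlies the paper's conclusion that a hazardous state is eventually reached.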
- Published
- 2021
39. Nonscaling Adders and Subtracters for Stochastic Computing Using Markov Chains
- Author
-
Paul P. Sotiriadis and Nikos Temenos
- Subjects
Adder ,Stochastic computing ,Counting process ,Markov chain ,Stochastic process ,Computer science ,Markov process ,Computer Science::Hardware Architecture ,symbols.namesake ,Hardware and Architecture ,Subtractor ,symbols ,Electrical and Electronic Engineering ,Algorithm ,Software ,Block (data storage) - Abstract
This work presents adder and subtracter architectures for stochastic computing (SC). In contrast to standard approaches, the result of their operation is nonscaling, i.e., $X\,\pm \,Y$ , and is achieved via a deterministic operation based on a counting process. These properties result in an improved tradeoff between accuracy and stochastic sequence length, fast convergence, and the potential for cascaded, scale-dependent (e.g., nonlinear) stochastic computations, providing flexibility at the design level. The architectures are modeled using Markov chains (MCs), allowing a detailed understanding of their operation supported by analytical derivations. Using modified MC models, the adder's and subtracter's internal register size is analytically calculated, providing guidelines for optimal size selection based on accuracy requirements and stochastic input sequence lengths. Both architectures are simulated in MATLAB and designed in Synopsys to compare their performance to that of existing ones in terms of computational accuracy and hardware resources. Finally, to demonstrate the adder's efficacy, we use it as a building block to realize a $3 \times 3$ convolution kernel and then perform a standard digital image processing task. The results are compared to those achieved using adder architectures from the SC literature.
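A behavioural sketch of the counting idea (simplified; register width and all hardware detail omitted): buffer the incoming ones in a counter and release at most one per clock, so the output rate approaches X + Y (for X + Y <= 1) for the adder and max(0, X - Y) for the subtracter, with no down-scaling by 1/2.

```python
import random

def sc_add(xs, ys):
    """Counting-based nonscaling adder sketch: queue incoming ones and emit
    one per clock while the backlog is positive; rate -> px + py (<= 1)."""
    c, out = 0, []
    for x, y in zip(xs, ys):
        c += x + y
        out.append(1 if c > 0 else 0)
        c -= out[-1]
    return out

def sc_sub(xs, ys):
    """Counting-based nonscaling subtracter sketch: accumulate the signed
    difference and emit while it is positive; rate -> max(0, px - py)."""
    c, out = 0, []
    for x, y in zip(xs, ys):
        c += x - y
        out.append(1 if c > 0 else 0)
        c -= out[-1]
    return out

def bernoulli(p, n, seed):
    """Unipolar stochastic bitstream of length n encoding probability p."""
    rng = random.Random(seed)
    return [int(rng.random() < p) for _ in range(n)]
```

Because the counter drains deterministically, the output is a valid stochastic stream that downstream SC blocks can consume, which is what enables the cascaded, scale-dependent computations mentioned in the abstract.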
- Published
- 2021
40. Equilibrium analysis of cloud user request based on the Markov queue with variable vacation and vacation interruption
- Author
-
Xiuli Xu and Yitong Zhang
- Subjects
Service (business) ,Operations research ,Markov chain ,business.industry ,Computer science ,Unit of time ,InformationSystems_INFORMATIONSYSTEMSAPPLICATIONS ,Markov process ,Cloud computing ,Management Science and Operations Research ,Computer Science Applications ,Theoretical Computer Science ,symbols.namesake ,Idle ,Variable (computer science) ,symbols ,business ,Queue - Abstract
This paper considers the equilibrium balking behavior of customers in a single-server Markovian queue with variable vacation and vacation interruption, where the server switches among four states: vacation, working vacation, idle, and busy. Once the queue becomes empty, the server commences a working vacation and slows down its service rate; this period may, however, be interrupted at any time by vacation interruption. Upon completion of a working vacation, the server takes a vacation in a probability-based manner and stops service if the system is empty. The system stays idle after a vacation until a new customer arrives. The equilibrium balking strategies of customers and the optimal expected social benefit per unit time are compared for each type of queue, revealing the inconsistency between individual optimization and social optimization. Moreover, the sensitivity of the expected social benefit and the equilibrium threshold with respect to several parameters and diverse precision levels is illustrated through numerical examples in a competitive cloud environment.
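The stationary analysis underlying such queueing models reduces to solving πQ = 0, π·1 = 1 for a CTMC generator Q. As a minimal stand-in for the paper's richer vacation/working-vacation chain, the sketch below solves a plain M/M/1/K queue (all parameters illustrative):

```python
import numpy as np

def ctmc_stationary(Q):
    """Solve pi Q = 0 with sum(pi) = 1 for a finite-state CTMC generator Q."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # stack the normalization constraint
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def mm1k_generator(lam, mu, K):
    """Generator of an M/M/1/K queue: arrivals lam, service mu, capacity K."""
    Q = np.zeros((K + 1, K + 1))
    for i in range(K + 1):
        if i < K:
            Q[i, i + 1] = lam
        if i > 0:
            Q[i, i - 1] = mu
        Q[i, i] = -Q[i].sum()
    return Q
```

The vacation model enlarges the state space with server phases (vacation, working vacation, idle, busy) alongside the queue length, but the solve step for the stationary distribution is the same.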
- Published
- 2021
41. Resilient Static Output Feedback Control of Linear Semi-Markov Jump Systems With Incomplete Semi-Markov Kernel
- Author
-
Hao Zhang, Xisheng Zhan, Huaicheng Yan, Yongxiao Tian, and Yan Peng
- Subjects
Lyapunov function ,0209 industrial biotechnology ,Markov kernel ,Markov chain ,Computer science ,Markov process ,02 engineering and technology ,Computer Science Applications ,Linear map ,symbols.namesake ,020901 industrial engineering & automation ,Control and Systems Engineering ,Control theory ,Kernel (statistics) ,symbols ,Piecewise ,Electrical and Electronic Engineering ,Numerical stability - Abstract
This article is concerned with the problem of static output-feedback control for a class of discrete-time linear semi-Markov jump systems (SMJSs). Through a mode-dependent resilient control scheme and an invertible linear transformation, an equivalent closed-loop system is obtained. The embedded Markov chain (EMC) is piecewise homogeneous, which makes the incomplete semi-Markov kernel variable over a finite interval. A novel class of multivariate dependent Lyapunov functions is constructed, which are mode-dependent, elapsed-time-dependent, and variation-dependent. Numerically testable stabilization criteria are established for discrete-time linear SMJSs via the abovementioned Lyapunov function. Under bounded sojourn time, a desired stabilizing controller is designed such that the closed-loop system is mean-square stable. Finally, the theoretical results are applied to a practical RLC circuit system to show the effectiveness and applicability of the proposed control strategy.
- Published
- 2021
42. Understanding Selfish Mining in Imperfect Bitcoin and Ethereum Networks With Extended Forks
- Author
-
Hongyue Kang, Xiaolin Chang, Runkai Yang, Jelena Misic, and Vojislav B. Misic
- Subjects
Markov chain ,Computer Networks and Communications ,Computer science ,TheoryofComputation_GENERAL ,Markov process ,020206 networking & telecommunications ,02 engineering and technology ,Computer security ,computer.software_genre ,Transactions per second ,symbols.namesake ,Quantitative analysis (finance) ,Security metric ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,Revenue ,Imperfect ,Electrical and Electronic Engineering ,computer ,Block (data storage) - Abstract
Selfish mining, as a serious threat to blockchain, has been attracting attention from academia and industry. Stochastic modeling has been explored to quantitatively investigate selfish mining in imperfect blockchain networks. However, prior modeling-based analysis approaches have some of the following issues: (1) they only focus on Bitcoin or Ethereum, or (2) they ignore extended forks and consider only natural forks, or (3) they only compute the mining revenue without assessing the performance and security of the blockchain system under selfish mining. In this paper, we aim to address these issues. We build a Markov chain for a quantitative analysis of selfish mining in imperfect Bitcoin and Ethereum networks with natural and extended forks. Formulas are derived to calculate the mining revenue for the selfish pool (comprising selfish miners) and honest miners, respectively. Moreover, we derive the formulas of performance metrics (namely, transactions per second and stale block ratio) and of the security metric (namely, the probability of double-spending success) of the system. These quantitative results can help understand the impact of selfish mining on imperfect blockchain networks and thus aid its detection.
- Published
- 2021
43. Statistical Properties and Airspace Capacity for Unmanned Aerial Vehicle Networks Subject to Sense-and-Avoid Safety Protocols
- Author
-
Ella M. Atkins, Mushuang Liu, Dapeng Oliver Wu, Frank L. Lewis, and Yan Wan
- Subjects
Mobility model ,Markov chain ,Computer science ,Mechanical Engineering ,Distributed computing ,Separation (aeronautics) ,Markov process ,Collision ,Capacity management ,Computer Science Applications ,Variable (computer science) ,symbols.namesake ,Capacity planning ,Automotive Engineering ,symbols - Abstract
Random mobility models (RMMs) capture the random mobility patterns of mobile agents, and have been widely used as the modeling framework for the evaluation and design of mobile networks. All existing RMMs in the literature assume independent movements of mobile agents, which does not hold for unmanned aircraft systems (UASs). In particular, UASs must maintain a safe separation distance to avoid collision. In this paper, we propose a new modeling framework of random mobility models equipped with physical sense-and-avoid protocols to capture the flexible, variable, and uncertain movement patterns of UASs subject to separation safety constraints. For the random direction (RD) RMM equipped with a commonly used sense-and-avoid (S&A) protocol, named sense-and-stop (S&S), we provide its statistical properties, including the stationary location distribution and the stationary inter-vehicle distance distribution, using Markov analysis. This study sheds light on the impact of S&A protocols on critical UAS networking statistics. In addition, we define collision probabilities and airspace capacity concepts for UASs based on the inter-vehicle distance distribution, and derive their closed-form expressions. This analytical framework mathematically bridges local autonomy with global airspace capacity, and allows the impact analysis of local autonomy configurations for effective UAS airspace capacity management.
- Published
- 2021
44. Synchronization for stochastic coupled networks with Lévy noise via event-triggered control
- Author
-
Mingqing Xiao, Hailing Dong, and Ming Luo
- Subjects
Stochastic Processes ,0209 industrial biotechnology ,Time Factors ,Markov chain ,Stochastic process ,Computer science ,Cognitive Neuroscience ,Markov process ,Topology (electrical circuits) ,02 engineering and technology ,Topology ,Lévy process ,Markov Chains ,symbols.namesake ,020901 industrial engineering & automation ,Artificial Intelligence ,Synchronization (computer science) ,Convergence (routing) ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,020201 artificial intelligence & image processing ,Neural Networks, Computer ,Realization (systems) - Abstract
This paper addresses the almost sure synchronization problem for a new array of stochastic networks associated with delay and Lévy noise via event-triggered control. The coupling structure of the network is governed by a continuous-time homogeneous Markov chain. The nodes in the networks communicate with each other and update their information only at discrete-time instants so that the network workload can be minimized. Under the framework of stochastic processes, including Markov chains and Lévy processes, and the convergence theorem of non-negative semi-martingales, we show that the Markovian coupled networks can achieve almost sure synchronization via the event-triggered control methodology. The results are further extended to the directed topology, where the coupling structure can be asymmetric. Furthermore, we also prove that Zeno behavior can be excluded under our proposed approach, indicating that our framework is practically feasible. Numerical simulations are provided to demonstrate the effectiveness of the obtained theoretical results.
- Published
- 2021
45. Extended Dissipativity-Based Control for Hidden Markov Jump Singularly Perturbed Systems Subject to General Probabilities
- Author
-
Shengyuan Xu, Feng Li, Zhengqiang Zhang, and Hao Shen
- Subjects
Markov chain ,Computer science ,Markov process ,State (functional analysis) ,Expression (mathematics) ,Computer Science Applications ,Human-Computer Interaction ,symbols.namesake ,Control and Systems Engineering ,Control theory ,Jump ,symbols ,Symmetric matrix ,Applied mathematics ,Electrical and Electronic Engineering ,Hidden Markov model ,Software - Abstract
This article deals with the extended dissipativity-based control issue for singularly perturbed systems (SPSs) with Markov jump parameters, in which partial information on the Markov chain is fully considered. A comprehensive hidden Markov model (HMM) is established for this partial information, in which the transition probabilities of the hidden Markov state and the observation probabilities of the observed state are general, that is, uncertain and unknown entries may be encountered simultaneously. By using the HMM with general probabilities, a comprehensive criterion is derived to analyze the extended stochastic dissipativity of hidden Markov jump SPSs under the different partial-information cases on the Markov chain. Based on the derived criterion, an explicit expression for acquiring the desired HMM-based controller is presented. An illustrative example and a vehicle active suspension system finally show the validity of the established theoretical results.
- Published
- 2021
46. Survivability and Disaster Recovery Modeling of Cellular Networks Using Matrix Exponential Distributions
- Author
-
Appie van de Liefvoort, Rahul Arun Paropkari, Cory Beard, and Hasita Kaja
- Subjects
Markov chain ,Computer Networks and Communications ,Computer science ,Reliability (computer networking) ,Mission critical ,Survivability ,Disaster recovery ,Markov process ,Maintenance engineering ,Reliability engineering ,symbols.namesake ,Cellular network ,symbols ,Electrical and Electronic Engineering - Abstract
Cellular network design must incorporate disaster response and repair scenarios. Requirements for high reliability and low latency often fail to incorporate network survivability for mission critical services. This paper defines a practical modeling approach using a Markov chain Matrix Exponential (ME) model. Transient and steady-state representations of system repair models, namely fast and slow (i.e., crew-based) repairs, are analyzed for networks with multiple repair crews. Failures are exponentially modeled as per common practice, but ME distributions describe the more complex recovery processes. The model used in this paper is evaluated for varying numbers of repair crews and base stations, different repair models, and varying squared coefficient of variation values. This model is scalable to larger networks, calculates the restoration and network availability times, includes asymptotic approximations to estimate network availability, and determines the optimal number of repair crews required. This ME model shows how survivable networks can be designed by controlling the number of crews, the times taken for individual stages of the repair process, and the balance between fast and slow repairs.
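In the simplest special case (one repairable unit with exponential failure and repair, i.e., before the repair process is replaced by a matrix exponential distribution), transient availability has a closed form; this is the baseline the ME model generalizes to multi-crew, multi-stage repairs. The rates below are illustrative, not from the paper.

```python
import math

def availability_two_state(lam, mu, t):
    """Closed-form transient availability of a single repairable unit that
    starts 'up': failure rate lam, repair rate mu.
    A(t) = mu/(lam+mu) + (lam/(lam+mu)) * exp(-(lam+mu) t)."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)
```

A(0) = 1 and A(t) decays monotonically to the steady-state availability mu/(lam+mu); the ME-based transient models in the paper reproduce this shape while capturing non-exponential (crew-based) repair stages.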
- Published
- 2021
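The modeling idea in entry 46, exponential failures combined with a less variable, multi-stage repair process, can be sketched with the simplest matrix-exponential repair distribution, an Erlang-2 (squared coefficient of variation 1/2). The sketch below builds the generator of a one-station up/repair chain and solves for its steady-state availability; the rates are hypothetical, not values from the paper.

```python
import numpy as np

lam = 0.01   # hypothetical exponential failure rate (per hour)
mu = 0.5     # hypothetical repair rate: Erlang-2 repair, each stage at rate 2*mu

# CTMC generator: state 0 = up, states 1 and 2 = the two Erlang repair stages.
# Erlang-2 is the simplest matrix-exponential (ME) distribution with squared
# coefficient of variation 1/2, i.e. a less variable repair time than exponential.
G = np.array([
    [-lam,     lam,      0.0   ],
    [ 0.0,    -2 * mu,   2 * mu],
    [ 2 * mu,  0.0,     -2 * mu],
])

# Solve pi @ G = 0 with sum(pi) = 1 for the steady-state distribution.
A = np.vstack([G.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0]
print(f"steady-state availability: {availability:.4f}")
```

With these rates the mean repair time is 1/mu = 2 hours, so the result matches the classical MTTF/(MTTF + MTTR) = 100/102 check. Richer ME distributions and multiple crews enlarge the state space but follow the same pattern.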
47. PCA-LSTM Learning Networks With Markov Chain Models for Online Classification of Cyber-Induced Outages in Power System
- Author
-
Ravi Yadav and Ashok Kumar Pradhan
- Subjects
Markov chain ,Computer Networks and Communications ,business.industry ,Computer science ,Deep learning ,Principal (computer security) ,Probabilistic logic ,Markov process ,computer.software_genre ,Computer Science Applications ,Electric power system ,symbols.namesake ,Control and Systems Engineering ,Principal component analysis ,symbols ,Code injection ,Artificial intelligence ,Data mining ,Electrical and Electronic Engineering ,business ,computer ,Information Systems - Abstract
The existing power system relies on communication infrastructure for fast and reliable transfer of control and protection inputs. This dependency on communication infrastructure for critical applications makes the power system vulnerable to cyber attacks. Cyber-induced outages trigger both randomized and intentional switchings in a power system, producing dynamics similar to those of natural events and making their classification difficult. This article proposes a principal component analysis (PCA)-assisted sequential deep learning approach for online classification of cyber outages and natural events in a power system. It provides objective-driven models for false setting injection (FSI) and false command injection (FCI) attacks. The proposed classification method uses PCA to deduce truncated z-score sequences, or principal test sequences (PTSs), capturing the distinct spatio-temporal progression patterns of natural disturbances and cyber outages. The PTSs in the training sets are shuffled and sampled using a stratified random sampling technique and classified using an ensemble long short-term memory network. The proposed method is tested on simulated FSI and FCI attacks in the standard IEEE 118-bus test system, where it shows improved accuracy and time performance.
- Published
- 2021
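The feature-extraction step in entry 47, truncated principal-component z-score sequences computed from multivariate measurement windows, can be sketched with plain NumPy. The data below is synthetic stand-in noise, and the window sizes and component count are arbitrary choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for bus measurements: T time steps x M buses.
T, M = 200, 20
X = rng.normal(size=(T, M)) + np.outer(np.sin(np.linspace(0, 6, T)),
                                       rng.normal(size=M))

# PCA via SVD of the centered data, truncating to the first k components.
k = 3
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T        # projections onto the top-k principal axes

# Truncated z-score sequences: standardize each retained component over time,
# yielding one candidate "principal test sequence" per component.
pts = (scores - scores.mean(axis=0)) / scores.std(axis=0)
print(pts.shape)              # one z-scored sequence per retained component
```

These standardized sequences are the kind of input a downstream sequence classifier (an LSTM ensemble in the paper) would consume, one sequence per retained component.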
48. Construction of Markov Processes Associated With Quasi-Regular Dirichlet Forms
- Author
-
Albeverio, Sergio, Fan, Ruzong, and Herzberg, Frederik
- Published
- 2011
- Full Text
- View/download PDF
49. Standard Representation Theory
- Author
-
Albeverio, Sergio, Fan, Ruzong, and Herzberg, Frederik
- Published
- 2011
- Full Text
- View/download PDF
50. A Numerical Approach for Evaluating the Time-Dependent Distribution of a Quasi Birth-Death Process
- Author
-
Birgit Sollie, Michel Mandjes, and Mathematics
- Subjects
Statistics and Probability ,Mathematical optimization ,General Mathematics ,Markov process ,symbols.namesake ,SDG 3 - Good Health and Well-being ,Mathematics ,Series (mathematics) ,Markov chain ,Model selection ,Quasi birth-death processes ,Maximum likelihood estimation ,Uniformization (probability theory) ,Quasi-birth–death process ,symbols ,Matrix exponential ,Time-dependent probabilities ,Erlang distribution - Abstract
This paper considers a continuous-time quasi birth-death (qbd) process, which informally can be seen as a birth-death process of which the parameters are modulated by an external continuous-time Markov chain. The aim is to numerically approximate the time-dependent distribution of the resulting bivariate Markov process in an accurate and efficient way. An approach based on the Erlangization principle is proposed and formally justified. Its performance is investigated and compared with two existing approaches: one based on numerical evaluation of the matrix exponential underlying the qbd process, and one based on the uniformization technique. It is shown that in many settings the approach based on Erlangization is faster than the other approaches, while still being highly accurate. In the last part of the paper, we demonstrate the use of the developed technique in the context of the evaluation of the likelihood pertaining to a time series, which can then be optimized over its parameters to obtain the maximum likelihood estimator. More specifically, through a series of examples with simulated and real-life data, we show how it can be deployed in model selection problems that involve the choice between a qbd and its non-modulated counterpart.
- Published
- 2022
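One of the benchmark techniques named in entry 50, uniformization, computes the transient distribution p(t) = p(0) exp(Gt) of a continuous-time Markov chain as a Poisson-weighted sum over a discretized chain. The sketch below applies it to a small birth-death chain with hypothetical rates; it is a generic illustration of the technique, not the paper's QBD algorithm.

```python
import numpy as np

def transient_dist(G, p0, t, tol=1e-12):
    """Approximate p0 @ expm(G * t) for a CTMC generator G via uniformization."""
    Lambda = max(-np.diag(G))            # uniformization rate >= every exit rate
    P = np.eye(len(G)) + G / Lambda      # transition matrix of the uniformized DTMC
    w = np.exp(-Lambda * t)              # Poisson(Lambda * t) weight for n = 0
    vec = p0.astype(float)
    acc, result = w, w * vec
    n = 0
    while acc < 1.0 - tol:               # stop once the Poisson tail is below tol
        n += 1
        vec = vec @ P                    # one more jump of the uniformized chain
        w *= Lambda * t / n              # next Poisson weight, computed recursively
        acc += w
        result += w * vec
    return result

# Hypothetical 3-state birth-death chain: birth rate 1, death rate 2.
G = np.array([[-1.0,  1.0,  0.0],
              [ 2.0, -3.0,  1.0],
              [ 0.0,  2.0, -2.0]])
p0 = np.array([1.0, 0.0, 0.0])
print(transient_dist(G, p0, t=1.0))
```

For large t the result converges to the stationary distribution (here [4/7, 2/7, 1/7] by detailed balance), which gives a convenient sanity check; the Erlangization approach advocated in the paper targets the same quantity by a different route.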
Discovery Service for Jio Institute Digital Library