176 results for "Albert M. K. Cheng"
Search Results
2. ARINC 653-inspired regularity-based resource partitioning on xen
- Author
- Guangli Dai, Albert M. K. Cheng, and Pavan Kumar Paluri
- Subjects
Computer science, Hypervisor, Cloud computing, Virtualization, Scheduling (computing), ARINC 653, Resource (project management), Virtual machine, Server, Operating system - Abstract
A multitude of cloud-native applications make up a significant share of today's World Wide Web, the majority of which implicitly require soft real-time guarantees when hosted on servers at data centers across the globe. With the rapid development of cloud computing and virtualization techniques, many applications have moved onto cloud and edge platforms that require efficient virtualization: a set of applications must be executed on a Virtual Machine (VM), and multiple VMs must be temporally and spatially scheduled on a set of CPUs. Designed to leverage the cloud infrastructure model, many of these cloud-native applications, such as media servers, strongly demand low data latency and high compute-resource availability, both of which must be predictable. However, state-of-the-art VM schedulers fail to satisfy these requirements simultaneously. The scheduling of cloud-native applications on VMs and the scheduling of VMs on physical resources (CPUs) collectively need to be real-time in nature, as specified by the Hierarchical Real-Time Scheduling (HiRTS) framework. Conforming to this framework, the Regularity-based Resource Partitioning (RRP) model has been proposed, which introduces the concept of regularity to provide a near-ideal resource supply to all VMs. In this paper, we make the theoretically superior RRP model ready for prime time by implementing its associated resource partitioning algorithms, for the first time, on the popular open-source x86 hypervisor Xen, i.e., RRP-Xen. This paper also compares and contrasts the real-time performance of RRP-Xen against contemporary Xen schedulers such as Credit and RTDS.
Our contributions include: (1) a novel implementation of the RRP model on the x86-based Xen hypervisor, providing a test bed for future researchers; (2) the first multi-core ARINC 653 VM scheduler prototype on Xen; and (3) extensive experiments and theoretical analysis to determine the real-time performance of RRP-Xen under a stringent workload environment.
- Published
- 2021
3. Work in Progress: Heart Disease Detection Methodology using E-Stethoscope
- Author
- Sayeda Farzana Aktar, Albert M. K. Cheng, and Stefan Andrei
- Subjects
Sound (medical instrument), Heartbeat, Heart disease, Stethoscope, Computer science, Speech recognition, Deep learning, Noise, Heart sounds, Cardiovascular system, Heart murmur, Artificial intelligence - Abstract
Detecting heart disease has been a research interest for centuries. Many approaches are based on heartbeat analysis using a stethoscope, and some of these are digitally analyzed. Conventionally, doctors use an acoustic stethoscope to detect aberrations in the heartbeat and predict atypical conditions of the human heart. One major problem is that the frequency range and intensity of heart sounds are low, and the recording may contain noise. Hence, even a cardiac specialist may have difficulty analyzing the heart sound perfectly. This paper describes a new methodology to detect heart diseases by examining heart sounds in real time. We consider the heart sound as our input data. Our methodology uses a deep learning approach to determine whether a patient has a disease or is healthy. To achieve this, we integrate an electronic stethoscope with a software solution known as a heartbeat audio classifier. Our proposed system should be able to differentiate normal heartbeats from heart murmurs and predict the probable heart problem type in real time. We believe our approach can assist in reducing the cardiac arrest rate.
- Published
- 2021
4. Processor Bounding for an Efficient Non-preemptive Task Scheduling Algorithm
- Author
- Albert M. K. Cheng, Stefan Andrei, and Vlad Radulescu
- Subjects
Mathematical optimization, Job shop scheduling, Computer science, Applied Mathematics, Preemption, Nonpreemptive multitasking, Scheduling (computing), Computational Mathematics, Hybrid scheduling, Computational Theory and Mathematics, Time complexity - Abstract
The scheduling problem, which lies at the core of all approaches to real-time systems, has received considerable attention from the research community. However, while preemptive scheduling has benefited from most of the results to date, the more difficult case of non-preemptive scheduling still lacks similar achievements. This paper approaches non-preemptive scheduling from two different angles. First, the number of processors that would allow a feasible schedule for a given task set is analyzed, yielding both lower and upper bounds that can be determined in polynomial time. Second, a hybrid scheduling algorithm combining two widely known techniques, EDF and LLF, is proposed and tested. A common feature of both objectives is the transition from a single-instance task to a periodic task. The relationships between these two cases are investigated, resulting in a better understanding of periodic behavior.
- Published
- 2019
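The paper's exact processor bounds are not reproduced in the abstract, but the flavor of such polynomial-time bounds can be illustrated with the classic utilization-based necessary condition; this is a hedged sketch for illustration only, not the bound derived in the paper:

```python
import math

def processor_lower_bound(tasks):
    """Utilization-based lower bound on the number of processors needed
    to feasibly schedule periodic tasks given as (C_i, T_i) pairs.
    This is only a necessary condition: no schedule can exist on fewer
    processors than the total utilization rounded up."""
    total_util = sum(c / t for c, t in tasks)
    return max(1, math.ceil(total_util))

# Total utilization 0.4 + 0.5 + 1.0 = 1.9, so at least 2 processors.
bound = processor_lower_bound([(2, 5), (3, 6), (1, 1)])
```

Determining a matching upper bound (enough processors that a feasible non-preemptive schedule is guaranteed) is the harder direction the paper addresses.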
5. Work-In-Progress: Fault Tolerance in a Two-State Checkpointing Regularity-Based System
- Author
- Elena Torre and Albert M. K. Cheng
- Subjects
Schedule, Computer science, Distributed computing, Fault tolerance, Work in process, Fault (power engineering), Partition (database), Task (project management), Scheduling (computing), Resource (project management) - Abstract
Real-time embedded systems with safety-critical functions must often share a limited number of computational resources. Scheduling models within the Hierarchical Real-Time Scheduling (HiRTS) framework allow applications to share resources in an efficient manner. The Regularity-based Resource Partition (RRP) model offers an efficient way to schedule resource partitions, ensuring transparent task scheduling. A model within this framework can manage resource distribution in real-time embedded systems, ensuring fault tolerance through a checkpointing system. However, checkpoint insertions are known to incur high time and energy overheads. This paper proposes the use of a Two-State checkpointing scheme in conjunction with the fault-tolerant hierarchical RRP model. By reducing the number of checkpoints during fault-free states, we can reduce time and energy overheads while still ensuring fault tolerance in worst-case fault scenarios. We outline our plan to construct a model that can offer a low-overhead solution to transient faults in embedded real-time systems, with the intent to build upon this model and eventually test it through simulation.
- Published
- 2020
6. Work-In-Progress: Designing a Server-Side Progressive JPEG Encoder for Real-Time Applications
- Author
- Albert M. K. Cheng and Andrew Louie
- Subjects
Image quality, Computer science, Quality of service, Real-time computing, Systems design, Image segmentation, Image file formats, Work in process, Server-side, Scheduling (computing) - Abstract
Images are an important part of digital communication in this era. Optimizing image file sizes and image-transfer algorithms is vital to meeting quality-of-service levels and expected timing requirements. This paper describes and evaluates a combination of file-reduction and scheduling techniques at the system design level to ensure that image transfer timing constraints are satisfied.
- Published
- 2020
7. Intrusion Detection Using Principal Component Analysis and Support Vector Machines
- Author
- Anukriti Mishra, Albert M. K. Cheng, and Yunpeng Zhang
- Subjects
Computer science, Network security, Dimensionality reduction, Feature extraction, Intrusion detection system, Data modeling, Support vector machine, Binary classification, Principal component analysis, Data mining - Abstract
Intrusion detection is nowadays an integral part of network security. With advances in machine learning, it has become easier than ever to construct a model to detect intrusions. While the accuracy of existing models is quite high, another important aspect of intrusion detection is the computation time of the model. As network speeds increase rapidly, an intrusion detection system (IDS) should be able to keep up with the high influx of network connections, and with them the potential attacks. In this paper, we present a supervised machine learning model to detect intrusions in the network. We create a supervised classification model using principal component analysis (PCA) for dimensionality reduction in combination with support vector machines (SVM) for improved attack detection and faster computation. The model is evaluated on the UNSW-NB15 data set. Our tests show that the proposed model reduces model training and testing time by 33.75% for binary classification and 33.91% for multi-class classification, with overall accuracies of 99.99% and 99.97%, respectively. Classification results are also compared against other models.
- Published
- 2020
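The PCA-then-SVM structure described above can be sketched with scikit-learn. The component count (10), the RBF kernel, and the synthetic data standing in for UNSW-NB15 are all illustrative assumptions, not the parameters used in the paper:

```python
# Sketch of a PCA -> SVM intrusion-detection pipeline (assumed parameters).
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for network-flow features with binary labels
# (attack vs. normal); the real work uses the UNSW-NB15 data set.
X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),   # dimensionality reduction
                      SVC(kernel="rbf"))      # attack/normal classifier
model.fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)
```

Reducing the feature space before the SVM is what yields the training- and testing-time savings the abstract reports.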
8. Work-in-Progress: ARTIC: An Adaptive Real-Time Imprecise Computation Pipeline for Audio Analysis
- Author
- Albert M. K. Cheng and Michael Yantosca
- Subjects
Stream processing, Voice activity detection, Proof of concept, Computer science, Speech recognition, Audio analyzer, Computational linguistics, Work in process, Pipeline (software), Automation - Abstract
One of the more complex issues facing natural language processing (NLP) is how to deal with overlapped speech, i.e., when two or more speakers interfere with or talk over each other, and the more general case of co-channel speech, i.e., when two or more speakers are present in an audio stream regardless of interference. Frequently, one speaker is selected as the primary speaker for the purpose of analysis, with other speakers relegated to the category of interfering speakers. Despite the breadth of research into overlapped speech detection, few endeavors have been made to preserve the speech of so-called interfering speakers. A compelling case can be made for a more comprehensive analysis of co-channel speech in the fields of computational linguistics, accessibility automation, and entertainment, particularly under real-time constraints. Currently available open-source audio libraries, while technically capable of supporting such research, are cumbersome to work with. To this end, this work introduces the Adaptive Real-Time Imprecise Computation (ARTIC) pipeline for audio analysis, a simple but flexible approach to stream processing that tracks computation times and deadlines for the various pipeline stages and lets the user specify automatic precision reductions to avoid projected deadline misses as well as automatic precision increases to combat underutilization. A proof of concept is tested with the intent to build upon this groundwork toward a more comprehensive project with the goal of multi-speaker interference detection and eventually speaker separation.
- Published
- 2019
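The adapt-precision-to-budget policy described above can be sketched in a few lines. The level names, cost estimates, and function shape are hypothetical illustrations of the imprecise-computation idea, not ARTIC's actual API:

```python
def choose_precision(levels, remaining_budget):
    """Pick the highest precision level whose estimated cost still fits
    the remaining time budget; fall back to the cheapest level when
    nothing fits (accepting an imprecise but timely result).
    `levels` is a list of (name, estimated_cost) sorted cheap -> costly.
    A toy sketch of the adaptive policy, not ARTIC's implementation."""
    best = levels[0]
    for name, cost in levels:
        if cost <= remaining_budget:
            best = (name, cost)
    return best[0]

# Hypothetical stage profiles (milliseconds of estimated compute).
levels = [("coarse", 2.0), ("medium", 5.0), ("fine", 9.0)]
```

With a 6 ms budget this picks "medium"; starved to 1 ms it degrades to "coarse" rather than miss the projected deadline.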
9. Work-in-Progress: Reducing Response Time of Static Priority Task Sets by Varying Offsets
- Author
- Albert M. K. Cheng and Aaron Wong
- Subjects
Mathematical optimization, Offset (computer science), Computer science, Iterative method, Response time, Work in process - Abstract
The goal of this research is to reduce the maximum response time of tasks within a task set by introducing offsets. In this research, we propose an iterative method to determine the best offset for a given task. This method iterates through a set of tasks starting with the task having the highest priority to determine which offset gives the shortest maximum response time. Reducing the maximum response time can lead to a more efficient system and avoids over-provisioning of hardware resources.
- Published
- 2019
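The iterative offset search described above can be sketched with a discrete-time simulation: evaluate each candidate offset's worst observed response time and keep the best. The simulation horizon and the single-offset-at-a-time search are simplifying assumptions for illustration, not the paper's exact method:

```python
def max_response_times(tasks, offsets, horizon):
    """Discrete-time simulation of preemptive fixed-priority scheduling.
    tasks: list of (C, T) ordered highest priority first; offsets: per task.
    Returns the worst observed response time per task within the horizon.
    A toy sketch of the evaluation step in the iterative method."""
    n = len(tasks)
    remaining = [0] * n   # unfinished work of each task's current job
    release = [0] * n     # release time of each task's current job
    worst = [0] * n
    for t in range(horizon):
        for i, (c, period) in enumerate(tasks):
            if t >= offsets[i] and (t - offsets[i]) % period == 0:
                remaining[i] = c
                release[i] = t
        for i in range(n):  # run the highest-priority pending job
            if remaining[i] > 0:
                remaining[i] -= 1
                if remaining[i] == 0:
                    worst[i] = max(worst[i], t + 1 - release[i])
                break
    return worst

def best_offset(tasks, index, horizon):
    """Try every offset in [0, T) for one task and keep the one that
    minimizes its worst observed response time (other offsets fixed at 0)."""
    _, period = tasks[index]
    candidates = []
    for off in range(period):
        offsets = [0] * len(tasks)
        offsets[index] = off
        resp = max_response_times(tasks, offsets, horizon)[index]
        candidates.append((resp, off))
    return min(candidates)[1]
```

For two tasks (C=1, T=4) and (C=2, T=4), shifting the lower-priority task's offset away from the shared release instant cuts its worst response time from 3 to 2.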
10. Work-in-Progress: Combining Two Security Methods to Detect Versatile Integrity Attacks in Cyber-Physical Systems
- Author
- Albert M. K. Cheng, Binh Doan, and Victor M. Lopez Rodriguez
- Subjects
Computer science, Cyber-physical system, Intrusion detection system, Work in process, Computer security, Stuxnet, Identification (information), SCADA, Control system, Replay attack - Abstract
The rapid advancement and adoption of Cyber-Physical Systems (CPS) have created the need to enhance security and attack identification in such systems to counter malicious attacks. One example of an attack on a CPS is the Stuxnet worm, which caused significant damage to both the control system and the physical world. Such events have created a need for a robust security system in order to identify and prevent attacks. This paper focuses on the detection of integrity attacks, such as replay attacks, on Linear Time Invariant systems. More specifically, it considers the feasibility of combining a chi-squared failure detector and a moving-target approach to identify malicious sensors.
- Published
- 2019
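The chi-squared failure detector mentioned above tests the normalized energy of the estimation residual against a threshold. A minimal sketch, assuming the residual and its covariance come from an estimator elsewhere, and passing the threshold in directly rather than deriving it from a chosen false-alarm rate:

```python
import numpy as np

def chi_squared_alarm(residual, cov, threshold):
    """Flag a potential integrity attack when the normalized residual
    energy g = r^T * inv(Sigma) * r exceeds a threshold.
    Toy sketch of a chi-squared detector: in practice the threshold is
    taken from the chi-squared distribution's tail for a chosen
    false-alarm probability, and `residual` comes from a state estimator."""
    g = float(residual @ np.linalg.inv(cov) @ residual)
    return g > threshold, g
```

A replay attack that keeps residuals small evades this test alone, which is why the paper pairs it with a moving-target approach.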
11. Work-in-Progress: Leveraging the Selfless Driving Model to Reduce Vehicular Network Congestion
- Author
- Pavan Kumar Paluri, Thomas Carmichael, Risto Miikkulainen, Albert M. K. Cheng, and Guangli Dai
- Subjects
Speedup, Computer science, Distributed computing, Throughput, Work in process, Grid, Network congestion, Path (graph theory), Reinforcement learning, Quality (business) - Abstract
With increasing traffic in urban areas, it is crucial to examine strategies to reduce traffic network congestion. Popular navigation policies currently tend to select the fastest available path for each vehicle. However, a top-down approach to navigation, which considers the traffic network as a whole, offers several speedup possibilities. Minimizing the average travel time of all vehicles in the network with respect to their separate travel deadlines improves traffic throughput. Because such a strategy does not guarantee an optimal navigation route for individual vehicles, we refer to it as a "selfless" policy, and based on this observation we propose the Selfless Traffic Routing (STR) model. To evaluate such policies, we propose a test bed based on Simulation of Urban MObility (SUMO) that measures the performance of a traffic routing policy by the average travel time of all vehicle agents in a given traffic grid. Continuously calculating optimal actions for multiple agents in real time is computationally complex. We therefore introduce a value-based reinforcement learning strategy to achieve the benefits offered by a selfless traffic routing model, and explore how this approach can balance action quality against the real-time performance of each decision.
- Published
- 2019
12. Work-in-Progress: Simplifying CPS Development with Real-Time Virtual Resources
- Author
- Albert M. K. Cheng
- Subjects
Computer science, Cyber-physical system, Application software, Virtualization, Software development process, Software, Resource (project management), Safety assurance, Component-based software engineering, Software engineering - Abstract
The specification, design, prototyping, analysis, implementation, information management, verification, privacy and security guarantees, safety assurance, and maintenance of cyber-physical systems (CPS) are extremely complex, owing to the multitude of operating systems, software components, hardware platforms, communication infrastructures, sensors and actuators, human-machine interfaces, and numerous intertwined feedback loops. This paper describes a project to simplify all these life-cycle phases of developing and maintaining CPS by introducing Real-Time Virtual Resources (RTVR). RTVR forms a virtual layer between application software components and physical resources consisting of hardware platforms, communication infrastructures, and sensors and actuators, so that the software components can be implemented without knowledge of the details of the physical resources and can thus be ported from one physical resource to another with ease. Such open systems make it easy to add and remove software applications and reduce implementation cost compared to systems that physically assign distinct computing resources to run different applications. However, most existing virtualization schemes significantly under-utilize the underlying physical resources in order to maintain the schedulability of real-time tasks as if they were scheduled on dedicated physical resources. These schemes are also not transparent to the software applications: the applications need to be aware of each other, and modification of the software may be necessary. Our proposed RTVR, based on the Regularity-based Resource Partition (RRP) Model, overcomes the above shortcomings, making it a true contender for simplifying all phases of CPS development and maintenance. This paper outlines the first of four project tasks to be performed: the specification, design, prototyping, analysis, implementation, verification, and maintenance of CPS with RTVR.
- Published
- 2019
13. Bounding execution resources for the task scheduling problem in cyber-physical systems
- Author
- Vlad Radulescu, Albert M. K. Cheng, and Stefan Andrei
- Subjects
Schedule, Job shop scheduling, Computer science, Distributed computing, Cyber-physical system, Preemption, Multiprocessor scheduling, Scheduling (computing), Task (project management), Computer Science (miscellaneous), Engineering (miscellaneous) - Abstract
In designing and analyzing real-time systems, the central problem is finding a feasible schedule for a given task set, if one exists. A great deal of research effort has been devoted to the various aspects of task scheduling. While most results have been achieved for preemptive scheduling, the non-preemptive case still has room for improvement, due to its complexity. In addition, the widespread usage of cyber-physical systems (CPS) is putting real-time scheduling in the position of dealing with new challenges, as additional (and sometimes particular) requirements are raised by such systems. This paper, which continues the authors' previous work, introduces a new lower bound on the number of processing units that allows a feasible schedule of a task set for both preemptive and non-preemptive scheduling. As a CPS-specific issue, the necessity of moving the processing units in space, which incurs additional time requirements, is considered. The single-instance case is discussed first, then the results are extended to the periodic case.
- Published
- 2018
14. P-FRP task scheduling with preemption threshold
- Author
- Albert M. K. Cheng and Jian (Denny) Lin
- Subjects
Computer science, Preemption, Lower priority, Reliability engineering, Scheduling (computing), Computer Science (miscellaneous), Engineering (miscellaneous), Functional reactive programming - Abstract
The Abort-and-Restart model is used in Priority-based Functional Reactive Programming (P-FRP), in which higher-priority tasks can preempt lower-priority tasks, and the aborted lower-priority tasks are restarted after the higher-priority tasks finish execution. This paper discusses the potential of improving schedulability in P-FRP systems by using a preemption threshold, which allows a task to disable preemption by tasks whose priority is at or below a specified threshold. A sufficient schedulability test for P-FRP tasks with preemption thresholds is also studied, a critical problem that must be solved in order to realize this potential benefit.
- Published
- 2018
15. Real-Time Multiprocessor Scheduling Algorithm Based on Information Theory Principles
- Author
- Albert M. K. Cheng, A C Carlos Rincon, and Xingliang Zou
- Subjects
Rate-monotonic scheduling, General Computer Science, Computer science, Distributed computing, Dynamic priority scheduling, Parallel computing, Information theory, Fair-share scheduling, Multiprocessor scheduling, Control and Systems Engineering, Two-level scheduling, Overhead (computing), Algorithm design, Algorithm - Abstract
Reducing job migrations is essential for any global multiprocessor scheduling algorithm. In this letter, we present a global, dynamic-priority, laxity-based algorithm that reduces the number of migrations on multiprocessor embedded systems by leveraging information theory principles. A simplification of the proposed scheduling theory is presented to reduce the overhead caused by using information theory. Our results show that the proposed algorithm reduces the number of migrations by up to 41.21% compared with other global, dynamic-priority, laxity-based algorithms. As the per-task-set utilization and the number of processors increase, the simplified information-theoretic scheduling algorithm further improves its performance in terms of the number of migrations.
- Published
- 2017
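The laxity-based baseline against which the letter's algorithm is compared can be sketched in a few lines; this shows plain least-laxity-first selection only, without the information-theoretic migration reduction the letter contributes:

```python
def least_laxity_first(jobs, now, num_procs):
    """Choose the jobs to run on `num_procs` processors by smallest
    laxity, where laxity = deadline - now - remaining work.
    jobs: list of (job_id, absolute_deadline, remaining_work).
    A sketch of the baseline laxity-based global policy only."""
    ranked = sorted(jobs, key=lambda j: j[1] - now - j[2])
    return [j[0] for j in ranked[:num_procs]]

# At t=0, laxities are a: 10-0-2=8, b: 6-0-3=3, c: 8-0-1=7,
# so the two most urgent jobs are b and c.
chosen = least_laxity_first([("a", 10, 2), ("b", 6, 3), ("c", 8, 1)], 0, 2)
```

Because laxities of different jobs frequently tie and cross, a naive LLF policy reshuffles job-to-processor assignments often, which is exactly the migration cost the letter targets.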
16. Toward a Practical Regularity-based Model
- Author
- Albert M. K. Cheng and Yu Li
- Subjects
Mathematical optimization, Unit of time, Computer science, Partition (database), Fair-share scheduling, Global scheduling, Scheduling (computing), Hardware and Architecture, Overall performance, Software, Resource utilization - Abstract
Most Hierarchical Real-time Scheduling (HiRTS) techniques have focused on temporal resource partitions in which time units are periodically distributed. Although such periodic partitions provide great flexibility for resource-level scheduling, engineers face significant obstacles when trying to determine the schedulability of real-time tasks running on them. The main reason is that periodic partitions fail to effectively bound the difference between the ideal and the actual resource allocation. To solve this problem, some researchers introduced the Regular Partition, a type of temporal resource partition that is almost evenly distributed. Recent research has shown that it achieves maximal transparency for task scheduling: some classical real-time scheduling problems on a regular partition can be easily transformed into equivalent problems on a dedicated single resource. However, the resource partitioning problem for regular partitions is much more complicated than the one for periodic partitions. Based on a practical two-layer HiRTS platform, this article introduces MulZ (Multiple Z-sequences), the first algorithm to solve this problem with a partitioned scheduling strategy. Using a more sophisticated approximation methodology, our experimental results show that MulZ outperforms the current best global scheduling algorithm on this problem. We then compare the overall performance of the periodic partition and the regular partition, and conclude that the regular partition is a better choice for the integration of real-time applications.
- Published
- 2017
17. Approximation algorithms in partitioning real-time tasks with replications
- Author
- Albert M. K. Cheng, Jian (Denny) Lin, and Gokhan Gercek
- Subjects
Computer Networks and Communications, Computer science, Approximation algorithm, Symmetric multiprocessor system, Fault tolerance, Multiprocessing, Parallel computing, Multiprocessor scheduling, Range (mathematics), Software - Abstract
Today is an era where multiprocessor technology plays a major role in the design of modern computer architecture. While multiprocessor systems offer extra computing power, they also open a new range of...
- Published
- 2017
18. Partitioning Real-Time Tasks With Replications on Multiprocessor Embedded Systems
- Author
- Albert M. K. Cheng, Jian Denny Lin, and Gokhan Gercek
- Subjects
General Computer Science, Computer science, Distributed computing, Approximation algorithm, Processor scheduling, Multiprocessing, Fault tolerance, Parallel computing, Multiprocessor scheduling, Control and Systems Engineering, Embedded system - Abstract
Executing computing tasks with replications is an essential choice for achieving fault tolerance in designing real-time embedded systems. This letter discusses the problem of maximizing the number of real-time tasks with replications running on a multiprocessor embedded system. The partitioning problem can be modeled as a variant of the bin-packing problem. In the original problem, it is known that the first-fit (FF) method has a good worst-case performance bound of 4/3. Whether the same bound is achievable in the variant problem had remained an open question. This letter closes the question by proving that the worst-case performance bound of the FF method approaches 2 but never reaches it. A tight asymptotic worst-case performance bound is then shown.
- Published
- 2016
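The first-fit heuristic the letter analyzes can be sketched as plain utilization bin packing; this toy version ignores the replication constraints that make the letter's variant harder:

```python
def first_fit_partition(utils, num_procs):
    """First-fit (FF) partitioning of task utilizations onto processors:
    each task goes to the first processor whose total load stays <= 1.
    Returns the per-task processor assignment, or None if some task
    does not fit. A sketch of the classic FF heuristic only, without
    the replication constraints analyzed in the letter."""
    load = [0.0] * num_procs
    assignment = []
    for u in utils:
        for p in range(num_procs):
            if load[p] + u <= 1.0:
                load[p] += u
                assignment.append(p)
                break
        else:
            return None  # no processor can accommodate this task
    return assignment

# 0.625 -> P0; 0.5 -> P1; 0.375 -> P0 (exactly full); 0.5 -> P1.
result = first_fit_partition([0.625, 0.5, 0.375, 0.5], 2)
```

The letter's result says that once replicas must be placed on distinct processors, FF's worst-case bound degrades from 4/3 toward (but never reaching) 2.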
19. EDF Scheduling of Industrial Robotic Manufacturing Tasks
- Author
- Albert M. K. Cheng and Pallovi Romero
- Subjects
Correctness, Job shop scheduling, Computer science, Distributed computing, Robot, Uniprocessor system, Multiprocessing, Automation, Time complexity, Scheduling (computing) - Abstract
As industry leans more toward automation, the integration of industrial robots has increased productivity. Industrial robots provide greater flexibility with stringent timing and reliability requirements in addition to functional correctness. We ask whether the traditional task scheduling algorithm, Earliest Deadline First (EDF), is suitable for robots in an industrial setting. For a set of periodic real-time tasks {T1, T2, ..., Tn}, the task set is schedulable under EDF if and only if its total processor utilization is less than or equal to 1. EDF is optimal for scheduling a set of independent and preemptable tasks, a setting that applies directly to industrial robotic manufacturing. In this paper, we address the scheduling problem for a single-processor device that executes preemptable time-critical tasks, evaluate its performance, and explore the possibilities of a modified EDF in a multiprocessor environment. A full solution to the multi-robot coordination problem relies on an efficient solution to this single-robot scheduling problem. For real-time scheduling problems like the one we explore, schedulability can be decided in polynomial time for any task set satisfying specific conditions. The overall performance of EDF in both uniprocessor and multiprocessor settings is explored.
- Published
- 2019
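The utilization condition quoted in the abstract above is the standard single-processor EDF test and can be checked in a few lines:

```python
def edf_schedulable(tasks):
    """Single-processor EDF schedulability test for independent,
    preemptable periodic tasks given as (C_i, T_i) pairs:
    the set is feasible iff sum(C_i / T_i) <= 1."""
    return sum(c / t for c, t in tasks) <= 1.0

# Utilization 1/4 + 2/8 + 2/4 = 1.0: schedulable (exactly at the bound).
ok = edf_schedulable([(1, 4), (2, 8), (2, 4)])
```

This test runs in O(n) time, which is the polynomial-time decidability the abstract alludes to for this class of task sets.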
20. Fault-Tolerant Regularity-Based Real-Time Virtual Resources
- Author
- Albert M. K. Cheng, Mansoor Ansari, Guangli Dai, Yu Li, Darrel Knape, and Pavan Kumar Paluri
- Subjects
Schedule, Computer science, Distributed computing, Parameterized complexity, Fault tolerance, Partition (database), Partition model, Task analysis, Redundancy (engineering), Fault rate - Abstract
Many safety-critical applications employ embedded real-time systems where both timing and fault tolerance requirements must be continually satisfied. The Regularity-based Resource Partition Model (RRP), which is known for its code level independence between resource level and task level, is used to schedule resource partitions in virtualized real-time systems. This paper presents a fault tolerance model for Regularity-based Real-Time Virtual Resources to recover from transient hardware faults without modifying user applications. The proposed framework consists of a checkpointing mechanism called Fault-Tolerant RRP with a checkpointing partition followed by a redundancy partition prepared for re-execution to satisfy task deadlines despite the occurrence of faults. The frequency of checkpoints and the number of time slices in the redundancy partition are parameterized by the fault rate of the hardware resource and the sum of the availability factors of the original partition sets. Extensive theoretical analysis and simulation-based experiments show the effectiveness of the proposed framework while incurring minimal overhead.
- Published
- 2019
21. Work-in-Progress: Incorporating Deadline-Based Scheduling in Tasking Programming Model for Extreme-Scale Parallel Computing
- Author
- Albert M. K. Cheng and Panruo Wu
- Subjects
Job shop scheduling, Computer science, Distributed computing, Big data, Message Passing Interface, Thread (computing), Work in process, Virtualization, Scheduling (computing), Programming paradigm - Abstract
Processing and analyzing big data sets updated in real time in a growing number of applications, such as severe weather prediction and particle-physics experiments, requires the computational power of extreme-scale high-performance computing (HPC) systems. To address the scheduling of massive task/thread sets on these extreme-scale systems, current strategies rely on improving centralized, distributed, and parallel scheduling algorithms, as well as virtualization developed for HPC systems, which aim to reduce the makespan and balance the load among computing nodes. However, these HPC schedulers provide no guarantees on meeting timing constraints such as deadlines, which are required in an increasing number of real-time science workflows. This paper describes a new project that departs from this established trend of best-effort scheduling of large-scale HPC Message Passing Interface (MPI) tasks and ensemble workloads found in fine-grain many-task computing (MTC) applications, and instead brings real-time scheduling to bear on the demands of real-time science workloads. The new framework abstracts information about the tasks or threads and continuously dispatches this workload to meet deadlines and other timing constraints associated with individual tasks or groups of tasks in extreme-scale HPC systems, reducing execution time and energy consumption. This paper introduces deadline-based scheduling in the tasking programming model.
- Published
- 2018
22. Transparent Real-Time Task Scheduling on Temporal Resource Partitions
- Author
-
Albert M. K. Cheng and Yu Li
- Subjects
020203 distributed computing ,Schedule ,Computer science ,Distributed computing ,Processor scheduling ,02 engineering and technology ,Dynamic priority scheduling ,Partition (database) ,Fair-share scheduling ,020202 computer hardware & architecture ,Theoretical Computer Science ,Scheduling (computing) ,Fixed-priority pre-emptive scheduling ,Computational Theory and Mathematics ,Hardware and Architecture ,Two-level scheduling ,0202 electrical engineering, electronic engineering, information engineering ,Resource management ,Software - Abstract
The Hierarchical Real-Time Scheduling (HiRTS) technique helps improve overall resource utilization in real-time embedded systems. With HiRTS, a computation resource is divided into a group of temporal resource partitions, each of which accommodates multiple real-time tasks. Besides the computation-resource partitioning problem, real-time task scheduling on resource partitions is also a major problem in HiRTS. Existing scheduling techniques for dedicated resources, such as schedulability tests and utilization bounds, in most cases cannot be applied unchanged to temporal resource partitions. In this paper, we show how to achieve maximal transparency for task scheduling on Regular Partitions, a type of resource partition introduced by the Regularity-based Resource Partition (RRP) Model. We show that several classes of real-time scheduling problems on a regular partition can be transformed into equivalent problems on a dedicated single resource, so that comprehensive single-resource scheduling techniques provide optimal solutions. Furthermore, this transformation method can be applied to different types of real-time tasks, such as periodic, sporadic, and aperiodic tasks.
- Published
- 2016
23. Timing analysis of P-FRP systems
- Author
-
Albert M. K. Cheng and Danielle Underwood
- Subjects
010302 applied physics ,Cover (telecommunications) ,Computer science ,Distributed computing ,Static timing analysis ,02 engineering and technology ,Fibre-reinforced plastic ,01 natural sciences ,Industrial engineering ,Field (computer science) ,020202 computer hardware & architecture ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science (miscellaneous) ,Engineering (miscellaneous) - Abstract
P-FRP systems are relatively new, so little research has been done on how tasks are best scheduled under the P-FRP paradigm. This paper provides a background on the subjects that timing analysis must cover, and concludes with suggestions for future research in this field.
- Published
- 2016
24. A new scheduling algorithm for non-preemptive independent tasks on a multi-processor platform
- Author
-
Albert M. K. Cheng, Stefan Andrei, Sharfuddin Alam, Vlad Radulescu, and Suresh Vadlakonda
- Subjects
Rate-monotonic scheduling ,Earliest deadline first scheduling ,Computer science ,Distributed computing ,020208 electrical & electronic engineering ,02 engineering and technology ,Dynamic priority scheduling ,Parallel computing ,Deadline-monotonic scheduling ,Multiprocessor scheduling ,Fair-share scheduling ,020202 computer hardware & architecture ,Priority inversion ,Fixed-priority pre-emptive scheduling ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science (miscellaneous) ,Engineering (miscellaneous) - Abstract
The abort-and-restart scheme of the Priority-based Functional Reactive Programming (P-FRP) paradigm eliminates the priority inversion problem. This paper likewise addresses priority inversion, using sets of task-ordering restriction relations. One of the core problems in real-time systems is finding a feasible schedule for a task set on a multiprocessor platform. While preemptive scheduling has benefited from a large number of significant results, the non-preemptive case still has room for improvement. Although the well-known EDF and LLF scheduling techniques are optimal for preemptive uniprocessor platforms, there is no known optimal scheduling algorithm for non-preemptive task sets on a multiprocessor platform. Our paper continues our previous work in [1] and [2] by describing experimental results, together with their findings, for our alternative to the EDF and LLF techniques. Our algorithm, called A, is able to schedule all considered non-preemptive independent tasks on a multiprocessor platform, whereas EDF and LLF sometimes fail to provide a feasible schedule. The experiments indicate no execution-time overhead in the cases where all the scheduling algorithms were able to produce a schedule; however, when EDF and LLF fail, A produces the schedule with a 60% execution-time overhead.
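As a concrete point of comparison, the two baseline priority metrics the abstract measures against (EDF and LLF) can be sketched as follows. This is a minimal illustration only, not the authors' algorithm A; the task fields and numeric values are hypothetical.

```python
# Minimal sketch of the two baseline priority metrics (EDF and LLF).
# Task fields ("deadline", "remaining") and the values are illustrative.

def edf_key(task, now):
    """EDF: the task with the earliest absolute deadline runs first."""
    return task["deadline"]

def llf_key(task, now):
    """LLF: the task with the least laxity (slack) runs first."""
    return task["deadline"] - now - task["remaining"]

tasks = [
    {"id": "T1", "deadline": 10, "remaining": 9},  # laxity at t=0: 1
    {"id": "T2", "deadline": 8,  "remaining": 2},  # laxity at t=0: 6
]
now = 0
edf_pick = min(tasks, key=lambda t: edf_key(t, now))  # T2: earlier deadline
llf_pick = min(tasks, key=lambda t: llf_key(t, now))  # T1: less laxity
```

The example is chosen so the two metrics disagree, which is exactly the situation in which the choice of heuristic matters for feasibility.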
- Published
- 2016
25. Worst case response time and schedulability analysis for real-time software transactional memory-lazy conflict detection (STM-LCD)
- Author
-
Albert M. K. Cheng, Yu Jiang, Yakun Li, Qiang Zhou, and Xingliang Zou
- Subjects
Schedule (computer science) ,Computer science ,business.industry ,Real-time computing ,Transactional memory ,020206 networking & telecommunications ,02 engineering and technology ,Parallel computing ,020202 computer hardware & architecture ,Set (abstract data type) ,Software ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science (miscellaneous) ,Software transactional memory ,business ,Engineering (miscellaneous) ,Reactive system ,Execution model ,Real-time operating system - Abstract
Software transactional memory (STM) is a transactional mechanism for controlling access to shared resources in memory. This transactional mechanism is similar to the abort-and-restart execution model of a functional reactive system (FRS). Due to its abort-and-restart nature, the execution semantics of STM differ from the classic preemptive or non-preemptive models, and prior research imposes strong constraints in its worst-case response time (WCRT) analysis. In this paper, we study the WCRT and schedulability of real-time software transactional memory with lazy conflict detection (STM-LCD). Specifically, we introduce a parameter, the remainder factor m; formally derive an exact WCRT for a 2-task set on STM systems using lazy conflict detection (LCD); and propose an exact schedulability test for a 2-task set. We also present a near-exact WCRT for an n-task set on STM-LCD, and propose a new necessary condition and a new sufficient condition for scheduling an n-task set. Finally, we show that experimental results are consistent with the above analysis.
- Published
- 2016
26. DwarfCode: A Performance Prediction Tool for Parallel Applications
- Author
-
Albert M. K. Cheng, Jaspal Subhlok, and Weizhe Zhang
- Subjects
020203 distributed computing ,business.industry ,Computer science ,020206 networking & telecommunications ,02 engineering and technology ,Parallel computing ,Tracing ,Theoretical Computer Science ,Scheduling (computing) ,Software ,Computational Theory and Mathematics ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,Performance prediction ,Benchmark (computing) ,Algorithm design ,business ,Time complexity ,Data compression - Abstract
We present DwarfCode, a performance prediction tool for MPI applications on diverse computing platforms. The goal is to accurately predict the running time of applications for task scheduling and job migration. First, DwarfCode collects execution traces to record the computing and communication events. Then, it merges the traces from different processes into a single trace. After that, DwarfCode identifies and compresses the repeating patterns in the final trace to shrink the number of events. Finally, a dwarf code is generated to mimic the original program's behavior. This smaller running benchmark is replayed on the target platform to predict the performance of the original application. In generating such a benchmark, the two major challenges are reducing the time complexity of the trace-merging and repeat-compression algorithms. We propose an O(mpn) trace-merging algorithm to combine the traces generated by separate MPI processes, where m denotes the upper bound of the tracing distance, p the number of processes, and n the maximum number of events over all the traces. More importantly, we put forward a novel repeat-compression algorithm whose time complexity is O(n log n). Experimental results show that DwarfCode can accurately predict the running time of MPI applications: the error rate is below 10 percent for compute- and communication-intensive applications. The toolkit has been released for free download as GNU General Public License v3 software.
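To make the repeat-compression idea concrete: the toy below collapses immediate repeats of a single event into (event, count) pairs. It is a deliberately simplified stand-in for the paper's O(n log n) algorithm, which compresses repeating multi-event patterns; the event names are hypothetical.

```python
# Toy repeat compression over a trace of MPI-style events.
# This naive O(n) pass handles only runs of one repeated event;
# the paper's algorithm additionally finds repeating multi-event patterns.

def compress_repeats(events):
    """Collapse consecutive identical events into (event, count) pairs."""
    out = []
    for e in events:
        if out and out[-1][0] == e:
            out[-1] = (e, out[-1][1] + 1)  # extend the current run
        else:
            out.append((e, 1))             # start a new run
    return out

compressed = compress_repeats(["send", "send", "recv", "send"])
# -> [("send", 2), ("recv", 1), ("send", 1)]
```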
- Published
- 2016
27. Special issue on Real-Time Scheduling on Heterogeneous Multi-core Processors
- Author
-
Weizhe Zhang, Albert M. K. Cheng, Marc Geilen, Electronic Systems, and CompSOC Lab- Predictable & Composable Embedded Systems
- Subjects
Multi-core processor ,Computer Networks and Communications ,Computer science ,Distributed computing ,020208 electrical & electronic engineering ,02 engineering and technology ,Gang scheduling ,Multiprocessor scheduling ,Fair-share scheduling ,020202 computer hardware & architecture ,Scheduling (computing) ,Artificial Intelligence ,Hardware and Architecture ,Two-level scheduling ,0202 electrical engineering, electronic engineering, information engineering ,Software - Published
- 2016
28. Real-Time Security Through a TEE
- Author
-
Albert M. K. Cheng and Roberto Duenez
- Subjects
Information sensitivity ,Focus (computing) ,business.industry ,Computer science ,Embedded system ,Task analysis ,Dimension (data warehouse) ,business ,Process isolation - Abstract
The increasing complexity of embedded and cyber-physical systems has demanded a new dimension of security. Trusted Execution Environments (TEEs) provide process isolation and dedicated hardware to prevent breaches of sensitive information. This paper focuses on the use of a TEE on TrustZone architectures for embedded-systems security, and on an implementation with real-time constraints.
- Published
- 2018
29. SITSA-RT: An Information Theory Inspired Real-Time Multiprocessor Scheduler
- Author
-
Albert M. K. Cheng and A C Carlos Rincon
- Subjects
Computer science ,010401 analytical chemistry ,Multiprocessing ,Avionics architecture ,02 engineering and technology ,Parallel computing ,Information theory ,01 natural sciences ,Multiprocessor scheduling ,020202 computer hardware & architecture ,0104 chemical sciences ,Set (abstract data type) ,Reduction (complexity) ,Task (computing) ,0202 electrical engineering, electronic engineering, information engineering ,Task analysis - Abstract
In this paper, we describe how Shannon's information theory is used to develop the Simplified Information-Theoretic Scheduling Algorithm for Real-Time Systems (SITSA-RT), and we explain the mechanism this algorithm uses to reduce the number of job migrations in real-time systems implemented on a multiprocessor platform. We present a performance comparison of the proposed algorithm with different multiprocessor scheduling algorithms on synthetic and real-world task sets. For the synthetic task sets, SITSA-RT outperforms all the studied EDF-based algorithms (by up to 41.65%) and P-Fair-based algorithms (by up to 93.22%) in terms of reducing the number of job migrations, while offering similar performance in terms of the number of preemptions, the number of task migrations, and the deadline miss ratio. These results also show that as the per-task-set utilization and the number of processors increase, SITSA-RT further improves its performance in terms of the number of migrations. On the real-world task set, based on NASA's X-38 avionics architecture, MLLF improves on SITSA-RT's scheduler execution time by 5.96% and SITSA-RT improves on LLF's by 19%; in terms of memory requirements, MLLF's usage is 13.48% lower than SITSA-RT's, and SITSA-RT's usage is 52.97% lower than LLF's.
- Published
- 2018
30. The Testing Execution Mechanism on Internetware Oriented Flow Dynamic Building
- Author
-
Ying Song, Min Song, Hongbin Wang, Zhengxian Wei, and Albert M. K. Cheng
- Subjects
Mechanism (engineering) ,Internetware ,Correctness ,Flow (mathematics) ,Computer science ,business.industry ,Business process ,Middleware ,Process (computing) ,Publish–subscribe pattern ,Software engineering ,business - Abstract
Based on DDS middleware with a publish-subscribe mechanism, we propose a Flow Model of Internetware-oriented dynamic building (FMoI). Through FMoI, the business processes of networked software can be flexibly organized, criteria are established for judging whether the networked software is running normally, and the correctness of the process is ensured. To support automatic online testing of networked software, this paper puts forward an Execution Mechanism for Internetware Online Testing, called EMIOT, which describes the framework and process for implementing the online test-execution mechanism of networked software on DDS middleware.
- Published
- 2017
31. Scheduling Mixed-Criticality Real-Time Tasks in a Fault-Tolerant System
- Author
-
Jian (Denny) Lin, Doug Steel, Michael Yu-Chi Wu, Albert M. K. Cheng, and Nanfei Sun
- Subjects
Rate-monotonic scheduling ,Earliest deadline first scheduling ,Fixed-priority pre-emptive scheduling ,General Computer Science ,Computer science ,Distributed computing ,Two-level scheduling ,Flow shop scheduling ,Dynamic priority scheduling ,Deadline-monotonic scheduling ,Fair-share scheduling - Abstract
Enabling computing tasks with different levels of criticality to run on a common hardware platform has become an increasingly important trend in the design of real-time and embedded systems. On such systems, a real-time task may exhibit different WCETs (Worst-Case Execution Times) in different criticality modes, and it is well known that traditional real-time scheduling methods cannot ensure the timing requirements of mixed-criticality tasks. In this paper, the authors study the problem of scheduling real-time, mixed-criticality tasks with fault tolerance. An optimal, off-line algorithm is designed to guarantee that the largest number of tasks complete successfully when the system enters the high-criticality mode; a formal proof of optimality is given. A novel on-line slack-reclaiming algorithm is also proposed to recover from computing faults before the tasks' deadlines at run time. Simulations show that the slack-reclaiming method yields a performance improvement of about 30%.
- Published
- 2015
32. Dynamic load balancing technology for cloud-oriented CDN
- Author
-
Hui He, Weizhe Zhang, Albert M. K. Cheng, Li Zhigang, Yana Feng, and Zhenguang Zhu
- Subjects
Network Load Balancing Services ,General Computer Science ,User experience design ,business.industry ,Computer science ,Server ,Content delivery network ,The Internet ,Round-robin DNS ,Cloud computing ,Load balancing (computing) ,business ,Computer network - Abstract
With soaring demand for Internet content services, the content delivery network (CDN), one of the most effective content-acceleration techniques, is widely applied to Internet services. Content-routing functions in a CDN are generally realized by a load-balancing system, and the effectiveness of the load-balancing strategy directly determines response speed and user experience (UE). This paper extracts the most important factors influencing CDN load from common network services and proposes the Variable-Factor Weighted Least Connection algorithm. The proposed algorithm performs real-time computation and dynamic regulation, taking into account the effect of network applications on the server load index, changes in server performance, and changes in workload. It has been successfully applied in the LVS kernel system. Experiments confirm that a CDN load-scheduling system using Variable-Factor Weighted Least Connection can dynamically balance loads among cluster servers according to changes in their processing capacity, thus providing users with the desired services and content even under large-scale concurrent access.
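For orientation, the classic weighted-least-connection rule that the proposed algorithm extends can be sketched as follows. The paper's "variable factor" dynamically adjusts the weights; here, as an assumption for illustration, each server carries a static weight, and the server names and numbers are hypothetical.

```python
# Sketch of the standard weighted-least-connection selection rule
# (as used in LVS-style load balancers). The paper's contribution is
# making the weights vary with measured server load; here the weights
# are static, purely for illustration.

def pick_server(servers):
    """Choose the server minimizing conns/weight, i.e. the least
    loaded relative to its declared capacity."""
    return min(servers, key=lambda s: s["conns"] / s["weight"])

servers = [
    {"name": "A", "conns": 30, "weight": 1},  # ratio 30.0
    {"name": "B", "conns": 40, "weight": 4},  # ratio 10.0  <- picked
    {"name": "C", "conns": 25, "weight": 2},  # ratio 12.5
]
choice = pick_server(servers)
```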
- Published
- 2015
33. Disk failure prediction in heterogeneous environments
- Author
-
Darrell D. E. Long, Jehan-Francois Paris, Albert M. K. Cheng, Ricardo Vilalta, and Carlos Rincon
- Subjects
Artificial neural network ,Computer science ,business.industry ,Decision tree ,02 engineering and technology ,Machine learning ,computer.software_genre ,Logistic regression ,020202 computer hardware & architecture ,Data modeling ,Market research ,Homogeneous ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Artificial intelligence ,Data mining ,business ,computer - Abstract
Recent studies have shown the benefits of using SMART attributes to predict disk failures in homogeneous populations of disks of the same make and model. We address here the case of data centers with more heterogeneous disk populations, such as those described in the BackBlaze datasets, and propose to build global disk-failure predictors that apply to disks of all makes and models. Our first challenge was the large number of SMART parameters missing for most makes and models in many disk instances of our dataset. As a result, we had to discard the SMART attributes missing in at least 90 percent of the disks, which left us with 21 SMART attributes. We then applied a Reverse Arrangement Test to these attributes to select the strongest disk-failure indicators. We investigated three machine learning models (decision trees, neural networks, and logistic regression), using the 2015 BackBlaze data to train and validate our predictors. Our best model was a decision tree that identified true failure events among the disks that tested positive for at least one of our failure indicators. We then used the 2016 BackBlaze data to evaluate its performance. Our results show that this decision tree identifies at least 52 percent of all disk failures and makes nearly all its predictions several days ahead: no more than 2.45 percent of the predicted failures occur within a day or two of the prediction. Finally, we compared the performance of our predictor with those of RAIDShield and the original BackBlaze predictor. RAIDShield could predict at most 18 percent of disk failures, that is, 34 percent fewer failures than our decision tree, while the BackBlaze predictor predicted 60 percent of disk failures but generated 4 to 5 false alarms per correct prediction.
- Published
- 2017
34. Finding a Steady State Point for Fixed Priority Independent Periodic Real-Time Tasks with Arbitrary Given Release Offsets
- Author
-
Yu Jiang, Yue Qin, Xingliang Zou, and Albert M. K. Cheng
- Subjects
Steady state (electronics) ,Computer science ,Node (networking) ,02 engineering and technology ,Linked list ,Interval (mathematics) ,Phaser ,Electronic mail ,020202 computer hardware & architecture ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Point (geometry) ,Time point ,Algorithm - Abstract
The minimal schedulability interval is an important consideration both as research motivation and in practice. In this paper, we investigate the problem of finding the starting time point of the minimal schedulability interval for fixed-priority, independent, periodic, preemptive real-time tasks with arbitrary given release offsets (phasing). A linked-list-based method is proposed to solve the problem, in which each node of the linked list represents a pending-less busy period. Analysis and experimental results show that the linked-list-based method outperforms the current best acyclic-idle-slot-based one.
- Published
- 2017
35. Multi-mode P-FRP Task Scheduling
- Author
-
Carlos Rincon, Yu Jiang, Albert M. K. Cheng, and Xingliang Zou
- Subjects
Commercial software ,Computer science ,Distributed computing ,Real-time computing ,02 engineering and technology ,computer.software_genre ,Expert system ,020202 computer hardware & architecture ,Scheduling (computing) ,Priority inversion ,Fixed-priority pre-emptive scheduling ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Task analysis ,User interface ,computer ,Functional reactive programming - Abstract
Functional Reactive Programming (FRP) provides an elegant way to express computation in domains such as interactive animations, robotics, computer vision, user interfaces, and simulation. Priority-based (preemptive) FRP (P-FRP), a variant of FRP with more real-time characteristics, demands research into its scheduling and timing analysis. Unlike the classic preemptive model, when a task is preempted in a P-FRP system, all changes made by the task are discarded, and after the higher-priority tasks complete their execution, the preempted task restarts from the beginning (abort-and-restart). P-FRP is thus able to capture changes to the task in time and provides an alternative to the classic preemptive model in certain scenarios. Previous studies of the P-FRP model use a task's largest execution time for all of its restarted jobs. In practice, however, when considering the task's changing or unchanging inputs and outputs, or memory effects such as cache hits when loading code and data, restarted jobs likely consume less time than the largest execution time. In this paper, we present for the first time a multi-mode P-FRP task framework and two particular scenarios for the framework that reflect such effects and thereby improve the performance of a commercial software platform under development. We show that the multi-mode P-FRP task system offers significant schedulability improvements over the original P-FRP model.
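A toy calculation makes the multi-mode intuition concrete: under abort-and-restart, all progress before a preemption is discarded, but a restarted ("warm") job may cost less than the initial ("cold") one. The function and all timing values below are hypothetical, chosen only to show why charging the cold cost to every restart is pessimistic.

```python
# Toy P-FRP abort-and-restart response-time calculation with a
# multi-mode twist: the restarted job runs in a shorter "warm" mode
# (e.g. because its code and data are already cached). All values
# are illustrative, not from the paper.

def response_time(cold, warm, preempt_at, hp_cost):
    """Low-priority job starts at t=0, is aborted at `preempt_at` by a
    high-priority job costing `hp_cost`, then restarts from scratch
    in warm mode; the `preempt_at` units of progress are discarded."""
    restart = preempt_at + hp_cost
    return restart + warm

# Classic single-mode P-FRP charges the cold cost to the restart too:
single_mode = response_time(cold=5, warm=5, preempt_at=3, hp_cost=2)  # 10
# Multi-mode charges a smaller warm cost on restart:
multi_mode = response_time(cold=5, warm=4, preempt_at=3, hp_cost=2)   # 9
```

The one-unit difference here is exactly the pessimism the multi-mode framework removes from the schedulability analysis.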
- Published
- 2017
36. Using information theory principles to schedule real-time tasks
- Author
-
A C Carlos Rincon and Albert M. K. Cheng
- Subjects
Rate-monotonic scheduling ,Earliest deadline first scheduling ,Theoretical computer science ,Computer science ,020206 networking & telecommunications ,02 engineering and technology ,Dynamic priority scheduling ,Fair-share scheduling ,Deadline-monotonic scheduling ,Priority inversion ,Fixed-priority pre-emptive scheduling ,Two-level scheduling ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Arithmetic - Abstract
The purpose of this paper is to present a scheduling solution based on information theory principles for scheduling real-time tasks. We develop the mathematical background for using information as a parameter in real-time systems, as well as the relationship between information and utilization. We present a new dynamic-priority scheduling solution that selects the task with the highest amount of information in the studied interval, provide a feasibility analysis of the solution, and compare its performance against the Earliest Deadline First (EDF) scheduling algorithm using the number of context switches and the number of preemptions as dependent variables. We generated 16 test files (with 100 synthetic task sets per file) using as independent variables (a) utilization (from 70% to 100%), (b) number of tasks per task set (from 2 to 5), and (c) hyper-period (fixed at 40). The results show that (a) our scheduling solution improves on EDF by 1.1384% in the number of context switches and by 2.0428% in the number of preemptions for the studied task sets, and (b) the similarity ratio (number of identical schedules) between the two algorithms was 42.68%.
- Published
- 2017
37. An auto-scaling mechanism for virtual resources to support mobile, pervasive, real-time healthcare applications in cloud computing
- Author
-
Albert M. K. Cheng, Hsiao-Hwa Chen, Yong woon Ahn, Minho Jo, and Jinsuk Baek
- Subjects
Context-aware pervasive systems ,Cloud computing security ,Computer Networks and Communications ,Computer science ,business.industry ,Distributed computing ,Mobile computing ,Cloud computing ,Virtualization ,computer.software_genre ,Utility computing ,Hardware and Architecture ,End-user computing ,Resource allocation (computer) ,business ,computer ,Software ,Information Systems ,Computer network - Abstract
Cloud computing with virtualization technologies has become an important trend in the information technology industry. Due to its salient features of reliability and cost effectiveness, cloud computing has changed the paradigms of development for mobile pervasive services, effectively permeating the market. While most best-effort mobile pervasive applications can be seamlessly migrated to cloud computing infrastructures, specialized elements are needed to make those infrastructures effective for real-time healthcare applications: the client side of such applications dramatically increases its transmission rate whenever it detects an abnormal event, and existing server-side mechanisms are limited in their ability to adaptively allocate the computing resources needed to handle these varying data volumes over time. In this article, we propose a novel server-side auto-scaling mechanism that autonomously allocates virtual resources on demand. The mechanism is tested on Amazon EC2, and the results show how it can efficiently scale the virtual resources up and down depending on the volume of requested real-time tasks.
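To illustrate the scale-up/scale-down behavior described above, here is a minimal threshold-based auto-scaling sketch. The paper's actual policy is driven by the volume of requested real-time tasks and their timing constraints; the thresholds, capacity figure, and step size below are assumptions for illustration only.

```python
# Minimal threshold-based auto-scaler sketch (not the paper's policy).
# All parameters are hypothetical: each VM is assumed to handle
# `capacity_per_vm` pending tasks, and scaling moves one VM at a time.

def scale(current_vms, pending_tasks, capacity_per_vm=10,
          up_ratio=0.9, down_ratio=0.3):
    """Return the new VM count given the current load ratio."""
    load = pending_tasks / (current_vms * capacity_per_vm)
    if load > up_ratio:
        return current_vms + 1            # scale up under overload
    if load < down_ratio and current_vms > 1:
        return current_vms - 1            # scale down when mostly idle
    return current_vms                    # otherwise hold steady

scale(2, 19)  # load 0.95 -> add a VM
scale(2, 5)   # load 0.25 -> release a VM
scale(2, 10)  # load 0.50 -> unchanged
```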
- Published
- 2013
38. Feasibility interval for the transactional event handlers of P-FRP
- Author
-
Bo Liu, Albert M. K. Cheng, and Chaitanya Belwal
- Subjects
Functional programming ,Computer Networks and Communications ,Computer science ,Applied Mathematics ,Distributed computing ,Preemption ,Response time ,Theoretical Computer Science ,Computational Theory and Mathematics ,Asynchronous communication ,Bounded function ,Execution model ,Functional reactive programming ,Least common multiple - Abstract
Functional Reactive Programming (FRP) is a resource-aware declarative approach for modeling and building safety-critical embedded systems. Recently, Priority-based FRP (P-FRP) was introduced as a formalism that guarantees real-time response. Due to the stateless nature of functional program execution, P-FRP implements a transactional form of execution in which preempted lower-priority tasks are aborted. This makes the response time of a lower-priority task completely dependent on the execution pattern of higher-priority tasks. The feasibility interval in the classical preemptive model of real-time systems (in which tasks can be preempted by higher-priority tasks and resume execution from the point at which they were preempted) is known and depends on the least common multiple (LCM) of the task periods. However, since the abort form of preemption can induce side effects on the execution of lower-priority tasks, it has been unknown to date whether feasibility in P-FRP is also dependent on the LCM. In this paper, we rigorously prove that these side effects of preemption are bounded within the LCM and formally derive a value for the feasibility interval in P-FRP. This value is vital for more robust schedulability analysis of the P-FRP execution model. Highlights: (1) We rigorously prove that task abortions in P-FRP are bounded within the least common multiple (LCM) of the task periods. (2) We formally derive a value for the feasibility interval in P-FRP. (3) We address both synchronous and asynchronous releases of P-FRP tasks. (4) Feasibility intervals for synchronous releases in P-FRP are proved to be the same as those in the preemptive model. (5) A game-board method is proposed to compute the actual response time.
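The LCM of the task periods (the hyper-period) that the feasibility result hinges on is straightforward to compute; a short sketch, with illustrative periods:

```python
# Compute the hyper-period (LCM of task periods), the quantity the
# abstract's feasibility interval for synchronous releases reduces to.
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def feasibility_interval(periods):
    """LCM of all task periods; valid here for synchronous releases."""
    return reduce(lcm, periods)

interval = feasibility_interval([4, 6, 10])  # lcm(4,6)=12, lcm(12,10)=60
```

Note that for asynchronous releases the interval additionally depends on the release offsets; the sketch covers only the synchronous case the paper proves matches the preemptive model.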
- Published
- 2013
39. Resource Bounding for Non-Preemptive Task Scheduling on a Multiprocessor Platform
- Author
-
Stefan Andrei, Vlad Radulescu, and Albert M. K. Cheng
- Subjects
Rate-monotonic scheduling ,Earliest deadline first scheduling ,Computer science ,Distributed computing ,020208 electrical & electronic engineering ,02 engineering and technology ,Dynamic priority scheduling ,Deadline-monotonic scheduling ,Fair-share scheduling ,Multiprocessor scheduling ,020202 computer hardware & architecture ,Fixed-priority pre-emptive scheduling ,Two-level scheduling ,0202 electrical engineering, electronic engineering, information engineering - Abstract
Task scheduling, a fundamental problem in real-time systems, has been approached from various points of view and for various classes of hardware/software configurations. Most of the results currently available have been obtained for preemptive scheduling; the non-preemptive case is also of great interest, however, and its higher complexity requires different solutions. This paper builds on the authors' previous results regarding the minimum number of processors necessary for a feasible schedule of a given task set to exist. Whereas previous work considered single-instance tasks, the focus now moves to periodic tasks, and the existing results are extended to cover the new requirements. An existing scheduling algorithm, which aims to combine the characteristics of the well-known EDF and LLF techniques, is also adapted to handle periodic tasks.
- Published
- 2016
40. A Scratchpad Memory-Based Execution Platform for Functional Reactive Systems and Its Static Timing Analysis
- Author
-
Zeinab Kazemi and Albert M. K. Cheng
- Subjects
business.industry ,Computer science ,Preemption ,Response time ,02 engineering and technology ,Parallel computing ,CAS latency ,020202 computer hardware & architecture ,Task (project management) ,Memory management ,Worst-case execution time ,Embedded system ,0202 electrical engineering, electronic engineering, information engineering ,ComputerSystemsOrganization_SPECIAL-PURPOSEANDAPPLICATION-BASEDSYSTEMS ,020201 artificial intelligence & image processing ,business ,Functional reactive programming ,Scratchpad memory - Abstract
Priority-based Functional Reactive Programming (P-FRP) is a new variant of FRP for modeling reactive applications in real-time systems. In P-FRP, when the currently running task is preempted by an arriving higher-priority task, the lower-priority task is aborted and the higher-priority task executes; the lower-priority task restarts when the higher-priority one completes. Unlike the preemptive model, however, all changes made by an aborted task are discarded. That is to say, when an aborted task restarts, it must execute from the beginning. To provide a realistic Worst-Case Response Time (WCRT) for the tasks in P-FRP, it is therefore mandatory to derive a realistic Worst-Case Execution Time (WCET) for each task. Previous studies have ignored memory latency in the derivation of the WCRT, making the resulting estimates inaccurate and unrealistic; they have also assumed that the WCET of each task is known a priori. In this paper, we introduce a scratchpad memory (SPM)-based platform for executing P-FRP tasks and an approach to determine the WCET of the tasks by considering the memory cost of aborted tasks. We first compute the WCET of a task in a P-FRP system and then calculate the memory penalty caused by preemption. In the next step, we derive the WCRT of the task sets in P-FRP by considering memory latency on the proposed platform. Experimental results from the derivations of the WCET and WCRT, using task sets from the SNU real-time benchmarks and randomly generated tasks, are presented to validate this approach.
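The abort-and-restart semantics described in this abstract can be made concrete with a small simulation. The following is a minimal sketch, not the paper's SPM-aware analysis, assuming discrete time, synchronous release, and hypothetical (C, T) task parameters:

```python
def ar_response_time(tasks, idx, horizon):
    """Discrete-time simulation of P-FRP Abort-and-Restart (AR)
    scheduling for synchronously released periodic tasks.
    tasks: list of (C, T) pairs sorted by priority (index 0 highest).
    Returns the first completion time of tasks[idx], or None if it
    never completes within `horizon` time units."""
    remaining = [c for c, _ in tasks]          # work left per task
    for t in range(horizon):
        for j, (c, period) in enumerate(tasks):
            if t > 0 and t % period == 0:
                remaining[j] = c               # new job of task j arrives
                for k in range(j + 1, len(tasks)):
                    # partially executed lower-priority work is discarded
                    if 0 < remaining[k] < tasks[k][0]:
                        remaining[k] = tasks[k][0]
        for j in range(len(tasks)):            # run highest-priority pending task
            if remaining[j] > 0:
                remaining[j] -= 1
                if j == idx and remaining[j] == 0:
                    return t + 1
                break
    return None
```

Note how the second task set below never completes: the low-priority task needs three consecutive time units, but the high-priority releases leave gaps of only two, so every attempt is aborted and restarted from scratch.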
- Published
- 2016
41. A Non-Work-Conserving Model for P-FRP Fixed Priority Scheduling
- Author
-
Yu Jiang, Albert M. K. Cheng, and Xingliang Zou
- Subjects
Earliest deadline first scheduling ,Rate-monotonic scheduling ,Mathematical optimization ,Computer science ,Distributed computing ,020208 electrical & electronic engineering ,02 engineering and technology ,Dynamic priority scheduling ,Deadline-monotonic scheduling ,020202 computer hardware & architecture ,Scheduling (computing) ,Fixed-priority pre-emptive scheduling ,Priority inheritance ,0202 electrical engineering, electronic engineering, information engineering ,Functional reactive programming - Abstract
In real-time systems, Functional Reactive Programming (FRP) already plays an important role and will potentially play a larger one in the future. Priority-based (preemptive) FRP (P-FRP), a variant of FRP with more real-time characteristics, demands more research in its scheduling and timing analysis. Its Abort-and-Restart (AR) nature means that reducing preemptions can be critical for improving system performance. In this paper, we present a new non-work-conserving scheduling model, the P-FRP Deferred Start (DS) model, to avoid certain preemptions under fixed-priority scheduling, and we analyze some of its properties. Experiments show that the DS model dominates the P-FRP AR model in both schedulability rate and task response time: the schedulability rate increases by up to 243.1%, and the fraction of task sets whose response times decrease ranges from 15.49% for 3-task sets to 98.28% for 10-task sets.
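The core idea of deferring a start can be sketched as a simple admission rule. This is a hedged illustration of the non-work-conserving principle only, not the paper's exact DS condition; the function name and the `releases_high` input are hypothetical:

```python
def should_defer(now, c_low, releases_high):
    """Sketch of a Deferred Start-style rule: leave the processor idle
    rather than start a low-priority job of cost c_low that a known
    future higher-priority release would abort before it completes.
    releases_high: sorted future release times of higher-priority tasks."""
    next_release = next((r for r in releases_high if r > now), None)
    return next_release is not None and now + c_low > next_release
```

Deferring in this situation trades a little idle time for avoiding the wasted work of an execution that would be discarded anyway under abort-and-restart.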
- Published
- 2016
42. P-FRP task scheduling: A survey
- Author
-
Xingliang Zou, Yu Jiang, and Albert M. K. Cheng
- Subjects
Earliest deadline first scheduling ,Rate-monotonic scheduling ,Priority inversion ,Fixed-priority pre-emptive scheduling ,Computer science ,Distributed computing ,Preemption ,Dynamic priority scheduling ,Deadline-monotonic scheduling ,Functional reactive programming - Abstract
Functional Reactive Programming (FRP) is a declarative approach for modeling and building reactive systems, and it has been shown to be an expressive formalism for building graphics, robotics, and vision applications. Priority-based FRP (P-FRP) is a formalism of FRP that allows preemption of execution and guarantees real-time response. Since functional programs cannot maintain state and mutable data, changes made by preempted programs must be rolled back and the work done by them discarded. Hence, in the P-FRP model, a preempted lower-priority task must restart after the higher-priority tasks have completed execution. Current real-time research mainly focuses on the classic preemptive and non-preemptive models, and many methods have been developed to analyze the real-time guarantees of these models. Unfortunately, due to its transactional nature, in which preempted tasks are aborted and must restart, the execution semantics of the P-FRP model do not fit the standard definitions of classic preemptive or non-preemptive execution. In this survey paper, we review existing research on P-FRP task scheduling and present a few areas for future work.
- Published
- 2016
43. Poster Abstract: Memory-Aware Response Time Analysis for P-FRP Tasks
- Author
-
Xingliang Zou and Albert M. K. Cheng
- Subjects
Rate-monotonic scheduling ,Earliest deadline first scheduling ,Priority inversion ,Fixed-priority pre-emptive scheduling ,Priority inheritance ,Worst-case execution time ,Computer science ,Parallel computing ,Priority ceiling protocol ,Deadline-monotonic scheduling - Abstract
Summary form only given. Functional Reactive Programming (FRP) plays an increasingly important role in real-time systems. Priority-based (preemptive) FRP (P-FRP), a variant of FRP with more real-time characteristics, demands more research in its scheduling and timing analysis. In a P-FRP system, similar to a classic preemptive system, a higher-priority task can preempt a lower-priority one and cause it to abort; the lower-priority task restarts after the higher-priority tasks complete their execution. However, unlike the classic preemptive model, when a task aborts, all the changes made by the task are discarded (Abort and Restart). In previous studies, the Worst-Case Execution Time (WCET) of a task is used for all of its restarted executions. In practice, however, restarted executions likely take less time than the WCET once memory effects, such as cache hits when loading code and data, are considered. Consider a typical uninterrupted (cold-started) task life cycle: (1) code is loaded from the hard drive and data is loaded from main memory; (2) computation is done by the processor(s); (3) results are committed to main memory. In the P-FRP model, the time spent in phases (2) and (3) is wasted when a task is aborted; however, because of the memory hierarchy, the time spent in phase (1) can be shorter when the task restarts, since, for example, the task code may still be in cache and need not be read from slow main memory again. This memory effect has not been considered in previous studies of P-FRP systems. In this paper, we present our preliminary memory-aware P-FRP task response time analysis and experimental results. Our ongoing research will provide further theoretical response-time analysis and priority-assignment results for memory-aware P-FRP task scheduling. And since the execution-time difference is likely related to data placement and locality, we will address it in our multi-core P-FRP task scheduling research as well.
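Under the three-phase life cycle described above, a warm restart pays a smaller phase (1) cost. The following is a minimal worst-case accounting sketch, assuming each aborted attempt wastes a full execution; the phase costs are hypothetical parameters, not values from the poster:

```python
def worst_case_cost(c_load_cold, c_load_warm, c_compute, c_commit, aborts):
    """Worst-case processor time consumed by one P-FRP job that is
    aborted `aborts` times: the first attempt loads code/data cold,
    and every restart reloads from warm memory (e.g., code still in
    cache), paying c_load_warm < c_load_cold for phase (1)."""
    cold_attempt = c_load_cold + c_compute + c_commit
    warm_attempt = c_load_warm + c_compute + c_commit
    # each abort wastes (at most) one full attempt's worth of work
    return cold_attempt + aborts * warm_attempt
```

With, say, cold load 4, warm load 1, compute 10, and commit 2, two aborts cost 42 time units rather than the 48 a memory-oblivious analysis (cold load on every restart) would charge.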
- Published
- 2016
44. Poster Abstract: Using Linked List in Exact Schedulability Tests for Fixed Priority Scheduling
- Author
-
Yu Jiang, Albert M. K. Cheng, Xingliang Zou, and Jiaming Lv
- Subjects
Rate-monotonic scheduling ,Earliest deadline first scheduling ,Priority inversion ,Fixed-priority pre-emptive scheduling ,Least slack time scheduling ,Computer science ,Linked list ,Dynamic priority scheduling ,Parallel computing ,Arithmetic ,Deadline-monotonic scheduling - Abstract
In the context of fixed-priority preemptive real-time systems, Rate-Monotonic (RM) scheduling is optimal for n periodic/sporadic tasks that comply with a restrictive system model and have implicit deadlines. When these tasks are released simultaneously, the time required to complete the first job of each task is its response time. Response-time analysis, or an exact schedulability test, for RM scheduling therefore needs to cover only a time window no longer than the maximum task period (Tn), and such tests are known to be pseudo-polynomial in time complexity. Although response-time computation for RM schedules of implicit-deadline task systems has been proved NP-hard, the scale of many commercial systems is such that pseudo-polynomial exact tests can be used, and achieving more efficient exact tests, e.g., for online response-time analysis (RTA), is an important goal in both research and practice. The innovative aspect of our solution is the use of a linked list to represent the schedule in the exact response-time schedulability test, referred to as the LList-based test. Each busy period in the schedule is represented by a linked-list node recording the starting time and end time of the busy period, plus a pointer to the next node. The simulation is performed task by task in priority order (from 1 to n); when the starting time or end time of a busy period coincides with that of another, the two nodes are merged into one to represent a longer busy period. To improve efficiency, memory allocation and recycling for each node are also performed in user space. The time complexity of the LList-based test is O(N), where N is the total number of jobs within the time window Tn, while the total number of nodes in the linked list is no more than N - n + 1 in the worst case. Our experiments show that the LList-based exact test is a better candidate among exact response-time tests when task periods span no more than three orders of magnitude: it outperforms the current best exact tests in this scenario, and the memory it needs is also affordable.
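The busy-period linked list can be sketched as follows. This is an illustrative reading of the node layout (start time, end time, next pointer) and of the merge rule, not the authors' implementation (which, per the abstract, also manages node memory in user space):

```python
class Node:
    """One busy period [start, end) in the partial schedule."""
    __slots__ = ("start", "end", "next")

    def __init__(self, start, end, nxt=None):
        self.start, self.end, self.next = start, end, nxt


def insert_busy(head, start, end):
    """Insert a busy period into the sorted list, merging it with any
    nodes whose boundaries touch or overlap it, and return the
    (possibly new) head of the list."""
    dummy = Node(None, None, head)
    prev, cur = dummy, head
    while cur is not None and cur.end < start:       # skip earlier periods
        prev, cur = cur, cur.next
    while cur is not None and cur.start <= end:      # absorb touching periods
        start, end = min(start, cur.start), max(end, cur.end)
        cur = cur.next
    prev.next = Node(start, end, cur)
    return dummy.next
```

Inserting [0, 2), [4, 6), and then [2, 4) leaves a single node [0, 6): the third insertion bridges the gap, so both neighbours are merged into one longer busy period, which is what keeps the node count below N - n + 1.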
- Published
- 2016
45. Poster Abstract: Online Semi-Partitioned Multiprocessor Scheduling of Soft Real-Time Periodic Tasks for QoS Optimization
- Author
-
Behnaz Sanati and Albert M. K. Cheng
- Subjects
Earliest deadline first scheduling ,Rate-monotonic scheduling ,Fixed-priority pre-emptive scheduling ,Job shop scheduling ,Computer science ,Distributed computing ,Dynamic priority scheduling ,Parallel computing ,Fair-share scheduling ,Multiprocessor scheduling ,Deadline-monotonic scheduling - Abstract
Multiprocessor real-time scheduling algorithms may follow a partitioned or global approach, or some hybrid of the two called semi-partitioning. Semi-partitioned real-time scheduling algorithms extend partitioned ones by allowing a subset of tasks to migrate. Given the goal of less overhead, it is desirable for such a strategy to be boundary-limited, allowing a migrating task to migrate only between successive invocations (job boundaries); non-boundary-limited schedulers allow jobs themselves to migrate, which can be expensive in practice if jobs maintain much cached state. Previously proposed semi-partitioned algorithms for soft real-time (SRT) tasks, such as EDF-fm and EDF-os, have two phases: an offline assignment phase, in which tasks are assigned to processors and fixed tasks (which do not migrate) are distinguished from migrating ones, and an online execution phase governed by rules that extend EDF scheduling. These strategies aim to minimize tardiness. In this paper, we propose a new online reward-based semi-partitioning approach for scheduling periodic soft real-time tasks on homogeneous multiprocessor systems. We use an online choice between two approximation algorithms, Greedy and Load-Balancing, for partitioning, which provides optimized usage of processing time. No prior information is needed, so there is no offline phase. Our objective is to enhance QoS by minimizing tardiness and maximizing the total reward obtained by completed tasks within minimum makespan. We therefore allow different jobs of a task to be assigned to different processors (migration at job boundaries) based on their reward-based priorities and the workload of the processors. The method also extends directly to SRT systems with mixed task sets (aperiodic, sporadic, and periodic) by defining their deadlines accordingly. Many real-time applications can benefit from this solution, including but not limited to video streaming servers, multi-player video games, mobile online banking, and medical monitoring systems.
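In outline, the two partitioning heuristics named above might look like this. Both functions are hypothetical interpretations of Greedy and Load-Balancing job assignment for illustration only, not the paper's exact rules:

```python
def assign_load_balancing(job_cost, loads):
    """Place a job on the currently least-loaded processor (ties go to
    the lowest index) and return that processor's index.
    loads: mutable list of accumulated work per processor."""
    p = min(range(len(loads)), key=loads.__getitem__)
    loads[p] += job_cost
    return p


def assign_greedy(job_cost, loads, capacity):
    """Place a job on the first processor whose load stays within
    `capacity`; if none fits, fall back to load balancing."""
    for p, load in enumerate(loads):
        if load + job_cost <= capacity:
            loads[p] += job_cost
            return p
    return assign_load_balancing(job_cost, loads)
```

Because assignment happens per job rather than per task, consecutive jobs of the same task may land on different processors, which is exactly the migration-at-job-boundaries behaviour the abstract describes.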
- Published
- 2016
46. Poster Abstract: Preliminary Performance Evaluation of HEF Scheduling Algorithm
- Author
-
Carlos Rincon and Albert M. K. Cheng
- Subjects
Earliest deadline first scheduling ,Rate-monotonic scheduling ,Schedule ,Fixed-priority pre-emptive scheduling ,Worst-case execution time ,Computer science ,Real-time computing ,Dynamic priority scheduling ,Fair-share scheduling ,Task (project management) - Abstract
The purpose of this paper is to analyze the performance of the Highest Entropy First (HEF) scheduling algorithm for real-time tasks. The contributions of this paper are: (1) generating multiple task sets by implementing programs from the Seoul National University (SNU) real-time benchmark in Wind River Workbench 3.3 to calculate the WCET, with periods generated by a linear programming solution that maximizes system utilization for a predefined hyper-period; and (2) measuring the performance of the HEF algorithm in scheduling real-time tasks, using the number of context switches and the deadline-miss ratio as metrics. We implemented the SNU programs (sqrt.c, fibcall.c, crc.c, minver.c, and select.c) on a server with an Intel i7-3770 processor running at 3.4 GHz, 16 GB of RAM, and a 2 TB hard drive, using Wind River Workbench 3.3 to calculate the worst-case execution time (WCET). We ran each program 100 times and averaged the results. We created 4 task sets with 2, 3, 4, and 5 tasks, respectively; for each task set we used a 100 ms hyper-period to calculate the task periods, and we implemented a system with implicit deadlines. The preliminary performance evaluation shows that the number of context switches is directly proportional to the number of tasks in the task set. For the deadline-miss ratio, HEF was able to schedule all the task sets without missing any deadline; further analysis is needed to confirm that the deadline-miss ratio depends on the utilization of the system (U ≤ 1 = no deadline misses). The HEF algorithm has some similarities with the Earliest Deadline First (EDF) algorithm, so as future work we propose comparing the performance of HEF against EDF using the task sets generated by the methodology proposed in this paper.
- Published
- 2016
47. Using Entropy as a Parameter to Schedule Real-Time Tasks
- Author
-
Albert M. K. Cheng and A C Carlos Rincon
- Subjects
Earliest deadline first scheduling ,Rate-monotonic scheduling ,Schedule ,Mathematical optimization ,Fixed-priority pre-emptive scheduling ,Least slack time scheduling ,Computer science ,Two-level scheduling ,Dynamic priority scheduling ,Computer Science::Operating Systems ,Fair-share scheduling - Abstract
The purpose of this paper is to present the mathematical background for using entropy in real-time scheduling as well as the relationship between entropy and utilization. We present a new scheduling algorithm based on entropy to schedule tasks in real-time systems. The goal is to minimize the uncertainty of the scheduling problem by executing the task with the highest entropy first without missing any deadline. The uncertainty measurement is based on the probability of the execution of a task during the hyper-period. This study aims to present entropy as a new parameter that can be used by researchers in different real-time systems fields.
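One plausible reading of the entropy-based priority is sketched below, taking a task's utilization as its probability of executing in a given time unit of the hyper-period. The formula is an assumption for illustration, not the paper's exact definition:

```python
import math


def entropy_term(c, t):
    """Shannon entropy contribution -p*log2(p) of a task, where p = c/t
    (its utilization) is taken as the probability that the task executes
    in a randomly chosen time unit of the hyper-period (an assumed
    reading of the paper's uncertainty measure)."""
    p = c / t
    return -p * math.log2(p) if 0.0 < p < 1.0 else 0.0


def hef_pick(ready):
    """Highest Entropy First: among ready tasks given as (name, C, T)
    tuples, run the one whose entropy term is largest."""
    return max(ready, key=lambda task: entropy_term(task[1], task[2]))
```

Under this reading, a task with utilization 0.5 carries the most uncertainty (entropy 0.5 bit per time unit) and runs first, which matches the stated goal of reducing the scheduling problem's uncertainty fastest.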
- Published
- 2015
48. Deferred Start: A Non-Work-Conserving Model for P-FRP Fixed Priority Task Scheduling
- Author
-
Albert M. K. Cheng, Xingliang Zou, and Yu Jiang
- Subjects
Rate-monotonic scheduling ,Earliest deadline first scheduling ,Priority inversion ,Fixed-priority pre-emptive scheduling ,Computer science ,Two-level scheduling ,Distributed computing ,Dynamic priority scheduling ,Deadline-monotonic scheduling ,Fair-share scheduling ,Reliability engineering - Abstract
In real-time systems, FRP (Functional Reactive Programming) already plays an important role and will potentially play a larger one. Priority-based (preemptive) FRP (P-FRP), a variant of FRP with more real-time characteristics, demands more research in its scheduling and timing analysis. Its abort-and-restart nature means that reducing preemptions can be critical for improving system performance. In this paper, we present a non-work-conserving scheduling model, Deferred Start, to avoid certain preemptions. Experiments show improvements in schedulability and task response time.
- Published
- 2015
49. Scheduling Conditions for Real-Time Software Transactional Memory
- Author
-
Chaitanya Belwal and Albert M. K. Cheng
- Subjects
General Computer Science ,Computer science ,Transaction processing ,Distributed computing ,Real-time computing ,Processor scheduling ,Transactional memory ,Parallel computing ,Software_PROGRAMMINGTECHNIQUES ,Data structure ,Scheduling (computing) ,Transactional leadership ,Control and Systems Engineering ,Software transactional memory ,Real-time operating system - Abstract
Software transactional memory (STM) is a transactional mechanism for controlling access to shared resources in memory. Recently, variants of STM with real-time support have been presented. Due to its abort-restart nature, the execution semantics of STM differ from the classical preemptive or non-preemptive models. In this letter, we formally derive a utilization-based necessary and sufficient scheduling condition for an STM system using lazy conflict detection.
- Published
- 2011
50. Lazy Versus Eager Conflict Detection in Software Transactional Memory: A Real-Time Schedulability Perspective
- Author
-
Chaitanya Belwal and Albert M. K. Cheng
- Subjects
General Computer Science ,Programming language ,Computer science ,Transaction processing ,business.industry ,Transactional memory ,Access control ,Software_PROGRAMMINGTECHNIQUES ,computer.software_genre ,Scheduling (computing) ,Concurrency control ,Eager evaluation ,Control and Systems Engineering ,Task analysis ,Software transactional memory ,business ,computer - Abstract
Transactional memory is a mechanism for controlling access to shared resources in concurrent programs. Though originally implemented in hardware, software implementations of transactional memory are now available as library extensions in all major programming languages. Lately, variants of software transactional memory (STM) with real-time support have been presented. The conflict detection policy used in STM, which can be lazy or eager, determines the point at which transactions are aborted, and it can have a significant effect on the schedulability of tasks sharing common resources. Using an abstract model, we analyze the lazy and eager conflict detection policies from a real-time scheduling perspective.
- Published
- 2011