111 results for "workload distribution"
Search Results
2. A Model to Balance Production Workload Distribution in a Trailer Manufacturing Organisation Under Fluctuating Customer Ordering Condition
- Author
-
van der Walt, Gerhard, Makinde, Olasumbo, Mpofu, Khumbulani, Chaari, Fakher, Series Editor, Gherardini, Francesco, Series Editor, Ivanov, Vitalii, Series Editor, Cavas-Martínez, Francisco, Editorial Board Member, di Mare, Francesca, Editorial Board Member, Haddar, Mohamed, Editorial Board Member, Kwon, Young W., Editorial Board Member, Trojanowska, Justyna, Editorial Board Member, Xu, Jinyang, Editorial Board Member, Kohl, Holger, editor, Seliger, Günther, editor, and Dietrich, Franz, editor
- Published
- 2023
- Full Text
- View/download PDF
3. An Efficient Workload Distribution Mechanism for Tightly Coupled Heterogeneous Hardware
- Author
-
Rivera-Alvarado, Ernesto, Torres-Rojas, Francisco J., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Nagar, Atulya K., editor, Singh Jat, Dharm, editor, Mishra, Durgesh Kumar, editor, and Joshi, Amit, editor
- Published
- 2023
- Full Text
- View/download PDF
4. OptiDJS+: A Next-Generation Enhanced Dynamic Johnson Sequencing Algorithm for Efficient Resource Scheduling in Distributed Overloading within Cloud Computing Environment.
- Author
-
Banerjee, Pallab, Roy, Sharmistha, Modibbo, Umar Muhammad, Pandey, Saroj Kumar, Chaudhary, Parul, Sinha, Anurag, and Singh, Narendra Kumar
- Subjects
OPTIMIZATION algorithms, CLOUD computing, ALGORITHMS, RESOURCE allocation, WORKING hours, SCHEDULING, PHYSIOLOGICAL adaptation - Abstract
The continuously evolving world of cloud computing presents new challenges in resource allocation as dispersed systems struggle with overloaded conditions. In this regard, we introduce OptiDJS+, a cutting-edge enhanced dynamic Johnson sequencing algorithm made to successfully handle resource scheduling challenges in cloud computing settings. With a solid foundation in the dynamic Johnson sequencing algorithm, OptiDJS+ builds upon it to suit the demands of modern cloud infrastructures. OptiDJS+ makes use of sophisticated optimization algorithms, heuristic approaches, and adaptive mechanisms to improve resource allocation, workload distribution, and task scheduling. To obtain the best performance, this strategy uses historical data, dynamic resource reconfiguration, and adaptation to changing workloads. It accomplishes this by utilizing real-time monitoring and machine learning. It takes factors like load balance and makespan into account. We outline the design philosophies, implementation specifics, and empirical assessments of OptiDJS+ in this work. Through rigorous testing and benchmarking against cutting-edge scheduling algorithms, we show the superior performance and resilience of OptiDJS+ in terms of reaction times, resource utilization, and scalability. The outcomes underline its success in reducing resource contention and raising service quality generally in cloud computing environments. In contexts where there is distributed overloading, OptiDJS+ offers a significant advancement in the search for effective resource scheduling solutions. Its versatility, optimization skills, and improved decision-making procedures make it a viable tool for tackling the resource allocation issues that cloud service providers and consumers encounter daily. We think that OptiDJS+ opens the way for more dependable and effective cloud computing ecosystems, assisting in the full realization of cloud technologies' promises across a range of application areas.
In order to use the OptiDJS+ Johnson sequencing algorithm for cloud computing task scheduling, we provide a two-step procedure. After examining the links between the jobs, we generate a Gantt chart. The Gantt chart graph is then changed into a two-machine OptiDJS+ Johnson sequencing problem by assigning tasks to servers. The OptiDJS+ dynamic Johnson sequencing approach is then used to minimize the makespan and find the best sequence of operations on each server. Through extensive simulations and testing, we compare the performance of our proposed OptiDJS+ dynamic Johnson sequencing approach with two servers against that of current scheduling techniques. The results demonstrate that our technique greatly improves performance in terms of makespan reduction and resource utilization. The recommended approach also demonstrates its ability to scale and is effective at resolving challenging work scheduling problems in cloud computing environments. [ABSTRACT FROM AUTHOR]
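The two-machine step described in this abstract builds on classical Johnson's rule. A minimal sketch of that base rule (the textbook algorithm only, not the OptiDJS+ enhancements, whose details are not given here) might look like:

```python
def johnson_sequence(jobs):
    """Order jobs for a two-machine flow shop to minimize makespan.

    jobs: dict mapping job name -> (time on machine 1, time on machine 2).
    Johnson's rule: sort jobs by their smaller processing time; if that
    minimum is on machine 1, schedule the job as early as possible,
    otherwise as late as possible."""
    front, back = [], []
    for name, (t1, t2) in sorted(jobs.items(), key=lambda kv: min(kv[1])):
        if t1 <= t2:
            front.append(name)
        else:
            back.append(name)
    return front + back[::-1]


def makespan(jobs, order):
    """Completion time of the last job on machine 2 for a given order."""
    m1_free = m2_free = 0
    for name in order:
        t1, t2 = jobs[name]
        m1_free += t1                          # machine 1 runs jobs back to back
        m2_free = max(m2_free, m1_free) + t2   # machine 2 waits for machine 1
    return m2_free
```

In the paper's setting, the two "machines" are the two servers to which tasks have been assigned; the sequencing step then orders the operations on each.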
- Published
- 2023
- Full Text
- View/download PDF
5. System of Human Management Processes to Improve the Predictors of Staff Turnover in SMEs Dedicated to the Service Sector
- Author
-
Morales-Rojas, Grecia, Uchida-Ore, Kaduo, Sotelo, Fernando, Rojas, José, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Ahram, Tareq, editor, and Taiar, Redha, editor
- Published
- 2022
- Full Text
- View/download PDF
6. New Perspectives in Resistance Training Periodization: Mixed Session vs. Block Periodized Programs in Trained Men.
- Author
-
Bartolomei, Sandro, Zaniboni, Federico, Verzieri, Nicolò, and Hoffman, Jay R.
- Subjects
*RESISTANCE training, *STATURE, *BODY composition, *EVALUATION of human services programs, *HYPERTROPHY, *LEAN body mass, *EXERCISE physiology, *PHYSICAL training & conditioning, *PECTORALIS muscle, *MUSCLE strength, *DESCRIPTIVE statistics, *QUADRICEPS muscle, *STATISTICAL sampling, *BODY mass index, *JUMPING - Abstract
The purpose of this investigation was to compare the effects of 2 different periodized resistance training programs on maximal strength, power, and muscle architecture, in trained individuals. Twenty-two resistance-trained men were randomly assigned to either a mixed session training group (MSP; n = 11; age = 23.7 ± 2.6 years; body mass = 80.5 ± 9.8 kg; height = 175.5 ± 6.1 cm) or a block periodization group (BP; n = 11; age = 25.7 ± 4.6 years; body mass = 81.1 ± 10.7 kg; height = 176.8 ± 8.4 cm). Both training programs were 10 weeks in duration and were equated in volume. Each training session of the MSP focused on power, maximal strength, and hypertrophy, whereas each mesocycle within the BP focused on one of these components. Subjects were assessed for body composition, muscle architecture, maximal strength, and power. In addition, perceived training load, and training volume were calculated. Subjects in MSP experienced greater improvements in fat free mass (p = 0.021), muscle thickness of the pectoralis and vastus lateralis (p < 0.05), and a greater improvement in 1RM bench press (p < 0.001; +8.6% in MSP and +2% in BP) than in BP. By contrast, BP resulted in greater improvements in vertical jump (p = 0.022; +7.2%) compared with MSP (+1.2%). No significant differences were noted between the groups for perceived training load (p = 0.362) nor training volume (p = 0.169). Results of this study indicated that in a 10-week training study, MSP may enhance muscle hypertrophy and maximal strength to a greater extent than BP, with the same training volume and perceived training load. However, BP may be more effective for vertical jump improvement. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
7. Analytical Workload Allocation for Optimized Power Consumption and Delay in Fog-Cloud Networks Using Particle Swarm Optimization Algorithm.
- Author
-
Sohal, Asha and Kait, Ramesh
- Subjects
PARTICLE swarm optimization, SERVER farms (Computer network management), COMPUTER performance, QUALITY of service - Abstract
Fog Computing caters to the immediate requirement of processing and storage to serve the application running on physical devices attached to the Internet of Things. However, edge devices are restricted by limited processing power and storage. Thus, the voluminous data and the velocity with which the devices produce this data must be judiciously distributed between the fog device and data centers in the cloud system. Such an arrangement ensures that the overall quality of service and quality of experience of end-users are maintained. The paper proposes an algorithm to allocate jobs that utilizes the branch-and-bound technique and particle swarm optimization to trade off power consumption against task execution time over fog-cloud networks. The paper addresses the twin challenge of reducing the delay period or latency for communication and reducing the power consumption of the fog devices and machines deployed in the data centers. The paper aims to optimize workload allocation among various system components, including fog devices, machines in the data center, and the communication network between them. The graphical findings demonstrate that both power consumption and delay are reduced by employing the suggested algorithm in a fog-cloud scenario. [ABSTRACT FROM AUTHOR]
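A minimal one-dimensional particle swarm optimization over the fog/cloud split illustrates the kind of search this abstract describes. The cost model below is a made-up linear power/delay trade-off for illustration, not the authors' formulation:

```python
import random

def pso_minimize(cost, lo, hi, n_particles=20, iters=60, seed=1):
    """Minimal 1-D particle swarm optimization: find x in [lo, hi]
    minimizing cost(x). Illustrative sketch, not the paper's algorithm."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                      # each particle's best-known position
    gbest = min(xs, key=cost)          # swarm's best-known position
    for _ in range(iters):
        for i in range(n_particles):
            # standard velocity update: inertia + cognitive + social pulls
            vs[i] = (0.7 * vs[i]
                     + 1.5 * rng.random() * (pbest[i] - xs[i])
                     + 1.5 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if cost(xs[i]) < cost(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=cost)
    return gbest

# Hypothetical trade-off: f = fraction of the workload kept on fog devices.
# Assumed numbers: fog burns more power per unit of work but avoids the
# network round trip; the cloud is cheaper on power but adds delay.
def cost(f, w_power=0.5, w_delay=0.5):
    power = 2.0 * f + 1.0 * (1 - f)    # W per unit: fog 2.0, cloud 1.0
    delay = 1.0 * f + 3.0 * (1 - f)    # ms per unit: fog 1.0, cloud 3.0
    return w_power * power + w_delay * delay
```

Changing the weights `w_power` and `w_delay` moves the optimum along the power/delay trade-off, which is the tuning knob the paper's approach exposes.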
- Published
- 2023
- Full Text
- View/download PDF
8. Developing a decision support tool for the operation of parallel AS/RS during partial downtime: A case study at Jumbo Supermarkets
- Author
-
van den Brink, Luc (author)
- Abstract
This paper investigates the optimisation of Automated Storage and Retrieval Systems (AS/RS) in warehousing by minimising performance losses during partial downtime. Given the increasing automation in logistics, AS/RS systems play a pivotal role, yet the operation of those systems during partial downtime remains a topic largely ignored in the literature. This research fills this gap by exploring the effects of partial downtime in AS/RS through a reusable Discrete Event Simulation model which was developed in Python. This model incorporates the influence of both upstream and downstream systems, a characteristic notably absent from the limited number of publicly-available AS/RS models. Collaborating with Jumbo Supermarkets, the study utilises their highly automated distribution centre with an Order Consolidation Buffer housing 4 dual-crane AS/RS units as a case study. The study identifies operational policies to mitigate partial downtime effects, developed for scenarios with one or both cranes down within an AS/RS. Results suggest strategic workload distribution adjustments among AS/RS can significantly reduce performance degradation, particularly during high workload periods. After comparing both scenarios, it was concluded that for most scenarios, it is beneficial to keep operating the remaining crane when a crane breaks down, even though this slows down repairs. Overall, this research offers insights into parallel AS/RS dynamics under partial downtime and provides practical guidelines for effective operations.
- Published
- 2024
9. Police Districting Problem: Literature Review and Annotated Bibliography
- Author
-
Liberatore, Federico, Camacho-Collados, Miguel, Vitoriano, Begoña, Price, Camille C., Series Editor, Zhu, Joe, Associate Editor, Hillier, Frederick S., Founding Editor, and Ríos-Mercado, Roger Z., editor
- Published
- 2020
- Full Text
- View/download PDF
10. A Queueing Game Based Management Framework for Fog Computing With Strategic Computing Speed Control.
- Author
-
Yi, Changyan, Cai, Jun, Zhu, Kun, and Wang, Ran
- Subjects
SPEED, TASK analysis - Abstract
In this paper, a novel management framework for fog computing with strategic computing speed control at fog nodes (FNs) is studied. In the considered model, mobile users declare requests of offloading resource-hungry computation tasks that are dynamically collected at a dedicated edge server (ES). Upon receiving these requests, the ES can decide to either self-process or delegate some workloads to third-party FNs for maximizing the overall management profit. Unlike the existing work, this paper takes into account strategic behaviors of FNs in computing speed control, i.e., each FN can strategically allocate its computing resource to maximize its utility, which consists of the benefit gained from executing offloaded tasks and the cost incurred by dissatisfied (delayed) service to its own subscribed tasks. To jointly address the long-term system performance and FNs’ strategic interactions, a scheduling mechanism integrating a noncooperative game and a queueing model is formulated. We then investigate two delegation reward settings, i.e., constant and utility-dependent delegation prices, and propose efficient adaptive algorithms to determine the optimal workload distribution at the ES and the computing speed equilibrium among FNs. Both theoretical analyses and simulations are conducted to evaluate the performance of the proposed solutions and demonstrate their superiority over counterparts. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
11. Analysis of Web Workload on QoS to Assist Capacity
- Author
-
Abirami, K., Harini, N., Vaidhyesh, P. S., Kumar, Priyanka, Tavares, João Manuel R.S., Series Editor, Jorge, Renato Natal, Series Editor, Pandian, Durai, editor, Fernando, Xavier, editor, Baig, Zubair, editor, and Shi, Fuqian, editor
- Published
- 2019
- Full Text
- View/download PDF
12. A Technique of Adaptation of the Workload Distribution Problem Model for the Fog-Computing Environment
- Author
-
Kalyaev, I., Melnik, E., Klimenko, A., Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, and Silhavy, Radek, editor
- Published
- 2019
- Full Text
- View/download PDF
13. An Ontology-Based Approach to the Workload Distribution Problem Solving in Fog-Computing Environment
- Author
-
Klimenko, Anna, Safronenkova, Irina, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, and Silhavy, Radek, editor
- Published
- 2019
- Full Text
- View/download PDF
14. Distribution of Workload in IMA Systems by Solving a Modified Multiple Knapsack Problem
- Author
-
Balashov, Vasily, Antipina, Ekaterina, Rodrigues, H.C., editor, Herskovits, J., editor, Mota Soares, C.M., editor, Araújo, A.L., editor, Guedes, J.M., editor, Folgado, J.O., editor, Moleiro, F., editor, and Madeira, J. F. A., editor
- Published
- 2019
- Full Text
- View/download PDF
15. QoS-Aware Workload Distribution in Hierarchical Edge Clouds: A Reinforcement Learning Approach
- Author
-
Chunglae Cho, Seungjae Shin, Hongseok Jeon, and Seunghyun Yoon
- Subjects
Deep reinforcement learning, edge computing, resource allocation, workload distribution, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
Recently, edge computing has been getting attention as a new computing paradigm that is expected to achieve short-delay and high-throughput task offloading for large scale Internet-of-Things (IoT) applications. In edge computing, workload distribution is one of the most critical issues that largely influences the delay and throughput performance of edge clouds, especially in distributed Function-as-a-Service (FaaS) over networked edge nodes. In this paper, we propose the Resource Allocation Control Engine with Reinforcement learning (RACER), which provides an efficient workload distribution strategy to reduce the task response slowdown with per-task response time Quality-of-Service (QoS). First, we present a novel problem formulation with the per-task QoS constraint derived from the well-known token bucket mechanism. Second, we employ a problem relaxation to reduce the overall computation complexity by compromising just a bit of optimality. Lastly, we take the deep reinforcement learning approach as an alternative solution to the workload distribution problem to cope with the uncertainty and dynamicity of underlying environments. Evaluation results show that RACER achieves a significant improvement in terms of per-task QoS violation ratio, average slowdown, and control efficiency, compared to AREA, a state-of-the-art workload distribution method.
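The per-task QoS constraint above is derived from the well-known token bucket mechanism. A minimal sketch of the generic mechanism (not RACER's specific formulation) might look like:

```python
class TokenBucket:
    """Standard token-bucket rate limiter: tokens accrue at `rate` per
    second up to `capacity`; a task conforms if enough tokens remain."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity   # start full
        self.last = 0.0          # timestamp of the last refill

    def allow(self, now, need=1.0):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= need:
            self.tokens -= need
            return True
        return False
```

Bursts up to `capacity` tasks are admitted immediately; sustained load is held to `rate` tasks per second, which is the shape of guarantee a per-task QoS constraint can be built on.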
- Published
- 2020
- Full Text
- View/download PDF
16. A system for equitable workload distribution in clinical medical physics.
- Author
-
Minsun Kim, Ford, Eric, Smith, Wade, Bowen, Stephen R., Geneser, Sarah, and Meyer, Juergen
- Subjects
MEDICAL physics, ACADEMIC medical centers, PHYSICISTS, INDIVIDUAL differences - Abstract
Background: Clinical medical physics duties include routine tasks, special procedures, and development projects. It can be challenging to distribute the effort equitably across all team members, especially in large clinics or systems where physicists cover multiple sites. The purpose of this work is to study an equitable workload distribution system in radiotherapy physics that addresses the complex and dynamic nature of effort assignment. Methods: We formed a working group that defined all relevant clinical tasks and estimated the total time spent per task. Estimates used data from the oncology information system, a survey of physicists, and group consensus. We introduced a quantitative workload unit, "equivalent workday" (eWD), as a common unit for effort. The sum of all eWD values adjusted for each physicist's clinical full-time equivalent yields a "normalized total effort" (nTE) metric for each physicist, that is, the fraction of the total effort assigned to that physicist. We implemented this system in clinical operation. During a trial period of 9 months, we made adjustments to include tasks previously unaccounted for and refined the system. The workload distribution of eight physicists over 12 months was compared before and after implementation of the nTE system. Results: Prior to implementation, differences in workload of up to 50% existed between individual physicists (nTE range of 10.0%--15.0%). During the trial period, additional categories were added to account for leave and clinical projects that had previously been assigned informally. In the 1-year period after implementation, the individual workload differences were within 5% (nTE range of 12.3%--12.8%). Conclusion: We developed a system to equitably distribute workload and demonstrated improvements in the equity of workload. A quantitative approach to workload distribution improves both transparency and accountability.
While the system was motivated by the complexities within an academic medical center, it may be generally applicable for other clinics. [ABSTRACT FROM AUTHOR]
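As a rough illustration of the eWD-to-nTE bookkeeping: the numbers below are hypothetical, and dividing assigned eWD by the clinical FTE fraction is one plausible reading of "adjusted for each physicist's clinical full-time equivalent", not necessarily the authors' exact formula:

```python
def normalized_total_effort(ewd, cfte):
    """Compute each physicist's nTE: their share of total effort, with
    assigned eWD scaled by clinical FTE fraction (assumed adjustment).

    ewd:  dict physicist -> total assigned equivalent workdays
    cfte: dict physicist -> clinical full-time-equivalent fraction
    """
    adjusted = {p: ewd[p] / cfte[p] for p in ewd}   # FTE-adjusted workload
    total = sum(adjusted.values())
    return {p: adjusted[p] / total for p in adjusted}
```

With equal raw eWD, a half-time physicist carries twice the relative load, which is exactly the inequity the nTE metric is meant to surface.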
- Published
- 2021
- Full Text
- View/download PDF
17. Dynamic RAN Slicing for Service-Oriented Vehicular Networks via Constrained Learning.
- Author
-
Wu, Wen, Chen, Nan, Zhou, Conghao, Li, Mushu, Shen, Xuemin, Zhuang, Weihua, and Li, Xu
- Subjects
RADIO access networks, REINFORCEMENT learning, TRAFFIC density, RESOURCE allocation, ALGORITHMS, QUALITY of service - Abstract
In this paper, we investigate a radio access network (RAN) slicing problem for Internet of vehicles (IoV) services with different quality of service (QoS) requirements, in which multiple logically-isolated slices are constructed on a common roadside network infrastructure. A dynamic RAN slicing framework is presented to dynamically allocate radio spectrum and computing resource, and distribute computation workloads for the slices. To obtain an optimal RAN slicing policy for accommodating the spatial-temporal dynamics of vehicle traffic density, we first formulate a constrained RAN slicing problem with the objective to minimize long-term system cost. This problem cannot be directly solved by traditional reinforcement learning (RL) algorithms due to complicated coupled constraints among decisions. Therefore, we decouple the problem into a resource allocation subproblem and a workload distribution subproblem, and propose a two-layer constrained RL algorithm, named Resource Allocation and Workload diStribution (RAWS) to solve them. Specifically, an outer layer first makes the resource allocation decision via an RL algorithm, and then an inner layer makes the workload distribution decision via an optimization subroutine. Extensive trace-driven simulations show that the RAWS effectively reduces the system cost while satisfying QoS requirements with a high probability, as compared with benchmarks. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
18. A Data Partitioning Model for Highly Heterogeneous Systems
- Author
-
Tabik, S., Ortega, G., Garzón, E. M., Suárez, D., Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Desprez, Frédéric, editor, Dutot, Pierre-François, editor, Kaklamanis, Christos, editor, Marchal, Loris, editor, Molitorisz, Korbinian, editor, Ricci, Laura, editor, Scarano, Vittorio, editor, Vega-Rodríguez, Miguel A., editor, Varbanescu, Ana Lucia, editor, Hunold, Sascha, editor, Scott, Stephen L., editor, Lankes, Stefan, editor, and Weidendorfer, Josef, editor
- Published
- 2017
- Full Text
- View/download PDF
19. Bi-Objective Optimization of Data-Parallel Applications on Heterogeneous HPC Platforms for Performance and Energy Through Workload Distribution.
- Author
-
Khaleghzadeh, Hamidreza, Fahad, Muhammad, Shahid, Arsalan, Manumachu, Ravi Reddy, and Lastovetsky, Alexey
- Subjects
*FAST Fourier transforms, *MULTICORE processors, *MATRIX multiplications, *GLOBAL optimization, *MATHEMATICAL optimization, *ENERGY function, *HETEROGENEOUS computing - Abstract
Performance and energy are the two most important objectives for optimization on modern parallel platforms. In this article, we show that moving from single-objective optimization for performance or energy to their bi-objective optimization on heterogeneous processors results in a tremendous increase in the number of optimal solutions (workload distributions) even for the simple case of linear performance and energy profiles. We then study full performance and energy profiles of two real-life data-parallel applications and find that they exhibit shapes that are non-linear and complex enough to prevent good approximation of them as analytical functions for input to exact algorithms or optimization software for determining the Pareto front. We, therefore, propose a solution method solving the bi-objective optimization problem on heterogeneous processors. The method's novel component is an efficient and exact global optimization algorithm that takes as an input performance and energy profiles as arbitrary discrete functions of workload size, which accurately and realistically take into account resource contention and NUMA inherent in modern parallel platforms, and returns the Pareto-optimal solutions (generally speaking, load imbalanced). To construct the input discrete energy functions, the method employs a methodology that accurately models the energy consumption by a hybrid data-parallel application executing on a heterogeneous HPC platform containing different computing devices using system-level power measurements provided by power meters. We experimentally analyse the proposed solution method using three data-parallel applications, matrix multiplication, 2D fast Fourier transform (2D-FFT), and gene sequencing, on two connected heterogeneous servers consisting of multicore CPUs, GPUs, and Intel Xeon Phi. 
We show that it determines a superior Pareto front containing the best load balanced solutions and all the load imbalanced solutions that are ignored by load balancing methods. [ABSTRACT FROM AUTHOR]
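With performance and energy given as arbitrary discrete functions of workload size, the Pareto-optimal set can always be extracted by brute force. The sketch below shows that generic filter only (the paper's contribution is an efficient exact algorithm, which is not reproduced here):

```python
def pareto_front(points):
    """Return the Pareto-optimal subset of (time, energy) points when
    minimizing both objectives; brute force over a discrete profile.

    A point is dominated if some other point is no worse in both
    objectives and differs from it."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return sorted(front)
```

Each point here would stand for one candidate workload distribution, including the load-imbalanced ones the article argues load-balancing methods ignore.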
- Published
- 2021
- Full Text
- View/download PDF
20. The workload distribution of acute stroke CT imaging in a level three hospital in Ireland.
- Author
-
Mc Garvey, Caoimhe, Ruddy, Sarah, and O'Brien, Paul
- Abstract
Background: Interventions for acute ischaemic stroke require brain imaging. Computerised tomography (CT) scanning is the most common method used. In this study, the aim was to investigate the CT workload of acute stroke in an Irish level 3 hospital, seeing approximately 200 acute strokes per year. Method: A time frame for data collection, 17th of October 2017–17th of October 2018, was selected. Data were collected from ordering and viewing radiology systems and the Symphony Emergency Department (ED) system. Acute stroke CT brain scans were examined under numerous parameters including arrival time and time in CT scanner. Data were used to calculate 'time to CT' and to examine how this varied depending on the time of day. Scans were categorised into 5 time periods. All CT brains and other CT scans, after hours, in the same period were analysed. Results: Data were collected on 3739 CT brain scans, 215 of which were acute stroke scans. One hundred twenty-four acute stroke scans were performed after hours. Acute stroke scans accounted for 9.4% of all out-of-hour CT scans, rising to 14.8% Monday to Friday. Median time to CT in acute stroke patients: period 1, 00:30; period 2, 00:34; period 3, 00:49; period 4, 00:34; period 5, 00:39. Conclusion: Acute stroke imaging constitutes a relatively small portion of the out-of-hour CT workload. Due to the emergency status of these scans, providing an acute stroke radiology service requires radiology staff to operate with extremely short response times 24 h a day. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
21. Energy Cost Optimization of Globally Distributed Internet Data Centers by Copula-based Multidimensional Correlation Modeling
- Author
-
Mohammad Ali Lasemi, Shahin Alizadeh, Mohsen Assili, Zhenyu Yang, Payam Teimourzadeh Baboli, Ahmad Arabkoohsar, Amin Raeiszadeh, Michael Brand, and Sebastian Lehnhoff
- Subjects
General Energy, Electricity market, Quality of service, Energy storage system, Internet data center, Correlation modeling, Energy management, Workload distribution - Abstract
The high operating costs of Internet Data Centers (IDC) are a major challenge for their owners worldwide. Therefore, more attention has recently been paid to the energy and cost management of IDCs. This paper investigates the optimal operational strategy for minimizing the electricity costs of a group of globally distributed IDCs in different locations under various day-ahead electricity markets, each equipped with a high-performance energy storage system. To this end, optimal workload dispatching and optimal energy management of the storage units of all IDCs are simultaneously pursued in the proposed problem. The system is modeled regarding power balancing constraints, battery costs, and quality of service (QoS). For more practical results, a penalty function is also considered when QoS constraints are not perfectly met, and the impact of the batteries’ depth of discharge on the cost of energy storage is also modeled. Moreover, the cross-correlations between the traffic of IDCs are considered via the multidimensional copula function. The proposed energy cost optimization is linearized to increase the accuracy of convergence. The results show that not only is the power consumption pattern of the IDCs significantly improved, but the cost of power consumption is also reduced by 34%. The results also prove the positive effect of battery discharge on workload dispatch and represent a compromise between battery costs and electricity cost savings.
- Published
- 2023
- Full Text
- View/download PDF
22. Optimization‐based workload distribution in geographically distributed data centers: A survey.
- Author
-
Ahmad, Iftikhar, Khalil, Muhammad Imran Khan, and Shah, Syed Adeel Ali
- Subjects
*SERVER farms (Computer network management), *INTERNET service providers, *POWER resources, *ENERGY consumption, *ELECTRICITY pricing, *MATHEMATICAL optimization - Abstract
Summary: Energy efficiency is a contemporary and challenging issue in geographically distributed data centers. These data centers consume significantly high energy and cast a negative impact on the energy resources and environment. To minimize the energy cost and the environmental impacts, Internet service providers use different approaches such as geographical load balancing (GLB). GLB refers to the placement of data centers in diverse geolocations to exploit variations in electricity prices with the objective to minimize the total energy cost. GLB helps to minimize the overall energy cost, achieve quality of service, and maximize resource utilization in geo‐distributed data centers by employing optimal workload distribution and resource utilization in real time. In this paper, we summarize various optimization‐based workload distribution strategies and optimization techniques proposed in recent research works based on commonly used optimization factors such as workload type, load balancer, availability of renewable energy, energy storage, and data center server specification in geographically distributed data centers. The survey presents a systematized and novel taxonomy of workload distribution in data centers. Moreover, we also discuss various challenges and open research issues along with their possible solutions. [ABSTRACT FROM AUTHOR]
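The core GLB idea the survey covers, routing work toward sites with cheaper electricity, can be caricatured as a greedy capacity-constrained assignment. The site names, prices, and capacities below are hypothetical:

```python
def place_load(demand, sites):
    """Greedy geographical load balancing sketch: fill data centers in
    ascending order of electricity price, respecting each site's capacity.

    sites: list of (name, price_per_unit, capacity) tuples.
    Returns ({site: units assigned}, total electricity cost)."""
    plan, cost = {}, 0.0
    for name, price, cap in sorted(sites, key=lambda s: s[1]):
        take = min(demand, cap)
        if take > 0:
            plan[name] = take
            cost += take * price
            demand -= take
    if demand > 0:
        raise ValueError("insufficient total capacity")
    return plan, cost
```

Real GLB formulations in the surveyed work add the factors the abstract lists (QoS, renewables, storage, server specs), which turn this greedy split into a genuine optimization problem.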
- Published
- 2020
- Full Text
- View/download PDF
23. Workload Allocation in IoT-Fog-Cloud Architecture Using a Multi-Objective Genetic Algorithm.
- Author
-
Abbasi, Mahdi, Mohammadi Pasand, Ehsan, and Khosravi, Mohammad R.
- Abstract
With the rapid growth of Internet-of-Things (IoT) applications, data volumes have increased considerably. The processing resources of IoT nodes cannot cope with such huge workloads. Processing parts of the workload in clouds could solve this problem, but the quality of service for end-users would be decreased. To reduce latency for end-users, the concept of processing in fog devices, which are at the edge of the network, has evolved. Optimizing the energy consumption of fog devices in comparison with cloud devices is a significant challenge. On the other hand, providing the expected quality of service in processing the requested workloads is highly dependent on the propagation delay between fog devices and clouds, which, due to the nature of the distribution of clouds with different workloads, is highly variable. To date, none of the proposed solutions has solved the problem of workload allocation under the criteria of simultaneously minimizing the energy and delay of fog devices and clouds. This paper presents a processing model for the problem in which a trade-off between energy consumption and delay in processing workloads in fog is formulated. This multi-objective model of the problem is solved using the NSGA-II algorithm. The numerical results show that by using the proposed algorithm for workload allocation in a fog-cloud scenario, both energy consumption and delay can be improved. Also, by allocating 25% of the IoT workloads to fog devices, the energy consumption and delay are both minimized. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
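The energy/delay trade-off described in the entry above rests on Pareto dominance, the core building block of NSGA-II-style multi-objective optimization. The (energy, delay) points below are invented candidates, not results from the paper.

```python
# Pareto-dominance filter: a candidate workload split survives only if no
# other candidate is at least as good on both objectives and strictly
# better on one. Candidate points are illustrative assumptions.

def dominates(a, b):
    """a dominates b if a is no worse on both objectives and a != b."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (energy, delay) for hypothetical fog/cloud workload splits
candidates = [(10, 5), (8, 7), (12, 4), (9, 9), (8, 5)]
print(pareto_front(candidates))
# [(12, 4), (8, 5)]
```

NSGA-II layers non-dominated sorting and crowding distance on top of this check; the filter alone already shows which splits are worth keeping.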
24. Context-Aware and Reinforcement Learning-Based Load Balancing System for Green Clouds
- Author
-
Anghel, Ionut, Cioara, Tudor, Salomie, Ioan, Sammes, A.J., Series editor, Pop, Florin, editor, Kołodziej, Joanna, editor, and Di Martino, Beniamino, editor
- Published
- 2016
- Full Text
- View/download PDF
25. Vem ska göra vad? Uppdatering av arbetsfördelning : Förbättringsarbete på AB GotlandsHem
- Author
-
Lisper, Josefin, Aman Kylegård, Oscar, Lisper, Josefin, and Aman Kylegård, Oscar
- Abstract
Introduction: The case study has been conducted at AB GotlandsHem, the largest housing agency on the island of Gotland, Sweden. The study has two purposes. The first purpose is to map and examine the workload distribution in the inspection department by developing a generalizable template. The second purpose is to investigate how AB GotlandsHem can proceed with updating the workload distribution across all departments. Problem description: Interviews have shown that several departments at AB GotlandsHem are experiencing unclarity about workload distribution. The workload distribution is uneven, unclear, and unstructured, which leads to high workload. This can be seen in the long lead times at the inspection department. Theory: Several theories have been used to analyze the study's results: the cornerstone model, Kotter's eight-step model, workload distribution, and Lean. Method: The project was performed as a case study in which an interview, observations, a diary, and a review of existing documents were used. The collected data were analyzed using transcription, histograms, and bar charts. Results: The study's results demonstrate that the developed template is effective. They further show that the inspection department has significantly more actual hours than available hours, and that several work activities within the department are irrelevant for it to perform. AB GotlandsHem consists of multiple sub-processes, each with a process leader responsible for the processes in the company. Conclusion: The study reveals that most irrelevant activities can be moved to another process. If this relocation is implemented, the inspection department would reduce its actual hours and therefore decrease lead time, since its available hours would be sufficient.
Therefore, process leaders are responsible for reviewing the work activities within their processes, allowing the company to map the departments' irrelevant activities and relocate them where they are more relevant.
- Published
- 2023
26. Optimizing workload distribution in Fog-Cloud ecosystem: A JAYA based meta-heuristic for energy-efficient applications.
- Author
-
Singh, Satveer, Sham, Eht E., and Vidyarthi, Deo Prakash
- Subjects
ANT algorithms ,PARTICLE swarm optimization ,ENERGY consumption ,GENETIC algorithms ,METAHEURISTIC algorithms - Abstract
Fog-integrated Cloud has emerged as a novel computing paradigm that brings Cloud computing services to the network's edge in real time, though with limited capabilities. Despite its advantages, several challenges require attention, including workload distribution, energy consumption, computational time, and network latency. The workload of IoT applications can be distributed over Fog or Cloud devices based on priority, deadline, and latency restrictions. In this work, we introduce a novel population-based metaheuristic called MAYA, a modified variant of the JAYA algorithm, to address the Energy-Efficient Workload Distribution of Sensors (EEWDS) in the Fog-Cloud ecosystem. The workload distribution of IoT applications depends on several factors such as request deadlines, the energy consumed during transmission, and the needed computation. The performance of the proposed model with respect to energy consumption, computation time, CO2 emission, fairness index, and convergence rate is evaluated through simulation experiments. The results are compared in two scenarios: one concerning methodology, where performance is compared with the JAYA, Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Ant Colony Optimization (ACO) techniques; the other concerning the environment, where we examine Cloud-only, Fog-only, and Fog-Cloud integrated environments. Compared to JAYA, GA, PSO, and ACO, the proposed MAYA technique demonstrates significant improvements: reductions in energy consumption of 34.76%, 88.92%, 85.36%, and 93.84%; decreases in computation time of 37.64%, 85.07%, 87.22%, and 91.08%; decreases in CO2 emissions of 23.46%, 76.24%, 97.17%, and 99.02%; and increases in fairness index of 9.62%, 3.72%, 16.90%, and 15.26%, respectively. • An IoT workload distribution method has been developed, which aims to minimize energy consumption in uploading, downloading, and computation.
• A new technique, MAYA, a modified version of the JAYA metaheuristic, has been introduced. • In addition, the CO2 emission associated with the proposed MAYA technique in the Fog-integrated Cloud environment is also measured. • Furthermore, computation time, running time, convergence rate, and fairness index have also been studied. • A comparative study has been performed in two classic scenarios: methodology-based and environment-based. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
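The MAYA variant itself is not specified in the abstract above; as background, here is a one-dimensional sketch of the classic JAYA update it modifies: each candidate moves toward the best solution and away from the worst, with no algorithm-specific tuning parameters. The toy objective and greedy-acceptance rule are assumptions for illustration.

```python
import random

# One-dimensional sketch of the classic JAYA update rule:
#   x' = x + r1 * (best - |x|) - r2 * (worst - |x|)
# A move is kept only if it improves the (toy) objective.

def jaya_step(population, cost, rng=random.random):
    best = min(population, key=cost)
    worst = max(population, key=cost)
    new_pop = []
    for x in population:
        r1, r2 = rng(), rng()
        cand = x + r1 * (best - abs(x)) - r2 * (worst - abs(x))
        # Greedy acceptance: a candidate's cost never increases.
        new_pop.append(cand if cost(cand) < cost(x) else x)
    return new_pop

cost = lambda x: (x - 3.0) ** 2   # toy objective with minimum at x = 3
pop = [0.0, 1.0, 5.0, 9.0]
for _ in range(200):
    pop = jaya_step(pop, cost)
print(min(pop, key=cost))  # best candidate after 200 greedy JAYA steps
```

MAYA, per the abstract, modifies this scheme for the multi-factor EEWDS problem; the sketch only shows the parameter-free move that distinguishes the JAYA family from GA, PSO, and ACO.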
27. Towards the Transparent Execution of Compound OpenCL Computations in Multi-CPU/Multi-GPU Environments
- Author
-
Soldado, Fábio, Alexandre, Fernando, Paulino, Hervé, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Kobsa, Alfred, Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Lopes, Luís, editor, Žilinskas, Julius, editor, Costan, Alexandru, editor, Cascella, Roberto G., editor, Kecskemeti, Gabor, editor, Jeannot, Emmanuel, editor, Cannataro, Mario, editor, Ricci, Laura, editor, Benkner, Siegfried, editor, Petit, Salvador, editor, Scarano, Vittorio, editor, Gracia, José, editor, Hunold, Sascha, editor, Scott, Stephen L., editor, Lankes, Stefan, editor, Lengauer, Christian, editor, Carretero, Jesus, editor, Breitbart, Jens, editor, and Alexander, Michael, editor
- Published
- 2014
- Full Text
- View/download PDF
28. Near Optimal Work-Stealing Tree Scheduler for Highly Irregular Data-Parallel Workloads
- Author
-
Prokopec, Aleksandar, Odersky, Martin, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Kobsa, Alfred, Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Cașcaval, Călin, editor, and Montesinos, Pablo, editor
- Published
- 2014
- Full Text
- View/download PDF
29. Equitable Workload and the Perceptions of Academic Staff in Universities.
- Author
-
Rao Muramalla, Venkata Sai Srinivasa and Alotaibi, Khalid Abdullah
- Subjects
UNIVERSITY faculty ,UNIVERSITY & college administration ,CLASS size ,TEACHERS' workload ,CONTRACT employment - Abstract
Workload is the overall set of assignments to be completed by individuals in a given time. In academia, the number of instructional hours, credit hours, contact hours, class sizes, non-instructional schedules, the student-teacher ratio, scholarly activities, and administrative and community services determine a faculty member's workload in a semester. Any discrimination, favoritism, partiality, or managerial bias in the distribution of workloads can lead to misconceptions among academic staff that affect the work culture of educational institutions. In this context, this paper examines the perceptions of 256 academic staff, chosen by stratified random sampling from ten universities in Saudi Arabia, using a questionnaire on universities' general practices for allocating equitable workload across three variables: teaching, research, and academic administration. The results revealed that the academic staff responded positively to all practices in the three variables. The paper addresses which significant differences and correlations exist in academic staff perceptions of equitable workload distribution across the three variables. First, academic staff grouped by gender, nationality, type of university, and tenure showed significant perceptional differences in the three variables. In Saudi Arabia, foreign staff work on a contract basis; accordingly, the study revealed that foreign staff members place more emphasis on teaching than on research and administration. This result held for faculty members in science, arts, and other colleges. The study further found a relationship among the three types of faculty departments/disciplines regarding teaching and administrative work, but no relation between research and administrative work.
In conclusion, the authors recommend some practical suggestions for equitable workload among academic staff. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
30. What task should be assigned to whom? Updating workload distribution
- Author
-
Lisper, Josefin and Aman Kylegård, Oscar
- Subjects
customer focus ,besiktning ,processförbättring ,arbetsfördelning ,ledtider ,lead times ,kundfokus ,Civil Engineering ,Samhällsbyggnadsteknik ,process improvement ,cornerstone model ,workload distribution ,hörnstensmodellen ,inspection
Introduction: The case study has been conducted at AB GotlandsHem, the largest housing agency on the island of Gotland, Sweden. The study has two purposes. The first purpose is to map and examine the workload distribution in the inspection department by developing a generalizable template. The second purpose is to investigate how AB GotlandsHem can proceed with updating the workload distribution across all departments. Problem description: Interviews have shown that several departments at AB GotlandsHem are experiencing unclarity about workload distribution. The workload distribution is uneven, unclear, and unstructured, which leads to high workload. This can be seen in the long lead times at the inspection department. Theory: Several theories have been used to analyze the study's results: the cornerstone model, Kotter's eight-step model, workload distribution, and Lean. Method: The project was performed as a case study in which an interview, observations, a diary, and a review of existing documents were used. The collected data were analyzed using transcription, histograms, and bar charts. Results: The study's results demonstrate that the developed template is effective. They further show that the inspection department has significantly more actual hours than available hours, and that several work activities within the department are irrelevant for it to perform. AB GotlandsHem consists of multiple sub-processes, each with a process leader responsible for the processes in the company. Conclusion: The study reveals that most irrelevant activities can be moved to another process. If this relocation is implemented, the inspection department would reduce its actual hours and therefore decrease lead time, since its available hours would be sufficient. Therefore, process leaders are responsible for reviewing the work activities within their processes, allowing the company to map the departments' irrelevant activities and relocate them where they are more relevant.
Key words: cornerstone model, customer focus, inspection, lead times, process improvement, workload distribution (Uppsala universitet)
- Published
- 2023
31. RAC Operational Practices : by Riyaj Shamsudeen
- Author
-
Hussain, Syed Jaffar, Farooq, Tariq, Shamsudeen, Riyaj, Yu, Kai, Hussain, Syed Jaffar, Farooq, Tariq, Shamsudeen, Riyaj, and Yu, Kai
- Published
- 2013
- Full Text
- View/download PDF
32. Efficient Workload Distribution Bridging HTC and HPC in Scientific Computing
- Author
-
Manuali, Carlo, Costantini, Alessandro, Laganà, Antonio, Cecchi, Marco, Ghiselli, Antonia, Carpené, Michele, Rossi, Elda, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Murgante, Beniamino, editor, Gervasi, Osvaldo, editor, Misra, Sanjay, editor, Nedjah, Nadia, editor, Rocha, Ana Maria A. C., editor, Taniar, David, editor, and Apduhan, Bernady O., editor
- Published
- 2012
- Full Text
- View/download PDF
33. A system for equitable workload distribution in clinical medical physics
- Author
-
Eric C. Ford, Juergen Meyer, Minsun Kim, Sarah Geneser, Stephen R. Bowen, and Wade P. Smith
- Subjects
leadership ,medicine.medical_specialty ,Radiation ,Management and Profession ,Equity (finance) ,Workload ,Task (project management) ,Unit (housing) ,equity ,Surveys and Questionnaires ,Transparency (graphic) ,Accountability ,Radiation Oncology ,medicine ,workload distribution ,Humans ,Radiology, Nuclear Medicine and imaging ,Fraction (mathematics) ,Medical physics ,Metric (unit) ,Instrumentation ,Health Physics - Abstract
Background Clinical medical physics duties include routine tasks, special procedures, and development projects. It can be challenging to distribute the effort equitably across all team members, especially in large clinics or systems where physicists cover multiple sites. The purpose of this work is to study an equitable workload distribution system in radiotherapy physics that addresses the complex and dynamic nature of effort assignment. Methods We formed a working group that defined all relevant clinical tasks and estimated the total time spent per task. Estimates used data from the oncology information system, a survey of physicists, and group consensus. We introduced a quantitative workload unit, “equivalent workday” (eWD), as a common unit for effort. The sum of all eWD values adjusted for each physicist's clinical full‐time equivalent yields a “normalized total effort” (nTE) metric for each physicist, that is, the fraction of the total effort assigned to that physicist. We implemented this system in clinical operation. During a trial period of 9 months, we made adjustments to include tasks previously unaccounted for and refined the system. The workload distribution of eight physicists over 12 months was compared before and after implementation of the nTE system. Results Prior to implementation, differences in workload of up to 50% existed between individual physicists (nTE range of 10.0%–15.0%). During the trial period, additional categories were added to account for leave and clinical projects that had previously been assigned informally. In the 1‐year period after implementation, the individual workload differences were within 5% (nTE range of 12.3%–12.8%). Conclusion We developed a system to equitably distribute workload and demonstrated improvements in the equity of workload. A quantitative approach to workload distribution improves both transparency and accountability. 
While the system was motivated by the complexities within an academic medical center, it may be generally applicable for other clinics.
- Published
- 2021
- Full Text
- View/download PDF
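The "normalized total effort" (nTE) metric described in the entry above can be sketched as follows. The exact FTE adjustment in the paper may differ, so dividing each physicist's eWD sum by clinical FTE is an assumption here, and the staff names and task values are invented.

```python
# Sketch of the nTE idea: sum each physicist's task workload in equivalent
# workdays (eWD), adjust for clinical FTE (assumed: divide by FTE), and
# express it as a fraction of the team total. All numbers are invented.

def normalized_total_effort(assignments, fte):
    """assignments: {physicist: [eWD per task]}, fte: {physicist: clinical FTE}."""
    adjusted = {p: sum(tasks) / fte[p] for p, tasks in assignments.items()}
    total = sum(adjusted.values())
    return {p: v / total for p, v in adjusted.items()}

assignments = {"A": [10, 5], "B": [12], "C": [6, 6]}
fte = {"A": 1.0, "B": 0.8, "C": 1.0}
nte = normalized_total_effort(assignments, fte)
print({p: round(v, 3) for p, v in nte.items()})
# {'A': 0.357, 'B': 0.357, 'C': 0.286}
```

A flat nTE distribution (here A and B match despite different raw hours, because B works 0.8 FTE) is exactly the equity condition the paper reports tightening from a 50% spread to within 5%.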
34. An Approach for Processing Large and Non-uniform Media Objects on MapReduce-Based Clusters
- Author
-
Schmidt, Rainer, Rella, Matthias, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Xing, Chunxiao, editor, Crestani, Fabio, editor, and Rauber, Andreas, editor
- Published
- 2011
- Full Text
- View/download PDF
35. Finding a Balance: Reconciling the Needs of the Institution, Patient, and Genetic Counselor for Optimal Resource Utilization.
- Author
-
Patel, Devanshi, Blouch, Erica L., Rodgers-Fouché, Linda H., Emmet, Margaret M., and Shannon, Kristen M.
- Abstract
The current practice of cancer genetic counseling is undergoing widespread change and scrutiny. While there are clinical resources for genetic counselors (GCs) regarding the delivery of cancer genetic services, there is limited literature regarding effective management of a genetic counseling clinical program. We have developed administrative tools to manage a large team of GCs at a single academic medical center over a period of increasing demand for genetics services, with the initial aim of decreasing wait time for urgent genetic counseling visits. Here, we describe the three main elements of the clinical operations: Balancing patient volume between GCs, scheduling tracks for both routine and urgent appointments, and a team of triaging GCs to ensure appropriate patient referrals. For each of these elements, we describe how they have been modified over time and present data to support the utility of these strategies. The preliminary evidence offered here suggests that these tools allow for an equitable distribution of patient volume between team members, as well as the timely and accurate scheduling of urgent patients. As a result of the experiences presented here, other genetic counseling programs grappling with similar issues should be aware that it is possible to shift clinical operations to serve certain patient populations in a more timely fashion while keeping both providers and GC staff satisfied. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
36. Workload Equity in Vehicle Routing Problems: A Survey and Analysis.
- Author
-
Matl, P., Hartl, R. F., and Vidal, T.
- Subjects
-
WORKLOAD of computers, VEHICLE routing problem, TRAVELING salesman problem, AXIOMATIC set theory, LOGISTICS management - Abstract
Over the past two decades, equity aspects have been considered in a growing number of models and methods for vehicle routing problems (VRPs). Equity concerns most often relate to fairly allocating workloads and to balancing the utilization of resources, and many practical applications have been reported in the literature. However, there has been only limited discussion about how workload equity should be modeled in the context of VRPs, and various measures for optimizing such objectives have been proposed and implemented without a critical evaluation of their respective merits and consequences. This article addresses this gap by providing an analysis of classical and alternative equity functions for biobjective VRP models. In our survey, we review and categorize the existing literature on equitable VRPs. In the analysis, we identify a set of axiomatic properties that an ideal equity measure should satisfy, collect six common measures of equity, and point out important connections between their properties and the properties of the resulting Pareto-optimal solutions. To gauge the extent of these implications, we also conduct a numerical study on small biobjective VRP instances solvable to optimality. Our study reveals two undesirable consequences when optimizing equity with nonmonotonic functions: Pareto-optimal solutions can consist of non-TSP-optimal tours, and even if all tours are TSP optimal, Pareto-optimal solutions can be workload inconsistent, i.e., composed of tours whose workloads are all equal to or longer than those of other Pareto-optimal solutions. We show that the extent of these phenomena should not be underestimated. The results of our biobjective analysis also remain valid for weighted-sum, constraint-based, or single-objective models. Based on this analysis, we conclude that monotonic equity functions are more appropriate for certain types of VRP models, and suggest promising avenues for further research on equity in logistics. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
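The abstract's point about nonmonotonic equity measures can be illustrated directly: the range (max minus min) can prefer a solution in which every tour is longer, whereas a monotonic measure such as the maximum workload does not. The tour lengths below are invented examples, not instances from the paper's numerical study.

```python
# Two equity measures over a set of tour workloads. The range is
# nonmonotonic: lengthening every tour equally can "improve" it.

def wl_range(tours):   # nonmonotonic equity measure
    return max(tours) - min(tours)

def wl_max(tours):     # monotonic equity measure
    return max(tours)

balanced_but_longer = [10, 10, 10]   # perfectly even, but every tour is long
uneven_but_shorter = [6, 7, 8]       # every tour strictly shorter, slightly uneven

print(wl_range(balanced_but_longer), wl_range(uneven_but_shorter))  # 0 2
print(wl_max(balanced_but_longer), wl_max(uneven_but_shorter))      # 10 8
```

Minimizing the range picks the first solution even though the second dominates it tour-by-tour; minimizing the maximum picks the second. This is the workload-inconsistency phenomenon the survey warns about.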
37. Probabilistic Network Loads with Dependencies and the Effect on Queue Sojourn Times
- Author
-
Ivers, Matthias, Ernst, Rolf, Akan, Ozgur, Series editor, Bellavista, Paolo, Series editor, Cao, Jiannong, Series editor, Dressler, Falko, Series editor, Ferrari, Domenico, Series editor, Gerla, Mario, Series editor, Kobayashi, Hisashi, Series editor, Palazzo, Sergio, Series editor, Sahni, Sartaj, Series editor, Shen, Xuemin (Sherman), Series editor, Stan, Mircea, Series editor, Xiaohua, Jia, Series editor, Zomaya, Albert, Series editor, Coulson, Geoffrey, Series editor, Bartolini, Novella, editor, Nikoletseas, Sotiris, editor, Sinha, Prasun, editor, Cardellini, Valeria, editor, and Mahanti, Anirban, editor
- Published
- 2009
- Full Text
- View/download PDF
38. A Performance-Based Approach to Dynamic Workload Distribution for Master-Slave Applications on Grid Environments
- Author
-
Shih, Wen-Chung, Yang, Chao-Tung, Tseng, Shian-Shyong, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Dough, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Chung, Yeh-Ching, editor, and Moreira, José E., editor
- Published
- 2006
- Full Text
- View/download PDF
39. Workload Management
- Author
-
Dyke, Julian and Shaw, Steve
- Published
- 2006
- Full Text
- View/download PDF
40. A Survey of Load Balancing in Grid Computing
- Author
-
Li, Yawei, Lan, Zhiling, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Dough, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Zhang, Jun, editor, He, Ji-Huan, editor, and Fu, Yuxi, editor
- Published
- 2005
- Full Text
- View/download PDF
41. Performance, Energy and Temperature Considerations for Job Scheduling and for Workload Distribution in Heterogeneous Systems
- Author
-
Alsubaihi, Shouq
- Subjects
Computer engineering ,Energy Consumption ,Heterogeneous Systems ,Peak Power ,Performance ,Scheduling ,Workload Distribution - Abstract
Many systems today are heterogeneous in that they consist of a mix of different types of processing units (e.g., CPUs, GPUs). Each of these processing units has different performance and energy consumption characteristics. Job scheduling and workload distribution play a crucial role in such systems as they strongly affect the system's performance, energy consumption, peak power, and peak temperature. The scheduler maps entire jobs to processing units, whereas the workload distributor maps parts of a job. Allocating resources (e.g., core scaling, thread allocation) is another challenge, since different sets of resources exhibit different behavior in terms of performance and energy. Performance was the dominant factor in job scheduling and workload distribution for years. As processor design has hit the power wall, energy consumption has also become important. Many studies have been conducted on scheduling and workload distribution with an eye on performance improvement. However, few of them consider both performance and energy. We propose a Performance, Energy and Thermal aware Resource Allocator and Scheduler (PETRAS), which includes core scaling and thread allocation. Since job scheduling is known to be an NP-hard problem, we apply a Genetic Algorithm (GA) to find an efficient job schedule in terms of performance and energy consumption, under peak power and peak CPU temperature constraints. Compared to other schedulers, PETRAS achieves up to 4.7x speedup and energy saving of up to 195%. The classic workload distribution does not fully utilize the CPUs and the GPUs. It maps the sequential parts of a job to the CPU and the parallel parts to the GPU. We thus propose a Workload Distributor with a Resource Allocator (WDRA), which combines core scaling and thread allocation into a workload distributor.
Since workload distribution is known to be an NP-hard problem, WDRA utilizes Particle Swarm Optimization (PSO) to find an efficient workload distribution in terms of performance and energy consumption, under peak power and peak CPU temperature constraints. Compared to other workload distributors, WDRA can achieve up to 1.47x speedup and 82% reduction of energy consumption. WDRA is a well-suited runtime distributor since it only takes up to 1.7% of the job’s execution time.
- Published
- 2017
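As a sketch of the PSO-based distribution idea behind WDRA in the entry above (the dissertation's actual cost model is not given here), the toy example below searches over the fraction of work offloaded to a GPU under an invented weighted time-plus-energy cost. All constants, weights, and the cost model itself are assumptions.

```python
import random

# Toy PSO over one decision variable: the fraction of work sent to a fast,
# power-hungry GPU versus a slow, frugal CPU. Cost model is invented.

def cost(frac):
    time = max(frac / 4.0, (1 - frac) / 1.0)   # makespan: slower side finishes last
    energy = frac * 3.0 + (1 - frac) * 1.0     # GPU draws 3x the CPU's power here
    return 0.7 * time + 0.3 * energy           # analytic optimum at frac = 0.8

def pso(n=20, iters=100, w=0.5, c1=1.5, c2=1.5, rng=None):
    rng = rng or random.Random(42)
    xs = [rng.random() for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]
    gbest = min(xs, key=cost)
    for _ in range(iters):
        for i in range(n):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(1.0, max(0.0, xs[i] + vs[i]))   # clamp to a valid fraction
            if cost(xs[i]) < cost(pbest[i]):
                pbest[i] = xs[i]
            if cost(xs[i]) < cost(gbest):
                gbest = xs[i]
    return gbest

print(round(pso(), 3))  # best GPU fraction found by the swarm
```

For this cost model the minimum sits at the fraction where CPU and GPU finish simultaneously; WDRA's real search additionally handles core scaling, thread allocation, and peak power/temperature constraints.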
42. A Hash-Based Collaborative Transcoding Proxy System
- Author
-
Wu, Xiu, Tan, Kian-Lee, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Zhou, Xiaofang, editor, Orlowska, Maria E., editor, and Zhang, Yanchun, editor
- Published
- 2003
- Full Text
- View/download PDF
43. Mapping Unstructured Applications into Nested Parallelism Best Student Paper Award: First Prize
- Author
-
González-Escribano1, Arturo, van Gemund, Arjan J.C., Cardeñoso-Payo, Valentín, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Palma, José M. L. M., editor, Sousa, A. Augusto, editor, Dongarra, Jack, editor, and Hernández, Vicente, editor
- Published
- 2003
- Full Text
- View/download PDF
44. Performance Prediction of Data-Dependent Task Parallel Programs
- Author
-
Gautama, Hasyim, van Gemund, Arjan J. C., Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Sakellariou, Rizos, editor, Gurd, John, editor, Freeman, Len, editor, and Keane, John, editor
- Published
- 2001
- Full Text
- View/download PDF
45. Realisations for Strict Languages
- Author
-
Kluge, Werner, Hammond, Kevin, editor, and Michaelson, Greg, editor
- Published
- 1999
- Full Text
- View/download PDF
46. QoS-Aware Workload Distribution in Hierarchical Edge Clouds: A Reinforcement Learning Approach
- Author
-
Seung Hyun Yoon, Hongseok Jeon, Seungjae Shin, and Chunglae Cho
- Subjects
General Computer Science ,Computer science ,Distributed computing ,resource allocation ,Cloud computing ,02 engineering and technology ,edge computing ,0202 electrical engineering, electronic engineering, information engineering ,Reinforcement learning ,General Materials Science ,Resource management ,Edge computing ,Deep reinforcement learning ,business.industry ,Quality of service ,General Engineering ,020206 networking & telecommunications ,Workload ,Token bucket ,020202 computer hardware & architecture ,Resource allocation ,workload distribution ,Enhanced Data Rates for GSM Evolution ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,business ,lcsh:TK1-9971 - Abstract
Recently, edge computing has been gaining attention as a new computing paradigm expected to achieve short-delay, high-throughput task offloading for large-scale Internet-of-Things (IoT) applications. In edge computing, workload distribution is one of the most critical issues, as it largely influences the delay and throughput performance of edge clouds, especially in distributed Function-as-a-Service (FaaS) over networked edge nodes. In this paper, we propose the Resource Allocation Control Engine with Reinforcement learning (RACER), which provides an efficient workload distribution strategy to reduce the task response slowdown under per-task response time Quality-of-Service (QoS). First, we present a novel problem formulation with the per-task QoS constraint derived from the well-known token bucket mechanism. Second, we employ a problem relaxation to reduce the overall computation complexity by compromising just a bit of optimality. Lastly, we take a deep reinforcement learning approach as an alternative solution to the workload distribution problem to cope with the uncertainty and dynamicity of underlying environments. Evaluation results show that RACER achieves a significant improvement in terms of per-task QoS violation ratio, average slowdown, and control efficiency, compared to AREA, a state-of-the-art workload distribution method.
- Published
- 2020
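Since the entry above derives its per-task QoS constraint from the token bucket mechanism, here is a minimal token bucket sketch: a task is admitted only if a token is available, and tokens refill at a fixed rate up to a burst cap. The rate and capacity values are illustrative, not parameters from the paper.

```python
# Minimal token bucket: tokens accrue at `rate` per second up to `capacity`;
# each admitted task consumes one token. Values below are invented examples.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # start full
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

tb = TokenBucket(rate=2.0, capacity=3)
arrivals = [0.0, 0.1, 0.2, 0.3, 2.0]
print([tb.allow(t) for t in arrivals])
# [True, True, True, False, True]
```

The fourth arrival is rejected because the burst of three drained the bucket faster than the 2-tokens-per-second refill; by t = 2.0 the bucket has refilled. A per-task QoS constraint built on this shape bounds how much burst a task class may impose on an edge node.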
47. Introduction
- Author
-
Chengzhong, Xu and Lau, Francis C. M.
- Published
- 1997
- Full Text
- View/download PDF
48. Karhünen-Loève transform: An exercise in simple image-processing parallel pipelines
- Author
-
Fleury, Martin, Downton, Andy C., Clark, Adrian F., Lengauer, Christian, editor, Griebl, Martin, editor, and Gorlatch, Sergei, editor
- Published
- 1997
- Full Text
- View/download PDF
49. Adaptive Distributed Simulation for Computationally Intensive Modelling
- Author
-
Shum, Kam Hong, Merabti, Madjid, editor, Carew, Michael, editor, and Ball, Frank, editor
- Published
- 1996
- Full Text
- View/download PDF
50. Experience with the implementation of a concurrent graph reduction system on an nCUBE/2 platform
- Author
-
Bülck, Torsten, Held, Achim, Kluge, Werner, Pantke, Stefan, Rathsack, Carsten, Scholz, Sven-Bodo, Schröder, Raimund, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Buchberger, Bruno, editor, and Volkert, Jens, editor
- Published
- 1994
- Full Text
- View/download PDF