178 results for '"data allocation"'
Search Results
2. Optimization of data allocation in hierarchical memory for blocked shortest paths algorithms
- Author
-
A. A. Prihozhy
- Subjects
shortest paths algorithm ,hierarchical memory ,direct mapped cache ,performance ,block conflict graph ,data allocation ,equitable coloring ,defective coloring ,Information technology ,T58.5-58.64 - Abstract
This paper is devoted to the reduction of data transfer between the main memory and a direct mapped cache for blocked shortest paths algorithms (BSPA), which represent data by a D[M×M] matrix of blocks. For large graphs, the cache size S = δ×M², δ < 1, is smaller than the matrix size. The cache assigns a group of main memory blocks to a single cache block. BSPA performs multiple recalculations of a block over one or two other blocks and may access up to three blocks simultaneously. If the blocks are assigned to the same cache block, conflicts occur among them, which imply active transfer of data between memory levels. The distribution of blocks over groups and the block conflict count strongly depend on the allocation and ordering of the matrix blocks in main memory. To solve the problem of optimal block allocation, the paper introduces a block conflict weighted graph and recognizes two cases of block mapping: non-conflict and minimum-conflict. In the first case, it formulates an equitable color-class-size constrained coloring problem on the conflict graph and solves it by developing deterministic and random algorithms. In the second case, the paper formulates a problem of weighted defective color-count constrained coloring of the conflict graph and solves it by developing a random algorithm. Experimental results show that the equitable random algorithm provides an upper bound of the cache size that is very close to the lower bound estimated over the size of a complete subgraph, and that a non-conflict matrix allocation is possible at δ = 0.5 for M = 4 and at δ = 0.1 for M = 20. For a low cache size, the weighted defective algorithm reduces the number of remaining conflicts by a factor of up to 8.8 compared with the original BSPA. The proposed model and algorithms are applicable to set-associative caches as well. [An illustrative coloring sketch follows this entry.]
- Published
- 2021
- Full Text
- View/download PDF
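A minimal sketch (not the authors' implementation) of the non-conflict case described above: blocks that may be accessed together are connected in a conflict graph, and cache groups are assigned like colors so that conflicting blocks never share a group while color-class sizes stay balanced. The `conflicts` edge list and the block/color counts below are invented for illustration.

```python
from collections import defaultdict

def equitable_coloring(num_blocks, conflicts, num_colors):
    """Greedy equitable coloring sketch: adjacent (conflicting) blocks get
    different colors (cache groups) and class sizes stay as even as possible."""
    adj = defaultdict(set)
    for u, v in conflicts:
        adj[u].add(v)
        adj[v].add(u)
    # visit high-degree blocks first, a common greedy heuristic
    order = sorted(range(num_blocks), key=lambda b: len(adj[b]), reverse=True)
    color_of, class_size = {}, [0] * num_colors
    for b in order:
        forbidden = {color_of[n] for n in adj[b] if n in color_of}
        allowed = [c for c in range(num_colors) if c not in forbidden]
        if not allowed:
            return None  # too few cache groups for a non-conflict mapping
        c = min(allowed, key=lambda c: class_size[c])  # keep classes balanced
        color_of[b] = c
        class_size[c] += 1
    return color_of

# toy example: 4 blocks, block 0 conflicts with blocks 1 and 2
print(equitable_coloring(4, [(0, 1), (0, 2)], num_colors=2))
```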
3. Recommender systems and market approaches for industrial data management
- Author
-
Jess, Torben and McFarlane, Duncan
- Subjects
025.04 ,Data allocation ,Data management ,Recommender systems ,Market-based algorithms ,Data overload - Abstract
Industrial companies are dealing with an increasing data overload problem in all aspects of their business: vast amounts of data are generated in and outside each company. Determining which data is relevant and how to get it to the right users is becoming increasingly difficult. There are a large number of datasets to be considered, and an even higher number of combinations of datasets that each user could be using. Current techniques to address this data overload problem necessitate detailed analysis. These techniques have limited scalability due to their manual effort and their complexity, which makes them impractical for a large number of datasets. Search, the alternative used by many users, is limited by the user’s knowledge about the available data and does not consider the relevance or costs of providing these datasets. Recommender systems and so-called market approaches have previously been used to solve this type of resource allocation problem, as shown, for example, in the allocation of equipment for production processes in manufacturing or in spare part supplier selection. They can therefore also be applied to the problem of data overload. This thesis introduces the so-called RecorDa approach: an architecture using market approaches and recommender systems on their own or by combining them into one system. Its purpose is to identify which data is more relevant for a user’s decision and to improve the allocation of relevant data to users. Using a combination of case studies and experiments, this thesis develops and tests the approach. It further compares RecorDa to search and other mechanisms. The results indicate that RecorDa can provide significant benefit to users, with easier and more flexible access to relevant datasets compared to other techniques, such as search in these databases. It is able to provide a fast increase in precision and recall of relevant datasets while still keeping high novelty and coverage of a large variety of datasets.
- Published
- 2017
- Full Text
- View/download PDF
4. Data, User and Power Allocations for Caching in Multi-Access Edge Computing.
- Author
-
Xia, Xiaoyu, Chen, Feifei, He, Qiang, Cui, Guangming, Grundy, John C., Abdelrazek, Mohamed, Xu, Xiaolong, and Jin, Hai
- Subjects
EDGE computing ,NASH equilibrium ,INFORMATION retrieval ,ALLOCATION (Accounting) ,VENDOR-managed inventory - Abstract
In the multi-access edge computing (MEC) environment, app vendors’ data can be cached on edge servers to ensure low-latency data retrieval. Massive users can simultaneously access edge servers with high data rates through flexible allocations of transmit power. The ability to manage networking resources offers unique opportunities to app vendors but also raises unprecedented challenges. To ensure fast data retrieval for users in the MEC environment, edge data caching must take into account the allocations of data, users, and transmit power jointly. We make the first attempt to study the Data, User, and Power Allocation (DUPA³) problem, aiming to serve the most users and maximize their overall data rate. First, we formulate the DUPA³ problem and prove its NP-completeness. Then, we model the DUPA³ problem as a potential DUPA³ game admitting at least one Nash equilibrium and propose a two-phase game-theoretic decentralized algorithm named DUPA³Game to achieve the Nash equilibrium as the solution to the DUPA³ problem. To evaluate DUPA³Game, we analyze its theoretical performance and conduct extensive experiments to demonstrate its effectiveness and efficiency. [ABSTRACT FROM AUTHOR] [A best-response sketch follows this entry.]
- Published
- 2022
- Full Text
- View/download PDF
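The abstract describes a potential game solved by a decentralized algorithm that reaches a Nash equilibrium. The sketch below is a generic best-response iteration for a finite potential game, not the DUPA³Game algorithm itself; `players`, `strategies`, and `utility` are placeholders the caller must supply.

```python
def best_response_dynamics(players, strategies, utility, max_rounds=100):
    """Generic best-response iteration: in a finite potential game this
    converges to a pure Nash equilibrium. utility(p, profile) is a
    placeholder for player p's payoff under the joint strategy profile."""
    profile = {p: strategies[p][0] for p in players}  # arbitrary starting profile
    for _ in range(max_rounds):
        changed = False
        for p in players:
            best = max(strategies[p], key=lambda s: utility(p, {**profile, p: s}))
            if utility(p, {**profile, p: best}) > utility(p, profile):
                profile[p] = best
                changed = True
        if not changed:          # no player can improve: Nash equilibrium reached
            return profile
    return profile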
5. On K-means clustering-based approach for DDBSs design
- Author
-
Ali A. Amer
- Subjects
DDBS ,Data allocation ,Data replication ,Query clustering ,K-means algorithm ,Computer engineering. Computer hardware ,TK7885-7895 ,Information technology ,T58.5-58.64 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
In Distributed Database Systems (DDBS), communication costs and response time have long been open-ended challenges. Nevertheless, when a DDBS is carefully designed, the desired reduction in communication costs can be achieved. Data fragmentation (data clustering) and data allocation remain the prime strategies in constant use to design DDBS. Several design techniques based on these strategies have been presented in the literature to improve DDBS performance using either empirical results or data statistics, but most of them are imperfect or invalid, at least at the initial stage of DDBS design. In this paper, therefore, a heuristic k-means approach for vertical fragmentation and allocation is introduced. This approach focuses primarily on DDBS design at the initial stage and joins several techniques into a single promising workflow. A brief yet effective experimental study, on both artificially created and real datasets, has been conducted to demonstrate the optimality of the proposed approach compared with its counterparts, and the obtained results are encouraging. [A k-means fragmentation sketch follows this entry.]
- Published
- 2020
- Full Text
- View/download PDF
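A hedged illustration of the core idea (clustering attributes by query usage with k-means to form vertical fragments): the attribute-usage matrix, the number of fragments, and the use of scikit-learn are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows = attributes, columns = queries; a cell is 1 if the query uses the attribute.
# This usage matrix is a hypothetical stand-in for the paper's query statistics.
usage = np.array([
    [1, 1, 0, 0],   # attribute A1 used by Q1, Q2
    [1, 1, 0, 0],   # attribute A2
    [0, 0, 1, 1],   # attribute A3 used by Q3, Q4
    [0, 1, 1, 1],   # attribute A4
])

k = 2  # number of vertical fragments (assumed)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(usage)

fragments = {f: [i for i, lab in enumerate(labels) if lab == f] for f in range(k)}
print(fragments)  # attributes grouped into vertical fragments, e.g. {0: [0, 1], 1: [2, 3]}
```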
6. Dynamic Data Allocation and Task Scheduling on Multiprocessor Systems With NVM-Based SPM
- Author
-
Yan Wang, Kenli Li, and Keqin Li
- Subjects
Data allocation ,endurance ,execution cost ,nonvolatile memory ,wear-leveling ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Low-power and short-latency memory access is critical to the performance of chip multiprocessor (CMP) system devices, especially to bridge the performance gap between memory and CPU. Together with increased demand for low-energy consumption and high-speed memory, scratch-pad memory (SPM) has been widely adopted in multiprocessor systems. In this paper, we employ a hybrid SPM, composed of a static random-access memory and a nonvolatile memory (NVM), to replace the cache in CMP. However, there are several challenges related to the CMP that need to be addressed, including how to dynamically assign processors to application tasks and dynamically allocate data to memories. To solve these problems based on this architecture, we propose a novel dynamic data allocation and task scheduling algorithm, i.e., dynamic greedy data allocation and task scheduling (DGDATS). Experiments on DSP benchmarks demonstrate the effectiveness and efficiency of our proposed algorithms; namely, our proposed algorithm can generate a highly efficient dynamic data allocation and task scheduling approach to minimize the total execution cost and produce the least amount of write operations on NVMs. Our extensive simulation study demonstrates that our proposed algorithm exhibits an excellent performance compared with the heuristic allocation (HA) and adaptive genetic algorithm for data allocation (AGADA) algorithms. Based on the CMP systems with hybrid SPMs, DGDATS reduces the total execution cost by 22.18% and 51.37% compared with those of the HA and AGADA algorithms, respectively. Additionally, the average number of write operations on NVM is 19.82% lower than that of HA.
- Published
- 2019
- Full Text
- View/download PDF
7. Marking of Electrode Sheets in the Production of Lithium-Ion Cells as an Enabler for Tracking and Tracing.
- Author
-
Sommer, Alessandro, Leeb, Matthias, Haghi, Sajedeh, Günter, Florian J., and Reinhart, Gunther
- Abstract
The production of lithium-ion batteries is highly complex and characterized by continuous as well as discrete material flows and processes. A first step towards controlling the complexity of battery production is to create transparency through data collection. Electrodes as one of the key elements in a battery cell play a decisive role for the battery performance. The allocation of production data to electrodes enables a detailed digital twin and an individual grading system. This paper presents a concept for the marking of electrode sheets and requirements on markers as well as on marking technologies due to boundary conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
8. DDNN based data allocation method for IIoT.
- Author
-
Wang, Chuting, Guo, Ruifeng, Hu, Yi, Yu, Haoyu, and Su, Wenju
- Subjects
DATABASES ,MANUFACTURING processes ,ARTIFICIAL intelligence ,MACHINE learning ,INDUSTRIAL sites ,DEEP learning - Abstract
With the widespread application of artificial intelligence in industrial production and manufacturing and the rapid development of edge computing, industrial processing sites often need to deploy machine learning tasks at edges and terminals. We propose a data allocation method based on the Distributed Deep Neural Networks (DDNN) framework, which allocates data to edge servers or keeps it local for processing. DDNN divides deep learning tasks and deploys pre-trained shallow neural networks and deep neural networks locally and at the edge, respectively. In the standard scheme, however, all data is first processed locally, and only samples that fail local inference are sent to the edge server or the cloud; this leads to excessive pressure on local terminal equipment and long-term idle edge servers, which cannot meet industrial production's real-time requirements for user privacy and time-sensitive tasks. In this paper, the complexity and inference error rate of the machine learning model, the data processing speed of the local equipment and edge server, and the transmission time are jointly considered to establish the system model. A joint optimization problem is proposed to minimize the total data processing delay. The optimal solution is derived analytically, and the optimal data allocation method is given. Simulation experiments are designed to verify the method's effectiveness and to study the influence of key parameters on the allocation method. [ABSTRACT FROM AUTHOR] [A delay-comparison sketch follows this entry.]
- Published
- 2021
- Full Text
- View/download PDF
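A toy sketch of the kind of delay comparison such an allocation method rests on: estimate the total processing delay locally versus at the edge (transmission plus computation) and send the data wherever the delay is smaller. All rates, sizes, and the simple linear delay model below are invented, not the paper's system model.

```python
def allocate_batch(n_samples, sample_bits, local_rate, edge_rate, link_bps):
    """Return ('local' or 'edge', estimated delay in seconds) for a batch.
    local_rate/edge_rate are samples per second, link_bps is uplink bits/s;
    all values are illustrative assumptions."""
    local_delay = n_samples / local_rate
    edge_delay = n_samples * sample_bits / link_bps + n_samples / edge_rate
    return ("local", local_delay) if local_delay <= edge_delay else ("edge", edge_delay)

# made-up numbers: 1000 samples of 8 kbit each, slow terminal, fast edge server
print(allocate_batch(n_samples=1000, sample_bits=8000,
                     local_rate=200.0, edge_rate=2000.0, link_bps=5e6))
```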
9. A data distribution model for RDF.
- Author
-
Schroeder, Rebeca, Penteado, Raqueline R. M., and Hara, Carmem S.
- Subjects
DATA distribution ,MESSAGE passing (Computer science) ,DATA modeling ,RDF (Document markup language) ,ELECTRONIC data processing ,COMBINED sewer overflows ,DATABASES - Abstract
The ever-increasing amount of RDF data made available requires data to be partitioned across multiple servers. We have witnessed some research progress towards scaling RDF query processing based on suitable data distribution methods. In general, they work well for queries matching simple triple patterns, but they are not efficient for queries involving more complex patterns. In this paper, we present an RDF data distribution method which overcomes the shortcomings of the current approaches in order to scale RDF storage both in the volume of data and in query processing. We apply a method that identifies frequent patterns accessed by queries in order to keep related data in the same partition. We deploy our reasoning on a summarized view of the data in order to avoid exhaustive analysis of large datasets. As a result, partitioning templates are obtained from data items in an RDF structure. In addition, we provide an approach for dynamic data insertions even if new data do not conform to the original RDF structure. Unlike repartitioning approaches, we use an overflow repository to store data which may not follow the original schema. Our study shows that our method scales well and is effective in improving the overall performance by decreasing the amount of message passing among servers, compared to alternative data distribution approaches for RDF. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
10. SmartHeating: On the Performance and Lifetime Improvement of Self-Healing SSDs.
- Author
-
Cui, Jinhua, Liu, Junwei, Huang, Jianhang, and Yang, Laurence T.
- Subjects
FLASH memory ,SOLID state drives ,RECORDS management ,HIGH temperatures ,THRESHOLD voltage - Abstract
In NAND flash memory-based solid-state drives (SSDs), during the idle time between consecutive program/erase cycles (the dwell time), the dielectric damage of a flash cell can be partially repaired, which is known as the self-recovery effect. As the effectiveness of the self-recovery effect can be improved under high temperature, self-healing SSDs are proven feasible to extend flash endurance significantly. However, current self-healing SSDs perform heating operations on all the worn-out blocks without considering the data retention requirement, and measure the lifetime of flash memory based on the worst-case self-recovery effect, leading to some unnecessary heating operations and degraded performance. We propose SmartHeating, a smart heating scheme that exploits the dwell time variation and the write hotness variation to improve the I/O performance and the lifetime of self-healing SSDs. SmartHeating tracks the dwell time of all worn-out flash blocks, predicts their self-recovery effect and reliability, and avoids performing heating operations on the worn-out flash blocks that still have strong flash reliability. In addition, by exploiting the data hotness variation, SmartHeating only heats the worn-out flash blocks that store write-cold data, while allocating write-hot data to a small portion of worn-out flash blocks with negligible refresh overhead. The experimental results show that SmartHeating reduces the number of heating operations by 12.5% on average, boosts the I/O performance of flash storage systems by 21.0%, and improves the lifetime of flash memory by 1.20× compared with the conventional heating scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
11. ASGOP: An aggregated similarity-based greedy-oriented approach for relational DDBSs design
- Author
-
Ali A. Amer, Marghny H. Mohamed, and Khaled Al_Asri
- Subjects
Information science ,Computer science ,Vertical fragmentation ,Clustering ,Data allocation ,Data replication ,Science (General) ,Q1-390 ,Social sciences (General) ,H1-99 - Abstract
In the literature of distributed database systems (DDBS), several methods have sought to achieve a satisfactory reduction in transmission cost (TC) and have been seen as substantially effective. Data fragmentation, site clustering, and data distribution are considered the major TC-mitigating influencers. Site clustering aims at grouping sites appropriately according to certain similarity metrics, while data distribution seeks to allocate the fragmented data to clusters/sites properly. The combination of these methods has been shown to be fruitful for reducing TC along with network overheads. In this work, hence, a heuristic clustering-based approach for vertical fragmentation and data allocation is meticulously designed. The focus is on proposing an influential solution for improving relational DDBS throughput through an aggregated similarity-based fragmentation procedure, effective site clustering, and a greedy-algorithm-driven data allocation model. Moreover, data replication is also considered so that TC is further minimized. In the evaluation delineated below, the findings of the experimental implementation are promising.
- Published
- 2020
- Full Text
- View/download PDF
12. TTEC: Data Allocation Optimization for Morphable Scratchpad Memory in Embedded Systems
- Author
-
Linbo Long, Qing Ai, Xiaotong Cui, and Jun Liu
- Subjects
Data allocation ,scratchpad memory ,morphable NVM ,embedded systems ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Scratchpad memory (SPM) is widely utilized in many embedded systems as a software-controlled on-chip memory to replace the traditional cache. New non-volatile memory (NVM) has emerged as a promising candidate to replace SRAM in SPM, due to its significant benefits, such as low power consumption and high performance. In particular, several representative NVMs, such as PCM, ReRAM, and STT-RAM, can build multiple-level cells (MLC) to achieve even higher density. Nevertheless, MLC incurs higher energy overhead and longer access latency than its single-level cell (SLC) counterpart. To address this issue, this paper first proposes a specific SPM with morphable NVM, in which each memory cell can be dynamically programmed to the MLC mode or the SLC mode. Considering the benefits of high-density MLC and low-energy SLC, a simple and novel optimization technique, named theory of thermal expansion and contraction (TTEC), is presented to minimize the energy consumption and access latency in embedded systems. The basic idea is to dynamically adjust the SLC/MLC size configuration of the SPM according to the program's workload and to allocate the optimal storage medium for each data item. An integer linear programming formulation is first built to produce an optimal SLC/MLC SPM partition and data allocation. In addition, a corresponding approximation algorithm is proposed to achieve near-optimal results in polynomial time. Finally, the experimental results show that the proposed technique can effectively improve the system performance and reduce the energy consumption. [A toy allocation sketch follows this entry.]
- Published
- 2018
- Full Text
- View/download PDF
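The paper formulates an ILP over the SLC/MLC partition and data placement; the sketch below is only a brute-force stand-in on a toy instance, with made-up per-access costs and SPM cell sizes, to show the shape of the optimization.

```python
from itertools import product

# Hypothetical per-access costs and SPM cell usage per data unit; MLC is denser
# but costlier per access than SLC, and DRAM is the off-chip fallback.
COST = {"SLC": 1.0, "MLC": 2.5, "DRAM": 6.0}
SIZE = {"SLC": 1.0, "MLC": 0.5, "DRAM": 0.0}

def best_allocation(data, spm_cells):
    """Exhaustively assign each (name, accesses, units) item to SLC/MLC/DRAM,
    keeping SPM cell usage within spm_cells and minimizing total access cost."""
    best = (float("inf"), None)
    for assign in product(COST, repeat=len(data)):
        cells = sum(SIZE[m] * units for m, (_, _, units) in zip(assign, data))
        if cells > spm_cells:
            continue  # partition does not fit in the SPM
        cost = sum(COST[m] * acc for m, (_, acc, _) in zip(assign, data))
        best = min(best, (cost, assign))
    return best

data = [("a", 100, 2), ("b", 40, 2), ("c", 5, 2)]   # (name, access count, size units)
print(best_allocation(data, spm_cells=3))
```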
13. Latency-Sensitive Data Allocation and Workload Consolidation for Cloud Storage
- Author
-
Song Yang, Philipp Wieder, Muzzamil Aziz, Ramin Yahyapour, Xiaoming Fu, and Xu Chen
- Subjects
Cloud Storage ,data allocation ,latency ,workload consolidation ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Customers often suffer from the variability of data access time in (edge) cloud storage services, caused by network congestion, load dynamics, and so on. One efficient solution to guarantee a reliable latency-sensitive service (e.g., for industrial Internet of Things applications) is to issue requests with multiple download/upload sessions that access the required data (replicas) stored on one or more servers, and use the earliest response from those sessions. In order to minimize total storage costs, how to optimally allocate data to a minimum number of servers without violating latency guarantees remains a crucial issue for the cloud provider. In this paper, we study the latency-sensitive data allocation problem, the latency-sensitive data reallocation problem and the latency-sensitive workload consolidation problem for cloud storage. We model the data access time as a given distribution whose cumulative distribution function is known, and prove that these three problems are NP-hard. To solve them, we propose an exact integer nonlinear program (INLP) and a Tabu Search-based heuristic. The simulation results reveal that the INLP always achieves the best performance in terms of the lowest number of used nodes and the highest storage and throughput utilization, but this comes at the expense of much higher running time. The Tabu Search-based heuristic, on the other hand, can obtain close-to-optimal performance in a much lower running time. [A latency-model sketch follows this entry.]
- Published
- 2018
- Full Text
- View/download PDF
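A small sketch of the latency model this allocation problem builds on: with the access-time CDF of each chosen server known and sessions assumed independent, the probability that the earliest of several parallel sessions meets a deadline L is 1 − ∏(1 − Fᵢ(L)). The exponential latency model and the numbers are assumptions for illustration, not the paper's distribution.

```python
import math

def meets_deadline_prob(cdfs_at_deadline):
    """Probability that at least one parallel session finishes within the
    deadline, given each session's CDF evaluated at that deadline and
    assuming the sessions are independent."""
    miss = 1.0
    for p in cdfs_at_deadline:
        miss *= (1.0 - p)
    return 1.0 - miss

def exp_cdf(deadline, mean_latency):
    """Illustrative exponential latency model (an assumption, not the paper's)."""
    return 1.0 - math.exp(-deadline / mean_latency)

servers = [0.08, 0.12, 0.20]   # hypothetical mean access latencies in seconds
deadline = 0.1
print(meets_deadline_prob([exp_cdf(deadline, m) for m in servers]))
```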
14. Tracking and Tracing for Data Mining Application in the Lithium-ion Battery Production.
- Author
-
Wessel, Jacob, Turetskyy, Artem, Wojahn, Olaf, Herrmann, Christoph, and Thiede, Sebastian
- Abstract
The production of Lithium-Ion Battery (LIB) cells is characterized by the interlinking of different production processes with a manifold of intermediate products. To be able to ensure high quality and enable a traceability of different production and product characteristics (e.g. energy consumption, material), a tracking and tracing concept is required. In this paper, a practical tracking and tracing concept throughout the production of LIB cells, enabling inline data-driven applications, is introduced. As a part of this concept an intelligent tracking and tracing platform is shown, which allows the generation of a pre-clustered data sets to facilitate future data-driven applications. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
15. On K-means clustering-based approach for DDBSs design.
- Author
-
Amer, Ali A.
- Subjects
DISTRIBUTED databases ,K-means clustering ,DESIGN techniques - Abstract
In Distributed Database Systems (DDBS), communication costs and response time have long been open-ended challenges. Nevertheless, when a DDBS is carefully designed, the desired reduction in communication costs can be achieved. Data fragmentation (data clustering) and data allocation remain the prime strategies in constant use to design DDBS. Several design techniques based on these strategies have been presented in the literature to improve DDBS performance using either empirical results or data statistics, but most of them are imperfect or invalid, at least at the initial stage of DDBS design. In this paper, therefore, a heuristic k-means approach for vertical fragmentation and allocation is introduced. This approach focuses primarily on DDBS design at the initial stage and joins several techniques into a single promising workflow. A brief yet effective experimental study, on both artificially created and real datasets, has been conducted to demonstrate the optimality of the proposed approach compared with its counterparts, and the obtained results are encouraging. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
16. Delineation of Traffic Analysis Zone for Public Transportation OD Matrix Estimation Based on Socio-spatial Practices
- Author
-
Moghaddam, S. M. Hassan Mahdavi, Ameli, Mostafa, Rao, K. Ramachandra, and Tiwari, Geetam
- Subjects
ddc:360 ,public transportation delineation ,travel demand ,traffic analysis zones ,data allocation ,empirical data - Abstract
This paper aims to develop and validate an efficient method for delineation of public transit analysis zones (PTTAZ), particularly for origin-destination (OD) matrix prediction for transit operation planning. Existing methods have a problem in reflecting the level of spatial precision, travel characteristics, travel demand growth, access to transit stations, and most importantly, the direction of transit routes. This study proposes a new methodology to redelineate existing traffic analysis zones (TAZ) to create PTTAZ in order to allocate travel demand to transit stops. We aim to achieve an accurate prediction of the OD matrix for public transportation (PT). The matrix should reflect the passenger accessibility in the socioeconomic and socio-spatial characterization of PTTAZ and minimize intrazonal trips. The proposed methodology transforms TAZ-based to PTTAZ-based data with sequential steps through multiple statistical methods. In short, the generation of PTTAZ establishes homogeneous sub-zones representing the relationship between passenger flow, network structure, land use, population, socio-economic characteristics, and, most importantly, existing bus transit infrastructure. To validate the proposed scheme, we implement the framework for India’s Vishakhapatnam bus network and compare the results with the household survey. The results show that the PTTAZ-based OD matrix represents a realistic scenario for PT demand.
- Published
- 2023
17. Adaptive Distributed RDF Graph Fragmentation and Allocation based on Query Workload.
- Author
-
Peng, Peng, Zou, Lei, Chen, Lei, and Zhao, Dongyan
- Subjects
DATA integrity ,RESOURCE allocation ,ELECTRONIC data processing ,QUERYING (Computer science) ,RDF (Document markup language) - Abstract
As massive volumes of Resource Description Framework (RDF) data are growing, designing a distributed RDF database system to manage them is necessary. In designing this system, it is very common to partition the RDF data into some parts, called fragments, which are then distributed. Thus, the distribution design comprises two steps: fragmentation and allocation. In this study, we explore the workload for fragmentation and allocation, which aims to reduce the communication cost during SPARQL query processing. Specifically, we adaptively maintain some frequent access patterns (FAPs) to reflect the characteristics of the workload while ensuring the data integrity and approximation ratio. Based on these frequent access patterns, we propose three fragmentation strategies, namely vertical, horizontal, and mixed fragmentation, to divide RDF graphs while meeting different types of query processing objectives. After fragmentation, we discuss how to allocate these fragments to various sites while balancing the fragments. Finally, we discuss how to process queries based on the results of fragmentation and allocation. Experiments over large RDF datasets confirm the superior performance of our proposed solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
18. Cooperative Data Caching for Cloud Data Servers
- Author
-
Mingcong Yang, Kai Guo, and Yongbing Zhang
- Subjects
cloud data centers ,data allocation ,mixed integer programming ,Management information systems ,T58.6-58.62 - Abstract
Thanks to the advance of cloud computing technologies, users can access the data stored at cloud data centers at any time and from anywhere. However, the data centers are usually sparsely distributed over the Internet and are far away from end users. In this paper, we consider constructing a cache network from a large number of cache nodes close to the end users in order to minimize the data access delay. We first formulate the problem of placing the replicas of data items on cache nodes as a mixed integer programming (MIP) problem. Then, we propose an efficient heuristic algorithm that allocates at least one replica of each data item in the cache network and attempts to allocate more data items so as to minimize the total data access cost. The simulation results show that our proposed algorithm behaves much better than the well-known LRU algorithm while its computation complexity remains limited. [A greedy placement sketch follows this entry.]
- Published
- 2016
- Full Text
- View/download PDF
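A hedged sketch in the spirit of the heuristic described (one replica of every item first, then additional replicas wherever they cut total access cost the most, while node capacity remains); the cost/demand structures and the greedy improvement loop are assumptions, not the authors' exact algorithm.

```python
def greedy_placement(items, nodes, capacity, access_cost, demand):
    """Place at least one replica of each item, then keep adding the single
    replica that reduces total access cost the most while capacity remains.
    access_cost[u][v]: cost for node u to fetch from node v;
    demand[u][i]: request rate of node u for item i;
    capacity[v]: replicas node v can hold. Assumes total capacity suffices."""
    placed = {i: set() for i in items}
    load = {v: 0 for v in nodes}

    def total_cost():
        return sum(demand[u][i] * min(access_cost[u][v] for v in placed[i])
                   for u in nodes for i in items if placed[i])

    for i in items:  # first pass: one replica per item on the cheapest node with room
        v = min((v for v in nodes if load[v] < capacity[v]),
                key=lambda v: sum(demand[u][i] * access_cost[u][v] for u in nodes))
        placed[i].add(v)
        load[v] += 1

    improved = True
    while improved:  # second pass: extra replicas with the largest cost reduction
        improved, base, best = False, total_cost(), None
        for i in items:
            for v in nodes:
                if v in placed[i] or load[v] >= capacity[v]:
                    continue
                placed[i].add(v)
                gain = base - total_cost()
                placed[i].discard(v)
                if gain > 0 and (best is None or gain > best[0]):
                    best = (gain, i, v)
        if best:
            _, i, v = best
            placed[i].add(v)
            load[v] += 1
            improved = True
    return placed
```

On a real instance the MIP from the abstract would give the exact baseline; a greedy loop like this only approximates it.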
19. An optimized cost-based data allocation model for heterogeneous distributed computing systems
- Author
-
Sashi Tarun, Mithilesh Kumar Dubey, Ranbir Singh Batth, and Sukhpreet Kaur
- Subjects
Total cost ,Execution time ,General Computer Science ,Data allocation ,Computation cost ,Electrical and Electronic Engineering ,Communication cost ,Network cost - Abstract
Continuous attempts have been made to improve the flexibility and effectiveness of distributed computing systems. Extensive effort in the fields of connectivity technologies, network programs, high processing components, and storage helps to improve results. However, concerns such as slowness in response, long execution time, and long completion time have been identified as stumbling blocks that hinder performance and require additional attention. These defects increase the total system cost and make the data allocation procedure for a geographically dispersed setup difficult. The load-based architectural model has been strengthened to improve data allocation performance. To do this, an abstract job model is employed, and a data query file containing input data is processed on a directed acyclic graph. The jobs are executed on the processing engine with the lowest execution cost, and the system's total cost is calculated. The total cost is computed by summing the costs of communication, computation, and network, and is then reduced using a swarm intelligence algorithm. In heterogeneous distributed computing systems, the suggested approach attempts to reduce the system's total cost and improve data distribution. According to simulation results, the technique efficiently lowers total system cost and optimizes partitioned data allocation. [A cost formulation sketch follows this entry.]
- Published
- 2022
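One plausible way to write out the cost decomposition the abstract describes (the notation is mine, not the paper's):

```latex
% x_{ij} = 1 when job j runs on processing engine i; each job runs on exactly one engine.
C_{\mathrm{total}}
  = \sum_{i}\sum_{j} x_{ij}\left(C^{\mathrm{comm}}_{ij} + C^{\mathrm{comp}}_{ij} + C^{\mathrm{net}}_{ij}\right),
  \qquad
  \sum_{i} x_{ij} = 1 \quad \forall j,
  \qquad x_{ij} \in \{0, 1\}.
```

Under this reading, the swarm intelligence step searches over the assignment variables x_{ij} to reduce the total cost.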
20. Data allocation optimization for query processing in graph databases using Lucene.
- Author
-
Mathew, Anita Brigit
- Subjects
NONRELATIONAL databases ,SOCIAL networks ,WIRELESS sensor nodes ,METAHEURISTIC algorithms ,GRAPHIC methods - Abstract
Methodological handling of queries is a crucial requirement in social networks connected to a graph NoSQL database that incorporates massive amounts of data. The massive data need to be partitioned across numerous nodes so that queries, when executed, can be answered from a parallel structure. A novel storage mechanism for effective query processing must be established in graph databases to minimize time overhead. This paper proposes a metaheuristic algorithm for partitioning a graph database across nodes by placing all related information on the same or adjacent nodes. The graph database allocation problem is proved to be NP-Hard. A metaheuristic algorithm comprising Best Fit Decreasing with Ant Colony Optimization is proposed for data allocation in a distributed architecture of graph NoSQL databases. A Lucene index is applied on the proposed allocation for faster query processing. The proposed algorithm with Lucene is evaluated based on simulation results obtained from different heuristics available in the literature. [ABSTRACT FROM AUTHOR] [A Best Fit Decreasing sketch follows this entry.]
- Published
- 2018
- Full Text
- View/download PDF
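A sketch of the Best Fit Decreasing half of the proposed metaheuristic (the Ant Colony Optimization refinement and the Lucene indexing are omitted); vertex sizes and node capacities below are invented.

```python
def best_fit_decreasing(vertex_sizes, node_capacity, num_nodes):
    """Best Fit Decreasing bin-packing sketch: sort vertices by size (largest
    first) and put each on the node whose remaining space is smallest but
    still sufficient. Returns {vertex: node} or None if something doesn't fit."""
    remaining = [node_capacity] * num_nodes
    placement = {}
    for v, size in sorted(vertex_sizes.items(), key=lambda kv: kv[1], reverse=True):
        candidates = [n for n in range(num_nodes) if remaining[n] >= size]
        if not candidates:
            return None
        n = min(candidates, key=lambda n: remaining[n])   # tightest fit
        placement[v] = n
        remaining[n] -= size
    return placement

# toy run: five subgraphs of different sizes across two equal-capacity nodes
print(best_fit_decreasing({"g1": 7, "g2": 5, "g3": 4, "g4": 3, "g5": 1},
                          node_capacity=10, num_nodes=2))
```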
21. A Novel Bitrate Adaptation Method for Heterogeneous Wireless Body Area Networks.
- Author
-
Cwalina, Krzysztof K., Ambroziak, Slawomir J., Rajchowski, Piotr, Sadowski, Jaroslaw, and Stefanski, Jacek
- Subjects
BODY area networks ,BIT rate - Abstract
In this article, a novel bitrate adaptation method for data stream allocation in heterogeneous Wireless Body Area Networks (WBANs) is presented. The efficiency of the proposed algorithm was compared with other known data stream allocation algorithms using computer simulation. A dedicated simulator has been developed using the results of measurements in a real environment. Using the proposed adaptive data stream allocation method, with transmission rate adaptation based on radio channel parameters, can increase the efficiency of resource usage in heterogeneous WBANs in relation to fixed-bitrate transmissions and well-known algorithms. This increase in efficiency has been shown regardless of the placement of the mobile node on the human body. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
22. Efficient query retrieval in Neo4jHA using metaheuristic social data allocation scheme.
- Author
-
Mathew, Anita Brigit, Madhu Kumar, S.D., Krishnan, K. Murali, and Salam, Sameera M.
- Subjects
BIG data ,QUERY (Information retrieval system) ,ONLINE data processing ,ANT algorithms ,LINEAR programming - Abstract
Large amounts of data from social networks need to be shared, distributed and indexed in a parallel structure to make the best use of the data. Neo4j High Availability (Neo4jHA) is a popular open-source graph database used for query handling on large social data. This paper analyses how storing and indexing of social data across machines can be carried out by placing all related information on the same or adjacent machines, with replication. The social graph data allocation problem, referred to as the Neo4jHA allocation problem, is proved to be NP-Hard in this paper. An integration of the Best Fit Decreasing algorithm with Ant Colony Optimization-based metaheuristics is proposed for data allocation in a distributed Neo4jHA architecture. The evaluation of the algorithm is carried out by simulation. The query processing efficiency is compared with other heuristic algorithms such as First Fit, Best Fit, First Fit Decreasing and Best Fit Decreasing found in the literature. A Skip List index was constructed on the Neo4jHA instance of every machine after implementing the proposed allocation strategy to enhance query processing efficiency. The results illustrate how the proposed algorithm outperforms other data allocation approaches in query execution with and without an index. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
23. Towards an Efficient Data Fragmentation, Allocation, and Clustering Approach in a Distributed Environment
- Author
-
Hassan Abdalla and Abdel Monim Artoli
- Subjects
vertical fragmentation ,clustering ,data allocation ,data replication ,site clustering ,DDBS. ,Information technology ,T58.5-58.64 - Abstract
Data fragmentation and allocation have long proven to be efficient techniques for improving the performance of distributed database systems (DDBSs). A crucial feature of any successful DDBS design is an intrinsic emphasis on minimizing transmission costs (TC). This work, therefore, focuses on improving distribution performance based on transmission cost minimization. To do so, data fragmentation and allocation techniques are utilized in this work, along with investigating several data replication scenarios. Moreover, site clustering is leveraged with the aim of producing the minimum possible number of highly balanced clusters. By doing so, TC is shown to be immensely reduced, as depicted in the performance evaluation. DDBS performance is measured using the TC objective function. An inclusive evaluation has been made in a simulated environment, and the compared results have demonstrated the superiority and efficacy of the proposed approach in reducing TC.
- Published
- 2019
- Full Text
- View/download PDF
24. Optimization of Data Allocation on CMP Embedded System with Data Migration.
- Author
-
Du, Jiayi, Li, Renfa, Xiao, Zheng, Tong, Zhao, and Zhang, Li
- Subjects
MATHEMATICAL optimization ,COMPUTER storage devices ,ALGORITHMS ,DATA analysis ,ENERGY consumption - Abstract
Chip multi-processors are applied in embedded systems. An embedded system with multiple cores is considered large and consumes substantial power. Scratchpad memory (SPM) and non-volatile memory (NVM) are new memory technologies, and an embedded system that uses SPM and NVM can reduce its size and power consumption. This study proposes an optimization of data allocation with a task-level data migration algorithm (TODMA). Data migration and dynamic programming are co-dependent and are combined to allocate task data in TODMA. In the experiments, we evaluated the performance of the TODMA algorithm on the DSPstone benchmark and a random benchmark. Results on DSPstone show that TODMA reduces the time cost, the number of write activities on NVM, and system energy consumption by 36.25, 24.58, and 34.41%, respectively, compared with the greedy algorithm. The corresponding reductions are 33.82, 10.00, and 24.27%, respectively, compared with the iterational optimal data placement algorithm (IODA). For the random benchmark, TODMA reduces the time cost, the number of write activities on NVM, and system energy consumption by 26.79, 33.32, and 26.88%, respectively, compared with the greedy algorithm, and by 25.17, 9.87, and 19.54%, respectively, compared with the IODA algorithm. Results show that the proposed TODMA algorithm effectively optimizes data allocation problems, improves system performance, reduces the number of write activities on NVM main memory, and lessens system energy consumption. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
25. Towards Long-View Computing Load Balancing in Cluster Storage Systems.
- Author
-
Liu, Guoxin, Shen, Haiying, and Wang, Haoyu
- Subjects
COMPUTER storage devices ,LOAD balancing (Computer networks) ,COMPUTER scheduling ,DISTRIBUTED computing ,SERVER farms (Computer network management) - Abstract
In large-scale computing clusters, when the server storing a task's requested data does not have sufficient computing capacity for the task, current job schedulers either schedule the task to the closest server and transmit the requested data to it, or let the task wait until the server has sufficient computing capacity. The former solution generates network load while the latter increases task delay. To handle this problem, load balancing methods are needed to reduce the number of servers overloaded by computing workloads. However, current load balancing methods do not aim to balance the computing load for the long term. Through trace analysis, we demonstrate the diversity of computing workloads of different tasks and the necessity of balancing the computing workloads among servers. Then, we propose a cost-efficient Computing load Aware and Long-View load balancing approach (CALV). CALV is novel in that it achieves long-term computing load balance by migrating out of an overloaded server the data blocks that contribute more computing workload at epochs when the server is more overloaded and less computing workload at epochs when it is more underloaded during a time period. Based upon the task schedules, we further propose a task reassignment algorithm that reassigns tasks from an overloaded server to other data servers of the tasks to make it non-overloaded before CALV is conducted. The above methods are for tasks whose submission times and execution latencies can be predicted. To handle unexpected tasks or insufficiently accurate predictions, we propose a dynamic load balancing method, in which an overloaded server dynamically redirects tasks to other data servers of the tasks, or replicates the tasks' requested data to other servers and redirects the tasks to those servers in order to become non-overloaded. Finally, we propose a proximity-aware tree-based distributed load balancing method to reduce the reallocation cost and improve the scalability of CALV. Trace-driven experiments in simulation and on a real computing cluster show that CALV outperforms other methods in terms of balancing the computing workloads and cost efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
26. Delay-sensitive data allocation scheme for CMT over diversity paths
- Author
-
Wen-feng DU, Zhen WU, and Li-qian LAI
- Subjects
concurrent multipath transfer ,data allocation ,performance optimization ,Telecommunication ,TK5101-6720 - Abstract
The performance of a CMT association degrades remarkably when the performance of some of its paths deteriorates. Based on an analysis of different network configurations, a delay-sensitive data allocation scheme is proposed that distributes data to different paths over a multi-diversity network according to their transmission delay, which is a key factor in overall performance. The transmission sequence number of each chunk is also considered. Analysis and simulation results reveal that the proposed scheme achieves much better performance than the original round-robin scheme. [A path-selection sketch follows this entry.]
- Published
- 2013
- Full Text
- View/download PDF
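A toy sketch of delay-sensitive allocation over diverse paths: each chunk, taken in sequence order, goes to the path with the smallest estimated delivery time (propagation delay plus queued bytes over bandwidth). The path parameters and the delay model are assumptions, not the paper's scheme.

```python
def allocate_chunks(chunk_sizes, paths):
    """Assign each chunk (in sequence order) to the path with the smallest
    estimated delivery time; returns (sequence number, path name, ETA) tuples."""
    schedule = []
    for seq, size in enumerate(chunk_sizes):
        def eta(p):
            return p["delay"] + (p["queued"] + size) / p["bw"]
        best = min(paths, key=eta)
        t = eta(best)
        best["queued"] += size        # this chunk now waits ahead of later ones
        schedule.append((seq, best["name"], round(t, 4)))
    return schedule

# illustrative paths: bandwidth in bytes/s, one-way delay in seconds
paths = [{"name": "path-a", "delay": 0.02, "bw": 2_000_000, "queued": 0},
         {"name": "path-b", "delay": 0.05, "bw": 5_000_000, "queued": 0}]
print(allocate_chunks([150_000] * 4, paths))
```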
27. Energy Optimization for Data Allocation With Hybrid SRAM+NVM SPM.
- Author
-
Wang, Yan, Li, Kenli, Zhang, Jun, and Li, Keqin
- Subjects
STATIC random access memory chips ,RESOURCE allocation ,ENERGY consumption - Abstract
The gradually widening disparity in the speed of the CPU and memory has become a bottleneck for the development of chip multiprocessor (CMP) systems. Increasing penalties caused by frequent on-chip memory access have raised critical challenges in delivering high memory access performance with tight energy and latency budgets. To overcome the memory wall and energy wall issues, this paper adopts CMP systems with hybrid scratchpad memories (SPMs), which are configured from SRAM and nonvolatile memory. Based on this architecture, we propose two novel algorithms, i.e., energy-aware data allocation (EADA) and balancing data allocation to energy and write operations (BDAEW), to perform data allocation to different memories and task mapping to different cores, reducing energy consumption and latency. We evaluate the performance of our proposed algorithms by comparison with a parallel solution that is commonly used to solve data allocation and task scheduling problems. Experiments show the merits of the hybrid SPM architecture over the traditional pure memory system and the effectiveness of the proposed algorithms. Compared with the AGADA algorithm, the EADA and BDAEW algorithms can reduce energy consumption by 23.05% and 19.41%, respectively. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
28. The Saudi women participation in development index
- Author
-
Reem Almahmud, Yusra Tashkandy, FatimaH Aldooh, Abir Alharbi, SumayaH Binhazzaa, Eidah Alenazi, Maha Almousa, Arwa M. Alshangiti, Maha A. Omair, and Sara Salem Alzaid
- Subjects
Gender equality ,Multidisciplinary ,Index (economics) ,Public economics ,Order (exchange) ,Legislature ,Business ,Composite index ,Data allocation ,Social engagement ,lcsh:Science (General) ,Focus group ,lcsh:Q1-390 - Abstract
We present a composite index that measures the participation of Saudi women in national development through certain broad dimensions of measurement, to be employed on Saudi Arabian datasets. In terms of method and technique, the composite index consists of 5 pillars built from selected weighted variables chosen by experts, in order to provide a well representative index that measures local priorities and development needs. Construction of the index goes through stages such as data allocation, national surveys, data normalizing, weight assignment, focus groups, pilot testing, and finally measurement of the index and its components. The index incorporates 54 indicators to capture the complexity of national development and ranks regions in Saudi Arabia according to the calculated components and the gender gap between women and men in five key areas: health, education, economy, social engagement, and legislative structure, to gauge the state of gender equality in the country. The index will be beneficial to decision makers in allocating the necessary strategic policies that will help increase women's participation in development, so that they can play their anticipated role in achieving the goals of Saudi Arabia's Vision 2030. JEL classification: Y8, D63, O15, I0, I3, C8. Keywords: Composite index, Economic development index, Gender gap, Women's empowerment, Index construction, Sustainable development indicators
- Published
- 2020
29. Data Allocation Approaches for Optimizing Storage Systems
- Author
-
Strong, Christina Rose
- Subjects
Computer science ,data allocation ,optimization ,storage systems - Abstract
As storage systems grow in size and complexity, the necessity of managing them automatically increases. One area of concern is system optimization, often managed by the administrator manually tweaking parameters to make the system as efficient as possible. The problem arises when there are multiple objectives to optimize, some of which may interfere with each other. Even when optimization is automated, it often requires either a decision as to which objectives are more important than others, or, if the objectives are combined into a linear function, this function (and the potential related weights) must be determined by the administrator. As a result, most systems optimize for only one objective, or, when there are two objectives, for two that do not conflict. However, when all objectives are equally important, or it is unknown which objective is most important, the system must learn to balance multiple objectives. One way to aid in automating system optimization is to look at the file allocation problem. The file allocation problem considers a set of files (or tasks) that need to be allocated to some number of devices so as to optimize an objective function. I propose extending the file allocation problem so that rather than optimizing a single objective function, the goal is to allocate data to a number of devices subject to a multi-objective optimization. This dissertation covers the three top-down theoretical approaches I developed as well as a bottom-up practical approach. In addition, I present a system design that incorporates the practical approach with two space-saving modifications.
- Published
- 2016
30. A Novel Bitrate Adaptation Method for Heterogeneous Wireless Body Area Networks
- Author
-
Krzysztof K. Cwalina, Slawomir J. Ambroziak, Piotr Rajchowski, Jaroslaw Sadowski, and Jacek Stefanski
- Subjects
Wireless Body Area Networks ,off-body communication ,narrowband ,ultra-wide band ,data allocation ,bitrate adaptation ,Technology ,Engineering (General). Civil engineering (General) ,TA1-2040 ,Biology (General) ,QH301-705.5 ,Physics ,QC1-999 ,Chemistry ,QD1-999 - Abstract
In this article, a novel bitrate adaptation method for data stream allocation in heterogeneous Wireless Body Area Networks (WBANs) is presented. The efficiency of the proposed algorithm was compared with other known data stream allocation algorithms using computer simulation. A dedicated simulator has been developed using the results of measurements in a real environment. Using the proposed adaptive data stream allocation method, with transmission rate adaptation based on radio channel parameters, can increase the efficiency of resource usage in heterogeneous WBANs in relation to fixed-bitrate transmissions and well-known algorithms. This increase in efficiency has been shown regardless of the placement of the mobile node on the human body.
- Published
- 2018
- Full Text
- View/download PDF
31. Design of distributed database systems: an iterative genetic algorithm.
- Author
-
Song, Sukkyu
- Subjects
DISTRIBUTED databases ,CONSISTENCY models (Computers) ,GENETIC algorithms ,REACTION time ,DECISION support systems - Abstract
The two important aspects of distributed database system design are operation allocation and data allocation. Operation allocation refers to the query execution plan indicating which operations (subqueries) should be allocated to which sites in a computer network so that query processing costs are minimized. Data allocation assigns relations to sites so that the performance of the distributed database is improved. In this research, we developed a solution technique for the operation allocation and data allocation problems using three objective functions: total time minimization, response time minimization, and the combination of total time and response time minimization. We formulated these allocation problems and provided analytical cost models for each objective function. Since the problem is NP-hard, we proposed a heuristic solution based on a genetic algorithm (GA). Comparison of the results with exhaustive enumeration indicated that the GA produced optimal solutions in all cases in much less time. [ABSTRACT FROM AUTHOR] [A GA sketch follows this entry.]
- Published
- 2015
- Full Text
- View/download PDF
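A minimal genetic-algorithm sketch for the data allocation side of the problem (chromosome = fragment-to-site assignment, fitness = a user-supplied total cost); it is a generic GA, not the iterative GA of the paper, and the toy cost function is invented.

```python
import random

def genetic_allocation(num_fragments, num_sites, cost, pop=30, gens=200, pmut=0.1):
    """Tiny GA sketch: a chromosome maps each fragment to a site, the fittest
    half survives each generation, children come from one-point crossover,
    and `cost(chromosome)` is a caller-supplied total-cost function."""
    rng = random.Random(0)
    def rand_chrom():
        return [rng.randrange(num_sites) for _ in range(num_fragments)]
    population = [rand_chrom() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)                       # lower cost is better
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, num_fragments)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < pmut:                     # mutation: move one fragment
                child[rng.randrange(num_fragments)] = rng.randrange(num_sites)
            children.append(child)
        population = parents + children
    return min(population, key=cost)

# toy cost: fragment f is cheapest on site f % 3 (purely illustrative)
best = genetic_allocation(6, 3, cost=lambda ch: sum(abs(s - (f % 3)) for f, s in enumerate(ch)))
print(best)
```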
32. Distributed Database Design: A Case Study.
- Author
-
Tosun, Umut
- Subjects
DISTRIBUTED databases ,EVOLUTIONARY algorithms ,QUALITY of service ,LINEAR programming ,INTEGER programming - Abstract
Data Allocation is an important problem in Distributed Database Design. Generally, evolutionary algorithms are used to determine the assignments of fragments to sites. Data Allocation Algorithms should handle replication, query frequencies, quality of service (QoS), site capacities, table update costs, and selection and projection costs. Most of the algorithms in the literature address only one or a few components of the problem. In this paper, we present a case study considering all of these features. The proposed model uses Integer Linear Programming for the formulation of the problem. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
33. Preliminary Experience with OpenMP Memory Management Implementation
- Author
-
Patrick Carribault, Julien Jaeger, and Adrien Roussel
- Subjects
Memory Management ,Data allocation ,MCDRAM ,NVDIMM ,Computer architecture ,Benchmark (computing) ,Distributed, Parallel, and Cluster Computing ,OpenMP 5.0
Because of the evolution of compute units, memory heterogeneity is becoming popular in HPC systems. But dealing with such various memory levels often requires different approaches and interfaces. For this purpose, OpenMP 5.0 defines memory-management constructs to offer application developers the ability to tackle the issue of exploiting multiple memory spaces in a portable way. This paper proposes an overview of memory management from applications to runtimes. Thus, we describe a convenient way to tune an application to include memory-management constructs. We also detail a methodology to integrate them into an OpenMP runtime supporting multiple memory types (DDR, MCDRAM and NVDIMM). We implement our design in the MPC framework, while presenting some results on a realistic benchmark.
- Published
- 2020
- Full Text
- View/download PDF
34. Data allocation optimization for query processing in graph databases using Lucene
- Author
-
Anita Brigit Mathew
- Subjects
Graph database ,General Computer Science ,Ant colony optimization algorithms ,NoSQL ,Data allocation ,Control and Systems Engineering ,Graph (abstract data type) ,Time overhead ,Data mining ,Electrical and Electronic Engineering ,Heuristics ,Metaheuristic
Methodological handling of queries is a crucial requirement in social networks connected to a graph NoSQL database that incorporates massive amounts of data. The massive data need to be partitioned across numerous nodes so that queries, when executed, can be answered from a parallel structure. A novel storage mechanism for effective query processing must be established in graph databases to minimize time overhead. This paper proposes a metaheuristic algorithm for partitioning a graph database across nodes by placing all related information on the same or adjacent nodes. The graph database allocation problem is proved to be NP-Hard. A metaheuristic algorithm comprising Best Fit Decreasing with Ant Colony Optimization is proposed for data allocation in a distributed architecture of graph NoSQL databases. A Lucene index is applied on the proposed allocation for faster query processing. The proposed algorithm with Lucene is evaluated based on simulation results obtained from different heuristics available in the literature.
- Published
- 2018
- Full Text
- View/download PDF
35. A Comprehensive Taxonomy of Fragmentation and Allocation Techniques in Distributed Database Design
- Author
-
Ali A. Amer and Dalia Nashat
- Subjects
General Computer Science ,Distributed database ,Distributed computing ,Replication Process ,Fragmentation (computing) ,Information technology ,Response time ,Data allocation ,Theoretical Computer Science
The need to design an optimal distributed database is increasingly important with the growth of information technology and computer networks. However, designing a distributed database is an extremely complex process due to the large number of geographically distributed sites and database relations. Moreover, decreasing communication costs and query response time should be taken into consideration. There are three main techniques applied to design a distributed database, namely fragmentation, data allocation, and replication. Notably, these techniques are often treated separately and rarely processed together. Some available allocation methods are applied regardless of how the fragmentation technique is performed or which replication process is adopted. In contrast, other fragmentation techniques do not consider the allocation or the replication techniques. Therefore, the first and foremost step in designing an optimal database is to develop a comprehensive understanding of the current fragmentation, replication, and allocation techniques and their disadvantages. This article presents an attempt to fulfill this step by proposing a comprehensive taxonomy of the available fragmentation and allocation techniques in distributed database design. The article also discusses some case studies of these techniques for a deeper understanding of their achievements and limitations.
- Published
- 2018
- Full Text
- View/download PDF
36. TTEC: Data Allocation Optimization for Morphable Scratchpad Memory in Embedded Systems
- Author
-
Qing Ai, Linbo Long, Jun Liu, and Xiaotong Cui
- Subjects
General Computer Science ,Data allocation ,Scratchpad memory ,Memory cell ,Overhead (computing) ,General Engineering ,Static random-access memory ,Random access memory ,Energy consumption ,morphable NVM ,Resistive random-access memory ,Non-volatile memory ,Embedded systems ,Cache ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,lcsh:TK1-9971
Scratchpad memory (SPM) is widely used in embedded systems as a software-controlled on-chip memory that replaces the traditional cache. Emerging non-volatile memory (NVM) is a promising candidate to replace SRAM in SPM owing to significant benefits such as low power consumption and high performance. In particular, several representative NVMs, such as PCM, ReRAM, and STT-RAM, can be built as multiple-level cells (MLC) to achieve even higher density. However, this incurs higher energy overhead and longer access latency than the single-level cell (SLC) counterpart. To address this issue, this paper first proposes a specific SPM with morphable NVM, in which each memory cell can be dynamically programmed to MLC mode or SLC mode. Considering the benefits of high-density MLC and low-energy SLC, a simple and novel optimization technique, named the theory of thermal expansion and contraction (TTEC), is presented to minimize energy consumption and access latency in embedded systems. The basic idea is to dynamically adjust the SLC/MLC size configuration of the SPM according to the program's workload and to allocate the optimal storage medium to each data object. An integer linear programming formulation is first built to produce an optimal SLC/MLC SPM partition and data allocation, and a corresponding approximation algorithm is proposed to achieve near-optimal results in polynomial time. Finally, the experimental results show that the proposed technique effectively improves system performance and reduces energy consumption (a simplified placement sketch follows this entry).
- Published
- 2018
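As a rough illustration of the SLC/MLC trade-off discussed above, the sketch below greedily places hot data in low-latency SLC space and cold data in dense MLC space under a fixed cell budget. It is not the paper's TTEC/ILP formulation; all sizes, access counts, the `slc_fraction` parameter, and the two-bits-per-MLC-cell assumption are illustrative.

```python
# Illustrative greedy placement of data objects into SLC or MLC regions of a
# morphable SPM. Costs and capacities are made up; the paper's ILP and
# approximation algorithm are not reproduced here.

def greedy_slc_mlc(objects, total_cells, slc_fraction):
    """objects: list of (name, size_bits, access_count). Returns a placement dict."""
    slc_cells = int(total_cells * slc_fraction)   # SLC: 1 bit per cell
    mlc_cells = total_cells - slc_cells           # MLC: 2 bits per cell (assumed)
    slc_bits, mlc_bits = slc_cells, 2 * mlc_cells
    placement = {}
    # Hot objects first, so they get the fast, low-energy SLC space.
    for name, size, accesses in sorted(objects, key=lambda o: -o[2]):
        if size <= slc_bits:
            placement[name], slc_bits = "SLC", slc_bits - size
        elif size <= mlc_bits:
            placement[name], mlc_bits = "MLC", mlc_bits - size
        else:
            placement[name] = "off-chip"          # spills to main memory
    return placement

if __name__ == "__main__":
    objs = [("coeffs", 512, 900), ("frame", 2048, 50), ("lut", 256, 700), ("log", 4096, 5)]
    print(greedy_slc_mlc(objs, total_cells=3000, slc_fraction=0.4))
```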
37. A new approach based on particle swarm optimization algorithm for solving data allocation problem
- Author
-
Ömer Kaan Baykan, Halife Kodaz, and Mostafa Mahi
- Subjects
Mathematical optimization ,Distributed database ,Heuristic ,Computer science ,Total cost ,Particle swarm optimization ,02 engineering and technology ,Data allocation ,Robustness (computer science) ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Enumeration ,020201 artificial intelligence & image processing ,Multi-swarm optimization ,Algorithm ,Software - Abstract
The effectiveness of a distributed database system depends strongly on how fragments are allocated to sites; the purpose of allocation is to minimize the execution time and transaction cost of queries. The Data Allocation Problem (DAP) is NP-hard, and solving it by enumeration is computationally expensive, so heuristic algorithms have recently been used to obtain good solutions. Owing to its few control parameters, robustness, fast convergence, and easy adaptation to the problem, this paper proposes a method based on the Particle Swarm Optimization (PSO) algorithm that minimizes the total transmission cost arising from both site-fragment and inter-fragment dependencies. The core of the study is to solve the DAP by adapting the PSO algorithm (PSO-DAP for short). Fragments are allocated to sites with the PSO algorithm, and its performance is evaluated on 20 different test problems and compared with state-of-the-art algorithms. Experimental results demonstrate that the proposed method generates better-quality solutions in terms of execution time and total cost than the compared state-of-the-art algorithms (a compact discrete-PSO sketch follows this entry).
- Published
- 2018
- Full Text
- View/download PDF
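A compact discrete adaptation of PSO for fragment-to-site assignment, in the general spirit of the abstract above, might look as follows. The cost model (site-fragment access costs plus a penalty when dependent fragments are split across sites) and all parameters are assumptions, not the PSO-DAP formulation itself.

```python
# Compact discrete-PSO sketch for assigning fragments to sites. The cost model
# and every parameter below are illustrative, not those of PSO-DAP.
import random

random.seed(1)

N_FRAG, N_SITE = 8, 3
# access_cost[f][s]: cost of serving fragment f from site s (hypothetical numbers)
access_cost = [[random.randint(1, 9) for _ in range(N_SITE)] for _ in range(N_FRAG)]
# dep[f][g]: traffic between fragments f and g (symmetric, hypothetical)
dep = [[0] * N_FRAG for _ in range(N_FRAG)]
for f in range(N_FRAG):
    for g in range(f + 1, N_FRAG):
        dep[f][g] = dep[g][f] = random.choice([0, 0, 2, 5])

def cost(assign):
    c = sum(access_cost[f][assign[f]] for f in range(N_FRAG))
    for f in range(N_FRAG):
        for g in range(f + 1, N_FRAG):
            if assign[f] != assign[g]:
                c += dep[f][g]            # dependent fragments on different sites
    return c

def discrete_pso(n_particles=20, iters=200, c1=0.25, c2=0.25, mut=0.05):
    swarm = [[random.randrange(N_SITE) for _ in range(N_FRAG)] for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(N_FRAG):
                r = random.random()
                if r < c1:
                    p[d] = pbest[i][d]    # pull toward the personal best
                elif r < c1 + c2:
                    p[d] = gbest[d]       # pull toward the global best
                if random.random() < mut:
                    p[d] = random.randrange(N_SITE)  # random exploration
            if cost(p) < cost(pbest[i]):
                pbest[i] = p[:]
                if cost(p) < cost(gbest):
                    gbest = p[:]
    return gbest, cost(gbest)

if __name__ == "__main__":
    best, best_cost = discrete_pso()
    print("assignment:", best, "cost:", best_cost)
```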
38. A Data Allocation Method over Multiple Wireless Broadcast Channels.
- Author
-
SUNGWON JUNG and SOOYONG PARK
- Subjects
WIRELESS communications ,COMPUTER networks ,TELESOFTWARE ,BANDWIDTH allocation ,PROBABILITY theory - Abstract
In this paper, we concentrate on data allocation methods for multiple wireless broadcast channels that reduce the average data access time. Existing works first sort data by their access probabilities and then allocate partitions of the sorted data to the multiple wireless channels. They employ a flat broadcast schedule on each channel to cyclically broadcast all the data items allocated to it, so the different access probabilities of the data items within a channel are ignored. To cope with this problem, the S2AP method was proposed: it allocates a popular data item more than once per cycle to the channel to which it is assigned, with the number of allocations reflecting the item's access probability. However, the performance improvement of the S2AP method is somewhat limited because the skewness of the data access probability distribution within each channel is not large. We propose the ZGMD method, which first allocates data over multiple wireless channels by trying to maximize the average skewness of the data access probability distributions across channels. ZGMD then computes the broadcast repetition frequencies of all the data items in each channel using the method proposed in the S2AP scheme, and finally generates the broadcast disk program for the multiple wireless broadcast channels. Our performance analysis shows that the ZGMD method gives a better average access time than the existing methods (a sketch of the underlying access-time model follows this entry).
- Published
- 2013
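The reason channel partitioning matters can be seen from the expected access time of a flat per-channel schedule, which is half the channel's cycle length weighted by the channel's total access probability. The sketch below computes this quantity for an even split and a skew-aware split of a Zipf-like workload; the popular-item repetition used by S2AP and ZGMD is deliberately left out, and all numbers are illustrative.

```python
# Expected average access time (in units of one item's broadcast slot) under a
# flat schedule on each channel. Popular-item repetition is not modelled; this
# only shows why the partition of items across channels matters.

def expected_access_time(channels):
    """channels: list of lists of item access probabilities (one broadcast per cycle)."""
    total = 0.0
    for items in channels:
        cycle_len = len(items)                 # each item appears once per cycle
        total += sum(items) * cycle_len / 2.0  # expected wait is half the cycle
    return total

if __name__ == "__main__":
    raw = [1.0 / r for r in range(1, 13)]      # Zipf-like popularity, 12 items
    probs = [p / sum(raw) for p in raw]        # most popular item first

    even_split = [probs[0:4], probs[4:8], probs[8:12]]     # 4 items per channel
    skewed_split = [probs[0:2], probs[2:6], probs[6:12]]   # hot items get short cycles
    print("even split  :", round(expected_access_time(even_split), 3))
    print("skewed split:", round(expected_access_time(skewed_split), 3))
```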
39. NBTI-Aware Data Allocation Strategies for Scratchpad Based Embedded Systems.
- Author
-
Ferri, Cesare, Papagiannopoulou, Dimitra, Bahar, R., and Calimera, Andrea
- Subjects
EMBEDDED computer systems ,MEMORY ,SYSTEMS on a chip ,ELECTRONIC circuit design ,COMPUTER software - Abstract
The push to embed reliable and low-power memory architectures into modern systems-on-chip is driving the EDA community to develop new design techniques and circuit solutions that concurrently address aging effects due to Negative Bias Temperature Instability (NBTI) and static power consumption due to leakage mechanisms. While recent works have shown how conventional leakage optimization techniques can help mitigate NBTI-induced aging in cache memories, in this paper we focus specifically on scratchpad memory (SPM) and present novel software approaches for alleviating NBTI-induced aging. In particular, we demonstrate how intelligent, software-directed data allocation strategies can extend the lifetime of partitioned SPMs by distributing idleness across the memory sub-banks (a small rotation sketch follows this entry).
- Published
- 2012
- Full Text
- View/download PDF
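The idea of spreading idleness across SPM sub-banks can be pictured with a toy rotation policy: the home bank of each data block shifts every epoch, so every bank accumulates comparable idle (recovery) time. The bank count, epoch length, workload, and the `bank_for` policy are hypothetical and not the paper's allocation strategy.

```python
# Toy rotation of data blocks across SPM sub-banks so that idle (NBTI-recovery)
# time is spread evenly. Every number here is illustrative.

N_BANKS = 4

def bank_for(data_block, epoch):
    """Rotate the home bank of each data block every epoch."""
    return (data_block + epoch) % N_BANKS

if __name__ == "__main__":
    active_blocks = [0, 1]               # only two blocks are live, so two banks idle
    idle_epochs = [0] * N_BANKS
    for epoch in range(8):
        used = {bank_for(b, epoch) for b in active_blocks}
        for bank in range(N_BANKS):
            if bank not in used:
                idle_epochs[bank] += 1   # an idle bank gets recovery time
    print("idle epochs per bank:", idle_epochs)   # evenly spread across banks
```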
40. Minimizing Access Cost for Multiple Types of Memory Units in Embedded Systems Through Data Allocation and Scheduling.
- Author
-
Zhuge, Qingfeng, Guo, Yibo, Hu, Jingtong, Tseng, Wei-Che, Xue, Chun Jason, and Sha, Edwin Hsing-Mean
- Subjects
GDPS (Computer system) ,SCHEDULING software ,DYNAMIC programming ,POLYNOMIAL time algorithms ,DIGITAL signal processing - Abstract
Software-controlled memories, such as scratch-pad memory (SPM), have been widely adopted in digital signal processors to achieve high performance at low cost, and a single system may contain multiple types of memory units with varying performance and cost. In this paper, we design a polynomial-time algorithm, the regional optimal data allocation (RODA) algorithm, using a dynamic programming approach; it guarantees an optimal data allocation with minimal access cost for a program region. A second polynomial-time algorithm, the global data allocation (GDA) algorithm, is proposed to reduce access cost efficiently based on the regional results generated by RODA. A heuristic, the maximal similarity scheduling (MSS) algorithm, is also developed to find an execution sequence of program regions that maximizes the similarity of accessed data items between consecutive regions in order to reduce memory traffic. Experimental results on a set of benchmarks show that the technique combining the GDA and MSS algorithms outperforms a greedy algorithm in all experimental cases (a sketch of similarity-based region ordering follows this entry).
- Published
- 2012
- Full Text
- View/download PDF
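A greedy ordering of program regions by the similarity of their accessed data sets, in the spirit of maximal similarity scheduling, can be sketched as follows. The regions, their data sets, and the simple overlap measure are assumptions; the exact MSS algorithm from the paper is not reproduced.

```python
# Greedy ordering of program regions by similarity of their accessed data sets,
# so that consecutive regions share as much data as possible. Regions and data
# sets are hypothetical.

def similarity(a, b):
    """Number of shared data items between two regions (simple overlap measure)."""
    return len(a & b)

def order_regions(regions):
    """regions: dict name -> set of accessed data items. Returns an execution order."""
    remaining = dict(regions)
    order = [next(iter(remaining))]        # start from an arbitrary region
    del remaining[order[0]]
    while remaining:
        last = regions[order[-1]]
        # Pick the remaining region sharing the most data with the last one scheduled.
        nxt = max(remaining, key=lambda r: similarity(last, remaining[r]))
        order.append(nxt)
        del remaining[nxt]
    return order

if __name__ == "__main__":
    regions = {
        "R1": {"a", "b", "c"},
        "R2": {"x", "y"},
        "R3": {"b", "c", "d"},
        "R4": {"y", "z"},
    }
    print(order_regions(regions))   # keeps R1/R3 and R2/R4 adjacent
```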
41. An optimal workload-based data allocation approach for multidisk databases
- Author
-
Lin, Ming-Hua
- Subjects
PARALLEL processing ,ELECTRONIC data processing ,DATABASES ,COMPUTER files - Abstract
Parallel processing mechanisms and data layout approaches, which significantly affect the access performance of database systems, have received increased attention in the last few decades. Multidisk allocation problems seek an allocation of relations to disks such that the expected query cost is minimized. Solving this NP-complete problem is extremely time-consuming, especially because the required solution time rises exponentially as the number of 0–1 variables increases. This study presents a novel and efficient approach for deriving an optimal layout of relations on disks based on database statistics of access patterns and relation sizes. In addition to minimizing query cost, the proposed model allows replication of relations, minimizes storage cost, and improves computational efficiency by reducing the number of 0–1 variables and constraints. Illustrative examples and experimental results demonstrate the advantages and efficiency of the proposed method (a brute-force illustration of the allocation objective follows this entry).
- Published
- 2009
- Full Text
- View/download PDF
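For a very small instance, the workload-based allocation objective can be illustrated by brute force: enumerate every assignment of relations to disks and keep the cheapest. The cost model below (a query's cost is the load of the busiest disk it touches, weighted by query frequency) and the workload statistics are assumptions; the paper's 0–1 programming model with replication is not reproduced.

```python
# Brute-force illustration of workload-based allocation of relations to disks.
# The cost model and workload are hypothetical; no replication is modelled.
from itertools import product

RELATIONS = ["R1", "R2", "R3", "R4"]
N_DISKS = 2
# (frequency, relations accessed) per query -- illustrative workload statistics
QUERIES = [(10, {"R1", "R2"}), (5, {"R2", "R3"}), (2, {"R1", "R3", "R4"})]

def workload_cost(assign):
    """assign: dict relation -> disk. Parallel retrieval is bounded by the busiest disk."""
    total = 0
    for freq, rels in QUERIES:
        per_disk = [sum(1 for r in rels if assign[r] == d) for d in range(N_DISKS)]
        total += freq * max(per_disk)
    return total

if __name__ == "__main__":
    layouts = (dict(zip(RELATIONS, combo))
               for combo in product(range(N_DISKS), repeat=len(RELATIONS)))
    best = min(layouts, key=workload_cost)
    print("best layout:", best, "cost:", workload_cost(best))
```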
42. A linearly convergent method for broadcast data allocation
- Author
-
Jea, Kuen-Fang, Wang, Jen-Ya, and Chen, Shih-Ying
- Subjects
MOBILE computing ,WIRELESS communications ,POLYNOMIALS ,COMPUTATIONAL complexity ,NP-complete problems - Abstract
In this paper, we deal with an NP-hard minimization problem: allocating data over multiple broadcast channels in a wireless environment. Our idea is to solve the discrete case of this problem using the concept of a gradient in Euclidean space. The theoretical basis of this idea ensures the near-optimality of our solution, and the experimental results show that the problem can be solved quickly.
- Published
- 2008
- Full Text
- View/download PDF
43. A New Data Allocation Method for Parallel Probe-Based Storage Devices.
- Author
-
Varsamou, Maria and Antonakopoulos, Theodore
- Subjects
COMPUTER storage devices ,ELECTRONIC data processing ,ATOMIC force microscopy ,INFORMATION storage & retrieval systems ,NUMERICAL analysis - Abstract
We present a new data allocation method for probe-based storage devices that use multiple, simultaneously accessed parallel data fields. Our method uses data blocks of unequal length to allocate a sector across the various storage fields; the amount of data stored in each field depends on the sector's offset from the beginning of the allocation round and on the storage field used. Numerical results demonstrate the storage-efficiency improvement achieved by the proposed method, and we show that it can be applied to atomic-force-microscopy-based probe storage devices.
- Published
- 2008
- Full Text
- View/download PDF
44. Tensor Product Formulation for Hilbert Space-Filling Curves.
- Author
-
Shen-Yi Lin, Chih-Shen Chen, Li Liu, and Chua-Huang Huang
- Subjects
TENSOR products ,LINEAR algebra ,ALGORITHMS ,HILBERT space ,PERMUTATIONS - Abstract
We present a tensor product formulation for Hilbert space-filling curves; both recursive and iterative formulas are given in the paper. We view a Hilbert space-filling curve as a permutation that maps two-dimensional 2^n × 2^n data elements stored in row-major or column-major order to the order in which a Hilbert curve traverses them. The tensor product formula of Hilbert space-filling curves uses several permutation operations: stride permutation, radix-2 Gray permutation, transposition, and anti-diagonal transposition. The iterative tensor product formula can be manipulated to obtain the inverse Hilbert permutation. The formulas also translate directly into computer programs that can be used in various applications, including image processing, VLSI component layout, and R-tree indexing (a sketch of the row-major-to-Hilbert mapping follows this entry).
- Published
- 2008
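To make the row-major-to-Hilbert permutation concrete, the sketch below uses the standard iterative bit-manipulation computation of a cell's Hilbert index; it is not the tensor product formulation developed in the paper.

```python
# Standard iterative computation of a Hilbert-curve index for a point in a
# 2**order x 2**order grid. This illustrates the permutation from grid order to
# Hilbert order; it is not the paper's tensor product formula.

def xy_to_hilbert(order, x, y):
    """Return the position of cell (x, y) along a Hilbert curve of the given order."""
    side = 2 ** order
    d = 0
    s = side // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect the quadrant so the base pattern repeats at the next level.
        if ry == 0:
            if rx == 1:
                x = side - 1 - x
                y = side - 1 - y
            x, y = y, x
        s //= 2
    return d

if __name__ == "__main__":
    order = 2                                 # 4 x 4 grid
    for y in range(2 ** order):
        row = [xy_to_hilbert(order, x, y) for x in range(2 ** order)]
        print(" ".join(f"{d:2d}" for d in row))
```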
45. Efficient index and data allocation for wireless broadcast services
- Author
-
Lo, Shou-Chih and Chen, Arbee L.P.
- Subjects
WIRELESS communications ,MOBILE communication systems ,MOBILE computing ,DATA warehousing - Abstract
The periodic broadcasting of frequently requested data can reduce the workload of uplink channels and improve data access for users in a wireless network. Since mobile devices rely on battery power and therefore have limited energy, it is important to minimize the time and energy spent on accessing broadcast data, and the indexing and scheduling of broadcast data play a key role in this problem. In this paper, we formulate the index and data allocation problem and propose a solution that adapts to any number of broadcast channels. We first restrict the problem to a scenario with no index/data replication and introduce an optimal solution for the single-channel case and a heuristic solution for the multichannel case. We then discuss how to replicate indexes within the allocation to further improve performance. Experimental results demonstrate the superiority of the proposed approach.
- Published
- 2007
- Full Text
- View/download PDF
46. An XML data allocation method on disks
- Author
-
Kim, Jung Hoon, Chung, Yon Dohn, and Kim, Myoung Ho
- Subjects
XML (Extensible Markup Language) ,FOUNDATIONS of arithmetic ,COMPUTER programming ,ANT algorithms - Abstract
XML has recently expanded its application areas: data formats in various information systems, communication protocols in distributed systems, and so on. XML data can generally be modeled logically as a rooted tree, and path queries are widely used to query such data. In this paper, we present an optimal algorithm that places XML data on disks such that the number of disk accesses for path query processing is minimized. The proposed algorithm consists of two steps: first, we assign a number (called the mapping indicator) to each node of the tree in a bottom-up fashion; next, we map the nodes to disk blocks using the assigned numbers. We analyze the optimality of the proposed method with the relevant proofs and show that it provides good performance for various query types on XML data sets.
- Published
- 2006
- Full Text
- View/download PDF
47. Data Allocation in MEMS-based Mobile Storage Devices.
- Author
-
Soyoon Lee and Hyokyung Bahn
- Subjects
COMPUTER storage devices ,MICROELECTROMECHANICAL systems ,ELECTROMECHANICAL devices ,ELECTRONIC industries ,HOUSEHOLD electronics industry - Abstract
MEMS-based storage is being developed as a new storage medium. Due to its attractive features, such as small size, shock resistance, and low power consumption, MEMS-based storage is anticipated to be widely used in mobile consumer electronics. However, MEMS-based storage has vastly different physical characteristics from a traditional disk. First, MEMS storage has thousands of heads that can be activated simultaneously. Second, its medium is a square structure, unlike the platter structure of disks. Third, the size of a physical sector in MEMS storage differs from that of a conventional disk by an order of magnitude. This paper presents a new data allocation scheme for MEMS storage that exploits these characteristics, considering both the parallelism of MEMS storage and the seek time of requests on the two-dimensional square structure. Simulation studies show that the proposed scheme improves the performance of MEMS storage significantly by exploiting its high parallelism.
- Published
- 2006
- Full Text
- View/download PDF
48. An Efficient Algorithm for Near Optimal Data Allocation on Multiple Broadcast Channels.
- Author
-
Chih-Hao Hsu, Guanling Lee, and Chen, Arbee L. P.
- Subjects
BROADBAND communication systems ,BANDWIDTHS ,DATA transmission systems ,WIRELESS communications ,ALGORITHMS ,TELECOMMUNICATION systems - Abstract
In a wireless environment, the bandwidth of the channels and the energy of portable devices are limited, and data broadcast has become an excellent method for efficient data dissemination. In this paper, we explore the problem of generating a broadcast program for a set of data items with associated access frequencies on multiple channels. In our approach, a minimal expected average access time for the broadcast data items is first derived; a broadcast program is then generated that minimizes the expected average access time. Simulations compare the performance of our approach with two existing approaches, and the results show that our approach outperforms the others and is in fact close to optimal (a dynamic-programming partitioning sketch follows this entry).
- Published
- 2005
- Full Text
- View/download PDF
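A generic way to obtain such a broadcast program is to split the items, sorted by access probability, into K contiguous groups (one flat-broadcast channel each) with dynamic programming, minimizing the expected average access time, i.e. the sum over channels of P_c * N_c / 2. The sketch below implements that generic formulation under a Zipf-like workload; it is not necessarily the algorithm evaluated in the paper, and the workload is illustrative.

```python
# Dynamic-programming sketch: split items sorted by access probability into K
# contiguous groups, one flat-broadcast channel each, minimizing the expected
# average access time sum_c P_c * N_c / 2 (in item broadcast slots).

def optimal_partition(probs, k):
    """probs: access probabilities in descending order; k: number of channels."""
    n = len(probs)
    prefix = [0.0] * (n + 1)
    for i, p in enumerate(probs):
        prefix[i + 1] = prefix[i] + p

    def group_cost(j, i):  # items j..i-1 broadcast flat on one channel
        return (prefix[i] - prefix[j]) * (i - j) / 2.0

    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for c in range(1, k + 1):
        for i in range(1, n + 1):
            for j in range(c - 1, i):
                cand = dp[c - 1][j] + group_cost(j, i)
                if cand < dp[c][i]:
                    dp[c][i], cut[c][i] = cand, j
    bounds, i = [], n
    for c in range(k, 0, -1):  # recover the channel boundaries
        bounds.append((cut[c][i], i))
        i = cut[c][i]
    return dp[k][n], list(reversed(bounds))

if __name__ == "__main__":
    raw = [1.0 / r for r in range(1, 13)]      # Zipf-like popularity, 12 items
    probs = [p / sum(raw) for p in raw]
    cost, groups = optimal_partition(probs, k=3)
    print("expected access time:", round(cost, 3), "channel ranges:", groups)
```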
49. Evolutionary Algorithms for Allocating Data in Distributed Database Systems.
- Author
-
Ahmad, Ishfaq, Karlapalem, Kamalakar, Kwok, Yu-Kwong, and So, Siu-Kai
- Abstract
A major cost in executing queries in a distributed database system is the data transfer cost incurred in transferring relations (fragments) accessed by a query from different sites to the site where the query is initiated. The objective of a data allocation algorithm is to determine an assignment of fragments to sites that minimizes the total data transfer cost incurred in executing a set of queries. This is equivalent to minimizing the average query execution time, which is of primary importance in a wide class of distributed conventional as well as multimedia database systems. The data allocation problem, however, is NP-complete and thus requires fast heuristics to generate efficient solutions. Furthermore, the optimal allocation of database objects depends highly on the query execution strategy employed by the distributed database system, while the query execution strategy usually assumes a given allocation of the fragments. We develop a site-independent fragment dependency graph representation to model the dependencies among the fragments accessed by a query, and use it to formulate and tackle data allocation problems for distributed database systems based on query-site and move-small query execution strategies. We design and evaluate evolutionary algorithms for data allocation in distributed database systems (a small genetic-algorithm sketch follows this entry).
- Published
- 2002
- Full Text
- View/download PDF
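A small genetic algorithm for fragment-to-site allocation, in the general spirit of the abstract above, is sketched below. The cost model (traffic between each query's initiating site and the sites holding the fragments it reads), the link costs, and the GA parameters are assumptions; the paper's fragment dependency graph and query execution strategies are not modelled.

```python
# Small genetic-algorithm sketch for fragment-to-site allocation minimizing
# data transfer cost. Workload, link costs, and GA parameters are hypothetical.
import random

random.seed(7)

N_FRAG, N_SITE = 6, 3
# queries: (initiating site, fragments read, frequency) -- illustrative workload
QUERIES = [(0, [0, 1], 10), (1, [1, 2, 3], 6), (2, [4, 5], 8), (0, [3, 5], 3)]
# unit transfer cost between sites (symmetric, zero on the diagonal)
LINK = [[0, 2, 5], [2, 0, 3], [5, 3, 0]]

def transfer_cost(assign):
    return sum(freq * sum(LINK[site][assign[f]] for f in frags)
               for site, frags, freq in QUERIES)

def genetic_allocation(pop_size=30, generations=150, mut_rate=0.1):
    pop = [[random.randrange(N_SITE) for _ in range(N_FRAG)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=transfer_cost)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_FRAG)      # one-point crossover
            child = a[:cut] + b[cut:]
            for d in range(N_FRAG):                # mutation
                if random.random() < mut_rate:
                    child[d] = random.randrange(N_SITE)
            children.append(child)
        pop = survivors + children
    return min(pop, key=transfer_cost)

if __name__ == "__main__":
    best = genetic_allocation()
    print("allocation:", best, "cost:", transfer_cost(best))
```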
50. A Hypergraph Based Approach to Declustering Problems.
- Author
-
Liu, Duen-Ren and Wu, Mei-Yu
- Abstract
Parallelizing I/O operations via effective declustering of data is becoming essential to scaling up the performance of parallel databases and high-performance systems. Declustering has been shown to be an NP-complete problem in some contexts, and although several heuristic methods have been proposed, most are not effective in cases such as queries with different access frequencies or data items with different sizes. In this paper, we propose a hypergraph model to formulate the declustering problem, and several interesting theoretical results are obtained by analyzing the proposed model. The proposed approach allows a wide range of declustering problems to be modeled. Furthermore, the hypergraph declustering model is used as the basis for new heuristic methods, including a greedy method and a hybrid declustering method. Experiments show that the proposed methods achieve better performance than several existing declustering methods (a greedy declustering sketch follows this entry).
- Published
- 2001
- Full Text
- View/download PDF
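A greedy declustering heuristic of the general kind surveyed above can be sketched as follows: each block goes to the disk where it co-occurs least, across the query workload, with blocks already stored there, so that a query's blocks spread over many disks. The workload, weights, and the `co_access` measure are assumptions; the paper's hypergraph model and hybrid method are not reproduced.

```python
# Greedy declustering sketch: place each data block on the disk where it
# co-occurs least with blocks already stored there, spreading each query's
# blocks across disks. Workload and weights are hypothetical.

N_DISKS = 3
# Each query is (frequency, set of blocks it reads) -- illustrative workload
QUERIES = [(5, {"a", "b", "c"}), (3, {"b", "d"}), (2, {"c", "d", "e"}), (4, {"e", "f"})]

def co_access(block, others):
    """Weighted count of queries that read `block` together with any block in `others`."""
    return sum(freq for freq, blocks in QUERIES
               if block in blocks and blocks & others)

def greedy_decluster(blocks):
    disks = [set() for _ in range(N_DISKS)]
    for block in blocks:
        # Prefer the disk with the smallest co-access penalty; break ties by load.
        best = min(range(N_DISKS),
                   key=lambda d: (co_access(block, disks[d]), len(disks[d])))
        disks[best].add(block)
    return disks

if __name__ == "__main__":
    layout = greedy_decluster(["a", "b", "c", "d", "e", "f"])
    for d, blocks in enumerate(layout):
        print(f"disk {d}: {sorted(blocks)}")
```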