36 results for "List update problem"
Search Results
2. Paid exchanges are worth the price.
- Author
-
López-Ortiz, Alejandro, Renault, Marc P., and Rosén, Adi
- Subjects
*EXCHANGE , *ONLINE algorithms - Abstract
We consider the list update problem as defined in the seminal work on competitive analysis by Sleator and Tarjan [13]. An instance of the problem consists of a sequence of requests to access items in a linked list. After an item is accessed, that item can be moved to any position forward in the list at no cost (a move called free exchange), and, at any time, any two adjacent items can be swapped at a cost of 1 (a move called paid exchange). The cost to access an item is equal to its current position in the list. The goal is to dynamically rearrange the list so as to minimize the total cost (accrued from accesses and exchanges) over the request sequence. We show a lower bound of 12/11 on the worst-case ratio between the performance of an (offline) optimal algorithm that can only perform free exchanges and that of an (offline) optimal algorithm that can perform both paid and free exchanges. This answers the question of the asymptotic relative power of the two models which has been open since Reingold and Westbrook [11] showed in 1996 that Sleator and Tarjan erred in [13] when they claimed that the two models are equivalent. [ABSTRACT FROM AUTHOR]
- Published
- 2020
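The cost model described in the abstract above is easy to make concrete. Below is a minimal Python sketch (function names are my own, not from the paper) of access costs, free exchanges, paid exchanges, and the Move-To-Front rule, which uses only free exchanges.

```python
# Sketch of the list update cost model from Sleator and Tarjan: accessing an
# item costs its current (1-based) position; after an access the item may be
# moved forward for free; any adjacent swap at other times costs 1 (a "paid
# exchange"). Names here are illustrative only.

def access_cost(lst, item):
    """Cost of accessing `item` = its 1-based position in `lst`."""
    return lst.index(item) + 1

def free_exchange(lst, item, new_pos):
    """Move the just-accessed item forward (towards the front) at no cost."""
    old_pos = lst.index(item)
    assert new_pos <= old_pos, "free exchanges may only move the item forward"
    lst.insert(new_pos, lst.pop(old_pos))
    return 0

def paid_exchange(lst, i):
    """Swap the adjacent items at positions i and i+1 at a cost of 1."""
    lst[i], lst[i + 1] = lst[i + 1], lst[i]
    return 1

def mtf_cost(initial, requests):
    """Serve a request sequence with Move-To-Front (free exchanges only)."""
    lst, total = list(initial), 0
    for r in requests:
        total += access_cost(lst, r)
        free_exchange(lst, r, 0)   # move the accessed item to the front
    return total

print(mtf_cost(['a', 'b', 'c'], ['c', 'c', 'b']))  # 3 + 1 + 3 = 7
```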
3. New Results on Competitive Analysis of Move To Middle(MTM) List Update Algorithm using Doubly Linked List.
- Author
-
Rakesh, Mohanty and Rakesh, Singh Kumar
- Subjects
ONLINE algorithms ,DIGITAL image processing ,COMPARATIVE studies ,CLOUD computing ,DATA structures - Abstract
On-line algorithms have been an active area of research for the last five decades, with immense theoretical and practical significance. Here the sequence of inputs is received and processed by the algorithm one by one, in order. At any instant of time, the algorithm knows the past and present inputs but has no knowledge of future inputs. Competitive analysis is a standard performance measure for on-line algorithms, in which the performance of an on-line algorithm is compared with that of an optimal offline algorithm. The list update problem is a well-studied problem in the area of on-line algorithms, with applications in data compression, dictionary maintenance, collision resolution in hash tables, symbol table organization in compilers, and computing convex hulls in computational geometry. From the literature it is known that Move To Front (MTF) is the best-performing deterministic list update algorithm using a singly linked list in all practical applications. In this paper, we study and analyse the performance of a deterministic on-line list update algorithm known as Move To Middle (MTM) using a doubly linked list. We make a first attempt to find the competitive ratio of the MTM algorithm, which had previously only been studied experimentally. Considering a doubly linked list as the data structure and a new variant of the standard full cost model, we obtain a new competitive analysis result and prove that MTM is 2-competitive. From this result, we show that MTM outperforms MTF. [ABSTRACT FROM AUTHOR]
- Published
- 2018
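The Move To Middle rule named in the abstract above can be sketched as follows. The paper's exact cost accounting (a doubly linked list with a new variant of the full cost model) is not fully specified in the abstract, so the accounting below, where an item can be reached from either end of the doubly linked list, is an illustrative assumption rather than the authors' precise model.

```python
# Illustrative sketch of a Move-To-Middle (MTM) list update step.
# Assumption for illustration only: with a doubly linked list an item is
# reachable from the head or the tail, so an access costs
# min(distance from front, distance from back) + 1.

def access_cost_dll(lst, item):
    i = lst.index(item)
    return min(i, len(lst) - 1 - i) + 1

def move_to_middle(lst, item):
    """After an access, move the item to the middle position of the list."""
    lst.remove(item)
    lst.insert(len(lst) // 2, item)

def mtm_cost(initial, requests):
    lst, total = list(initial), 0
    for r in requests:
        total += access_cost_dll(lst, r)
        move_to_middle(lst, r)
    return total, lst

print(mtm_cost(['a', 'b', 'c', 'd', 'e'], ['e', 'a', 'e']))  # (4, [...])
```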
4. Paid exchanges are worth the price
- Author
-
Marc P. Renault, Alejandro López-Ortiz, Adi Rosén, University of Waterloo [Waterloo], and Rosen, Adi
- Subjects
Sequence ,000 Computer science, knowledge, general works ,Theoretical computer science ,General Computer Science ,Competitive analysis ,Computer science ,[INFO.INFO-DS]Computer Science [cs]/Data Structures and Algorithms [cs.DS] ,Relative power ,[INFO.INFO-DS] Computer Science [cs]/Data Structures and Algorithms [cs.DS] ,0102 computer and information sciences ,02 engineering and technology ,Linked list ,01 natural sciences ,Upper and lower bounds ,Theoretical Computer Science ,List update problem ,010201 computation theory & mathematics ,Position (vector) ,020204 information systems ,Computer Science ,0202 electrical engineering, electronic engineering, information engineering ,ComputingMilieux_MISCELLANEOUS - Abstract
We consider the list update problem as defined in the seminal work on competitive analysis by Sleator and Tarjan [13] . An instance of the problem consists of a sequence of requests to access items in a linked list. After an item is accessed, that item can be moved to any position forward in the list at no cost (a move called free exchange), and, at any time, any two adjacent items can be swapped at a cost of 1 (a move called paid exchange). The cost to access an item is equal to its current position in the list. The goal is to dynamically rearrange the list so as to minimize the total cost (accrued from accesses and exchanges) over the request sequence. We show a lower bound of 12/11 on the worst-case ratio between the performance of an (offline) optimal algorithm that can only perform free exchanges and that of an (offline) optimal algorithm that can perform both paid and free exchanges. This answers the question of the asymptotic relative power of the two models which has been open since Reingold and Westbrook [11] showed in 1996 that Sleator and Tarjan erred in [13] when they claimed that the two models are equivalent.
- Published
- 2020
5. Stochastic Dominance and the Bijective Ratio of Online Algorithms
- Author
-
Pascal Schweitzer, Marc P. Renault, Spyros Angelopoulos, Recherche Opérationnelle (RO), LIP6, Sorbonne Université (SU)-Centre National de la Recherche Scientifique (CNRS)-Sorbonne Université (SU)-Centre National de la Recherche Scientifique (CNRS), University of Wisconsin-Madison, and Technical University of Kaiserslautern (TU Kaiserslautern)
- Subjects
FOS: Computer and information sciences ,Amortized analysis ,General Computer Science ,Generalization ,Computer science ,Applied Mathematics ,[INFO.INFO-DS]Computer Science [cs]/Data Structures and Algorithms [cs.DS] ,Stochastic dominance ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Stochastic ordering ,Computer Science Applications ,List update problem ,010201 computation theory & mathematics ,Computer Science - Data Structures and Algorithms ,0202 electrical engineering, electronic engineering, information engineering ,Bijection ,020201 artificial intelligence & image processing ,Data Structures and Algorithms (cs.DS) ,[INFO]Computer Science [cs] ,Online algorithm ,Greedy algorithm ,Algorithm - Abstract
Stochastic dominance is a technique for evaluating the performance of online algorithms that provides an intuitive, yet powerful stochastic order between the compared algorithms. Accordingly this holds for bijective analysis, which can be interpreted as stochastic dominance assuming the uniform distribution over requests. These techniques have been applied to some online problems, and have provided a clear separation between algorithms whose performance varies significantly in practice. However, there are situations in which they are not readily applicable due to the fact that they stipulate a stringent relation between the compared algorithms. In this paper, we propose remedies for these shortcomings. First, we establish sufficient conditions that allow us to prove the bijective optimality of a certain class of algorithms for a wide range of problems; we demonstrate this approach in the context of some well-studied online problems. Second, to account for situations in which two algorithms are incomparable or there is no clear optimum, we introduce the bijective ratio as a natural extension of (exact) bijective analysis. Our definition readily generalizes to stochastic dominance. This renders the concept of bijective analysis (and that of stochastic dominance) applicable to all online problems, and allows for the incorporation of other useful techniques such as amortized analysis. We demonstrate the applicability of the bijective ratio to one of the fundamental online problems, namely the continuous k-server problem on metrics such as the line, the circle, and the star. Among other results, we show that the greedy algorithm attains bijective ratios of O(k) consistently across these metrics. These results confirm extensive previous studies that gave evidence of the efficiency of this algorithm on said metrics in practice, which, however, is not reflected in competitive analysis.
- Published
- 2020
6. Analysis Price List Update Process of Price List Management on Direct Material Procurement Department at PT. X
- Author
-
Katherine Marcella Silvanus Sie and Debora Anne Yang Aysia
- Subjects
List update problem ,Procurement ,Database ,Vendor ,Process (engineering) ,Control (management) ,TheoryofComputation_GENERAL ,Control material ,Business ,computer.software_genre ,computer - Abstract
PT. X is one of the largest consumer goods manufacturing companies in Indonesia. As a manufacturing company, PT. X has a Direct Material (DIM) Procurement Department to control all direct materials, which also controls the price list. The process of controlling price list updates is still handled via email. The use of email for communication within PT. X and with vendors causes emails to pile up in the DIM Procurement staff’s inbox, which makes the process of updating material price lists long and hard to control. Approximately 4,500 direct materials and approximately 130 active vendors are handled by the DIM Procurement department annually. The large number of materials and vendors handled by DIM Procurement is one of the reasons why controlling the price list is important. Digitalization is one way to help control and simplify the price list update process. The price list update process is improved by designing the concept of a platform that helps improve the performance of the DIM Procurement Department. The platform, called the price list catalog, can be used to store and record all vendor price list changes. The proposed price list update process supports the DIM Procurement department so that it can easily control the material price list, provide a trusted price list, and have it available on time. Keywords: price list catalog; price list control; price list data; pricing management
- Published
- 2020
7. New Results on Competitive Analysis of Move To Middle(MTM) List Update Algorithm using Doubly Linked List
- Author
-
Singh Kumar Rakesh and Mohanty Rakesh
- Subjects
List update problem ,Competitive analysis ,Doubly linked list ,Symbol table ,Computer science ,General Earth and Planetary Sciences ,Linked list ,Computational geometry ,Data structure ,Algorithm ,Hash table ,General Environmental Science - Abstract
On-line algorithms have been an active area of research for the last five decades, with immense theoretical and practical significance. Here the sequence of inputs is received and processed by the algorithm one by one, in order. At any instant of time, the algorithm knows the past and present inputs but has no knowledge of future inputs. Competitive analysis is a standard performance measure for on-line algorithms, in which the performance of an on-line algorithm is compared with that of an optimal offline algorithm. The list update problem is a well-studied problem in the area of on-line algorithms, with applications in data compression, dictionary maintenance, collision resolution in hash tables, symbol table organization in compilers, and computing convex hulls in computational geometry. From the literature it is known that Move To Front (MTF) is the best-performing deterministic list update algorithm using a singly linked list in all practical applications. In this paper, we study and analyse the performance of a deterministic on-line list update algorithm known as Move To Middle (MTM) using a doubly linked list. We make a first attempt to find the competitive ratio of the MTM algorithm, which had previously only been studied experimentally. Considering a doubly linked list as the data structure and a new variant of the standard full cost model, we obtain a new competitive analysis result and prove that MTM is 2-competitive. From this result, we show that MTM outperforms MTF.
- Published
- 2018
8. Computational Geometry Column 66
- Author
-
Adrian Dumitrescu and Khaled Elbassioni
- Subjects
021103 operations research ,Multidisciplinary ,Theoretical computer science ,Competitive analysis ,Linear programming ,Computer science ,Rounding ,Open problem ,0211 other engineering and technologies ,0102 computer and information sciences ,02 engineering and technology ,Data structure ,Computational geometry ,01 natural sciences ,Column (database) ,List update problem ,010201 computation theory & mathematics - Abstract
The list update problem is a well studied online problem in the area of self-adjusting data structures. Understanding the offline version of this problem is crucial because of the role it plays in the competitive analysis of online list update algorithms. In this paper we settle a long-standing open problem by showing that the offline list update problem is NP-hard.
- Published
- 2017
9. Child based Level-Wise List Scheduling Algorithm
- Author
-
Lokesh Kr. Arya and Amandeep Verma
- Subjects
Earliest deadline first scheduling ,Rate-monotonic scheduling ,List update problem ,Operations research ,Computer science ,Lottery scheduling ,Flow shop scheduling ,Dynamic priority scheduling ,Self-organizing list ,Deadline-monotonic scheduling ,Computer Science Applications ,Education - Published
- 2017
10. SIGACT News Online Algorithms Column 31
- Author
-
Christoph Ambühl
- Subjects
Multidisciplinary ,Theoretical computer science ,Competitive analysis ,Computer science ,Open problem ,0102 computer and information sciences ,Data structure ,computer.software_genre ,01 natural sciences ,Column (database) ,List update problem ,010201 computation theory & mathematics ,Data mining ,Online algorithm ,computer - Abstract
The list update problem is a well studied online problem in the area of self-adjusting data structures. Understanding the offline version of this problem is crucial because of the role it plays in the competitive analysis of online list update algorithms. In this paper we settle a long-standing open problem by showing that the offline list update problem is NP-hard.
- Published
- 2017
11. Turbo Receiver With Dual-Loop Dual-List Update for Inter-Cell Interference Mitigation in Heterogeneous Networks
- Author
-
Yi-Yao Lan and Tzi-Dar Chiueh
- Subjects
biology ,Computer science ,Applied Mathematics ,Turbo ,Detector ,Real-time computing ,Co-channel interference ,Word error rate ,020302 automobile design & engineering ,020206 networking & telecommunications ,02 engineering and technology ,Spectral efficiency ,biology.organism_classification ,Computer Science Applications ,Spatial multiplexing ,List update problem ,0203 mechanical engineering ,Interference (communication) ,Modulation ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,Decoding methods - Abstract
Inter-cell interference (ICI) may considerably degrade the detection performance of receivers in heterogeneous networks (HetNets). To increase the spectrum efficiency per area, the cell ranges of small cells may be expanded, making the ICI problem more severe. Furthermore, to pursue higher data rates, spatial multiplexing is often applied, incurring inter-antenna interference (IAI). This paper presents a turbo receiver with a dual-loop dual-list update (TRDDU) structure that efficiently detects data signals degraded by both ICI and IAI. The proposed lists maintain several likely data vectors and interference vectors, helping the receiver determine the most likely solution more efficiently. In addition, a dual-loop update algorithm was proposed to enhance the iteration efficiency. Moreover, two design techniques were proposed to reduce the detection complexity. Simulation results showed that the error rate performance of TRDDU is close to that of a receiver in the ICI-free scenario. The complexity needed by TRDDU is only 3.5% that of a receiver that considers all possible interference vectors. In addition, the TRDDU complexity is only approximately 1.56 times that of a successive-interference-cancellation-based receiver. Finally, an over-the-air experiment was conducted to validate the effectiveness of the proposed TRDDU, and the results demonstrated the feasibility of this receiver in the field.
- Published
- 2017
12. Multiple CRC-aided variable successive cancellation list decoder of polar codes
- Author
-
Zhao Shuang, Cao Miao, and Zhao Sheng-mei
- Subjects
Computer Networks and Communications ,020208 electrical & electronic engineering ,020206 networking & telecommunications ,Data_CODINGANDINFORMATIONTHEORY ,02 engineering and technology ,Part number ,List update problem ,Cyclic redundancy check ,Signal Processing ,Path (graph theory) ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,Self-organizing list ,Algorithm ,Time complexity ,Decoding methods ,Information Systems ,Mathematics - Abstract
In order to change the path candidates, reduce the average list size, and make more paths pass the cyclic redundancy check (CRC), a multiple CRC-aided variable successive cancellation list (SCL) decoding algorithm is proposed. In the decoding algorithm, the unfrozen bits are divided into several parts and each part is concatenated with a corresponding CRC code, except the last part, which is concatenated with a CRC code over all unfrozen bits. CRC detection is performed for each part, and only the paths satisfying that part's CRC become path candidates. A variable list size is set up for each part to reduce the time complexity, until one surviving path in each part passes its corresponding CRC. The results show that the proposed algorithm reduces the average list size and the frame error rate (FER), and performs better as the number of parts increases.
- Published
- 2017
13. On the Separation and Equivalence of Paging Strategies and Other Online Algorithms
- Author
-
Spyros Angelopoulos, Alejandro López-Ortiz, Reza Dorrigiv, Recherche Opérationnelle (RO), LIP6, Sorbonne Université (SU)-Centre National de la Recherche Scientifique (CNRS)-Sorbonne Université (SU)-Centre National de la Recherche Scientifique (CNRS), Samsung Research America [San José], and University of Waterloo, Waterloo, ON, Canada
- Subjects
General Computer Science ,Computer science ,Heuristic (computer science) ,Applied Mathematics ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Computer Science Applications ,Set (abstract data type) ,List update problem ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,Bijection ,Locality of reference ,Paging ,020201 artificial intelligence & image processing ,[INFO]Computer Science [cs] ,Online algorithm ,Equivalence (measure theory) ,Algorithm - Abstract
We introduce a new technique for the analysis of online algorithms, namely bijective analysis, that is based on pair-wise comparison of the costs incurred by the algorithms. Under this framework, an algorithm A is no worse than an algorithm B if there is a bijection π defined over all request sequences of a given size such that, for every request sequence σ, the cost of A on σ is no more than the cost of B on π(σ). We also study a relaxation of bijective analysis, termed average analysis, in which we compare two algorithms based on their corresponding average costs over request sequences of a given size. We apply these new techniques in the context of two fundamental online problems, namely paging and list update. For paging, we show that any two lazy online algorithms are equivalent under bijective analysis. This result demonstrates that, without further assumptions on characteristics of request sequences, it is unlikely, or even undesirable, to separate online paging algorithms based on their performance. However, once we restrict the set of request sequences to those exhibiting locality of reference, and in particular using a model of locality due to Albers et al. (J Comput Syst Sci 70(2):145–175, 2005), we demonstrate that Least-Recently-Used (LRU) is the unique optimal strategy according to average analysis. This is, to our knowledge, the first deterministic model to provide full theoretical backing to the empirical observation that LRU is preferable in practice. Concerning list update, we obtain similar conclusions, in terms of the bijective comparison of any two online algorithms, and in terms of the superiority (albeit not necessarily unique) of the Move-To-Front (MTF) heuristic in the presence of locality of reference.
- Published
- 2019
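The bijective comparison defined in the abstract above can be checked exhaustively on tiny instances: a bijection π with cost_A(σ) ≤ cost_B(π(σ)) for every σ exists exactly when the sorted cost profiles of the two algorithms dominate pointwise. The sketch below applies this to two classic list update rules; the item set and sequence length are arbitrary illustrative choices.

```python
# Exhaustive check of bijective dominance between two list update rules over
# all request sequences of a fixed length. A suitable bijection exists iff
# the sorted cost profiles dominate pointwise.
from itertools import product

def serve(initial, requests, rule):
    lst, total = list(initial), 0
    for r in requests:
        total += lst.index(r) + 1      # full cost model: position of the item
        rule(lst, r)                   # rule rearranges the list after access
    return total

def mtf(lst, r):                       # Move-To-Front
    lst.remove(r); lst.insert(0, r)

def transpose(lst, r):                 # move the accessed item one step forward
    i = lst.index(r)
    if i > 0:
        lst[i - 1], lst[i] = lst[i], lst[i - 1]

def bijectively_no_worse(rule_a, rule_b, items=('a', 'b', 'c'), length=4):
    seqs = list(product(items, repeat=length))
    costs_a = sorted(serve(items, s, rule_a) for s in seqs)
    costs_b = sorted(serve(items, s, rule_b) for s in seqs)
    return all(x <= y for x, y in zip(costs_a, costs_b))

print(bijectively_no_worse(mtf, transpose))  # does MTF dominate Transpose here?
```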
14. Self-adjusting Linear Networks
- Author
-
Ingo van Duijn, Chen Avin, and Stefan Schmid
- Subjects
Theoretical computer science ,Competitive analysis ,Computer science ,Node (networking) ,Distributed computing ,Order (ring theory) ,Control reconfiguration ,Workload ,Self adjusting ,Upper and lower bounds ,Telecommunications network ,List update problem ,Distributed algorithm ,Locality of reference ,Online algorithm - Abstract
Emerging networked systems become increasingly flexible, reconfigurable, and "self-*". This introduces an opportunity to adjust networked systems in a demand-aware manner, leveraging spatial and temporal locality in the workload for online optimizations. However, it also introduces a tradeoff: while more frequent adjustments can improve performance, they also entail higher reconfiguration costs. This paper initiates the formal study of list networks which self-adjust to the demand in an online manner, striking a balance between the benefits and costs of reconfigurations. We show that the underlying algorithmic problem can be seen as a distributed generalization of the classic dynamic list update problem known from self-adjusting data structures: in a network, requests can occur between node pairs. This distributed version turns out to be significantly harder than the classical problem it generalizes. Our main results are an Ω(log n) lower bound on the competitive ratio, and a (distributed) online algorithm that is O(log n)-competitive if the communication requests are issued according to a linear order.
- Published
- 2019
15. Lossless Image Compression Using List Update Algorithms
- Author
-
Neil D. B. Bruce, Rezaul Karim, Shahin Kamali, and Arezoo Abdollahi
- Subjects
Lossless compression ,050101 languages & linguistics ,Sequence ,Pixel ,Computer science ,05 social sciences ,Locality ,02 engineering and technology ,Image (mathematics) ,List update problem ,Matrix (mathematics) ,Encoding (memory) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,Algorithm - Abstract
We consider lossless image compression using a technique similar to bZip2 for sequential data. Given an image represented with a matrix of pixel values, we consider different approaches for linearising the image into a sequence and then encoding the sequence using the Move-To-Front list update algorithm. In both linearisation and encoding stages, we exploit the locality present in the images to achieve encodings that are as compressed as possible. We consider a few approaches, and in particular Hilbert space-filling curves, for linearising the image. Using a natural model of locality for images introduced by Albers et al. [J. Comput. Syst. Sci. 2015], we establish the advantage of Hilbert space-filling curves over other linearisation techniques such as row-major or column-major curves for preserving the locality during the linearisation. We also use a result by Angelopoulos and Schweitzer [J. ACM 2013] to select Move-To-Front as the best list update algorithm for encoding the linearised sequence. In summary, our theoretical results show that a combination of Hilbert space-filling curves and Move-To-Front encoding has advantage over other approaches. We verify this with experiments on a dataset consisting of different categories of images.
- Published
- 2019
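The encoding stage described above can be sketched in a few lines: linearise the pixel matrix and replace each pixel value by its current index in a Move-To-Front list, so that locality turns into many small indices that compress well. Row-major linearisation is used here for brevity; the paper argues for Hilbert space-filling curves instead.

```python
# Move-To-Front encoding of a linearised image, as in bzip2-style pipelines.
# Row-major linearisation is used for brevity; values are assumed to be
# 8-bit pixel intensities.

def row_major(image):
    """Flatten a matrix of pixel values row by row."""
    return [v for row in image for v in row]

def mtf_encode(sequence, alphabet_size=256):
    symbols = list(range(alphabet_size))   # current MTF list
    output = []
    for v in sequence:
        i = symbols.index(v)               # emitted code = current index
        output.append(i)
        symbols.insert(0, symbols.pop(i))  # move the symbol to the front
    return output

image = [[10, 10, 11],
         [10, 12, 12],
         [11, 11, 10]]
print(mtf_encode(row_major(image)))  # nearby pixels yield many small indices
```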
16. Balancing the problem list with an advantage inventory
- Author
-
Sarah H. Kagan
- Subjects
Patient Transfer ,Information retrieval ,030504 nursing ,Computer science ,Problem list ,Social Support ,Patient Discharge ,03 medical and health sciences ,List update problem ,0302 clinical medicine ,Inventory theory ,Humans ,Female ,Independent Living ,030212 general & internal medicine ,0305 other medical science ,Gerontology ,Aged - Published
- 2017
17. PMU14 China's 2019 National Reimbursement Drug LIST Update: Implications on Global and Local Manufacturer Strategy
- Author
-
M. Hunt, S. Park, W. Zhang, and A. Du
- Subjects
Finance ,List update problem ,business.industry ,Health Policy ,Economics, Econometrics and Finance (miscellaneous) ,Business ,China ,Pharmacology, Toxicology and Pharmaceutics (miscellaneous) ,Reimbursement - Published
- 2020
18. The itinerant list update problem
- Author
-
Olver, Neil, Pruhs, Kirk, Schewior, Kevin, Sitters, René, Stougie, Leen, Epstein, Leah, Erlebach, Thomas, Epstein, Leah, Erlebach, Thomas, Free University of Amsterdam, Department of Computer Science - University of Pittsburgh, University of Pittsburgh (PITT), Pennsylvania Commonwealth System of Higher Education (PCSHE)-Pennsylvania Commonwealth System of Higher Education (PCSHE), Technische Universität Munchen - Université Technique de Munich [Munich, Allemagne] (TUM), Vrije Universiteit Amsterdam [Amsterdam] (VU), Centrum Wiskunde & Informatica (CWI), Equipe de recherche européenne en algorithmique et biologie formelle et expérimentale (ERABLE), Inria Grenoble - Rhône-Alpes, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria), Amsterdam Business Research Institute, Tinbergen Institute, and Econometrics and Operations Research
- Subjects
Online and offline ,List update problem ,Theoretical computer science ,010201 computation theory & mathematics ,Computer science ,Pointer (computer programming) ,0202 electrical engineering, electronic engineering, information engineering ,[INFO]Computer Science [cs] ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,020202 computer hardware & architecture - Abstract
We introduce the itinerant list update problem (ILU), which is a relaxation of the classic list update problem in which the pointer no longer has to return to a home location after each request. The motivation to introduce ILU arises from the fact that it naturally models the problem of track memory management in Domain Wall Memory. Both online and offline versions of ILU arise, depending on specifics of this application. First, we show that ILU is essentially equivalent to a dynamic variation of the classical minimum linear arrangement problem (MLA), which we call DMLA. Both ILU and DMLA are very natural, but do not appear to have been studied before. In this work, we focus on the offline ILU and DMLA problems. We then give an O(log² n)-approximation algorithm for these problems. While the approach is based on well-known divide-and-conquer approaches for the standard MLA problem, the dynamic nature of these problems introduces substantial new difficulties. We also show an Ω(log n) lower bound on the competitive ratio for any randomized online algorithm for ILU. This shows that online ILU is harder than online LU, for which O(1)-competitive algorithms, like Move-To-Front, are known.
- Published
- 2018
19. Route problem with constraints depending on a list of tasks
- Author
-
A. A. Chentsov and Alexander G. Chentsov
- Subjects
Current time ,0209 industrial biotechnology ,Mathematical optimization ,Route problem ,General Mathematics ,010102 general mathematics ,02 engineering and technology ,Function (mathematics) ,01 natural sciences ,Dynamic programming ,List update problem ,020901 industrial engineering & automation ,0101 mathematics ,Algorithm ,Mathematics - Abstract
An additive route problem with precedence conditions is considered, in which the cost function and the move constraints both depend on the list of tasks that have not been performed by the current time. The problem is solved by applying a dynamic programming method that takes both of these factors into account and is implemented by constructing a (generally) incomplete array of Bellman function values.
- Published
- 2015
20. A time optimized scheme for top-k list maintenance over incomplete data streams
- Author
-
Christos Anagnostopoulos, Stathes Hadjiefthymiades, and Kostas Kolomvatsos
- Subjects
Information Systems and Management ,Database ,Data stream mining ,Computer science ,computer.software_genre ,Computer Science Applications ,Theoretical Computer Science ,List update problem ,Artificial Intelligence ,Control and Systems Engineering ,Optimal stopping ,Data mining ,Self-organizing list ,computer ,Software - Abstract
A large number of contemporary research efforts focus on handling incomplete data streams. These efforts usually focus on the creation and maintenance of top-k lists, useful for providing efficient responses to top-k queries. When large volumes of data accumulate at high rates, the problem becomes more intense, as an efficient method for maintaining the top-k list is imperative. In this paper, we focus on the behavior of an Observer Entity (OE) responsible for observing the incoming data and initiating the maintenance process of the top-k list. The maintenance process involves the calculation of a score for each object and the update of the top-k list. We adopt the principles of Optimal Stopping Theory (OST) and introduce a scheme that determines the appropriate time at which the OE should initiate the maintenance process. Due to the incomplete data setting, the high rate of the incoming data and the large volume of data, the maintenance process is initiated only when the OE has the appropriate amount of information for providing a top-k list. In contrast to other research approaches, we do not maintain any additional sets of objects. Instead, we attempt to minimize the number of necessary processing tasks over the list of objects. We present a mathematical analysis of our scheme and an extensive experimental evaluation. The comparison with other models shows that the proposed model manages the list efficiently and minimizes the time required to produce the final top-k list.
- Published
- 2015
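The maintenance step the abstract refers to (scoring incoming objects and refreshing the top-k list) can be kept cheap with a bounded min-heap, as in the sketch below. The optimal-stopping rule that decides when the Observer Entity triggers this step is the paper's actual contribution and is not modelled here; class and variable names are illustrative.

```python
# Minimal top-k list maintenance over a stream using a bounded min-heap.
# Only the maintenance step is illustrated; the optimal-stopping trigger
# from the paper is not modelled.
import heapq

class TopKList:
    def __init__(self, k):
        self.k = k
        self._heap = []  # min-heap of (score, object_id); the root is the worst kept item

    def offer(self, object_id, score):
        """Offer an object's score, keeping only the best k seen so far."""
        if len(self._heap) < self.k:
            heapq.heappush(self._heap, (score, object_id))
        elif score > self._heap[0][0]:           # better than the current worst
            heapq.heapreplace(self._heap, (score, object_id))

    def ranked(self):
        """Return the current top-k list, best score first."""
        return sorted(self._heap, reverse=True)

topk = TopKList(k=3)
for oid, score in [("a", 0.4), ("b", 0.9), ("c", 0.1), ("d", 0.7), ("e", 0.8)]:
    topk.offer(oid, score)
print(topk.ranked())  # [(0.9, 'b'), (0.8, 'e'), (0.7, 'd')]
```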
21. Comparing Linear Search and Binary Search Algorithms to Search an Element from a Linear List Implemented through Static Array, Dynamic Array and Linked List
- Author
-
CK Kumbharana and Vimal P. Parmar
- Subjects
Incremental heuristic search ,Binary search algorithm ,Theoretical computer science ,Computer science ,Linked list ,computer.software_genre ,Uniform binary search ,List update problem ,Jump search ,Search algorithm ,Binary search tree ,Beam stack search ,Combinatorial search ,Beam search ,Data mining ,Self-organizing list ,computer ,Algorithm ,Analysis of algorithms ,Linear search - Abstract
Retrieval is the most fundamental requirement for any kind of computing application, and it requires search operations to be performed over massive databases implemented with various data structures. Searching for an element in a list is a fundamental task in computing. A number of algorithms have been developed for searching an element, among which linear search and binary search are the most popular. In this paper, the researchers compare both algorithms as implemented on various data structures and seek a solution for implementing binary search on a linked linear list. The paper also analyzes both algorithms, to some extent, for applicability and execution efficiency, and examines a few data structures for implementing them. Finally, based on the linear search and binary search algorithms, one algorithm is designed to operate on a linked linear list.
- Published
- 2015
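The two algorithms compared in the entry above, in their simplest array-based form. Binary search assumes random access to a sorted sequence, which is exactly why implementing it over a linked list (where the midpoint must be reached by traversal) is the harder case the paper discusses.

```python
# Linear search works on any list; binary search needs a sorted list but
# halves the remaining range at each step (O(n) vs O(log n) comparisons).

def linear_search(items, target):
    for i, v in enumerate(items):
        if v == target:
            return i
    return -1

def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = [7, 3, 9, 1, 5]
print(linear_search(data, 9))          # 2 (position in the unsorted list)
print(binary_search(sorted(data), 9))  # 4 (position in the sorted list)
```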
22. Cache-Centric Video Recommendation
- Author
-
Dilip Kumar Krishnappa, Pål Halvorsen, Michael Zink, and Carsten Griwodz
- Subjects
Hardware_MEMORYSTRUCTURES ,Information retrieval ,Multimedia ,Computer Networks and Communications ,Computer science ,Server load ,Recommender system ,computer.software_genre ,University campus ,List update problem ,Hardware and Architecture ,Content distribution ,Hit rate ,Overall performance ,Cache ,Self-organizing list ,Latency (engineering) ,computer - Abstract
In this article, we take advantage of the user behavior of requesting videos from the top of the related list provided by YouTube to improve the performance of YouTube caches. We recommend that local caches reorder the related lists associated with YouTube videos, presenting the cached content above noncached content. We argue that the likelihood that viewers select content from the top of the related list is higher than selection from the bottom, and pushing contents already in the cache to the top of the related list would increase the likelihood of choosing cached content. To verify that the position on the list really is the selection criterion more dominant than the content itself, we conduct a user study with 40 YouTube-using volunteers who were presented with random related lists in their everyday YouTube use. After confirming our assumption, we analyze the benefits of our approach by an investigation that is based on two traces collected from a university campus. Our analysis shows that the proposed reordering approach for related lists would lead to a 2 to 5 times increase in cache hit rate compared to an approach without reordering the related list. This increase in hit rate would lead to reduction in server load and backend bandwidth usage, which in turn reduces the latency in streaming the video requested by the viewer and has the potential to improve the overall performance of YouTube's content distribution system. An analysis of YouTube's recommendation system reveals that related lists are created from a small pool of videos, which increases the potential for caching content from related lists and reordering based on the content in the cache.
- Published
- 2015
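The reordering proposed in the article above amounts to a stable partition of the related list: cached videos first, everything else after, preserving the recommender's original order within each group. The sketch below is an illustrative reading of that idea; function and variable names are not from the paper.

```python
# Cache-centric reordering of a related-video list: items already in the
# local cache are promoted to the top, preserving the recommender's relative
# order within the cached and non-cached groups (a stable partition).

def reorder_related_list(related, cached):
    cached = set(cached)
    in_cache = [v for v in related if v in cached]
    not_in_cache = [v for v in related if v not in cached]
    return in_cache + not_in_cache

related = ["vid7", "vid3", "vid9", "vid1", "vid5"]
cache = {"vid9", "vid5"}
print(reorder_related_list(related, cache))
# ['vid9', 'vid5', 'vid7', 'vid3', 'vid1']
```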
23. LazySorted: A Lazily, Partially Sorted Python List
- Author
-
Naftali Harris
- Subjects
Statistics and Probability ,Insertion sort ,Bubble sort ,Computer science ,Programming language ,Sorted array ,computer.software_genre ,Tree sort ,List update problem ,Association list ,Data_FILES ,Statistics, Probability and Uncertainty ,Self-organizing list ,computer ,lcsh:Statistics ,lcsh:HA1-4737 ,Software ,Quicksort - Abstract
LazySorted is a Python C extension implementing a partially and lazily sorted list data structure. It solves a common problem faced by programmers, in which they need just part of a sorted list, like its middle element (the median), but sort the entire list to get it. LazySorted presents them with the abstraction that they are working with a fully sorted list, while actually only sorting the list partially with quicksort partitions to return the requested sub-elements. This enables programmers to use naive "sort first" algorithms but nonetheless attain linear run-times when possible. LazySorted may serve as a drop-in replacement for the built-in sorted function in most cases, and can sometimes achieve run-times more than 7 times faster.
- Published
- 2015
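LazySorted's underlying idea, answering order-statistic queries by partially sorting with quicksort partitions, can be illustrated with a plain quickselect. This is not LazySorted's actual C implementation, only the technique it builds on.

```python
# Quickselect: find the k-th smallest element using quicksort partitions,
# sorting the list only as much as the query requires (expected linear time).
import random

def quickselect(items, k):
    """Return the k-th smallest element (0-based) of `items`."""
    items = list(items)                 # work on a copy
    lo, hi = 0, len(items) - 1
    while True:
        if lo == hi:
            return items[lo]
        pivot = items[random.randint(lo, hi)]
        # three-way partition of the active range around the pivot
        lt = [x for x in items[lo:hi + 1] if x < pivot]
        eq = [x for x in items[lo:hi + 1] if x == pivot]
        gt = [x for x in items[lo:hi + 1] if x > pivot]
        items[lo:hi + 1] = lt + eq + gt
        if k < lo + len(lt):
            hi = lo + len(lt) - 1       # answer lies in the "< pivot" block
        elif k < lo + len(lt) + len(eq):
            return pivot                # answer equals the pivot
        else:
            lo = lo + len(lt) + len(eq) # answer lies in the "> pivot" block

data = [9, 2, 7, 4, 5, 1, 8]
print(quickselect(data, len(data) // 2))   # median of the list: 5
```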
24. Technical Innovation: The Automated Residency Match Rank List
- Author
-
Colin Strickland and David Rubinstein
- Subjects
Matching (statistics) ,Information retrieval ,business.industry ,Process (engineering) ,Rank (computer programming) ,Internship and Residency ,United States ,Task (project management) ,Ranking (information retrieval) ,03 medical and health sciences ,List update problem ,0302 clinical medicine ,Humans ,Medicine ,Radiology, Nuclear Medicine and imaging ,030212 general & internal medicine ,Macro ,Self-organizing list ,Radiology ,030223 otorhinolaryngology ,business - Abstract
The creation of the final rank list for the National Residency Matching Program every year is a laborious task requiring the time and input of numerous faculty members and residents. This article describes the creation of an automated visual rank list to efficiently organize and guide discussion at the yearly rank meeting so that the task may be efficiently and fairly completed. The rank list was created using a PowerPoint (Microsoft) macro that can pull information directly from a spreadsheet to generate a visual rank list that can be modified on-the-fly during the final rank list meeting. An automatically created visual rank list helps facilitate an efficient meeting and creates an open and transparent process leading to the final ranking.
- Published
- 2016
25. A UNIQUE SORTING ALGORITHM WITH LINEAR TIME & SPACE COMPLEXITY
- Author
-
Somsubhra Gupta and Sanjib Palui
- Subjects
Divide and conquer algorithms ,List update problem ,Sorting algorithm ,Computer science ,Worst-case complexity ,Sorting ,Self-organizing list ,External sorting ,Time complexity ,Algorithm - Abstract
Sorting a list means selecting the particular permutation of the members of that list in which the members appear in increasing or decreasing order. A sorted list is a prerequisite for several optimized operations, such as searching for an element in a list, locating or removing an element from a list, and merging two sorted lists in a database. As the volume of information around us grows day by day and these data must be managed in real-life situations, efficient and cost-effective sorting algorithms are required. There are several fundamental and problem-oriented sorting algorithms, yet sorting still attracts a great deal of research, perhaps due to the complexity of solving it efficiently and effectively despite its simple and familiar statement. Algorithms that accomplish the same work with the same efficiency but by different mechanisms differ in their required time and space. For that reason, an algorithm is chosen according to one’s needs with respect to space complexity and time complexity. Nowadays, memory is available on the market at comparatively low cost, so time complexity is the major issue for an algorithm. The presented approach sorts a list with linear time and space complexity using the divide-and-conquer rule, by partitioning a problem into n (the input size) subproblems that are then solved recursively. The time and space required by the algorithm are optimized by reducing the height of the recursion tree, and the reduced height is small compared to the problem size. Hence, the asymptotic efficiency of this algorithm is very high with respect to time and space.
- Published
- 2014
26. An efficient algorithm for molecular dynamics simulation on hybrid CPU-GPU computing platforms
- Author
-
Jie Liang, Wei Ai, Yu Ye, and Dapu Li
- Subjects
Correctness ,Cost efficiency ,Computer science ,Parallel algorithm ,0102 computer and information sciences ,Parallel computing ,Load balancing (computing) ,01 natural sciences ,010305 fluids & plasmas ,Computational science ,List update problem ,CUDA ,010201 computation theory & mathematics ,0103 physical sciences ,Central processing unit ,General-purpose computing on graphics processing units - Abstract
In this article, an efficient parallel algorithm for a hybrid CPU-GPU platform is proposed to enable large-scale molecular dynamics (MD) simulations of the metal solidification process. The program implementing the parallel algorithm on the hybrid CPU-GPU platform shows better performance than the program based on previous algorithms running on a CPU cluster platform; in particular, the total execution time of the new program is markedly decreased. Notably, because of the modified load balancing method, the neighbor list update time is approximately zero. The parallel program based on the CUDA+OpenMP model shows a speedup factor of about 6 over the 16-core parallel program based on the MPI+OpenMP model, and the best computational efficiency is achieved for a simulation system containing 10,000,000 aluminum atoms. Finally, the good agreement between the theoretical and experimental results verifies the correctness of the algorithm.
- Published
- 2016
27. Machine Learning Approach to Generate Pareto Front for List-scheduling Algorithms
- Author
-
Nam Khanh Pham, Khin Mi Mi Aung, and Akash Kumar
- Subjects
0209 industrial biotechnology ,Mathematical optimization ,Theoretical computer science ,Computer science ,Design space exploration ,Pareto principle ,02 engineering and technology ,List scheduling ,Multi-objective optimization ,Scheduling (computing) ,List update problem ,020901 industrial engineering & automation ,0202 electrical engineering, electronic engineering, information engineering ,Systems design ,020201 artificial intelligence & image processing ,Self-organizing list - Abstract
List Scheduling is one of the most widely used techniques for scheduling due to its simplicity and efficiency. In traditional list-based schedulers, a cost/priority function is used to compute the priority of tasks/jobs and put them in an ordered list. The cost function has been becoming more and more complex to cover increasing number of constraints in the system design. However, most of the existing list-based schedulers implement a static priority function that usually provides only one schedule for each task graph input. Therefore, they may not be able to satisfy the desire of system designers, who want to examine the trade-off between a number of design requirements (performance, power, energy, reliability ...). To address this problem, we propose a framework to utilize the Genetic Algorithm (GA) for exploring the design space and obtaining Pareto-optimal design points. Furthermore, multiple regression techniques are used to build predictive models for the Pareto fronts to limit the execution time of GA. The models are built using training task graph datasets and applied on incoming task graphs. The Pareto fronts for incoming task graphs are produced in time 2 orders of magnitude faster than the traditional GA, with only 4% degradation in the quality.
- Published
- 2016
28. Advanced move-to-front for list access problem
- Author
-
Kalyan Das, Santwana Hota, Jagamohan Padhi, and Shradha S. Nanda
- Subjects
Sequence ,Database ,business.industry ,Computer science ,Context (language use) ,Linked list ,computer.software_genre ,Data structure ,List update problem ,Algorithm design ,Self-organizing list ,business ,computer ,Computer network ,Linear search - Abstract
In the context of linear search, the list accessing problem is a well-known research problem. An unsorted linear list of distinct elements and a request sequence are the input to the list accessing problem, where an access operation is performed for each requested element of the list. The main objective of a list accessing algorithm is to minimize the access cost while processing a request sequence on the list. We name our algorithm Advanced Move-To-Front (AMTF), in which the list is updated after each access based on the frequency count of the accessed element's position in the list. In our paper we compare the Advanced MTF cost with the MTF (Move-To-Front) cost of the accessed elements in the given request sequence, and we find that on most of the data sets the Advanced MTF cost is lower than the MTF cost.
- Published
- 2016
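The abstract above does not spell out the exact AMTF update rule, so the sketch below is only one plausible frequency-count reading (keep the list ordered by how often each element has been accessed, essentially the classical Frequency-Count heuristic), shown side by side with plain Move-To-Front. It should not be taken as the authors' algorithm.

```python
# A frequency-count flavoured list update rule sketched next to plain
# Move-To-Front. NOTE: this is a generic count-based variant for
# illustration, not the paper's exact AMTF rule.

def serve_mtf(initial, requests):
    lst, cost = list(initial), 0
    for r in requests:
        i = lst.index(r)
        cost += i + 1
        lst.insert(0, lst.pop(i))          # move the accessed item to the front
    return cost

def serve_frequency_count(initial, requests):
    lst, cost = list(initial), 0
    count = {x: 0 for x in initial}
    for r in requests:
        i = lst.index(r)
        cost += i + 1
        count[r] += 1
        # re-place the accessed item just behind items accessed at least as often
        lst.pop(i)
        j = 0
        while j < len(lst) and count[lst[j]] >= count[r]:
            j += 1
        lst.insert(j, r)
    return cost

reqs = ['c', 'c', 'c', 'a', 'b', 'c', 'c']
print(serve_mtf(['a', 'b', 'c'], reqs),
      serve_frequency_count(['a', 'b', 'c'], reqs))  # prints: 14 12
```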
29. A Fast Algorithm to Build New Users Similarity List in Neighbourhood-Based Collaborative Filtering
- Author
-
Zhigang Lu and Hong Shen
- Subjects
Computer science ,business.industry ,Recommender system ,computer.software_genre ,Machine learning ,Fast algorithm ,List update problem ,Scalability ,Collaborative filtering ,Data mining ,Artificial intelligence ,Self-organizing list ,business ,computer ,Neighbourhood (mathematics) - Abstract
Neighbourhood-based Collaborative Filtering (CF) has been applied in the industry for several decades because of its easy implementation and high recommendation accuracy. As the core of neighbourhood-based CF, the task of dynamically maintaining users’ similarity list is challenged by cold-start problem and scalability problem. Recently, several methods are presented on addressing the two problems. However, these methods require mn steps to compute the similarity list against the kNN attack, where m and n are the number of items and users in the system respectively. Observing that the k new users from the kNN attack, with enough recommendation data, have the same rating list, we present a faster algorithm, TwinSearch, to avoid computing and sorting the similarity list for each new user repeatedly to save the time. The computational cost of our algorithm is 1/125 of the existing methods. Both theoretical and experimental results show that the TwinSearch Algorithm achieves better running time than the traditional method.
- Published
- 2016
30. A novel hybrid approach to list accessing problem using BIT algorithm
- Author
-
Banhisikha Samanta and Shiba Prasad Dash
- Subjects
List update problem ,Bit (horse) ,Theoretical computer science ,Computer science ,Sorting ,Algorithm design ,Self-organizing list ,Hybrid approach ,Algorithm ,Linear search - Abstract
The list accessing problem is associated with the searching and sorting of unordered records arranged linearly. Many approaches, such as self-organizing linear lists, have been invented over more than four decades to obtain a minimal cost for accessing the list. This paper introduces a hybrid method combining the Move To Front (MTF) and Transpose (TRANS) methods for the list accessing problem, which results in a lower total access cost than accessing the list using either method individually. Our paper also uses the randomized BIT algorithm and look-ahead concepts for obtaining the self-organizing linear list.
- Published
- 2015
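The randomized BIT algorithm mentioned above is usually stated as follows: every item carries a random bit, the bit is flipped on each access, and the item is moved to the front when its bit becomes 1. The sketch below shows that standard formulation only; the paper's hybrid of MTF, TRANS, BIT and look-ahead is not reproduced.

```python
# Classic randomized BIT list update rule: each item carries a random bit;
# on every access the bit is flipped, and the item is moved to the front
# only when its bit becomes 1.
import random

def serve_bit(initial, requests, rng=random):
    lst = list(initial)
    bits = {x: rng.randint(0, 1) for x in lst}   # random initial bits
    cost = 0
    for r in requests:
        i = lst.index(r)
        cost += i + 1                            # full cost model
        bits[r] ^= 1                             # flip the item's bit
        if bits[r] == 1:
            lst.insert(0, lst.pop(i))            # move to front on a 1-bit
    return cost, lst

random.seed(0)
print(serve_bit(['a', 'b', 'c', 'd'], ['d', 'd', 'b', 'd']))
```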
31. Analyzing the Effects of Reordering Work List Items for Selected Control Flow Patterns
- Author
-
Johannes Pflug and Stefanie Rinderle-Ma
- Subjects
Process (engineering) ,Computer science ,business.industry ,Distributed computing ,Business process modeling ,computer.software_genre ,Electronic mail ,Business process management ,List update problem ,Resource (project management) ,Resource management ,Data mining ,business ,computer ,Throughput (business) - Abstract
Efficient resource management is an important requirement for many process-oriented applications. Typically, work items are assigned to resources through their work lists. There are many reasons for reordering work items in a resource's work list. For process scheduling, for example, swapping process instances constitutes a mean to keep due times. At the same time, reducing the throughput time of the global process is typically not the primary goal. For process optimization, in turn, the implications of reordering work items on the overall temporal performance of the process might be crucial. In this paper, we investigate how reordering work items affects performance parameters that are typically associated with a first-in-first-out processing mechanism at resources. The analysis is conducted for single process tasks and for typical control flow patterns such as sequence as well as parallel and alternative branchings. It is shown that the implications on the global throughput time are less than expected, while the effects on instance based parameters strongly depend on the control-flow pattern in which the reordering mechanism is implemented. The results are supported by means of a simulation.
- Published
- 2015
32. Online List Update
- Author
-
Shahin Kamali
- Subjects
List update problem ,Information retrieval ,Computer science - Published
- 2015
33. Step 1: Create a problem list
- Author
-
Tracy D. Eells
- Subjects
List update problem ,Information retrieval ,Computer science ,Problem list ,Self-organizing list - Published
- 2015
34. Learning to Rank: Regret Lower Bound and Efficient Algorithms
- Author
-
Richard Combes, Stefan Magureanu, Alexandre Proutiere, Cyrille Laroche, Laboratoire des signaux et systèmes (L2S), Université Paris-Sud - Paris 11 (UP11)-CentraleSupélec-Centre National de la Recherche Scientifique (CNRS), and Royal Institute of Technology [Stockholm] (KTH )
- Subjects
Optimization problem ,Computer science ,multi-armed bandits ,Machine learning ,computer.software_genre ,search engines ,[INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI] ,Search engine ,[INFO.INFO-NI]Computer Science [cs]/Networking and Internet Architecture [cs.NI] ,ad-display optimization ,[STAT.ML]Statistics [stat]/Machine Learning [stat.ML] ,[MATH.MATH-ST]Mathematics [math]/Statistics [math.ST] ,Web page ,Selection (linguistics) ,Relevance (information retrieval) ,Information retrieval ,learning ,business.industry ,Regret ,[MATH.MATH-PR]Mathematics [math]/Probability [math.PR] ,List update problem ,[INFO.INFO-IT]Computer Science [cs]/Information Theory [cs.IT] ,Learning to rank ,Artificial intelligence ,[MATH.MATH-OC]Mathematics [math]/Optimization and Control [math.OC] ,Self-organizing list ,business ,computer - Abstract
Algorithms for learning to rank Web documents, display ads, or other types of items constitute a fundamental component of search engines and more generally of online services. In such systems, when a user makes a request or visits a web page, an ordered list of items (e.g. documents or ads) is displayed; the user scans this list in order, and clicks on the first relevant item if any. When the user clicks on an item, the reward collected by the system typically decreases with the position of the item in the displayed list. The main challenge in the design of sequential list selection algorithms stems from the fact that the probabilities with which the user clicks on the various items are unknown and need to be learned. We formulate the design of such algorithms as a stochastic bandit optimization problem. This problem differs from the classical bandit framework: (1) the type of feedback received by the system depends on the actual relevance of the various items in the displayed list (if the user clicks on the last item, we know that none of the previous items in the list are relevant); (2) there are inherent correlations between the average relevance of the items (e.g. the user may be interested in a specific topic only). We assume that items are categorized according to their topic and that users are clustered, so that users of the same cluster are interested in the same topic. We investigate several scenarios depending on the available side-information on the user before selecting the displayed list: (a) we first treat the case where the topic the user is interested in is known when she places a request; (b) we then study the case where the user cluster is known but the mapping between user clusters and topics is unknown. For both scenarios, we derive regret lower bounds and devise algorithms that approach these fundamental limits.
- Published
- 2015
35. Using secure messaging to update medications list in ambulatory care setting
- Author
-
Sharon Freimund, T. S. Raghu, Yu Hui H. Chang, Asha Patel, Meng Ru Cheng, and Keith A. Frey
- Subjects
Adult ,Adolescent ,Health Informatics ,Pharmacy ,Computer security ,computer.software_genre ,Medical Order Entry Systems ,Medication Adherence ,Young Adult ,Age Distribution ,Medication Reconciliation ,Ambulatory care ,Ambulatory Care ,Medicine ,Electronic Health Records ,Humans ,Sex Distribution ,Aged ,Aged, 80 and over ,business.industry ,Technician ,Medical record ,Patient portal ,Arizona ,Middle Aged ,medicine.disease ,List update problem ,Secure messaging ,Medical emergency ,Patient Participation ,business ,computer ,Medication list ,Confidentiality - Abstract
This study analyzed patient adoption of secure messaging to update the medication list in an ambulatory care setting. The objective was to establish demographic differences between users and non-users of secure messaging for medication list updates. The efficiency of secure messaging for the updates was compared to fax- and telephone-based updates. The study used a retrospective, cross-sectional design based on patient medical records and pharmacy call logs at Mayo Clinic, Arizona from December 2012 to May 2013, approximately one year after organizing a pharmacy call center for medication updates. A subgroup analysis during a 2-week period was used to measure the time to complete an update. The main dependent variable is the frequency of medication list updates over the study duration. Technician time required for the update was also utilized. A total of 22,495 outpatient visits were drawn and 18,702 unique patients were included in the primary analysis. A total of 402 unique patients were included in the sub-group analysis. The secure message response rate (49.5%) was statistically significantly lower than that for phone calls (54.8%, p < 0.001). The time to complete the update was significantly higher for faxed medication lists (Wilcoxon rank-sum tests, p < 0.001) when compared to those for secure message or phone. Around 50% of patients respond to medication update requests before an office visit when contacted using phone calls and secure messages. Given the demographic differences between users and non-users of the patient portal, mixed-mode communication with patients is likely to be the norm for the foreseeable future in outpatient settings.
- Published
- 2014
36. Paid Exchanges are Worth the Price
- Author
-
Alejandro López-Ortiz, Marc P. Renault, and Adi Rosén
- Abstract
We consider the list update problem as defined in the seminal work on competitive analysis by Sleator and Tarjan [12]. In this problem, a sequence of requests, consisting of items to access in a linked list, is given. After an item is accessed it can be moved to any position forward in the list at no cost (free exchange), and, at any time, any two adjacent items can be swapped at a cost of 1 (paid exchange). The cost to access an item is its current position in the list. The goal is to dynamically rearrange the list so as to minimize the total cost (accrued from accesses and exchanges) over the request sequence. We show a lower bound of 12/11 on the worst-case ratio between the performance of an (offline) optimal algorithm that can only perform free exchanges and that of an (offline) optimal algorithm that can perform both paid and free exchanges. This answers an outstanding question that has been open since 1996 [10].
- Published
- 2015