8 results for "Mallmann-Trenn, Frederik"
Search Results
2. Self-Stabilizing Task Allocation In Spite of Noise
- Author
- Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory, Dornhaus, Anna, Lynch, Nancy, Mallmann-Trenn, Frederik, Pajak, Dominik, and Radeva, Tsvetomira
- Abstract
© 2020 ACM. We study the problem of distributed task allocation by workers in an ant colony in a setting of limited capabilities and noisy environment feedback. We assume that each task has a demand that should be satisfied but not exceeded, i.e., there is an optimal number of ants that should be working on this task at a given time. The goal is to assign a near-optimal number of workers to each task in a distributed manner, without explicit access to either the value of the demand or the number of ants working on the task. We seek to answer the question of how the quality of task allocation depends on how accurately the ants can assess whether too many (overload) or not enough (lack) ants are currently working on a given task. In our model, each ant receives binary feedback that depends on the deficit, defined as the difference between the demand and the current number of workers on the task. The feedback is modeled as a random variable that takes the value lack or overload with probability given by a sigmoid function of the deficit. The higher the overload or lack of workers for a task, the more likely it is that an ant receives the correct feedback from that task; the closer the deficit is to zero, the less reliable the feedback becomes. Each ant receives the feedback independently about one chosen task. We measure the performance of task allocation algorithms using the notion of inaccuracy, defined as the number of steps in which the deficit of some task is beyond a certain threshold. We propose a simple, constant-memory, self-stabilizing, distributed algorithm that converges from any initial assignment to a near-optimal assignment under noisy feedback and keeps the deficit small for all tasks in almost every step. We also prove a lower bound for any constant-memory algorithm, which matches, up to a constant factor, the accuracy achieved by our algorithm.
- Published
- 2022
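The feedback model in the abstract above is concrete enough to sketch. The minimal Python snippet below (illustrative only) implements the sigmoid feedback: the probability of observing lack is a sigmoid of the deficit, so the signal is nearly always correct far from the optimum and close to a coin flip near it. The join/leave loop at the end is a hypothetical toy rule for a single task, not the paper's algorithm.

    # Sigmoid noise model from the abstract; the join/leave rule below
    # is a hypothetical illustration, NOT the paper's algorithm.
    import math
    import random

    def feedback(deficit: float, sharpness: float = 1.0) -> str:
        """Return 'lack' or 'overload'; the probability of 'lack' is a
        sigmoid of the deficit (demand minus current workers). A large
        |deficit| makes the correct signal likely; a deficit near 0 is
        close to a coin flip."""
        p_lack = 1.0 / (1.0 + math.exp(-sharpness * deficit))
        return "lack" if random.random() < p_lack else "overload"

    # Toy dynamics: one task with demand 50; one ant per step joins on
    # 'lack' and leaves on 'overload'.
    demand, workers = 50, 0
    for step in range(200):
        signal = feedback(demand - workers)
        workers += 1 if signal == "lack" else -1
        workers = max(workers, 0)
    print("final workers:", workers, "demand:", demand)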
3. Hierarchical Clustering: Objective Functions and Algorithms
- Author
- Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory, Mallmann-Trenn, Frederik, Cohen-Addad, Vincent, Kanade, Varun, and Mathieu, Claire
- Abstract
Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a “good” hierarchical clustering is one that minimizes a particular cost function [23]. He showed that this cost function has certain desirable properties: To achieve optimal cost, disconnected components (namely, dissimilar elements) must be separated at higher levels of the hierarchy, and when the similarity between data elements is identical, all clusterings achieve the same cost. We take an axiomatic approach to defining “good” objective functions for both similarity- and dissimilarity-based hierarchical clustering. We characterize a set of admissible objective functions having the property that when the input admits a “natural” ground-truth hierarchical clustering, the ground-truth clustering has an optimal value. We show that this set includes the objective function introduced by Dasgupta. Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better and faster algorithms for hierarchical clustering. We also initiate a beyond worst-case analysis of the complexity of the problem and design algorithms for this scenario.
- Published
- 2022
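The objective referenced in this abstract, Dasgupta's cost function, is cost(T) = Σ_{i<j} w(i,j) · |leaves(T[i∨j])|, where T[i∨j] is the subtree rooted at the lowest common ancestor of leaves i and j. A small Python sketch follows; the nested-tuple tree encoding and the example weights are assumptions made for illustration.

    # Dasgupta's similarity-based hierarchical clustering cost:
    # cost(T) = sum over leaf pairs (i, j) of w(i, j) times the number
    # of leaves under the lowest common ancestor of i and j.
    from itertools import combinations

    def leaves(tree):
        """Leaves of a tree given as a nested tuple; a leaf is any non-tuple."""
        if not isinstance(tree, tuple):
            return [tree]
        return [leaf for child in tree for leaf in leaves(child)]

    def dasgupta_cost(tree, w):
        """Pairs split at the root of `tree` have their LCA here and pay
        w * (number of leaves of this subtree); pairs inside a single
        child are charged by the recursive call."""
        if not isinstance(tree, tuple):
            return 0
        n_leaves = len(leaves(tree))
        child_leaf_sets = [leaves(c) for c in tree]
        cost = 0
        for a, b in combinations(range(len(tree)), 2):
            for i in child_leaf_sets[a]:
                for j in child_leaf_sets[b]:
                    cost += w[frozenset((i, j))] * n_leaves
        return cost + sum(dasgupta_cost(c, w) for c in tree)

    # Example: 4 points where {a,b} and {c,d} are the similar pairs.
    w = {frozenset(p): s for p, s in
         {("a", "b"): 9, ("c", "d"): 9, ("a", "c"): 1,
          ("a", "d"): 1, ("b", "c"): 1, ("b", "d"): 1}.items()}
    good = (("a", "b"), ("c", "d"))   # merges similar pairs first
    bad = (("a", "c"), ("b", "d"))    # splits similar pairs at the root
    print(dasgupta_cost(good, w), "<", dasgupta_cost(bad, w))   # 52 < 84

On this example the tree that merges the two similar pairs at the bottom achieves strictly lower cost than the tree that separates them at the root, matching the ground-truth optimality property discussed in the abstract.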
4. Self-Stabilizing Task Allocation In Spite of Noise
- Author
- Dornhaus, Anna, Lynch, Nancy, Mallmann-Trenn, Frederik, Pajak, Dominik, and Radeva, Tsvetomira
- Abstract
© 2020 ACM. We study the problem of distributed task allocation by workers in an ant colony in a setting of limited capabilities and noisy environment feedback. We assume that each task has a demand that should be satisfied but not exceeded, i.e., there is an optimal number of ants that should be working on this task at a given time. The goal is to assign a near-optimal number of workers to each task in a distributed manner, without explicit access to either the value of the demand or the number of ants working on the task. We seek to answer the question of how the quality of task allocation depends on how accurately the ants can assess whether too many (overload) or not enough (lack) ants are currently working on a given task. In our model, each ant receives binary feedback that depends on the deficit, defined as the difference between the demand and the current number of workers on the task. The feedback is modeled as a random variable that takes the value lack or overload with probability given by a sigmoid function of the deficit. The higher the overload or lack of workers for a task, the more likely it is that an ant receives the correct feedback from that task; the closer the deficit is to zero, the less reliable the feedback becomes. Each ant receives the feedback independently about one chosen task. We measure the performance of task allocation algorithms using the notion of inaccuracy, defined as the number of steps in which the deficit of some task is beyond a certain threshold. We propose a simple, constant-memory, self-stabilizing, distributed algorithm that converges from any initial assignment to a near-optimal assignment under noisy feedback and keeps the deficit small for all tasks in almost every step. We also prove a lower bound for any constant-memory algorithm, which matches, up to a constant factor, the accuracy achieved by our algorithm.
- Published
- 2021
5. Hierarchical Clustering
- Author
- Cohen-Addad, Vincent, Kanade, Varun, Mallmann-Trenn, Frederik, and Mathieu, Claire
- Published
- 2019
6. Hierarchical Clustering: Objective Functions and Algorithms
- Author
- Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory, Mallmann-Trenn, Frederik, Cohen-Addad, Vincent, Kanade, Varun, and Mathieu, Claire
- Abstract
Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a “good” hierarchical clustering is one that minimizes a particular cost function [23]. He showed that this cost function has certain desirable properties: To achieve optimal cost, disconnected components (namely, dissimilar elements) must be separated at higher levels of the hierarchy, and when the similarity between data elements is identical, all clusterings achieve the same cost. We take an axiomatic approach to defining “good” objective functions for both similarity- and dissimilarity-based hierarchical clustering. We characterize a set of admissible objective functions having the property that when the input admits a “natural” ground-truth hierarchical clustering, the ground-truth clustering has an optimal value. We show that this set includes the objective function introduced by Dasgupta. Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better and faster algorithms for hierarchical clustering. We also initiate a beyond worst-case analysis of the complexity of the problem and design algorithms for this scenario.
- Published
- 2019
7. Improved Analysis of Deterministic Load-Balancing Schemes
- Author
- Berenbrink, Petra, Klasing, Ralf, Kosowski, Adrian, Mallmann-Trenn, Frederik, and Uznański, Przemysław
- Published
- 2018
8. Self-stabilizing balls & bins in batches: the power of leaky bins
- Author
- Berenbrink, Petra, Friedetzky, Tom, Kling, Peter, Mallmann-Trenn, Frederik, Nagel, Lars, and Wastell, Chris
- Subjects
- Theory of Computation: Analysis of Algorithms and Problem Complexity
- Abstract
A fundamental problem in distributed computing is the distribution of requests to a set of uniform servers without a centralized controller. Classically, such problems are modelled as static balls-into-bins processes, where m balls (tasks) are to be distributed to n bins (servers). In a seminal work, [Azar et al.; JoC'99] proposed the sequential strategy Greedy[d] for m = n. When thrown, a ball queries the load of d random bins and is allocated to a least loaded of these. [Azar et al.; JoC'99] showed that d = 2 yields an exponential improvement compared to d = 1. [Berenbrink et al.; JoC'06] extended this to m ≫ n, showing that the maximal load difference is independent of m for d = 2 (in contrast to d = 1). We propose a new variant of an infinite balls-into-bins process. In each round an expected number of λn new balls arrive and are distributed (in parallel) to the bins, and each non-empty bin deletes one of its balls. This setting models a set of servers processing incoming requests, where clients can query a server's current load but receive no information about parallel requests. We study the Greedy[d] distribution scheme in this setting and show a strong self-stabilizing property: for any arrival rate λ = λ(n) < 1, the system load is time-invariant. Moreover, for any (even super-exponential) round t, the maximum system load is (w.h.p.) O((1/(1-λ)) · log(n/(1-λ))) for d = 1 and O(log(n/(1-λ))) for d = 2. In particular, Greedy[2] has an exponentially smaller system load for high arrival rates.
- Published
- 2016
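The infinite process in this entry's abstract is straightforward to simulate. The Python sketch below (the values of n, λ, and the round count are illustrative assumptions) distributes an expected λn arrivals per round via Greedy[d] against a start-of-round load snapshot, capturing the condition that clients receive no information about parallel requests, and then lets every non-empty bin serve one ball.

    # Simulation sketch of the batched balls-into-bins process with
    # leaky bins; parameter values are illustrative assumptions.
    import random

    def round_step(loads, lam, d):
        """One round: an expected lam*n new balls arrive, each placed by
        Greedy[d] using the start-of-round loads (arrivals are parallel
        and see no concurrent placements), then every non-empty bin
        deletes one ball."""
        n = len(loads)
        snapshot = list(loads)
        arrivals = sum(random.random() < lam for _ in range(n))
        for _ in range(arrivals):
            choices = [random.randrange(n) for _ in range(d)]
            best = min(choices, key=lambda b: snapshot[b])
            loads[best] += 1
        for b in range(n):
            if loads[b] > 0:
                loads[b] -= 1

    n, lam, rounds = 1000, 0.9, 2000
    for d in (1, 2):
        random.seed(0)
        loads = [0] * n
        for _ in range(rounds):
            round_step(loads, lam, d)
        print(f"d={d}: max load {max(loads)}")

Comparing the output for d = 1 and d = 2 should exhibit the gap in maximum system load that the abstract states for high arrival rates.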