2,670 results for "010201 computation theory & mathematics"
Search Results
2. Quality-diversity optimisation
- Author
-
Antoine Cully, Stéphane Doncieux, and Jean-Baptiste Mouret
- Subjects
0106 biological sciences ,business.industry ,Computer science ,media_common.quotation_subject ,Environmental resource management ,0102 computer and information sciences ,02 engineering and technology ,021001 nanoscience & nanotechnology ,010603 evolutionary biology ,01 natural sciences ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Quality (business) ,0210 nano-technology ,business ,media_common ,Diversity (business) - Published
- 2022
3. On secure E-voting over blockchain
- Author
-
Patrick McCorry, Siamak F. Shahandashti, Feng Hao, Ehsan Toreini, and Maryam Mehrnezhad
- Subjects
peace_justice_and_strong_institutions ,JF ,Blockchain ,Computer Networks and Communications ,Computer science ,media_common.quotation_subject ,ComputingMilieux_LEGALASPECTSOFCOMPUTING ,0102 computer and information sciences ,02 engineering and technology ,Computer security ,computer.software_genre ,industry_innovation_and_infrastructure ,01 natural sciences ,QA76 ,Voting ,0202 electrical engineering, electronic engineering, information engineering ,media_common ,T1 ,020206 networking & telecommunications ,Computer Science Applications ,010201 computation theory & mathematics ,Hardware and Architecture ,Polling ,Safety Research ,computer ,Software ,Information Systems - Abstract
This article discusses secure methods to conduct e-voting over a blockchain in three different settings: decentralized voting, centralized remote voting, and centralized polling station voting. These settings cover almost all voting scenarios that occur in practice. A proof-of-concept implementation for decentralized voting over Ethereum’s blockchain is presented. This work demonstrates the suitable use of a blockchain not just as a public bulletin board but, more importantly, as a trustworthy computing platform that enforces the correct execution of the voting protocol in a publicly verifiable manner. We also discuss scaling up a blockchain-based voting application for national elections. We show that for national-scale elections the major verifiability problems can be addressed without having to depend on any blockchain. However, a blockchain remains a viable option to realize a public bulletin board, which has the advantage of being a “preventive” measure to stop retrospective changes on previously published records as opposed to a “detective” measure like the use of mirror websites.
- Published
- 2021
4. Shorter and Faster Post-Quantum Designated-Verifier zkSNARKs from Lattices
- Author
-
Hang Su, David J. Wu, and Yuval Ishai
- Subjects
Soundness ,Discrete mathematics ,Reduction (recursion theory) ,business.industry ,Computer science ,020206 networking & telecommunications ,0102 computer and information sciences ,02 engineering and technology ,Extension (predicate logic) ,Gas meter prover ,Encryption ,Mathematical proof ,01 natural sciences ,Quadratic equation ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,business ,Probabilistically checkable proof - Abstract
Zero-knowledge succinct arguments of knowledge (zkSNARKs) enable efficient privacy-preserving proofs of membership for general NP languages. Our focus in this work is on post-quantum zkSNARKs, and in particular on minimizing proof size. Currently, there is a 1000x gap in the proof size between the best pre-quantum constructions and the best post-quantum ones. Here, we develop and implement new lattice-based zkSNARKs in the designated-verifier preprocessing model. With our construction, after an initial preprocessing step, a proof for an NP relation of size 2^20 is just over 16 KB. Our proofs are 10.3x shorter than previous post-quantum zkSNARKs for general NP languages. Compared to previous lattice-based zkSNARKs (also in the designated-verifier preprocessing model), we obtain a 42x reduction in proof size and a 60x reduction in the prover's running time, all while achieving a much higher level of soundness. Compared to the shortest pre-quantum zkSNARKs by Groth (Eurocrypt 2016), the proof size in our lattice-based construction is 131x longer, but both the prover and the verifier are faster (by 1.2x and 2.8x, respectively). Our construction follows the general blueprint of Bitansky et al. (TCC 2013) and Boneh et al. (Eurocrypt 2017) of combining a linear probabilistically checkable proof (linear PCP) with a linear-only vector encryption scheme. We develop a concretely efficient lattice-based instantiation of this compiler by considering quadratic extension fields of moderate characteristic and using linear-only vector encryption over rank-2 module lattices.
- Published
- 2021
5. Skyline in Crowdsourcing with Imprecise Comparisons
- Author
-
Aris Anagnostopoulos, Giacomo Vettraino, and Adriano Fazzone
- Subjects
Skyline ,Human Computation ,Theoretical computer science ,Computer science ,business.industry ,Feature vector ,Computation ,Skyline Algorithms ,0102 computer and information sciences ,02 engineering and technology ,Object (computer science) ,Crowdsourcing ,01 natural sciences ,Worker Models ,Set (abstract data type) ,010201 computation theory & mathematics ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Human computation ,Threshold model ,business ,crowdsourcing ,human computation ,skyline algorithms ,worker models - Abstract
Given an input set of objects, each represented as a vector of features in a feature space, the skyline problem is that of determining the subset of objects that are not dominated by any other input object. An example application is finding the best hotel(s) with respect to some features (location, price, cleanliness, etc.). The use of the crowd for solving this problem is useful when a score for items according to their features is not available. Yet the crowd can give inconsistent answers. In this paper we study the computation of the skyline when the comparisons between objects are performed by humans. We model the problem using the threshold model [Ajtai et al., TALG 2015], in which the comparison of two objects may create errors/inconsistencies if the objects are close to each other. We provide algorithms for the problem, analyze the required number of human comparisons, and prove lower bounds. We also evaluate the effectiveness and efficiency of our algorithms using synthetic and real-world data.
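As a minimal illustration of the skyline notion under exact, noise-free comparisons (not the paper's imprecise human-comparison setting), a naive O(n^2) dominance check suffices. The hotel example and feature values below are hypothetical; larger values are assumed to be better:

```python
def dominates(a, b):
    """True if a is at least as good as b in every feature and strictly
    better in at least one. Assumes larger feature values are better."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def skyline(objects):
    """Naive O(n^2) skyline: keep exactly the objects that no other
    object dominates."""
    return [o for o in objects if not any(dominates(p, o) for p in objects)]

# Hypothetical hotels as (location score, cleanliness score).
hotels = [(9, 7), (8, 8), (5, 5), (9, 8)]
print(skyline(hotels))  # → [(9, 8)]
```

The crowdsourced setting replaces the exact `>=` comparisons here with noisy human judgments, which is what makes the paper's problem non-trivial.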
- Published
- 2021
6. Learning to Cluster via Same-Cluster Queries
- Author
-
Yan Song, Yi Li, and Qin Zhang
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Theoretical computer science ,Computer science ,Perspective (graphical) ,0102 computer and information sciences ,02 engineering and technology ,16. Peace & justice ,01 natural sciences ,Oracle ,Machine Learning (cs.LG) ,Set (abstract data type) ,Data point ,010201 computation theory & mathematics ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Cluster (physics) ,Cluster analysis - Abstract
We study the problem of learning to cluster data points using an oracle which can answer same-cluster queries. Different from previous approaches, we do not assume that the total number of clusters is known at the beginning, and we do not require that the true clusters are consistent with a predefined objective function such as the k-means objective. These relaxations are critical from the practical perspective and, at the same time, make the problem more challenging. We propose two algorithms with provable theoretical guarantees and verify their effectiveness via an extensive set of experiments on both synthetic and real-world data.
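To make the query model concrete, here is a naive greedy baseline (not the paper's algorithm) that recovers clusters from scratch using only a same-cluster oracle and no prior knowledge of the number of clusters; the parity oracle is a toy stand-in for ground truth:

```python
def cluster_with_oracle(points, same_cluster):
    """Greedy clustering via a same-cluster oracle.
    same_cluster(a, b) -> bool answers whether a and b share a cluster.
    Each new point issues one query per existing cluster representative."""
    clusters = []
    for p in points:
        for c in clusters:
            if same_cluster(c[0], p):  # query against the representative
                c.append(p)
                break
        else:
            clusters.append([p])  # no match: open a new cluster
    return clusters

# Toy oracle: ground-truth clusters are the parity classes of integers.
oracle = lambda a, b: a % 2 == b % 2
print(cluster_with_oracle([1, 2, 3, 4, 5], oracle))  # → [[1, 3, 5], [2, 4]]
```

This baseline spends one query per cluster per point; the interesting question studied in such work is how far the query count can be reduced, especially when oracle answers may be noisy.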
- Published
- 2021
7. Quantum Money Scheme
- Author
-
David Malone, Jerry Horgan, Deirdre Kilbane, and Hazel Murray
- Subjects
Scheme (programming language) ,0209 industrial biotechnology ,business.industry ,Computer science ,TheoryofComputation_GENERAL ,Quantum simulator ,Cryptography ,0102 computer and information sciences ,02 engineering and technology ,Walton Institute for Information and Communications Systems Science ,01 natural sciences ,020901 industrial engineering & automation ,Electronic money ,Computer engineering ,010201 computation theory & mathematics ,Computer data storage ,Quantum information ,Telecommunications Software and Systems Group ,business ,computer ,Quantum money ,computer.programming_language ,Quantum computer - Abstract
Quantum computing has the power to break current cryptographic systems, disrupting online banking, shopping, data storage and communications. However, quantum mechanics can also be used to make these systems stronger and more resilient. In this paper we describe the transmissibility of a quantum money scheme, which was proposed by Dmitry Gavinsky and implemented by the authors, and discuss some of its benefits and limitations.
- Published
- 2021
8. Static analysis of pattern-free properties
- Author
-
Pierre-Etienne Moreau, Pierre Lermusiaux, and Horatiu Cirstea (MOSEL and VERIDIS teams, LORIA: Inria, Université de Lorraine, CNRS; supported by ANR-16-CE25-0007 FORMEDICIS)
- Subjects
Pattern-matching ,Rewriting ,[INFO.INFO-PL]Computer Science [cs]/Programming Languages [cs.PL] ,Formalism (philosophy) ,Semantics (computer science) ,Computer science ,Programming language ,020207 software engineering ,Context (language use) ,0102 computer and information sciences ,02 engineering and technology ,Term (logic) ,Static analysis ,computer.software_genre ,01 natural sciences ,Pattern semantics ,TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES ,010201 computation theory & mathematics ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,0202 electrical engineering, electronic engineering, information engineering ,Pattern matching ,computer ,Formal verification - Abstract
Rewriting is a widely established formalism with major applications in computer science. It is a staple of many formal verification applications, as it is especially well suited to describing program semantics and transformations. In particular, constructor-based term rewriting systems are generally used to illustrate the behaviour of functional programs. In the context of formal verification, it is often necessary to characterize the shape of the reducts of such rewrite systems; in a typed context, the underlying type system provides syntactic guarantees on the form of these terms by exhibiting, among others, the constructor symbols that they can contain. On the other hand, when performing (program) transformations we often want to eliminate some symbols and, more generally, to ensure that some patterns are absent from the result of the transformation. We propose in this paper an approach to statically verify the absence of specified patterns from the reachable terms of constructor-based term rewriting systems. The proposed approach consists in annotating the function symbols with a set of profiles outlining pre- and post-conditions that must be satisfied by the rewrite relation, and using a rewrite-based method to statically verify that the rewrite system is indeed consistent with the respective annotations.
- Published
- 2021
9. Seeking stability by being lazy and shallow: lazy and shallow instantiation is user friendly
- Author
-
Richard A. Eisenberg and Gert-Jan Bottu
- Subjects
User Friendly ,Computer science ,Programming language ,Stability (learning theory) ,020207 software engineering ,0102 computer and information sciences ,02 engineering and technology ,Type (model theory) ,computer.software_genre ,01 natural sciences ,Measure (mathematics) ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,Feature (machine learning) ,Haskell ,computer ,computer.programming_language ,Meaning (linguistics) - Abstract
Designing a language feature often requires a choice between several similarly expressive possibilities. Given that user studies are generally impractical, we propose using stability as a way of making such decisions. Stability is a measure of whether the meaning of a program alters under small, seemingly innocuous changes in the code (e.g., inlining). Directly motivated by a need to pin down a feature in GHC/Haskell, we apply this notion of stability to analyse four approaches to the instantiation of polymorphic types, concluding that the most stable approach is lazy (instantiate a polytype only when absolutely necessary) and shallow (instantiate only top-level type variables, not variables that appear after explicit arguments).
- Published
- 2021
10. Brief Announcement
- Author
-
Katina Russell, Nairen Cao, and Jeremy T. Fineman
- Subjects
010201 computation theory & mathematics ,Distributed algorithm ,Computer science ,0202 electrical engineering, electronic engineering, information engineering ,020206 networking & telecommunications ,0102 computer and information sciences ,02 engineering and technology ,Enhanced Data Rates for GSM Evolution ,Directed graph ,01 natural sciences ,Algorithm - Abstract
This brief announcement presents an algorithm for (1+ε)-approximate single-source shortest paths on directed graphs with non-negative real edge weights in the CONGEST model that runs in O((n^{1/2} + D + n^{2/5+o(1)} D^{2/5}) log W / ε^2) rounds, where W is the ratio between the largest and smallest non-zero edge weights.
- Published
- 2021
11. Improved Deterministic (Δ+1) Coloring in Low-Space MPC
- Author
-
Peter Davies, Merav Parter, and Artur Czumaj
- Subjects
Pseudorandom number generator ,Discrete mathematics ,Sublinear function ,Computer science ,Computation ,Hash function ,0102 computer and information sciences ,02 engineering and technology ,Computer Science::Computational Complexity ,Binary logarithm ,01 natural sciences ,Upper and lower bounds ,QA76 ,Randomized algorithm ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Local algorithm ,MathematicsofComputing_DISCRETEMATHEMATICS - Abstract
We present a deterministic O(log log log n)-round low-space Massively Parallel Computation (MPC) algorithm for the classical problem of (Δ + 1)-coloring on n-vertex graphs. In this model, every machine has sublinear local space of size n^b for an arbitrary constant b ∈ (0, 1). Our algorithm works under the relaxed setting where each machine is allowed to perform exponential local computation, while respecting the n^b space and bandwidth limitations.

Our key technical contribution is a novel derandomization of the ingenious (Δ + 1)-coloring local algorithm by Chang-Li-Pettie (STOC 2018, SIAM J. Comput. 2020). The Chang-Li-Pettie algorithm runs in T(n) = poly(log log n) rounds, which sets the state-of-the-art randomized round complexity for the problem in the local model. Our derandomization employs a combination of tools, notably pseudorandom generators (PRGs) and bounded-independence hash functions.

The achieved round complexity of O(log log log n) rounds matches the bound of log(T(n)), which currently serves as an upper-bound barrier for all known randomized algorithms for locally-checkable problems in this model. Furthermore, no deterministic sublogarithmic low-space MPC algorithm for the (Δ + 1)-coloring problem was known before.
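For contrast with the distributed setting above, the reason Δ + 1 colors always suffice is the classical one-pass greedy argument: each vertex has at most Δ neighbors, so some color in {0, ..., Δ} is always free. The sketch below is this textbook sequential baseline, not the paper's MPC algorithm:

```python
def greedy_coloring(adj):
    """Sequential greedy (Δ+1)-coloring: each vertex takes the smallest
    color not used by its already-colored neighbors. Since a vertex has
    at most Δ neighbors, color Δ is always available in the worst case."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# A 4-cycle: Δ = 2, so the greedy uses at most 3 colors.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
coloring = greedy_coloring(cycle)
assert all(coloring[u] != coloring[v] for u in cycle for v in cycle[u])
```

The difficulty addressed by the paper is doing this with massive parallelism, tiny per-machine space, and no randomness, where the inherently sequential greedy order is unavailable.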
- Published
- 2021
12. Fault-Tolerant Labeling and Compact Routing Schemes
- Author
-
Michal Dory and Merav Parter
- Subjects
FOS: Computer and information sciences ,Basis (linear algebra) ,Positive polynomial ,Computer science ,Routing table ,Cycle space ,020206 networking & telecommunications ,0102 computer and information sciences ,02 engineering and technology ,Binary logarithm ,01 natural sciences ,Vertex (geometry) ,Combinatorics ,Computer Science - Distributed, Parallel, and Cluster Computing ,010201 computation theory & mathematics ,Computer Science - Data Structures and Algorithms ,Path (graph theory) ,0202 electrical engineering, electronic engineering, information engineering ,Data Structures and Algorithms (cs.DS) ,Distributed, Parallel, and Cluster Computing (cs.DC) ,Routing (electronic design automation) ,MathematicsofComputing_DISCRETEMATHEMATICS - Abstract
The paper presents fault-tolerant (FT) labeling schemes for general graphs, as well as improved FT routing schemes. For a given $n$-vertex graph $G$ and a bound $f$ on the number of faults, an $f$-FT connectivity labeling scheme is a distributed data structure that assigns each of the graph edges and vertices a short label, such that given the labels of the vertices $s$ and $t$, and at most $f$ failing edges $F$, one can determine if $s$ and $t$ are connected in $G \setminus F$. The primary complexity measure is the length of the individual labels. Since their introduction by [Courcelle, Twigg, STACS '07], compact FT labeling schemes have been devised only for a limited collection of graph families. In this work, we fill in this gap by proposing two (independent) FT connectivity labeling schemes for general graphs, with a nearly optimal label length. This serves as the basis for also providing FT approximate distance labeling schemes, and ultimately also routing schemes. Our main results for an $n$-vertex graph and a fault bound $f$ are: -- There is a randomized FT connectivity labeling scheme with a label length of $O(f+\log n)$ bits, hence optimal for $f=O(\log n)$. This scheme is based on the notion of cycle space sampling [Pritchard, Thurimella, TALG '11]. -- There is a randomized FT connectivity labeling scheme with a label length of $O(\log^3 n)$ bits (independent of the number of faults $f$). This scheme is based on the notion of linear sketches of [Ahn et al., SODA '12]. -- For $k\geq 1$, there is a randomized routing scheme that routes a message from $s$ to $t$ in the presence of a set $F$ of faulty edges, with stretch $O(|F|^2 k)$ and routing tables of size $\tilde{O}(f^3 n^{1/k})$. This significantly improves over the state-of-the-art bounds by [Chechik, ICALP '11], providing the first sub-linear FT labeling and routing schemes for general graphs. (PODC 2021)
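To make the query semantics concrete: a pair of FT connectivity labels for (s, t) must answer exactly what the brute-force union-find baseline below computes from the whole graph, but using only the short labels and the labels of the failed edges. The function name and toy graph are ours, for illustration only:

```python
def connected_after_faults(n, edges, faults, s, t):
    """Brute-force baseline for the query an f-FT connectivity labeling
    scheme answers: are s and t connected in G \\ F? Uses union-find with
    path halving over the surviving edges."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if (u, v) not in faults and (v, u) not in faults:
            parent[find(u)] = find(v)
    return find(s) == find(t)

edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
print(connected_after_faults(4, edges, {(0, 1)}, 0, 2))          # True, via 0-3-2
print(connected_after_faults(4, edges, {(0, 1), (0, 3)}, 0, 2))  # False
```

The whole point of the labeling scheme is to precompute enough per-edge/per-vertex information that this answer can be derived from O(f + log n) or O(log^3 n) bits per label, without access to the full edge list.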
- Published
- 2021
13. Brief Announcement
- Author
-
Anisur Rahaman Molla and Manish Kumar
- Subjects
Discrete mathematics ,Leader election ,Matching (graph theory) ,Computer science ,020206 networking & telecommunications ,0102 computer and information sciences ,02 engineering and technology ,Binary logarithm ,01 natural sciences ,Upper and lower bounds ,Randomized algorithm ,010201 computation theory & mathematics ,Distributed algorithm ,0202 electrical engineering, electronic engineering, information engineering ,Fraction (mathematics) ,Time complexity - Abstract
This paper investigates the message complexity of two fundamental problems, namely leader election and agreement, in crash-fault synchronous fully-connected distributed networks. We present randomized algorithms for both problems and also develop non-trivial lower bounds on the message complexity. In the so-called implicit version of the two problems, our algorithms achieve sublinear message complexity while tolerating more than a constant fraction of faulty nodes. The algorithms work in anonymous networks, where nodes do not know each other. Specifically, our main results are: (1) A randomized leader election algorithm which elects a leader with high probability in a complete network with n nodes, in which at least ⌈αn⌉ nodes are non-faulty and the remaining can be faulty, where α ≥ (log^2 n)/n. The time complexity of the algorithm is O(log n / α) rounds and the message complexity is O((n^{0.5} log^{2.5} n)/α^{2.5}) with high probability (i.e., with probability ≥ 1 - 1/n). A non-trivial lower bound of Ω(n^{0.5}/α^{1.5}) messages holds for any leader election algorithm that tolerates at most a (1-α)-fraction of faulty nodes and succeeds with high probability. (2) A randomized algorithm, tolerating at most ⌊(1-α)n⌋ faulty nodes, that solves agreement in O(log n / α) rounds and with high probability uses only O((n^{0.5} log^{1.5} n)/α^{1.5}) messages. A matching lower bound (up to a polylog n factor) of Ω(n^{0.5}/α^{1.5}) messages holds for any agreement algorithm that tolerates at most a (1-α)-fraction of faulty nodes and succeeds with high probability. A full version of the paper is available at [17].
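One standard route to sublinear messages in implicit leader election is random candidacy: only an expected O(log n) nodes volunteer, so only they need to exchange messages. The toy sketch below illustrates that idea under our own choice of constants; it is not the paper's algorithm and ignores faults entirely:

```python
import math
import random

def implicit_leader_election(n, rng):
    """Toy illustration (not the paper's algorithm): each node
    independently becomes a candidate with probability ~c*log(n)/n,
    so an expected O(log n) candidates compete. Candidates then draw
    random ranks and the minimum rank wins, which is what drives the
    sublinear message count in this style of algorithm."""
    p = min(1.0, 4 * math.log(n) / n)  # c = 4 is an arbitrary constant
    candidates = [i for i in range(n) if rng.random() < p]
    if not candidates:
        return None  # rare failure case; real algorithms retry
    return min(candidates, key=lambda i: rng.random())

leader = implicit_leader_election(1000, random.Random(0))
print(leader)
```

In the implicit variant, non-candidate nodes need not even learn the leader's identity, which is what allows the total message count to drop below n.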
- Published
- 2021
14. Low-Congestion Shortcuts for Graphs Excluding Dense Minors
- Author
-
Bernhard Haeupler and Mohsen Ghaffari
- Subjects
FOS: Computer and information sciences ,Optimization problem ,Logarithm ,Computer science ,Minor (linear algebra) ,0102 computer and information sciences ,02 engineering and technology ,Minimum spanning tree ,01 natural sciences ,symbols.namesake ,Minimum cut ,Computer Science - Data Structures and Algorithms ,FOS: Mathematics ,0202 electrical engineering, electronic engineering, information engineering ,Mathematics - Combinatorics ,Data Structures and Algorithms (cs.DS) ,Discrete mathematics ,020206 networking & telecommunications ,Graph structure theorem ,Planar graph ,Computer Science - Distributed, Parallel, and Cluster Computing ,010201 computation theory & mathematics ,Distributed algorithm ,symbols ,Distributed, Parallel, and Cluster Computing (cs.DC) ,Combinatorics (math.CO) ,Distributed graph algorithms ,Low-congestion shortcuts ,Minor excluded graphs ,Planar graphs ,Congestion ,Dilation ,MathematicsofComputing_DISCRETEMATHEMATICS - Abstract
We prove that any n-node graph G with diameter D admits shortcuts with congestion O(δD log n) and dilation O(δD), where δ is the maximum edge-density of any minor of G. Our proof is simple and constructive, with a Θ̃(δD)-round distributed construction algorithm. Our results are tight up to logarithmic factors and generalize, simplify, unify, and strengthen several prior results. For example, for graphs excluding a fixed minor, i.e., graphs with constant δ, only an O(D^2) bound was known, based on a very technical proof that relies on the Robertson-Seymour Graph Structure Theorem. A direct consequence of our result is this: many graph families, including any minor-excluded ones, have near-optimal Θ̃(D)-round distributed algorithms for many fundamental communication primitives and optimization problems in the standard synchronous message-passing model with logarithmic message sizes, i.e., the CONGEST model. These problems include minimum spanning tree, minimum cut approximation, and shortest-path approximations.
- Published
- 2021
15. The Space Complexity of Scannable Binary Objects
- Author
-
Sean Ovens
- Subjects
020203 distributed computing ,Computer science ,Binary number ,0102 computer and information sciences ,02 engineering and technology ,Base (topology) ,Object (computer science) ,01 natural sciences ,Upper and lower bounds ,Set (abstract data type) ,Shared memory ,010201 computation theory & mathematics ,Distributed algorithm ,Component (UML) ,0202 electrical engineering, electronic engineering, information engineering ,Algorithm - Abstract
Determining a consistent view of the contents of a shared array while its components are being modified is a fundamental task in distributed algorithm design. Snapshot objects address this problem when read/write registers are the components of the array. A natural generalization of a snapshot object is a scannable object, an object with multiple components that each support Read and some other operations. Scannable objects also support the Scan operation, which provides an instantaneous view of all the object's components. This raises the question: how many instances of objects in some set O are required to implement a scannable object whose components are in O? In this paper, we consider scannable binary objects, whose components are arbitrary 1-bit objects. If each component is a test-and-set object, there is a simple implementation using k test-and-set objects. However, when each component is non-monotonic (i.e., can change from 0 to 1 and from 1 to 0), we prove that more objects are needed. Specifically, when n ≤ 2^{k-1} + 2, we show a lower bound of n + k - 2 base objects, even for obstruction-free, single-updater implementations. When n ≥ 2^{k-2}, we prove that 2^{k-1} base objects are required for obstruction-free, single-updater implementations. When 2^{k-1} + 2
- Published
- 2021
16. Brief Announcement
- Author
-
Rati Gelashvili, Alexander Spiegelman, Lefteris Kokoris-Kogias, and Zhuolun Xiang
- Subjects
Protocol (science) ,Software_OPERATINGSYSTEMS ,Quadratic cost ,State machine replication ,business.industry ,Computer science ,Liveness ,020206 networking & telecommunications ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Popularity ,Asynchrony (computer programming) ,010201 computation theory & mathematics ,Asynchronous communication ,0202 electrical engineering, electronic engineering, information engineering ,Network conditions ,business ,Computer network - Abstract
The popularity of permissioned blockchain systems demands BFT SMR protocols that are efficient under good network conditions (synchrony) and robust under bad network conditions (asynchrony). The state-of-the-art partially synchronous BFT SMR protocols provide optimal linear communication cost per decision under synchrony and good leaders, but lose liveness under asynchrony. On the other hand, the state-of-the-art asynchronous BFT SMR protocols are live even under asynchrony, but always pay quadratic cost even under synchrony. In this paper, we propose a BFT SMR protocol that achieves the best of both worlds -- optimal linear cost per decision under good networks and leaders, optimal quadratic cost per decision under bad networks, and remains always live.
- Published
- 2021
17. Brief Announcement
- Author
-
Justin Kim, Kartik Nayak, Vandan Mehta, and Nibesh Shrestha
- Subjects
Protocol (science) ,business.industry ,Computer science ,sync ,0102 computer and information sciences ,computer.software_genre ,01 natural sciences ,Asynchrony (computer programming) ,010201 computation theory & mathematics ,Key (cryptography) ,Compiler ,Latency (engineering) ,business ,Byzantine fault tolerance ,computer ,Computer network - Abstract
BFT protocols in the synchronous setting rely on a strong assumption: every message sent by a party will arrive at its destination within a known bounded time. To allow some degree of asynchrony while still tolerating a minority corruption, a weaker synchrony assumption called mobile sluggish faults was recently introduced in Crypto'19. In this work, we investigate the support for mobile sluggish faults in existing synchronous protocols such as Dfinity, Streamlet, Sync HotStuff, OptSync and the optimal-latency BFT protocol. We identify key principles that can be used to "compile" these synchronous protocols to tolerate mobile sluggish faults.
- Published
- 2021
18. Brief Announcement
- Author
-
Moses Charikar, Li-Yang Tan, and Weiyun Ma
- Subjects
0209 industrial biotechnology ,Computer science ,0102 computer and information sciences ,02 engineering and technology ,Pseudorandom generator ,State (functional analysis) ,Function (mathematics) ,Binary logarithm ,01 natural sciences ,Randomized algorithm ,020901 industrial engineering & automation ,010201 computation theory & mathematics ,Log-log plot ,TheoryofComputation_ANALYSISOFALGORITHMSANDPROBLEMCOMPLEXITY ,Circuit complexity ,Algorithm ,Randomness - Abstract
We give a randomness-efficient Massively Parallel Computation (MPC) algorithm for deciding whether an undirected graph is connected. For Connectivity on n-vertex, m-edge graphs whose components have diameter at most D = 2^{o(log n / log log n)}, our algorithm runs in R = O(log D + log log_{m/n} n) rounds and uses a total of (log n)^{O(R)} random bits, O(m) machines, and n^{1-Ω(1)} space per machine with good probability. (With good probability means with probability at least 1 - 1/poly((m log n)/n), which is the same as in Liu, Tarjan, and Zhong (SPAA '20).) Our algorithm achieves a super-polynomial saving in randomness complexity as compared to the breakthrough algorithm of Andoni et al. (FOCS '18) and the subsequent improvement by Behnezhad et al. (FOCS '19). Our algorithm has the same round complexity as that of Behnezhad et al., but uses more total space. Our Connectivity algorithm is an instantiation of a general method we develop for converting randomized algorithms in the PRAM model to highly randomness-efficient MPC algorithms. We show that for k = o(log n / log log n) and p = n^{O(1)}, any time-k, p-processor randomized PRAM algorithm computing a function on n input bits can be converted to an equivalent strongly sublinear MPC algorithm with O(k) rounds and only a total of (log n)^{O(k)} random bits. Our Connectivity algorithm follows from applying this method to the recent CRCW PRAM algorithm of Liu, Tarjan, and Zhong (SPAA '20). Our approach is based on the design of a pseudorandom generator for PRAM algorithms. The analysis of our generator is built on classic and influential results in circuit complexity (Håstad '86; Nisan and Wigderson '88), which we generalize from the setting of small-depth circuits to the more powerful setting of PRAM algorithms. The parameters that we achieve are optimal given the current state of the art in complexity theory, in the sense that further improvements would imply a separation from NC^1.
- Published
- 2021
19. Lower Bounds on the State Complexity of Population Protocols
- Author
-
Javier Esparza and Philipp Czerner
- Subjects
Protocol (science) ,Discrete mathematics ,education.field_of_study ,Computer science ,Model of computation ,010102 general mathematics ,Population ,Inverse ,0102 computer and information sciences ,Function (mathematics) ,16. Peace & justice ,01 natural sciences ,Predicate (grammar) ,Set (abstract data type) ,010201 computation theory & mathematics ,Log-log plot ,0101 mathematics ,education - Abstract
Population protocols are a model of computation in which an arbitrary number of indistinguishable finite-state agents interact in pairs. The goal of the agents is to decide by stable consensus whether their initial global configuration satisfies a given property, specified as a predicate on the set of configurations. The state complexity of a predicate is the number of states of a smallest protocol that computes it. Previous work by Blondin et al. has shown that the counting predicates x ≥ η have state complexity O(log η) for leaderless protocols and O(log log η) for protocols with leaders. We obtain the first non-trivial lower bounds: the state complexity of x ≥ η is Ω(log log log η) for leaderless protocols, and the inverse of a non-elementary function for protocols with leaders.
- Published
- 2021
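The population-protocol model studied in the entry above can be illustrated with a small simulation. The sketch below uses a simple saturating-sum protocol for the counting predicate x ≥ η, which needs on the order of η states; it is an illustrative toy, not the O(log η)-state construction of Blondin et al. nor the lower-bound machinery of the paper, and the function name and uniformly random scheduler are assumptions of this sketch.

```python
import random

def threshold_protocol(inputs, eta, steps=200_000, seed=0):
    """Simulate a folklore leaderless population protocol deciding whether
    the number of 1-inputs is at least eta.  States are 0..eta; an agent
    outputs True iff its state equals eta.  This simple protocol uses
    about eta states, far more than the O(log eta) known to suffice."""
    rng = random.Random(seed)
    states = [1 if x else 0 for x in inputs]
    n = len(states)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)     # uniform random pairwise interaction
        a, b = states[i], states[j]
        if a == eta or b == eta:           # propagate the "threshold reached" signal
            states[i] = states[j] = eta
        else:                              # pool counts, saturating at eta
            total = a + b
            states[i], states[j] = min(total, eta), max(0, total - eta)
    outputs = {s == eta for s in states}
    assert len(outputs) == 1, "agents have not reached consensus yet"
    return outputs.pop()
```

Under the random scheduler, the pooled counts eventually saturate one agent at η whenever the number of 1-inputs is at least η, and the saturation signal then spreads by epidemic; otherwise no agent can ever reach η and the population stably outputs "no".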
20. Brief Announcement: A Time and Space Optimal Stable Population Protocol Solving Exact Majority
- Author
-
Leszek Gąsieniec, Mahsa Eftekhari, Grzegorz Stachowiak, Przemyslaw Uznanski, David Doty, and Eric E. Severson
- Subjects
Schedule ,education.field_of_study ,Computer science ,Population ,Value (computer science) ,0102 computer and information sciences ,02 engineering and technology ,Population protocol ,16. Peace & justice ,Binary logarithm ,01 natural sciences ,Majority problem ,Combinatorics ,Monotone polygon ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Pairwise comparison ,education - Abstract
We study population protocols, a model of distributed computing where agents exchange information in pairwise interactions, but have no control over their schedule of interaction partners. The well-studied majority problem is that of determining in an initial population of n agents, each with one of two opinions A or B, whether there are more A, more B, or a tie. A stable protocol solves this problem with probability 1 by eventually entering a configuration in which all agents agree on a correct consensus decision of A, B, or T (tie), from which the consensus cannot change. We describe a protocol that solves this problem using O(log n) states (log log n + O(1) bits of memory) and optimal expected time O(log n). The number of states O(log n) is known to be optimal for the class of polylogarithmic time stable protocols that are "output dominant" and "monotone". These are two natural constraints satisfied by our protocol, making it simultaneously time- and state-optimal for that class. Our protocol is nonuniform: the transition function has the value log n encoded in it. We show that the protocol can be modified to be uniform, while increasing the state complexity to Θ(log n log log n).
- Published
- 2021
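To make the majority-problem setup above concrete, here is a sketch of the well-known 4-state exact-majority protocol. It is not the O(log n)-state, O(log n)-time protocol announced in this entry, and it does not handle ties, which is part of what the additional states buy; the names and the random scheduler are illustrative assumptions.

```python
import random

def four_state_majority(n_a, n_b, steps=400_000, seed=1):
    """Classic 4-state exact-majority population protocol: strong opinions
    'A'/'B' cancel each other down to weak opinions 'a'/'b', and a
    surviving strong opinion converts opposing weak ones.  Assumes a
    strict majority exists (no tie)."""
    rng = random.Random(seed)
    pop = ['A'] * n_a + ['B'] * n_b          # 'A','B' strong; 'a','b' weak
    n = len(pop)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)       # random pairwise interaction
        u, v = pop[i], pop[j]
        if {u, v} == {'A', 'B'}:             # opposing strong opinions cancel
            pop[i], pop[j] = 'a', 'b'
        elif {u, v} == {'A', 'b'}:           # strong A converts weak b
            pop[i], pop[j] = ('A', 'a') if u == 'A' else ('a', 'A')
        elif {u, v} == {'B', 'a'}:           # strong B converts weak a
            pop[i], pop[j] = ('B', 'b') if u == 'B' else ('b', 'B')
    opinions = {s.upper() for s in pop}
    assert len(opinions) == 1, "not yet stabilized"
    return opinions.pop()
```

Once every minority strong opinion has been cancelled, the surviving strong agents of the majority side sweep the weak opposition, so the whole population stabilizes on the correct majority opinion.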
21. Separating Bounded and Unbounded Asynchrony for Autonomous Robots
- Author
-
David G. Kirkpatrick, Giuseppe Prencipe, Alfredo Navarra, Irina Kostitsyna, and Nicola Santoro
- Subjects
Computational Geometry (cs.CG) ,FOS: Computer and information sciences ,0209 industrial biotechnology ,Computer science ,0102 computer and information sciences ,02 engineering and technology ,Topology ,01 natural sciences ,Computer Science - Robotics ,020901 industrial engineering & automation ,mobile robots ,Convergence (routing) ,Point (geometry) ,Visibility (geometry) ,Mobile robot ,distributed geometric algorithms ,Asynchrony (computer programming) ,Computer Science - Distributed, Parallel, and Cluster Computing ,010201 computation theory & mathematics ,Bounded function ,asynchrony ,Computer Science - Computational Geometry ,Convergence problem ,Distributed, Parallel, and Cluster Computing (cs.DC) ,Constant (mathematics) ,Robotics (cs.RO) - Abstract
Among fundamental problems in the context of distributed computing by autonomous mobile entities, one of the most representative and well studied is Point Convergence: given an arbitrary initial configuration of identical entities, disposed in the Euclidean plane, move in such a way that, for all ε > 0, a configuration in which the separation between all entities is at most ε is eventually reached and maintained. The problem has been previously studied in a variety of settings, including full visibility, exact measurements (like distances and angles), and synchronous activation of entities. Our study concerns the minimal assumptions under which entities, moving asynchronously with limited and unknown visibility range and subject to limited imprecision in measurements, can be guaranteed to converge in this way. We present an algorithm that solves Point Convergence, for entities in the plane, in such a setting, provided the degree of asynchrony is bounded: while any one entity is active, any other entity can be activated at most k times, for some arbitrarily large but fixed k. This provides a strong positive answer to a decade-old open question posed by Katreniak. We also prove that in a comparable setting that permits unbounded asynchrony, Point Convergence in the plane is impossible, contingent on the natural assumption that algorithms maintain the (visible) connectivity among entities present in the initial configuration. This variant, which we call Cohesive Convergence, serves to distinguish the power of bounded and unbounded asynchrony in the control of autonomous mobile entities, settling at the same time the long-standing question of whether, in the Euclidean plane, synchronously scheduled entities are more powerful than asynchronously scheduled ones.
- Published
- 2021
22. Brief Announcement: Detectable Sequential Specifications for Recoverable Shared Objects
- Author
-
Nan Li and Wojciech Golab
- Subjects
Memory hierarchy ,Processor register ,Computer science ,Distributed computing ,Concurrency ,020207 software engineering ,Fault tolerance ,Multiprocessing ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Shared memory ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,Cache ,Queue - Abstract
The recent commercial release of persistent main memory by Intel has sparked intense interest in recoverable concurrent objects. Specifying and implementing such objects is technically challenging on current generation hardware precisely because the top layers of the memory hierarchy (CPU registers and cache) remain volatile, which causes application threads to lose critical execution state during a failure. Friedman, Herlihy, Marathe, and Petrank (DISC'17) recently proposed that this difficulty can be alleviated by making the recoverable objects detectable, meaning that during recovery, they can resolve the status of an operation that was interrupted by a failure. In this paper, we formalize this important concept using a detectable sequential specification (DSS), which augments an object's interface with auxiliary methods that threads use to first declare their need for detectability, and then perform detection if needed after a failure. Compared to prior work on this topic, our DSS-based approach is less reliant on assumptions regarding the system, and more flexible in the sense that it allows applications to request detectability on demand. As a proof of concept, we present a detectable recoverable lock-free queue algorithm and evaluate its performance on a multiprocessor equipped with Intel Optane persistent memory. Our queue outperforms the detectable log queue of Friedman, Herlihy, Marathe, and Petrank (PPoPP'18) by up to 1.7x.
- Published
- 2021
23. An Efficient Adaptive Partial Snapshot Implementation
- Author
-
Philipp Woelfel and Benyamin Bashari
- Subjects
020203 distributed computing ,Computer science ,Concurrency ,Process (computing) ,Value (computer science) ,0102 computer and information sciences ,02 engineering and technology ,Type (model theory) ,Binary logarithm ,01 natural sciences ,Constant (computer programming) ,Shared memory ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,Snapshot (computer storage) ,Algorithm - Abstract
The standard single-writer snapshot type allows processes to obtain a consistent snapshot of an array of n memory locations, each of which can be updated by one of n processes. In almost all algorithms, a Scan operation returns a linearizable snapshot of the entire array. Under realistic assumptions, where hardware registers do not have the capacity to store many array entries, this inherently leads to a step complexity of Ω(n). In this paper, we consider an alternative version of the snapshot type, where a Scan operation stores a consistent snapshot of all n memory locations, but does not return anything. Instead, a process can later observe the value of any component of that snapshot using a separate Observe operation. This allows us to implement the type from fetch-and-increment and compare-and-swap objects, such that Scan operations have constant step complexity and Update and Observe operations have step complexity O(log n).
- Published
- 2021
24. Search via Parallel Lévy Walks on Z²
- Author
-
Francesco d'Amore, Emanuele Natale, Andrea E. F. Clementi, George Giakkoupis, Università degli Studi di Roma Tor Vergata [Roma], the World Is Distributed Exploring the tension between scale and coordination (WIDE), Inria Rennes – Bretagne Atlantique, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-SYSTÈMES LARGE ÉCHELLE (IRISA-D1), Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA), Université de Rennes (UR)-Institut National des Sciences Appliquées - Rennes (INSA Rennes), Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Université de Bretagne Sud (UBS)-École normale supérieure - Rennes (ENS Rennes)-Institut National de Recherche en Informatique et en Automatique (Inria)-CentraleSupélec-Centre National de la Recherche Scientifique (CNRS)-IMT Atlantique (IMT Atlantique), Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT)-Université de Rennes (UR)-Institut National des Sciences Appliquées - Rennes (INSA Rennes), Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT)-Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA), Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Université de Bretagne Sud (UBS)-École normale supérieure - Rennes (ENS Rennes)-CentraleSupélec-Centre National de la Recherche Scientifique (CNRS)-IMT Atlantique (IMT Atlantique), Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT), Combinatorics, Optimization and Algorithms for Telecommunications (COATI), Inria Sophia Antipolis - Méditerranée (CRISAM), Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-COMmunications, Réseaux, systèmes Embarqués et Distribués (Laboratoire I3S - COMRED), Laboratoire d'Informatique, 
Signaux, et Systèmes de Sophia Antipolis (I3S), Université Nice Sophia Antipolis (1965 - 2019) (UNS), COMUE Université Côte d'Azur (2015-2019) (COMUE UCA)-COMUE Université Côte d'Azur (2015-2019) (COMUE UCA)-Centre National de la Recherche Scientifique (CNRS)-Université Côte d'Azur (UCA)-Université Nice Sophia Antipolis (1965 - 2019) (UNS), COMUE Université Côte d'Azur (2015-2019) (COMUE UCA)-COMUE Université Côte d'Azur (2015-2019) (COMUE UCA)-Centre National de la Recherche Scientifique (CNRS)-Université Côte d'Azur (UCA)-Laboratoire d'Informatique, Signaux, et Systèmes de Sophia Antipolis (I3S), and COMUE Université Côte d'Azur (2015-2019) (COMUE UCA)-COMUE Université Côte d'Azur (2015-2019) (COMUE UCA)-Centre National de la Recherche Scientifique (CNRS)-Université Côte d'Azur (UCA)
- Subjects
Polynomial ,Computer science ,[SDV.BA]Life Sciences [q-bio]/Animal biology ,Modulo ,Hitting time ,0102 computer and information sciences ,01 natural sciences ,Settore INF/01 ,Combinatorics ,Mathematics::Probability ,Lévy flight ,010201 computation theory & mathematics ,0103 physical sciences ,Line (geometry) ,Exponent ,Interval (graph theory) ,[INFO]Computer Science [cs] ,Almost surely ,[MATH]Mathematics [math] ,010306 general physics ,ComputingMilieux_MISCELLANEOUS - Abstract
Motivated by the Lévy foraging hypothesis -- the premise that various animal species have adapted to follow Lévy walks to optimize their search efficiency -- we study the parallel hitting time of Lévy walks on the infinite two-dimensional grid. We consider k independent discrete-time Lévy walks, with the same exponent α ∈ (1,∞), that start from the same node, and analyze the number of steps until the first walk visits a given target at distance l. We show that for any choice of k and l from a large range, there is a unique optimal exponent α_{k,l} ∈ (2,3) for which the hitting time is O(l²/k) w.h.p., while modifying the exponent by any constant term ε > 0 increases the hitting time by a factor polynomial in l, or the walks fail to hit the target almost surely. Based on that, we propose a surprisingly simple and effective parallel search strategy for the setting where k and l are unknown: the exponent of each Lévy walk is chosen independently and uniformly at random from the interval (2,3). This strategy achieves optimal search time (modulo polylogarithmic factors) among all possible algorithms (even centralized ones that know k). Our results should be contrasted with a line of previous work showing that the exponent α = 2 is optimal for various search problems. In our setting of k parallel walks, we show that the optimal exponent depends on k and l, and that randomizing the choice of the exponents works simultaneously for all k and l.
- Published
- 2021
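A minimal sketch of the search strategy proposed above: k independent Lévy walks on Z², each with an exponent drawn uniformly at random from (2, 3). The Pareto step-length sampling and the length cap are illustrative assumptions (the cap only keeps the simulation finite); this is the process the paper studies, not its analysis.

```python
import random

def levy_walk(phases, alpha, seed=0):
    """One discrete-time Lévy walk on Z^2: each ballistic phase picks a
    uniform axis direction and an integer length L >= 1 with a heavy
    Pareto-like tail P(L >= d) ~ d^(1 - alpha), then traverses the jump
    node by node.  The cap is an assumption keeping the simulation finite."""
    rng = random.Random(seed)
    x = y = 0
    visited = [(0, 0)]
    for _ in range(phases):
        # inverse-transform sample from a Pareto(alpha - 1) law, floored
        length = min(int((1.0 - rng.random()) ** (-1.0 / (alpha - 1))), 10_000)
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        for _ in range(length):
            x, y = x + dx, y + dy
            visited.append((x, y))
    return visited

def parallel_levy_search(k, target, phases=500, seed=0):
    """k independent walks (simulated one after another here), each with an
    exponent drawn uniformly from (2, 3) -- the strategy shown above to be
    near-optimal when k and the target distance are unknown."""
    rng = random.Random(seed)
    for walker in range(k):
        alpha = rng.uniform(2.0, 3.0)
        if target in levy_walk(phases, alpha, seed=seed + walker + 1):
            return walker    # index of the walker that found the target
    return None
```

Randomizing the exponent means that, whatever the unknown k and l happen to be, some walkers end up with an exponent close to the optimal α_{k,l}.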
25. Incentive-Compatible Kidney Exchange in a Slightly Semi-Random Model
- Author
-
Avrim Blum and Paul Gölz
- Subjects
FOS: Computer and information sciences ,Mechanism design ,Computer science ,05 social sciences ,0102 computer and information sciences ,Directed graph ,01 natural sciences ,Longest path problem ,Vertex (geometry) ,FOS: Economics and business ,Combinatorics ,Computer Science - Computer Science and Game Theory ,010201 computation theory & mathematics ,Incentive compatibility ,0502 economics and business ,Path (graph theory) ,Economics - Theoretical Economics ,Theoretical Economics (econ.TH) ,Graph (abstract data type) ,Limit (mathematics) ,050207 economics ,Computer Science and Game Theory (cs.GT) - Abstract
Motivated by kidney exchange, we study the following mechanism-design problem: On a directed graph (of transplant compatibilities among patient-donor pairs), the mechanism must select a simple path (a chain of transplantations) starting at a distinguished vertex (an altruistic donor) such that the total length of this path is as large as possible (a maximum number of patients receive a kidney). However, the mechanism does not have direct access to the graph. Instead, the vertices are partitioned over multiple players (hospitals), and each player reports a subset of her vertices to the mechanism. In particular, a player may strategically omit vertices to increase how many of her vertices lie on the path returned by the mechanism. Our objective is to find mechanisms that limit incentives for such manipulation while producing long paths. Unfortunately, in worst-case instances, competing with the overall longest path is impossible while incentivizing (approximate) truthfulness, i.e., requiring that hiding nodes cannot increase a player's utility by more than a factor of 1 + o(1). We therefore adopt a semi-random model where a small (o(n)) number of random edges are added to worst-case instances. While it remains impossible for truthful mechanisms to compete with the overall longest path, we give a truthful mechanism that competes with a weaker but non-trivial benchmark: the length of any path whose subpaths within each player have a minimum average length. In fact, our mechanism satisfies even a stronger notion of truthfulness, which we call matching-time incentive compatibility. This notion of truthfulness requires that each player not only reports her nodes truthfully but also does not stop the returned path at any of her nodes in order to divert it to a continuation inside her own subgraph. (Full version of EC'21 paper.)
- Published
- 2021
26. iMLCA: Machine Learning-powered Iterative Combinatorial Auctions with Interval Bidding
- Author
-
Benjamin Lubin, Manuel Beyeler, Gianluca Brero, and Sven Seuken
- Subjects
FOS: Computer and information sciences ,TheoryofComputation_MISCELLANEOUS ,Computer science ,0102 computer and information sciences ,Interval (mathematics) ,Machine learning ,computer.software_genre ,01 natural sciences ,Upper and lower bounds ,Combinatorial auction ,Computer Science - Computer Science and Game Theory ,0502 economics and business ,Preference elicitation ,050207 economics ,business.industry ,05 social sciences ,TheoryofComputation_GENERAL ,Bidding ,Small set ,010201 computation theory & mathematics ,Bundle ,Allocative efficiency ,Artificial intelligence ,business ,computer ,Computer Science and Game Theory (cs.GT) - Abstract
Preference elicitation is a major challenge in large combinatorial auctions because the bundle space grows exponentially in the number of items. Recent work has used machine learning (ML) algorithms to identify a small set of bundles to query from each bidder. However, a shortcoming of this prior work is that bidders must submit exact values for the queried bundles, which can be quite costly. To address this, we propose iMLCA, a new ML-powered iterative combinatorial auction with interval bidding (i.e., where bidders submit upper and lower bounds instead of exact values). To steer the auction towards an efficient allocation, we introduce a price-based activity rule, asking bidders to tighten bounds on relevant bundles only. In our experiments, iMLCA achieves the same allocative efficiency as the prior ML-based auction that uses exact bidding. Moreover, it outperforms the well-known combinatorial clock auction in a realistically-sized domain.
- Published
- 2021
27. Distribution Rules Under Dichotomous Preferences
- Author
-
Florian Brandl, Felix Brandt, Christian Stricker, and Dominik Peters
- Subjects
Computer science ,business.industry ,05 social sciences ,Distribution (economics) ,Monotonic function ,0102 computer and information sciences ,01 natural sciences ,Public interest ,Set (abstract data type) ,Product rule ,Resource (project management) ,010201 computation theory & mathematics ,0502 economics and business ,business ,Preference (economics) ,Mathematical economics ,Axiom ,050205 econometrics - Abstract
We consider a setting in which agents contribute amounts of a divisible resource (such as money or time) to a common pool, which is used to finance projects of public interest. How the collected resources are to be distributed among the projects is decided by a distribution rule that takes as input a set of approved projects for each agent. An important application of this setting is donor coordination, which allows philanthropists to find an efficient and mutually agreeable distribution of their donations. We analyze various distribution rules (including the Nash product rule and the conditional utilitarian rule) in terms of classic as well as new axioms, and propose the first fair distribution rule that satisfies efficiency and monotonicity. Our main result settles a long-standing open question of Bogomolnaia, Moulin, and Stong (2005) by showing that no strategyproof and efficient rule can guarantee that at least one approved project of each agent receives a positive amount of the resource. The proof reasons about 386 preference profiles and was obtained using a computer-aided method involving SAT solvers.
- Published
- 2021
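The conditional utilitarian rule mentioned above admits a compact sketch under one common informal description: each agent's equal share of the budget goes to the project(s) in her own approval set that are approved by the most agents overall. The function below is an illustrative reading of that description, not the paper's formal definition, and it assumes every agent approves at least one project.

```python
from collections import Counter

def conditional_utilitarian(approvals, budget=1.0):
    """Distribute the budget given each agent's approval set: every agent's
    share of budget/n goes to her approved project(s) with the highest
    overall approval count, split equally on ties."""
    n = len(approvals)
    popularity = Counter(p for s in approvals for p in set(s))
    dist = Counter()
    for s in approvals:
        best = max(popularity[p] for p in s)            # her most popular approved project
        favorites = [p for p in s if popularity[p] == best]
        for p in favorites:
            dist[p] += budget / n / len(favorites)
    return dict(dist)

# three agents: two approve {x, y}, one approves {y, z}; y is most popular
shares = conditional_utilitarian([{'x', 'y'}, {'x', 'y'}, {'y', 'z'}])
```

In this example every agent approves y, so the whole budget flows to y; the axioms analyzed in the paper probe exactly how such concentration trades off against fairness and strategyproofness.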
28. Bridging kriging believer and expected improvement using bump hunting for expensive black-box optimization
- Author
-
Tapabrata Ray, Hemant Kumar Singh, and Bing Wang
- Subjects
Mathematical optimization ,Bridging (networking) ,Optimization problem ,Computer science ,Sampling (statistics) ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Range (mathematics) ,010201 computation theory & mathematics ,Kriging ,Black box ,0202 electrical engineering, electronic engineering, information engineering ,Infill ,020201 artificial intelligence & image processing ,Bump hunting - Abstract
For several real-world optimization problems, the evaluation of response functions may be expensive, computationally or otherwise. The number of design evaluations one can afford for such problems is therefore severely limited. Surrogate models are commonly used to guide the search for such computationally expensive optimization problems (CEOP). The surrogate models built using a limited number of true evaluations are used to identify the next infill/sampling location. Expected improvement (EI) is a well-known infill criterion that balances exploration and exploitation by accounting for both the mean and the uncertainty of the current model. However, recent studies have shown that, somewhat counter-intuitively, a greedy ("believer") strategy can compete well with EI in solving CEOPs. In this study, we examine the relative performance of the two infill methods across a range of problems and identify the factors that influence their performance. Based on this empirical analysis, we further propose an algorithm incorporating the strengths of the two strategies. The numerical experiments demonstrate that the proposed algorithm achieves competitive performance across a range of problems with diverse characteristics, making it a strong candidate for solving black-box CEOPs.
- Published
- 2021
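The two infill criteria compared above can be sketched in a few lines: the closed-form expected improvement for minimization, and the greedy "believer" choice of the lowest predicted mean. The candidate predictions below are made-up numbers for illustration; a real implementation would obtain the mean and standard deviation from the kriging model.

```python
import math

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI (minimization) at a candidate where the surrogate
    predicts mean mu and standard deviation sigma."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)          # no uncertainty: plain improvement
    z = (f_best - mu) / sigma
    return (f_best - mu) * normal_cdf(z) + sigma * normal_pdf(z)

# hypothetical surrogate predictions (mean, std) at three candidates; best-so-far = 1.0
candidates = [(0.9, 0.05), (1.1, 0.60), (1.0, 0.20)]
f_best = 1.0
ei_pick = max(range(3), key=lambda i: expected_improvement(*candidates[i], f_best))
believer_pick = min(range(3), key=lambda i: candidates[i][0])   # greedy: lowest mean
```

On these candidates the two criteria disagree: EI favors the high-uncertainty point while the believer picks the lowest predicted mean, which is exactly the exploration/exploitation tension the study examines.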
29. Runtime analysis of population-based evolutionary algorithms
- Author
-
Pietro S. Oliveto and Per Kristian Lehre
- Subjects
Theoretical computer science ,Computer science ,Evolutionary algorithm ,Interactive evolutionary computation ,0102 computer and information sciences ,02 engineering and technology ,Population based ,01 natural sciences ,Evolutionary computation ,Human-based evolutionary computation ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Algorithm ,Evolutionary programming - Published
- 2021
30. Decomposition multi-objective optimisation
- Author
-
Ke Li and Qingfu Zhang
- Subjects
010201 computation theory & mathematics ,Computer science ,business.industry ,0202 electrical engineering, electronic engineering, information engineering ,Decomposition (computer science) ,020201 artificial intelligence & image processing ,0102 computer and information sciences ,02 engineering and technology ,Current (fluid) ,Process engineering ,business ,01 natural sciences - Published
- 2021
31. Theoretical analyses of multi-objective evolutionary algorithms on multi-modal objectives
- Author
-
Weijie Zheng and Benjamin Doerr
- Subjects
Mathematical optimization ,Computer science ,Evolutionary algorithm ,Contrast (statistics) ,0102 computer and information sciences ,02 engineering and technology ,Track (rail transport) ,01 natural sciences ,Multi-objective optimization ,Modal ,010201 computation theory & mathematics ,Simple (abstract algebra) ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,Jump ,020201 artificial intelligence & image processing - Abstract
Previous theory work on multi-objective evolutionary algorithms considers mostly easy problems that are composed of unimodal objectives. This paper takes a first step towards a deeper understanding of how evolutionary algorithms solve multi-modal multi-objective problems. We propose the OneJumpZeroJump problem, a bi-objective problem with single objectives isomorphic to the classic jump function benchmark. We prove that the simple evolutionary multi-objective optimizer (SEMO) cannot compute the full Pareto front. In contrast, for all problem sizes n and all jump sizes [EQUATION], the global SEMO (GSEMO) covers the Pareto front in Θ((n-2k)n^k) iterations in expectation. To improve the performance, we combine the GSEMO with two approaches, a heavy-tailed mutation operator and a stagnation detection strategy, that showed advantages in single-objective multi-modal problems. Runtime improvements of asymptotic order at least k^Ω(k) are shown for both strategies. Our experiments verify the substantial runtime gains already for moderate problem sizes. Overall, these results show that the ideas recently developed for single-objective evolutionary algorithms can be effectively employed also in multi-objective optimization. This Hot-off-the-Press paper summarizes "Theoretical Analyses of Multi-Objective Evolutionary Algorithms on Multi-Modal Objectives" by B. Doerr and W. Zheng, which has been accepted for publication in AAAI 2021 [9].
- Published
- 2021
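The GSEMO algorithm analyzed above is short enough to sketch: keep an archive of mutually non-dominated solutions, pick a uniform random parent, flip each bit with probability 1/n, and insert the child if nothing in the archive dominates it. The version below runs on the easy OneMinMax benchmark rather than on OneJumpZeroJump; the problem choice, iteration budget, and names are illustrative assumptions.

```python
import random

def gsemo_oneminmax(n, iterations=20_000, seed=3):
    """Global SEMO on OneMinMax, f(x) = (#ones, #zeros), whose Pareto
    front spans the whole objective space -- an easier stand-in for the
    OneJumpZeroJump problem analyzed in the paper."""
    rng = random.Random(seed)
    f = lambda x: (sum(x), n - sum(x))

    def dominates(a, b):            # a weakly dominates b (and differs from it)
        return a != b and a[0] >= b[0] and a[1] >= b[1]

    x = tuple(rng.randint(0, 1) for _ in range(n))
    archive = {f(x): x}             # objective vector -> one solution achieving it
    for _ in range(iterations):
        parent = archive[rng.choice(list(archive))]
        # standard bit-wise mutation: flip each bit independently with prob. 1/n
        child = tuple(b ^ (rng.random() < 1.0 / n) for b in parent)
        fc = f(child)
        if not any(dominates(fv, fc) for fv in archive):
            archive = {fv: s for fv, s in archive.items() if not dominates(fc, fv)}
            archive[fc] = child
    return archive

front = gsemo_oneminmax(10)         # objective vectors on the discovered front
```

On OneMinMax every objective vector (i, n-i) is Pareto-optimal, so the archive only ever grows; on OneJumpZeroJump the fitness valleys of the jump functions are what produce the Θ((n-2k)n^k) covering time proved above.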
32. Supervised Average Consensus in Anonymous Dynamic Networks
- Author
-
Dariusz R. Kowalski and Miguel A. Mosteiro
- Subjects
Sequence ,Mathematical optimization ,Supervisor ,Computer science ,020206 networking & telecommunications ,0102 computer and information sciences ,02 engineering and technology ,Network topology ,01 natural sciences ,Upper and lower bounds ,Variable (computer science) ,Counting problem ,010201 computation theory & mathematics ,Scalability ,0202 electrical engineering, electronic engineering, information engineering ,Isoperimetric inequality - Abstract
How can a dynamic crowd reach consensus on an average value without revealing identities? In this work, we study the problem of Average Network Consensus in Anonymous Dynamic Networks (ADN). Network dynamicity is specified by the sequence of topology-graph isoperimetric numbers occurring over time, which we call the isoperimetric dynamicity of the network. The consensus variable is the average of the values initially held by the nodes, as is customary in the network-consensus literature. Given that an algorithm computing the average also yields the network size (i.e., the Counting problem), and vice versa, we further focus on the latter. We propose a deterministic distributed Average Network Consensus algorithm for ADNs, which we call isoperimetric Scalable Coordinated Anonymous Local Aggregation (iSCALA), and we analyze its performance for different scenarios, including worst-case (adversarial) and stochastic dynamic topologies. Our solution utilizes supervisor nodes, which have been shown to be necessary for computations in ADNs. The algorithm uses the isoperimetric dynamicity of the network as an input, meaning that only the isoperimetric number parameters (or a lower bound on them) must be given; topologies may occur arbitrarily or stochastically as long as they comply with those parameters. Previous work for adversarial ADNs overestimates the running time to deal with worst-case scenarios. For ADNs with given isoperimetric dynamicity, our analysis shows improved performance for some practical dynamic topologies, with cubic time or better for stochastic ADNs. Our experimental evaluation confirms that this performance is close to what could be achieved if the algorithm had centralized control of its stopping conditions; thus, there is no substantial benefit to centralized stopping control in the ADN system.
- Published
- 2021
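For contrast with the anonymous dynamic setting above, the textbook baseline for the same consensus variable is synchronous linear averaging on a fixed, known graph: each node nudges its value toward its neighbors', the sum is conserved every round, and so the common limit is the initial average. This is only a baseline sketch with illustrative parameters, not iSCALA.

```python
def average_consensus(values, edges, rounds=200, eps=0.2):
    """Synchronous linear averaging on a fixed undirected graph.  Each
    round conserves the sum of the values, and for eps below 1/max-degree
    all values converge to the initial average."""
    x = list(values)
    nbrs = {i: [] for i in range(len(x))}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    for _ in range(rounds):
        # every node moves toward the values of its neighbors
        x = [xi + eps * sum(x[j] - xi for j in nbrs[i]) for i, xi in enumerate(x)]
    return x

# path network 0-1-2-3; the initial average is 2.5
final = average_consensus([1.0, 2.0, 3.0, 4.0], [(0, 1), (1, 2), (2, 3)])
```

The difficulty addressed by iSCALA is precisely that this baseline presupposes a fixed topology and distinguishable neighbors, neither of which is available in an ADN.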
33. Min-Max Gathering of Oblivious Robots
- Author
-
Anisur Rahaman Molla and Subhash Bhagat
- Subjects
Theoretical computer science ,Deterministic algorithm ,Computer science ,Swarm robotics ,Mobile robot ,Context (language use) ,0102 computer and information sciences ,01 natural sciences ,Computer Science::Robotics ,010201 computation theory & mathematics ,Asynchronous communication ,A priori and a posteriori ,Robot ,Efficient energy use - Abstract
Gathering is one of the fundamental and well-studied problems in the context of autonomous and oblivious mobile robots. The gathering problem requires the robots, initially distributed on the Euclidean plane, to coordinate their movements to gather at a single point, not known to them a priori. We study a constrained version of the gathering problem, called the min-max gathering, which requires the robots to achieve gathering by minimizing the maximum distance traversed by any robot. A solution to the problem provides energy efficiency for the robots to achieve the goal. We present a deterministic algorithm for the min-max gathering problem in the Euclidean plane under two of the strongest adversarial models, namely, the asynchronous scheduler and the non-rigid movements of the robots. Moreover, we establish a minimal set of necessary and sufficient conditions to solve the problem.
- Published
- 2021
34. Descriptor based consensus for blockchain transactions
- Author
-
Zachary Painter, Victor Cook, Damian Dechev, and Christina Peterson
- Subjects
020203 distributed computing ,Blockchain ,Computer science ,0102 computer and information sciences ,02 engineering and technology ,Trusted third party ,Computer security ,computer.software_genre ,01 natural sciences ,Proof-of-stake ,010201 computation theory & mathematics ,Validator ,Proof-of-work system ,0202 electrical engineering, electronic engineering, information engineering ,computer ,Database transaction ,Byzantine fault tolerance ,Block (data storage) - Abstract
Blockchain networks use consensus mechanisms so participants can exchange transactions without the need to rely on a trusted third party. Consensus mechanisms using Proof of Work burn significant energy to select a block miner, and the resulting delay limits performance. Other consensus mechanisms such as Proof of Stake or Practical Byzantine Fault Tolerance still designate a single validator to append a block to the chain, preventing blocks from being built and published in parallel. In this paper we introduce a new consensus mechanism, Proof of Descriptor, enabling clients to work together to publish blockchain transactions using a descriptor object which stores information on the cooperative parallel execution of transactions. Proof of Descriptor consensus allows commutative transactions to be mined individually. It does not require a leader to propose the next block, enabling clients to cooperate on completing transactions, assembling blocks, and publishing them. We demonstrate that our approach is less prone to attack, since it is not vulnerable to a malicious leader, while simulations show a potential 20x improvement over the fastest sequential blockchain, Solana.
- Published
- 2021
35. Evolutionary meta reinforcement learning for portfolio optimization
- Author
-
Sang-Yeop Lee, Yujin Cha, Myoung Hoon Ha, Seung-geun Chi, and Moon Byung-Ro
- Subjects
Mathematical optimization ,Process (engineering) ,Computer science ,Financial market ,Context (language use) ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,010201 computation theory & mathematics ,Market data ,0202 electrical engineering, electronic engineering, information engineering ,Reinforcement learning ,020201 artificial intelligence & image processing ,Profitability index ,Markov decision process ,Portfolio optimization - Abstract
Portfolio optimization is a control problem whose objective is to find the optimal strategy for selecting the proportions of assets that provide the maximum return. Conventional approaches formulate the problem as a single Markov decision process and apply reinforcement learning methods to provide solutions. However, financial markets are well known to involve non-stationary processes, which violates this assumption. In this work, we reformulate the portfolio optimization problem to deal with the non-stationary nature of financial markets. In our approach, we divide a long-term process into multiple short-term processes to adapt to context changes and consider the portfolio optimization problem as a multitask control problem. We then propose an evolutionary meta reinforcement learning approach to search for an initial policy that can quickly adapt to upcoming target tasks. We model the policies as convolutional networks that score the match of patterns in market-data charts. Finally, we test our approach using real-world cryptocurrency data and show that it adapts well to changes in the market and leads to better profitability.
- Published
- 2021
36. A genetic algorithm for AC optimal transmission switching
- Author
-
Masood Jabarnejad
- Subjects
Power transmission, Mathematical optimization, Binary decision diagram, Computer science, Computation, Electric power system, Nonlinear system, Genetic algorithm, Heuristics, Integer (computer science) - Abstract
Optimal transmission switching (OTS) is a relatively new practice in power systems that can improve the economics of electric power systems integrated with renewable resources such as wind. In OTS modeling, binary decision variables are added to the optimal power flow (OPF) problem to represent the on/off switching status of lines. This extension of the alternating current optimal power flow (ACOPF) problem results in a mixed-integer nonlinear program (MINLP) that existing solution methods are not guaranteed to solve optimally and that requires excessive computation time for large real systems. In this paper we develop a genetic algorithm (GA) for the ACOPF-based OTS problem. Our GA exploits the structure of the power transmission network through a line scoring method and a graphical-distance-based local improvement technique to better search the solution space. We compare the proposed genetic algorithm with two greedy heuristics on test power systems with renewable energy resources. The results show that our approach finds more economic solutions, especially in larger power systems.
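The natural GA encoding implied above, one on/off bit per transmission line, can be sketched as follows. This is a generic sketch: the paper's line scoring and graphical-distance local improvement are not reproduced, and a real fitness evaluation would solve an ACOPF on the switched network rather than the placeholder used here.

```python
import random

def random_switching(n_lines, rng):
    """Chromosome for OTS: one on/off bit per transmission line."""
    return [rng.randint(0, 1) for _ in range(n_lines)]

def uniform_crossover(a, b, rng):
    """Standard uniform crossover; the paper's operators additionally
    exploit network structure (not shown here)."""
    return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]

rng = random.Random(42)
parent_a = random_switching(6, rng)
parent_b = random_switching(6, rng)
child = uniform_crossover(parent_a, parent_b, rng)
```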
- Published
- 2021
37. Fishing for interactions
- Author
-
Rodrigo C. Lira, Carmelo J. A. Bastos-Filho, Marcos Oliveira, Mariana Macedo, Ronaldo Menezes, Lydia Taw, Nishant Gurrapadi, and Diego Pinheiro
- Subjects
Mathematical optimization, Optimization problem, Computer science, Swarm behaviour, Particle swarm optimization, Network science, Swarm intelligence, Interaction network, Robustness (computer science), Benchmark (computing) - Abstract
Computational swarm intelligence has been shown to efficiently solve high-dimensional optimization problems thanks to its flexibility, robustness, and low computational cost. Despite these features, swarm-based algorithms are black boxes whose dynamics may be hard to understand. In this paper, we delve into the Fish School Search (FSS) algorithm by looking at how fish interact within the school. We find that the network emerging from these interactions is structurally invariant across the optimization of three benchmark functions: Rastrigin, Rosenbrock, and Schwefel. At the same time, our results reveal that the level of social interaction among the fish depends on the problem. We show that the absence of highly influential fish leads to slow-paced convergence in FSS and that changes in the intensity of social interactions enable good performance on both unimodal and multimodal problems. Finally, we examine two other swarm-based algorithms---the Artificial Bee Colony (ABC) and Particle Swarm Optimization (PSO) algorithms---and find that, for the same three benchmark functions, the structural invariance occurs only in FSS. We argue that FSS, ABC, and PSO have distinctive signatures of interaction structure and flow.
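The interaction network studied here can be illustrated with a toy aggregation step (a hypothetical sketch; what counts as an "interaction" between two fish follows the paper's definitions, which are not reproduced):

```python
from collections import Counter

def interaction_network(influence_log):
    """Aggregate a log of (influencer, influenced) pairs recorded during
    a swarm run into weighted edges and per-fish out-strength."""
    edges = Counter(influence_log)
    strength = Counter()
    for (src, _dst), w in edges.items():
        strength[src] += w
    return edges, strength

log = [(0, 1), (0, 2), (0, 1), (3, 1)]
edges, strength = interaction_network(log)
```

Highly influential fish then show up as nodes with large out-strength, which is the kind of structural feature the analysis above relates to convergence speed.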
- Published
- 2021
38. Stasis type particle stability in a stochastic model of particle swarm optimization
- Author
-
Krzysztof Trojanowski, Krzysztof Wójcik, and Tomasz Kulpa
- Subjects
Fitness landscape, Heuristic (computer science), Stochastic modelling, Computer science, Swarm behaviour, Particle swarm optimization, Stability (probability), Convergence (routing), Particle, Applied mathematics - Abstract
Particle Swarm Optimization (PSO) is a heuristic optimization technique in which particles explore optima of the fitness landscape defined over the search space. It is a practical approach but is vulnerable to incorrect tuning of the particle movement parameters. Therefore, stochastic models of a particle and a swarm are subject to analysis. In this paper, we study the stability properties of particles controlled by the universal update equation proposed by Cleghorn and Engelbrecht. We propose a new concept of particle state, namely the stasis state, in which the difference between two consecutive values of the particle location variance remains small after a sufficient number of steps. We also propose a convergence time measure, representing the minimal number of steps a particle location variance needs to reach stasis. Finally, we visualize the convergence time characteristics obtained in simulations.
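A minimal sketch of the proposed convergence time measure, under the simplifying assumption that "not changing much" means consecutive variance differences stay below a threshold `eps` for the rest of the run (the threshold and its exact role are illustrative assumptions):

```python
def stasis_time(variances, eps=1e-3):
    """Smallest step t from which all consecutive differences of the
    particle location variance stay below eps; None if never reached."""
    for t in range(1, len(variances)):
        if all(abs(variances[i] - variances[i - 1]) < eps
               for i in range(t, len(variances))):
            return t
    return None

v = [4.0, 2.0, 1.0, 0.5, 0.4999, 0.4999]  # variance settling over time
```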
- Published
- 2021
39. Evolving soft robotic jamming grippers
- Author
-
Frederic Maire, Seth G. Fitzgerald, Gary W. Delaney, and David Howard
- Subjects
Computer science, GRASP, Soft robotics, Mechanical engineering, Jamming, Curvature, Granular material, Discrete element method, Grippers, Robot - Abstract
Jamming Grippers are a novel class of soft robotic actuators that can robustly grasp and manipulate objects of arbitrary shape. They are formed by placing a granular material within a flexible skin connected to a vacuum pump and function by pressing the un-jammed gripper against a target object and evacuating the air to transition the material to a jammed (solid) state, gripping the target object. However, due to the complex interactions between grain morphology and target object shape, much uncertainty still remains regarding optimal constituent grain shapes for specific gripping applications. We address this challenge by utilising a modern Evolutionary Algorithm, NSGA-III, combined with a Discrete Element Method soft robot model to perform a multi-objective optimisation of grain morphology for use in jamming grippers for a range of target object sizes. Our approach optimises the microscopic properties of the system to elicit bespoke functional granular material performance driven by the complex relationship between the individual particle morphologies and the related emergent behaviour of the bulk state. Results establish the important contribution of grain morphology to gripper performance and the critical role of local surface curvature and the length scale governed by the relative sizes of the grains and target object.
- Published
- 2021
40. Resource availability and the evolution of cooperation in a 3D agent-based simulation
- Author
-
Christopher Stone, Lara Dal Molin, and Jasmeen Kanwal
- Subjects
Computer science, Complex network, Data science, Resource (project management), Quantitative analysis (finance), Artificial life, Software system, Social evolution, Adaptation (computer science), Function (engineering) - Abstract
There is evidence of a relationship between the dynamics of resource availability and the evolution of cooperative behaviour in complex networks. Previous studies have used mathematical models, agent-based models, and studies of hunter-gatherer societies to investigate the causal mechanisms behind this relationship. Here, we present a novel, agent-based software system, built using Unity 3D, which we employ to investigate the adaptation of food sharing networks to fluctuating resource availability. As a benefit of using Unity, our system possesses an easily customisable, visually compelling interface where evolution can be observed in real-time. Across four types of populations, under three environmental conditions, we performed a quantitative analysis of the evolving structure of social interactions. A biologically-inspired gene-sequencing function translates an arbitrarily extendable genome into phenotypic behaviour. Our results contribute to the understanding of how resource availability affects the evolutionary path taken by artificial societies. It emerges that environmental conditions have a greater impact on social evolution compared to the initial genetic configurations of each society. In particular, we find that scenarios of periodically fluctuating resources lead to the evolution of stable, tightly organised societies, which form small, local, mutualistic food-sharing networks.
- Published
- 2021
41. Concurrent model synchronisation with multiple objectives
- Author
-
Nils Weidmann and Gregor Engels
- Subjects
Mathematical optimization, Computer science, Search-based software engineering, Evolutionary algorithm, Multi-objective optimization, Quality (business), State (computer science), Function (engineering), Heuristics, Integer programming - Abstract
Concurrent model synchronisation, i.e. the (bidirectional) propagation of updates between two models, is an important problem in the area of model-driven engineering (MDE). Compared to other consistency management tasks, synchronising concurrent updates is especially challenging because they can conflict, such that restoring a consistent state is not possible when all updates must be considered. Recent approaches create a search space of possible solutions and determine the optimal solution via exact methods, such as integer linear programming (ILP), using a configurable, scalarised objective function that takes conflicting goals into account. However, determining suitable configuration parameters and improving runtime efficiency are still open issues, which are commonly addressed by using heuristics instead of exact methods. We investigate whether it is beneficial to apply heuristics to solve concurrent model synchronisation problems. First, a multi-objective evolutionary algorithm is used on small instances, for which all Pareto-optimal solutions can be presented to a user who selects the best one. Second, for larger models, we propose a method to determine suitable weightings for aggregating all objectives into a single function. Finally, these insights are used to recommend a strategy for determining solutions of satisfying quality within an acceptable amount of time.
- Published
- 2021
42. An efficient computational approach for automatic itinerary planning on web servers
- Author
-
Yinxuan Gui, Hongshu Guo, Yue-Jiao Gong, and Zeyuan Ma
- Subjects
Service (systems architecture), Web server, Optimization problem, Heuristic (computer science), Computer science, Distributed computing, Q-learning, Task (computing), Genetic algorithm, Reinforcement learning - Abstract
The automatic itinerary planning service must generate multiple-day schedules automatically under user-specified POIs and constraints. Known to be an NP-hard optimization problem, the task is commonly solved by (meta-)heuristic algorithms such as genetic algorithms (GAs). However, considering the concurrent requests received by a web server in practice, the time efficiency of existing itinerary planners can be rather unsatisfactory. To address this issue, this paper proposes a computational approach that hybridizes a GA with reinforcement learning (RL). The benefit is that we no longer need to re-execute the GA for each newly arrived request. Instead, the approach keeps track of historical solutions and maintains an RL agent that sequentially decides how to handle each new request. Experimental results show that the proposed approach stably provides high-quality solutions while greatly reducing the average time overhead of the web server.
- Published
- 2021
43. Genetic crossover in the evolution of time-dependent neural networks
- Author
-
Tim Grabowski and Jason Orlosky
- Subjects
Spiking neural network, Artificial neural network, Computer science, Node (networking), Crossover, Population, Machine learning, Evolutionary computation, Asynchronous communication, Artificial intelligence, Genetic representation - Abstract
Neural networks with temporal characteristics such as asynchronous spiking have made much progress towards biologically plausible artificial intelligence. However, genetic approaches for evolving network structures in this area are still relatively unexplored. In this paper, we examine a specific variant of time-dependent spiking neural networks (NN) in which the spatial and temporal relationships between neurons affect output. First, we built and customized a standard NN implementation to more closely model the time-delay characteristics of biological neurons. Next, we tested this with simulated tasks such as food foraging and image recognition, demonstrating success in multiple domains. We then developed a genetic representation for the network that allows for both scalable network size and compatibility with genetic crossover operations. Finally, we analyzed the effects of genetic crossover algorithms compared to random mutations on the food foraging task. Results showed that crossover operations based on node usage converge on a local maximum more quickly than random mutations, but suffer from genetic defects that reduce overall population performance.
- Published
- 2021
44. Multi-objective optimization across multiple concepts
- Author
-
Tapabrata Ray, Hemant Kumar Singh, and Brandon Parker
- Subjects
Mathematical optimization, Computer science, Compromise, Rigidity (psychology), Benchmarking, Multi-objective optimization, Range (mathematics), Engineering design process, Baseline (configuration management) - Abstract
Evolutionary multi-objective optimization (EMO) is often used in engineering design to identify products that exhibit the best possible compromise between multiple conflicting performance criteria. Much of the EMO literature considers algorithmic developments and benchmark problems involving a single concept only. In practice, however, there are many applications where the solution of a given problem may be represented using multiple concepts, and optimizing each of them individually to obtain the overall Pareto front may be inefficient. To address this gap, we first develop computer-aided models of multiple concepts for a simulation-based problem which can serve as a good application of multi-concept optimization. The problem involves the design of lattice structures with different types of unit cells (each constituting a concept), which can be customized to suit a range of real-world applications such as structures with minimal thermal expansion, low weight, and high rigidity. Furthermore, we develop baseline evolutionary strategies to search across multiple concepts simultaneously. Empirical experiments show that the presented strategies outperform the conventional single-concept approach. Some challenges are also uncovered, which prompts the need for further development.
- Published
- 2021
45. Analyzing the impact of product configuration variations on advanced driver assistance systems with search
- Author
-
Tao Yue, Kaiou Yin, Paolo Arcaini, and Shaukat Ali
- Subjects
Computer science, Automotive industry, Advanced driver assistance systems, Collision, Reliability engineering, Random search, Component (UML), Production (economics), Set (psychology), Degradation (telecommunications) - Abstract
Due to the complexity of designing vehicle products and the inherent uncertainties of their operating environments, ensuring the safety of their Advanced Driver Assistance Systems (ADASs) is crucial. In particular, very minor changes to a vehicle design, for instance due to production errors or component degradation, might lead to failures of ADASs and, therefore, catastrophic consequences such as collisions. Motivated by this, we propose a multi-objective search-based approach (employing NSGA-II) to find minimum changes to the configuration of a set of configurable parameters of a vehicle design such that the collision probability is maximized, thereby compromising its safety. We conducted experiments in a vehicle driving simulator to evaluate the effectiveness of our approach. Results show that our approach with NSGA-II significantly outperforms random search. Moreover, based on detailed analyses of the results, we identify parameters for which minor changes to their values lead the vehicle into collisions, and we demonstrate the importance of studying the configuration of multiple parameters in a single search and the impact of their interactions on causing collisions.
- Published
- 2021
46. The tiebreaking space of constructive heuristics for the permutation flowshop minimizing makespan
- Author
-
Marcus Ritt and Alexander J. Benavides
- Subjects
Mathematical optimization, Current (mathematics), Job shop scheduling, Computer science, Flow shop scheduling, Space (mathematics), Permutation, Constructive heuristic, Circuit breaker - Abstract
There has been intensive research on tiebreakers for insertion-based constructive heuristics for the permutation flow shop scheduling problem when the objective is to minimize the makespan. In this paper we analyze the space of all possible tiebreaking rules for the most common insertion orders and evaluate the efficacy of existing tiebreakers based on this analysis. We find that an optimal tiebreaker would produce results about 1% better than the current best tiebreakers. We propose a constructive heuristic based on a truncated cyclic best-first search in the space of all tiebreakers and show that it closely approximates the solutions such an optimal tiebreaker can achieve.
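The insertion-based construction with a pluggable tiebreaker can be sketched as a plain NEH-style heuristic. This is the standard textbook scheme, not the authors' truncated cyclic best-first search; the `tiebreak` hook marks the point where all the tiebreaking rules analyzed above differ.

```python
def makespan(seq, p):
    """Makespan of job sequence `seq`; p[j][k] = time of job j on machine k."""
    m = len(p[0])
    comp = [0] * m                      # completion times per machine
    for j in seq:
        comp[0] += p[j][0]
        for k in range(1, m):
            comp[k] = max(comp[k], comp[k - 1]) + p[j][k]
    return comp[-1]

def neh(p, tiebreak=min):
    """NEH-style insertion heuristic; `tiebreak` chooses among insertion
    positions with equal makespan (default: earliest position)."""
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for j in order[1:]:
        best, candidates = None, []
        for pos in range(len(seq) + 1):
            c = makespan(seq[:pos] + [j] + seq[pos:], p)
            if best is None or c < best:
                best, candidates = c, [pos]
            elif c == best:
                candidates.append(pos)
        seq.insert(tiebreak(candidates), j)
    return seq, makespan(seq, p)

p = [[3, 2], [1, 4], [2, 2]]            # 3 jobs, 2 machines
seq, ms = neh(p)
```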
- Published
- 2021
47. Evolving and merging hebbian learning rules
- Author
-
Joachim Winther Pedersen and Sebastian Risi
- Subjects
Artificial neural network, Generalization, Computer science, Overfitting, Bottleneck, Hebbian theory, Robustness (computer science), Artificial intelligence, Cluster analysis, Set (psychology) - Abstract
Generalization to out-of-distribution (OOD) circumstances after training remains a challenge for artificial agents. To improve the robustness displayed by plastic Hebbian neural networks, we evolve a set of Hebbian learning rules, where multiple connections are assigned to a single rule. Inspired by the biological phenomenon of the genomic bottleneck, we show that by allowing multiple connections in the network to share the same local learning rule, it is possible to drastically reduce the number of trainable parameters, while obtaining a more robust agent. During evolution, by iteratively using simple K-Means clustering to combine rules, our Evolve and Merge approach is able to reduce the number of trainable parameters from 61,440 to 1,920, while at the same time improving robustness, all without increasing the number of generations used. While optimization of the agents is done on a standard quadruped robot morphology, we evaluate the agents' performances on slight morphology modifications in a total of 30 unseen morphologies. Our results add to the discussion on generalization, overfitting and OOD adaptation. To create agents that can adapt to a wider array of unexpected situations, Hebbian learning combined with a regularising "genomic bottleneck" could be a promising research direction.
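The Evolve and Merge step, using K-Means to let many connections share one rule, can be sketched as follows. The rule vectors here are 2-D and illustrative only; the paper's rules have their own parameterization, and a tiny hand-rolled K-Means stands in for whatever clustering implementation was actually used.

```python
def assign(p, cents):
    """Index of the nearest centroid (squared Euclidean distance)."""
    return min(range(len(cents)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(p, cents[c])))

def merge_rules(rules, k, iters=20):
    """Cluster per-connection Hebbian rule vectors with a tiny K-Means and
    map every connection to a cluster, shrinking the number of distinct
    trainable rules from len(rules) to k."""
    cents = [rules[i] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for r in rules:
            groups[assign(r, cents)].append(r)
        cents = [tuple(sum(xs) / len(g) for xs in zip(*g)) if g else cents[i]
                 for i, g in enumerate(groups)]
    return cents, [assign(r, cents) for r in rules]

# Four connections with 2-D rule vectors (illustrative values)
rules = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
cents, labels = merge_rules(rules, k=2)
```

After merging, connections with the same label share one learning rule, which is how the parameter count drops from 61,440 toward 1,920 in the paper's experiments.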
- Published
- 2021
48. Using novelty search to explicitly create diversity in ensembles of classifiers
- Author
-
David Burth Kurka, Rui P. Cardoso, Emma Hart, and Jeremy Pitt
- Subjects
Neuroevolution, Computer science, Novelty, Machine learning, Ensemble learning, Ensembles of classifiers, Search algorithm, Classifier (linguistics), Artificial intelligence, Gradient descent - Abstract
The diversity between individual learners in an ensemble is known to influence its performance. However, there is no standard agreement on how diversity should be defined, and thus how to exploit it to construct a high-performing classifier. We propose two new behavioural diversity metrics based on the divergence of errors between models. Following a neuroevolution approach, these metrics are then used to guide a novelty search algorithm to search a space of neural architectures and discover behaviourally diverse classifiers, iteratively adding the models with high diversity score to an ensemble. The parameters of each ANN are tuned individually with a standard gradient descent procedure. We test our approach on three benchmark datasets from Computer Vision --- CIFAR-10, CIFAR-100, and SVHN --- and find that the ensembles generated significantly outperform ensembles created without explicitly searching for diversity and that the error diversity metrics we propose lead to better results than others in the literature. We conclude that our empirical results signpost an improved approach to promoting diversity in ensemble learning, identifying what sort of diversity is most relevant and proposing an algorithm that explicitly searches for it without selecting for accuracy.
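A hypothetical disagreement-style metric in the spirit of the error-divergence idea described above (the paper's two metrics are not reproduced here; this is the simplest version of "divergence of errors between models"):

```python
def error_divergence(errors_a, errors_b):
    """Fraction of samples on which exactly one of the two models errs.
    Inputs are per-sample error indicators (1 = misclassified)."""
    return sum(ea != eb for ea, eb in zip(errors_a, errors_b)) / len(errors_a)

e1 = [1, 0, 0, 1]  # model A's per-sample errors
e2 = [0, 0, 1, 1]  # model B's per-sample errors
```

A novelty search would then favour candidate classifiers whose error vector diverges from those already in the ensemble, rather than candidates with the best accuracy.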
- Published
- 2021
49. On the impact of tangled program graph marking schemes under the atari reinforcement learning benchmark
- Author
-
Ryan Amaral, Robert J. Smith, Alexandru Ianta, Caleidgh Bayer, and Malcolm I. Heywood
- Subjects
Scheme (programming language), Modularity (networks), Theoretical computer science, Heuristic, Computer science, Learning environment, Variation (game tree), Benchmark (computing), Reinforcement learning, Adaptation (computer science) - Abstract
Tangled program graphs (TPG) support emergent modularity by first identifying subsets of programs that can usefully coexist (a team, i.e. a graph node) and then identifying the circumstances under which to reference other teams (arc adaptation). Variation operators manipulate the content of teams and arcs, which introduces cycles into the TPG structures. Previously, this effect was eradicated at run time by marking nodes while evaluating TPG individuals. In this work, a new marking heuristic is introduced: arc (learner) marking. This means that nodes can be revisited, but the same arcs cannot. We investigate the impact of this through 18 titles from the Arcade Learning Environment. The performance and complexity of the policies appear to be similar, but specific tasks (game titles) result in preferences for one scheme over the other.
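The difference between node marking and arc marking can be illustrated on a toy graph containing a two-team cycle. This is a simplified sketch: each team here has a single fixed successor, whereas real TPG evaluation selects the outgoing arc via program bids.

```python
def traverse(start, succ, mark="arc"):
    """Follow team-to-team references; with node marking a revisited team
    halts traversal, with arc marking only a repeated arc does."""
    seen, node, path = set(), start, [start]
    if mark == "node":
        seen.add(start)
    while node in succ:
        nxt = succ[node]
        token = (node, nxt) if mark == "arc" else nxt
        if token in seen:
            break
        seen.add(token)
        node = nxt
        path.append(node)
    return path

cycle = {"A": "B", "B": "A"}  # two teams referencing each other
```

Under node marking the cycle is cut after one step; under arc marking team A can be revisited once before the repeated arc halts traversal.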
- Published
- 2021
50. MA-ABC
- Author
-
Violet R. Syrotiuk, Muhilan Ramamoorthy, and Stephanie Forrest
- Subjects
Mathematical optimization, Fitness function, Heuristic (computer science), Total cost, Computer science, Multi-objective optimization, Evolutionary computation, Genetic algorithm, Memetic algorithm, Arc routing - Abstract
Services such as garbage collection, road gritting, street sweeping, and power line inspection can each be formulated as a capacitated arc routing problem (CARP). The traditional formulation of CARP has the goal of minimizing the total cost of the routes making up a solution. Recently, operators of such services have come to require routes that are balanced and visually attractive in addition to being low cost. Balanced routes are approximately equal in length and provide fair work assignments. Visual attractiveness is subjective, but it usually involves non-crossing routes that provide well-defined service areas. These additional features are important because they address operational complexities that arise from using the routes in practice. This paper presents MA-ABC, a memetic algorithm that finds solutions for CARP that maximize route attractiveness and balance while minimizing total cost. A novel fitness function combines route overlap with route contiguity to assess route attractiveness. MA-ABC is the first to incorporate attractiveness in a three-objective search for heuristic solutions for CARP. Experimental results on CARP benchmark instances show that MA-ABC finds a diverse set of heuristic solutions at the Pareto front, providing a wide choice for service operators to trade off design objectives.
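The balance objective described above can be sketched as the spread between route lengths (a common formulation; the paper's exact balance and attractiveness measures may differ, and the edge ids and costs below are invented):

```python
def route_lengths(routes, cost):
    """Cost of each route; `cost` maps a serviced edge id to its cost."""
    return [sum(cost[e] for e in r) for r in routes]

def total_cost(routes, cost):
    return sum(route_lengths(routes, cost))

def balance(routes, cost):
    """Balance objective: spread between the longest and shortest route
    (0 means perfectly fair work assignments)."""
    lengths = route_lengths(routes, cost)
    return max(lengths) - min(lengths)

cost = {"a": 4, "b": 2, "c": 3, "d": 3}
fair = [["a", "b"], ["c", "d"]]      # route lengths 6 and 6
unfair = [["a", "b", "c"], ["d"]]    # route lengths 9 and 3
```

A three-objective search such as MA-ABC would minimize total cost and the balance spread while maximizing an attractiveness score, rather than collapsing them into one number.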
- Published
- 2021