25 results for "Daniel Johannsen"
Search Results
2. Micropolar plasticity—Part I: modeling based on curvature tensors related by mixed transformations
- Author
-
Charalampos Tsakmakis and Daniel Johannsen
- Subjects
Physics::Fluid Dynamics ,Classical mechanics ,Deformation (mechanics) ,Cauchy stress tensor ,Mechanical Engineering ,Finite strain theory ,Solid mechanics ,Computational Mechanics ,von Mises yield criterion ,Context (language use) ,Plasticity ,Curvature ,Mathematics - Abstract
When formulating finite deformation micropolar plasticity, the structure of the theory strongly depends on the choice of the variables describing the deformation kinematics. This holds true even for classical plasticity. However, in contrast to classical plasticity, the set of kinematical variables in micropolar plasticity includes, besides strain tensors, so-called micropolar curvature tensors. There are only a few investigations addressing such aspects, so the aim of the paper is to highlight the effect of a specific micropolar curvature kinematics on the structure of a micropolar plasticity theory. We do this by developing a general finite deformation micropolar plasticity theory which relies upon a class of micropolar curvature tensors related to each other by mixed transformations. That is, the pull-back and push-forward transformations characterizing the class involve both deformation gradient and micropolar rotation tensors. The curvature kinematics is discussed by using geometrical methods developed previously. The plasticity theory is based on the assumption that the yield function and the flow rules are functions of specific micropolar Mandel stress tensors. The definition of the Mandel stress tensor is suggested by the adopted curvature kinematics and reveals a characteristic feature of the resulting plasticity. Moreover, the presence of curvature variables in the plastic arc length gives reason to introduce a characteristic internal material length, which in turn seems to shape the formulation of the theory. A specific version of von Mises micropolar plasticity with kinematic and isotropic hardening, derived in the theoretical context of the present paper, is elaborated in Part II.
- Published
- 2019
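For readers outside continuum mechanics, the classical (non-micropolar) von Mises yield function with kinematic hardening (back stress X) and isotropic hardening (R) is recalled below purely as a familiar reference point; it is not the micropolar form with curvature variables and Mandel-type stresses derived in the paper, and the symbols are the standard textbook ones.

```latex
% Classical von Mises yield function with kinematic and isotropic hardening
% (standard textbook form, shown only for orientation):
f(\boldsymbol{\sigma}, \boldsymbol{X}, R)
  = \sqrt{\tfrac{3}{2}\,(\boldsymbol{s}-\boldsymbol{X}):(\boldsymbol{s}-\boldsymbol{X})}
    - \left(\sigma_y + R\right) \le 0,
\qquad
\boldsymbol{s} = \boldsymbol{\sigma} - \tfrac{1}{3}\,(\operatorname{tr}\boldsymbol{\sigma})\,\boldsymbol{I}.
```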
3. Optimizing spatial and tonal data for PDE-based inpainting
- Author
-
Laurent Hoeltgen, Markus Mainberger, Sebastian Hoffmann, Joachim Weickert, Ching Hoo Tang, Simon Setzer, Daniel Johannsen, Frank Neumann, and Benjamin Doerr
- Published
- 2016
4. Evolutionary Algorithms for Quantum Computers
- Author
-
Piyush P. Kurur, Johannes Lengler, and Daniel Johannsen
- Subjects
Theoretical computer science ,General Computer Science ,Computer science ,business.industry ,Applied Mathematics ,Best-first search ,Evolutionary computation ,Theory ,Quantum algorithm ,Runtime analysis ,Computer Science Applications ,Search algorithm ,Beam search ,Local search (optimization) ,Heuristics ,business ,Quantum computer - Abstract
In this article, we formulate and study quantum analogues of randomized search heuristics, which make use of Grover search (in Proceedings of the 28th Annual ACM Symposium on Theory of Computing, pp. 212–219. ACM, New York, 1996) to accelerate the search for improved offspring. We then specialize the above formulation to two specific search heuristics: Random Local Search and the (1+1) Evolutionary Algorithm. We call the resulting quantum versions of these search heuristics Quantum Local Search and the (1+1) Quantum Evolutionary Algorithm. We conduct a rigorous runtime analysis of these quantum search heuristics in the computation model of quantum algorithms, which, besides classical computation steps, also permits those unique to quantum computing devices. To this end, we study the six elementary pseudo-Boolean optimization problems OneMax, LeadingOnes, Discrepancy, Needle, Jump, and TinyTrap. It turns out that the advantage of the respective quantum search heuristic over its classical counterpart varies with the problem structure and ranges from no speedup at all for the problem Discrepancy to exponential speedup for the problem TinyTrap. We show that these runtime behaviors are closely linked to the probabilities of performing successful mutations in the classical algorithms. (Published in Algorithmica 68(1).)
- Published
- 2013
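As a rough illustration of why the speedup reported above hinges on the probability of a successful mutation: a classical (1+1) heuristic waits on the order of 1/p trials for an improving offspring when one mutation improves with probability p, while a Grover-based offspring search needs on the order of 1/sqrt(p) queries. The minimal sketch below only computes these two quantities for OneMax under standard bit-flip mutation; it does not simulate a quantum computer, ignores constant factors, and its function names are ad hoc.

```python
import math

def improvement_probability(n: int, ones: int) -> float:
    """Lower bound on the probability that standard 1/n bit-flip mutation
    strictly improves a OneMax solution with `ones` one-bits: flip exactly
    one zero-bit and leave every other bit unchanged."""
    zeros = n - ones
    return zeros * (1.0 / n) * (1.0 - 1.0 / n) ** (n - 1)

def waiting_times(n: int, ones: int) -> tuple[float, float]:
    """Return (classical, Grover-style) waiting times up to constants:
    classical ~ 1/p mutation trials, Grover-style ~ 1/sqrt(p) queries."""
    p = improvement_probability(n, ones)
    return 1.0 / p, 1.0 / math.sqrt(p)

if __name__ == "__main__":
    n = 1000
    for ones in (0, n // 2, n - 1):
        classical, grover = waiting_times(n, ones)
        print(f"{ones:4d} one-bits: classical ~ {classical:10.1f}, "
              f"Grover-style ~ {grover:8.1f}")
```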
5. An SQL-Based Query Language and Engine for Graph Pattern Matching
- Author
-
Christian Krause, Daniel Johannsen, Radwan Deeb, Anton Niadzelka, David Knacker, and Kai-Uwe Sattler
- Subjects
Connected component ,SQL ,Graph rewriting ,Graph database ,Computer science ,Programming language ,020207 software engineering ,02 engineering and technology ,Query language ,computer.software_genre ,SAP HANA ,0202 electrical engineering, electronic engineering, information engineering ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,computer ,computer.programming_language ,RDF query language - Abstract
Interest in graph databases has increased in recent years. Several variants of graph query languages exist, from low-level programming interfaces to high-level, declarative languages. In this paper, we describe a novel SQL-based language for modeling high-level graph queries. Our approach is based on graph pattern matching concepts, specifically nested graph conditions with distance constraints, as well as graph algorithms for calculating nested projections, shortest paths and connected components. Extending SQL with graph concepts enables the reuse of syntax elements for arithmetic expressions, aggregates, sorting and limits, and the combination of graph and relational queries. We evaluate the language concepts and our experimental SAP HANA Graph Scale-Out Extension (GSE) prototype (this paper is not official SAP communication material; it discusses a research-only prototype, not an existing or future SAP product, and any business decisions made concerning SAP products should be based on official SAP communication material) using the LDBC Social Network Benchmark. In this work we consider only complex read-only queries, but the presented language paves the way for an SQL-based graph manipulation language formally based on graph transformations.
- Published
- 2016
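The query syntax of the GSE prototype is not reproduced here. Purely to illustrate the kind of primitive the abstract refers to (graph pattern matching with distance constraints), the following self-contained Python sketch checks whether a target vertex lies within k hops of a source vertex; all names are ad hoc and the example graph is made up.

```python
from collections import deque

def within_distance(adj: dict, source, target, k: int) -> bool:
    """Breadth-first search: is `target` reachable from `source` in at most
    k hops?  This is the kind of distance constraint a graph pattern
    (e.g. 'friend-of-friend within 2 hops') imposes."""
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == target:
            return True
        if dist == k:
            continue
        for neighbour in adj.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, dist + 1))
    return False

# Tiny example graph: alice - bob - carol
graph = {"alice": ["bob"], "bob": ["alice", "carol"], "carol": ["bob"]}
print(within_distance(graph, "alice", "carol", 2))  # True
print(within_distance(graph, "alice", "carol", 1))  # False
```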
6. Non-existence of linear universal drift functions
- Author
-
Carola Winzen, Daniel Johannsen, and Benjamin Doerr (Max-Planck-Institut für Informatik (MPII), Max-Planck-Gesellschaft)
- Subjects
FOS: Computer and information sciences ,General Computer Science ,F.2 ,[INFO.INFO-DS]Computer Science [cs]/Data Structures and Algorithms [cs.DS] ,Evolutionary algorithm ,Evolutionary computation ,0102 computer and information sciences ,02 engineering and technology ,[INFO.INFO-NE]Computer Science [cs]/Neural and Evolutionary Computing [cs.NE] ,Mathematical proof ,01 natural sciences ,Upper and lower bounds ,Theoretical Computer Science ,Combinatorics ,Bio-inspired computation ,FOS: Mathematics ,0202 electrical engineering, electronic engineering, information engineering ,Applied mathematics ,Neural and Evolutionary Computing (cs.NE) ,ComputingMilieux_MISCELLANEOUS ,Mathematics ,Linear function (calculus) ,Probability (math.PR) ,Computer Science - Neural and Evolutionary Computing ,Function (mathematics) ,010201 computation theory & mathematics ,020201 artificial intelligence & image processing ,Runtime analysis ,Heuristics ,Mathematics - Probability ,Evolutionary programming ,Computer Science(all) - Abstract
Drift analysis has become a powerful tool to prove bounds on the runtime of randomized search heuristics. It allows, for example, fairly simple proofs for the classical problem of how the (1+1) Evolutionary Algorithm (EA) optimizes an arbitrary pseudo-Boolean linear function. The key idea of drift analysis is to measure the progress via another pseudo-Boolean function (called the drift function) and to use deeper results from probability theory to derive from this a good bound for the runtime of the EA. Surprisingly, all these results manage to use the same drift function for all linear objective functions. In this work, we show that such universal drift functions only exist if the mutation probability is close to the standard value of $1/n$. (This work contains parts of the GECCO 2010 and CEC 2010 papers of the same authors.)
- Published
- 2012
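For orientation, the way a drift function g is used in such analyses is captured by the additive drift theorem (He and Yao); the statement below is the standard textbook form, not a result specific to the paper above.

```latex
% Additive drift theorem (standard form).  Let (X_t) be a stochastic process,
% let g map states to [0, \infty), and let T be the first time t with g(X_t) = 0.
\text{If }\;
\mathbb{E}\bigl[g(X_t) - g(X_{t+1}) \mid X_t = x\bigr] \;\ge\; \delta
\;\text{ for all } x \text{ with } g(x) > 0,
\quad\text{then}\quad
\mathbb{E}\bigl[T \mid X_0\bigr] \;\le\; \frac{g(X_0)}{\delta}.
```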
7. Expanders are universal for the class of all spanning trees
- Author
-
Michael Krivelevich, Wojciech Samotij, and Daniel Johannsen
- Subjects
Statistics and Probability ,Random graph ,Class (set theory) ,Spanning tree ,Degree (graph theory) ,Applied Mathematics ,010102 general mathematics ,05C05, 05C35, 05C80, 05D40 ,Context (language use) ,0102 computer and information sciences ,Function (mathematics) ,01 natural sciences ,Theoretical Computer Science ,Combinatorics ,Computational Theory and Mathematics ,010201 computation theory & mathematics ,Random regular graph ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,FOS: Mathematics ,Mathematics - Combinatorics ,Almost surely ,Combinatorics (math.CO) ,0101 mathematics ,Mathematics - Abstract
Given a class of graphs F, we say that a graph G is universal for F, or F-universal, if every H in F is contained in G as a subgraph. The construction of sparse universal graphs for various families F has received a considerable amount of attention. One is particularly interested in tight F-universal graphs, i.e., graphs whose number of vertices is equal to the largest number of vertices in a graph from F. Arguably, the most studied case is that when F is some class of trees. Given integers $n$ and $\Delta$, we denote by $T(n,\Delta)$ the class of all n-vertex trees with maximum degree at most $\Delta$. In this work, we show that every n-vertex graph satisfying certain natural expansion properties is $T(n,\Delta)$-universal or, in other words, contains every spanning tree of maximum degree at most $\Delta$. Our methods also apply to the case when $\Delta$ is some function of n. The result has a few very interesting implications. Most importantly, we obtain that the random graph G(n,p) is asymptotically almost surely (a.a.s.) universal for the class of all bounded-degree spanning (i.e., n-vertex) trees provided that $p \geq c n^{-1/3} \log^2 n$, where $c > 0$ is a constant. Moreover, a corresponding result holds for the random regular graph of degree pn. In fact, we show that if $\Delta$ satisfies $\log n \leq \Delta \leq n^{1/3}$, then the random graph G(n,p) with $p \geq c \Delta n^{-1/3} \log n$ and the random r-regular n-vertex graph with $r \geq c \Delta n^{2/3} \log n$ are a.a.s. $T(n,\Delta)$-universal. Another interesting consequence is the existence of locally sparse n-vertex $T(n,\Delta)$-universal graphs. For constant $\Delta$, we show that one can (randomly) construct n-vertex $T(n,\Delta)$-universal graphs with clique number at most five. Finally, we show robustness of random graphs with respect to being universal for $T(n,\Delta)$ in the context of the Maker-Breaker tree-universality game.
- Published
- 2013
8. More Effective Crossover Operators for the All-Pairs Shortest Path Problem
- Author
-
Madeleine Theile, Daniel Johannsen, Timo Kötzing, Frank Neumann, and Benjamin Doerr
- Subjects
FOS: Computer and information sciences ,Mathematical optimization ,General Computer Science ,Crossover ,MathematicsofComputing_NUMERICALANALYSIS ,Evolutionary algorithm ,Computer Science - Neural and Evolutionary Computing ,Evolutionary computation ,Theoretical Computer Science ,Shortest Path Faster Algorithm ,TheoryofComputation_ANALYSISOFALGORITHMSANDPROBLEMCOMPLEXITY ,Shortest path problem ,K shortest path routing ,Neural and Evolutionary Computing (cs.NE) ,Algorithm ,Yen's algorithm ,Selection (genetic algorithm) ,Mathematics ,Computer Science(all) ,MathematicsofComputing_DISCRETEMATHEMATICS - Abstract
The all-pairs shortest path problem is the first non-artificial problem for which it was shown that adding crossover can significantly speed up a mutation-only evolutionary algorithm. Recently, the analysis of this algorithm was refined and it was shown to have an expected optimization time (w.r.t. the number of fitness evaluations) of $\Theta(n^{3.25}(\log n)^{0.25})$. In contrast to this simple algorithm, evolutionary algorithms used in practice usually employ refined recombination strategies in order to avoid the creation of infeasible offspring. We study extensions of the basic algorithm by two such concepts which are central in recombination, namely repair mechanisms and parent selection. We show that repairing infeasible offspring leads to an improved expected optimization time of $O(n^{3.2}(\log n)^{0.2})$. As a second part of our study we prove that choosing parents that guarantee feasible offspring results in an even better optimization time of $O(n^{3}\log n)$. Both results show that already simple adjustments of the recombination operator can asymptotically improve the runtime of evolutionary algorithms.
- Published
- 2012
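A hedged sketch of the flavour of recombination discussed above, not necessarily the authors' exact operator: individuals are candidate paths indexed by their endpoints, and the second parent is chosen so that it starts where the first parent ends, which makes the concatenated offspring a feasible walk by construction. Mutation, fitness comparison, and the repair variant are omitted, and all names are illustrative.

```python
import random

def feasible_crossover(population: dict):
    """population maps (start, end) -> list of vertices (a candidate path).
    Choose a first parent uniformly, then a second parent that starts at the
    first parent's end vertex, so that concatenation is always a walk."""
    (s1, e1), path1 = random.choice(list(population.items()))
    compatible = [(key, p) for key, p in population.items() if key[0] == e1]
    if not compatible:
        return None  # no feasible partner; a full EA would fall back to mutation
    (_, e2), path2 = random.choice(compatible)
    offspring = path1 + path2[1:]   # drop the shared junction vertex once
    return (s1, e2), offspring

# Tiny example: candidate paths in a 4-vertex graph
pop = {(0, 1): [0, 1], (1, 3): [1, 2, 3], (2, 3): [2, 3]}
print(feasible_crossover(pop))
```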
9. Expanders Are Universal for the Class of All Spanning Trees (Extended Abstract)
- Author
-
Michael Krivelevich, Daniel Johannsen, and Wojciech Samotij
- Subjects
Combinatorics ,Class (set theory) ,Spanning tree ,Mathematics - Published
- 2012
10. Optimising Spatial and Tonal Data for Homogeneous Diffusion Inpainting
- Author
-
Sebastian Hoffmann, Markus Mainberger, Joachim Weickert, Ching Hoo Tang, Benjamin Doerr, Daniel Johannsen, and Frank Neumann
- Subjects
Partial differential equation ,Pixel ,business.industry ,Probabilistic logic ,Inpainting ,Least squares ,Field (computer science) ,Image (mathematics) ,Computer Science::Computer Vision and Pattern Recognition ,Computer Science::Multimedia ,Computer vision ,Artificial intelligence ,business ,Algorithm ,Image compression ,Mathematics - Abstract
Finding optimal inpainting data plays a key role in the field of image compression with partial differential equations (PDEs). In this paper, we optimise the spatial as well as the tonal data such that an image can be reconstructed with minimised error by means of discrete homogeneous diffusion inpainting. To optimise the spatial distribution of the inpainting data, we apply a probabilistic data sparsification followed by a nonlocal pixel exchange. Afterwards we optimise the grey values in these inpainting points in an exact way using a least squares approach. The resulting method allows almost perfect reconstructions with only 5% of all pixels. This demonstrates that a thorough data optimisation can compensate for most deficiencies of a suboptimal PDE interpolant.
- Published
- 2012
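To make the reconstruction step concrete: discrete homogeneous diffusion inpainting fills in the unknown pixels by solving the Laplace equation with the stored pixels as fixed data. The sketch below uses plain Jacobi iterations on a numpy array and is only a minimal illustration of that inpainting step under ad hoc parameters; the paper's actual contributions (probabilistic sparsification, nonlocal pixel exchange, least-squares tonal optimisation) are not reproduced.

```python
import numpy as np

def diffusion_inpaint(image: np.ndarray, mask: np.ndarray,
                      iterations: int = 3000) -> np.ndarray:
    """Fill the pixels where mask == False by homogeneous diffusion:
    repeatedly replace each unknown pixel by the mean of its four neighbours
    (Jacobi iteration for the discrete Laplace equation), keeping the known
    pixels (mask == True) fixed."""
    u = np.where(mask, image, image[mask].mean())   # crude initialisation
    for _ in range(iterations):
        padded = np.pad(u, 1, mode="edge")
        neighbour_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                          padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        u = np.where(mask, image, neighbour_mean)
    return u

# Toy example: keep roughly 5% of the pixels of a smooth ramp and reconstruct.
rng = np.random.default_rng(0)
img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
known = rng.random(img.shape) < 0.05
rec = diffusion_inpaint(img, known)
print("mean absolute error:", float(np.abs(rec - img).mean()))
```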
11. Evolutionary Computation in Combinatorial Optimization
- Author
-
Daniel Johannsen
- Published
- 2011
12. Runtime analysis of the (1+1) evolutionary algorithm on strings over finite alphabets
- Author
-
Benjamin Doerr, Martin Schmidt, and Daniel Johannsen
- Subjects
Discrete mathematics ,Combinatorics ,Work (thermodynamics) ,Evolutionary algorithm ,Space (mathematics) ,Linear potential ,Time complexity ,Upper and lower bounds ,Linear function ,Integer (computer science) ,Mathematics - Abstract
In this work, we investigate a (1+1) Evolutionary Algorithm for optimizing functions over the space {0,...,r}^n, where r is a positive integer. We show that for linear functions over {0,1,2}^n, the expected runtime of this algorithm is O(n log n). This result generalizes an existing result on pseudo-Boolean functions and is derived using drift analysis. We also show that for large values of r, no upper bound for the runtime of the (1+1) Evolutionary Algorithm for linear functions on {0,...,r}^n can be obtained with this approach, nor with any other approach based on drift analysis with weight-independent linear potential functions.
- Published
- 2011
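A minimal sketch of the algorithmic scheme analysed above, under the assumption that mutation resamples each position independently with probability 1/n to a uniform value in {0, ..., r}; the paper may use a different mutation operator, so this is only an illustration of the (1+1) scheme on a non-binary alphabet, with an ad hoc linear fitness function.

```python
import random

def one_plus_one_ea(n: int, r: int, fitness, max_iters: int = 100_000):
    """(1+1) EA on {0, ..., r}^n: keep a single parent, create one offspring
    by resampling each position with probability 1/n, accept if not worse."""
    parent = [random.randint(0, r) for _ in range(n)]
    best = fitness(parent)
    for t in range(max_iters):
        child = [random.randint(0, r) if random.random() < 1.0 / n else x
                 for x in parent]
        value = fitness(child)
        if value >= best:
            parent, best = child, value
        if best == n * r:          # optimum of the example fitness below
            return t + 1, parent
    return max_iters, parent

# Example: the linear function f(x) = sum(x_i) over {0,1,2}^n
iterations, _ = one_plus_one_ea(n=50, r=2, fitness=sum)
print("iterations until optimum:", iterations)
```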
13. Multiplicative Drift Analysis
- Author
-
Benjamin Doerr, Carola Winzen, and Daniel Johannsen (Max-Planck-Institut für Informatik (MPII), Max-Planck-Gesellschaft)
- Subjects
FOS: Computer and information sciences ,Logarithm ,General Computer Science ,[INFO.INFO-DS]Computer Science [cs]/Data Structures and Algorithms [cs.DS] ,Evolutionary algorithm ,0102 computer and information sciences ,02 engineering and technology ,[INFO.INFO-NE]Computer Science [cs]/Neural and Evolutionary Computing [cs.NE] ,01 natural sciences ,Upper and lower bounds ,symbols.namesake ,0202 electrical engineering, electronic engineering, information engineering ,Applied mathematics ,Neural and Evolutionary Computing (cs.NE) ,Time complexity ,ComputingMilieux_MISCELLANEOUS ,Mathematics ,Discrete mathematics ,Linear function (calculus) ,Spanning tree ,Applied Mathematics ,Multiplicative function ,Computer Science - Neural and Evolutionary Computing ,Function (mathematics) ,Linear function ,Computer Science Applications ,010201 computation theory & mathematics ,Theory of computation ,Euler's formula ,symbols ,020201 artificial intelligence & image processing ,Constant (mathematics) ,Algorithm - Abstract
In this work, we introduce multiplicative drift analysis as a suitable way to analyze the runtime of randomized search heuristics such as evolutionary algorithms. We give a multiplicative version of the classical drift theorem. This allows easier analyses in those settings where the optimization progress is roughly proportional to the current distance to the optimum. To display the strength of this tool, we regard the classical problem of how the (1+1) Evolutionary Algorithm optimizes an arbitrary linear pseudo-Boolean function. Here, we first give a relatively simple proof for the fact that any linear function is optimized in expected time $O(n \log n)$, where $n$ is the length of the bit string. Afterwards, we show that in fact any such function is optimized in expected time at most $(1+o(1))\, 1.39\, e\, n \ln n$, again using multiplicative drift analysis. We also prove a corresponding lower bound of $(1-o(1))\, e\, n \ln n$, which actually holds for all functions with a unique global optimum. We further demonstrate how our drift theorem immediately gives natural proofs (with better constants) for the best known runtime bounds for the (1+1) Evolutionary Algorithm on combinatorial problems like finding minimum spanning trees, shortest paths, or Euler tours. (Contains results from the authors' GECCO 2010 and CEC 2010 conference papers.)
- Published
- 2011
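The central tool of the paper above is usually cited in the following form; the statement is the standard one and is reproduced only for orientation, with X_t a non-negative potential, x_min its smallest positive value, and T the first point in time at which the potential reaches 0.

```latex
% Multiplicative drift theorem (standard form).
\text{If }\;
\mathbb{E}\bigl[X_t - X_{t+1} \mid X_t = s\bigr] \;\ge\; \delta\, s
\;\text{ for all } s > 0,
\qquad\text{then}\qquad
\mathbb{E}\bigl[T \mid X_0\bigr] \;\le\; \frac{1 + \ln\!\bigl(X_0 / x_{\min}\bigr)}{\delta}.
```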
14. Can quantum search accelerate evolutionary algorithms?
- Author
-
Piyush P. Kurur, Daniel Johannsen, and Johannes Lengler
- Subjects
Quantum sort ,Speedup ,Theoretical computer science ,Computer Science::Neural and Evolutionary Computation ,Mutation (genetic algorithm) ,Evolutionary algorithm ,Quantum phase estimation algorithm ,Quantum algorithm ,Quantum algorithm for linear systems of equations ,Algorithm ,Quantum computer ,Mathematics - Abstract
In this article, we formulate for the first time the notion of a quantum evolutionary algorithm. In fact, we define a quantum analogue for any elitist (1+1) randomized search heuristic. The quantum evolutionary algorithm, which we call the (1+1) quantum evolutionary algorithm (QEA), is the quantum version of the classical (1+1) evolutionary algorithm (EA), and runs only on a quantum computer. It uses Grover search [13] to accelerate the search for improved offspring. To understand the speedup of the (1+1) QEA over the (1+1) EA, we study the three well-known pseudo-Boolean optimization problems OneMax, LeadingOnes, and Discrepancy. We show that although there is a speedup in the case of OneMax and LeadingOnes in the quantum setting, the speedup is less than quadratic. For Discrepancy, we show that the speedup is at best constant. The reason for this behavior is the difference in the probability of making a successful mutation: if the probability of making a successful mutation is large, then quantum acceleration does not help much; if it is small, then quantum enhancement indeed helps.
- Published
- 2010
15. Edge-based representation beats vertex-based representation in shortest path problems
- Author
-
Daniel Johannsen and Benjamin Doerr
- Subjects
Combinatorics ,Widest path problem ,Discrete mathematics ,Euclidean shortest path ,Shortest Path Faster Algorithm ,Shortest path problem ,K shortest path routing ,Canadian traveller problem ,Longest path problem ,Distance ,Mathematics - Abstract
In this paper, we present a new representation for individuals in the single-source shortest path problem. Contrary to previous approaches, it has the natural property that different vertex degrees do not induce unfairness in the mutation step. In particular, at any time each edge has roughly the same probability of being added to or removed from the current individual. This turns out to be a crucial property. Mainly based on this, we prove superior bounds for the optimization time of two evolutionary algorithms for the single-source shortest path problem. For both the multi-criteria formulation of the problem (introduced by Scharnow, Tinnefeld and Wegener (2002, 2004)) and the single-criteria one (regarded in Baswana et al. (2009)), we improve the existing bounds by a factor of n^2/m, where m denotes the number of edges and n the number of vertices of the underlying graph. Given that most graphs found in practical applications are sparse, this is a considerable gain.
- Published
- 2010
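As a hedged reading of the fairness property claimed in the abstract (every edge equally likely to enter or leave the individual), a mutation that picks one edge uniformly at random from the whole graph and toggles its membership has exactly this property, independently of vertex degrees. The snippet below only illustrates that property and is not the paper's operator.

```python
import random

def toggle_edge_mutation(current: set, all_edges: list) -> set:
    """Pick one edge uniformly at random from the graph and toggle it:
    every edge has the same probability 1/|E| of being added or removed,
    regardless of the degrees of its endpoints."""
    edge = random.choice(all_edges)
    child = set(current)
    child.symmetric_difference_update({edge})
    return child

edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
print(toggle_edge_mutation({(0, 1)}, edges))
```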
16. Drift analysis and linear functions revisited
- Author
-
Benjamin Doerr, Carola Winzen, and Daniel Johannsen
- Subjects
Combinatorics ,Discrete mathematics ,symbols.namesake ,Evolutionary algorithm ,symbols ,Markov process ,Function (mathematics) ,Bit array ,Boolean function ,Random variable ,Upper and lower bounds ,Evolutionary computation ,Mathematics - Abstract
We regard the classical problem of how the (1+1) Evolutionary Algorithm optimizes an arbitrary linear pseudo-Boolean function. We show that any such function is optimized in time (1 + o(1)) 1.39 e n ln(n), where n is the length of the bit string. We also prove a lower bound of (1 − o(1)) e n ln(n), which in fact holds for all functions with a unique global optimum. This shows that for linear functions, even though the optimization behavior might differ, the resulting runtimes are very similar. Our experimental results suggest that the true optimization times are even closer than what the theoretical guarantees promise.
- Published
- 2010
17. Vertices of Degree k in Random Maps
- Author
-
Konstantinos Panagiotou and Daniel Johannsen
- Subjects
Combinatorics ,Mathematics - Published
- 2010
18. More Effective Crossover Operators for the All-Pairs Shortest Path Problem
- Author
-
Timo Kötzing, Frank Neumann, Daniel Johannsen, Benjamin Doerr, and Madeleine Theile
- Subjects
Combinatorics ,Speedup ,Shortest path problem ,Crossover ,Evolutionary algorithm ,Binary logarithm ,Selection (genetic algorithm) ,Mathematics - Abstract
The all-pairs shortest path problem is the first non-artificial problem for which it was shown that adding crossover can significantly speed up a mutation-only evolutionary algorithm. Recently, the analysis of this algorithm was refined and it was shown to have an expected optimization time of Θ(n^{3.25}(log n)^{0.25}). In this work, we study two variants of the algorithm. These are based on two central concepts in recombination, repair mechanisms and parent selection. We show that repairing infeasible offspring leads to an improved expected optimization time of O(n^{3.2}(log n)^{0.2}). Furthermore, we prove that choosing parents that guarantee feasible offspring results in an optimization time of O(n^3 log n).
- Published
- 2010
19. Solving SAT for CNF Formulas with a One-Sided Restriction on Variable Occurrences
- Author
-
Magnus Wahlström, Daniel Johannsen, and Igor Razgon
- Subjects
Discrete mathematics ,Combinatorics ,Class (set theory) ,Generalization ,True quantified Boolean formula ,Conjunctive normal form ,Constant (mathematics) ,Satisfiability ,Complement (set theory) ,Mathematics ,Variable (mathematics) - Abstract
In this paper we consider the class of Boolean formulas in Conjunctive Normal Form (CNF) where for each variable all but at most d occurrences are either positive or negative. This class is a generalization of the class of CNF formulas with at most d occurrences (positive and negative) of each variable, which was studied in [Wahlstrom, 2005]. Applying complement search [Purdom, 1984], we show that for every d there exists a constant $\gamma_d$ such that satisfiability of a CNF formula on n variables can be checked in runtime $O(\gamma_d^n)$ if all but at most d occurrences of each variable are either positive or negative. We thoroughly analyze the proposed branching strategy and determine the asymptotic growth constant $\gamma_d$ more precisely. Finally, we show that the trivial $O(2^n)$ barrier of satisfiability checking can be broken even for a more general class of formulas, namely formulas where the positive or negative literals of every variable have what we will call a d-covering. To the best of our knowledge, for the considered classes of formulas there are no previous non-trivial upper bounds on the complexity of satisfiability checking.
- Published
- 2009
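To make the class definition above concrete, the small check below computes, for every variable of a CNF formula (a list of clauses, each a list of signed integers in DIMACS style), the number of occurrences of its minority polarity and reports whether all but at most d occurrences of every variable share one sign. The branching algorithm itself is not reproduced; names are ad hoc.

```python
from collections import defaultdict

def has_one_sided_restriction(clauses: list[list[int]], d: int) -> bool:
    """True iff, for every variable, all but at most d of its occurrences
    are of the same polarity (positive or negative)."""
    pos = defaultdict(int)
    neg = defaultdict(int)
    for clause in clauses:
        for literal in clause:
            if literal > 0:
                pos[abs(literal)] += 1
            else:
                neg[abs(literal)] += 1
    variables = set(pos) | set(neg)
    return all(min(pos[v], neg[v]) <= d for v in variables)

# Example: every variable has at most one minority occurrence.
cnf = [[1, 2], [1, -2], [1, -3], [-1, 3]]
print(has_one_sided_restriction(cnf, d=1))   # True
print(has_one_sided_restriction(cnf, d=0))   # False
```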
20. Theory and Applications of Satisfiability Testing - SAT 2009
- Author
-
Magnus Wahlström, Daniel Johannsen, and Igor Razgon
- Subjects
Algebra ,Discrete mathematics ,One sided ,Mathematics ,Variable (mathematics) - Published
- 2009
21. How Single Ant ACO Systems Optimize Pseudo-Boolean Functions
- Author
-
Benjamin Doerr, Daniel Johannsen, and Ching Hoo Tang
- Subjects
Computer science ,Stochastic process ,business.industry ,Evolutionary algorithm ,Artificial intelligence ,Boolean function ,Tracking (particle physics) ,business ,ANT - Abstract
We undertake a rigorous experimental analysis of the optimization behavior of the two most studied single ant ACO systems on several pseudo-Boolean functions. By tracking the behavior of the underlying random processes rather than just regarding the resulting optimization time, we gain additional insight into these systems. A main finding is that in those cases where the single ant ACO system performs well, it basically simulates the much simpler (1+1) evolutionary algorithm.
- Published
- 2008
22. Refined runtime analysis of a basic ant colony optimization algorithm
- Author
-
Daniel Johannsen and Benjamin Doerr
- Subjects
Polynomial ,Mathematical optimization ,Optimization problem ,Meta-optimization ,Computer science ,Ant colony optimization algorithms ,Computer Science::Neural and Evolutionary Computation ,Translation (geometry) ,ComputingMethodologies_ARTIFICIALINTELLIGENCE ,Exponential function ,Factor (programming language) ,Pheromone ,computer ,Metaheuristic ,Algorithm ,computer.programming_language - Abstract
Neumann and Witt (2006) analyzed the runtime of the basic ant colony optimization (ACO) algorithm 1-Ant on pseudo-Boolean optimization problems. For the problem OneMax they showed how the runtime depends on the evaporation factor. In particular, they proved a phase transition from exponential to polynomial runtime. In this work, we simplify the view on this problem by an appropriate translation of the pheromone model. This results in a profound simplification of the pheromone update rule and, by that, a refinement of the results of Neumann and Witt. In particular, we show how the exponential runtime bound gradually changes to a polynomial bound inside the phase of transition.
- Published
- 2007
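A minimal sketch of the kind of single-ant system analysed above, on OneMax: each bit has a pheromone value in [1/n, 1 - 1/n], a solution is sampled bit-wise from these values, and after an accepted solution the pheromones are shifted towards it with evaporation factor rho. The concrete update rule and parameter choices here are illustrative assumptions, not necessarily the 1-Ant variant of Neumann and Witt.

```python
import random

def one_ant_onemax(n: int, rho: float, max_iters: int = 200_000) -> int:
    """Single-ant ACO on OneMax with a best-so-far pheromone update."""
    lo, hi = 1.0 / n, 1.0 - 1.0 / n
    tau = [0.5] * n                       # one pheromone value per bit
    best_x, best_f = None, -1
    for t in range(max_iters):
        x = [1 if random.random() < tau[i] else 0 for i in range(n)]
        f = sum(x)
        if f >= best_f:                   # accept if not worse than best so far
            best_x, best_f = x, f
            # evaporate and reinforce towards the accepted solution
            tau = [min(hi, max(lo, (1 - rho) * tau[i] + rho * best_x[i]))
                   for i in range(n)]
        if best_f == n:
            return t + 1
    return max_iters

print("iterations until optimum:", one_ant_onemax(n=50, rho=0.5))
```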
23. Adjacency list matchings
- Author
-
Daniel Johannsen and Benjamin Doerr
- Subjects
Combinatorics ,Discrete mathematics ,symbols.namesake ,Mutation operator ,Asymptotically optimal algorithm ,Randomized search ,symbols ,Evolutionary algorithm ,Adjacency list ,Eulerian path ,Coupon collector's problem ,Graph ,Mathematics - Abstract
We propose and analyze a novel genotype to represent walk and cycle covers in graphs, namely matchings in the adjacency lists. This representation admits the natural mutation operator of adding a random match and possibly also matching the former partners. To demonstrate the strength of this set-up, we use it to build a simple (1+1) evolutionary algorithm for the problem of finding an Eulerian cycle in a graph. We analyze several natural variants that stem from different ways to randomly choose the new match. Among other insights, we exhibit a (1+1) evolutionary algorithm that computes an Euler tour in a graph with m edges in expected optimization time Θ(m log m). This significantly improves the previous best evolutionary solution having expected optimization time Θ(m^2 log m) in the worst case, but also compares nicely with the runtime of an optimal classical algorithm, which is of order Θ(m). A simple coupon collector argument indicates that our optimization time is asymptotically optimal for any randomized search heuristic.
- Published
- 2007
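The coupon collector argument mentioned at the end of the abstract is the standard one, recalled below in its textbook form (this is background, not the paper's specific lower-bound proof).

```latex
% Coupon collector's problem: drawing uniformly with replacement from m
% distinct items, the expected number of draws until every item has
% appeared at least once is
\mathbb{E}[T] \;=\; m \sum_{i=1}^{m} \frac{1}{i} \;=\; m \ln m + O(m).
```

Roughly speaking, this is the source of a Θ(m log m) barrier for heuristics that acquire the required m matches by uniform sampling.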
24. Rigorous analyses of fitness-proportional selection for optimizing linear functions
- Author
-
Daniel Johannsen, Frank Neumann, Christian Klein, and Edda Happ
- Subjects
Mathematical optimization ,business.industry ,Fitness approximation ,Fitness proportionate selection ,Evolutionary algorithm ,Local search (optimization) ,Function (mathematics) ,business ,Time complexity ,Selection (genetic algorithm) ,Mathematics ,Exponential function - Abstract
Rigorous runtime analyses of evolutionary algorithms (EAs) mainly investigate algorithms that use elitist selection methods. Two commonly studied algorithms are Randomized Local Search (RLS) and the (1+1) EA, and it is well known that both optimize any linear pseudo-Boolean function on n bits within an expected number of O(n log n) fitness evaluations. In this paper, we analyze variants of these algorithms that use fitness-proportional selection. A well-known method for analyzing the local changes in the solutions of RLS is a reduction to the gambler's ruin problem. We extend this method in order to analyze the global changes imposed by the (1+1) EA. By applying this new technique we show that with high probability, using fitness-proportional selection leads to an exponential optimization time for any linear pseudo-Boolean function with non-zero weights. Even worse, during an exponential number of fitness evaluations, all solutions of the algorithms differ with high probability in linearly many bits from the optimal solution. Our theoretical studies are complemented by experimental investigations which confirm the asymptotic results on realistic input sizes.
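For concreteness, fitness-proportional (roulette-wheel) selection as commonly implemented is sketched below; this is the generic textbook operator with ad hoc names, not the exact algorithm variants analysed in the paper.

```python
import random

def fitness_proportional_selection(population, fitness):
    """Pick one individual with probability proportional to its fitness
    (all fitness values are assumed to be positive)."""
    values = [fitness(individual) for individual in population]
    total = sum(values)
    r = random.uniform(0.0, total)
    acc = 0.0
    for individual, value in zip(population, values):
        acc += value
        if r <= acc:
            return individual
    return population[-1]   # guard against floating-point rounding

# Example with bit strings and a OneMax-style fitness
pop = [[1, 1, 0], [0, 0, 1], [1, 1, 1]]
print(fitness_proportional_selection(pop, fitness=sum))
```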
25. Counting defective parking functions
- Author
-
Daniel Johannsen, Pascal Schweitzer, Thomas Prellberg, and Peter J. Cameron
- Subjects
Discrete mathematics ,Sequence ,Recurrence relation ,Binomial (polynomial) ,Rayleigh distribution ,Applied Mathematics ,Generating function ,Asymptotic distribution ,Function (mathematics) ,Space (mathematics) ,Theoretical Computer Science ,Combinatorics ,Computational Theory and Mathematics ,05A15, 05A16 ,FOS: Mathematics ,Mathematics - Combinatorics ,Discrete Mathematics and Combinatorics ,Combinatorics (math.CO) ,Geometry and Topology ,Mathematics - Abstract
Suppose that $n$ drivers each choose a preferred parking space in a linear car park with $m$ spaces. Each driver goes to the chosen space and parks there if it is free, and otherwise takes the first available space with larger number (if any). If all drivers park successfully, the sequence of choices is called a parking function. In general, if $k$ drivers fail to park, we have a defective parking function of defect $k$. Let $\mathrm{cp}(n,m,k)$ be the number of such functions. In this paper, we establish a recurrence relation for the numbers $\mathrm{cp}(n,m,k)$, and express this as an equation for a three-variable generating function. We solve this equation using the kernel method, and extract the coefficients explicitly: it turns out that the cumulative totals are partial sums in Abel's binomial identity. Finally, we compute the asymptotics of $\mathrm{cp}(n,m,k)$. In particular, for the case $m=n$, if choices are made independently at random, the limiting distribution of the defect (the number of drivers who fail to park), scaled by the square root of $n$, is the Rayleigh distribution. On the other hand, in the case $m=\omega(n)$, the probability that all spaces are occupied tends asymptotically to one.
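The defect statistic is easy to simulate directly, which gives a quick empirical check of the Rayleigh-limit statement in the abstract. The code below is only such a simulation with ad hoc names and parameters, not the exact enumeration via the recurrence or the kernel method.

```python
import random

def parking_defect(preferences: list[int], m: int) -> int:
    """Simulate the linear car park: each driver goes to its preferred space
    (1-based) and otherwise takes the next free space with a larger number.
    Returns the number of drivers who fail to park (the defect)."""
    free = [True] * (m + 1)          # index 0 unused
    defect = 0
    for p in preferences:
        spot = p
        while spot <= m and not free[spot]:
            spot += 1
        if spot <= m:
            free[spot] = False
        else:
            defect += 1
    return defect

# Empirical mean defect for m = n with uniformly random preferences;
# the abstract says that defect / sqrt(n) has a Rayleigh limit.
n, trials = 2000, 100
mean = sum(parking_defect([random.randint(1, n) for _ in range(n)], n)
           for _ in range(trials)) / trials
print("mean defect:", mean, " mean defect / sqrt(n):", mean / n ** 0.5)
```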