52 results for "Johannes Lengler"
Search Results
2. Destructiveness of lexicographic parsimony pressure and alleviation by a concatenation crossover in genetic programming
- Author
-
Anna Melnichenko, Timo Kötzing, Johannes Lengler, and J. A. Gregor Lagodzinski
- Subjects
FOS: Computer and information sciences ,Theoretical computer science ,General Computer Science ,Concatenation ,Crossover ,Computer Science - Neural and Evolutionary Computing ,Hasso-Plattner-Institut für Digital Engineering gGmbH ,Genetic programming ,0102 computer and information sciences ,02 engineering and technology ,Lexicographical order ,01 natural sciences ,Evolutionary computation ,Theoretical Computer Science ,Tree (data structure) ,010201 computation theory & mathematics ,Computer Science - Data Structures and Algorithms ,Mutation (genetic algorithm) ,ddc:000 ,0202 electrical engineering, electronic engineering, information engineering ,Data Structures and Algorithms (cs.DS) ,020201 artificial intelligence & image processing ,Neural and Evolutionary Computing (cs.NE) ,Local search (constraint satisfaction) ,Mathematics - Abstract
Two specifics distinguish GP from many other areas of evolutionary computation in theoretical analyses. First, the variable-size representations, which in particular may lead to bloat (i.e., the growth of individuals with redundant parts). Second, the role and realization of crossover, which is particularly central in GP due to the tree-based representation. Whereas some theoretical work on GP has studied the effects of bloat, crossover has had a surprisingly small share in this work. We analyze a simple crossover operator in combination with local search, where a preference for small solutions minimizes bloat (lexicographic parsimony pressure); the resulting algorithm is denoted Concatenation Crossover GP. For this purpose, three variants of the well-studied MAJORITY test function with large plateaus are considered. We show that the Concatenation Crossover GP can efficiently optimize these test functions, while local search cannot be efficient for all three variants independent of employing bloat control., Comment: to appear in PPSN 2018
- Published
- 2020
- Full Text
- View/download PDF
3. The impact of lexicographic parsimony pressure for ORDER/MAJORITY on the run time
- Author
-
Johannes Lengler, J. A. Gregor Lagodzinski, Benjamin Doerr, and Timo Kötzing
- Subjects
Discrete mathematics ,Decision variables ,Optimization problem ,General Computer Science ,Benchmark (computing) ,init ,Order (group theory) ,Genetic programming ,Lexicographical order ,Representation (mathematics) ,Theoretical Computer Science ,Mathematics - Abstract
While many optimization problems work with a fixed number of decision variables and thus a fixed-length representation of possible solutions, genetic programming (GP) works on variable-length representations. A naturally occurring problem is that of bloat, that is, the unnecessary growth of solution lengths, which may slow down the optimization process. So far, mathematical runtime analysis could not deal well with bloat and required explicit assumptions limiting it. In this paper, we provide the first mathematical runtime analysis of a GP algorithm that does not require any assumptions on the bloat. Previous performance guarantees were only proven conditionally for runs in which no strong bloat occurs. Together with improved analyses for the case with bloat restrictions, our results show that such assumptions on the bloat are not necessary and that the algorithm is efficient without explicit bloat control mechanisms. More specifically, we analyze the performance of the (1+1) GP on the two benchmark functions Order and Majority. When using lexicographic parsimony pressure as bloat control, we show a tight runtime estimate of O(T_init + n log n) iterations both for Order and Majority. For the case without bloat control, the bounds O(T_init log T_init + n (log n)^3) and Ω(T_init + n log n) (and Ω(T_init log T_init) for n = 1) hold for Majority.
- Published
- 2020
- Full Text
- View/download PDF
4. Random Sampling with Removal
- Author
-
Kenneth L. Clarkson, Johannes Lengler, Bernd Gärtner, and May Szedlák
- Subjects
Computational Geometry (cs.CG) ,FOS: Computer and information sciences ,Discrete mathematics ,050101 languages & linguistics ,05 social sciences ,Dimension (graph theory) ,Constrained optimization ,Sample (statistics) ,02 engineering and technology ,Expected value ,Upper and lower bounds ,Theoretical Computer Science ,Randomized algorithm ,Combinatorics ,Computational Theory and Mathematics ,Convex optimization ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science - Computational Geometry ,Discrete Mathematics and Combinatorics ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,Fraction (mathematics) ,Geometry and Topology ,Mathematics - Abstract
We study randomized algorithms for constrained optimization, in abstract frameworks that include, in strictly increasing generality: convex programming; LP-type problems; violator spaces; and a setting we introduce, consistent spaces. Such algorithms typically involve a step of finding the optimal solution for a random sample of the constraints. They exploit the condition that, in finite dimension δ, this sample optimum violates a provably small expected fraction of the non-sampled constraints, with the fraction decreasing in the sample size r. We extend such algorithms by considering the technique of removal, where a fixed number k of constraints are removed from the sample according to a fixed rule, with the goal of improving the solution quality. This may have the effect of increasing the number of violated non-sampled constraints. We study this increase, and bound it in a variety of general settings. This work is motivated by, and extends, results on removal as proposed for chance-constrained optimization. For many relevant values of r, δ, and k, we prove matching upper and lower bounds for the expected number of constraints violated by a random sample, after the removal of k constraints. For a large range of values of k, the new upper bounds improve the previously best bounds for LP-type problems, which moreover had only been known in special cases, and not in the generality we consider. Moreover, we show that our results extend from finite to infinite spaces, for chance-constrained optimization.
- Published
- 2020
- Full Text
- View/download PDF
5. Two-Dimensional Drift Analysis
- Author
-
Duri Janett and Johannes Lengler
- Published
- 2022
- Full Text
- View/download PDF
6. Self-adjusting Population Sizes for the (1,λ)-EA on Monotone Functions
- Author
-
Marc Kaufmann, Maxime Larcher, Johannes Lengler, and Xun Zou
- Published
- 2022
- Full Text
- View/download PDF
7. Two-Dimensional Drift Analysis: Optimizing Two Functions Simultaneously Can Be Hard
- Author
-
Duri Janett and Johannes Lengler
- Subjects
FOS: Computer and information sciences ,History ,Polymers and Plastics ,Probability (math.PR) ,68W50 ,FOS: Mathematics ,Computer Science - Neural and Evolutionary Computing ,Neural and Evolutionary Computing (cs.NE) ,Business and International Management ,Industrial and Manufacturing Engineering ,Mathematics - Probability - Abstract
In this paper we show how to use drift analysis in the case of two random variables $X_1, X_2$, when the drift is approximately given by $A\cdot (X_1,X_2)^T$ for a matrix $A$. The non-trivial case is that $X_1$ and $X_2$ impede each other's progress, and we give a full characterization of this case. As an application, we develop and analyze a minimal example, TwoLinear, of a dynamic environment that can be hard. The environment consists of two linear functions $f_1$ and $f_2$ with positive weights $1$ and $n$, and in each generation selection is based on one of them at random. They differ only in the set of positions that have weight $1$ and weight $n$. We show that the $(1+1)$-EA with mutation rate $\chi/n$ is efficient for small $\chi$ on TwoLinear, but does not find the shared optimum in polynomial time for large $\chi$., Comment: Version 1 contained a bug in the proof of Theorem 2b, so this proof has substantially changed. Moreover, the introduction and discussion sections have been substantially revised
- Published
- 2022
- Full Text
- View/download PDF
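The TwoLinear environment described in the abstract above is simple enough to simulate directly. The sketch below is an illustrative reconstruction, not the authors' code: it assumes a concrete split of the weight-n positions (first half for f1, second half for f2), which the abstract leaves abstract.

```python
import random

def two_linear_value(x, heavy, n):
    """Linear function with weight n on positions in `heavy`, weight 1 elsewhere."""
    return sum(n if i in heavy else 1 for i, bit in enumerate(x) if bit)

def one_plus_one_ea_twolinear(n, chi, max_gens, seed=0):
    """(1+1)-EA with mutation rate chi/n in a TwoLinear-style dynamic
    environment: in each generation, selection uses one of two linear
    functions (weights 1 and n) chosen uniformly at random.  The split
    of the heavy positions into the two halves is an assumption of
    this sketch."""
    rng = random.Random(seed)
    heavy1 = set(range(n // 2))       # weight-n positions of f1
    heavy2 = set(range(n // 2, n))    # weight-n positions of f2
    x = [rng.randint(0, 1) for _ in range(n)]
    for gen in range(1, max_gens + 1):
        if all(x):
            return gen  # reached the shared optimum, the all-ones string
        y = [bit ^ (rng.random() < chi / n) for bit in x]
        heavy = heavy1 if rng.random() < 0.5 else heavy2
        if two_linear_value(y, heavy, n) >= two_linear_value(x, heavy, n):
            x = y
    return None  # optimum not found within the budget
```

For small mutation parameters (e.g. chi = 1) the algorithm reaches the optimum quickly, in line with the efficiency result quoted in the abstract; the hardness phenomenon for large chi only shows at larger problem sizes.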
8. Penalising transmission to hubs in scale-free spatial random graphs
- Author
-
John Lapinskas, Júlia Komjáthy, and Johannes Lengler
- Subjects
Statistics and Probability ,Random graph ,Transmission (telecommunications) ,Scale (ratio) ,Statistics, Probability and Uncertainty ,Topology ,Mathematics - Published
- 2021
- Full Text
- View/download PDF
9. Editorial
- Author
-
Johannes Lengler and Frank Neumann
- Subjects
General Computer Science ,Applied Mathematics ,Computer Science Applications - Published
- 2022
- Full Text
- View/download PDF
10. Geometric inhomogeneous random graphs
- Author
-
Ralph Keusch, Johannes Lengler, and Karl Bringmann
- Subjects
Random graph ,Discrete mathematics ,General Computer Science ,Sublinear function ,Generalization ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Giant component ,Theoretical Computer Science ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Constant (mathematics) ,Cluster analysis ,Focus (optics) ,Time complexity ,Mathematics - Abstract
Real-world networks, like social networks or the internet infrastructure, have structural properties such as large clustering coefficients that can best be described in terms of an underlying geometry. This is why the focus of the literature on theoretical models for real-world networks shifted from classic models without geometry, such as Chung–Lu random graphs, to modern geometry-based models, such as hyperbolic random graphs. With this paper we contribute to the theoretical analysis of these modern, more realistic random graph models. Instead of studying hyperbolic random graphs directly, we use a generalization that we call geometric inhomogeneous random graphs (GIRGs). Since we ignore constant factors in the edge probabilities, GIRGs are technically simpler (specifically, we avoid hyperbolic cosines), while preserving the qualitative behavior of hyperbolic random graphs, and we suggest replacing hyperbolic random graphs by this new model in future theoretical studies. We prove the following fundamental structural and algorithmic results on GIRGs. (1) We provide a sampling algorithm that generates a random graph from our model in expected linear time, improving the best-known sampling algorithm for hyperbolic random graphs by a substantial factor O(n); (2) we establish that GIRGs have clustering coefficients in Ω(1); (3) we prove that GIRGs have small separators, i.e., it suffices to delete a sublinear number of edges to break the giant component into two large pieces; and (4) we show how to compress GIRGs using an expected linear number of bits.
- Published
- 2019
- Full Text
- View/download PDF
11. Asymptotically optimal amplifiers for the Moran process
- Author
-
Leslie Ann Goldberg, John Lapinskas, Pascal Pfister, Florian Meier, Konstantinos Panagiotou, and Johannes Lengler
- Subjects
FOS: Computer and information sciences ,Discrete Mathematics (cs.DM) ,General Computer Science ,Logarithm ,Extinction probability ,Population ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Theoretical Computer Science ,Combinatorics ,FOS: Mathematics ,0202 electrical engineering, electronic engineering, information engineering ,Moran process ,Mathematics - Combinatorics ,Quantitative Biology::Populations and Evolution ,Quantitative Biology - Populations and Evolution ,education ,Mathematics ,Social and Information Networks (cs.SI) ,education.field_of_study ,Markov chain ,Probability (math.PR) ,Populations and Evolution (q-bio.PE) ,Computer Science - Social and Information Networks ,Digraph ,Extremal graph theory ,Asymptotically optimal algorithm ,010201 computation theory & mathematics ,FOS: Biological sciences ,020201 artificial intelligence & image processing ,Combinatorics (math.CO) ,Mathematics - Probability ,Computer Science - Discrete Mathematics - Abstract
We study the Moran process as adapted by Lieberman, Hauert and Nowak. This is a model of an evolving population on a graph or digraph where certain individuals, called “mutants”, have fitness r and other individuals, called “non-mutants”, have fitness 1. We focus on the situation where the mutation is advantageous, in the sense that r > 1. A family of digraphs is said to be strongly amplifying if the extinction probability tends to 0 when the Moran process is run on digraphs in this family. The most amplifying known family of digraphs is the family of megastars of Galanis et al. We show that this family is optimal, up to logarithmic factors, since every strongly connected n-vertex digraph has extinction probability Ω(n^{-1/2}). Next, we show that there is an infinite family of undirected graphs, called dense incubators, whose extinction probability is O(n^{-1/3}). We show that this is optimal, up to constant factors. Finally, we introduce sparse incubators, for varying edge density, and show that the extinction probability of these graphs is O(n/m), where m is the number of edges. Again, we show that this is optimal, up to constant factors.
- Published
- 2019
- Full Text
- View/download PDF
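The Moran process described in the abstract above is easy to simulate. The sketch below estimates extinction probabilities empirically; it illustrates the basic process, not the paper's extremal constructions (megastars, incubators).

```python
import random

def moran_extinction_prob(adj, r, trials=200, seed=0):
    """Estimate the extinction probability of the Moran process on a
    graph given as an adjacency list `adj`: a single uniformly random
    initial mutant has fitness r, everyone else fitness 1; each step,
    an individual is chosen with probability proportional to fitness
    and its offspring replaces a uniformly random (out-)neighbour."""
    rng = random.Random(seed)
    n = len(adj)
    extinct = 0
    for _ in range(trials):
        mutants = {rng.randrange(n)}
        while 0 < len(mutants) < n:
            fitness = [r if v in mutants else 1.0 for v in range(n)]
            v = rng.choices(range(n), weights=fitness)[0]
            target = rng.choice(adj[v])
            if v in mutants:
                mutants.add(target)
            else:
                mutants.discard(target)
        if not mutants:
            extinct += 1
    return extinct / trials
```

On the complete graph the classic Moran fixation formula (1 - 1/r)/(1 - 1/r^n) applies, so for r = 2 the estimated extinction probability should be close to 1/2.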
12. Runtime Analysis of the (μ+1)-EA on the Dynamic BinVal Function
- Author
-
Johannes Lengler and Simone Riedi
- Subjects
Discrete mathematics ,Linear function (calculus) ,Arbitrarily large ,Fitness function ,Mutation (genetic algorithm) ,Evolutionary algorithm ,Order (ring theory) ,Function (mathematics) ,Mathematics ,Exponential function - Abstract
We study evolutionary algorithms in a dynamic setting, where for each generation a different fitness function is chosen, and selection is performed with respect to the current fitness function. Specifically, we consider Dynamic BinVal, in which the fitness function for each generation is given by the linear function BinVal, but in each generation the order of the bits is randomly permuted. For the \((1 + 1)\)-EA it was known that there is an efficiency threshold \(c_0\) for the mutation parameter, at which the runtime switches from quasilinear to exponential. Previous empirical evidence suggested that for larger population size \(\mu \), the threshold may increase. We prove rigorously that this is at least the case in an \(\varepsilon \)-neighborhood around the optimum: the threshold of the \((\mu + 1)\)-EA becomes arbitrarily large if \(\mu \) is chosen large enough.
- Published
- 2021
- Full Text
- View/download PDF
13. Large Population Sizes and Crossover Help in Dynamic Environments
- Author
-
Johannes Lengler and Jonas Meier
- Subjects
FOS: Computer and information sciences ,Computer Science - Neural and Evolutionary Computing ,Neural and Evolutionary Computing (cs.NE) ,Computer Science Applications - Abstract
Dynamic linear functions on the boolean hypercube are functions which assign a positive weight to each bit, but the weights change over time. Throughout optimization, these functions maintain the same global optimum, and never have deceptive local optima. Nevertheless, it was recently shown [Lengler, Schaller, FOCI 2019] that the $$(1+1)$$-Evolutionary Algorithm needs exponential time to find or approximate the optimum for some algorithm configurations. In this experimental paper, we study the effect of larger population sizes for Dynamic BinVal, the extreme form of dynamic linear functions. We find that moderately increased population sizes extend the range of efficient algorithm configurations, and that crossover boosts this positive effect substantially. Remarkably, similar to the static setting of monotone functions in [Lengler, Zou, FOGA 2019], the hardest region of optimization for the $$(\mu +1)$$-EA is not close to the optimum, but far away from it. In contrast, for the $$(\mu +1)$$-GA, the region around the optimum is the hardest region in all studied cases.
- Published
- 2020
- Full Text
- View/download PDF
14. Large Population Sizes and Crossover Help in Dynamic Environments
- Author
-
Johannes Lengler and Jonas Meier
- Subjects
050101 languages & linguistics ,education.field_of_study ,05 social sciences ,Population ,Crossover ,Contrast (statistics) ,02 engineering and technology ,Exponential function ,Combinatorics ,Range (mathematics) ,Monotone polygon ,Local optimum ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,Hypercube ,education ,Mathematics - Abstract
Dynamic linear functions on the boolean hypercube are functions which assign a positive weight to each bit, but the weights change over time. Throughout optimization, these functions maintain the same global optimum, and never have deceptive local optima. Nevertheless, it was recently shown [Lengler, Schaller, FOCI 2019] that the \((1+1)\)-Evolutionary Algorithm needs exponential time to find or approximate the optimum for some algorithm configurations. In this experimental paper, we study the effect of larger population sizes for Dynamic BinVal, the extremal form of dynamic linear functions. We find that moderately increased population sizes extend the range of efficient algorithm configurations, and that crossover boosts this positive effect substantially. Remarkably, similar to the static setting of monotone functions in [Lengler, Zou, FOGA 2019], the hardest region of optimization for the \((\mu +1)\)-EA is not close to the optimum, but far away from it. In contrast, for the \((\mu +1)\)-GA, the region around the optimum is the hardest region in all studied cases (Extended Abstract. A full version is available on arXiv at [11]).
- Published
- 2020
- Full Text
- View/download PDF
15. Runtime analysis of the (mu+1)-EA on the Dynamic BinVal function
- Author
-
Johannes Lengler and Simone Riedi
- Subjects
FOS: Computer and information sciences ,Computer Science - Neural and Evolutionary Computing ,Neural and Evolutionary Computing (cs.NE) - Abstract
We study evolutionary algorithms in a dynamic setting, where for each generation a different fitness function is chosen, and selection is performed with respect to the current fitness function. Specifically, we consider Dynamic BinVal, in which the fitness function for each generation is given by the linear function BinVal, but in each generation the order of the bits is randomly permuted. For the (1+1)-EA it was known that there is an efficiency threshold $c_0$ for the mutation parameter, at which the runtime switches from quasilinear to exponential. Previous empirical evidence suggested that for larger population size $\mu$, the threshold may increase. We prove that this is at least the case in an $\varepsilon$-neighborhood around the optimum: the threshold of the $(\mu+1)$-EA becomes arbitrarily large if $\mu$ is chosen large enough. However, the most surprising result is obtained by a second-order analysis for $\mu=2$: the threshold INcreases with increasing proximity to the optimum. In particular, the hardest region for optimization is NOT around the optimum., Comment: Full journal version. The conference version appeared at Evostar 2021; the journal version was submitted to Springer Nature Computer Science (SNCS)
- Published
- 2020
- Full Text
- View/download PDF
16. On the origin of lognormal network synchrony in CA1
- Author
-
Hafsteinn Einarsson, Florian Meier, Asier Mujika, Marcelo Matheus Gauy, Johannes Lengler, Felix Weissenberger, and Angelika Steger
- Subjects
0301 basic medicine ,Distribution (number theory) ,Cognitive Neuroscience ,Models, Neurological ,Population ,Rodentia ,Membrane Potentials ,Normal distribution ,03 medical and health sciences ,0302 clinical medicine ,Biological neural network ,Animals ,Computer Simulation ,Statistical physics ,education ,CA1 Region, Hippocampal ,Event (probability theory) ,Neurons ,Physics ,education.field_of_study ,Models, Statistical ,CA3 Region, Hippocampal ,030104 developmental biology ,Sharp wave ripple ,Synapses ,Log-normal distribution ,Neuroscience ,030217 neurology & neurosurgery - Abstract
The sharp wave ripple complex in rodent hippocampus is associated with a network burst in CA3 (NB) that triggers a synchronous event in the CA1 population (SE). The number of CA1 pyramidal cells participating in an SE has been observed to follow a lognormal distribution. However, the origin of this skewed and heavy-tailed distribution of population synchrony in CA1 remains unknown. Because the size of SEs is likely to originate from the size of the NBs and the underlying neural circuitry, we model the CA3-CA1 circuit to study the underlying mechanisms and their functional implications. We show analytically that if the size of an NB in CA3 is distributed according to a normal distribution, then the size of the resulting SE in CA1 follows a lognormal distribution. Our model predicts the distribution of the NB size in CA3, which remains to be tested experimentally. Moreover, we show that a putative lognormal NB size distribution leads to an extremely heavy-tailed SE size distribution in CA1, contradicting experimental evidence. In conclusion, our model provides general insight into the origin of lognormally distributed network synchrony as a consequence of synchronous synaptic transmission of normally distributed input events.
- Published
- 2018
- Full Text
- View/download PDF
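The core mechanism in the abstract above, normally distributed burst sizes turning lognormal under exponential transfer, can be illustrated in a few lines. The constants below (mu, sigma, gain) are illustrative assumptions of this sketch, not values from the paper.

```python
import random, math

def simulate_se_sizes(n_events=10000, mu=100.0, sigma=10.0, gain=0.05, seed=0):
    """Toy illustration: if the CA3 network-burst (NB) size is normally
    distributed and the CA1 synchronous-event (SE) size depends
    exponentially on the NB size, the SE size is lognormal.  The
    constants mu, sigma and gain are arbitrary illustrative choices."""
    rng = random.Random(seed)
    se_sizes = []
    for _ in range(n_events):
        nb = rng.gauss(mu, sigma)  # normally distributed NB size in CA3
        se = math.exp(gain * nb)   # exponential transfer to CA1
        se_sizes.append(se)
    return se_sizes

def skewness(xs):
    """Sample skewness: third central moment over variance^(3/2)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5
```

The SE sizes come out strongly right-skewed, while their logarithms are (by construction) normally distributed with skewness near zero.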
17. Note on the coefficient of variation of neuronal spike trains
- Author
-
Angelika Steger and Johannes Lengler
- Subjects
Neurons ,0301 basic medicine ,Quantitative Biology::Neurons and Cognition ,General Computer Science ,Spike train ,Models, Neurological ,Probabilistic logic ,Complex system ,Action Potentials ,Random walk ,Poisson distribution ,03 medical and health sciences ,symbols.namesake ,030104 developmental biology ,0302 clinical medicine ,Synapses ,Statistics ,symbols ,Spike (software development) ,Statistical physics ,030217 neurology & neurosurgery ,Biotechnology ,Mathematics - Abstract
It is known that many neurons in the brain show spike trains with a coefficient of variation (CV) of the interspike times of approximately 1, thus resembling the properties of Poisson spike trains. Computational studies have been able to reproduce this phenomenon. However, the underlying models were too complex to be examined analytically. In this paper, we offer a simple model that shows the same effect but is accessible to an analytic treatment. The model is a random walk model with a reflecting barrier; we give explicit formulas for the CV in the regime of excess inhibition. We also analyze the effect of probabilistic synapses in our model and show that it resembles previous findings that were obtained by simulation.
- Published
- 2017
- Full Text
- View/download PDF
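A minimal version of the random-walk model with a reflecting barrier described in the abstract above can be simulated directly. The parameter values (p_up, threshold) are illustrative assumptions of this sketch, chosen to lie in the excess-inhibition regime (downward drift).

```python
import random, math

def interspike_intervals(p_up=0.45, threshold=15, n_spikes=1000, seed=0):
    """Random-walk model with a reflecting barrier: each time step the
    potential increases by 1 with probability p_up and otherwise
    decreases by 1, but never drops below 0.  A spike fires when the
    potential reaches `threshold`, after which it resets to 0.
    p_up < 1/2 corresponds to excess inhibition."""
    rng = random.Random(seed)
    intervals = []
    v, t, t_last = 0, 0, 0
    while len(intervals) < n_spikes:
        t += 1
        v = v + 1 if rng.random() < p_up else max(0, v - 1)
        if v >= threshold:
            intervals.append(t - t_last)
            t_last, v = t, 0
    return intervals

def cv(xs):
    """Coefficient of variation: standard deviation divided by mean."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs)) / m
```

In this regime, reaching the threshold is a rare event, so the interspike intervals are close to exponentially distributed and the CV comes out close to 1, matching the Poisson-like behaviour discussed in the abstract.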
18. The (1+1) Elitist Black-Box Complexity of LeadingOnes
- Author
-
Johannes Lengler and Carola Doerr
- Subjects
General Computer Science ,Unary operation ,Applied Mathematics ,Computation ,Evolutionary algorithm ,0102 computer and information sciences ,02 engineering and technology ,Arity ,Binary logarithm ,01 natural sciences ,Upper and lower bounds ,Computer Science Applications ,Combinatorics ,010201 computation theory & mathematics ,Theory of computation ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Heuristics ,Mathematics - Abstract
One important goal of black-box complexity theory is the development of complexity models that allow deriving meaningful lower bounds for whole classes of randomized search heuristics. Complementing classical runtime analysis, black-box models help us understand how algorithmic choices such as the population size, the variation operators, or the selection rules influence the optimization time. One example of such a result is the $$\varOmega (n \log n)$$ lower bound for unary unbiased algorithms on functions with a unique global optimum (Lehre and Witt in Algorithmica 64:623–642, 2012), which tells us that higher-arity operators or biased sampling strategies are needed when trying to beat this bound. For lack of analysis techniques, such non-trivial lower bounds are very rare in the existing literature on black-box optimization and therefore remain one of the main challenges in black-box complexity theory. With this paper we contribute to our technical toolbox for lower bound computations by proposing a new type of information-theoretic argument. We regard the permutation- and bit-invariant version of LeadingOnes and prove that its $$(1+1)$$ elitist black-box complexity is $$\varOmega (n^2)$$, a bound that is matched by $$(1+1)$$-type evolutionary algorithms. The $$(1+1)$$ elitist complexity of LeadingOnes is thus considerably larger than its unrestricted one, which is known to be of order $$n\log \log n$$ (Afshani et al. in Lecture notes in computer science, vol 8066, pp 1–11. Springer, New York, 2013). The $$\varOmega (n^2)$$ lower bound does not rely on the fact that elitist black-box algorithms are not allowed to make use of absolute fitness values. In contrast, we show that even if absolute fitness values are revealed to the otherwise elitist algorithm, it cannot significantly profit from this additional information.
Our result thus shows that for LeadingOnes the memory-restriction, together with the selection requirement, has a substantial impact on the best possible performance.
- Published
- 2017
- Full Text
- View/download PDF
19. Drift Analysis
- Author
-
Johannes Lengler
- Published
- 2019
- Full Text
- View/download PDF
20. The (1 + 1)-EA with mutation rate (1 + ϵ)/n is efficient on monotone functions
- Author
-
Angelika Steger, Anders Martinsson, and Johannes Lengler
- Subjects
Polynomial ,Evolutionary algorithm ,Entropy compression ,Monotonic function ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Combinatorics ,Monotone polygon ,010201 computation theory & mathematics ,Algorithmics ,0202 electrical engineering, electronic engineering, information engineering ,Entropy (information theory) ,020201 artificial intelligence & image processing ,Time complexity ,Mathematics - Abstract
An important benchmark for evolutionary algorithms are (strictly) monotone functions. For the (1 + 1)-EA with mutation rate c/n, it was known that it can optimize any monotone function on n bits in time O(n log n) if c < 1. For c = 1 it was known that the runtime is always polynomial, but it was unclear whether it is quasilinear, and it was unclear whether c = 1 is the threshold at which the runtime jumps from polynomial to superpolynomial. We show that there exists a c0 > 1 such that for all 0 < c ≤ c0 the (1 + 1)-Evolutionary Algorithm with rate c/n finds the optimum in O(n log² n) steps in expectation. The proof is based on an adaptation of Moser's entropy compression argument. That is, we show that a long runtime would allow us to encode the random steps of the algorithm with fewer bits than their entropy. This short note summarizes the results that have been published as "When Does Hillclimbing Fail on Monotone Functions: An entropy compression argument" in the proceedings of the 16th Workshop on Analytic Algorithmics and Combinatorics (ANALCO), SIAM, 2019 [9].
- Published
- 2019
- Full Text
- View/download PDF
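The (1+1)-EA analyzed in the abstract above is only a few lines of code. The sketch below runs it on OneMax, the simplest strictly monotone function, purely as an illustration; the paper's result concerns all (strictly) monotone functions, including much harder ones that this sketch does not capture.

```python
import random

def one_plus_one_ea(f, n, c, max_steps, seed=0):
    """(1+1)-EA: flip each bit independently with probability c/n and
    accept the offspring if its fitness is at least as good."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for step in range(1, max_steps + 1):
        if all(x):
            return step  # reached the all-ones optimum
        y = [bit ^ (rng.random() < c / n) for bit in x]
        if f(y) >= f(x):
            x = y
    return None

# OneMax (the number of one-bits) is the simplest strictly monotone function.
onemax = sum
```

With a moderate rate such as c = 1.2 the optimum of OneMax is found in roughly O(n log n) steps.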
21. Self-Adjusting Mutation Rates with Provably Optimal Success Rules
- Author
-
Johannes Lengler, Carola Doerr, and Benjamin Doerr
- Subjects
FOS: Computer and information sciences ,Mathematical optimization ,Mutation rate ,General Computer Science ,Control (management) ,Evolutionary algorithm ,0102 computer and information sciences ,02 engineering and technology ,[INFO.INFO-NE]Computer Science [cs]/Neural and Evolutionary Computing [cs.NE] ,Self adjusting ,01 natural sciences ,Interpretation (model theory) ,Resampling ,0202 electrical engineering, electronic engineering, information engineering ,Literal (computer programming) ,Neural and Evolutionary Computing (cs.NE) ,Mathematics ,Applied Mathematics ,Computer Science - Neural and Evolutionary Computing ,Lower order ,Computer Science Applications ,Asymptotically optimal algorithm ,010201 computation theory & mathematics ,Theory of computation ,020201 artificial intelligence & image processing ,Algorithm - Abstract
The one-fifth success rule is one of the best-known and most widely accepted techniques for controlling the parameters of evolutionary algorithms. While it is often applied in the literal sense, a common interpretation sees the one-fifth success rule as a family of success-based update rules that are determined by an update strength $F$ and a success rate. We analyze in this work how the performance of the (1+1) Evolutionary Algorithm on LeadingOnes depends on these two hyper-parameters. Our main result shows that the best performance is obtained for small update strengths $F=1+o(1)$ and success rate $1/e$. We also prove that the running time obtained by this parameter setting is, apart from lower-order terms, the same as is achieved with the best fitness-dependent mutation rate. We show similar results for the resampling variant of the (1+1) Evolutionary Algorithm, which enforces flipping at least one bit per iteration., Comment: Conference version appeared at GECCO 2019. This full version appeared in Algorithmica (2021)
- Published
- 2019
- Full Text
- View/download PDF
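A success-based update rule of the kind analyzed in the abstract above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the failure update (exponent -1/(e-1), which makes the rate equilibrate where the success probability is 1/e, the paper's optimal success rate) and the caps on the mutation rate are assumptions of this sketch.

```python
import random, math

def leading_ones(x):
    """Number of leading one-bits."""
    count = 0
    for b in x:
        if not b:
            break
        count += 1
    return count

def self_adjusting_ea(n, F=1.05, max_steps=10**6, seed=0):
    """(1+1)-EA on LeadingOnes with a success-based update rule:
    multiply the mutation rate by F after a success and by
    F^(-1/(e-1)) after a failure, so the rate equilibrates where the
    success probability is 1/e.  Rate caps are sketch assumptions."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = leading_ones(x)
    rate = 1.0 / n
    for step in range(1, max_steps + 1):
        if fx == n:
            return step
        y = [bit ^ (rng.random() < rate) for bit in x]
        fy = leading_ones(y)
        if fy > fx:   # success: increase the rate
            rate = min(rate * F, 0.5)
        else:         # failure: decrease it more gently
            rate = max(rate * F ** (-1 / (math.e - 1)), 1.0 / n ** 2)
        if fy >= fx:  # elitist acceptance
            x, fx = y, fy
    return None
```

The equilibrium follows from setting the expected log-change of the rate to zero: p·ln F = (1-p)·ln F/(e-1) gives p = 1/e, the same balance argument behind the classical one-fifth rule.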
22. When Does Hillclimbing Fail on Monotone Functions: An entropy compression argument
- Author
-
Johannes Lengler, Anders Martinsson, and Angelika Steger
- Published
- 2019
- Full Text
- View/download PDF
23. The (1+1)-EA on Noisy Linear Functions with Random Positive Weights
- Author
-
Ulysse Schaller and Johannes Lengler
- Subjects
Fitness function ,Noise measurement ,Computer science ,Evolutionary algorithm ,0102 computer and information sciences ,02 engineering and technology ,State (functional analysis) ,Type (model theory) ,01 natural sciences ,Combinatorics ,Monotone polygon ,010201 computation theory & mathematics ,Mutation (genetic algorithm) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Constant (mathematics) - Abstract
We study the well-known black-box optimisation algorithm (1+1)-EA on a novel type of noise model. In our noise model, the fitness function is linear with positive weights, but the absolute values of the weights may fluctuate in each round. Thus in every state, the fitness function indicates that one-bits are preferred over zero-bits. In particular, hillclimbing heuristics should be able to find the optimum fast. We show that the (1+1)-EA indeed finds the optimum in time $O(n\log n)$ if the mutation parameter is $c/n$ for a constant $c < c_{0} := 1.59\ldots$ However, we also show that for $c > c_{0}$ the (1+1)-EA needs superpolynomial time to find the optimum. Thus the choice of the mutation parameter is critical even for optimisation tasks in which there is a clear path to the optimum. A similar threshold phenomenon has recently been shown for noise-free monotone fitness functions.
- Published
- 2018
- Full Text
- View/download PDF
24. Nearly-Tight Analysis for 2-Choice and 3-Majority Consensus Dynamics
- Author
-
Johannes Lengler and Mohsen Ghaffari
- Subjects
Polynomial ,Logarithm ,Computer science ,020206 networking & telecommunications ,0102 computer and information sciences ,02 engineering and technology ,Binary logarithm ,01 natural sciences ,Upper and lower bounds ,Combinatorics ,010201 computation theory & mathematics ,Distributed algorithm ,Bounded function ,Convergence (routing) ,Node (computer science) ,0202 electrical engineering, electronic engineering, information engineering - Abstract
We present improved analyses for two of the well-studied randomized dynamics of stabilizing consensus, namely 2-choice and 3-majority. The resulting bounds are tight up to logarithmic factors. In the latter case, this answers an open question of Becchetti et al. [SODA'16]. Consider a distributed system of n nodes, each initially holding an opinion in {1, 2, ..., k}. The system should converge to a setting where all (non-corrupted) nodes hold the same opinion. This consensus opinion should be valid, meaning that it should be among the initially supported opinions, and the (fast) convergence should happen even in the presence of a malicious adversary who can corrupt a bounded number of nodes per round and in particular modify their opinions. Two of the well-studied distributed algorithms for this problem work as follows: In both of these dynamics, each node gathers the opinion of three nodes and sets its new opinion equal to the majority of this set. In the 2-choice dynamics, the three nodes are the node itself and two random nodes, and ties are broken towards the node's own opinion. In the 3-majority dynamics, the three nodes are selected at random and ties are broken randomly. Becchetti et al. [SODA'16] showed that the 3-majority dynamics converges to consensus in $O((k^2\sqrt{\log n} + k\log n)(k+\log n))$ rounds, even in the presence of a limited adversary, for k bounded by some polynomial of n. We prove that, even with a stronger adversary, the convergence happens within $O(k\log n)$ rounds, both for 3-majority and 2-choice. For 3-majority, this bound is known to be optimal for $k = \tilde{O}(\sqrt{n})$. More generally, we prove that 3-majority always converges in $\tilde{O}(n^{2/3})$ time, improving on an $\tilde{O}(n^{3/4})$ upper bound of Berenbrink et al. [PODC'17].
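A direct simulation of the 3-majority dynamics described above (adversary-free, for illustration; the 2-choice variant would instead sample the node itself plus two random nodes and break ties towards the node's own opinion). Function names and parameters are illustrative.

```python
import random
from collections import Counter

def three_majority_round(opinions, rng):
    """One synchronous round of the 3-majority dynamics: every node samples
    three nodes uniformly at random (with replacement) and adopts the
    majority opinion of the sample, breaking ties uniformly at random."""
    n = len(opinions)
    new_opinions = []
    for _ in range(n):
        sample = [opinions[rng.randrange(n)] for _ in range(3)]
        counts = Counter(sample).most_common()
        best = counts[0][1]               # highest multiplicity in sample
        new_opinions.append(rng.choice([op for op, c in counts if c == best]))
    return new_opinions

def rounds_to_consensus(n=100, k=4, max_rounds=100_000, seed=0):
    """Run the dynamics until all nodes agree; return the round count."""
    rng = random.Random(seed)
    opinions = [rng.randrange(k) for _ in range(n)]
    for rnd in range(1, max_rounds + 1):
        opinions = three_majority_round(opinions, rng)
        if len(set(opinions)) == 1:       # consensus reached
            return rnd
    return None
```

Plotting the returned round counts against k for fixed n gives an empirical view of the O(k log n) convergence bound.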
- Published
- 2018
- Full Text
- View/download PDF
25. Medium step sizes are harmful for the compact genetic algorithm
- Author
-
Johannes Lengler, Dirk Sudholt, and Carsten Witt
- Subjects
Compact genetic algorithm (cga) ,Evolutionary algorithm ,Statistical model ,0102 computer and information sciences ,02 engineering and technology ,Function (mathematics) ,Running time analysis ,Evolutionary algorithms ,Binary logarithm ,01 natural sciences ,Upper and lower bounds ,Combinatorics ,Estimation-of-distribution algorithms ,Estimation of distribution algorithm ,010201 computation theory & mathematics ,Genetic algorithm ,0202 electrical engineering, electronic engineering, information engineering ,Theory ,020201 artificial intelligence & image processing ,Time complexity ,Mathematics - Abstract
We study the intricate dynamics of the Compact Genetic Algorithm (cGA) on OneMax, and how its performance depends on the step size 1/K, which determines how quickly decisions about promising bit values are fixed in the probabilistic model. It is known that cGA and UMDA, a related algorithm, run in expected time O(n log n) when the step size is just small enough (K = Θ(√n log n)) to avoid wrong decisions being fixed. UMDA also shows the same performance in a very different regime (equivalent to K = Θ(log n) in the cGA) with much larger step sizes, but for very different reasons: many wrong decisions are fixed initially, but then reverted efficiently. We show that step sizes in between these two optimal regimes are harmful as they yield larger runtimes: we prove a lower bound of Ω(K^{1/3}·n + n log n) for the cGA on OneMax for K = O(√n/log² n). For K = Ω(log³ n) the runtime increases with growing K before dropping again to O(K·√n + n log n) for K = Ω(√n log n). This suggests that the expected runtime for the cGA is a bimodal function in K with two very different optimal regions and worse performance in between.
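The cGA described above can be sketched as follows; the border values 1/n and 1 − 1/n and the function names are common illustrative choices, not the paper's exact pseudocode.

```python
import random

def cga_onemax(n, K, max_iters=2_000_000, seed=0):
    """Compact Genetic Algorithm (cGA) on OneMax with step size 1/K.
    Each iteration samples two solutions from the probabilistic model and
    shifts every marginal probability by 1/K towards the fitter sample
    wherever the two samples differ.  Marginals are clamped to
    [1/n, 1 - 1/n] (an assumed border-handling choice).  Returns the
    iteration at which the all-ones string is sampled, or None."""
    rng = random.Random(seed)
    p = [0.5] * n                          # probabilistic model
    for it in range(1, max_iters + 1):
        x = [rng.random() < pi for pi in p]
        y = [rng.random() < pi for pi in p]
        if sum(x) < sum(y):
            x, y = y, x                    # x is the fitter sample
        if all(x):
            return it
        for i in range(n):
            if x[i] != y[i]:               # update only where samples differ
                step = 1 / K if x[i] else -1 / K
                p[i] = min(1 - 1 / n, max(1 / n, p[i] + step))
    return None
```

Sweeping K for fixed n and recording the returned iteration counts is a simple way to observe the bimodal runtime behaviour empirically.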
- Published
- 2018
- Full Text
- View/download PDF
26. A General Dichotomy of Evolutionary Algorithms on Monotone Functions
- Author
-
Johannes Lengler
- Subjects
Combinatorics ,Physics ,Monotone polygon ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Monotonic function ,0102 computer and information sciences ,02 engineering and technology ,Expected value ,Lambda ,01 natural sciences - Abstract
It is known that the \((1 + 1)\)-EA with mutation rate c/n optimises every monotone function efficiently if \(c < 1\), and needs exponential time on some monotone functions (HotTopic functions) if \(c \ge c_0 = 2.13692..\). We study the same question for a large variety of algorithms, particularly for the \((1 + \lambda )\)-EA, \((\mu + 1)\)-EA, \((\mu + 1)\)-GA, their fast counterparts like the fast \((1 + 1)\)-EA, and for the \((1 + (\lambda ,\lambda ))\)-GA. We prove that all considered mutation-based algorithms show a similar dichotomy for HotTopic functions, or even for all monotone functions. For the \((1 + (\lambda ,\lambda ))\)-GA, this dichotomy is in the parameter \(c\gamma \), which is the expected number of bit flips in an individual after mutation and crossover, neglecting selection. For the fast algorithms, the dichotomy is in \(m_2/m_1\), where \(m_1\) and \(m_2\) are the first and second falling moments of the number of bit flips. Surprisingly, the range of efficient parameters is affected neither by the population size \(\mu \) nor by the offspring population size \(\lambda \).
- Published
- 2018
- Full Text
- View/download PDF
27. A General Dichotomy of Evolutionary Algorithms on Monotone Functions
- Author
-
Johannes Lengler
- Subjects
FOS: Computer and information sciences ,Crossover ,Computer Science - Neural and Evolutionary Computing ,Monotonic function ,02 engineering and technology ,Expected value ,Lambda ,Theoretical Computer Science ,Exponential function ,Combinatorics ,Monotone polygon ,Computational Theory and Mathematics ,Mutation (knot theory) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Neural and Evolutionary Computing (cs.NE) ,Variety (universal algebra) ,Software - Abstract
It is known that the (1 + 1)-EA with mutation rate $c/n$ optimizes every monotone function efficiently if $c<1$, and needs exponential time on some monotone functions (HotTopic functions) if $c\geq 2.2$. We study the same question for a large variety of algorithms, particularly for the $(1 + \lambda)$-EA, $(\mu + 1)$-EA, $(\mu + 1)$-GA, their “fast” counterparts, and for the $(1 + (\lambda,\lambda))$-GA. We find that all considered mutation-based algorithms show a similar dichotomy for HotTopic functions, or even for all monotone functions. For the $(1 + (\lambda,\lambda))$-GA, this dichotomy is in the parameter $c\gamma$, which is the expected number of bit flips in an individual after mutation and crossover, neglecting selection. For the fast algorithms, the dichotomy is in $m_{2}/m_{1}$, where $m_{1}$ and $m_{2}$ are the first and second falling moments of the number of bit flips. Surprisingly, the range of efficient parameters is affected neither by the population size $\mu$ nor by the offspring population size $\lambda$. The picture changes completely if crossover is allowed. The genetic algorithms $(\mu + 1)$-GA and $(\mu + 1)$-fGA are efficient for arbitrary mutation strengths if $\mu$ is large enough.
- Published
- 2018
- Full Text
- View/download PDF
28. Destructiveness of Lexicographic Parsimony Pressure and Alleviation by a Concatenation Crossover in Genetic Programming
- Author
-
J. A. Gregor Lagodzinski, Johannes Lengler, Timo Kötzing, and Anna Melnichenko
- Subjects
Theoretical computer science ,Concatenation ,Crossover ,Genetic programming ,0102 computer and information sciences ,02 engineering and technology ,Lexicographical order ,01 natural sciences ,Evolutionary computation ,Tree (data structure) ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Representation (mathematics) ,Realization (probability) ,Mathematics - Abstract
For theoretical analyses there are two specifics distinguishing GP from many other areas of evolutionary computation. First, the variable size representations, in particular yielding a possible bloat (i.e. the growth of individuals with redundant parts). Second, the role and realization of crossover, which is particularly central in GP due to the tree-based representation. Whereas some theoretical work on GP has studied the effects of bloat, crossover had a surprisingly little share in this work.
- Published
- 2018
- Full Text
- View/download PDF
29. Greedy Routing and the Algorithmic Small-World Phenomenon
- Author
-
Ralph Keusch, Johannes Lengler, Yannic Maus, Anisur Rahaman Molla, and Karl Bringmann
- Subjects
Routing protocol ,Random graph ,Static routing ,Theoretical computer science ,General Computer Science ,Computer Networks and Communications ,Computer science ,Equal-cost multi-path routing ,Applied Mathematics ,020206 networking & telecommunications ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Theoretical Computer Science ,Computational Theory and Mathematics ,Link-state routing protocol ,010201 computation theory & mathematics ,Multipath routing ,0202 electrical engineering, electronic engineering, information engineering ,Destination-Sequenced Distance Vector routing ,Greedy algorithm - Abstract
The algorithmic small-world phenomenon, empirically established by Milgram's letter forwarding experiments from the 60s, was theoretically explained by Kleinberg in 2000. However, from today's perspective his model has several severe shortcomings that limit the applicability to real-world networks. In order to give a more convincing explanation of the algorithmic small-world phenomenon, we study decentralized greedy routing in a more flexible random graph model (geometric inhomogeneous random graphs) which overcomes all previous shortcomings. Apart from exhibiting good properties in theory, it has also been extensively experimentally validated that this model reasonably captures real-world networks. In this model, the greedy routing protocol is purely distributed as each vertex only needs to know information about its direct neighbors. We prove that it succeeds with constant probability, and in case of success almost surely finds an almost shortest path of length Θ(log log n), where our bound is tight including the leading constant. Moreover, we study natural local patching methods which augment greedy routing by backtracking and which do not require any global knowledge. We show that such methods can ensure success probability 1 in an asymptotically tight number of steps. These results also address the question of Krioukov et al. whether there are efficient local routing protocols for the internet graph. There were promising experimental studies, but the question remained unsolved theoretically. Our results give for the first time a rigorous and analytical affirmative answer.
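The greedy routing protocol itself is simple to state in code: each vertex forwards the message to the neighbour that is geometrically closest to the target, using only local information. This is a hedged sketch; the paper's model is geometric inhomogeneous random graphs, while this helper works on any graph whose vertices carry coordinates, and all names are illustrative.

```python
import math

def greedy_route(adj, pos, s, t):
    """Greedy routing from s to t: repeatedly forward to the neighbour
    closest to the target.  adj maps a vertex to its neighbour list and
    pos maps a vertex to its coordinates.  Returns the path on success,
    or None if greedy gets stuck at a node with no closer neighbour."""
    def d(u, v):
        return math.dist(pos[u], pos[v])
    path = [s]
    cur = s
    while cur != t:
        nxt = min(adj[cur], key=lambda v: d(v, t), default=None)
        if nxt is None or d(nxt, t) >= d(cur, t):
            return None                    # greedy routing is stuck
        path.append(nxt)
        cur = nxt
    return path
```

The patching methods discussed in the abstract would replace the `return None` branch with local backtracking instead of giving up.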
- Published
- 2017
- Full Text
- View/download PDF
30. The Factor VIII Variant X5 Enhances Hemophilia A Gene Therapy Efficiency By Its Improved Secretion
- Author
-
Biao Dong, Hanspeter Rottensteiner, Bagirath Gangadharan, Franziska Horling, Friedrich Scheiflinger, Wenjing Cao, Maurus de la Rosa, Werner Hoellriegl, Johannes Lengler, Birgit M. Reipert, and Weidong Xiao
- Subjects
congenital, hereditary, and neonatal diseases and abnormalities ,animal diseases ,Immunogenicity ,Chinese hamster ovary cell ,Genetic enhancement ,Transgene ,Immunology ,Context (language use) ,Cell Biology ,Hematology ,Biology ,Pharmacology ,medicine.disease_cause ,Biochemistry ,law.invention ,Coagulation ,law ,hemic and lymphatic diseases ,medicine ,Recombinant DNA ,Adeno-associated virus - Abstract
Introduction. Adeno-associated virus (AAV)-based factor VIII (FVIII) gene therapy holds great promise to provide clinical benefit in patients with hemophilia A. However, very high doses are currently needed to achieve therapeutic factor levels and the durability appears to be limited to a couple of years. Vector efficiency could be improved by employing more potent liver-specific promoters, but this might come at the price of overstraining the cellular protein folding capacity, causing FVIII to misfold in the lumen of the Endoplasmic Reticulum (ER). This event would in turn activate the unfolded protein response, cause oxidative stress, and if not resolved may even induce cell death. Aims. The objective of the presented study was to test whether the B-domain deleted (BDD)-FVIII-X5 variant can overcome the secretion challenge of high level FVIII expression in the context of hepatic gene therapy. Methods. The human FVIII variant BDD-FVIII-X5 harboring 5 amino acid exchanges in the A1 domain was previously isolated in a screen aimed at identifying those residues in porcine FVIII that are critical for efficient secretion. BDD-FVIII and BDD-FVIII-X5 were produced in Chinese Hamster Ovary (CHO) cells and purified to apparent homogeneity using standard procedures. The preparations were assayed for total protein by UV absorbance at 280 nm and FVIII activity by a chromogenic assay. Both FVIII variants were vectorized using AAV8 and tested in the human liver cell line HepG2 and FVIII knockout mice (E17) at various doses. Resulting samples were assayed for FVIII chromogenic activity. The potential immunogenic risk was evaluated in three hemophilic mouse strains (E17, human FVIII transgenic, humanized HLA-DRB1*1501). Results. 
A characterization of purified recombinant Refacto-like BDD-FVIII and the corresponding X5 variant revealed similarity of the two proteins and their specific activities in particular, indicating that introduction of the 5 amino acids from porcine FVIII did not alter functionality of human BDD-FVIII. In vitro expression of BDD-FVIII-X5 in a human liver cell line resulted in substantially increased FVIII activity levels in the supernatant compared with the non-modified BDD-FVIII, commensurate with enhanced secretion of the X5 variant. Intravenous delivery of liver-targeted AAV8 vectors carrying the BDD-FVIII-X5 transgene achieved substantial increases in plasma coagulation activity over BDD-FVIII in FVIII-deficient mice, even when highly efficient codon-optimized F8 nucleotide sequences were employed. Evaluation of the immunogenicity of the BDD-FVIII-X5 variant by an immunological risk assessment did not reveal any increased immunogenic risk compared to BDD-FVIII. Conclusions: The fully active BDD-FVIII-X5 variant demonstrated improved secretion in vitro and in vivo, resulting in substantially higher FVIII levels in a hemophilia A mouse model. No signs of enhanced immunogenicity were noted in a comparative immunogenicity study. The results obtained warrant further exploration of the BDD-FVIII-X5 variant for a next generation hemophilia A gene therapy. Disclosures Horling: Baxalta Innovations GmbH, a Takeda company: Employment. Lengler: Baxalta Innovations GmbH, a Takeda company: Employment. Gangadharan: Baxalta Innovations GmbH, a Takeda company: Employment. De La Rosa: Baxalta Innovations GmbH, a Takeda company: Employment, Equity Ownership. Hoellriegl: Baxalta Innovations GmbH, a Takeda company: Employment, Equity Ownership. Reipert: Baxalta Innovations GmbH, a Takeda company: Employment, Equity Ownership. Scheiflinger: Baxalta Innovations GmbH, a Takeda company: Employment, Equity Ownership. Xiao: Ivygen: Other: Patent application on FVIII-X5 has been submitted. Rottensteiner: Baxalta Innovations GmbH, a Takeda company: Employment, Equity Ownership.
- Published
- 2019
- Full Text
- View/download PDF
31. Evolutionary Algorithms for Quantum Computers
- Author
-
Piyush P. Kurur, Johannes Lengler, and Daniel Johannsen
- Subjects
Theoretical computer science ,General Computer Science ,Computer science ,business.industry ,Applied Mathematics ,Best-first search ,Evolutionary computation ,Theory ,Quantum algorithm ,Runtime analysis ,Computer Science Applications ,Search algorithm ,Beam search ,Local search (optimization) ,Heuristics ,business ,Quantum computer - Abstract
In this article, we formulate and study quantum analogues of randomized search heuristics, which make use of Grover search (in Proceedings of the 28th Annual ACM Symposium on Theory of Computing, pp. 212–219. ACM, New York, 1996) to accelerate the search for improved offspring. We then specialize the above formulation to two specific search heuristics: Random Local Search and the (1+1) Evolutionary Algorithm. We call the resulting quantum versions of these search heuristics Quantum Local Search and the (1+1) Quantum Evolutionary Algorithm. We conduct a rigorous runtime analysis of these quantum search heuristics in the computation model of quantum algorithms, which, besides classical computation steps, also permits those unique to quantum computing devices. To this end, we study the six elementary pseudo-Boolean optimization problems OneMax, LeadingOnes, Discrepancy, Needle, Jump, and TinyTrap. It turns out that the advantage of the respective quantum search heuristic over its classical counterpart varies with the problem structure and ranges from no speedup at all for the problem Discrepancy to exponential speedup for the problem TinyTrap. We show that these runtime behaviors are closely linked to the probabilities of performing successful mutations in the classical algorithms., Algorithmica, 68 (1), ISSN:0178-4617, ISSN:1432-0541
- Published
- 2013
- Full Text
- View/download PDF
32. Black-box complexities of combinatorial problems
- Author
-
Johannes Lengler, Carola Winzen, Benjamin Doerr, Timo Kötzing, Max-Planck-Institut für Informatik (MPII), Max-Planck-Gesellschaft, Friedrich-Schiller-Universität = Friedrich Schiller University Jena [Jena, Germany], and Eidgenössische Technische Hochschule - Swiss Federal Institute of Technology [Zürich] (ETH Zürich)
- Subjects
FOS: Computer and information sciences ,Mathematical optimization ,Optimization problem ,General Computer Science ,Computer science ,Bidirectional search ,Quadratic assignment problem ,Evolutionary algorithm ,0102 computer and information sciences ,02 engineering and technology ,Minimum spanning tree ,01 natural sciences ,Theoretical Computer Science ,Search algorithm ,Genetic algorithm ,0202 electrical engineering, electronic engineering, information engineering ,Combinatorial search ,[INFO]Computer Science [cs] ,Neural and Evolutionary Computing (cs.NE) ,ComputingMilieux_MISCELLANEOUS ,Mathematics ,Computer Science - Neural and Evolutionary Computing ,010201 computation theory & mathematics ,Simulated annealing ,Shortest path problem ,Combinatorial optimization ,020201 artificial intelligence & image processing ,Computational problem ,Heuristics ,Computer Science(all) - Abstract
Black-box complexity is a complexity theoretic measure for how difficult a problem is to be optimized by a general purpose optimization algorithm. It is thus one of the few means of understanding which problems are tractable for genetic algorithms and other randomized search heuristics. Most previous work on black-box complexity is on artificial test functions. In this paper, we move a step forward and give a detailed analysis for the two combinatorial problems minimum spanning tree and single-source shortest paths. Besides giving interesting bounds for their black-box complexities, our work reveals that the choice of how to model the optimization problem is non-trivial here. This is particularly true where the search space does not consist of bit strings and where a reasonable definition of unbiasedness has to be agreed on., 26 pages. This is the full version of the one presented at the Genetic and Evolutionary Computation Conference (GECCO 2011)
- Published
- 2013
- Full Text
- View/download PDF
33. Introducing Elitist Black-Box Models: When Does Elitist Behavior Weaken the Performance of Evolutionary Algorithms?
- Author
-
Johannes Lengler, Carola Doerr, Centre National de la Recherche Scientifique (CNRS), Recherche Opérationnelle (RO), Laboratoire d'Informatique de Paris 6 (LIP6), Université Pierre et Marie Curie - Paris 6 (UPMC)-Centre National de la Recherche Scientifique (CNRS)-Université Pierre et Marie Curie - Paris 6 (UPMC)-Centre National de la Recherche Scientifique (CNRS), Department of Computer Science [ETH Zürich] (D-INFK), and Eidgenössische Technische Hochschule - Swiss Federal Institute of Technology [Zürich] (ETH Zürich)
- Subjects
FOS: Computer and information sciences ,Mathematical optimization ,Evolutionary algorithm ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Evolutionary computation ,Evolution, Molecular ,General Relativity and Quantum Cosmology ,Computer Science - Data Structures and Algorithms ,0202 electrical engineering, electronic engineering, information engineering ,[INFO]Computer Science [cs] ,Data Structures and Algorithms (cs.DS) ,Computer Simulation ,Neural and Evolutionary Computing (cs.NE) ,Mathematics ,Probability ,Black box (phreaking) ,Truncation selection ,Computer Science - Neural and Evolutionary Computing ,Base (topology) ,Computational Mathematics ,010201 computation theory & mathematics ,020201 artificial intelligence & image processing ,Heuristics ,Monte Carlo Method ,Algorithms - Abstract
Black-box complexity theory provides lower bounds for the runtime of black-box optimizers like evolutionary algorithms and serves as an inspiration for the design of new genetic algorithms. Several black-box models covering different classes of algorithms exist, each highlighting a different aspect of the algorithms under considerations. In this work we add to the existing black-box notions a new \emph{elitist black-box model}, in which algorithms are required to base all decisions solely on (a fixed number of) the best search points sampled so far. Our model combines features of the ranking-based and the memory-restricted black-box models with elitist selection. We provide several examples for which the elitist black-box complexity is exponentially larger than the respective complexities in all previous black-box models, thus showing that the elitist black-box complexity can be much closer to the runtime of typical evolutionary algorithms. We also introduce the concept of $p$-Monte Carlo black-box complexity, which measures the time it takes to optimize a problem with failure probability at most $p$. Even for small~$p$, the $p$-Monte Carlo black-box complexity of a function class $\mathcal F$ can be smaller by an exponential factor than its typically regarded Las Vegas complexity (which measures the \emph{expected} time it takes to optimize $\mathcal F$)., Comment: A short version of this work has been presented at the GECCO conference 2015 in Madrid, Spain. Available at http://dl.acm.org/citation.cfm?doid=2739480.2754654
- Published
- 2016
- Full Text
- View/download PDF
34. Randomness as a Building Block for Reproducibility in Local Cortical Networks
- Author
-
Johannes Lengler and Angelika Steger
- Subjects
Reproducibility ,Computer science ,Block (telecommunications) ,Algorithm ,Randomness - Published
- 2016
- Full Text
- View/download PDF
35. The global Cohen–Lenstra heuristic
- Author
-
Johannes Lengler
- Subjects
Class (set theory) ,Algebra and Number Theory ,Finite abelian group ,Group (mathematics) ,Heuristic ,Probability measure ,Statistic behavior of class groups of number fields ,Universal law ,Cohen–Lenstra measure ,Combinatorics ,Set (abstract data type) ,Abelian group ,Mathematics - Abstract
The Cohen–Lenstra heuristic is a universal principle that assigns to each group a probability that tells how often this group should occur “in nature”. The most important, but not the only, applications are sequences of class groups, which conjecturally behave like random sequences of groups with respect to the so-called Cohen–Lenstra probability measure. So far, it was only possible to define this probability measure for finite abelian p-groups. We prove that it is also possible to define an analogous probability measure on the set of all finite abelian groups when restricting to the σ-algebra on the set of all finite abelian groups that is generated by uniform properties.
- Published
- 2012
- Full Text
- View/download PDF
36. A combinatorial interpretation of the probabilities of p-groups in the Cohen–Lenstra measure
- Author
-
Johannes Lengler
- Subjects
Combinatorics ,Discrete mathematics ,Corollary ,Algebra and Number Theory ,Group (mathematics) ,Combinatorial interpretation ,Exponent ,Type (model theory) ,Abelian group ,Link (knot theory) ,Measure (mathematics) ,Mathematics - Abstract
In this paper, I will introduce a link between the volume of a finite abelian p-group in the Cohen–Lenstra measure and partitions of a certain type. These partitions will be classified by the output of an algorithm. As a corollary, I will give a formula (7.2) for the probability of a p-group to have a specific exponent.
- Published
- 2008
- Full Text
- View/download PDF
37. Bacteriophage-encoded toxins: the λ-holin protein causes caspase-independent non-apoptotic cell death of eukaryotic cells
- Author
-
Reinhard Klein, Wolfgang Gregor, Udo Bläsi, Brian Salmons, Chukwuma A. Agu, Walter H. Günzburg, Thomas Peterbauer, Christine Hohenadl, Johannes Lengler, and Franz Schilcher
- Subjects
Programmed cell death ,Poly ADP ribose polymerase ,Immunology ,Apoptosis ,Mitochondrion ,Biology ,Endoplasmic Reticulum ,Microbiology ,Membrane Potentials ,Necrosis ,Viral Proteins ,Cell Line, Tumor ,Virology ,Humans ,Endoplasmic reticulum ,Bacteriophage lambda ,Molecular biology ,Mitochondria ,Cell biology ,Eukaryotic Cells ,Cytoplasm ,Caspases ,Holin ,Cell fractionation ,HeLa Cells - Abstract
The bacteriophage-encoded holin proteins are known to promote bacterial cell lysis by forming lesions within the cytoplasmic membrane. Recently, we have shown that the bacteriophage lambda-holin protein also exerts cytotoxic activity in eukaryotic cells, accounting for reduced tumour growth in vivo. In order to elucidate the mechanisms of lambda-holin-induced mammalian cell death, detailed biochemical and morphological analyses were performed. Colocalization analyses by subcellular fractionation and organelle-specific fluorescence immunocytochemistry indicated the presence of the lambda-holin protein in the endoplasmic reticulum and in mitochondria. Functional studies using the mitochondria-specific fluorochrome JC-1 demonstrated a loss of mitochondrial transmembrane potential in response to lambda-holin expression. Morphologically, these cells exhibited unfragmented nuclei but severe cytoplasmic vacuolization representing signs of oncosis/necrosis rather than apoptosis. Consistently, Western blot analyses indicated neither an activation of effector caspases 3 and 7 nor cleavage of the respective substrate poly(ADP-ribose) polymerase (PARP) in an apoptosis-specific manner. These findings suggest that the lambda-holin protein mediates a caspase-independent non-apoptotic mode of cell death.
- Published
- 2007
- Full Text
- View/download PDF
38. The Interval Liar Game
- Author
-
Johannes Lengler, David Steurer, and Benjamin Doerr
- Subjects
Set (abstract data type) ,Combinatorics ,Log-log plot ,Applied Mathematics ,Perfect information ,Structure (category theory) ,Discrete Mathematics and Combinatorics ,Interval (graph theory) ,Binary logarithm ,Constant (mathematics) ,Upper and lower bounds ,Mathematics - Abstract
We consider the problem of communication in the presence of faulty transmissions. In contrast to the classical works in this area, we assume some structure on the times when the faults occur. A more realistic model seems to be that all faults occur in some small time interval. Like previous work, our problem can best be modelled as a two-player perfect information game, in which one player (“Paul”) has to guess a number x from {1, …, n} using Yes/No-questions, which the second player (“Carole”) has to answer truthfully apart from few lies. In our setting, all lies have to be in a consecutive set of k rounds. We show that, for large n, Paul needs roughly log n + log log n + k rounds to determine the number, which is only k more than in the case of just one single lie. Our main theorem (1.1) gives precise upper and lower bounds which differ only by a (small) constant.
- Published
- 2007
- Full Text
- View/download PDF
39. Fixed Budget Performance of the (1+1) EA on Linear Functions
- Author
-
Nicholas Spooner and Johannes Lengler
- Subjects
Mathematical optimization ,Current (mathematics) ,Fitness function ,Differential equation ,media_common.quotation_subject ,Evolutionary algorithm ,A priori and a posteriori ,Quality (business) ,Function (mathematics) ,Chebyshev filter ,Mathematics ,media_common - Abstract
We present a fixed budget analysis of the (1+1) evolutionary algorithm for general linear functions, considering both the quality of the solution after a predetermined 'budget' of fitness function evaluations (a priori) and the improvement in quality when the algorithm is given additional budget, given the quality of the current solution (a posteriori). Two methods are presented: one based on drift analysis, the other on the differential equation method and Chebyshev's inequality. While the first method is superior for general linear functions, the second can be more precise for specific functions and provides concentration guarantees. As an example, we provide tight a posteriori fixed budget results for the function OneMax.
- Published
- 2015
- Full Text
- View/download PDF
40. Normalization Phenomena in Asynchronous Networks
- Author
-
Amin Karbasi, Johannes Lengler, and Angelika Steger
- Subjects
Explosive behaviour ,Random graph ,Discrete mathematics ,Normalization (statistics) ,Diffusion process ,Asynchronous communication ,Time model ,Percolation process ,Transmission time ,Mathematics - Abstract
In this work we study a diffusion process in a network that consists of two types of vertices: inhibitory vertices, which obstruct the diffusion, and excitatory vertices, which facilitate it. We consider a continuous-time model in which every edge of the network draws its transmission time randomly. For such an asynchronous diffusion process it has recently been proven that in Erdős–Rényi random graphs a normalization phenomenon arises: whenever the diffusion starts from a large enough but still tiny set of active vertices, it only percolates to a certain level that depends only on the activation threshold and the ratio of inhibitory to excitatory vertices. In this paper we extend this result to all networks in which the percolation process exhibits an explosive behaviour. This includes in particular inhomogeneous random networks, as given by Chung–Lu graphs with degree parameter $\beta \in (2,3)$.
- Published
- 2015
- Full Text
- View/download PDF
41. Cytochrome P450 reductase dependent inhibition of cytochrome P450 2B1 activity: Implications for gene directed enzyme prodrug therapy
- Author
-
Walter H. Günzburg, Wolfgang Gregor, Brian Salmons, Dana Düvier, Markus Omann, Johannes Lengler, Harry Holzmüller, and Matthias Renner
- Subjects
Cell Survival ,Blotting, Western ,education ,Cytochrome c Group ,Transfection ,Biochemistry ,Cell Line ,Mice ,Cell Line, Tumor ,Oxazines ,Animals ,Humans ,Prodrugs ,Ifosfamide ,cardiovascular diseases ,Antineoplastic Agents, Alkylating ,health care economics and organizations ,NADPH-Ferrihemoprotein Reductase ,Pharmacology ,Dose-Response Relationship, Drug ,biology ,Cytochrome c ,HEK 293 cells ,Cytochrome P450 ,Cytochrome P450 reductase ,Biological activity ,Genetic Therapy ,Suicide gene ,Molecular biology ,Cell culture ,Cytochrome P-450 CYP2B1 ,NIH 3T3 Cells ,biology.protein ,Plasmids - Abstract
Cytochrome P450 (P450) enzymes are often used in suicide gene cancer therapy strategies to convert an inactive prodrug into its therapeutic active metabolites. However, P450 activity is dependent on electrons supplied by cytochrome P450 reductase (CPR). Since endogenous CPR activity may not be sufficient for optimal P450 activity, the overexpression of additional CPR has been considered to be a valuable approach in gene directed enzyme prodrug therapy (GDEPT). We have analysed a set of cell lines for the effects of CPR on cytochrome P450 isoform 2B1 (CYP2B1) activity. CPR transfected human embryonic kidney 293 (HEK293) cells showed both strong CPR expression in Western blot analysis and 30-fold higher activity in cytochrome c assays as compared to parental HEK293 cells. In contrast, resorufin and 4-hydroxy-ifosfamide assays revealed that CYP2B1 activity was reduced by up to 10-fold in CPR/CYP2B1 cotransfected HEK293 cells as compared to cells transfected with the CYP2B1 expression plasmid alone. Determination of ifosfamide-mediated effects on cell viability allowed independent confirmation of the reduction in CYP2B1 activity upon CPR coexpression. Inhibition of CYP2B1 activity by CPR was also observed in CYP2B1/CPR transfected or infected pancreatic tumour cell lines Panc-1 and Pan02, the human breast tumour cell line T47D and the murine embryo fibroblast cell line NIH3T3. A CPR mediated increase in CYP2B1 activity was only observed in the human breast tumour cell line Hs578T. Thus, our data reveal an effect of CPR on CYP2B1 activity dependent on the cell type used and therefore demand a careful evaluation of the therapeutic benefit of combining cytochrome P450 and CPR in respective in vivo models in each individual target tissue to be treated.
- Published
- 2006
- Full Text
- View/download PDF
42. FMDV–2A sequence and protein arrangement contribute to functionality of CYP2B1–reporter fusion protein
- Author
-
Walter H. Günzburg, Harry Holzmüller, Brian Salmons, Matthias Renner, and Johannes Lengler
- Subjects
Recombinant Fusion Proteins ,Green Fluorescent Proteins ,Biophysics ,Gene Expression ,Biology ,Endoplasmic Reticulum ,Transfection ,Biochemistry ,Cell Line ,Green fluorescent protein ,Gene expression ,Animals ,Humans ,Molecular Biology ,chemistry.chemical_classification ,Reporter gene ,Endoplasmic reticulum ,Cell Biology ,Molecular biology ,Fusion protein ,Cell biology ,Amino acid ,chemistry ,Cytochrome P-450 CYP2B1 ,Linker - Abstract
Two optimized forms of green fluorescence proteins (GFP), enhanced GFP (EGFP) and humanized Renilla GFP (hrGFP), were used to track expression of cytochrome P450 2B1 (CYP2B1), an endoplasmic reticulum membrane-bound protein. In transiently expressing HEK293 cells we show that CYP2B1-GFP fusion proteins are stable and functional, whereas the vice-versa-arranged GFP-CYP2B1 fusions are not. The CYP2B1-hrGFP fusion protein is characterized by reduction in mean fluorescence intensity (MFI) to less than 20% of that of the hrGFP protein alone, accompanied by a 50% loss of CYP2B1 activity. Exchanging the linker for an alpha-helical peptide structure between CYP2B1 and hrGFP does not improve fusion protein activity. Insertion of a short linker (five amino acids) increases reporter protein fluorescence intensity twofold without improving CYP2B1 activity. Introduction of the foot and mouth disease virus 2A sequence providing cotranslational cleavage led to an unstable hrGFP-2A protein, whereas the corresponding EGFP-2A protein was stable and yielded an MFI superior to those of all other fusion constructs tested. CYP2B1 activity of the EGFP-2A-CYP2B1 protein was in the range of that of the unmodified CYP2B1. These data indicate that the protein arrangement EGFP-2A-CYP2B1 is superior to others, since it is most active and visible, which is essential for an effective tracking of the CYP2B1 enzyme.
- Published
- 2005
- Full Text
- View/download PDF
43. Agonistic and Antagonistic Action of AP2, Msx2, Pax6, Prox1 and Six3 in the Regulation of Sox2 Expression
- Author
-
Doris Münster, Johannes Lengler, Alaa El-Din A. Gawad, Jochen Graw, and Tobias Bittner
- Subjects
GAL4/UAS system ,Regulation of gene expression ,Reporter gene ,General Medicine ,TCF4 ,Biology ,Molecular biology ,Sensory Systems ,Cellular and Molecular Neuroscience ,Ophthalmology ,SOX2 ,Sp3 transcription factor ,sense organs ,PAX6 ,Transcription factor - Abstract
Sox2 transcription factor is expressed in neural tissues and sensory epithelia from the early stages of development. Particularly, it is known to activate crystallin gene expression and to be involved in differentiation of lens and neural tissues. However, its place in the signaling cascade is not well understood. Here, we report on the response of its promoter to the presence of other transcription factors, AP2α, Msx2, Pax6, Prox1 and Six3, in a transient reporter gene assay using HEK293 cells as recipient cells. Taken together, our data show that AP2, Pax6 and PROX1 can activate the Sox2 promoter. Msx2 has an inhibitory effect, whereas Six3 does not affect the Sox2 promoter. These data indicate a common activating cascade at least for AP2, Pax6, Prox1 and Sox2.
- Published
- 2005
- Full Text
- View/download PDF
44. Bootstrap percolation with inhibition
- Author
-
Johannes Lengler, Hafsteinn Einarsson, Frank Mousset, Konstantinos Panagiotou, and Angelika Steger
- Subjects
Random graph ,Discrete mathematics ,bootstrap percolation ,input normalization ,random graphs ,education.field_of_study ,Bootstrap percolation ,Quantitative Biology::Neurons and Cognition ,Computer science ,Applied Mathematics ,General Mathematics ,Population ,Probability (math.PR) ,Process (computing) ,0102 computer and information sciences ,01 natural sciences ,Computer Graphics and Computer-Aided Design ,Diffusion process ,010201 computation theory & mathematics ,Asynchronous communication ,FOS: Mathematics ,Mathematics - Combinatorics ,Combinatorics (math.CO) ,Diffusion (business) ,education ,Mathematics - Probability ,Software - Abstract
Bootstrap percolation is a prominent framework for studying the spreading of activity on a graph. We begin with an initial set of active vertices. The process then proceeds in rounds, and further vertices become active as soon as they have a certain number of active neighbors. A recurring feature in bootstrap percolation theory is an `all-or-nothing' phenomenon: either the size of the starting set is so small that the process stops very soon, or it percolates (almost) completely. Motivated by several important phenomena observed in various types of real-world networks we propose in this work a variant of bootstrap percolation that exhibits a vastly different behavior. Our graphs have two types of vertices: some of them obstruct the diffusion, while the others facilitate it. We study the effect of this setting by analyzing the process on Erdős–Rényi random graphs. Our main findings are two-fold. First, we show that the presence of vertices hindering the diffusion does not result in a stable behavior: tiny changes in the size of the starting set can dramatically influence the size of the final active set. In particular, the process is non-monotone: a larger starting set can result in a smaller final set. In the second part of the paper we show that this phenomenon arises from the round-based approach: if we move to a continuous time model in which every edge draws its transmission time randomly, then we gain stability, and the process stops with an active set that contains a non-trivial constant fraction of all vertices. Moreover, we show that in the continuous time model percolation occurs significantly faster compared to the classical round-based model. Our findings are in line with empirical observations and demonstrate the importance of introducing various types of vertex behaviors in the mathematical model.
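The round-based variant with two vertex types can be sketched as follows; this is a minimal illustration in which the function name, the dict-of-sets graph representation, and the signed activation score are assumptions of the sketch:

```python
def bootstrap_percolation(adj, inhibitory, start, threshold):
    """Round-based bootstrap percolation with inhibition (sketch).

    An inactive vertex activates once its number of active excitatory
    neighbours minus its number of active inhibitory neighbours reaches
    `threshold`. Returns the final set of active vertices.
    """
    active = set(start)
    changed = True
    while changed:                    # one iteration per round
        changed = False
        for v in adj:
            if v in active:
                continue
            score = sum(-1 if u in inhibitory else 1
                        for u in adj[v] if u in active)
            if score >= threshold:
                active.add(v)
                changed = True
    return active
```

With no inhibitory vertices and threshold 1 the process percolates along a path completely; a single active inhibitory neighbour can cancel an excitatory one and halt the spread.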
- Published
- 2014
- Full Text
- View/download PDF
45. 286. Genome Editing of Inducible Cell Lines for Scalable Production of Improved Lentiviral Vectors for Human Gene Therapy
- Author
-
Tiziano Di Tomaso, Andrea Annoni, Luigi Naldini, Alessio Cantore, Angelo Lombardo, Sara Bartolaccini, Michela Milani, Michael C. Holmes, Friedrich Scheiflinger, and Johannes Lengler
- Subjects
Pharmacology ,Genetic enhancement ,Biology ,Virology ,Cell biology ,Genome engineering ,Haematopoiesis ,Plasmid ,Genome editing ,Cell culture ,Drug Discovery ,Genetics ,Molecular Medicine ,Progenitor cell ,Molecular Biology ,Gene - Abstract
Lentiviral vectors (LVs) represent efficient and versatile vehicles for gene therapy. Current manufacturing of clinical-grade LVs mostly relies on transient transfection of plasmids expressing the multiple vector components. This method is labor and cost intensive and becomes challenging when facing the need of scale-up and standardization. The development of stable LV producer cell lines will greatly facilitate overcoming these hurdles. We have generated an inducible LV packaging cell line, carrying the genes encoding for third-generation vector components stably integrated in the genome under the control of tetracycline-regulated promoters. These LV packaging cells are stable in culture even after single-cell cloning and can be scaled up to large volumes. In order to minimize the immunogenicity of LVs for in vivo administration, we set out to remove the highly polymorphic class-I major histocompatibility complexes (MHC-I) expressed on LV packaging cells and incorporated in the LV envelope. We performed genetic disruption of the β-2 microglobulin (B2M) gene, a required component for the assembly and trafficking of all MHC-I to the plasma membrane in LV producer cells, exploiting the RNA-guided Cas9 nuclease. The resulting B2M-negative cells were devoid of surface-exposed MHC-I and produced MHC-free LVs. These LVs retain their infectivity on all tested cells in vitro and efficiently transduced the mouse liver upon intravenous administration. Strikingly, the MHC-free LVs showed significantly reduced immunogenicity in a T-cell activation assay performed on human primary T cells co-cultured with syngeneic monocytes exposed to LV, from several (n=7) healthy donors. To reproducibly generate LV-producer cell lines from these cells, we insert the LV genome of interest in the AAVS1 locus, chosen for robust expression, exploiting engineered nucleases and homology-directed repair. 
By this strategy, we have obtained several independent producer cell lines for LVs that express marker or therapeutic genes and are devoid of plasmid DNA contamination. LVs produced by these cells reproducibly show titer and infectivity within the lower bound range of standard optimized transient transfection, and effectively transduce relevant target cells, such as hematopoietic stem/progenitor cells and T cells ex vivo and the mouse liver in vivo. Overall, we provide evidence that rationally designed targeted genome engineering can be used to improve the yield, quality, safety and sustainability of LV production for clinical use.
- Published
- 2016
- Full Text
- View/download PDF
46. Can quantum search accelerate evolutionary algorithms?
- Author
-
Piyush P. Kurur, Daniel Johannsen, and Johannes Lengler
- Subjects
Quantum sort ,Speedup ,Theoretical computer science ,Computer Science::Neural and Evolutionary Computation ,Mutation (genetic algorithm) ,Evolutionary algorithm ,Quantum phase estimation algorithm ,Quantum algorithm ,Quantum algorithm for linear systems of equations ,Algorithm ,Quantum computer ,Mathematics - Abstract
In this article, we formulate for the first time the notion of a quantum evolutionary algorithm. In fact we define a quantum analogue for any elitist (1+1) randomized search heuristic. The quantum evolutionary algorithm, which we call (1+1) quantum evolutionary algorithm (QEA), is the quantum version of the classical (1+1) evolutionary algorithm (EA), and runs only on a quantum computer. It uses Grover search [13] to accelerate the search for improved offspring. To understand the speedup of the (1+1) QEA over the (1+1) EA, we study the three well known pseudo-Boolean optimization problems OneMax, LeadingOnes, and Discrepancy. We show that although there is a speedup in the case of OneMax and LeadingOnes in the quantum setting, the speedup is less than quadratic. For Discrepancy, we show that the speedup is at best constant. The reason for this inconsistency is the difference in the probability of making a successful mutation. On the one hand, if the probability of making a successful mutation is large then quantum acceleration does not help much. On the other hand, if the probability of making a successful mutation is small then quantum enhancement indeed helps.
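The classical (1+1) EA that the paper's quantum version builds on can be sketched on OneMax; the (1+1) QEA would replace the sampling of an improving offspring by Grover search, which cannot be reproduced classically here. Function names and the evaluation budget are illustrative assumptions:

```python
import random

def one_max(x):
    """OneMax fitness: the number of 1-bits in the bit string."""
    return sum(x)

def one_plus_one_ea(n, budget=100000, seed=0):
    """Classical elitist (1+1) EA on OneMax (sketch)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = one_max(x)
    for _ in range(budget):
        if fx == n:
            break
        # standard bit mutation: flip each bit independently with prob 1/n
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = one_max(y)
        if fy >= fx:          # elitist acceptance of the offspring
            x, fx = y, fy
    return x, fx
```

The expected runtime on OneMax is Θ(n log n) fitness evaluations, which is the baseline against which the quantum speedup is measured.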
- Published
- 2010
- Full Text
- View/download PDF
47. 6. Targeted Genome Editing of Cell Lines for Improved and Scalable Production of Lentiviral Vectors for Human Gene Therapy
- Author
-
Philip D. Gregory, Michela Milani, Alessio Cantore, Sara Bartolaccini, Friedrich Scheiflinger, Tiziano Di Tomaso, Luigi Naldini, Johannes Lengler, and Angelo Lombardo
- Subjects
Pharmacology ,biology ,Genetic enhancement ,Major histocompatibility complex ,Genome ,Virology ,Genome engineering ,Cell biology ,Genome editing ,Cell culture ,Drug Discovery ,Genetics ,biology.protein ,Molecular Medicine ,Progenitor cell ,Molecular Biology ,Gene - Abstract
Lentiviral vectors (LVs) represent efficient and versatile vehicles for gene therapy. The manufacturing of clinical-grade LVs relies on transient transfection of vector components. This method is labor and cost intensive and becomes challenging when facing the need of scale-up and standardization. The development of stable LV producer cell lines will greatly facilitate overcoming these hurdles. We have generated an inducible LV packaging cell line, carrying the genes encoding for third-generation vector components stably integrated in the genome under the control of tetracycline-regulated promoters. In order to minimize the immunogenicity of LVs for in vivo administration, we set out to remove the highly polymorphic and antigenic class-I major histocompatibility complex (MHC-I) expressed on LV packaging cells and subsequently incorporated on the LV envelope. We performed genetic disruption of the β-2 microglobulin (B2M) gene, a required component for the assembly and trafficking of the MHC-I to the plasma membrane in LV producer cells, exploiting the RNA-guided Cas9 nuclease. We generated B2M-negative cells devoid of surface-exposed MHC-I, which retain the ability to produce LVs. In order to insert the LV genome of interest in the packaging cell line, we performed site-specific integration in predetermined loci of the genome of these cells, chosen for robust expression, exploiting artificial nucleases and homology-directed repair. In several independent iterations of this process, we generated producer cell lines both for LV expressing marker genes and a therapeutic gene, i.e. coagulation factor IX (FIX), the gene mutated in hemophilia B. We show that these LV producer cells are stable in culture and can produce several liters of LV-containing conditioned medium. 
These LVs have comparable or only slightly lower infectious titer and specific infectivity than LVs produced by state-of-the-art transient transfection process and can transduce therapeutically relevant target cells, such as hematopoietic stem/progenitor cells and T lymphocytes to high efficiency. Moreover, we intravenously administered FIX-expressing LVs produced by the cell line to hemophilia B mice and established therapeutic levels of circulating FIX. These data indicate that site-specific integration is an efficient, rapid and reproducible method to generate LV producer cells, starting from a universal stable inducible LV packaging cell line. Overall, we provide evidence that rationally designed targeted genome engineering can be used to improve the quality, safety and sustainability of LV production for clinical use.
- Published
- 2015
- Full Text
- View/download PDF
48. The Interval Liar Game
- Author
-
Benjamin Doerr, David Steurer, and Johannes Lengler
- Subjects
Combinatorics ,Set (abstract data type) ,Structure (category theory) ,Perfect information ,TheoryofComputation_GENERAL ,Contrast (statistics) ,Interval (graph theory) ,Burst error ,Game theory ,Algorithm ,Transmission errors ,Mathematics - Abstract
We consider the problem of communication in the presence of faulty transmissions. In contrast to the classical works in this area, we assume some structure on the times when the faults occur. A more realistic assumption is the “burst error model”, in which all faults occur in some small time interval. Like previous work, our problem can best be modelled as a two-player perfect information game, in which one player (“Paul”) has to guess a number x from {1, ..., n} using Yes/No-questions, which the second player (“Carole”) has to answer truthfully apart from few lies. In our setting, all lies have to be in a consecutive set of k rounds. We show that (for big n) Paul needs roughly log n + log log n + k rounds to determine the number, which is only k more than the case of a single lie.
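The stated bound can be illustrated numerically with a hypothetical helper that simply evaluates log n + log log n + k with base-2 logarithms and rounds up; lower-order terms of the actual result are ignored:

```python
import math

def paul_rounds(n, k):
    """Approximate number of Yes/No questions Paul needs in the interval
    liar game, per the roughly log n + log log n + k bound (illustrative)."""
    return math.ceil(math.log2(n) + math.log2(math.log2(n)) + k)
```

For n = 1024, allowing the lies in a window of k = 3 rounds costs only two questions more than a single lie, matching the "only k more" comparison in the abstract.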
- Published
- 2006
- Full Text
- View/download PDF
49. Agonistic and antagonistic action of AP2, Msx2, Pax6, Prox1 AND Six3 in the regulation of Sox2 expression
- Author
-
Johannes, Lengler, Tobias, Bittner, Doris, Münster, Alaa El-Din A, Gawad, and Jochen, Graw
- Subjects
Homeodomain Proteins ,PAX6 Transcription Factor ,SOXB1 Transcription Factors ,Tumor Suppressor Proteins ,Blotting, Western ,Nerve Tissue Proteins ,Kidney ,Transfection ,Cell Line ,DNA-Binding Proteins ,Repressor Proteins ,Gene Expression Regulation ,HMGB Proteins ,Humans ,Paired Box Transcription Factors ,Eye Proteins ,Plasmids ,Transcription Factors - Abstract
Sox2 transcription factor is expressed in neural tissues and sensory epithelia from the early stages of development. Particularly, it is known to activate crystallin gene expression and to be involved in differentiation of lens and neural tissues. However, its place in the signaling cascade is not well understood. Here, we report on the response of its promoter to the presence of other transcription factors, AP2alpha, Msx2, Pax6, Prox1 and Six3, in a transient reporter gene assay using HEK293 cells as recipient cells. Taken together, our data show that AP2, Pax6 and PROX1 can activate the Sox2 promoter. Msx2 has an inhibitory effect, whereas Six3 does not affect the Sox2 promoter. These data indicate a common activating cascade at least for AP2, Pax6, Prox1 and Sox2.
- Published
- 2004
50. Regulation of the human SIX3 gene promoter
- Author
-
Johannes Lengler and Jochen Graw
- Subjects
PAX6 Transcription Factor ,Response element ,Biophysics ,Repressor ,Nerve Tissue Proteins ,Biology ,Eye ,Biochemistry ,Humans ,Paired Box Transcription Factors ,Eye Proteins ,Promoter Regions, Genetic ,Molecular Biology ,Transcription factor ,Gene ,Cells, Cultured ,Regulation of gene expression ,Homeodomain Proteins ,Reporter gene ,Tumor Suppressor Proteins ,Gene Amplification ,Promoter ,Cell Biology ,Molecular biology ,DNA-Binding Proteins ,Repressor Proteins ,Gene Expression Regulation ,sense organs ,PAX6 - Abstract
A 2-kb promoter fragment of SIX3, a human transcription factor essential for vertebrate eye development, has been characterized in a gene reporter assay system. Peak activity requires the full 2-kb sequence of SIX3, whereas 5'-deletion constructs of the promoter successively decrease activity to 60% of that of the entire promoter. In contrast, removing 300 bp from the 3' end of the promoter extinguishes its activity completely. Coexpression experiments with other transcription factors illuminate the regulation of SIX3 during eye development: Pax6 activates the -703/-349 SIX3 promoter threefold, and PROX1 even eightfold. In contrast, Msx2 represses the entire SIX3 promoter. Furthermore, Six3 is regulated by its own negative feedback loop. In conclusion, SIX3 expression is subject to complex regulation, which is an important part of understanding the network of transcription factors during eye development.
- Published
- 2001