19 results on "Woerden, W.P.J. (Wessel) van"
Search Results
2. An algorithmic reduction theory for binary codes: LLL and more
- Author
- Debris-Alazard, T. (Thomas), Ducas, L. (Léo), and Woerden, W.P.J. (Wessel) van
- Abstract
In this article, we propose an adaptation of the algorithmic reduction theory of lattices to binary codes. This includes the celebrated LLL algorithm (Lenstra, Lenstra, Lovász, 1982), as well as adaptations of associated algorithms such as the Nearest Plane Algorithm of Babai (1986). Interestingly, the adaptation of LLL to binary codes can be interpreted as an algorithmic version of the bound of Griesmer (1960) on the minimal distance of a code. Using these algorithms, we demonstrate, both with a heuristic analysis and in practice, a small polynomial speed-up over the Information-Set Decoding algorithm of Lee and Brickell (1988) for random binary codes. This appears to be the first such speed-up that is not based on a time-memory trade-off. The above speed-up should be read as a very preliminary example of the potential of a reduction theory for codes, for example in cryptanalysis.
- Published
- 2022
- Full Text
- View/download PDF
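As background to the bound of Griesmer (1960) cited in the abstract above: for a binary linear [n, k, d] code it lower-bounds the length n in terms of the dimension k and the minimal distance d. This is the standard statement, reproduced here as a reading aid, not quoted from the record:

    % Griesmer bound for a binary linear [n, k, d] code
    \[
      n \;\ge\; \sum_{i=0}^{k-1} \left\lceil \frac{d}{2^{i}} \right\rceil
    \]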
3. On the Lattice Isomorphism Problem, quadratic forms, remarkable lattices, and cryptography
- Author
- Ducas, L. (Léo) and Woerden, W.P.J. (Wessel) van
- Abstract
A natural and recurring idea in the knapsack/lattice cryptography literature is to start from a lattice with remarkable decoding capability as your private key, and hide it somehow to make a public key. This is also how the code-based encryption scheme of McEliece (1978) proceeds. This idea has never worked out very well for lattices: ad-hoc approaches have been proposed, but they have been subject to ad-hoc attacks, using tricks beyond lattice reduction algorithms. On the other hand, the framework offered by the Short Integer Solution (SIS) and Learning With Errors (LWE) problems, while convenient and well founded, remains frustrating from a coding perspective: the underlying decoding algorithms are rather trivial, with poor decoding performance. In this work, we provide generic realizations of this natural idea (independently of the chosen remarkable lattice) by basing cryptography on the lattice isomorphism problem (LIP). More specifically, we provide:
- a worst-case to average-case reduction for search-LIP and distinguish-LIP within an isomorphism class, by extending techniques of Haviv and Regev (SODA 2014);
- a zero-knowledge proof of knowledge (ZKPoK) of an isomorphism, which implies an identification scheme based on search-LIP;
- a key encapsulation mechanism (KEM) and a hash-then-sign signature scheme, both based on distinguish-LIP.
The purpose of this approach is for remarkable lattices to improve the security and performance of lattice-based cryptography. For example, decoding within a poly-logarithmic factor from Minkowski's bound in a remarkable lattice would lead to a KEM resisting lattice attacks down to a poly-logarithmic approximation factor, provided that the dual lattice is also close to Minkowski's bound. Recent works have indeed reached such decoders for certain lattices (Chor-Rivest, Barnes-Sloane), but these do not perfectly fit our need as their duals have poor minimal distance.
- Published
- 2022
- Full Text
- View/download PDF
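For reference, the Lattice Isomorphism Problem underlying this work is usually stated in the language of quadratic forms, matching the title above (standard formulation, paraphrased rather than quoted from the record): two positive definite forms Q1 and Q2 are equivalent when

    \[
      Q_2 = U^{\mathsf{T}} Q_1 U \quad \text{for some } U \in \mathrm{GL}_n(\mathbb{Z}).
    \]

Search-LIP asks to recover such a U given an equivalent pair; distinguish-LIP asks to decide which of two isomorphism classes a given form was drawn from.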
4. On the Lattice Isomorphism Problem, quadratic forms, remarkable lattices, and cryptography
- Author
- Ducas, L. (Léo) and Woerden, W.P.J. (Wessel) van
- Abstract
A natural and recurring idea in the knapsack/lattice cryptography literature is to start from a lattice with remarkable decoding capability as your private key, and hide it somehow to make a public key. This is also how the code-based encryption scheme of McEliece (1978) proceeds. This idea has never worked out very well for lattices: ad-hoc approaches have been proposed, but they have been subject to ad-hoc attacks, using tricks beyond lattice reduction algorithms. On the other hand, the framework offered by the Short Integer Solution (SIS) and Learning With Errors (LWE) problems, while convenient and well founded, remains frustrating from a coding perspective: the underlying decoding algorithms are rather trivial, with poor decoding performance. In this work, we provide generic realizations of this natural idea (independently of the chosen remarkable lattice) by basing cryptography on the lattice isomorphism problem (LIP). More specifically, we provide:
- a worst-case to average-case reduction for search-LIP and distinguish-LIP within an isomorphism class, by extending techniques of Haviv and Regev (SODA 2014);
- a zero-knowledge proof of knowledge (ZKPoK) of an isomorphism, which implies an identification scheme based on search-LIP;
- a key encapsulation mechanism (KEM) and a hash-then-sign signature scheme, both based on distinguish-LIP.
The purpose of this approach is for remarkable lattices to improve the security and performance of lattice-based cryptography. For example, decoding within a poly-logarithmic factor from Minkowski's bound in a remarkable lattice would lead to a KEM resisting lattice attacks down to a poly-logarithmic approximation factor, provided that the dual lattice is also close to Minkowski's bound. Recent works have indeed reached such decoders for certain lattices (Chor-Rivest, Barnes-Sloane), but these do not perfectly fit our need as their duals have poor minimal distance.
- Published
- 2022
- Full Text
- View/download PDF
5. Hawk: Module LIP makes Lattice Signatures Fast, Compact and Simple
- Author
- Ducas, L. (Léo), Postlethwaite, E.W. (Eamonn Walter), Pulles, L.N. (Ludo), and Woerden, W.P.J. (Wessel) van
- Abstract
We propose the signature scheme Hawk, a concrete instantiation of proposals to use the Lattice Isomorphism Problem (LIP) as a foundation for cryptography that focuses on simplicity. This simplicity stems from LIP, which allows the use of lattices such as Z^n, leading to signature algorithms with no floats, no rejection sampling, and compact precomputed distributions. Such design features are desirable for constrained devices, and when computing signatures inside FHE or MPC. The most significant change from recent LIP proposals is the use of module lattices, reusing algorithms and ideas from NTRUSign and Falcon. Its simplicity makes Hawk competitive. We provide cryptanalysis with experimental evidence for the design of Hawk and implement two parameter sets, Hawk-512 and Hawk-1024. Signing using Hawk-512 and Hawk-1024 is four times faster than Falcon on x86 architectures, produces signatures that are about 15% more compact, and is slightly more secure against forgeries by lattice reduction attacks. When floating-point arithmetic is unavailable, Hawk signs 15 times faster than Falcon. We provide a worst-case to average-case reduction for module LIP. For certain parametrisations of Hawk this applies to secret key recovery, and we reduce signature forgery in the random oracle model to a new problem called the one more short vector problem.
- Published
- 2022
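The "no floats" claim above rests on the fact that lattices such as Z^n admit a trivial exact decoder. The sketch below shows only that background fact, not Hawk's signing procedure: coordinate-wise rounding solves CVP on Z^n exactly, with rational arithmetic throughout.

    # Exact CVP decoder for the lattice Z^n: round each coordinate.
    # Fractions keep the computation float-free; ties follow Python's
    # round-half-to-even, and any fixed tie-breaking rule works.
    from fractions import Fraction

    def closest_in_Zn(target):
        return [round(Fraction(t)) for t in target]

    # Example with a half-integer target:
    print(closest_in_Zn([Fraction(1, 2), Fraction(-3, 2), Fraction(2, 3)]))
    # -> [0, -2, 1]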
6. An algorithmic reduction theory for binary codes: LLL and more
- Author
- Debris-Alazard, T. (Thomas), Ducas, L. (Léo), and Woerden, W.P.J. (Wessel) van
- Abstract
In this article, we propose an adaptation of the algorithmic reduction theory of lattices to binary codes. This includes the celebrated LLL algorithm (Lenstra, Lenstra, Lovász, 1982), as well as adaptations of associated algorithms such as the Nearest Plane Algorithm of Babai (1986). Interestingly, the adaptation of LLL to binary codes can be interpreted as an algorithmic version of the bound of Griesmer (1960) on the minimal distance of a code. Using these algorithms, we demonstrate, both with a heuristic analysis and in practice, a small polynomial speed-up over the Information-Set Decoding algorithm of Lee and Brickell (1988) for random binary codes. This appears to be the first such speed-up that is not based on a time-memory trade-off. The above speed-up should be read as a very preliminary example of the potential of a reduction theory for codes, for example in cryptanalysis.
- Published
- 2022
- Full Text
- View/download PDF
7. HAWK: Module LIP makes lattice signatures fast, compact and simple
- Author
- Ducas, L. (Léo), Postlethwaite, E.W. (Eamonn Walter), Pulles, L.N. (Ludo), and Woerden, W.P.J. (Wessel) van
- Abstract
We propose the signature scheme Hawk, a concrete instantiation of proposals to use the Lattice Isomorphism Problem (LIP) as a foundation for cryptography that focuses on simplicity. This simplicity stems from LIP, which allows the use of lattices such as Z^n, leading to signature algorithms with no floats, no rejection sampling, and compact precomputed distributions. Such design features are desirable for constrained devices, and when computing signatures inside FHE or MPC. The most significant change from recent LIP proposals is the use of module lattices, reusing algorithms and ideas from NTRUSign and Falcon. Its simplicity makes Hawk competitive. We provide cryptanalysis with experimental evidence for the design of Hawk and implement two parameter sets, Hawk-512 and Hawk-1024. Signing using Hawk-512 and Hawk-1024 is four times faster than Falcon on x86 architectures, produces signatures that are about 15% more compact, and is slightly more secure against forgeries by lattice reduction attacks. When floating-point arithmetic is unavailable, Hawk signs 15 times faster than Falcon. We provide a worst-case to average-case reduction for module LIP. For certain parametrisations of Hawk this applies to secret key recovery, and we reduce signature forgery in the random oracle model to a new problem called the one more short vector problem.
- Published
- 2022
- Full Text
- View/download PDF
8. NTRU Fatigue: How stretched is overstretched?
- Author
- Ducas, L. (Léo) and Woerden, W.P.J. (Wessel) van
- Abstract
Until recently, lattice reduction attacks on NTRU lattices were thought to behave similarly to attacks on (ring-)LWE lattices with the same parameters. However, several works (Albrecht-Bai-Ducas 2016, Kirchner-Fouque 2017) showed a significant gap for large moduli q, the so-called overstretched regime of NTRU. With the NTRU scheme being a finalist in the NIST PQC competition, it is important to understand, both asymptotically and concretely, where the fatigue point lies exactly, i.e. at which q the overstretched regime begins. Unfortunately, the analysis by Kirchner and Fouque is based on an impossibility argument, which only results in an asymptotic upper bound on the fatigue point. It also does not really explain how lattice reduction actually recovers secret-key information. We propose a new analysis that asymptotically improves on that of Kirchner and Fouque, narrowing down the fatigue point for ternary NTRU from q ≤ n^{2.783+o(1)} to q = n^{2.484+o(1)}, and finally explaining the mechanism behind this phenomenon. We push this analysis further to a concrete one, settling the fatigue point at q ≈ 0.004 · n^{2.484}, and allowing precise hardness predictions in the overstretched regime. These predictions are backed by extensive experiments.
- Published
- 2021
- Full Text
- View/download PDF
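The concrete estimate q ≈ 0.004 · n^{2.484} from this abstract is straightforward to evaluate; a toy calculator, with dimensions chosen purely for illustration:

    # Concrete fatigue-point estimate for ternary NTRU, per the abstract above.
    def ntru_fatigue_point(n: int) -> float:
        """Estimated modulus q at which the overstretched regime begins."""
        return 0.004 * n ** 2.484

    for n in (509, 677, 821):  # illustrative dimensions
        print(n, round(ntru_fatigue_point(n)))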
9. Advanced lattice sieving on GPUs, with Tensor Cores
- Author
- Ducas, L. (Léo), Stevens, M.M.J. (Marc), and Woerden, W.P.J. (Wessel) van
- Abstract
In this work, we study GPU implementations of various state-of-the-art sieving algorithms for lattices (Becker-Gama-Joux 2015, Becker-Ducas-Gama-Laarhoven 2016, Herold-Kirshanova 2017) inside the General Sieve Kernel (G6K, Albrecht et al. 2019). In particular, we extensively exploit the recently introduced Tensor Cores (originally designed for raytracing and machine learning) and demonstrate their fitness for the cryptanalytic task at hand. We also propose a new dual-hash technique for efficient detection of 'lift-worthy' pairs to accelerate a key ingredient of G6K: finding short lifted vectors. We obtain new computational records, reaching dimension 180 for the SVP Darmstadt Challenge and improving upon the previous record of dimension 155. This computation ran for 51.6 days on a server with 4 NVIDIA Turing GPUs and 1.5TB of RAM. This corresponds to a gain of about two orders of magnitude over previous records, both in terms of wall-clock time and of energy efficiency.
- Published
- 2021
- Full Text
- View/download PDF
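For readers unfamiliar with sieving: the elementary step that such GPU kernels spend nearly all of their time on is pairwise reduction of a large list of lattice vectors. The sketch below shows that generic step (in the style of the Nguyen-Vidick sieve); it is not the paper's Tensor Core implementation.

    import numpy as np

    def reduce_pairwise(vs: np.ndarray) -> np.ndarray:
        """One pass of pairwise reductions over the rows of vs (lattice vectors)."""
        vs = vs.copy()
        for i in range(len(vs)):
            for j in range(len(vs)):
                if i == j:
                    continue
                # Replace v_i by v_i - v_j or v_i + v_j whenever that is strictly
                # shorter; the sign of the inner product picks the better of the two.
                cand = vs[i] - np.sign(vs[i] @ vs[j]) * vs[j]
                if cand @ cand < vs[i] @ vs[i]:
                    vs[i] = cand
        return vs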
10. NTRUFatigue
- Author
- Ducas, L. (Léo) and Woerden, W.P.J. (Wessel) van
- Abstract
Experiments and predictions for attacks on NTRU in the overstretched regime.
- Published
- 2021
11. NTRU Fatigue: How stretched is overstretched?
- Author
- Ducas, L. (Léo) and Woerden, W.P.J. (Wessel) van
- Abstract
Until recently, lattice reduction attacks on NTRU lattices were thought to behave similarly to attacks on (ring-)LWE lattices with the same parameters. However, several works (Albrecht-Bai-Ducas 2016, Kirchner-Fouque 2017) showed a significant gap for large moduli q, the so-called overstretched regime of NTRU. With the NTRU scheme being a finalist in the NIST PQC competition, it is important to understand, both asymptotically and concretely, where the fatigue point lies exactly, i.e. at which q the overstretched regime begins. Unfortunately, the analysis by Kirchner and Fouque is based on an impossibility argument, which only results in an asymptotic upper bound on the fatigue point. It also does not really explain how lattice reduction actually recovers secret-key information. We propose a new analysis that asymptotically improves on that of Kirchner and Fouque, narrowing down the fatigue point for ternary NTRU from q ≤ n^{2.783+o(1)} to q = n^{2.484+o(1)}, and finally explaining the mechanism behind this phenomenon. We push this analysis further to a concrete one, settling the fatigue point at q ≈ 0.004 · n^{2.484}, and allowing precise hardness predictions in the overstretched regime. These predictions are backed by extensive experiments.
- Published
- 2021
- Full Text
- View/download PDF
12. A note on a Claim of Eldar & Hallgren: LLL already solves it
- Author
- Ducas, L. (Léo) and Woerden, W.P.J. (Wessel) van
- Abstract
In a recent talk by Hallgren on joint work with Eldar (Sept 21, 2021, Simons Institute), a polynomial-time quantum algorithm for solving BDD in a certain class of lattices was claimed. We show here that known classical (and even deterministic) polynomial-time algorithms already achieve this result.
- Published
- 2021
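The classical algorithms alluded to in this note are of the form "LLL reduction followed by deterministic decoding", such as Babai's nearest-plane algorithm. A minimal sketch of the latter, assuming the basis rows are already LLL-reduced (floating point used for brevity; exact arithmetic also works):

    import numpy as np

    def babai_nearest_plane(B: np.ndarray, t: np.ndarray) -> np.ndarray:
        """Return a vector of the lattice spanned by the rows of B close to t."""
        # Gram-Schmidt orthogonalisation of the basis rows.
        Bstar = B.astype(float)
        for i in range(len(B)):
            for j in range(i):
                Bstar[i] -= (Bstar[i] @ Bstar[j]) / (Bstar[j] @ Bstar[j]) * Bstar[j]
        v = np.zeros(len(t))
        w = t.astype(float)
        for i in reversed(range(len(B))):
            # Project the remainder onto the i-th Gram-Schmidt direction and round.
            c = round((w @ Bstar[i]) / (Bstar[i] @ Bstar[i]))
            v = v + c * B[i]
            w = w - c * B[i]
        return v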
13. Advanced lattice sieving on GPUs, with Tensor Cores
- Author
- Ducas, L. (Léo), Stevens, M.M.J. (Marc), and Woerden, W.P.J. (Wessel) van
- Abstract
In this work, we study GPU implementations of various state-of-the-art sieving algorithms for lattices (Becker-Gama-Joux 2015, Becker-Ducas-Gama-Laarhoven 2016, Herold-Kirshanova 2017) inside the General Sieve Kernel (G6K, Albrecht et al. 2019). In particular, we extensively exploit the recently introduced Tensor Cores (originally designed for raytracing and machine learning) and demonstrate their fitness for the cryptanalytic task at hand. We also propose a new dual-hash technique for efficient detection of 'lift-worthy' pairs to accelerate a key ingredient of G6K: finding short lifted vectors. We obtain new computational records, reaching dimension 180 for the SVP Darmstadt Challenge and improving upon the previous record of dimension 155. This computation ran for 51.6 days on a server with 4 NVIDIA Turing GPUs and 1.5TB of RAM. This corresponds to a gain of about two orders of magnitude over previous records, both in terms of wall-clock time and of energy efficiency.
- Published
- 2021
- Full Text
- View/download PDF
14. An upper bound on the number of perfect quadratic forms
- Author
- Woerden, W.P.J. (Wessel) van
- Published
- 2020
- Full Text
- View/download PDF
15. A canonical form for positive definite matrices
- Author
- Dutour Sikirić, M. (Mathieu), Haensch, A. (Anna), Voight, J. (John), and Woerden, W.P.J. (Wessel) van
- Abstract
We exhibit an explicit, deterministic algorithm for finding a canonical form for a positive definite matrix under unimodular integral transformations. We use characteristic sets of short vectors and partition-backtracking graph software. The algorithm runs in a number of arithmetic operations that is exponential in the dimension n, but it is practical and more efficient than canonical forms based on Minkowski reduction.
- Published
- 2020
- Full Text
- View/download PDF
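The defining property of a canonical form in the sense above, implicit in the abstract and stated here for concreteness: Can is a complete invariant of the unimodular equivalence class,

    \[
      \mathrm{Can}(U^{\mathsf{T}} Q\, U) = \mathrm{Can}(Q)
      \quad \text{for all } U \in \mathrm{GL}_n(\mathbb{Z}),
    \]

with Can(Q) itself equivalent to Q, so that two positive definite matrices are unimodularly equivalent exactly when their canonical forms coincide.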
16. The Randomized Slicer for CVPP: Sharper, Faster, Smaller, Batchier
- Author
- Ducas, L. (Léo), Laarhoven, T. (Thijs), and Woerden, W.P.J. (Wessel) van
- Abstract
Following the recent line of work on solving the closest vector problem with preprocessing (CVPP) using approximate Voronoi cells, we improve upon previous results in the following ways:
- We derive sharp asymptotic bounds on the success probability of the randomized slicer, by modelling the behaviour of the algorithm as a random walk on the coset of the lattice of the target vector. We thereby solve the open question left by Doulgerakis–Laarhoven–De Weger [PQCrypto 2019] and Laarhoven [MathCrypt 2019].
- We obtain better trade-offs for CVPP and its generalisations (strictly, in certain regimes), both with and without nearest neighbour searching, as a direct result of the above sharp bounds on the success probabilities.
- We show how to reduce the memory requirement of the slicer, and in particular the corresponding nearest neighbour data structures, using ideas similar to those proposed by Becker–Gama–Joux [Cryptology ePrint Archive, 2015]. Using 2^{0.185d+o(d)} memory, we can solve a single CVPP instance in 2^{0.264d+o(d)} time.
- We further improve on the per-instance time complexities in certain memory regimes, when we are given a sufficiently large batch of CVPP problem instances for the same lattice. Using [...] memory, we can heuristically solve CVPP instances in [...] amortized time, for batches of size at least [...].
Our random walk model for analysing arbitrary-step transition probabilities in complex step-wise algorithms may be of independent interest, both for deriving analytic bounds through convexity arguments, and for computing optimal paths numerically with a shortest path algorithm. As a side result, we apply the same random walk model to graph-based nearest neighbour searching, where we improve upon results of Laarhoven [SOCG 2018] by deriving sharp bounds on the success probability of the corresponding greedy search procedure.
- Published
- 2020
- Full Text
- View/download PDF
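Background on the slicer analysed above: its core loop reduces a target vector modulo the lattice using a preprocessed list of short lattice vectors, and the randomized slicer restarts this loop from re-randomised representatives of the target's coset. A minimal sketch of the core loop only (the preprocessing and the nearest-neighbour data structures are omitted):

    import numpy as np

    def iterative_slicer(L, t):
        """Reduce target t using a list L of short lattice vectors.

        Returns a representative of t's coset that no v in L shortens;
        t minus the result is then a nearby lattice vector."""
        t = np.asarray(t, dtype=float)
        improved = True
        while improved:
            improved = False
            for v in L:
                if (t - v) @ (t - v) < t @ t:
                    t = t - v
                    improved = True
        return t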
17. G6K-GPU-Tensor
- Author
- Ducas, L. (Léo), Stevens, M.M.J. (Marc), and Woerden, W.P.J. (Wessel) van
- Abstract
G6K is an open-source C++ and Python (2) library that implements several sieve algorithms to be used in more advanced lattice reduction tasks. It follows the stateful machine framework from: Martin R. Albrecht, Léo Ducas, Gottfried Herold, Elena Kirshanova, Eamonn W. Postlethwaite and Marc Stevens, The General Sieve Kernel and New Records in Lattice Reduction. The main source is available in fplll/g6k. This fork expands the G6K implementation with GPU-accelerated sieves, in particular using Tensor Cores, and is accompanied by the work: Léo Ducas, Marc Stevens, Wessel van Woerden, Advanced Lattice Sieving on GPUs, with Tensor Cores, Eurocrypt 2021 (eprint).
- Published
- 2020
18. A canonical form for positive definite matrices
- Author
- Dutour Sikirić, M. (Mathieu), Haensch, A. (Anna), Voight, J. (John), and Woerden, W.P.J. (Wessel) van
- Abstract
We exhibit an explicit, deterministic algorithm for finding a canonical form for a positive definite matrix under unimodular integral transformations. We use characteristic sets of short vectors and partition-backtracking graph software. The algorithm runs in a number of arithmetic operations that is exponential in the dimension n, but it is practical and more efficient than canonical forms based on Minkowski reduction.
- Published
- 2020
- Full Text
- View/download PDF
19. The Randomized Slicer for CVPP: Sharper, Faster, Smaller, Batchier
- Author
- Ducas, L. (Léo), Laarhoven, T. (Thijs), and Woerden, W.P.J. (Wessel) van
- Abstract
Following the recent line of work on solving the closest vector problem with preprocessing (CVPP) using approximate Voronoi cells, we improve upon previous results in the following ways:
- We derive sharp asymptotic bounds on the success probability of the randomized slicer, by modelling the behaviour of the algorithm as a random walk on the coset of the lattice of the target vector. We thereby solve the open question left by Doulgerakis–Laarhoven–De Weger [PQCrypto 2019] and Laarhoven [MathCrypt 2019].
- We obtain better trade-offs for CVPP and its generalisations (strictly, in certain regimes), both with and without nearest neighbour searching, as a direct result of the above sharp bounds on the success probabilities.
- We show how to reduce the memory requirement of the slicer, and in particular the corresponding nearest neighbour data structures, using ideas similar to those proposed by Becker–Gama–Joux [Cryptology ePrint Archive, 2015]. Using 2^{0.185d+o(d)} memory, we can solve a single CVPP instance in 2^{0.264d+o(d)} time.
- We further improve on the per-instance time complexities in certain memory regimes, when we are given a sufficiently large batch of CVPP problem instances for the same lattice. Using [...] memory, we can heuristically solve CVPP instances in [...] amortized time, for batches of size at least [...].
Our random walk model for analysing arbitrary-step transition probabilities in complex step-wise algorithms may be of independent interest, both for deriving analytic bounds through convexity arguments, and for computing optimal paths numerically with a shortest path algorithm. As a side result, we apply the same random walk model to graph-based nearest neighbour searching, where we improve upon results of Laarhoven [SOCG 2018] by deriving sharp bounds on the success probability of the corresponding greedy search procedure.
- Published
- 2020
- Full Text
- View/download PDF