344 results for "Universal hashing"
Search Results
2. EliMAC: Speeding Up LightMAC by around 20%
- Author
-
Christoph Dobraunig, Bart Mennink, and Samuel Neves
- Subjects
universal hashing, MAC, EliHash, EliMAC, length independence, Computer engineering. Computer hardware (TK7885-7895)
- Abstract
Universal hash functions play a prominent role in the design of message authentication codes and the like. Whereas it is known how to build highly efficient sequential universal hash functions, parallel non-algebraic universal hash function designs are always built on top of a PRP. In such a case, one employs a relatively strong primitive to obtain a function with a relatively weak security model. In this work, we present EliHash, a construction of a parallel universal hash function from non-compressing universal hash functions, and we back it up with supporting security analysis. We use this construction to design EliMAC, a message authentication code similar to LightMAC. We consider a heuristic instantiation of EliMAC with round-reduced AES, and argue that this instantiation of EliMAC is much more efficient than LightMAC: it is around 21% faster, and it additionally allows for precomputation of the keys, albeit under a stronger assumption on the AES primitive than in LightMAC. These observations are backed up with an implementation of our scheme.
- Published
- 2023
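For context, the LightMAC structure that EliMAC speeds up can be sketched in a few lines (a toy illustration, not the paper's implementation: `blake2b` stands in for the AES-based PRF, and the block size, key handling, and function names are illustrative assumptions):

```python
# Toy sketch of the LightMAC-style structure:
#   tag = F_K2( XOR_i F_K1(counter_i || block_i) )
# The F_K1 calls are independent, which is what makes the MAC parallelizable.
import hashlib

BLOCK = 8  # bytes of message consumed per PRF call (toy parameter)

def prf(key: bytes, x: bytes) -> bytes:
    # blake2b used as a stand-in PRF; the real schemes use a block cipher
    return hashlib.blake2b(x, key=key, digest_size=16).digest()

def lightmac_style_tag(k1: bytes, k2: bytes, msg: bytes) -> bytes:
    acc = bytes(16)
    blocks = [msg[i:i + BLOCK] for i in range(0, len(msg), BLOCK)]
    for ctr, blk in enumerate(blocks):
        out = prf(k1, ctr.to_bytes(4, "big") + blk)  # parallelizable calls
        acc = bytes(a ^ b for a, b in zip(acc, out))
    return prf(k2, acc)  # finalization under an independent key
```

The counter prepended to each block is what gives LightMAC its length-independent security bound; EliMAC keeps the parallel shape but replaces the full PRP calls with cheaper non-compressing hashing.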
3. HalftimeHash: Modern Hashing Without 64-Bit Multipliers or Finite Fields
- Author
-
Apple, Jim, Lubiw, Anna, editor, and Salavatipour, Mohammad, editor
- Published
- 2021
4. Quantum key distribution using universal hash functions over finite fields.
- Author
-
Bibak, Khodakhast
- Subjects
-
FINITE fields, QUANTUM communication, PARALLEL programming, DATA structures, DATA security, BINARY codes
- Abstract
Universal hash functions are among the most important tools used in a quantum key distribution (QKD) network; in particular, (almost) strongly universal hash functions are used in at least three steps of QKD: error correction, privacy amplification, and authentication. They have also recently been used in several other quantum communication protocols, such as quantum secret sharing (QSS), and they have many other important applications ranging from information security to data structures and parallel computing. Recently, Bibak et al. [Quantum Inf. Comput., 2021] introduced the quadratic hash, which gives a much better collision bound than the well-known polynomial hash. In this paper, we define three new universal hash function families which strongly generalize all the previous families and have several advantages over them. Then, using highly influential and pioneering results of Schmidt and of Weil, we show that these new families are (almost) Δ-universal, and they can then be easily converted to (almost) strongly universal families. This makes them useful for applications in QKD and many other areas.
- Published
- 2022
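The classical polynomial hash that the abstract's new families improve upon can be sketched as follows (a generic textbook construction, not the paper's families; the function name and modulus are illustrative choices):

```python
# Polynomial hash over a prime field: h_k(m) = sum_i m_i * k^(L-1-i) mod p.
# Two distinct L-block messages collide exactly when the random key k is a
# root of their degree-(L-1) difference polynomial, so the collision
# probability over a random key is at most (L-1)/p.
P = 2**61 - 1  # a Mersenne prime; a common toy modulus choice

def poly_hash(key: int, blocks: list[int], p: int = P) -> int:
    acc = 0
    for b in blocks:  # Horner evaluation
        acc = (acc * key + b) % p
    return acc
```

The quadratic hash mentioned in the abstract tightens this bound by roughly halving the degree of the difference polynomial; the sketch above only shows the baseline being improved.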
5. Parallelizable MACs Based on the Sum of PRPs with Security Beyond the Birthday Bound
- Author
-
Moch, Alexander, List, Eik, Deng, Robert H., editor, Gauthier-Umaña, Valérie, editor, and Ochoa, Martín, editor
- Published
- 2019
6. Efficient Unconditionally Secure Signatures Using Universal Hashing
- Author
-
Amiri, Ryan, Abidin, Aysajan, Wallden, Petros, Andersson, Erika, Preneel, Bart, editor, and Vercauteren, Frederik, editor
- Published
- 2018
7. Quantum key distribution with PRF(Hash, Nonce) achieves everlasting security.
- Author
-
Bibak, Khodakhast and Ritchie, Robert
- Subjects
-
INFORMATION-theoretic security, SECURITY management
- Abstract
Peev et al. (Int J Quantum Inf 03:225–231, 2005) introduced a key-efficient two-step hash function for authentication in quantum key distribution (QKD). They suggested using a publicly known hash function as part of this scheme. Improving on this, Pacher et al. (Quantum Inf Process 15:327–362, 2016) suggested a method to restore information-theoretic security (ITS) by using almost universal hash functions instead of publicly known hash functions. While their scheme is a key-efficient almost-strongly-universal (ASU) family, like any other ASU family it only provides a one-time MAC. Here, we propose the use of a MAC paradigm called PRF(Hash, Nonce) for authentication in QKD. This MAC has several advantages that make it well suited to QKD. In particular, unlike the above constructions, it is a many-time MAC and is also more key-efficient. In fact, PRF(Hash, Nonce) is even more key-efficient than the Wegman–Carter paradigm, the most widely used MAC scheme for authentication in QKD. Furthermore, it provides everlasting security: if authentication remains unbroken during the execution of QKD, then the resulting keys retain ITS, which guarantees that the adversary cannot gain any new information on the keys even with unlimited computational power.
- Published
- 2021
8. EliMAC: Speeding Up LightMAC by around 20%
- Author
-
Dobraunig, Christoph, Mennink, Bart, and Neves, Samuel
- Subjects
MAC, EliHash, universal hashing, length independence, EliMAC
- Abstract
Universal hash functions play a prominent role in the design of message authentication codes and the like. Whereas it is known how to build highly efficient sequential universal hash functions, parallel non-algebraic universal hash function designs are always built on top of a PRP. In such a case, one employs a relatively strong primitive to obtain a function with a relatively weak security model. In this work, we present EliHash, a construction of a parallel universal hash function from non-compressing universal hash functions, and we back it up with supporting security analysis. We use this construction to design EliMAC, a message authentication code similar to LightMAC. We consider a heuristic instantiation of EliMAC with round-reduced AES, and argue that this instantiation of EliMAC is much more efficient than LightMAC: it is around 21% faster, and it additionally allows for precomputation of the keys, albeit under a stronger assumption on the AES primitive than in LightMAC. These observations are backed up with an implementation of our scheme.
- Published
- 2023
9. Generation of k-wise independent random variables with small randomness.
- Author
-
Achiha, Taku, Sugita, Hiroshi, Tonohiro, Kenta, and Yamamoto, Yuto
- Subjects
-
MONTE Carlo method, INDEPENDENT variables, RANDOM numbers, REPRODUCTION
- Abstract
A quick method for generating k-wise independent, uniformly distributed m-bit random variables with small randomness is proposed, with applications to the Monte Carlo method.
- Published
- 2019
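The standard small-randomness construction behind k-wise independence (a general textbook sketch under toy parameters, not necessarily the authors' specific method) evaluates a random degree-(k−1) polynomial over a prime field: k random coefficients yield an arbitrarily long stream of k-wise independent values.

```python
# k random coefficients define p(x) = sum_j coeffs[j] * x^j over Z_PRIME;
# the values p(0), p(1), p(2), ... are k-wise independent.
PRIME = 101  # toy field size

def kwise_sample(coeffs: list[int], i: int) -> int:
    # Horner evaluation of the polynomial at point i
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * i + c) % PRIME
    return acc
```

For k = 2 the guarantee is easy to see: for any two distinct points, the map from the two coefficients to the two output values is a bijection, so every output pair is equally likely.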
10. Universal Hash Based Built-In Secure Transport in FlexE Over WDM Networks
- Author
-
Jiabin Cui, Yuefeng Ji, and Pengfei Zhu
- Subjects
Ethernet, Radio access network, Network security, Universal hashing, Ant colony optimization algorithms, Hash function, Overhead (computing), Networking hardware, Computer networks
- Abstract
Flexible Ethernet (FlexE) over WDM transport is one of the key technologies for achieving the high-speed next-generation radio access network (NG-RAN). Security issues, such as optical-fiber wiretapping, are becoming increasingly prominent, yet secure strategies at the optical or electrical layer alone cannot deliver both high-rate and low-overhead transport. To this end, in this paper we propose a cross-layer secure transport strategy for end-to-end FlexE over WDM networks, which leverages universal-hashing-based FlexE data block permutation and multiple parallel fiber transmission to achieve anti-eavesdropping. Different levels of attack capability are considered when measuring the impact on network security and resource utilization. Furthermore, the trade-off between efficient resource utilization and a higher level of transport security is explored by building an integer linear programming (ILP) model. To cope with the computational complexity of the ILP model at large scale, a Security and resource-Efficiency-oriented Ant Colony optimization algorithm (SEAC) is proposed. Simulation results show that the cross-layer defense strategy is effective against intruders with different levels of attack capability, and that SEAC achieves near-optimal performance and outperforms the traditional first-fit algorithm.
- Published
- 2021
11. Symmetric Blind Information Reconciliation and Hash-function-based Verification for Quantum Key Distribution.
- Author
-
Fedorov, A. K., Kiktenko, E. O., and Trushechkin, A. S.
- Abstract
We consider an information reconciliation protocol for quantum key distribution (QKD). To reduce the error rate, we suggest a method based on symmetric blind information reconciliation for low-density parity-check (LDPC) codes. We develop a subsequent verification protocol using ε-universal hash functions, which allows the parties to verify, with a certain probability, that their keys are identical.
- Published
- 2018
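The hash-based verification step can be sketched generically (illustrative parameters, not the paper's exact ε-universal family): after reconciliation, both parties publish a short polynomial hash of their corrected key under a jointly chosen public random seed and compare the tags.

```python
# Polynomial hash of the key bits under a fresh public random seed.
# Two different keys share a tag for fewer than len(key_bits) seeds, so a
# random seed catches a residual error except with probability ~ L/P.
P = 2**61 - 1

def verify_tag(seed: int, key_bits: list[int]) -> int:
    acc = 0
    for bit in key_bits:
        acc = (acc * seed + bit) % P
    return acc
```

Matching tags mean the keys are identical except with a small, explicitly bounded failure probability, which is the "certain probability" the abstract refers to.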
12. On an Almost-Universal Hash Function Family with Applications to Authentication and Secrecy Codes.
- Author
-
Bibak, Khodakhast, Kapron, Bruce M., Srinivasan, Venkatesh, and Tóth, László
- Subjects
-
HASHING, GEOMETRIC congruences, COMPUTER access control, PROBLEM solving, INTEGERS
- Abstract
Universal hashing, discovered by Carter and Wegman in 1979, has many important applications in computer science. MMH, which was shown to be Δ-universal by Halevi and Krawczyk in 1997, is a well-known universal hash function family. We introduce a variant of MMH, which we call GRDH, where we use an arbitrary integer n > 1 instead of a prime p and let the keys x = ⟨x_1, …, x_k⟩ satisfy the conditions gcd(x_i, n) = t_i (1 ≤ i ≤ k), where t_1, …, t_k are given positive divisors of n. Then, by connecting the universal hashing problem to the number of solutions of restricted linear congruences, we prove that the family GRDH is an ε-almost-Δ-universal family of hash functions for some ε < 1 if and only if n is odd and t_1 = ⋯ = t_k = 1. Furthermore, if these conditions are satisfied, then GRDH is 1/(p−1)-almost-Δ-universal, where p is the smallest prime divisor of n. Finally, as an application of our results, we propose an authentication code with a secrecy scheme which strongly generalizes the scheme studied by Alomair et al. [J. Math. Cryptol. 4 (2010) 121-148] and [J.UCS 15 (2009) 2937-2956].
- Published
- 2018
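For context, the MMH family that GRDH generalizes is an inner-product hash modulo a prime; a toy sketch (the modulus here is an illustrative choice, and GRDH's composite-modulus refinement is omitted):

```python
# MMH-style inner-product hash: h_x(m) = <m, x> mod p over a prime field.
# Over a random key, the difference h_x(m) - h_x(m') is uniform for any
# fixed m != m', which is exactly the Delta-universal property.
P = 2**31 - 1  # toy prime modulus

def mmh(key: list[int], msg: list[int], p: int = P) -> int:
    return sum(x * m for x, m in zip(key, msg)) % p
```

GRDH asks what survives of this guarantee when the prime p is replaced by an arbitrary composite n and the key coordinates are constrained by gcd conditions, which is where the restricted linear congruences enter.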
13. DOTMIX-Pro: faster and more efficient variants of DOTMIX for dynamic-multithreading platforms
- Author
-
Khodakhast Bibak and Robert Ritchie
- Subjects
Universal hashing, Model of computation, Concurrency, Hash function, Parallel computing, Cilk, Debugging, Multithreading, Compilers, Software
- Abstract
Many concurrency platforms offer a processor-oblivious model of computation, where the scheduler dynamically distributes work across threads. While this is convenient, it introduces non-determinism at runtime, which complicates debugging, because a program may have a different output on each run. Leiserson et al. [PPoPP ’12] persuaded Intel to modify its C/C++ compiler, which provided the Cilk Plus concurrency platform, to include a feature called pedigrees, which enables determinism by uniquely identifying strands with low overhead. They used pedigrees to design a DPRNG called DOTMIX, which hashes a pedigree and then mixes the result into a random number for a given strand. Improving the efficiency of DOTMIX by using a faster hash function is an open problem put forth by Leiserson et al. [PPoPP ’12]. We address this problem by introducing DOTMIX-Pro, which replaces the compression function used in DOTMIX with a faster universal hash function family called Square Hash, due to Etzel et al. [CRYPTO ’99], and some of its variants, along with other optimizations; the result can be up to 31% faster. Additionally, we introduce a generalization of Square Hash which works with arbitrary moduli.
- Published
- 2021
14. Regular and almost universal hashing: an efficient implementation.
- Author
-
Ivanchykhin, Dmytro, Ignatchenko, Sergey, and Lemire, Daniel
- Subjects
HASHING, DATA structures, PROBABILITY theory, ARRAY processors, MATHEMATICAL optimization
- Abstract
Random hashing can provide guarantees regarding the performance of data structures such as hash tables, even in an adversarial setting. Many existing families of hash functions are universal: given two data objects, the probability that they have the same hash value is low when we pick hash functions at random. However, universality fails to ensure that all hash functions are well behaved. We might further require regularity: when picking data objects at random, they should have a low probability of having the same hash value, for any fixed hash function. We present an efficient implementation of a family of non-cryptographic hash functions (PM+) offering good running times, good memory usage, and distinctive theoretical guarantees: almost universality and component-wise regularity. On a variety of platforms, our implementations are comparable with the state of the art in performance. On recent Intel processors, PM+ achieves a speed of 4.7 bytes per cycle for 32-bit outputs and 3.3 bytes per cycle for 64-bit outputs. We review vectorization through Single Instruction, Multiple Data (SIMD) instructions (e.g., AVX2) and optimizations for superscalar execution.
- Published
- 2017
15. Development of a modified UMAC algorithm based on cryptocode constructions
- Author
-
Alla Gavrilova, Ihor Volkov, Yuliia Kozhedub, Roman Korolev, Oleksandr Lezik, Volodymyr Medvediev, Oleksandr Milov, Bogdan Tomashevsky, Andrii Trystan, and Oksana Chekunova
- Subjects
modified elliptic codes, hash function, UMAC algorithm, authenticity, UMAC, MV2 algorithm (universal damage mechanism), message authentication code, crypto-code constructions, post-quantum cryptography, universal hashing, hashing algorithm, elliptic codes, damaged codes, McEliece cryptosystem, symmetric-key algorithm
- Abstract
The development of computer technology has set the vector for the expansion of services based on the Internet and “G” technologies. The main requirements for modern services in the banking sector are security and reliability. At the same time, security is understood not only as ensuring the confidentiality and integrity of transactions, but also their authenticity. However, in the post-quantum period, US NIST specialists question the durability of modern means of providing basic security services based on symmetric and asymmetric cryptography algorithms, and growing computing resources allow attackers to combine modern threats. Thus, there is a need to search for new algorithms, and/or to modify known ones, for generating MACs (message authentication codes). In addition, the growth of services increases the amount of information that needs to be authenticated. Among the well-known hash algorithms, the hash functions of universal hashing stand out, since they allow one to determine in advance the number of collisions and their uniform distribution over the entire set of hash codes. We consider possibilities of modifying the cascade hashing algorithm UMAC (message authentication code based on universal hashing, universal MAC) using the McEliece crypto-code construction on algebro-geometric codes: elliptic codes (EC), modified elliptic codes (MEC), and damaged codes (DC). This approach preserves the uniqueness property, in contrast to the classical UMAC scheme based on a block symmetric cipher (AES). The presented algorithms for evaluating the universality and strict universality of hash codes make it possible to assess the security of the proposed hashing constructions based on universal hash functions, taking into account the preservation of the universality property.
- Published
- 2020
16. Differential properties of authenticated encryption mode based on universal hash function (XTSMAC)
- Author
-
Alexey Yu. Nesterenko
- Subjects
Authenticated encryption, Universal hashing, Hash function, Encryption, Redundancy (engineering), Cryptanalysis, Block cipher
- Abstract
A description of an authenticated encryption with associated data (AEAD) block cipher mode, called XTSMAC, is presented. The results of cryptanalysis, including the non-applicability of a new attack based on differential properties of non-linear permutations, are also presented. A comparison with other AEAD modes and the results of a practical software implementation on Intel CPUs conclude the article.
- Published
- 2021
17. Faster 64-bit universal hashing using carry-less multiplications.
- Author
-
Lemire, Daniel and Kaser, Owen
- Abstract
Intel and AMD support the carry-less multiplication (CLMUL) instruction set in their x64 processors. We use CLMUL to implement an almost universal 64-bit hash family (CLHASH). We compare this new family with what might be the fastest almost universal family on x64 processors (VHASH). We find that CLHASH is at least 60% faster. We also compare CLHASH with a popular hash function designed for speed (Google's CityHash). We find that CLHASH is 40% faster than CityHash on inputs larger than 64 bytes and just as fast otherwise.
- Published
- 2016
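The CLMUL primitive that CLHASH builds on multiplies binary polynomials: partial products are combined with XOR, so no carries propagate between bit positions. A bit-level Python model of the instruction's semantics (a sketch for illustration; the hardware instruction operates on 64-bit lanes):

```python
def clmul(a: int, b: int) -> int:
    # multiply a and b as polynomials over GF(2): shift-and-XOR instead of
    # shift-and-add, which is what "carry-less" means
    acc = 0
    while b:
        if b & 1:
            acc ^= a
        a <<= 1
        b >>= 1
    return acc
```

For example, clmul(0b11, 0b11) gives 0b101, reflecting (x+1)² = x²+1 over GF(2); the cross terms cancel because 1+1 = 0. Hashing over GF(2^64) built on this operation avoids the carry chains that make integer multiplication comparatively expensive.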
18. Low-Complexity Secret Sharing Schemes Using Correlated Random Variables and Rate-Limited Public Communication
- Author
-
Remi A. Chou and Rumia Sultana
- Subjects
Universal hashing, Reliability (computer networking), Vector quantization, Cryptography, Secret sharing, Random variables
- Abstract
We consider secret sharing in which a dealer wants to share a secret with several participants such that predefined subsets of participants can reconstruct the secret and all other subsets of participants cannot learn any information about it. To this end, the dealer and the participants have access to samples of correlated random variables and a one-way (from the dealer to the participants), authenticated, public, and rate-limited communication channel. For this problem, we propose the first constructive and low-complexity coding scheme able to handle arbitrary access structures. Our construction relies on vector quantization coupled with distribution approximation via polar codes to handle the reliability constraints, followed by universal hashing to handle the security constraints. We stress that our coding scheme requires neither symmetry nor degradation assumptions on the correlated random variables, nor a pre-shared secret among the participants and dealer. Our result is also optimal in the special case of rate-unlimited public communication when all the participants are needed to reconstruct the secret.
- Published
- 2021
19. Generation of k-wise independent random variables with small randomness
- Author
-
Taku Achiha, Kenta Tonohiro, Hiroshi Sugita, and Yuto Yamamoto
- Subjects
Universal hashing, Monte Carlo method, Probability and statistics, Statistical physics, Random variables, Randomness
- Abstract
A quick method for generating k-wise independent, uniformly distributed m-bit random variables with small randomness is proposed, with applications to the Monte Carlo method.
- Published
- 2019
20. Reducing DRAM Refresh Rate Using Retention Time Aware Universal Hashing Redundancy Repair
- Author
-
Seon Wook Kim, Minseong Kim, Kyu Hyun Choi, and Jaeyung Jun
- Subjects
Dynamic random-access memory (DRAM), Universal hashing, Memory controller, Refresh rate, Scalability, Redundancy (engineering), Error detection and correction
- Abstract
As the device capacity of Dynamic Random Access Memory (DRAM) increases, the refresh operation becomes a significant contributor to the total power consumption and memory throughput of the device. To mitigate the costs of refresh, a multi-rate refresh technique that changes the refresh period based on the retention time of DRAM cells has been proposed. Unfortunately, multi-rate refresh has a scalability issue, because the additional storage and logic overhead on the memory controller grows with device capacity. In this article, we propose a novel redundancy repair technique that increases the refresh period of DRAM by using universal hashing. Our redundancy repair technique efficiently repairs both hard faults, which occur during the manufacturing process, and weak cells with short retention times, using the spare elements remaining after manufacturing. Our technique also addresses the Variable Retention Time problem by repairing weak cells at boot time, exploiting the Built-In Self-Repair (BISR) technique and Error Correction Codes. It outperforms conventional BISR redundancy repair with very little hardware overhead and ensures reliability with a longer refresh period across the entire system. In particular, our experimental results show that our BISR technique achieves a 100% repair rate at a 384 ms refresh period under a 1.0e-6 hard-fault BER configuration, and reduces refresh energy consumption by 83.9% compared to a 64 ms refresh and by 12% compared to the conventional multi-rate refresh technique on a state-of-the-art 4 Gb device.
- Published
- 2019
21. DETERMINATION OF WAYS OF SPOOFING RESISTANCE IMPROVEMENT OF WARNING SYSTEMS OF NGU SECURITY SYSTEMS AT NUCLEAR POWER PLANTS
- Author
-
O. Yu. Iohov, V. H. Maliuk, and I. I. Sydorenko
- Subjects
message authenticity, algebraic curves, hash functions, universal hashing, Guard (information security), Cryptography, Encryption, Information protection policy, Digital signature, Cipher
Ways of improving the spoofing resistance of warning signals of National Guard of Ukraine (NGU) guard systems at critical infrastructure facilities are investigated. The authenticity of messages with constructive MAC-code elements based on hash function families is analyzed. It is concluded that practical hashing schemes must include classes of hash functions with a large compression ratio for potentially very large volumes of data. Collision-resistance estimates are given for universal hashing schemes based on the best algebraic curves with many points and on maximal curves. An important part of the combat activity of units of the National Guard of Ukraine is the physical protection of important state critical infrastructure facilities. The development of modern information technologies leads to the emergence of new threats to the security of such facilities; in particular, the number and power of cyber-attacks motivated by the interests of individual states, organizations, and groups of people are growing. This determines the relevance of ensuring the confidentiality and integrity of messages exchanged between units of the National Guard of Ukraine and potentially dangerous objects through a warning system. To solve this problem, spoofing-resistant data encoding is used, which makes it possible to check whether the data have been changed by a third party. The likelihood that the data have been changed serves as a measure of the resistance of the cipher. Ensuring the spoofing resistance of data transmission systems is possible on the basis of solving an interconnected set of information protection tasks.
The quality of solving security problems is largely determined by the cryptographic algorithms used for encryption, digital signatures, hashing, and generation of authentication codes. A study of ways to increase the spoofing resistance of warning signals of the guard systems of the National Guard of Ukraine at critical infrastructure facilities was conducted. The authenticity of messages with constructive MAC-code elements based on hash function families is analyzed. It is concluded that practical hashing schemes should include classes of hash functions with a large compression ratio for potentially very large volumes of data. For these purposes, hash families based on long algebraic-geometric codes are of interest. An analysis of the general characteristics of algebraic-geometric codes showed that the best properties in terms of code distance and computational complexity are offered by extended Reed-Solomon codes, codes on Hermite curves, and codes on Suzuki curves. Collision-resistance estimates are presented for universal hashing schemes based on the best algebraic curves with many points and on maximal curves. The presented theory of universal hashing with algebraic-geometric Reed-Solomon codes and codes on Hermite curves shows the ability to provide the necessary bound on the probability of hash collisions. Since the linear growth of the collision probability for Reed-Solomon hashing limits the size of the hashed message, one way to improve the spoofing resistance of data transmission is to use codes on Hermite curves.
- Published
- 2019
22. On Weak Keys and Forgery Attacks Against Polynomial-Based MAC Schemes.
- Author
-
Procter, Gordon and Cid, Carlos
- Subjects
CYBERTERRORISM, MESSAGE passing (Computer science), POLYNOMIALS, COMPUTER security, COMPUTER software
- Abstract
Universal hash functions are commonly used primitives for fast and secure message authentication, in the form of message authentication codes or authenticated encryption with associated data schemes. These schemes are widely used and standardised, the best known being McGrew and Viega's Galois/Counter Mode (GCM). In this paper we identify some properties of hash functions based on polynomial evaluation that arise from the underlying algebraic structure. As a result we are able to describe a general forgery attack, of which Saarinen's cycling attack from FSE 2012 is a special case. Our attack removes the requirement for long messages and applies regardless of the field in which the hash function is evaluated. Furthermore, we provide a common description of all published attacks against GCM by showing that the existing attacks are the result of these algebraic properties of the polynomial-based hash function. We also greatly expand the number of known weak GCM keys and show that almost every subset of the keyspace is a weak key class. Finally, we demonstrate that these algebraic properties and the corresponding attacks are highly relevant to GCM/2+, a variant of GCM designed to increase efficiency in software.
- Published
- 2015
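The cycling weakness that this paper generalizes is easy to demonstrate in a toy field (parameters far smaller than GCM's GF(2^128), chosen only for illustration): if the hash key H has multiplicative order d, swapping message blocks whose positions differ by d leaves the polynomial hash, and hence the tag, unchanged.

```python
# GHASH-style polynomial evaluation hash in a tiny prime field.
P = 13  # toy field standing in for GF(2^128)
H = 3   # pow(3, 3, 13) == 1, so H has multiplicative order d = 3

def poly_eval_hash(key: int, blocks: list[int]) -> int:
    # h = sum_i blocks[i] * key^(i+1) mod P
    return sum(b * pow(key, i + 1, P) for i, b in enumerate(blocks)) % P

m1 = [7, 2, 9, 4]
m2 = [4, 2, 9, 7]  # blocks 0 and 3 swapped: exponents 1 and 4 = 1 + d
```

Since H¹ = H⁴ when H has order 3, both messages evaluate to the same hash under this key, so a forger who guesses (or knows) that the key lies in such a weak class can swap blocks without invalidating the tag.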
23. Secret key extraction in direct reconciliation CV-QKD systems
- Author
-
Margarida Almeida, Margarida Facão, Nelson J. Muga, Armando N. Pinto, and Nuno A. Silva
- Subjects
Quantum technology, Homodyne detection, Universal hashing, Estimation theory, Low-density parity-check codes, Quantum key distribution, Toeplitz matrix, Parity bit
- Abstract
Quantum Key Distribution (QKD) is currently the dominant practical application in the field of quantum technologies. Nevertheless, extracting secure keys from a physical implementation of a QKD system remains a complicated procedure. In this paper, post-processing algorithms were implemented to extract secret keys from a Continuous-Variable QKD (CV-QKD) system. The Discrete-Modulated Four-State CV-QKD protocol was considered owing to the lower complexity of its reconciliation procedure. The secret keys were extracted using Maximum-Likelihood Estimators (MLEs) for parameter estimation; iterative Low-Density Parity-Check (LDPC) codes, complemented with the Sum-Product Algorithm and iterative reconciliation of error groups, for information reconciliation; and the Toeplitz matrix as the universal hash function for privacy amplification. The post-processing algorithms were applied to extract secret keys from a simulated implementation of the Discrete-Modulated Four-State CV-QKD protocol with double homodyne detection.
- Published
- 2021
24. HalftimeHash: Modern Hashing Without 64-Bit Multipliers or Finite Fields
- Author
-
Jim Apple
- Subjects
Finite fields, Universal hashing, Hash function, Randomized algorithms
- Abstract
HalftimeHash is a new algorithm for hashing long strings. The goals are few collisions (different inputs that produce identical output hash values) and high performance.
- Published
- 2021
25. Revisiting Construction of Online Cipher in Hash-ECB-Hash Structure
- Author
-
Peng Wang, Dingfeng Ye, Gang Liu, and Rong Wei
- Subjects
Authenticated encryption, Cipher, Universal hashing, Hash function, Cryptography, Block cipher
- Abstract
Online ciphers are an important primitive in many cryptographic schemes, such as authenticated encryption schemes. Considering performance and security, the Hash-ECB-Hash (HEH) structure provides a potential way to construct parallelizable, CCA-secure online ciphers. In this paper, we start from the online cipher POE, which is the only instantiation of the Hash-ECB-Hash structure in the literature. However, the AXU property of the hash function in the hash layer cannot guarantee the security of POE. We therefore propose a new concept of online universal hash function (OUHF) for the hash layer and prove that the Hash-ECB-Hash structure is CCA secure if the hash layer is an online almost-universal (OAU) hash function and the underlying block cipher is CCA secure. We also give several concrete constructions of OAU hash functions.
- Published
- 2021
26. Single-Trace Side-Channel Analysis on Polynomial-Based MAC Schemes
- Author
-
Yuto Nakano, Kazuhide Fukushima, Rei Ueno, Naofumi Homma, and Shinsaku Kiyomoto
- Subjects
Authenticated encryption ,Polynomial ,Computer science ,Universal hashing ,Hash function ,02 engineering and technology ,020202 computer hardware & architecture ,Power analysis ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Multiplication ,Message authentication code ,Side channel attack ,Algorithm - Abstract
This paper presents the first side-channel analysis (SCA) on polynomial-based message authentication code (MAC) schemes which is applicable to Poly1305. Typical SCAs (e.g., simple power analysis (SPA) and differential power analysis (DPA)) and conventional attacks on GCM/GMAC that focus on the first multiplication result in the universal hashing (i.e., polynomial evaluation) cannot be applied to Poly1305 owing to one-time keys and the structure of prime-field multiplication. On the other hand, the proposed attack retrieves the hash key from a single side-channel trace (e.g., a power/EM trace given by one execution) with a non-negligible probability and is applicable to polynomial-based MAC schemes implemented on an 8-bit micro-controller. The proposed attack allows the attacker to forge the authentication tag even if the hash key is a one-time key. The basic idea of the proposed attack is to exploit the addition in polynomial-based MAC schemes. Since the output or one input of the addition in these MAC schemes is known, we can efficiently estimate the unknown operands of addition, and then retrieve the hash key by the polynomial factorizations with the estimated candidates. This study also shows a cost-effective countermeasure for ChaCha20-Poly1305 using a combination of a lightweight masked Poly1305 and first-order mask conversion from Boolean to arithmetic.
- Published
- 2021
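The addition the single-trace attack targets is visible in a toy polynomial-evaluation MAC in the style of Poly1305 (illustrative only; real Poly1305 clamps r, encodes blocks with a padding bit, and adds s modulo 2^128):

```python
P = (1 << 130) - 5   # the prime used by Poly1305

def poly_mac(blocks, r, s):
    """Evaluate the key polynomial by Horner's rule:
    tag = (((m1*r + m2)*r + ...)*r + mn)*r ... + s  mod p."""
    acc = 0
    for m in blocks:
        acc = (acc + m) * r % P   # this addition is the operation the
                                  # single-trace attack estimates
    return (acc + s) % P

tag = poly_mac([1, 2, 3], r=5, s=7)
print(tag)  # 197
```

Because one operand of each addition (the message block) is known, leakage on the sum narrows the accumulator candidates, and the hash key r then falls to polynomial factorization over the candidates, as the abstract describes.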
27. A Lightweight and Efficient Physical Layer Key Generation Mechanism for MANETs
- Author
-
Wenjie Du, Weijiang Liu, Ruiwen Wu, Furui Zhan, and Xiaomiao Gao
- Subjects
Key generation ,Computer science ,Universal hashing ,business.industry ,05 social sciences ,Physical layer ,050801 communication & media studies ,020206 networking & telecommunications ,Data_CODINGANDINFORMATIONTHEORY ,02 engineering and technology ,Shared secret ,0508 media and communications ,Channel state information ,0202 electrical engineering, electronic engineering, information engineering ,Bit error rate ,Low-density parity-check code ,business ,Communication channel ,Computer network - Abstract
Due to the reciprocity of wireless channels, the communicating parties can directly extract a shared key from the channel. This approach has been verified through several schemes. However, in real situations, the channel sampling of legitimate transceivers may be impacted by noise and other interference, so the channel states obtained by the initiator and the responder can differ noticeably. The efficiency, and even the availability, of physical layer key generation are thus reduced. In this paper, we propose a lightweight and efficient physical layer key generation scheme that extracts shared secret keys from channel state information (CSI). To improve the key generation process, the discrete cosine transform (DCT) is employed to reduce the differences between the channel states of the legitimate transceivers. These outputs are then quantized and encoded through a multi-bit adaptive quantization (MAQ) quantizer and Gray code to generate a binary bit sequence, which can greatly reduce the bit error rate. Moreover, a low-density parity-check (LDPC) code and universal hash functions are used to achieve information reconciliation and privacy amplification. By adding preprocessing steps to the key generation process and using the rich information of CSI, the security of communications can be increased while improving the key generation rate. To evaluate this scheme, a number of experiments in various real environments were conducted. The experimental results show that the proposed scheme can efficiently generate shared secret keys for nodes and protect their communication.
- Published
- 2020
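The multi-bit quantization with Gray coding described above can be sketched as follows. The rank-based thresholds here are a hypothetical stand-in for the paper's MAQ quantizer; the point is that Gray coding makes adjacent quantization levels differ in a single bit, which bounds the error caused by small channel-state mismatches between the transceivers.

```python
def gray_code(v):
    """Binary-reflected Gray code of a non-negative integer."""
    return v ^ (v >> 1)

def quantize(samples, bits=2):
    """Map each sample to one of 2**bits equal-population levels by rank
    (adaptive thresholds), then Gray-encode the level index.
    Assumes distinct sample values for simplicity."""
    levels = 1 << bits
    ranked = sorted(samples)
    out = []
    for s in samples:
        idx = min(levels - 1, ranked.index(s) * levels // len(samples))
        g = gray_code(idx)
        out.extend((g >> b) & 1 for b in reversed(range(bits)))
    return out

key_bits = quantize([0.1, 0.9, 0.4, 0.7, 0.2, 0.5], bits=2)
print(key_bits)
```

Each sample contributes `bits` key bits, so mildly perturbed CSI samples that shift by one level flip only one bit, keeping the pre-reconciliation bit error rate low.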
28. A Built-in Hash Permutation Assisted Cross-layer Secure Transport in End-to-End FlexE over WDM Networks
- Author
-
Yuefeng Ji, Jiabin Cui, and Pengfei Zhu
- Subjects
Ethernet ,End-to-end principle ,Universal hashing ,Network security ,business.industry ,Computer science ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Hash function ,Physical layer ,Eavesdropping ,business ,Encryption ,Computer network - Abstract
With the growth of traffic carrying different deterministic transport and isolation requirements in radio access networks (RAN), Flexible Ethernet (FlexE) over wavelength division multiplexing (WDM) networks is a candidate for next-generation RAN transport, and the security issues in RAN transport, especially eavesdropping attacks in the physical layer, become much more prominent. Therefore, in this work, we put forward a cross-layer design for security enhancement that leverages universal-hashing-based FlexE data block permutation and multiple parallel fibre transmission for anti-eavesdropping in end-to-end FlexE over WDM networks. Different levels of attack ability are considered to measure the impact on network security and resource utilization. Furthermore, the trade-off between efficient resource utilization and the guarantee of a higher level of security is also explored. Numerical results demonstrate that the cross-layer defense strategies are effective against intruders with different levels of attack ability.
- Published
- 2020
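The abstract does not specify how the hash-based block permutation is constructed, so the sketch below stands in with a keyed deterministic shuffle (hypothetical; the paper derives its permutation from universal hashing). The property being illustrated is only that a shared key yields an invertible scrambling of FlexE data blocks:

```python
import hashlib
import random

def permute_blocks(blocks, key):
    """Keyed, invertible permutation of data blocks: the key seeds a
    deterministic shuffle, so a receiver holding the same key can
    recompute and invert the block order."""
    order = list(range(len(blocks)))
    seed = int.from_bytes(hashlib.sha256(key).digest(), "big")
    random.Random(seed).shuffle(order)
    return [blocks[i] for i in order], order

def unpermute_blocks(permuted, order):
    """Invert the permutation given the same block order."""
    out = [None] * len(permuted)
    for pos, i in enumerate(order):
        out[i] = permuted[pos]
    return out

blocks = [b"blk%d" % i for i in range(8)]
scrambled, order = permute_blocks(blocks, key=b"shared-key")
restored = unpermute_blocks(scrambled, order)
```

An eavesdropper tapping one fibre sees blocks out of order and, with the parallel-fibre splitting described above, only a subset of them.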
29. Applications in Universal Hashing and Authentication with Secrecy
- Author
-
Khodakhast Bibak
- Subjects
Authentication ,Universal hashing ,Computer science ,Secrecy ,Computer security ,computer.software_genre ,computer - Published
- 2020
30. Sparse Hashing for Scalable Approximate Model Counting: Theory and Practice
- Author
-
Kuldeep S. Meel and Sundararaman Akshay
- Subjects
FOS: Computer and information sciences ,050101 languages & linguistics ,Computer Science - Machine Learning ,Computer Science - Logic in Computer Science ,Theoretical computer science ,Universal hashing ,Computer science ,05 social sciences ,Hash function ,Relaxation (iterative method) ,02 engineering and technology ,Oracle ,Machine Learning (cs.LG) ,Logic in Computer Science (cs.LO) ,Variable (computer science) ,Scalability ,Computer Science - Data Structures and Algorithms ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,Data Structures and Algorithms (cs.DS) ,Hypercube ,Boolean satisfiability problem - Abstract
Given a CNF formula F on n variables, the problem of model counting, or #SAT, is to compute the number of satisfying assignments of F. Model counting is a fundamental but hard problem in computer science with varied applications. Recent years have witnessed a surge of effort towards developing efficient algorithmic techniques that combine classical 2-universal hashing with the remarkable progress in SAT solving over the past decade. These techniques augment the CNF formula F with random XOR constraints and invoke an NP oracle repeatedly on the resultant CNF-XOR formulas. In practice, the calls to the NP oracle are replaced by calls to a SAT solver, whose runtime performance is adversely affected by the size of the XOR constraints. The standard construction of 2-universal hash functions chooses every variable with probability p = 1/2, leading to XOR constraints of size n/2 in expectation. Consequently, the challenge is to design sparse hash functions, where variables can be chosen with smaller probability, leading to smaller XOR constraints. In this paper, we address this challenge from theoretical and practical perspectives. First, we formalize a relaxation of universal hashing, called concentrated hashing, and establish a novel and beautiful connection between concentration measures of these hash functions and isoperimetric inequalities on Boolean hypercubes. This allows us to obtain tight bounds on the variance and dispersion index and to show that p = O(log(m)/m) suffices for the design of sparse hash functions from {0, 1}^n to {0, 1}^m. We then use sparse hash functions belonging to this concentrated hash family to develop new approximate counting algorithms. A comprehensive experimental evaluation of our algorithm on 1893 benchmarks demonstrates that the use of sparse hash functions can lead to significant speedups. (Full version of the paper accepted at LICS 2020.)
- Published
- 2020
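The XOR-hashing loop described above can be sketched end-to-end on a toy formula, with brute force standing in for the SAT oracle and the sparsity parameter p exposed. This is illustrative only; real counters such as ApproxMC search for the right number of constraints m adaptively and use far stronger estimators.

```python
import itertools
import random

def approx_count(is_model, n, m, p, trials=9, seed=0):
    """Hashing-based approximate model counting sketch.

    Each of m XOR constraints includes variable i with probability p
    (sparse hashing) and has a random parity target.  The count is
    estimated as |models in the random cell| * 2**m, taking the median
    over independent trials.  The 'NP oracle' here is brute force over
    all 2**n assignments.
    """
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        xors = [([i for i in range(n) if rng.random() < p], rng.randint(0, 1))
                for _ in range(m)]
        cell = 0
        for bits in itertools.product((0, 1), repeat=n):
            if is_model(bits) and all(sum(bits[i] for i in vs) % 2 == b
                                      for vs, b in xors):
                cell += 1
        estimates.append(cell * 2 ** m)
    estimates.sort()
    return estimates[len(estimates) // 2]

# "Exactly one of 8 variables is true" has exactly 8 models.
est = approx_count(lambda bits: sum(bits) == 1, n=8, m=2, p=0.5, trials=21)
print(est)  # close to the true count of 8
```

Lowering p below 1/2 shortens the XOR constraints the solver sees, which is exactly the speedup the concentrated-hashing analysis above justifies.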
31. SQUAREMIX: A Faster Pseudorandom Number Generator for Dynamic-Multithreading Platforms
- Author
-
Robert Ritchie and Khodakhast Bibak
- Subjects
Pseudorandom number generator ,Universal hashing ,Computer science ,Concurrency ,Model of computation ,Multithreading ,Hash function ,Parallel computing ,Compiler ,Cilk ,computer.software_genre ,computer ,computer.programming_language - Abstract
Many concurrency platforms offer a processor oblivious model of computation, where the scheduler dynamically distributes work across threads. While this is convenient, it introduces non-determinism at runtime, which complicates debugging, since it precludes repeatability. Leiserson et al. [PPoPP '12] persuaded Intel to modify its C/C++ compiler, which provided the Cilk Plus concurrency platform, to include a feature called pedigrees, which enables determinism by uniquely identifying strands with low overhead. They used pedigrees to design a DPRNG called DOTMIX, which hashes a pedigree, then mixes the result into a random number for a given strand. Improving the efficiency of DOTMIX by using a faster hash function is an open problem put forth by Leiserson et al. [PPoPP '12]. We address this problem and improve the speed of the algorithm roughly by a factor of two, without sacrificing any statistical quality. Specifically, we replace the compression function used in DOTMIX with a faster universal hash function family due to Etzel et al. [CRYPTO '99] called Square Hash.
- Published
- 2020
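The Square Hash family of Etzel et al. mentioned above is simple to state: a sum of squared key-plus-message words modulo a prime. The sketch below applies it to a DOTMIX-style pedigree (compression step only; DOTMIX additionally mixes the result, which is omitted here, and the pedigree and key values are made up for illustration):

```python
P = (1 << 61) - 1   # a Mersenne prime, convenient for fast reduction

def square_hash(key, msg_words):
    """Square Hash (Etzel-Patel-Ramzan): sum of (m_i + k_i)^2 mod p.

    Squaring replaces the word-by-word multiplication of MMH-style
    families, which is what makes it cheap on many platforms.
    """
    assert len(key) == len(msg_words)
    return sum((m + k) ** 2 for m, k in zip(msg_words, key)) % P

pedigree = [3, 1, 4, 1, 5]    # ancestry of a strand in the spawn tree
key = [11, 22, 33, 44, 55]    # per-level random key words
value = square_hash(key, pedigree)
print(value)  # 7719
```

Because a strand's pedigree is deterministic regardless of scheduling, hashing it yields repeatable per-strand random numbers, which is the property DOTMIX and SQUAREMIX both rely on.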
32. Towards closing the security gap of Tweak-aNd-Tweak (TNT)
- Author
-
Eik List, Jian Guo, Ling Song, Chun Guo, School of Physical and Mathematical Sciences, and International Conference on the Theory and Application of Cryptology and Information Security
- Subjects
Theoretical computer science ,Universal hashing ,Computer science ,Science ,Process (computing) ,0102 computer and information sciences ,02 engineering and technology ,Reuse ,01 natural sciences ,law.invention ,Cryptanalysis ,010201 computation theory & mathematics ,law ,Data_GENERAL ,0202 electrical engineering, electronic engineering, information engineering ,Block Cipher ,020201 artificial intelligence & image processing ,Closing (morphology) ,Block cipher - Abstract
Tweakable block ciphers (TBCs) have been established as a valuable replacement for many applications of classical block ciphers. While several dedicated TBCs have been proposed in previous years, generic constructions that build a TBC from a classical block cipher are still highly useful, for example, to reuse an existing implementation. However, most generic constructions need an additional call to either the block cipher or a universal hash function to process the tweak, which limits their efficiency. To address this deficit, Bao et al. proposed Tweak-aNd-Tweak (TNT) at EUROCRYPT'20. Their construction chains three calls to independent keyed permutations and adds the unmodified tweak to the state in between the calls. They further suggested an efficient instantiation, TNT-AES, based on round-reduced AES for each of the permutations. Their work could prove 2n/3-bit security for the construction, where n is the block size in bits. Though, in the absence of an upper bound, their analysis had to consider all possible attack vectors with up to 2^n time, data, and memory. Still, closing the gap between both bounds remained a highly interesting research question. In this work, we show that a variant of Mennink's distinguisher on CLRW2 with O(n·2^(3n/4)) data and O(2^(3n/2)) time from TCC'18 also applies to TNT. We reduce its time complexity to O(n·2^(3n/4)), show the existence of a second similar distinguisher, and demonstrate how to transform the distinguisher into a key-recovery attack from an impossible differential. From a constructive point of view, we adapt the rigorous STPRP analysis of CLRW2 by Jha and Nandi to show O(2^(3n/4)) TPRP security for TNT. Thus, we move towards closing the gap between the previous proof and attacks for TNT as well as its proposed instance.
Ministry of Education (MOE). Accepted version. This research has been partially supported by Nanyang Technological University in Singapore under Grant 04INS000397C230, and by Singapore's Ministry of Education under Grants RG18/19 and MOE2019-T2-1-060.
- Published
- 2020
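The TNT composition described above is a three-line construction: XOR the tweak into the state between three independently keyed permutations. The sketch below uses a toy 16-bit Feistel network in place of the keyed block ciphers (illustrative only, with made-up round function and no security claims):

```python
def feistel_perm(key, x, rounds=4, half=8):
    """Toy 16-bit keyed permutation (Feistel network) standing in for
    one of TNT's independently keyed block ciphers.  NOT a real cipher:
    any round function still yields a permutation, which is all the
    sketch needs."""
    mask = (1 << half) - 1
    l, r = x >> half, x & mask
    for i in range(rounds):
        l, r = r, l ^ ((((r * key) + i) ** 2 >> 3) & mask)
    return (l << half) | r

def tnt_encrypt(keys, tweak, m):
    """Tweak-aNd-Tweak: pi3(t XOR pi2(t XOR pi1(m))).
    tweak and m are 16-bit integers here."""
    k1, k2, k3 = keys
    x = feistel_perm(k1, m)
    x = feistel_perm(k2, x ^ tweak)
    x = feistel_perm(k3, x ^ tweak)
    return x

c = tnt_encrypt((1, 2, 3), tweak=5, m=100)
```

For a fixed tweak the whole composition is a permutation of the message space, which is the TPRP behavior whose security bounds the paper tightens.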
33. Robust and Flexible Discrete Hashing for Cross-Modal Similarity Search
- Author
-
Xinbo Gao, Quan Wang, and Di Wang
- Subjects
Universal hashing ,business.industry ,Computer science ,Dynamic perfect hashing ,Nearest neighbor search ,Hash function ,Pattern recognition ,02 engineering and technology ,Hash table ,Locality-sensitive hashing ,K-independent hashing ,020204 information systems ,Locality preserving hashing ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,020201 artificial intelligence & image processing ,Feature hashing ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Double hashing - Abstract
Multimodal hashing approaches have achieved great success in large-scale cross-modal similarity search applications, due to their appealing computation and storage efficiency. However, it is still challenging to design binary codes that represent the original features well in an unsupervised manner. We argue that some limitations need to be further considered for unsupervised multimodal hashing: 1) most existing methods drop the discrete constraints to simplify the optimization, which causes large quantization error; 2) many methods are sensitive to outliers and noise since they use the $\ell _{2}$ -norm in their objective functions, which can amplify the errors; and 3) the weight of each modality, which greatly influences the retrieval performance, is manually or empirically determined and may not fully fit the specific training set. These limitations may significantly degrade the retrieval accuracy of unsupervised multimodal hashing methods. To address these problems, in this paper a novel hashing model is proposed to efficiently learn robust discrete binary codes, referred to as Robust and Flexible Discrete Hashing (RFDH). In the proposed RFDH model, binary codes are directly learned based on discrete matrix decomposition, so that the large quantization error caused by relaxation is avoided. Moreover, the $\ell _{2,1}$ -norm is used in the objective function to improve robustness, such that the learned model is not sensitive to data outliers and noise. In addition, the weight of each modality is adaptively adjusted according to the training data, so that important modalities receive larger weights during the hash learning procedure. Owing to the above merits, RFDH can generate more effective hash codes. Besides, we introduce two kinds of hash function learning methods to project unseen instances into hash codes.
Extensive experiments on several well-known large databases demonstrate superior performance of the proposed hash model over most state-of-the-art unsupervised multimodal hashing methods.
- Published
- 2018
34. Scalable Discrete Supervised Multimedia Hash Learning With Clustering
- Author
-
Bo Zhang, Jianmin Li, Mengqing Jiang, Shifeng Zhang, and Peijiang Yuan
- Subjects
Theoretical computer science ,Computer science ,Universal hashing ,Dynamic perfect hashing ,Hash function ,02 engineering and technology ,010501 environmental sciences ,2-choice hashing ,Rolling hash ,01 natural sciences ,Hash table ,Hopscotch hashing ,K-independent hashing ,Locality-sensitive hashing ,Hash tree ,Kernel method ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,020201 artificial intelligence & image processing ,Feature hashing ,Electrical and Electronic Engineering ,Cluster analysis ,Perfect hash function ,Double hashing ,0105 earth and related environmental sciences - Abstract
Hashing maps similar data of various types to binary hashcodes with smaller Hamming distance, and it has received broad attention due to its low storage cost and fast retrieval speed. However, existing limitations make present algorithms hard to apply to large-scale data sets: 1) discrete constraints are involved in the learning of the hash function, and 2) pairwise or triplet similarity is adopted to generate efficient hashcodes, resulting in both time and space complexity greater than $O(n^{2})$. To address these issues, we propose a novel discrete supervised hash learning framework that scales to large-scale data sets of various types. First, the discrete learning procedure is decomposed into a binary classifier learning scheme and a binary codes learning scheme, which makes the learning procedure more efficient. Second, by adopting asymmetric low-rank matrix factorization, we propose the fast clustering-based batch coordinate descent method, such that the time and space complexity are reduced to $O(n)$. The proposed framework also provides a flexible paradigm that can incorporate an arbitrary hash function, including deep neural networks and kernel methods, as well as any type of data to hash, including images and videos. Experiments on large-scale data sets demonstrate that the proposed method is superior or comparable to state-of-the-art hashing algorithms.
- Published
- 2018
35. SHISS: Supervised hashing with informative set selection
- Author
-
Deying Feng, Jie Yang, Chen Gong, Chao Ma, and Yun Gu
- Subjects
Biometrics ,Computer science ,Hash function ,02 engineering and technology ,010501 environmental sciences ,computer.software_genre ,Machine learning ,01 natural sciences ,Set (abstract data type) ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Image retrieval ,0105 earth and related environmental sciences ,Authentication ,Universal hashing ,business.industry ,Identification (information) ,Signal Processing ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Feature hashing ,Artificial intelligence ,Data mining ,business ,computer ,Software - Abstract
In recent years, there has been increasing demand for social security and identity authentication, which has led to a boom in large-scale biometrics problems such as recognition, retrieval, and identification. In this setting, traditional models are infeasible due to their limited capability for handling large-scale data. Therefore, hashing techniques are becoming prevalent due to their low storage cost and fast query speed. Recently, researchers have shown that supervised hashing can achieve higher accuracy than unsupervised hashing by incorporating tag or label information of the data when learning the hashing function. However, existing supervised methods treat all training examples equally, ignoring the different impacts of various training examples on the learning process. As a result, their performance is not satisfactory in some practical situations. As an improvement, this paper proposes a new method called “Supervised Hashing with Informative Set Selection” (SHISS), which assumes that different training examples have different influence on the learning process and that their usage should follow a logical order during optimization. In particular, we propose two criteria, certainty and diversity, to evaluate the informativeness of subsets of training examples and encourage the more informative subsets to be learned ahead of the less informative ones. Experiments on two typical image retrieval datasets and one face dataset demonstrate that the proposed SHISS obtains higher mean average precision and shows a faster convergence rate compared with state-of-the-art hashing methods.
- Published
- 2018
36. On an Almost-Universal Hash Function Family with Applications to Authentication and Secrecy Codes
- Author
-
Venkatesh Srinivasan, Bruce M. Kapron, Khodakhast Bibak, and László Tóth
- Subjects
FOS: Computer and information sciences ,Authentication ,Computer Science - Cryptography and Security ,Theoretical computer science ,Mathematics - Number Theory ,Universal hashing ,Computer science ,010102 general mathematics ,0102 computer and information sciences ,01 natural sciences ,TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES ,010201 computation theory & mathematics ,Computer Science - Data Structures and Algorithms ,Secrecy ,FOS: Mathematics ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,Computer Science (miscellaneous) ,Mathematics - Combinatorics ,Data Structures and Algorithms (cs.DS) ,Combinatorics (math.CO) ,Number Theory (math.NT) ,0101 mathematics ,Cryptography and Security (cs.CR) ,ComputingMilieux_MISCELLANEOUS - Abstract
Universal hashing, discovered by Carter and Wegman in 1979, has many important applications in computer science. MMH$^*$, which was shown to be $\Delta$-universal by Halevi and Krawczyk in 1997, is a well-known universal hash function family. We introduce a variant of MMH$^*$ that we call GRDH, where we use an arbitrary integer $n>1$ instead of a prime $p$ and let the keys $\mathbf{x}=\langle x_1, \ldots, x_k \rangle \in \mathbb{Z}_n^k$ satisfy the conditions $\gcd(x_i,n)=t_i$ ($1\leq i\leq k$), where $t_1,\ldots,t_k$ are given positive divisors of $n$. Then, by connecting the universal hashing problem to the number of solutions of restricted linear congruences, we prove that the family GRDH is an $\varepsilon$-almost-$\Delta$-universal family of hash functions for some $\varepsilon$. (International Journal of Foundations of Computer Science, to appear.)
- Published
- 2018
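For reference, the MMH$^*$ family discussed above is just a keyed inner product modulo a prime; GRDH replaces the prime with a composite $n$ and constrains $\gcd(x_i, n)$. A minimal sketch of the prime case only, with an illustrative Mersenne prime as the modulus:

```python
P = (1 << 61) - 1   # a Mersenne prime standing in for the modulus p

def mmh_star(key, msg):
    """MMH*: h_x(m) = sum_i m_i * x_i  mod p.

    Delta-universal: for fixed m != m' and any difference d,
    Pr over random keys x of [h_x(m) - h_x(m') = d mod p] is 1/p.
    """
    assert len(key) == len(msg)
    return sum(m * x for m, x in zip(msg, key)) % P

print(mmh_star([2, 3], [5, 7]))  # 5*2 + 7*3 = 31
```

The GRDH analysis in the paper asks when this $\Delta$-universality survives once the modulus is composite and the key coordinates share fixed divisors with it.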
37. Adaptive hash retrieval with kernel based similarity
- Author
-
Jun Zhou, Xiao Bai, Cheng Yan, Lu Bai, Edwin R. Hancock, and Haichuan Yang
- Subjects
Hash function ,02 engineering and technology ,Similarity measure ,01 natural sciences ,Locality-sensitive hashing ,Artificial Intelligence ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,010306 general physics ,Mathematics ,Universal hashing ,business.industry ,Dynamic perfect hashing ,Pattern recognition ,Hash table ,Signal Processing ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Feature hashing ,Artificial intelligence ,business ,Software ,Double hashing - Abstract
Indexing methods have been widely used for fast data retrieval on large scale datasets. When the data are represented by high dimensional vectors, hashing is often used as an efficient solution for approximate similarity search. When a retrieval task does not involve supervised training data, most hashing methods aim at preserving data similarity defined by a distance metric on the feature vectors. Hash codes generated by these approaches normally maintain the Hamming distance of the data in accordance with the similarity function, but ignore the local details of the distribution of data. This objective is not suitable for k-nearest neighbor search since the similarity to the nearest neighbors can vary significantly for different data samples. In this paper, we present a novel adaptive similarity measure which is consistent with k-nearest neighbor search, and prove that it leads to a valid kernel if the original similarity function is a kernel function. Next we propose a method which calculates hash codes using the kernel function. With a low-rank approximation, our hashing framework is more effective than existing methods that preserve similarity over an arbitrary kernel. The proposed similarity function, hashing framework, and their combination demonstrate significant improvement when compared with several alternative state-of-the-art methods.
- Published
- 2018
38. Robust discrete code modeling for supervised hashing
- Author
-
Heng Tao Shen, Fumin Shen, Zi Huang, Pan Zhou, Yang Yang, and Yadan Luo
- Subjects
business.industry ,Universal hashing ,Dynamic perfect hashing ,Hash function ,020207 software engineering ,Pattern recognition ,02 engineering and technology ,K-independent hashing ,Artificial Intelligence ,Discrete optimization ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,020201 artificial intelligence & image processing ,Binary code ,Computer Vision and Pattern Recognition ,Feature hashing ,Artificial intelligence ,business ,Algorithm ,Software ,Mathematics - Abstract
Recent years have witnessed the promising efficacy and efficiency of hashing (also known as binary code learning) for retrieving nearest neighbors in large-scale data collections. Particularly, with supervision knowledge (e.g., semantic labels), we may further gain a considerable performance boost. Nevertheless, most existing supervised hashing schemes suffer from the following limitations: (1) severe quantization error caused by continuous relaxation of binary codes; (2) disturbance of unreliable codes in subsequent hash function learning; and (3) erroneous guidance derived from imprecise and incomplete semantic labels. In this work, we propose a novel supervised hashing approach, termed Robust Discrete Code Modeling (RDCM), which directly learns high-quality discrete binary codes and hash functions by effectively suppressing the influence of unreliable binary codes and potentially noisily labeled samples. RDCM employs the $\ell_{2,p}$-norm, which is capable of inducing sample-wise sparsity, to jointly perform code selection and noisy sample identification. Moreover, we preserve the discrete constraint in RDCM to eliminate the quantization error. An efficient algorithm is developed to solve the discrete optimization problem. Extensive experiments conducted on various real-life datasets show the superiority of the proposed RDCM approach as compared to several state-of-the-art hashing methods.
- Published
- 2018
39. Collaborative multiview hashing
- Author
-
Jie Zhou and Zhixiang Chen
- Subjects
021110 strategic, defence & security studies ,Theoretical computer science ,Universal hashing ,business.industry ,Computer science ,Hash function ,0211 other engineering and technologies ,02 engineering and technology ,Machine learning ,computer.software_genre ,Artificial Intelligence ,Signal Processing ,Similarity (psychology) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Binary code ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Layer (object-oriented design) ,business ,Projection (set theory) ,computer ,Software - Abstract
In this paper, we propose a collaborative multiview hashing (CMH) approach to incorporate multiview representations into binary code learning for scalable visual retrieval. Unlike most existing multiview hashing methods, which learn linear projections to preserve the fused similarity relationship across different views in an unsupervised manner, we employ nonlinear hashing functions as the projection in each view and exploit the diverse information of multiview representations by utilizing the collaboration between view representations and the correlation between the view representations and the semantic labels. Specifically, the binary codes in each view are constrained to be predictive of each other to exploit the collaboration between the descriptors in different views that describe the same sample. Furthermore, the binary codes in all views are enforced to preserve the semantic relationship between data samples. The hashing functions are implemented in the form of a multi-layer neural network with nonlinear transformations at each layer and trained with both the view-collaboration and semantic-preserving constraints on the outputs. Experimental results on two datasets validate the superiority of the proposed approach in comparison with several state-of-the-art hashing methods.
- Published
- 2018
40. Quantization-based hashing: a general framework for scalable image and video retrieval
- Author
-
Lianli Gao, Nicu Sebe, Li Liu, Jingkuan Song, and Xiaofeng Zhu
- Subjects
Computer science ,Hash function ,Hashing ,Multimedia retrieval ,Pseudo labels ,Software ,Signal Processing ,1707 ,Artificial Intelligence ,02 engineering and technology ,Linear hashing ,K-independent hashing ,Locality-sensitive hashing ,Open addressing ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science::Databases ,Universal hashing ,business.industry ,Dynamic perfect hashing ,Vector quantization ,020207 software engineering ,Pattern recognition ,Hamming distance ,2-choice hashing ,Hash table ,Hopscotch hashing ,Cuckoo hashing ,Locality preserving hashing ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Feature hashing ,Artificial intelligence ,business ,Perfect hash function ,Extendible hashing ,Double hashing - Abstract
As far as we know, we are the first to propose a general framework to incorporate quantization-based methods into conventional similarity-preserving hashing, in order to improve the effectiveness of hashing methods. In theory, any quantization method can be adopted to reduce the quantization error of any similarity-preserving hashing method to improve its performance. This framework can be applied to both unsupervised and supervised hashing. We experimentally obtained the best performance compared to state-of-the-art supervised and unsupervised hashing methods on six popular datasets. We successfully show it to work on a huge dataset, SIFT1B (1 billion data points), by utilizing graph approximation and out-of-sample extension. Nowadays, due to the exponential growth of user-generated images and videos, there is an increasing interest in learning-based hashing methods. In computer vision, the hash functions are learned in such a way that the hash codes can preserve essential properties of the original space (or label information). Then the Hamming distance of the hash codes can approximate the data similarity. On the other hand, vector quantization methods quantize the data into different clusters based on the criterion of minimal quantization error, and then perform the search using look-up tables. While hashing methods using Hamming distance can achieve faster search speed, their accuracy is often outperformed by quantization methods with the same code length, due to the lower quantization error and more flexible distance lookups. To improve the effectiveness of hashing methods, in this work, we propose Quantization-based Hashing (QBH), a general framework which incorporates the advantages of quantization error reduction methods into conventional property preserving hashing methods. The learned hash codes simultaneously preserve the properties in the original space and reduce the quantization error, and thus can achieve better performance.
Furthermore, the hash functions and a quantizer can be jointly learned and iteratively updated in a unified framework, which can be readily used to generate hash codes or quantize new data points. Importantly, QBH is a generic framework that can be integrated to different property preserving hashing methods and quantization strategies, and we apply QBH to both unsupervised and supervised hashing models as showcases in this paper. Experimental results on three large-scale unlabeled datasets (i.e., SIFT1M, GIST1M, and SIFT1B), three labeled datastes (i.e., ESPGAME, IAPRTC and MIRFLICKR) and one video dataset (UQ_VIDEO) demonstrate the superior performance of our QBH over existing unsupervised and supervised hashing methods.
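The similarity-preserving principle that QBH builds on can be illustrated with a minimal sketch (not the authors' method): SimHash-style random-projection codes whose Hamming distance tracks angular similarity, with the hard sign function playing the role of the quantizer whose error quantization-based refinements aim to reduce. All names and parameters below are illustrative.

```python
import random

def random_projection_hash(dim, n_bits, seed=0):
    """Similarity-preserving hash: random hyperplanes + sign quantization."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def h(x):
        # The hard sign threshold is the quantization step; its error is
        # what quantization-error-reduction methods try to shrink.
        return tuple(1 if sum(p * v for p, v in zip(plane, x)) >= 0 else 0
                     for plane in planes)
    return h

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

h = random_projection_hash(dim=4, n_bits=64)
x = [1.0, 0.9, 0.0, 0.1]
y = [0.9, 1.0, 0.1, 0.0]    # nearly parallel to x
z = [-1.0, 0.2, 1.0, -0.5]  # points away from x
# Hamming distance between codes reflects similarity in the input space
assert hamming(h(x), h(y)) < hamming(h(x), h(z))
```

With 64 bits the expected Hamming distance is proportional to the angle between the vectors, which is the property QBH-style methods preserve while additionally reducing the sign-quantization error.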
- Published
- 2018
41. Exclusive grouped spatial hashing
- Author
-
Yi Gao, Jianxin Luo, Qi Hu, Bin Tang, Guiqiang Ni, and Weiwei Duan
- Subjects
Theoretical computer science ,Computer science ,Universal hashing ,Dynamic perfect hashing ,General Engineering ,020207 software engineering ,02 engineering and technology ,Linear hashing ,Computer Graphics and Computer-Aided Design ,Hash table ,Hopscotch hashing ,Human-Computer Interaction ,Open addressing ,020204 information systems ,Data_FILES ,0202 electrical engineering, electronic engineering, information engineering ,Consistent hashing ,Double hashing - Abstract
A novel multidimensional hashing scheme, named Exclusive Grouped Spatial Hashing (EGSH), which compresses repetitive spatial data into several compact tables while retaining efficient random access, is presented. EGSH is a lossless multi-level hashing scheme. Moreover, EGSH compresses a group of repetitive elements into the same entry of the hash tables, while a coverage table marks the corresponding hash tables of the compressed data. Whereas prior hashing work focuses on mitigating hash collisions, here these collisions are exploited to full advantage, which improves the spatial data compression rate. The performance of exclusive grouped spatial hashing is demonstrated on 2D and 3D graphics examples.
- Published
- 2018
42. Binary Multidimensional Scaling for Hashing
- Author
-
Zhouchen Lin and Yameng Huang
- Subjects
Theoretical computer science ,Computer science ,Nearest neighbor search ,Hash function ,02 engineering and technology ,010501 environmental sciences ,Linear hashing ,Rolling hash ,01 natural sciences ,K-independent hashing ,Locality-sensitive hashing ,Open addressing ,0202 electrical engineering, electronic engineering, information engineering ,Consistent hashing ,Computer Science::Databases ,0105 earth and related environmental sciences ,Universal hashing ,Dynamic perfect hashing ,2-choice hashing ,Computer Graphics and Computer-Aided Design ,Hash table ,Hopscotch hashing ,Cuckoo hashing ,SUHA ,Locality preserving hashing ,020201 artificial intelligence & image processing ,Feature hashing ,Extendible hashing ,Perfect hash function ,Software ,Double hashing - Abstract
Hashing is a useful technique for fast nearest neighbor search due to its low storage cost and fast query speed. Unsupervised hashing aims at learning binary hash codes for the original features so that the pairwise distances can be best preserved. While several works have targeted this task, the results are not satisfactory, mainly due to over-simplified models. In this paper, we propose a unified and concise unsupervised hashing framework, called binary multidimensional scaling, which is able to learn hash codes for distance preservation in both batch and online mode. In the batch mode, unlike most existing hashing methods, we do not need to simplify the model by predefining the form of the hash map. Instead, we learn the binary codes directly based on the pairwise distances among the normalized original features by alternating minimization. This enables a stronger expressive power of the hash map. In the online mode, we consider the holistic distance relationship between the current query example and those we have already learned, rather than only focusing on the current data chunk. This is useful when the data come in a streaming fashion. Empirical results show that, while being efficient for training, our algorithm outperforms state-of-the-art methods by a large margin in terms of distance preservation, which is practical for real-world applications.
- Published
- 2018
43. Bagging–boosting-based semi-supervised multi-hashing with query-adaptive re-ranking
- Author
-
Wing W. Y. Ng, Xizhao Wang, Daniel S. Yeung, Xing Tian, and Xiancheng Zhou
- Subjects
Universal hashing ,Computer science ,business.industry ,Cognitive Neuroscience ,Dynamic perfect hashing ,Hash function ,Pattern recognition ,2-choice hashing ,computer.software_genre ,Hash table ,Computer Science Applications ,K-independent hashing ,Locality-sensitive hashing ,Hopscotch hashing ,Hash tree ,Open addressing ,Artificial Intelligence ,Feature hashing ,Artificial intelligence ,Data mining ,business ,Extendible hashing ,computer ,Double hashing - Abstract
Hashing-based methods have been widely applied to large-scale image retrieval problems due to their high efficiency. In real-world applications, it is difficult to require that all images in a large database be labeled, while unsupervised methods waste the information from labeled images. Therefore, semi-supervised hashing methods are proposed that use a partially labeled database to train hash functions from both the semantic and the unsupervised information. Multi-hashing methods achieve better precision-recall than single-hashing methods. However, current boosting-based multi-hashing methods do not improve performance after a small number of hash tables are created. Therefore, a bagging–boosting-based semi-supervised multi-hashing with query-adaptive re-ranking (BBSHR) is proposed in this paper. In the proposed method, an individual hash table of the multi-hashing ensemble is trained using the boosting-based BSPLH, such that each hash bit corrects errors made by previous bits. Moreover, we propose a new semi-supervised weighting scheme for the query-adaptive re-ranking. Experimental results show that the proposed method yields better precision and recall rates for given numbers of hash tables and bits.
- Published
- 2018
44. The Pitfalls of Hashing for Privacy
- Author
-
Cédric Lauradoux, Levent Demir, Mathieu Cunche, Amrit Kumar, Privacy Models, Architectures and Tools for the Information Society (PRIVATICS), Inria Grenoble - Rhône-Alpes, Institut National de Recherche en Informatique et en Automatique (Inria), CITI Centre of Innovation in Telecommunications and Integration of services (CITI), Institut National des Sciences Appliquées de Lyon (INSA Lyon), Université de Lyon, and Université Grenoble Alpes (UGA)
- Subjects
Information privacy ,Computer science ,Data_MISCELLANEOUS ,Hash function ,0211 other engineering and technologies ,Anonymization ,Cryptography ,02 engineering and technology ,Computer security ,computer.software_genre ,[INFO.INFO-CR]Computer Science [cs]/Cryptography and Security [cs.CR] ,Hashing ,Pseudonymization ,0202 electrical engineering, electronic engineering, information engineering ,Cryptographic hash function ,Index Terms-Anonymity set ,Electrical and Electronic Engineering ,021110 strategic, defence & security studies ,Data anonymization ,Universal hashing ,business.industry ,Anonymity set ,020206 networking & telecommunications ,Balls-into-bins ,business ,computer ,Anonymity - Abstract
International audience; Boosted by recent legislation, data anonymization is fast becoming a norm. However, as of yet no generic solution has been found to safely release data. As a consequence, data custodians often resort to ad-hoc means to anonymize datasets. Both past and current practices indicate that hashing is often believed to be an effective way to anonymize data. Unfortunately, in practice it is only rarely effective. This paper is a tutorial explaining the limits of cryptographic hash functions as an anonymization technique. The anonymity set is the best privacy model that can be achieved by hash functions. However, this model has several shortcomings. We provide three case studies to illustrate how hashing yields only weakly anonymized data. The case studies include MAC and email address anonymization as well as an analysis of Google Safe Browsing.
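The paper's core observation, that hashing a low-entropy identifier is not anonymization, is easy to demonstrate. The sketch below is an illustration, not code from the paper: it "anonymizes" a MAC address with SHA-256 and then recovers it by exhausting the 2^24 suffixes behind a known vendor prefix (OUI); the function names are ours.

```python
import hashlib

def anonymize(mac):
    # naive "anonymization": publish the SHA-256 digest of the identifier
    return hashlib.sha256(mac.encode()).hexdigest()

def deanonymize(digest, oui):
    # Behind one known vendor prefix there are only 2**24 possible MACs,
    # so exhaustive search recovers the original identifier.
    for n in range(2 ** 24):
        suffix = ":".join(f"{(n >> s) & 0xFF:02x}" for s in (16, 8, 0))
        candidate = oui + suffix
        if anonymize(candidate) == digest:
            return candidate
    return None

mac = "00:11:22:00:00:2a"
assert deanonymize(anonymize(mac), "00:11:22:") == mac
```

A full 2^24 sweep finishes in minutes on commodity hardware, which is why the abstract concludes that the anonymity set, rather than irreversibility, is the best privacy model hashing can offer.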
- Published
- 2018
45. Tolerating Sensitive-Leakage With Larger Plaintext-Space and Higher Leakage-Rate in Privacy-Aware Internet-of-Things
- Author
-
Yong Ding, Mingwu Zhang, Wentao Leng, and Chunming Tang
- Subjects
General Computer Science ,Computer science ,Cryptography ,0102 computer and information sciences ,02 engineering and technology ,Encryption ,01 natural sciences ,Public-key cryptography ,0202 electrical engineering, electronic engineering, information engineering ,General Materials Science ,Key derivation function ,Key size ,key entropy ,Cryptographic primitive ,business.industry ,Universal hashing ,randomness leakage ,General Engineering ,Plaintext ,Symmetric-key algorithm ,010201 computation theory & mathematics ,medical Internet of Things ,020201 artificial intelligence & image processing ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,business ,Semantic security ,lcsh:TK1-9971 ,Sensitive information leakage ,leakage rate ,Computer network - Abstract
When a medical Internet of Things (mIoT) system executes a program or stores data, physical side-channel analysis, such as timing, cold-boot, and virtual-machine attacks, can expose partial information about internal sensitive medical data and states in memory, giving the attacker partial privacy information. Leakage-resilient cryptography has led to better implementations of many cryptographic primitives that can be proven secure against attackers who obtain limited sensitive information about private keys, randomness, and other internal states, thereby preventing the security from being broken. In this paper, to tolerate sensitive information leakage in mIoT, we first present a leakage-resilient public-key encryption mechanism that is semantically secure against adaptive chosen-ciphertext attacks in the presence of key leakage under the standard decisional Diffie–Hellman assumption. Our construction employs a special universal hash in a multiplicative group to provide an efficient strong extractor, and a key derivation function to derive one or more symmetric keys from a single value. Also, the plaintext space of the scheme is extended to the full domain of the group so as to provide a larger message space. We emphasize that our scheme can be deployed in mIoT, since the limited power and energy budgets, the communication and computation costs, and leakage attacks are all taken into account. Using the first scheme as a building block, we also give a protocol construction that achieves security resilient to both randomness leakage and key leakage. Our schemes feature a shorter key size and a larger plaintext space. Concretely, the private key contains only four elements of the finite field, and the allowable key-leakage rate is 25%, which is higher than that of Naor–Segev (16.7%) and its variants.
It is worth highlighting that the construction is resilient to both key leakage and randomness leakage simultaneously, and is flexible enough to deploy in easy-to-attack outdoor nodes, such as those in medical IoT and smart grids, since in these nodes the private keys and randomness are stored or generated in outdoor privacy-aware environments.
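The "universal hashing as a strong extractor" ingredient has a standard textbook shape. The sketch below shows a 2-universal family h_{a,b}(x) = ((a·x + b) mod p) truncated to the output length; by the leftover hash lemma such a family extracts near-uniform bits from a partially leaked secret. This is a generic illustration under our own parameter choices, not the paper's multiplicative-group construction.

```python
import secrets

P = (1 << 127) - 1  # Mersenne prime defining the field for the hash family

def universal_hash(a, b, x, out_bits=64):
    # h_{a,b}(x) = ((a*x + b) mod p), truncated to out_bits.
    # For a != 0 this family is 2-universal, so by the leftover hash lemma
    # it acts as a strong extractor on inputs with enough min-entropy.
    return ((a * x + b) % P) & ((1 << out_bits) - 1)

# public extractor seed (may be revealed to the adversary)
a = secrets.randbelow(P - 1) + 1
b = secrets.randbelow(P)
# weakly random secret, e.g. a key about which some bits have leaked
w = secrets.randbits(120)
derived_key = universal_hash(a, b, w)
assert 0 <= derived_key < (1 << 64)
```

As long as the secret retains enough min-entropy after leakage, the 64-bit output is statistically close to uniform, which is what lets the scheme tolerate a bounded leakage rate.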
- Published
- 2018
46. A Study on Secure and Efficient KSI System based on Multi Path Hash Chain with Universal Hashing Function
- Author
-
Gyeong-Jin Ra and Im-Yeong Lee
- Subjects
Theoretical computer science ,General Computer Science ,Universal hashing ,Computer science ,Hash chain ,Multi path ,Function (mathematics) - Published
- 2017
47. Faster compression methods for a weighted graph using locality sensitive hashing
- Author
-
Tu Nguyen Anh, Waqas Nawaz, Batjargal Dolgorsuren, Young-Koo Lee, and Kifayat Ullah Khan
- Subjects
Information Systems and Management ,Theoretical computer science ,Universal hashing ,Dynamic perfect hashing ,02 engineering and technology ,2-choice hashing ,Hash table ,Computer Science Applications ,Theoretical Computer Science ,Locality-sensitive hashing ,K-independent hashing ,Artificial Intelligence ,Control and Systems Engineering ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,Consistent hashing ,Software ,Mathematics - Abstract
Weights on the edges of a graph can show interactions among members of a social network, emails exchanged in any organization, and traffic flow on roads. However, mining hidden patterns is difficult when the size of the graph is large. Creating a compact summary is useful if it preserves the structural and edge weight information of its underlying graph. Existing work in this context provides a pairwise compression strategy to create a summary whose decompressed version has minimum difference in edge weights compared to its initial state. The resultant summary graph is compact, but the solution has quadratic time complexity due to exhaustive pairwise searching. Therefore, we present a set-based summarization approach that aggregates sets of nodes. We avoid explicit similarity computations and directly identify the required sets via Locality Sensitive Hashing (LSH). LSH accelerates the summarization process, but its hashing scheme cannot consider the edge weights. Considering the edge weight during hashing is necessary when the objective of the required summary is altered to a personalized view. Hence, we propose a non-parametric hashing scheme for LSH to generate candidate similar nodes from the weighted neighborhood of each node. We perform comparisons with state-of-the-art solutions and obtain better results using various experimental criteria.
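The "candidates via LSH instead of pairwise comparison" step can be sketched with MinHash over node neighborhoods. This is a generic stand-in for the paper's scheme, and it deliberately ignores edge weights, which is exactly the limitation the authors' non-parametric hashing scheme addresses; all identifiers below are illustrative.

```python
import random

def minhash_signatures(adj, n_hashes=16, seed=0):
    # adj: node -> set of neighbor ids. Nodes with similar neighborhoods
    # (high Jaccard overlap) agree on many signature entries, so candidate
    # pairs emerge from equal entries instead of all-pairs comparison.
    rng = random.Random(seed)
    m = (1 << 31) - 1
    params = [(rng.randrange(1, m), rng.randrange(m)) for _ in range(n_hashes)]
    def signature(neighbors):
        return tuple(min((a * v + b) % m for v in neighbors) for a, b in params)
    return {u: signature(nbrs) for u, nbrs in adj.items() if nbrs}

adj = {
    "u": {1, 2, 3, 4},
    "v": {1, 2, 3, 5},   # neighborhood overlaps heavily with u's
    "w": {7, 8, 9},      # unrelated neighborhood
}
sigs = minhash_signatures(adj)
agree = lambda s, t: sum(x == y for x, y in zip(s, t))
# u and v collide on far more signature entries than u and w
assert agree(sigs["u"], sigs["v"]) > agree(sigs["u"], sigs["w"])
```

The expected fraction of agreeing entries equals the Jaccard similarity of the neighborhoods, so bucketing nodes by signature entries yields candidate merge sets in roughly linear time rather than quadratic.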
- Published
- 2017
48. Semi-Paired Discrete Hashing: Learning Latent Hash Codes for Semi-Paired Cross-View Retrieval
- Author
-
Xiaobo Shen, Quansen Sun, Fumin Shen, Yang Yang, Heng Tao Shen, and Yun-Hao Yuan
- Subjects
Theoretical computer science ,Universal hashing ,Computer science ,business.industry ,Dynamic perfect hashing ,Hash function ,020206 networking & telecommunications ,Pattern recognition ,02 engineering and technology ,Hash table ,Computer Science Applications ,K-independent hashing ,Human-Computer Interaction ,Control and Systems Engineering ,Locality preserving hashing ,0202 electrical engineering, electronic engineering, information engineering ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,Artificial intelligence ,Feature hashing ,Electrical and Electronic Engineering ,business ,Software ,Information Systems - Abstract
Due to the significant reduction in computational cost and storage, hashing techniques have gained increasing interest for facilitating large-scale cross-view retrieval tasks. Most cross-view hashing methods are developed by assuming that data from different views are well paired, e.g., text-image pairs. In real-world applications, however, this fully-paired multiview setting may not be practical. The more practical yet challenging semi-paired cross-view retrieval problem, where pairwise correspondences are only partially provided, has been studied less. In this paper, we propose an unsupervised hashing method for semi-paired cross-view retrieval, dubbed semi-paired discrete hashing (SPDH). Specifically, SPDH explores the underlying structure of the constructed common latent subspace, where both paired and unpaired samples are well aligned. To effectively preserve the similarities of semi-paired data in the latent subspace, we construct the cross-view similarity graph with the help of anchor data pairs. SPDH jointly learns the latent features and hash codes with a factorization-based coding scheme. For the formulated objective function, we devise an efficient alternating optimization algorithm, where the key binary code learning problem is solved in a bit-by-bit manner, with each bit generated from a closed-form solution. The proposed method is extensively evaluated on four benchmark datasets with both fully-paired and semi-paired settings, and the results demonstrate the superiority of SPDH over several other state-of-the-art methods in terms of both accuracy and scalability.
- Published
- 2017
49. Unsupervised Topic Hypergraph Hashing for Efficient Mobile Image Retrieval
- Author
-
Liang Xie, Lei Zhu, Jialie Shen, and Zhiyong Cheng
- Subjects
Hypergraph ,Theoretical computer science ,Computer science ,business.industry ,Universal hashing ,Dynamic perfect hashing ,Hash function ,020207 software engineering ,Pattern recognition ,02 engineering and technology ,Semantics ,Computer Science Applications ,Non-negative matrix factorization ,Human-Computer Interaction ,Discriminative model ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Image retrieval ,Software ,Information Systems - Abstract
Hashing compresses high-dimensional features into compact binary codes. It is one of the promising techniques for supporting efficient mobile image retrieval, due to its low data transmission cost and fast retrieval response. However, most existing hashing strategies simply rely on low-level features, so they may generate hash codes with limited discriminative capability. Moreover, many of them fail to exploit the complex, high-order semantic correlations that inherently exist among images. Motivated by these observations, we propose a novel unsupervised hashing scheme, called topic hypergraph hashing (THH), to address these limitations. THH effectively mitigates the semantic shortage of hash codes by exploiting auxiliary texts around images. In our method, relations between images and semantic topics are first discovered via robust collective non-negative matrix factorization. Afterwards, a unified topic hypergraph, where images and topics are represented with independent vertices and hyperedges, respectively, is constructed to model the inherent high-order semantic correlations of images. Finally, hash codes and functions are learned by simultaneously enforcing semantic consistency and preserving the discovered semantic relations. Experiments on publicly available datasets demonstrate that THH achieves superior performance compared with several state-of-the-art methods and is more suitable for mobile image retrieval.
- Published
- 2017
50. Isometric hashing for image retrieval
- Author
-
Shanmin Pang, Xuequn Shang, and Bo Yang
- Subjects
Theoretical computer science ,Computer science ,Universal hashing ,Dynamic perfect hashing ,02 engineering and technology ,010501 environmental sciences ,Linear hashing ,01 natural sciences ,Hash table ,Locality-sensitive hashing ,Hopscotch hashing ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Feature hashing ,Electrical and Electronic Engineering ,Software ,Double hashing ,0105 earth and related environmental sciences - Abstract
Hashing has been attracting much attention in computer vision recently, since it can provide efficient similarity comparison in massive multimedia databases with fast query speed and low storage cost. Since a distance metric is an explicit description of similarity, in this paper a novel hashing method is proposed for image retrieval, dubbed Isometric Hashing (IH). IH aims to minimize the difference between the distance in the input space and the distance between the corresponding binary codes. To tackle the discrete optimization in a computationally tractable manner, IH uses some mathematical tricks to transform the original problem into a multi-objective optimization problem. The use of linear-projection-based hash functions enables efficiently generating hash codes for unseen data points. Furthermore, utilizing different distance metrics produces corresponding hashing algorithms, so IH can be seen as a framework for developing new hashing methods. Extensive experiments performed on four benchmark datasets validate that IH achieves results comparable to or even better than some state-of-the-art hashing methods.
- Published
- 2017