552 results for "Hamming space"
Search Results
2. Cryptographic Fingerprinting for Network Devices Based on Triplet Network and Fuzzy Extractors
- Author
-
Li, Longjiang, Kang, Yajie, Liang, Yukun, Liu, Xutong, Li, Yonggang, Akan, Ozgur, Editorial Board Member, Bellavista, Paolo, Editorial Board Member, Cao, Jiannong, Editorial Board Member, Coulson, Geoffrey, Editorial Board Member, Dressler, Falko, Editorial Board Member, Ferrari, Domenico, Editorial Board Member, Gerla, Mario, Editorial Board Member, Kobayashi, Hisashi, Editorial Board Member, Palazzo, Sergio, Editorial Board Member, Sahni, Sartaj, Editorial Board Member, Shen, Xuemin, Editorial Board Member, Stan, Mircea, Editorial Board Member, Jia, Xiaohua, Editorial Board Member, Zomaya, Albert Y., Editorial Board Member, Gao, Feifei, editor, Wu, Jun, editor, Li, Yun, editor, Gao, Honghao, editor, and Wang, Shangguang, editor
- Published
- 2024
- Full Text
- View/download PDF
3. Locally standard measure algebras.
- Author
-
Bezushchak, Oksana and Oliynyk, Bogdana
- Subjects
- *
ALGEBRA , *REAL numbers , *MATRICES (Mathematics) , *BOOLEAN algebra - Abstract
We parameterize countable locally standard Boolean measure algebras by pairs consisting of a Steinitz number and a real number greater than or equal to 1. This is an analog of the theorems of Dixmier and Baranov. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Video Retrieval Algorithm Based on 3D Convolution and Hashing (基于三维卷积和哈希方法的视频检索算法).
- Author
-
陈汗青, 李菲菲, and 陈虬
- Subjects
- *
CONVOLUTIONAL neural networks , *FEATURE extraction , *INFORMATION retrieval , *VIDEOS - Abstract
Unlike other forms of multimedia information retrieval, video retrieval requires a large amount of computation for similarity calculation because of the large amount of information contained in videos. In addition, the temporal correlation between video frames is often ignored during feature extraction, which leads to insufficient features and reduces retrieval accuracy. To address this problem, this study proposes a video retrieval method based on 3D convolution and hashing. The method constructs an end-to-end framework: a 3D convolutional neural network extracts features from representative frames selected from the video, and these features are then mapped to a low-dimensional Hamming space, where similarity is computed. Experimental results on two video datasets show that, compared with recent video retrieval algorithms, the proposed method achieves a clear improvement in accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
5. Improved lower bound for locating-dominating codes in binary Hamming spaces.
- Author
-
Junnila, Ville, Laihonen, Tero, and Lehtilä, Tuomo
- Subjects
BINARY codes ,HAMMING codes ,DOMINATING set - Abstract
In this article, we study locating-dominating codes in binary Hamming spaces F^n. Locating-dominating codes have been widely studied since their introduction in the 1980s by Slater and Rall. They are dominating sets suitable for distinguishing vertices in graphs. Dominating sets as well as locating-dominating codes have been studied in Hamming spaces in multiple articles. Previously, Honkala et al. (Discret Math Theor Comput Sci 6(2):265, 2004) presented a lower bound for locating-dominating codes in binary Hamming spaces. In this article, we improve that lower bound for all values n ≥ 10. In particular, for n = 11 we improve the previous lower bound from 309 to 317, very close to the current best known upper bound of 320. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
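The locating-dominating property studied above can be verified by brute force for small n. A hedged sketch under one standard definition (every vertex outside the code has a nonempty identifying set of code neighbors at Hamming distance 1, and these sets are pairwise distinct); the helper names are illustrative:

```python
def neighbors(v: int, n: int) -> set[int]:
    """Vertices of the hypercube {0,1}^n at Hamming distance exactly 1 from v."""
    return {v ^ (1 << i) for i in range(n)}

def is_locating_dominating(code: set[int], n: int) -> bool:
    """Check that every vertex outside the code has a nonempty, distinct
    identifying set N(v) ∩ C in the n-cube."""
    seen = set()
    for v in range(1 << n):
        if v in code:
            continue
        ident = frozenset(neighbors(v, n) & code)
        if not ident or ident in seen:
            return False
        seen.add(ident)
    return True
```

Exhaustively checking all codes this way is only feasible for very small n; the lower bounds in the article are obtained analytically, not by search.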
6. Fast and Exact Nearest Neighbor Search in Hamming Space on Full-Text Search Engines
- Author
-
Mu, Cun (Matthew), Zhao, Jun (Raymond), Yang, Guang, Yang, Binwei, Yan, Zheng (John), Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Amato, Giuseppe, editor, Gennaro, Claudio, editor, Oria, Vincent, editor, and Radovanović, Miloš, editor
- Published
- 2019
- Full Text
- View/download PDF
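One classic device for fast exact search in Hamming space, which may or may not match this paper's exact approach, is the pigeonhole trick behind multi-index hashing: split each code into m substrings, so any code within Hamming distance less than m of the query agrees with it exactly on at least one substring and can be found by exact-match lookups (a natural fit for a full-text engine's inverted index). An illustrative sketch:

```python
def chunks(code: int, nbits: int, m: int) -> list[int]:
    """Split an nbits-bit code into m contiguous substrings."""
    size = (nbits + m - 1) // m
    return [(code >> (i * size)) & ((1 << size) - 1) for i in range(m)]

def candidates(query: int, index: dict, nbits: int, m: int) -> set:
    """Pigeonhole: if dist(query, c) < m, they agree exactly on some chunk.
    index maps (chunk_position, chunk_value) -> set of code ids."""
    out = set()
    for i, part in enumerate(chunks(query, nbits, m)):
        out |= index.get((i, part), set())
    return out
```

Candidates returned this way are then re-ranked by full Hamming distance; only they can lie within the search radius, so the search stays exact.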
7. Deep Weibull hashing with maximum mean discrepancy quantization for image retrieval.
- Author
-
Feng, Hao, Wang, Nian, and Tang, Jun
- Subjects
- *
IMAGE retrieval , *DEEP learning , *FLEXIBLE structures , *NEIGHBORHOODS - Abstract
• A flexible optimization strategy is introduced to better learn pair-based similarity. • Imposing a Weibull distribution-based constraint helps to reduce neighborhood ambiguity. • A maximum mean discrepancy quantization is proposed to minimize information loss. Hashing has been a promising technology for fast nearest neighbor retrieval in large-scale datasets due to its low storage cost and fast retrieval speed. Most existing deep hashing approaches learn compact hash codes through pair-based deep metric learning, such as the triplet loss. However, these methods often assume that intra-class and inter-class similarity make the same contribution, and consequently it is difficult to assign larger weights to informative samples during training. Furthermore, imposing only a relative distance constraint increases the possibility that similar pairs are clustered with a larger average intra-class distance, which is harmful to learning a Hamming space with high separability. To tackle these issues, we put forward deep Weibull hashing with maximum mean discrepancy quantization (DWH), which jointly performs neighborhood structure optimization and error-minimizing quantization to learn high-quality hash codes in a unified framework. Specifically, DWH learns the desired neighborhood structure in conjunction with a flexible pair similarity optimization strategy and a Weibull distribution-based constraint between anchors and their neighbors in Hamming space. More importantly, we design a maximum mean discrepancy quantization objective function to preserve the pairwise similarity when performing binary quantization. Besides, a class-level loss is introduced to mine the semantic structural information of images by using supervision information. The encouraging experimental results on various benchmark datasets demonstrate the efficacy of the proposed DWH. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
8. Hamming spaces and locally matrix algebras.
- Author
-
Bezushchak, Oksana and Oliynyk, Bogdana
- Subjects
- *
MATRICES (Mathematics) , *HAM , *IDEMPOTENTS - Abstract
We study an abstract class of Hamming spaces (also known as measure algebras) that generalizes the standard Hamming spaces (ℤ/2ℤ)ⁿ. We classify countable locally standard Hamming spaces and show that each of them can be realized as the Boolean algebra of idempotents of a Cartan subalgebra of a locally matrix algebra. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
9. Selecting Sketches for Similarity Search
- Author
-
Mic, Vladimir, Novak, David, Vadicamo, Lucia, Zezula, Pavel, Hutchison, David, Series Editor, Kanade, Takeo, Series Editor, Kittler, Josef, Series Editor, Kleinberg, Jon M., Series Editor, Mattern, Friedemann, Series Editor, Mitchell, John C., Series Editor, Naor, Moni, Series Editor, Pandu Rangan, C., Series Editor, Steffen, Bernhard, Series Editor, Terzopoulos, Demetri, Series Editor, Tygar, Doug, Series Editor, Weikum, Gerhard, Series Editor, Benczúr, András, editor, Thalheim, Bernhard, editor, and Horváth, Tomáš, editor
- Published
- 2018
- Full Text
- View/download PDF
10. Reversible Computing: Review of the Problem and New Results (Fault Tolerance and Cryptography)
- Author
-
Sergey Gurov, Aleksey Zhukov, Dmitry Zakablukov, and Georgy Kormakov
- Subjects
reversible logic ,reversible logic elements ,fault-tolerant circuits ,hamming space ,information protection ,reversible circuits with “garbage collection” ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
The paper considers the main provisions of reversibility as a new paradigm for the development of computer technology. The first sections are of an overview nature: they discuss the inevitability of the so-called "heat curse" if the traditional paradigm of building computing hardware is maintained, present the fundamentals of reversible logic, and consider the main reversible logic elements and models of reversible computation, including reversible cellular automata. Reversible programming languages are briefly reviewed. The second part addresses the basic issues of the logical synthesis of circuits from reversible elements and the physical implementation of reversible circuitry. The synthesis of fault-tolerant circuits in the reversible-circuitry paradigm is briefly described, and a technique for synthesizing fault-tolerant reversible elements in a Hamming space is proposed, with several such schemes described. Finally, the problems of using reversible logic circuits in cryptography are considered, and a general scheme for creating reversible circuits with "garbage collection" intended for cryptographic applications is described.
- Published
- 2019
- Full Text
- View/download PDF
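The reversible-logic elements surveyed above are bijections on their input bits; the Toffoli (CCNOT) gate is the standard universal example and is its own inverse. A small illustration (not taken from the paper):

```python
def toffoli(a: int, b: int, c: int) -> tuple[int, int, int]:
    """Toffoli (CCNOT) gate: flips the target bit c iff both controls are 1.
    Universal for reversible Boolean logic and self-inverse."""
    return a, b, c ^ (a & b)

# Reversibility: applying the gate twice restores every possible input,
# so no information (and, ideally, no Landauer heat) is lost.
assert all(toffoli(*toffoli(a, b, c)) == (a, b, c)
           for a in (0, 1) for b in (0, 1) for c in (0, 1))
```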
11. Multi‐bit quantisation for similarity‐preserving hashing
- Author
-
Liang Liang Su, Jun Tang, Dong Liang, and Ming Zhu
- Subjects
multibit quantisation ,similarity-preserving hashing ,hashing-based approximate nearest neighbour search ,Big Data ,Hamming space ,MBQ method ,Computer applications to medicine. Medical informatics ,R858-859.7 ,Computer software ,QA76.75-76.765 - Abstract
As a promising alternative to traditional search techniques, hashing‐based approximate nearest neighbour search provides an applicable solution for big data. Most existing efforts are devoted to finding better projections to preserve the neighbouring structure of original data points in Hamming space, but ignore the quantisation procedure which may lead to the breakdown of the neighbouring structure maintained in the projection stage. To address this issue, the authors propose a novel multi‐bit quantisation (MBQ) method using a Matthews correlation coefficient (MCC) term and a regularisation term. The authors' method utilises the neighbouring relationship and the distribution information of original data points instead of the projection dimension usually used in the previous MBQ methods to adaptively learn optimal quantisation thresholds, and allocates multiple bits per projection dimension in terms of the learned thresholds. Experiments on two typical image data sets demonstrate that the proposed method effectively preserves the similarity between data points in the original feature space and outperforms state‐of‐the‐art quantisation methods.
- Published
- 2018
- Full Text
- View/download PDF
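The general idea of multi-bit quantisation, assigning several bits per projection dimension via thresholds rather than a single sign bit, can be illustrated as below. The thresholds here are fixed by hand purely for illustration; the authors' MBQ method instead learns them adaptively from the data distribution using an MCC-based objective:

```python
import bisect

def multi_bit_quantise(x: float, thresholds: list[float]) -> str:
    """Map a real-valued projection to a 2-bit code by thresholding.
    Three thresholds give four cells; a Gray-style labelling keeps
    adjacent cells at Hamming distance 1, preserving neighbourhoods."""
    gray = ["00", "01", "11", "10"]  # 2-bit Gray code, one label per cell
    cell = bisect.bisect_right(thresholds, x)  # which cell x falls into
    return gray[cell]
```

Note that with this labelling, crossing any single threshold changes exactly one bit, so small perturbations of the projection value cause small Hamming-distance changes.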
12. Hamming Embedding Sensitivity Guided Fusion Network for 3D Shape Representation.
- Author
-
Gong, Biao, Yan, Chenggang, Bai, Junjie, Zou, Changqing, and Gao, Yue
- Subjects
- *
FEATURE extraction - Abstract
Three-dimensional multi-modal data represent 3D objects in the real world in different ways. Features separately extracted from multi-modality data are often poorly correlated. Recent solutions that leverage the attention mechanism to learn a joint network for fusing multi-modality features have weak generalization capability. In this paper, we propose a Hamming embedding sensitivity network to address the problem of effectively fusing multi-modality features. The proposed network, called HamNet, is the first end-to-end framework with the capacity to integrate data from all modalities with a unified architecture for 3D shape representation, which can be used for 3D shape retrieval and recognition. HamNet uses a feature concealment module to achieve effective deep feature fusion. The basic idea of the concealment module is to re-weight the features from each modality at an early stage with the Hamming embedding of these modalities. The Hamming embedding also provides an effective solution for fast retrieval tasks on a large-scale dataset. We have evaluated the proposed method on the large-scale ModelNet40 dataset for the tasks of 3D shape classification and single-modality and cross-modality retrieval. Comprehensive experiments and comparisons with state-of-the-art methods demonstrate that the proposed approach achieves superior performance. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
13. Image Retrieval Method Based on an Improved Hashing Algorithm (基于改进哈希算法的图像检索方法).
- Author
-
陆超文, 李菲菲, and 陈虬
- Abstract
The coding methods for traditional visual features adopted in current image retrieval approaches lack sufficient learning ability and do not have strong feature expression ability. In addition, because of the high dimensionality of visual features, a large amount of memory is consumed, which reduces the performance of image retrieval. In this paper, an end-to-end trainable image retrieval algorithm based on a deep, improved hashing method was proposed and designed. The proposed algorithm combines the high-level features extracted by a CNN with a hash function and learns expressive hash codes to perform large-scale image retrieval in a low-dimensional Hamming space. The experimental results on two main datasets showed that the retrieval performance of the proposed method was superior to that of several state-of-the-art ones. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
14. Isometries of the Hamming Space and Equivalence Relations of Linear Codes Over a Finite Field
- Author
-
García-Planas, M. Isabel, Magret, M. Dolors, Formaggia, Luca, Editor-in-chief, Gerbeau, Jean-Frédéric, Series editor, Martinez-Seara Alonso, Tere, Series editor, Parés, Carlos, Series editor, Pareschi, Lorenzo, Series editor, Pedregal, Pablo, Editor-in-chief, Tosin, Andrea, Series editor, Vazquez, Elena, Series editor, Zubelli, Jorge P., Series editor, Zunino, Paolo, Series editor, Ortegón Gallego, Francisco, editor, Redondo Neble, María Victoria, editor, and Rodríguez Galván, José Rafael, editor
- Published
- 2016
- Full Text
- View/download PDF
15. Fast Nearest Neighbor Search in the Hamming Space
- Author
-
Jiang, Zhansheng, Xie, Lingxi, Deng, Xiaotie, Xu, Weiwei, Wang, Jingdong, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Tian, Qi, editor, Sebe, Nicu, editor, Qi, Guo-Jun, editor, Huet, Benoit, editor, Hong, Richang, editor, and Liu, Xueliang, editor
- Published
- 2016
- Full Text
- View/download PDF
16. Multi-organ Segmentation Using Vantage Point Forests and Binary Context Features
- Author
-
Heinrich, Mattias P., Blendowski, Maximilian, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Ourselin, Sebastien, editor, Joskowicz, Leo, editor, Sabuncu, Mert R., editor, Unal, Gozde, editor, and Wells, William, editor
- Published
- 2016
- Full Text
- View/download PDF
17. A Novel Cross Modal Hashing Algorithm Based on Multi-modal Deep Learning
- Author
-
Qu, Wen, Wang, Daling, Feng, Shi, Zhang, Yifei, Yu, Ge, Diniz Junqueira Barbosa, Simone, Series editor, Chen, Phoebe, Series editor, Du, Xiaoyong, Series editor, Filipe, Joaquim, Series editor, Kara, Orhun, Series editor, Kotenko, Igor, Series editor, Liu, Ting, Series editor, Sivalingam, Krishna M., Series editor, Washio, Takashi, Series editor, Zhang, Xichun, editor, Sun, Maosong, editor, Wang, Zhenyu, editor, and Huang, Xuanjing, editor
- Published
- 2015
- Full Text
- View/download PDF
18. Approximate Bit-Vector Algorithms for Hashing-Based Similarity Searches
- Author
-
Wang, Ling, Zhou, Tie Hua, Liu, Zhen Hong, Qu, Zhao Yang, Ryu, Keun Ho, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Huang, De-Shuang, editor, Bevilacqua, Vitoantonio, editor, and Premaratne, Prashan, editor
- Published
- 2015
- Full Text
- View/download PDF
19. Asymmetric deep hashing for person re-identifications
- Author
-
Yali Li, Yali Zhao, and Shengjin Wang
- Subjects
Multidisciplinary ,Similarity (geometry) ,Data point ,Computer science ,Hash function ,Binary code ,Pairwise comparison ,Hamming space ,Convolutional neural network ,Algorithm ,Computer Science::Cryptography and Security ,Image (mathematics) - Abstract
The person re-identification (re-ID) community has witnessed an explosion in the scale of data that it has to handle. On one hand, it is important for large-scale re-ID to provide constant or sublinear search time and dramatically reduce the storage cost for data points from the viewpoint of efficiency. On the other hand, the semantic affinity existing in the original space should be preserved because it greatly boosts the accuracy of re-ID. To this end, we use the deep hashing method, which utilizes the pairwise similarity and classification label to learn deep hash mapping functions, in order to provide discriminative representations. More importantly, considering the great advantage of asymmetric hashing over the existing symmetric one, we finally propose an asymmetric deep hashing (ADH) method for large-scale re-ID. Specifically, a two-stream asymmetric convolutional neural network is constructed to learn the similarity between image pairs. Another asymmetric pairwise loss is formulated to capture the similarity between the binary hashing codes and real-value representations derived from the deep hash mapping functions, so as to constrain the binary hash codes in the Hamming space to preserve the semantic structure existing in the original space. Then, the image labels are further explored to have a direct impact on the hash function learning through a classification loss. Furthermore, an efficient alternating algorithm is elaborately designed to jointly optimize the asymmetric deep hash functions and high-quality binary codes, by optimizing one parameter with the other parameters fixed. Experiments on the four benchmarks, i.e., DukeMTMC-reID, Market-1501, Market-1501+500k, and CUHK03 substantiate the competitive accuracy and superior efficiency of the proposed ADH over the compared state-of-the-art methods for large-scale re-ID.
- Published
- 2022
20. HYPERCONTRACTIVITY OF SPHERICAL AVERAGES IN HAMMING SPACE.
- Author
-
POLYANSKIY, YURY
- Subjects
- *
VECTOR spaces , *LINEAR operators , *FUNCTION spaces , *CHARACTERISTIC functions , *FOURIER analysis , *HYPERCUBES , *HAM - Abstract
Consider the linear space of functions on the binary hypercube and the linear operator S_δ acting by averaging a function over a Hamming sphere of radius δn around every point. It is shown that this operator has a dimension-independent bound on the norm L_p → L_2 with p = 1 + (1 − 2δ)². This result evidently parallels a classical estimate of Bonami and Gross for the L_p → L_q norms of the operator of convolution with Bernoulli noise. The estimate for S_δ is harder to obtain since the latter is neither part of a semigroup nor a tensor power. The result is shown by a detailed study of the eigenvalues of S_δ and of the L_p → L_2 norms of the Fourier multiplier operators Π_a with symbol equal to the characteristic function of the Hamming sphere of radius a (in notation common in Boolean analysis, Π_a f = f^{=a}, where f^{=a} is the degree-a component of the function f). A sample application of the result is given: any set A ⊆ F_2^n with the property that A + A contains a large portion of some Hamming sphere (counted with multiplicity) must have cardinality a constant multiple of 2^n. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
21. Boolean functions as points on the hypersphere in the Euclidean space.
- Author
-
Logachev, Oleg A., Fedorov, Sergey N., and Yashchenko, Valerii V.
- Subjects
- *
BOOLEAN functions , *CRYPTOGRAPHY , *LOCALIZATION (Mathematics) , *SET functions , *NONLINEAR functions , *BENT functions , *SPACE - Abstract
A new approach to the study of algebraic, combinatorial, and cryptographic properties of Boolean functions is proposed. New relations between functions have been revealed by consideration of an injective mapping of the set of Boolean functions onto the sphere in a Euclidean space. Moreover, under this mapping some classes of functions have extremely regular localizations on the sphere. We introduce the concept of curvature of a Boolean function, which characterizes its proximity (in some sense) to maximally nonlinear functions. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
22. List-Decodable Zero-Rate Codes.
- Author
-
Alon, Noga, Bukh, Boris, and Polyanskiy, Yury
- Subjects
- *
ERROR correction (Information theory) , *HAMMING distance , *HADAMARD matrices , *MATHEMATICS theorems , *NANOPARTICLES - Abstract
We consider list decoding in the zero-rate regime for two cases: the binary alphabet and spherical codes in Euclidean space. Specifically, we study the maximal τ ∈ [0,1] for which there exists an arrangement of M balls of relative Hamming radius τ in the binary hypercube (of arbitrary dimension) with the property that no point of the latter is covered by L or more of them. As M → ∞ the maximal τ decreases to a well-known critical value τ_L. In this paper, we prove several results on the rate of this convergence. For the binary case, we show that the rate is Θ(M⁻¹) when L is even, thus extending the classical results of Plotkin and Levenshtein for L = 2. For L = 3, the rate is shown to be Θ(M^(−2/3)). For the similar question about spherical codes, we prove the rate is Ω(M⁻¹) and O(M^(−2L/(L²−L+2))). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
23. Binary Sketches for Secondary Filtering.
- Author
-
Mic, Vladimir, Novak, David, and Zezula, Pavel
- Subjects
- *
INDEXING - Abstract
This article addresses the problem of matching the most similar data objects to a given query object. We adopt a generic model of similarity that involves only the domain of objects and metric distance functions. We examine the case of a large dataset in a complex data space, which makes this problem inherently difficult. Many indexing and searching approaches have been proposed, but they have often failed to efficiently prune complex search spaces, accessing large portions of the dataset when evaluating queries. We propose an approach to enhancing the existing search techniques to significantly reduce the number of accessed data objects while preserving the quality of the search results. In particular, we extend each data object with its sketch, a short binary string in Hamming space. These sketches approximate the similarity relationships in the original search space, and we use them to filter out non-relevant objects not pruned by the original search technique. We provide a probabilistic model to tune the parameters of the sketch-based filtering separately for each query object. Experiments conducted with different similarity search techniques and real-life datasets demonstrate that the secondary filtering can speed up similarity search several times. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
24. SEQUENTIAL COARSE STRUCTURES OF TOPOLOGICAL GROUPS.
- Author
-
PROTASOV, I. V.
- Subjects
TOPOLOGICAL groups ,GEOMETRIC group theory ,DIMENSION theory (Topology) ,IDEALS (Algebra) ,ABELIAN groups - Abstract
We endow a topological group (G,τ) with a coarse structure defined by the smallest group ideal S_τ on G containing all converging sequences together with their limits, and denote the obtained coarse group by (G,S_τ). If G is discrete, then (G,S_τ) is a finitary coarse group as studied in geometric group theory. The main result: if a topological abelian group (G,τ) contains a non-trivial converging sequence, then asdim(G,S_τ) = ∞. We study metrizability, normality, and functional boundedness of sequential coarse groups and pose some open questions. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
25. DAHP: Deep Attention-Guided Hashing With Pairwise Labels
- Author
-
Yongqiang Wang, Jiaying Chen, Xue Li, Ziyang Li, Jiong Yu, and Pengxiao Chang
- Subjects
Computer science ,business.industry ,Feature extraction ,Hash function ,Pattern recognition ,Discriminative model ,Feature (computer vision) ,Discrete optimization ,Media Technology ,Binary code ,Pairwise comparison ,Artificial intelligence ,Electrical and Electronic Engineering ,Hamming space ,business - Abstract
To address the problems of inadequate feature extraction and binary-code discrete optimization faced by deep hashing methods that use a relaxation-quantization strategy, a novel deep attention-guided hashing method with pairwise labels (DAHP) is proposed. It enhances global feature fusion, better learns the contextual information of image features to strengthen the feature representation, and avoids the loss of feature information in discrete optimization by optimizing the loss function. First, we introduce a new concept called the anchor hash code generation (AHCG) algorithm: we train a ResNet equipped with position-attention and channel-attention mechanisms, using anchor points in Hamming space as supervision, and fit the binary code representing each image to the vicinity of an anchor point. Finally, we use the optimized loss function to compute the pairwise loss and the anchor loss, allowing the hash function to generate hash codes with strong discriminative power. Experiments were conducted on four benchmark datasets, and the retrieval accuracy of the proposed method outperformed that of the state-of-the-art methods.
- Published
- 2022
26. Learning to hash based on angularly discriminative embedding
- Author
-
Xuelong Li, Feiping Nie, Shuzheng Hao, Rong Wang, and Zhanxuan Hu
- Subjects
Information Systems and Management ,Theoretical computer science ,Computer science ,Nearest neighbor search ,Hash function ,Computer Science Applications ,Theoretical Computer Science ,Discriminative model ,Artificial Intelligence ,Control and Systems Engineering ,Margin (machine learning) ,Embedding ,Hamming space ,Image retrieval ,Feature learning ,Software - Abstract
Hashing, a widely studied tool for approximate nearest neighbor search, aims to embed samples as compact binary representations. Current approaches to this problem generally seek a low-dimensional Hamming space where representations are discrete and have smaller intra-class distance and larger inter-class distance; as a result, performance is often limited by the discrete constraint. In this work, we propose to seek an angularly discriminative embedding space where representations are continuous and have a smaller intra-class angular margin and a larger inter-class angular margin. Since our goal is to learn continuous representations rather than discrete hash codes, the problems caused by the discrete constraint are avoided. Besides, to further reduce the gap between the embedding space and the Hamming space, we introduce an additional coordinate constraint on the representations. Our method is simple yet effective. Extensive experiments on the image retrieval task show that it achieves encouraging results on four benchmark datasets. Furthermore, the success of the proposed method demonstrates that leveraging progress in representation learning to improve hashing is a promising direction.
- Published
- 2021
27. Deep Weibull hashing with maximum mean discrepancy quantization for image retrieval
- Author
-
Nian Wang, Jun Tang, and Hao Feng
- Subjects
Similarity (geometry) ,Artificial Intelligence ,Computer science ,Cognitive Neuroscience ,Hash function ,Metric (mathematics) ,Quantization (image processing) ,Hamming space ,Image retrieval ,Algorithm ,Computer Science Applications ,k-nearest neighbors algorithm ,Weibull distribution - Abstract
Hashing has been a promising technology for fast nearest neighbor retrieval in large-scale datasets due to its low storage cost and fast retrieval speed. Most existing deep hashing approaches learn compact hash codes through pair-based deep metric learning, such as the triplet loss. However, these methods often assume that intra-class and inter-class similarity make the same contribution, and consequently it is difficult to assign larger weights to informative samples during training. Furthermore, imposing only a relative distance constraint increases the possibility that similar pairs are clustered with a larger average intra-class distance, which is harmful to learning a Hamming space with high separability. To tackle these issues, we put forward deep Weibull hashing with maximum mean discrepancy quantization (DWH), which jointly performs neighborhood structure optimization and error-minimizing quantization to learn high-quality hash codes in a unified framework. Specifically, DWH learns the desired neighborhood structure in conjunction with a flexible pair similarity optimization strategy and a Weibull distribution-based constraint between anchors and their neighbors in Hamming space. More importantly, we design a maximum mean discrepancy quantization objective function to preserve the pairwise similarity when performing binary quantization. Besides, a class-level loss is introduced to mine the semantic structural information of images by using supervision information. The encouraging experimental results on various benchmark datasets demonstrate the efficacy of the proposed DWH.
- Published
- 2021
28. Exploiting Subspace Relation in Semantic Labels for Cross-Modal Hashing
- Author
-
Fumin Shen, Zi Huang, Richang Hong, Xing Xu, Luchen Liu, Heng Tao Shen, and Yang Yang
- Subjects
Theoretical computer science ,Relation (database) ,Computer science ,Search engine indexing ,Hash function ,02 engineering and technology ,Computer Science Applications ,Computational Theory and Mathematics ,020204 information systems ,Discrete optimization ,0202 electrical engineering, electronic engineering, information engineering ,Binary code ,Hamming space ,Hamming code ,Subspace topology ,Information Systems - Abstract
Hashing methods have been extensively applied to efficient multimedia data indexing and retrieval on account of the explosion of multimedia data. Cross-modal hashing usually learns binary codes by mapping multi-modal data into a common Hamming space. Most supervised methods utilize relation information, such as class labels, as pairwise similarities of cross-modal data pairs to narrow the intra-modal and inter-modal gaps. In this paper, we propose a novel supervised cross-modal hashing method dubbed Subspace Relation Learning for Cross-modal Hashing (SRLCH), which exploits the relation information of labels in semantic space to bring similar data from different modalities closer in the low-dimensional Hamming subspace. SRLCH preserves the modality relationships, the discrete constraints, and the nonlinear structures, while admitting a closed-form binary-code solution, which effectively enhances training efficiency. An iterative alternating optimization algorithm is developed to simultaneously learn both the hash functions and the unified binary codes. With these binary codes and hash functions, we can index multimedia data and search it efficiently. Evaluations on two cross-modal retrieval tasks over several widely used datasets show that the proposed SRLCH outperforms most cross-modal hashing methods. Theoretical analysis also explains the gains our method obtains from subspace relation learning.
- Published
- 2021
29. Joint Versus Independent Multiview Hashing for Cross-View Retrieval
- Author
-
Hongyuan Zhu, Liangli Zhen, Dezhong Peng, Jie Lin, Peng Hu, and Xi Peng
- Subjects
Computer science ,business.industry ,Nearest neighbor search ,Hash function ,020207 software engineering ,02 engineering and technology ,Machine learning ,computer.software_genre ,Autoencoder ,Computer Science Applications ,Human-Computer Interaction ,Kernel (linear algebra) ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Electrical and Electronic Engineering ,Hamming space ,business ,Encoder ,computer ,Software ,Decoding methods ,Information Systems - Abstract
Thanks to their low storage cost and high query speed, cross-view hashing (CVH) methods have been successfully used for similarity search in multimedia retrieval. However, most existing CVH methods use all views to learn a common Hamming space, making it difficult to handle data with an increasing or large number of views. To overcome these difficulties, we propose a decoupled CVH network (DCHN) approach, which consists of a semantic hashing autoencoder module (SHAM) and multiple multiview hashing networks (MHNs). Specifically, SHAM adopts a hashing encoder and decoder to learn a discriminative Hamming space using either a few labels or the number of classes, i.e., the so-called flexible inputs. After that, each MHN independently projects all samples into the discriminative Hamming space, which is treated as an alternative ground truth. In brief, the Hamming space is learned from the semantic space induced by the flexible inputs, and it is then used to guide view-specific hashing in an independent fashion. Thanks to such an independent/decoupled paradigm, our method enjoys high computational efficiency and the capacity to handle an increasing number of views using only a few labels or the number of classes. For a newly arriving view, we only need to add a view-specific network to our model, avoiding retraining the entire model on the new and previous views. Extensive experiments are carried out on five widely used multiview databases against 15 state-of-the-art approaches. The results show that the proposed independent hashing paradigm is superior to the common joint ones while enjoying high efficiency and the capacity to handle newly arriving views.
- Published
- 2021
30. Dynamics of Information and Optimal Control of Mutation in Evolutionary Systems
- Author
-
Belavkin, Roman V., Sorokin, Alexey, editor, Murphey, Robert, editor, Thai, My T., editor, and Pardalos, Panos M., editor
- Published
- 2012
- Full Text
- View/download PDF
31. DyFT: a dynamic similarity search method on integer sketches
- Author
-
Yasuo Tabei and Shunsuke Kanda
- Subjects
Computer science ,Nearest neighbor search ,Hash function ,Binary number ,Human-Computer Interaction ,Data point ,Similarity (network science) ,Artificial Intelligence ,Hardware and Architecture ,Trie ,Hamming space ,Algorithm ,Software ,Information Systems ,Integer (computer science) - Abstract
Similarity-preserving hashing is a core technique for fast similarity search: it randomly maps data points in a metric space to strings of discrete symbols (i.e., sketches) in the Hamming space. While traditional hashing techniques produce binary sketches, recent ones produce integer sketches that preserve various similarity measures. However, most similarity search methods are designed for binary sketches and are inefficient for integer sketches. Moreover, most methods are either inapplicable or inefficient for dynamic datasets, although modern real-world datasets are updated over time. We propose the dynamic filter trie (DyFT), a dynamic similarity search method for both binary and integer sketches. An extensive experimental analysis on large real-world datasets shows that DyFT is superior in scalability, time performance, and memory efficiency. For example, on a huge dataset of 216 million data points, DyFT performs a similarity search 6000 times faster than a state-of-the-art method while using one-thirteenth of the memory.
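A minimal sketch of the baseline that structures like DyFT accelerate: the generalized Hamming distance on integer sketches (count of differing symbols) and a linear-scan r-neighbor search, which a trie over sketch symbols would prune. The toy sketch data are assumptions for illustration:

```python
def sketch_distance(a, b):
    """Generalized Hamming distance between two integer sketches:
    the number of positions whose symbols differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def r_neighbor_scan(database, query, radius):
    """Linear-scan r-neighbor search over a list of sketches.
    A dynamic filter trie prunes this scan instead of touching every entry."""
    return [i for i, s in enumerate(database)
            if sketch_distance(s, query) <= radius]

db = [(0, 3, 1, 2), (0, 3, 1, 1), (3, 3, 3, 3)]
assert r_neighbor_scan(db, (0, 3, 1, 2), radius=1) == [0, 1]
```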
- Published
- 2021
32. Discrete matrix factorization hashing for cross-modal retrieval
- Author
-
Jiang Lin, Na Han, Xiaozhao Fang, Shaohua Teng, and Liu Zhihu
- Subjects
Theoretical computer science ,Semantic similarity ,Artificial Intelligence ,Computer science ,Pattern recognition (psychology) ,Hash function ,Benchmark (computing) ,Computational intelligence ,Computer Vision and Pattern Recognition ,Quantization (image processing) ,Hamming space ,Software ,Matrix decomposition - Abstract
Cross-modal hashing has recently attracted considerable attention in large-scale retrieval tasks due to its low storage cost and high retrieval efficiency. However, existing hashing methods still have issues that need to be addressed. For example, most existing cross-modal hashing methods convert the original data into a common Hamming space to learn unified hash codes, which ignores the specific properties of multi-modal data. In addition, most of them relax the discrete constraint to learn hash codes, which may lead to quantization loss and suboptimal performance. To address these problems, this paper proposes a novel cross-modal retrieval method named discrete matrix factorization hashing (DMFH). DMFH is a two-stage approach. In the first stage, given the training data, DMFH exploits matrix factorization to learn a modality-specific semantic representation for each modality, and then generates the corresponding hash codes by linear projection. Meanwhile, to ensure that the hash codes preserve the semantic similarity between different modalities, DMFH optimizes the hash codes with an affinity matrix constructed from the label information. During the first stage, DMFH employs a discrete optimization algorithm to handle the discrete constraint in learning hash codes. In the second stage, given the hash codes learned in the first stage, DMFH uses kernel logistic regression to learn nonlinear features from unseen instances, and then generates the corresponding hash codes for each modality. Extensive experimental results on three public benchmark datasets show that the proposed DMFH outperforms several state-of-the-art cross-modal hashing methods in both accuracy and efficiency.
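The quantization loss that motivates DMFH's discrete optimization can be made concrete: relaxing the binary constraint yields real-valued codes, and the gap between those and their sign-quantized counterparts is the loss. The relaxed code values below are assumptions for illustration:

```python
def sign_quantize(v):
    """Quantize a real-valued relaxed code to the discrete set {-1, +1}."""
    return [1.0 if x >= 0 else -1.0 for x in v]

def quantization_loss(v):
    """Squared error between a relaxed (real-valued) code and its
    sign-quantized binary counterpart; zero iff v is already discrete."""
    b = sign_quantize(v)
    return sum((bi - vi) ** 2 for bi, vi in zip(b, v))

assert quantization_loss([1.0, -1.0]) == 0.0   # already discrete: no loss
relaxed = [0.9, -0.8, 0.1, -0.2]               # a typical relaxed solution
assert quantization_loss(relaxed) > 0.0        # relaxation incurs loss
```

Discrete methods like DMFH optimize the codes directly over {-1, +1} so that this gap never arises, at the cost of a harder optimization problem.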
- Published
- 2021
33. Adversarial Tri-Fusion Hashing Network for Imbalanced Cross-Modal Retrieval
- Author
-
Bineng Zhong, Yiu-ming Cheung, Xin Liu, Yi He, and Zhikai Hu
- Subjects
Control and Optimization ,Computer science ,business.industry ,Process (engineering) ,Hash function ,Semantics ,Machine learning ,computer.software_genre ,Computer Science Applications ,Computational Mathematics ,Artificial Intelligence ,Benchmark (computing) ,Embedding ,Artificial intelligence ,Hamming space ,business ,Feature learning ,computer ,Semantic gap - Abstract
Cross-modal retrieval has received increasing attention for efficient retrieval across different modalities, and hashing techniques have made significant progress recently due to their low storage cost and high query speed. However, most existing cross-modal hashing works still face the challenges of narrowing the semantic gap between different modalities and of training with imbalanced multi-modal data. This article presents an efficient Adversarial Tri-Fusion Hashing Network (ATFH-N) for cross-modal retrieval, which is among the early attempts to incorporate adversarial learning for working with imbalanced multi-modal data. Specifically, a triple fusion network with a zero-padding operation is proposed to accommodate either balanced or imbalanced multi-modal training data. At the same time, an adversarial training mechanism is leveraged to maximally bridge the semantic gap between the common representations of balanced and imbalanced data. Further, a label prediction network is utilized to guide the feature learning process and promote hash code learning, while additionally embedding the manifold structure to preserve both inter-modal and intra-modal similarities. Through the joint exploitation of the above, the underlying semantic structure of multimedia data is well preserved in the Hamming space, which benefits various cross-modal retrieval tasks. Extensive experiments on three benchmark datasets show that the proposed ATFH-N method yields comparable performance in the balanced scenario and brings substantial improvements over state-of-the-art methods in imbalanced scenarios.
- Published
- 2021
34. Unsupervised Deep Quadruplet Hashing with Isometric Quantization for image retrieval
- Author
-
Lei Huang, Kezhen Xie, Qibing Qin, Jie Nie, Jinkui Hou, and Zhiqiang Wei
- Subjects
Information Systems and Management ,Artificial neural network ,Computer science ,business.industry ,05 social sciences ,Hash function ,050301 education ,Pattern recognition ,02 engineering and technology ,Computer Science Applications ,Theoretical Computer Science ,Semantic similarity ,Artificial Intelligence ,Control and Systems Engineering ,Feature (computer vision) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Binary code ,Artificial intelligence ,Quantization (image processing) ,Hamming space ,business ,0503 education ,Image retrieval ,Software - Abstract
Numerous studies have shown that deep hashing can facilitate large-scale image retrieval, since it employs neural networks to learn feature representations and binary codes simultaneously. Although supervised deep hashing has achieved great success under the guidance of label information, it is hardly applicable to real-world image retrieval applications because of its reliance on extensive human-annotated data. Furthermore, pair-wise or triplet-wise unsupervised hashing can hardly achieve satisfactory performance due to the absence of local similarity between image pairs. To solve these problems, we propose a novel unsupervised deep hashing framework that learns compact binary codes from quadruplet input units, called Unsupervised Deep Quadruplet Hashing with Isometric Quantization (UDQH-IQ). Specifically, by exploiting the rotation invariance of images, a novel quadruplet-based loss is designed to explore the underlying semantic similarity of image pairs, which preserves local similarity with neighbors in the Hamming space. To decrease quantization errors, Hamming-isometric quantization is exploited to maximize the consistency of semantic similarity between the binary-like embedding and the corresponding binary codes. To alleviate redundancy between bits, an orthogonality constraint is introduced to decorrelate the different bits of the binary codes. Experimental results on three benchmark datasets indicate that UDQH-IQ achieves promising performance.
- Published
- 2021
35. Label Consistent Flexible Matrix Factorization Hashing for Efficient Cross-modal Retrieval
- Author
-
Jun Yu, Xiaojun Wu, and Donglin Zhang
- Subjects
Similarity (geometry) ,Theoretical computer science ,Computer Networks and Communications ,Computer science ,05 social sciences ,Hash function ,02 engineering and technology ,Matrix decomposition ,Discriminative model ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Binary code ,0509 other social sciences ,050904 information & library sciences ,Representation (mathematics) ,Hamming space ,Subspace topology - Abstract
Hashing methods have sparked a revolution in large-scale cross-media search due to their effectiveness and efficiency. Most existing approaches learn a unified hash representation in a common Hamming space for all multimodal data. However, unified hash codes may not characterize cross-modal data discriminatively, because the modalities can vary greatly in dimensionality, physical properties, and statistical information. In addition, most existing supervised cross-modal algorithms preserve the similarity relationship by constructing an n × n pairwise similarity matrix, which requires a large amount of computation and loses the category information. To mitigate these issues, a novel cross-media hashing approach is proposed in this article, dubbed label flexible matrix factorization hashing (LFMH). Specifically, LFMH jointly learns modality-specific latent subspaces with similar semantics via flexible matrix factorization. In addition, LFMH guides hash learning by utilizing the semantic labels directly instead of the large n × n pairwise similarity matrix. LFMH transforms the heterogeneous data into modality-specific latent semantic representations; we then obtain the hash codes by quantizing these representations, and the learned hash codes are consistent with the supervised labels of the multimodal data. The binary codes of the corresponding modality can thus characterize such samples flexibly. Accordingly, the derived hash codes have more discriminative power for single-modal and cross-modal retrieval tasks. Extensive experiments on eight databases demonstrate that our model outperforms several competitive approaches.
- Published
- 2021
36. Learning discrete class-specific prototypes for deep semantic hashing
- Author
-
Jinmeng Wu, Xuan Li, Lei Ma, Zhenghua Huang, Likun Huang, and Yu Shi
- Subjects
0209 industrial biotechnology ,Class (computer programming) ,Theoretical computer science ,Computer science ,Cognitive Neuroscience ,Hash function ,Multi-task learning ,02 engineering and technology ,Construct (python library) ,Computer Science Applications ,020901 industrial engineering & automation ,Discriminative model ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Hamming space ,Image retrieval ,Computer Science::Databases ,Computer Science::Cryptography and Security ,Semantic gap - Abstract
Deep supervised hashing methods have become popular for large-scale image retrieval tasks. Recently, some of these methods have utilized the semantic clustering of hash codes to improve their semantic discriminative ability and polymerization. However, there exists a semantic gap between the hash codes learned from visual features and the semantic labels, which weakens the generalization ability of these methods. In addition, the manifold structure of the hash codes in the Hamming space is ignored. In this paper, we propose a novel deep semantic hashing method that learns discrete class-specific prototypes (DCPH). Specifically, we utilize the label information to learn discrete class-specific prototypes as intermediate semantic representations of the semantic labels, which reduces the semantic gap between the semantic labels and the hash codes and improves the correlation between the class-specific prototypes and the hash codes. Subsequently, we construct a bipartite graph to build a coarse semantic neighborhood relationship between the hash codes and the class-specific prototypes, preserving the manifold structural information. Moreover, we utilize the pairwise supervised information to construct a fine semantic neighborhood relationship between the hash codes. Finally, we propose a novel hashing loss based on a multi-task learning framework to incorporate these components into an end-to-end one-stream deep neural network architecture. Experimental results on several large-scale datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods.
- Published
- 2021
37. DHLBT: Efficient Cross-Modal Hashing Retrieval Method Based on Deep Learning Using Large Batch Training
- Author
-
Xuewang Zhang, Jinzhao Lin, and Yin Zhou
- Subjects
0209 industrial biotechnology ,Batch training ,Modalities ,Computer Networks and Communications ,Computer science ,business.industry ,Deep learning ,Hash function ,02 engineering and technology ,Machine learning ,computer.software_genre ,Computer Graphics and Computer-Aided Design ,Data mapping ,020901 industrial engineering & automation ,Modal ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Hamming space ,business ,computer ,Software - Abstract
Cross-modal hashing has attracted considerable attention because it enables rapid cross-modal retrieval by mapping data of different modalities into a common Hamming space. With the development of deep learning, more and more deep-learning-based cross-modal hashing methods have been proposed. However, most of these methods train the model with small batches, whereas large-batch training yields better gradient estimates and improves training efficiency. In this paper, we propose the DHLBT method, which uses large-batch training and introduces orthogonal regularization to improve the generalization ability of the model. Moreover, we account for the discreteness of hash codes by adding the distance between hash codes and features to the objective function. Extensive experiments on three benchmarks show that our method outperforms several existing hashing methods.
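Orthogonal regularization, as mentioned in this abstract, is commonly implemented as a Frobenius-norm penalty on the deviation of W^T W from the identity; the following is a minimal sketch under that assumption, not DHLBT's exact formulation:

```python
def orthogonal_penalty(W):
    """Frobenius-norm penalty ||W^T W - I||_F^2, which is zero exactly
    when the columns of the weight matrix W are orthonormal."""
    rows, cols = len(W), len(W[0])
    penalty = 0.0
    for i in range(cols):
        for j in range(cols):
            dot = sum(W[r][i] * W[r][j] for r in range(rows))
            target = 1.0 if i == j else 0.0
            penalty += (dot - target) ** 2
    return penalty

identity = [[1.0, 0.0], [0.0, 1.0]]
assert orthogonal_penalty(identity) == 0.0   # orthonormal columns: no penalty
skewed = [[1.0, 1.0], [0.0, 1.0]]
assert orthogonal_penalty(skewed) > 0.0      # correlated columns are penalized
```

In training, this penalty would be added to the retrieval loss with a small weight, discouraging redundant directions in the learned projection.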
- Published
- 2021
38. 基于映射字典学习的跨模态哈希检索.
- Author
-
姚涛, 孔祥维, 付海燕, and TIAN Qi
- Abstract
Copyright of Acta Automatica Sinica is the property of Chinese Academy of Sciences, Institute of Automation and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2018
- Full Text
- View/download PDF
39. 基于稀疏重构编码的图像检索算法.
- Author
-
胡鹏辉, 朱华平, and 王春艳
- Abstract
Copyright of Journal of Henan University of Science & Technology, Natural Science is the property of Editorial Office of Journal of Henan University of Science & Technology and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2018
- Full Text
- View/download PDF
40. CHOP: An orthogonal hashing method for zero-shot cross-modal retrieval
- Author
-
Zhikui Chen, Guangze Wang, Xu Yuan, and Fangming Zhong
- Subjects
Modality (human–computer interaction) ,business.industry ,Computer science ,Hash function ,02 engineering and technology ,Machine learning ,computer.software_genre ,01 natural sciences ,Discriminative model ,Artificial Intelligence ,0103 physical sciences ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,020201 artificial intelligence & image processing ,Binary code ,Computer Vision and Pattern Recognition ,Artificial intelligence ,010306 general physics ,Projection (set theory) ,Hamming space ,business ,computer ,Software - Abstract
Cross-modal retrieval has recently attracted much attention because it helps users retrieve data across different modalities. However, with the explosive growth of data, a large number of newly emerging concepts (unseen classes) that do not appear in the training data (seen classes) pose great challenges to traditional cross-modal retrieval. Most existing approaches mainly focus on improving cross-modal retrieval performance on seen classes and may fail on unseen classes. To address the challenge of zero-shot cross-modal retrieval, we propose an orthogonal method in this paper, Cross-modal Hashing with Orthogonal Projection (CHOP). It projects cross-modal features and class attributes onto a Hamming space, where the projection of each cross-modal feature is orthogonal to the mismatched class attributes. In this way, the model learns a discriminative binary representation of each modality. In addition, the class attributes build a bridge to transfer knowledge from seen classes to unseen classes. Furthermore, the orthogonal constraint on binary codes helps mitigate the hubness problem. Extensive experiments on three benchmark datasets show that the proposed CHOP effectively handles zero-shot cross-modal retrieval.
- Published
- 2021
41. Deep Multi-View Enhancement Hashing for Image Retrieval
- Author
-
Biao Gong, Yue Gao, Chenggang Yan, and Yuxuan Wei
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Nearest neighbor search ,Feature extraction ,Hash function ,Computer Science - Computer Vision and Pattern Recognition ,Stability (learning theory) ,02 engineering and technology ,computer.software_genre ,Machine Learning (cs.LG) ,Artificial Intelligence ,FOS: Electrical engineering, electronic engineering, information engineering ,0202 electrical engineering, electronic engineering, information engineering ,Hamming space ,Image retrieval ,Artificial neural network ,business.industry ,Applied Mathematics ,Deep learning ,Image and Video Processing (eess.IV) ,Electrical Engineering and Systems Science - Image and Video Processing ,Computational Theory and Mathematics ,Embedding ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Data mining ,Artificial intelligence ,business ,computer ,Software - Abstract
Hashing is an efficient method for nearest neighbor search in large-scale data spaces: it embeds high-dimensional feature descriptors into a similarity-preserving, low-dimensional Hamming space. However, large-scale high-speed retrieval with binary codes sacrifices some retrieval accuracy compared to traditional retrieval methods. Noting that multi-view methods preserve the diverse characteristics of data well, we introduce multi-view deep neural networks into the hash learning field and design an efficient and innovative retrieval model that achieves a significant improvement in retrieval performance. In this paper, we propose a supervised multi-view hash model that enhances the multi-view information through neural networks, a completely new hash learning method combining multi-view and deep learning approaches. The proposed method utilizes an effective view-stability evaluation to actively explore the relationships among views, which affects the optimization direction of the entire network. We also design a variety of multi-data fusion methods in the Hamming space to preserve the advantages of both convolution and multi-view learning. To avoid spending excessive computing resources on the enhancement procedure during retrieval, we set up a separate structure, called the memory network, which is trained jointly with the rest of the model. The proposed method is systematically evaluated on the CIFAR-10, NUS-WIDE, and MS-COCO datasets, and the results show that it significantly outperforms state-of-the-art single-view and multi-view hashing methods.
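The paper designs several fusion methods in the Hamming space; one simple baseline (an assumption for illustration, not the authors' method) is a per-bit majority vote across per-view binary codes:

```python
def majority_fuse(view_codes):
    """Fuse per-view binary codes (lists of 0/1 bits) into a single code
    by a per-bit majority vote across views; ties default to 1."""
    n_views = len(view_codes)
    fused = []
    for bits in zip(*view_codes):  # one column of bits per code position
        fused.append(1 if sum(bits) * 2 >= n_views else 0)
    return fused

# Three views of the same sample, each hashed to a 4-bit code.
views = [[1, 0, 1, 1],
         [1, 0, 0, 1],
         [1, 1, 0, 1]]
assert majority_fuse(views) == [1, 0, 0, 1]
```

A learned fusion, as in the paper, would instead weight views by their estimated stability rather than voting uniformly.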
- Published
- 2021
42. Links Between Discriminating and Identifying Codes in the Binary Hamming Space
- Author
-
Charon, Irène, Cohen, Gérard, Hudry, Olivier, Lobstein, Antoine, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Boztaş, Serdar, editor, and Lu, Hsiao-Feng (Francis), editor
- Published
- 2007
- Full Text
- View/download PDF
43. Deep Bayesian Hashing With Center Prior for Multi-Modal Neuroimage Retrieval
- Author
-
Chunfeng Lian, Pew Thian Yap, Dinggang Shen, Mingxia Liu, Erkun Yang, Dongren Yao, and Cao Bing
- Subjects
Similarity (geometry) ,Databases, Factual ,Radiological and Ultrasound Technology ,Computer science ,business.industry ,Hash function ,Bayesian probability ,Bayes Theorem ,Machine learning ,computer.software_genre ,Article ,030218 nuclear medicine & medical imaging ,Computer Science Applications ,Visualization ,03 medical and health sciences ,0302 clinical medicine ,Discriminative model ,Artificial intelligence ,Electrical and Electronic Engineering ,Hamming space ,business ,Image retrieval ,computer ,Software - Abstract
Multi-modal neuroimage retrieval has greatly facilitated the efficiency and accuracy of decision making in clinical practice by providing physicians with previous cases (with visually similar neuroimages) and corresponding treatment records. However, existing methods for image retrieval usually fail when applied directly to multi-modal neuroimage databases, since neuroimages generally have smaller inter-class variation and larger inter-modal discrepancy compared to natural images. To this end, we propose a deep Bayesian hash learning framework, called CenterHash, which can map multi-modal data into a shared Hamming space and learn discriminative hash codes from imbalanced multi-modal neuroimages. The key idea to tackle the small inter-class variation and large inter-modal discrepancy is to learn a common center representation for similar neuroimages from different modalities and encourage hash codes to be explicitly close to their corresponding center representations. Specifically, we measure the similarity between hash codes and their corresponding center representations and treat it as a center prior in the proposed Bayesian learning framework. A weighted contrastive likelihood loss function is also developed to facilitate hash learning from imbalanced neuroimage pairs. Comprehensive empirical evidence shows that our method can generate effective hash codes and yield state-of-the-art performance in cross-modal retrieval on three multi-modal neuroimage datasets.
- Published
- 2021
44. Towards Large-Scale Object Instance Search: A Multi-Block N-Ary Trie
- Author
-
Man-Gui Liang, Dong Feng, Xinfeng Zhang, Feng Gao, Yicheng Huang, and Ling-Yu Duan
- Subjects
Computer science ,Search engine indexing ,02 engineering and technology ,Object (computer science) ,Hash table ,k-nearest neighbors algorithm ,Trie ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,020201 artificial intelligence & image processing ,Binary code ,Electrical and Electronic Engineering ,Hamming space ,Algorithm ,Block (data storage) - Abstract
Object instance search is a challenging task with a wide range of applications, but fast search with high accuracy has not yet been well solved. In this paper, we investigate object instance search from a new perspective, jointly optimizing precision and computational cost, and propose a novel index structure, the Multi-Block N-ary Trie (MBNT), to accelerate exact $r$-neighbor search in the Hamming space. Comprehensive studies are first carried out to characterize the performance of exact and approximate nearest neighbor (NN) algorithms for object instance search; an interesting finding is that exact search is more promising for very compact binary codes (e.g., 64-bit and 128-bit). Along this vein, we introduce a trie structure, MBNT, specifically designed to improve exact NN search performance in the context of large-scale object instance search. To index the binary codes, a subset of contiguous bits of a binary string, denoted a block, is regarded as the atomic indexing element; this addresses the problem of lookup misses. Theoretical analyses show that our MBNT scheme incurs less computational cost than other hash-table-based methods. Extensive experimental results on the 100M dataset demonstrate that our method achieves faster search while maintaining promising search precision for large-scale object instance search.
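The block-based indexing idea behind exact $r$-neighbor search can be sketched with plain hash tables instead of a trie: by the pigeonhole principle, any code within Hamming distance r < m of the query must match it exactly in at least one of m disjoint bit blocks, so only those hash-table buckets need to be verified. The constants and toy data below are illustrative assumptions, not MBNT itself:

```python
from collections import defaultdict

BITS, BLOCKS = 16, 4
WIDTH = BITS // BLOCKS

def split(code):
    """Split a BITS-bit integer code into BLOCKS contiguous WIDTH-bit blocks."""
    return [(code >> (i * WIDTH)) & ((1 << WIDTH) - 1) for i in range(BLOCKS)]

def build_index(codes):
    """One hash table per block, mapping block value -> list of code ids."""
    tables = [defaultdict(list) for _ in range(BLOCKS)]
    for idx, c in enumerate(codes):
        for t, block in zip(tables, split(c)):
            t[block].append(idx)
    return tables

def r_neighbors(tables, codes, q, r):
    """Exact r-neighbor search: gather candidates that match q exactly in
    some block (pigeonhole, valid for r < BLOCKS), then verify by distance."""
    assert r < BLOCKS
    cand = set()
    for t, block in zip(tables, split(q)):
        cand.update(t.get(block, []))
    return sorted(i for i in cand if bin(codes[i] ^ q).count("1") <= r)

codes = [0b1010101010101010, 0b1010101010101011, 0b0101010101010101]
tables = build_index(codes)
assert r_neighbors(tables, codes, 0b1010101010101010, r=1) == [0, 1]
```

A trie such as MBNT organizes the same block values hierarchically, which avoids the lookup misses that flat hash tables incur on sparse buckets.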
- Published
- 2021
45. Deep Fuzzy Hashing Network for Efficient Image Retrieval
- Author
-
Huimin Lu, Yujie Li, Xing Xu, Ming Zhang, and Heng Tao Shen
- Subjects
Artificial neural network ,business.industry ,Computer science ,Applied Mathematics ,Hash function ,Feature extraction ,Pattern recognition ,Hamming distance ,02 engineering and technology ,Data structure ,Fuzzy logic ,Computational Theory and Mathematics ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Hamming space ,business ,Image retrieval - Abstract
Hashing methods for efficient image retrieval aim to learn hash functions that map similar images to semantically correlated binary codes in the Hamming space, with similarity well preserved. Traditional hashing methods usually represent image content by hand-crafted features. Deep hashing methods based on deep neural network (DNN) architectures can generate more effective image features and obtain better retrieval performance. However, the underlying data structure is hardly captured by existing DNN models. Moreover, the similarity (whether visual or semantic) between pairwise images is ambiguous, even uncertain, as measured by existing deep hashing methods. In this article, we propose a novel hashing method termed deep fuzzy hashing network (DFHN) to overcome these shortcomings. DFHN combines fuzzy logic with a DNN to learn more effective binary codes, leveraging fuzzy rules to model the uncertainties underlying the data. Derived from fuzzy logic theory, a generalized Hamming distance is devised for the convolutional and fully connected layers of DFHN to model their outputs, which result from an efficient XOR operation on the given inputs and weights. Extensive experiments show that DFHN obtains competitive retrieval accuracy with highly efficient training compared with several state-of-the-art deep hashing approaches on two large-scale image datasets: CIFAR-10 and NUS-WIDE.
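The generalized Hamming distance in this abstract is derived from fuzzy logic; a common fuzzy XOR formulation is x XOR y = x + y - 2xy, which reduces to the classical XOR on {0, 1}. The sketch below uses this formulation as an assumption, since the paper's exact definition is not given here:

```python
def fuzzy_xor(x, y):
    """Fuzzy-logic XOR: x + y - 2xy. For x, y in {0, 1} this equals the
    classical XOR; for values in [0, 1] it varies smoothly."""
    return x + y - 2 * x * y

def generalized_hamming(u, v):
    """Generalized Hamming distance: the sum of element-wise fuzzy XORs.
    On binary vectors it reduces to the ordinary Hamming distance."""
    return sum(fuzzy_xor(a, b) for a, b in zip(u, v))

assert generalized_hamming([1, 0, 1, 1], [1, 1, 0, 1]) == 2  # classical case
soft = generalized_hamming([0.9, 0.1], [0.1, 0.9])           # soft activations
assert 0 < soft < 2  # disagreement is graded rather than all-or-nothing
```

Using such a smooth distance inside network layers keeps the comparison differentiable, which is what allows a fuzzy-logic distance to be trained end to end.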
- Published
- 2021
46. Learning Efficient Hash Codes for Fast Graph-Based Data Similarity Retrieval
- Author
-
Jingkuan Song, Ling Shao, Shuo Xu, Feng Zheng, Jinbao Wang, and Ke Lu
- Subjects
Theoretical computer science ,Similarity (geometry) ,Data retrieval ,Computer science ,Hash function ,Graph (abstract data type) ,Hamming space ,Representation (mathematics) ,Computer Graphics and Computer-Aided Design ,Software ,Field (computer science) ,Data modeling - Abstract
Traditional operations, e.g., graph edit distance (GED), are no longer suitable for processing the massive quantities of graph-structured data now available, due to the data's irregular structure and the operations' high computational complexity. With the advent of graph neural networks (GNNs), graph representation and graph similarity search have drawn particular attention in the field of computer vision. However, GNNs have been less studied for efficient and fast retrieval after graph representation. To represent graph-based data while maintaining fast retrieval, we introduce an efficient hash model with graph neural networks (HGNN) for a newly designed task, fast graph-based data retrieval. Owing to its flexibility, HGNN can be implemented in either an unsupervised or a supervised manner. Specifically, by adopting a graph neural network and hash learning algorithms, HGNN can effectively learn a similarity-preserving graph representation and compute pairwise similarity, or provide classification, via low-dimensional compact hash codes. To the best of our knowledge, our model is the first to address graph hashing representation in the Hamming space. Our experimental results reach prediction accuracy comparable to full-precision methods and can even outperform traditional models in some cases. In real-world applications, hash codes can greatly benefit systems with small memory capacities and accelerate the retrieval of graph-structured data. Hence, we believe the proposed HGNN has great potential for further research.
- Published
- 2021
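The retrieval speedup the abstract attributes to Hamming-space hash codes comes from the fact that pairwise similarity reduces to an XOR plus a popcount. A minimal sketch follows; the 8-bit codes and database names are hypothetical placeholders, not output of HGNN.

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two binary hash codes."""
    return bin(a ^ b).count("1")

# Hypothetical 8-bit codes standing in for learned graph hash codes.
database = {
    "graph_A": 0b10110010,
    "graph_B": 0b10110011,
    "graph_C": 0b01001101,
}

def nearest(query_code: int) -> str:
    """Linear scan for brevity; the per-pair cost is a single XOR + popcount."""
    return min(database, key=lambda name: hamming_distance(query_code, database[name]))

print(nearest(0b10110000))  # -> graph_A (Hamming distance 1)
```

Even this naive scan is far cheaper than comparing full-precision graph embeddings, which is the memory and speed benefit the abstract points to.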
47. Distributed Complementary Binary Quantization for Joint Hash Table Learning
- Author
-
Qiang Fu, Dacheng Tao, Xiao Bai, Xinyu Wu, Xianglong Liu, and Deqing Wang
- Subjects
Theoretical computer science ,Computer Networks and Communications ,Computer science ,Nearest neighbor search ,Quantization (signal processing) ,Search engine indexing ,Hash function ,02 engineering and technology ,Hash table ,Computer Science Applications ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Binary code ,Hamming space ,Software - Abstract
Building multiple hash tables is a very successful technique for indexing gigantic datasets, as it can simultaneously guarantee both search accuracy and efficiency. However, most existing multi-table indexing solutions, lacking informative hash codes and strong table complementarity, suffer heavily from table redundancy. To address this problem, we propose a complementary binary quantization (CBQ) method for jointly learning multiple tables and the corresponding informative hash functions in a centralized way. Based on CBQ, we further design a distributed learning algorithm (D-CBQ) to accelerate training over large-scale distributed datasets. The proposed (D-)CBQ exploits the power of prototype-based incomplete binary coding to align the data distributions in the original space and the Hamming space, and further utilizes the nature of multi-index search to jointly reduce the quantization loss. (D-)CBQ possesses several attractive properties, including extensibility for generating long hash codes in the product space and scalability with linear training time. Extensive experiments on two popular large-scale tasks, Euclidean and semantic nearest neighbor search, demonstrate that the proposed (D-)CBQ enjoys efficient computation, informative binary quantization, and strong table complementarity, which together help it significantly outperform the state of the art, with relative performance gains of up to 57.76%.
- Published
- 2020
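The multi-index search the abstract builds on can be sketched as follows: split each binary code into disjoint substrings and build one hash table per substring, so that a query retrieves any item that collides with it exactly in at least one substring. This is a generic illustration of multi-table indexing, not the CBQ algorithm itself; the codes and names are made up.

```python
from collections import defaultdict

def substrings(code: str, m: int):
    """Split a binary code string into m equal-length disjoint substrings."""
    step = len(code) // m
    return [code[i * step:(i + 1) * step] for i in range(m)]

def build_tables(codes: dict, m: int):
    """One hash table per substring position, mapping substring -> item names."""
    tables = [defaultdict(set) for _ in range(m)]
    for name, code in codes.items():
        for table, sub in zip(tables, substrings(code, m)):
            table[sub].add(name)
    return tables

def candidates(query: str, tables, m: int):
    """Union of exact-match collisions across all tables."""
    found = set()
    for table, sub in zip(tables, substrings(query, m)):
        found |= table[sub]
    return found

codes = {"x": "10110010", "y": "10111111", "z": "01000001"}
tables = build_tables(codes, m=2)
print(candidates("10110000", tables, m=2))  # x and y collide on the first substring
```

By the pigeonhole principle, any item within Hamming distance m-1 of the query must match it exactly in at least one of the m substrings, so the candidate set is guaranteed to contain all close neighbors.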
48. Joint Multi-View Hashing for Large-Scale Near-Duplicate Video Retrieval
- Author
-
Chen Jason Zhang, Lei Zhu, Yilong Yin, Weizhen Jing, Xiushan Nie, and Chaoran Cui
- Subjects
Scheme (programming language) ,Thesaurus (information retrieval) ,Theoretical computer science ,Computer science ,Feature extraction ,Search engine indexing ,Hash function ,Computer Science Applications ,Computational Theory and Mathematics ,Hamming space ,Joint (audio engineering) ,Time complexity ,computer ,Information Systems ,computer.programming_language - Abstract
Multi-view hashing can well support large-scale near-duplicate video retrieval, thanks to its desirable advantages: mutual reinforcement of multiple features, low storage cost, and fast retrieval speed. However, two limitations still impede its performance. First, existing methods consider only local structures in multiple features. They ignore the global structure that is important for near-duplicate video retrieval and cannot fully exploit the dependence and complementarity of multiple features. Second, existing works always learn hash functions bit by bit, which unfortunately increases the time complexity of hash function learning. In this paper, we propose a supervised hashing scheme, termed joint multi-view hashing (JMVH), to address these problems. It jointly preserves the global and local structures of multiple features while learning hash functions efficiently. Specifically, JMVH treats the features of a video as items, based on which an underlying Hamming space is learned by simultaneously preserving their local and global structures. In addition, a simple but efficient multi-bit hash function learning method based on generalized eigenvalue decomposition is devised to learn multiple hash functions within a single step. It significantly reduces the time complexity of conventional hash function learning processes that learn multiple hash functions sequentially, bit by bit. The proposed JMVH is evaluated on two public databases, CC_WEB_VIDEO and UQ_VIDEO. Experimental results demonstrate that JMVH achieves more than a 5 percent improvement over several state-of-the-art methods, which indicates its superior performance.
- Published
- 2020
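The single-step multi-bit idea described in the abstract, obtaining all projection directions from one eigendecomposition rather than learning bits sequentially, can be sketched as below. The covariance matrix, dimensions, and sign thresholding here are illustrative placeholders, not JMVH's actual objective or its generalized eigenvalue formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 16))   # 100 items with 16-dim (placeholder) features
C = X.T @ X / len(X)                 # placeholder structure-preserving matrix

k = 8                                # number of hash bits to learn
eigvals, eigvecs = np.linalg.eigh(C)
W = eigvecs[:, -k:]                  # top-k eigenvectors: all k projections at once

def hash_codes(feats):
    """Sign-threshold the projections to produce a k-bit code per item."""
    return (feats @ W > 0).astype(np.uint8)

codes = hash_codes(X)
print(codes.shape)  # (100, 8)
```

The point of the sketch is the complexity argument: one eigendecomposition yields all k directions, whereas bit-by-bit schemes solve k dependent optimization problems in sequence.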
49. Deep Co-Image-Label Hashing for Multi-Label Image Retrieval
- Author
-
Long Lan, Quansen Sun, Ivor W. Tsang, Yuhui Zheng, Guohua Dong, and Xiaobo Shen
- Subjects
Dependency (UML) ,Exploit ,Artificial neural network ,Computer science ,business.industry ,Hash function ,Pattern recognition ,Computer Science Applications ,Image (mathematics) ,ComputingMethodologies_PATTERNRECOGNITION ,Similarity (network science) ,Signal Processing ,Media Technology ,08 Information and Computing Sciences, 09 Engineering ,Artificial Intelligence & Image Processing ,Artificial intelligence ,Electrical and Electronic Engineering ,Hamming space ,business ,Image retrieval - Abstract
Deep supervised hashing has greatly improved retrieval performance thanks to the powerful learning capability of deep neural networks. In multi-label image retrieval, existing deep hashing methods simply indicate whether two images are similar by constructing a similarity matrix. However, this ignores the dependency among multiple labels, which has been shown to be important in multi-label applications. To fill this gap, this paper proposes Deep Co-Image-Label Hashing (DCILH) to discover label dependency. Specifically, DCILH regards images and labels as two views and maps both into a common deep Hamming space. DCILH learns a prototype for each label and preserves similarity among images, labels, and prototypes. To exploit label dependency, DCILH further employs a label-correlation-aware loss on the predicted labels, such that the predicted output on a positive label is enforced to be larger than that on a negative label. Extensive experiments on several multi-label benchmarks demonstrate that the proposed DCILH outperforms state-of-the-art deep supervised hashing methods on large-scale multi-label image retrieval.
- Published
- 2022
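The constraint the abstract describes, forcing predicted scores on positive labels above those on negative labels, is the shape of a pairwise ranking (hinge) loss. A minimal sketch follows; the function name, margin value, and example scores are illustrative, not DCILH's exact formulation.

```python
import numpy as np

def ranking_loss(scores, labels, margin=1.0):
    """Hinge loss over every (positive, negative) label pair of one sample.

    scores: (n_labels,) predicted scores; labels: (n_labels,) 0/1 ground truth.
    Zero when every positive score exceeds every negative score by >= margin.
    """
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    gaps = margin - (pos[:, None] - neg[None, :])   # one gap per (pos, neg) pair
    return np.maximum(gaps, 0.0).mean()

scores = np.array([2.0, -1.0, 0.5])
labels = np.array([1, 0, 1])
print(ranking_loss(scores, labels))  # 0.0: both positives clear the negative by >= 1
```

Minimizing such a loss pushes correlated positive labels toward jointly high scores, which is how a ranking formulation encodes label dependency beyond a per-label threshold.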
50. Weak isometries of Hamming spaces.
- Author
-
Bruner, Ryan and De Winter, Stefaan
- Subjects
ISOMETRICS (Mathematics) ,PERMUTATIONS ,LINEAR algebra ,VECTOR spaces ,MATHEMATICAL analysis - Abstract
Consider any permutation of the elements of a (finite) metric space that preserves a specific distance p. When is such a permutation automatically an isometry of the metric space? In this note we study this problem for the Hamming spaces H(n; q), from both a linear-algebraic and a combinatorial point of view. We obtain some sufficient conditions for the question to have an affirmative answer, as well as pose some interesting open problems. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
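The question posed in the last abstract can be checked by brute force on a tiny case. The sketch below enumerates all 24 permutations of H(2, 2) and compares the permutations preserving only distance p = 2 against the full isometries; the parameters are chosen small purely so exhaustive enumeration is feasible.

```python
from itertools import permutations, product

points = list(product((0, 1), repeat=2))   # the four words of H(2, 2)

def d(u, v):
    """Hamming distance between two words."""
    return sum(a != b for a, b in zip(u, v))

def preserves(perm, p=None):
    """p=None: full isometry check; otherwise check only pairs at distance p."""
    m = dict(zip(points, perm))
    return all(
        d(m[u], m[v]) == d(u, v)
        for u, v in product(points, repeat=2)
        if p is None or d(u, v) == p
    )

weak = [s for s in permutations(points) if preserves(s, p=2)]
full = [s for s in permutations(points) if preserves(s)]
print(len(weak), len(full))  # 8 8: here every distance-2-preserving permutation is an isometry
```

In this tiny space the answer is affirmative: the 8 permutations preserving distance 2 coincide exactly with the 8 isometries, consistent with the kind of sufficient condition the note investigates for general H(n; q).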