268 results for "Liu, Lingqiao"
Search Results
252. Discriminative Brain Effective Connectivity Analysis for Alzheimer's Disease: A Kernel Learning Approach upon Sparse Gaussian Bayesian Network
- Author
-
Zhou, Luping, primary, Wang, Lei, additional, Liu, Lingqiao, additional, Ogunbona, Philip, additional, and Shen, Dinggang, additional
- Published
- 2013
- Full Text
- View/download PDF
253. Exploring latent class information for image retrieval using the bag-of-feature model
- Author
-
Liu, Lingqiao, primary and Wang, Lei, additional
- Published
- 2011
- Full Text
- View/download PDF
254. A generalized probabilistic framework for compact codebook creation
- Author
-
Liu, Lingqiao, primary, Wang, Lei, additional, and Shen, Chunhua, additional
- Published
- 2011
- Full Text
- View/download PDF
255. Image registration based on feature extraction and voting strategy
- Author
-
Qian, Wei, primary, Fu, Zhizhong, additional, Liu, Lingqiao, additional, and Deng, Zaiqiang, additional
- Published
- 2008
- Full Text
- View/download PDF
256. Motion Estimation for Video Stabilization Based on Feature Points and Parameter Space Method
- Author
-
Liu, Lingqiao, primary, Fu, Zhizhong, additional, and Deng, Zaiqiang, additional
- Published
- 2008
- Full Text
- View/download PDF
257. Human computer interaction research and realization based on leg movement analysis.
- Author
-
Fu, Zhizhong, Liu, Lingqiao, Xian, Haiying, and Xu, Jin
- Published
- 2010
- Full Text
- View/download PDF
258. Challenges of Automating Interior Construction Progress Monitoring.
- Author
-
Zhang, Yanquan, Chang, Ruidong, Mao, Weian, Zuo, Jian, Liu, Lingqiao, and Han, Yilong
- Subjects
- ACQUISITION of data; RESEARCH personnel
- Abstract
Automated interior construction progress monitoring (ICPM) has gained increasing academic attention. This emerging research field faces numerous technical challenges that have been noted in previous studies but lack a holistic examination of these challenges and their potential impacts. This study addresses this gap by conducting a systematic review of ICPM technical challenges, collecting related literature from the Scopus, Web of Science, and ScienceDirect databases, and utilizing the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework for filtering and selecting literature. The filtering yielded 44 strongly related technical papers. Alongside summarizing these challenges, the study explores their impacts on the entire ICPM automation process and proposes innovative solutions. Specifically, this study highlights the key phases of ICPM automation development, including data acquisition, 3D reconstruction, as-planned modeling, as-built modeling, progress comparison, and progress quantification, and subsequently identifies challenges for each phase, totaling 11 major challenges composed of 41 subchallenges. The data acquisition phase is found to have the most numerous and most severe challenges, and challenges in other phases also impact automation system performance. This review identifies potential issues and proposes corresponding solutions, enabling future researchers to anticipate challenges and develop more advanced and user-friendly monitoring systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
259. A Causal Inspired Early-Branching Structure for Domain Generalization.
- Author
-
Chen, Liang, Zhang, Yong, Song, Yibing, Zhang, Zhen, and Liu, Lingqiao
- Subjects
- PROBLEM solving; STATISTICAL sampling; GENERALIZATION; SIMPLICITY; FORECASTING
- Abstract
Learning domain-invariant semantic representations is crucial for achieving domain generalization (DG), where a model is required to perform well on unseen target domains. One critical challenge is that standard training often results in entangled semantic and domain-specific features. Previous works suggest formulating the problem from a causal perspective and solving the entanglement problem by enforcing marginal independence between the causal (i.e., semantic) and non-causal (i.e., domain-specific) features. Despite its simplicity, the basic marginal-independence-based idea alone may be insufficient to identify the causal feature. By d-separation, we observe that the causal feature can be further characterized by being independent of the domain conditioned on the object, and we propose the following two strategies as complements for the basic framework. First, the observation implicitly implies that for the same object, the causal feature should not be associated with the non-causal feature, revealing that the common practice of obtaining the two features with a shared base feature extractor and two lightweight prediction heads might be inappropriate. To meet the constraint, we propose a simple early-branching structure, where the causal and non-causal feature-obtaining branches share the first few blocks while diverging thereafter, for better structure design. Second, the observation implies that the causal feature remains invariant across different domains for the same object. To this end, we suggest that augmentation should be incorporated into the framework to better characterize the causal feature, and we further suggest an effective random domain sampling scheme to fulfill the task. Theoretical and experimental results show that the two strategies are beneficial for the basic marginal-independence-based framework. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
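The early-branching idea in this abstract can be illustrated with a minimal forward pass: the causal (semantic) and non-causal (domain-specific) branches share only the first block of the network and diverge thereafter. This is a hypothetical NumPy sketch with made-up layer sizes, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical layer sizes; W_shared models the "first few blocks" common to
# both branches, the other two matrices the diverging branch-specific parts.
W_shared = 0.1 * rng.standard_normal((8, 16))   # shared early block
W_causal = 0.1 * rng.standard_normal((16, 4))   # semantic (causal) branch
W_domain = 0.1 * rng.standard_normal((16, 4))   # domain-specific branch

def forward(x):
    h = relu(x @ W_shared)            # low-level features computed once, shared
    return h @ W_causal, h @ W_domain # two features from diverging branches

x = rng.standard_normal((2, 8))
f_causal, f_domain = forward(x)
print(f_causal.shape, f_domain.shape)
```

The design point is that the two features stop sharing parameters after the early block, so the causal feature is not forced to co-vary with the domain feature the way a fully shared extractor with two lightweight heads would force it to.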
260. ReFs: A hybrid pre-training paradigm for 3D medical image segmentation.
- Author
-
Xie, Yutong, Zhang, Jianpeng, Liu, Lingqiao, Wang, Hu, Ye, Yiwen, Verjans, Johan, and Xia, Yong
- Subjects
- IMAGE segmentation; THREE-dimensional imaging; DIAGNOSTIC imaging
- Abstract
Self-supervised learning (SSL) has achieved remarkable progress in medical image segmentation. The application of an SSL algorithm often follows a two-stage training process: using unlabeled data to perform label-free representation learning and fine-tuning the pre-trained model on the downstream tasks. One issue of this paradigm is that the SSL step is unaware of the downstream task, which may lead to sub-optimal feature representation for a target task. In this paper, we propose a hybrid pre-training paradigm that is driven by both self-supervised and supervised objectives. To achieve this, a supervised reference task is involved in self-supervised learning, aiming to improve the representation quality. Specifically, we employ the off-the-shelf medical image segmentation task as reference, and encourage learning a representation that (1) incurs low prediction loss on both SSL and reference tasks and (2) leads to a similar gradient when updating the feature extractor from either task. In this way, the reference task pilots SSL in the direction beneficial for the downstream segmentation. To this end, we propose a simple but effective gradient matching method to optimize the model towards a consistent direction, thus improving the compatibility of both SSL and supervised reference tasks. We call this hybrid pre-training paradigm reference-guided self-supervised learning (ReFs), and perform it on a large-scale unlabeled dataset and an additional reference dataset. The experimental results demonstrate its effectiveness on seven downstream medical image segmentation benchmarks. • The performance of self-supervised learning (SSL) on downstream tasks is hindered by a gap in representation. • ReFs uses reference tasks to steer SSL towards downstream-friendly directions. • Gradient matching enhances SSL and reference task compatibility, boosting ReFs. • ReFs outperforms SSL in downstream segmentation tasks, as shown by experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
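The gradient matching step described in this abstract, optimizing the model towards a direction consistent for both the SSL and reference tasks, can be sketched as follows. The conflict check and projection here are one plausible realization (borrowed from conflict-resolving schemes such as PCGrad), not necessarily the exact objective used by ReFs.

```python
import numpy as np

def match_gradients(g_ssl, g_ref):
    """Combine the SSL and reference-task gradients into a consistent update.
    If they conflict (negative inner product), project the SSL gradient onto
    the normal plane of the reference gradient first. This is a simplified,
    hypothetical stand-in for the paper's gradient matching method."""
    dot = float(g_ssl @ g_ref)
    if dot < 0.0:
        g_ssl = g_ssl - dot / float(g_ref @ g_ref) * g_ref
    return g_ssl + g_ref  # combined update direction

# Toy 2-D gradients that initially conflict:
g_ssl = np.array([1.0, -2.0])
g_ref = np.array([1.0, 1.0])
g = match_gradients(g_ssl, g_ref)
print(g)  # the combined direction no longer opposes the reference gradient
```

After projection the combined update has a positive inner product with the reference-task gradient, so the SSL step can no longer pull the feature extractor in a direction harmful to the downstream-like reference task.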
261. Center Prediction Loss for Re-identification.
- Author
-
Yang, Lu, Wang, Yunlong, Liu, Lingqiao, Wang, Peng, and Zhang, Yanning
- Subjects
- CALL centers; FORECASTING
- Abstract
• We propose a new intra-class loss called Center Prediction Loss (CPL). To the best of our knowledge, it is the first attempt to use the property of center predictivity as the loss function. • We show that the CPL allows more freedom for choosing the intra-class distribution family and can naturally preserve the discrimination between samples from different classes. • Extensive experiments on various ReID benchmarks show that the proposed loss can achieve superior performance and can also be complementary to existing losses. We also achieve new state-of-the-art performance on multiple ReID benchmarks. The training loss function that enforces certain training sample distribution patterns plays a critical role in building a re-identification (ReID) system. Besides the basic requirement of discrimination, i.e., the features corresponding to different identities should not be mixed, additional intra-class distribution constraints, such as features from the same identities should be close to their centers, have been adopted to construct losses. Despite the advances of various new loss functions, it is still challenging to strike the balance between the need of reducing the intra-class variation and allowing certain distribution freedom. Traditional intra-class losses try to shrink samples of the same class into one point in the feature space and may easily drop their intra-class similarity structure. In this paper, we propose a new loss based on center predictivity, that is, a sample must be positioned in a location of the feature space such that from it we can roughly predict the location of the center of same-class samples. The prediction error is then regarded as a loss called Center Prediction Loss (CPL). Unlike most existing metric learning loss functions, CPL involves learnable parameters, i.e., the center predictor, which brings a remarkable change in the properties of the loss.
In particular, it allows higher freedom in intra-class distributions, and the parameters in CPL are discarded after training. Extensive experiments on various real-world ReID datasets show that the proposed loss can achieve superior performance and can also be complementary to existing losses. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
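The center-predictivity idea above can be sketched numerically: each feature must allow a learnable predictor to recover its class center, and the squared prediction error is the loss. Modeling the predictor as a single linear map is a simplifying assumption for illustration, not the authors' implementation.

```python
import numpy as np

def center_prediction_loss(feats, labels, centers, P):
    """Center Prediction Loss (sketch): predict each sample's class center
    from its feature via the learnable map P, and penalize the squared
    prediction error. P is discarded after training in the paper's setup."""
    pred = feats @ P                  # predicted class centers
    target = centers[labels]          # ground-truth center for each sample
    return float(np.mean(np.sum((pred - target) ** 2, axis=1)))

rng = np.random.default_rng(1)
feats = rng.standard_normal((6, 4))
labels = np.array([0, 0, 1, 1, 2, 2])
centers = rng.standard_normal((3, 4))
P = np.eye(4)  # with an identity predictor, CPL reduces to a plain center loss
print(center_prediction_loss(feats, labels, centers, P))
```

Note the extra freedom the learnable predictor buys: samples need not collapse onto the center itself, only onto some configuration from which P can recover the center, so the intra-class similarity structure can survive.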
262. Privileged multi-task learning for attribute-aware aesthetic assessment.
- Author
-
Shu, Yangyang, Li, Qian, Liu, Lingqiao, and Xu, Guandong
- Subjects
- DEEP learning; AESTHETICS
- Abstract
• We propose the first unified approach to model the multiple complex dependencies in a photo for aesthetic assessment. • We employ privileged information during training and incorporate auxiliary aesthetic photo features to assist aesthetics prediction within a deep learning architecture. • We propose to employ adversarial learning to serve as an additional view that refines the final aesthetics assessment performance. Aesthetic attributes are crucial for aesthetics because they explicitly present some photo quality cues that a human expert might use to evaluate a photo's aesthetic quality. However, aesthetic attributes have not been largely and sufficiently exploited for photo aesthetic assessment. In this paper, we propose a novel approach to photo aesthetic assessment with the help of aesthetic attributes. The aesthetic attributes are used as privileged information (PI), which is often available during the training phase but unavailable in the prediction phase due to the high collection expense. The proposed framework consists of a deep multi-task network as generator and a fully connected network as discriminator. The deep multi-task network learns the aesthetic attributes and score simultaneously to capture their dependencies and extract better feature representations. Specifically, we use a ranking constraint in the label space, and a similarity constraint and prior probabilities loss in the privileged information space, to make the output of the multi-task network converge to that of the ground truth. An adversarial loss is used to identify and distinguish the predicted privileged information of the deep multi-task network from the ground-truth PI distribution. Experimental results on two benchmark databases demonstrate the superiority of the proposed method over the state-of-the-art. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
263. Structured Binary Neural Networks for Image Recognition.
- Author
-
Zhuang, Bohan, Shen, Chunhua, Tan, Mingkui, Chen, Peng, Liu, Lingqiao, and Reid, Ian
- Subjects
- DEEP learning; IMAGE recognition (Computer vision); OBJECT recognition (Computer vision); CONVOLUTIONAL neural networks; MOBILE learning
- Abstract
In this paper, we propose to train binarized convolutional neural networks (CNNs) that are of significant importance for deploying deep learning to mobile devices with limited power capacity and computing resources. Previous works on quantizing CNNs often seek to approximate the floating-point information of weights and/or activations using a set of discrete values. Such methods, termed value approximation here, typically are built on the same network architecture of the full-precision counterpart. Instead, we take a new "structured approximation" view for network quantization: it is possible and valuable to exploit flexible architecture transformation when learning low-bit networks, which can achieve even better performance than the original networks in some cases. In particular, we propose a "group decomposition" strategy, termed GroupNet, which divides a network into desired groups. Interestingly, with our GroupNet strategy, each full-precision group can be effectively reconstructed by aggregating a set of homogeneous binary branches. We also propose to learn effective connections among groups to improve the representation capability. To improve the model capacity, we propose to dynamically execute sparse binary branches conditioned on input features while preserving the computational cost. More importantly, the proposed GroupNet shows strong flexibility for a few vision tasks. For instance, we extend the GroupNet for accurate semantic segmentation by embedding the rich context into the binary structure. The proposed GroupNet also shows strong performance on object detection. Experiments on image classification, semantic segmentation, and object detection tasks demonstrate the superior performance of the proposed methods over various quantized networks in the literature. Moreover, the speedup and runtime memory cost, compared with related quantization strategies, are analyzed on GPU platforms, which serves as a strong benchmark for further research.
[ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
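The idea of reconstructing a full-precision group from homogeneous binary branches can be illustrated with greedy residual binarization: approximate a weight tensor as a sum of scaled sign tensors. This is a simplified per-tensor stand-in (in the spirit of multi-branch binary approximation); the actual GroupNet decomposition is structured per group, not per tensor.

```python
import numpy as np

def binary_branch_approx(W, n_branches=4):
    """Greedily approximate W as sum_i alpha_i * B_i with B_i in {-1, +1}.
    Each (alpha_i, B_i) pair plays the role of one binary branch; adding
    branches refines the reconstruction of the full-precision weights."""
    residual = W.copy()
    approx = np.zeros_like(W)
    for _ in range(n_branches):
        B = np.sign(residual)
        B[B == 0] = 1.0
        # mean absolute residual is the L2-optimal scale for a sign tensor
        alpha = np.mean(np.abs(residual))
        approx += alpha * B
        residual = W - approx
    return approx

rng = np.random.default_rng(2)
W = rng.standard_normal((3, 3))
err1 = np.linalg.norm(W - binary_branch_approx(W, 1))
err4 = np.linalg.norm(W - binary_branch_approx(W, 4))
print(err4 < err1)  # more binary branches -> better reconstruction
```

Each step provably shrinks the residual norm (the scaled sign is the best rank-free binary fit of the residual), which is why aggregating several cheap binary branches can recover most of the expressiveness of the full-precision weights.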
264. Simultaneous determination of sex hormones and bile acids in rat plasma using a liquid chromatography-tandem mass spectrometry method.
- Author
-
Li, Yun, Zhang, Dan, Mo, Yan, Zeng, Teng, Wu, Tongzhi, Liu, Lingqiao, Zhang, Hua, and Chen, Chang
- Subjects
- LIQUID chromatography-mass spectrometry; BILE acids; SEX hormones; GLUCOSE tolerance tests; RATS
- Abstract
Endogenous steroids, including sex hormones and bile acids, are a group of essential compounds with various biological functions. In this study, we developed an LC-MS method that simultaneously measures 14 sex hormones and metabolites (SH) and 32 bile acids (BA) in rat plasma. Multiple innovative approaches were applied to increase the sensitivity and specificity, including optimization of the mobile phases, gradients, and dynamic multiple reaction monitoring (DMRM) transitions. The method was validated and applied to plasma samples from pregnant rats before and 0.5 h after an oral glucose tolerance test (OGTT) at gestational days 0.5 and 18.5. Results showed that the method was applicable, and 9 SH and 30 BA were measurable in the samples. In summary, this method is applicable to studies of SH and BA in rat plasma, and may also be used on other matrices and species. • The method analyzes 14 sex hormones and metabolites and 32 bile acids. • Novel liquid chromatography and mass spectrometry strategies were applied. • The sex hormones and metabolites and bile acids in rat plasma were measured. • Dehydroepiandrosterone was found to correlate with bile acids in rat plasma. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
265. Boundarymix: Generating pseudo-training images for improving segmentation with scribble annotations.
- Author
-
Lu, Wanxuan, Gong, Dong, Fu, Kun, Sun, Xian, Diao, Wenhui, and Liu, Lingqiao
- Subjects
- IMAGE segmentation; ANNOTATIONS; REMOTE sensing
- Abstract
• BoundaryMix is proposed for scribble-supervised semantic segmentation. • BoundaryMix supplements the missing boundary information of scribble annotations by generating pseudo training images and annotations. • Scribbles are used to annotate remote sensing images, showing that scribble annotation is also suitable for different scenarios. • Experiments on the PASCAL VOC and POTSDAM datasets show that BoundaryMix almost closes the gap between weakly-supervised and fully-supervised semantic segmentation. Weakly-supervised semantic segmentation, as a promising solution to alleviate the burden of collecting per-pixel annotations, aims to train a segmentation model from partial weak annotations. Scribble on the object is one of the commonly used weak annotations and has been shown to be sufficient for learning a decent segmentation model. Despite being effective, scribble-based weakly-supervised learning methods often lead to imprecise segmentation on object boundaries. This is mainly because the scribble annotations usually locate inside the objects and the dataset lacks annotations close to the semantic boundaries. To alleviate this issue, this paper proposes a simple-but-effective solution, i.e., BoundaryMix, which generates pseudo training image-annotation pairs from the original images to supplement the missing semantic boundaries. Specifically, given a prediction of segmentation, we cut off the regions around the estimated boundaries, which are error-prone, and replace them with the contents from another image, which in effect creates new samples with less ambiguity around semantic boundaries. With training on scribbles and the on-the-fly generated pseudo annotations, the network acquires better prediction capability around the boundary region and thus improves the overall segmentation performance.
By conducting experiments on the PASCAL VOC 2012 and POTSDAM datasets with only scribble annotations, we demonstrate the excellent performance of the proposed method and show that the gap between scribble-supervised and fully-supervised image segmentation is almost closed. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
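The cut-and-replace step of BoundaryMix can be sketched on a toy label map: detect the band around the predicted boundary and fill it with content from another image. This is a minimal NumPy illustration; the real method also generates the matching pseudo annotation for the mixed image.

```python
import numpy as np

def boundary_band(mask):
    """Mark pixels adjacent (4-neighborhood) to a different label -- a crude
    estimate of the error-prone band around the predicted boundary."""
    band = np.zeros_like(mask, dtype=bool)
    h_diff = mask[:, 1:] != mask[:, :-1]   # horizontal label changes
    v_diff = mask[1:, :] != mask[:-1, :]   # vertical label changes
    band[:, 1:] |= h_diff
    band[:, :-1] |= h_diff
    band[1:, :] |= v_diff
    band[:-1, :] |= v_diff
    return band

def boundary_mix(img_a, mask_a, img_b):
    """BoundaryMix sketch: cut the band around img_a's predicted boundary
    and fill it with content from img_b, producing a pseudo training image
    with less ambiguity around the semantic boundary."""
    band = boundary_band(mask_a)
    mixed = img_a.copy()
    mixed[band] = img_b[band]
    return mixed, band

mask = np.zeros((6, 6), dtype=int)
mask[:, 3:] = 1                  # predicted boundary between columns 2 and 3
img_a = np.zeros((6, 6))
img_b = np.ones((6, 6))
mixed, band = boundary_mix(img_a, mask, img_b)
print(int(band.sum()))
```

Only the two columns straddling the predicted boundary are replaced; everything away from the boundary keeps the original image content, so the pseudo sample disagrees with the original exactly where the prediction was least trustworthy.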
266. Density maximization for improving graph matching with its applications.
- Author
-
Wang C, Wang L, and Liu L
- Abstract
Graph matching has been widely used in both the image processing and computer vision domains due to its powerful performance for structural pattern representation. However, it poses three challenges to image sparse feature matching: 1) its combinatorial nature limits the size of the possible matches; 2) it is sensitive to outliers because its objective function prefers more matches; and 3) it works poorly when handling many-to-many object correspondences, due to its assumption of one single cluster of true matches. In this paper, we address these challenges with a unified framework called density maximization (DM), which maximizes the values of a proposed graph density estimator both locally and globally. DM leads to the integration of feature matching, outlier elimination, and cluster detection. Experimental evaluation demonstrates that it significantly boosts the true matches and enables graph matching to handle both outliers and many-to-many object correspondences. We also extend it to dense correspondence estimation and obtain large improvements over the state-of-the-art methods. We further demonstrate the usefulness of our methods using three applications: 1) instance-level image retrieval; 2) mask transfer; and 3) image enhancement.
- Published
- 2015
- Full Text
- View/download PDF
267. A hierarchical word-merging algorithm with class separability measure.
- Author
-
Wang L, Zhou L, Shen C, Liu L, and Liu H
- Abstract
In image recognition with the bag-of-features model, a small-sized visual codebook is usually preferred to obtain a low-dimensional histogram representation and high computational efficiency. Such a visual codebook has to be discriminative enough to achieve excellent recognition performance. To create a compact and discriminative codebook, in this paper we propose to merge the visual words in a large-sized initial codebook by maximally preserving class separability. We first show that this results in a difficult optimization problem. To deal with this situation, we devise a suboptimal but very efficient hierarchical word-merging algorithm, which optimally merges two words at each level of the hierarchy. By exploiting the characteristics of the class separability measure and designing a novel indexing structure, the proposed algorithm can hierarchically merge 10,000 visual words down to two words in merely 90 seconds. Also, to show the properties of the proposed algorithm and reveal its advantages, we conduct detailed theoretical analysis to compare it with another hierarchical word-merging algorithm that maximally preserves mutual information, obtaining interesting findings. Experimental studies are conducted to verify the effectiveness of the proposed algorithm on multiple benchmark data sets. As shown, it can efficiently produce more compact and discriminative codebooks than the state-of-the-art hierarchical word-merging algorithms, especially when the size of the codebook is significantly reduced.
- Published
- 2014
- Full Text
- View/download PDF
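The hierarchical word-merging procedure described above, optimally merging two words at each level of the hierarchy, can be sketched as a greedy loop. The pairwise distance between class-conditional word profiles used below is a toy proxy for discriminability, not the paper's class separability measure, and the brute-force pair search stands in for its novel indexing structure.

```python
import numpy as np

def merge_words(class_hists, target_k):
    """Hierarchically merge visual words: at each level, merge the two words
    whose class-conditional profiles are most similar (and whose merger thus
    loses the least discriminative information). Merging two words simply
    sums their histogram columns."""
    H = class_hists.astype(float)             # shape (n_classes, n_words)
    groups = [[j] for j in range(H.shape[1])]
    while H.shape[1] > target_k:
        P = H / H.sum(axis=0, keepdims=True)  # per-word class profile
        best, pair = np.inf, None
        for i in range(P.shape[1]):
            for j in range(i + 1, P.shape[1]):
                d = np.sum((P[:, i] - P[:, j]) ** 2)
                if d < best:
                    best, pair = d, (i, j)
        i, j = pair
        H[:, i] += H[:, j]                    # merge word j into word i
        H = np.delete(H, j, axis=1)
        groups[i] += groups.pop(j)
    return H, groups

# Toy codebook: words 0,1 fire mostly for class 0; words 2,3 for class 1.
hists = np.array([[9.0, 8.0, 1.0, 2.0],
                  [1.0, 2.0, 9.0, 8.0]])
merged, groups = merge_words(hists, 2)
print(sorted(sorted(g) for g in groups))
```

On this toy input the greedy merge collapses the two class-0-dominant words together and the two class-1-dominant words together, preserving the codebook's ability to separate the classes at half the size.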
268. An Adaptive Approach to Learning Optimal Neighborhood Kernels.
- Author
-
Liu X, Yin J, Wang L, Liu L, Liu J, Hou C, and Zhang J
- Abstract
Learning an optimal kernel plays a pivotal role in kernel-based methods. Recently, an approach called optimal neighborhood kernel learning (ONKL) has been proposed, showing promising classification performance. It assumes that the optimal kernel will reside in the neighborhood of a "pre-specified" kernel. Nevertheless, how to specify such a kernel in a principled way remains unclear. To solve this issue, this paper treats the pre-specified kernel as an extra variable and jointly learns it with the optimal neighborhood kernel and the structure parameters of support vector machines. To avoid trivial solutions, we constrain the pre-specified kernel with a parameterized model. We first discuss the characteristics of our approach and in particular highlight its adaptivity. After that, two instantiations are demonstrated by modeling the pre-specified kernel as a common Gaussian radial basis function kernel and a linear combination of a set of base kernels in the way of multiple kernel learning (MKL), respectively. We show that the optimization in our approach is a min-max problem and can be efficiently solved by employing the extended level method and Nesterov's method. Also, we give the probabilistic interpretation for our approach and apply it to explain the existing kernel learning methods, providing another perspective for their commonness and differences. Comprehensive experimental results on 13 UCI data sets and another two real-world data sets show that via the joint learning process, our approach not only adaptively identifies the pre-specified kernel, but also achieves superior classification performance to the original ONKL and the related MKL algorithms.
- Published
- 2013
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library