144 results for "Junhui Hou"
Search Results
2. Exploiting Manifold Feature Representation for Efficient Classification of 3D Point Clouds
- Author
-
Dinghao Yang, Wei Gao, Ge Li, Hui Yuan, Junhui Hou, and Sam Kwong
- Subjects
Computer Networks and Communications, Hardware and Architecture - Abstract
In this paper, we propose an efficient point cloud classification method via manifold-learning-based feature representation. Unlike conventional methods, we use manifold learning algorithms to embed point cloud features so as to better capture the geometric continuity of the surface. The intrinsic structure of the point cloud can then be acquired in a low-dimensional space, and after concatenation with features in the original three-dimensional (3D) space, both the feature representation capability and the classification network performance are improved. We explore three traditional manifold algorithms (i.e., Isomap, Locally-Linear Embedding, and Laplacian eigenmaps) in detail, and ultimately select the Locally-Linear Embedding (LLE) algorithm for its low complexity and preservation of locality consistency. Furthermore, we propose a neural network based manifold learning (NNML) method to implement manifold-learning-based non-linear projection. Experiments demonstrate that the two proposed manifold learning methods outperform the state-of-the-art methods, reaching a mean class accuracy (mA) of 91.4% and an overall accuracy (oA) of 94.4%. Moreover, owing to the improved feature learning capability, the proposed NNML method also achieves better classification accuracy on models with prominent geometric shapes. To further demonstrate the advantages of PointManifold, we extend it as a plug-and-play method for point cloud classification tasks, which can be used directly with existing methods and yields a significant improvement.
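The LLE embedding-and-concatenation step described in this abstract can be sketched with scikit-learn; the toy point cloud, neighbor count, and embedding dimension below are illustrative placeholders, not the paper's actual settings.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Toy point cloud: 256 points in 3D (stand-in for real data).
rng = np.random.default_rng(0)
points = rng.normal(size=(256, 3))

# Embed the points into a low-dimensional manifold space with LLE,
# which preserves each point's local linear reconstruction weights.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2, random_state=0)
embedded = lle.fit_transform(points)

# Concatenate the manifold coordinates with the original 3D coordinates,
# enriching the per-point feature vector as the paper suggests.
features = np.concatenate([points, embedded], axis=1)
```

The concatenated `features` array would then feed a classification network in place of raw coordinates.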
- Published
- 2023
- Full Text
- View/download PDF
3. t-Linear Tensor Subspace Learning for Robust Feature Extraction of Hyperspectral Images
- Author
-
Yang-Jun Deng, Heng-Chao Li, Si-Qiao Tan, Junhui Hou, Qian Du, and Antonio Plaza
- Subjects
General Earth and Planetary Sciences, Electrical and Electronic Engineering - Published
- 2023
- Full Text
- View/download PDF
4. RegGeoNet: Learning Regular Representations for Large-Scale 3D Point Clouds
- Author
-
Qijian Zhang, Junhui Hou, Yue Qian, Antoni B. Chan, Juyong Zhang, and Ying He
- Subjects
Artificial Intelligence, Computer Vision and Pattern Recognition, Software - Published
- 2022
- Full Text
- View/download PDF
5. Finding Stars From Fireworks: Improving Non-Cooperative Iris Tracking
- Author
-
Chengdong Lin, Xinlin Li, Zhenjiang Li, and Junhui Hou
- Subjects
Media Technology, Electrical and Electronic Engineering - Published
- 2022
- Full Text
- View/download PDF
6. Semisupervised Affinity Matrix Learning via Dual-Channel Information Recovery
- Author
-
Sam Kwong, Hui Liu, Yuheng Jia, Junhui Hou, and Qingfu Zhang
- Subjects
Similarity (geometry), Computer science, Dimensionality reduction, Constrained clustering, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Convex optimization, Outlier, Benchmark (computing), Cluster Analysis, Supervised Machine Learning, Electrical and Electronic Engineering, Cluster analysis, Algorithm, Algorithms, Software, Information Systems - Abstract
This article explores the problem of semisupervised affinity matrix learning, that is, learning an affinity matrix of data samples under the supervision of a small number of pairwise constraints (PCs). Observing that both the matrix encoding the PCs, called the pairwise constraint matrix (PCM), and the empirically constructed affinity matrix (EAM) express the similarity between samples, we assume that both are generated from a latent affinity matrix (LAM) that can depict the ideal pairwise relation between samples. Specifically, the PCM can be thought of as a partial observation of the LAM, while the EAM is a full observation corrupted with noise/outliers. To this end, we innovatively cast semisupervised affinity matrix learning as the recovery of the LAM guided by the PCM and EAM, which is technically formulated as a convex optimization problem. We also provide an efficient algorithm for solving the resulting model numerically. Extensive experiments on benchmark datasets demonstrate the significant superiority of our method over state-of-the-art ones when used for constrained clustering and dimensionality reduction. The code is publicly available at https://github.com/jyh-learning/LAM.
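The two observed matrices in this formulation can be constructed directly; a minimal sketch, where the Gaussian-kernel bandwidth and the toy constraints are illustrative assumptions (the convex LAM-recovery step itself is omitted):

```python
import numpy as np

def build_eam(X, sigma=1.0):
    """Empirically constructed affinity matrix (EAM):
    Gaussian kernel on pairwise squared distances."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def build_pcm(n, must_link, cannot_link):
    """Pairwise constraint matrix (PCM): +1 for must-link pairs,
    -1 for cannot-link pairs, 0 where the LAM is unobserved."""
    P = np.zeros((n, n))
    for i, j in must_link:
        P[i, j] = P[j, i] = 1.0
    for i, j in cannot_link:
        P[i, j] = P[j, i] = -1.0
    return P

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
eam = build_eam(X)
pcm = build_pcm(3, must_link=[(0, 1)], cannot_link=[(0, 2)])
```

Close points get high EAM affinity, and the PCM stays sparse wherever no constraint was given, matching the "partial observation" view in the abstract.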
- Published
- 2022
- Full Text
- View/download PDF
7. Learning Low-Rank Graph With Enhanced Supervision
- Author
-
Qingfu Zhang, Yuheng Jia, Junhui Hou, and Hui Liu
- Subjects
Similarity (network science), Computer science, Iterative method, Media Technology, Benchmark (computing), Graph (abstract data type), Rank (graph theory), Sample (statistics), Pairwise comparison, Semi-supervised learning, Electrical and Electronic Engineering, Algorithm - Abstract
In this paper, we propose a new semi-supervised graph construction method, which is capable of adaptively learning the similarity relationship between data samples by fully exploiting the potential of pairwise constraints, a kind of weakly supervisory information. Specifically, to adaptively learn the similarity relationship, we linearly approximate each sample with the others under the regularization of the low-rankness of the matrix formed by the approximation coefficient vectors of all the samples. Meanwhile, by taking advantage of the empirically obtained local geometric structure underlying the data samples, we enhance the dissimilarity information of the available pairwise constraints via propagation. We seamlessly combine the two adversarial learning processes to achieve mutual guidance. We cast our method as a constrained optimization problem and provide an efficient alternating iterative algorithm to solve it. Experimental results on five commonly-used benchmark datasets demonstrate that our method produces much higher classification accuracy than state-of-the-art methods, while running faster.
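Low-rankness regularization of a coefficient matrix, as used here, is typically enforced inside the alternating solver via the proximal operator of the nuclear norm, i.e. singular value thresholding. A minimal sketch of that one step (not the paper's exact update rule):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of
    tau * (nuclear norm), which shrinks all singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

# A rank-1 matrix plus small noise: thresholding wipes out the
# small noise directions and leaves a genuinely low-rank matrix.
rng = np.random.default_rng(1)
u = rng.normal(size=(20, 1))
M = u @ u.T + 0.01 * rng.normal(size=(20, 20))
Z = svt(M, tau=0.5)
```

In a full solver, this step alternates with the data-fidelity and constraint-propagation updates.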
- Published
- 2022
- Full Text
- View/download PDF
8. Global-Local Balanced Low-Rank Approximation of Hyperspectral Images for Classification
- Author
-
Yuheng Jia, Qingfu Zhang, Hui Liu, and Junhui Hou
- Subjects
Optimization problem, Pixel, Iterative method, Computer science, Hyperspectral imaging, Low-rank approximation, Pattern recognition, Discriminative model, Media Technology, Benchmark (computing), Artificial intelligence, Electrical and Electronic Engineering, Spatial analysis - Abstract
This paper explores the problem of recovering the discriminative representation of a hyperspectral remote sensing image (HRSI), which suffers from spectral variations, to boost its classification accuracy. To tackle this challenge, we propose a new method, namely global-local balanced low-rank approximation (GLB-LRA), which can increase the similarity between pixels belonging to an identical category while promoting the discriminability between pixels of different categories. Specifically, by taking advantage of the particular structural spatial information of HRSIs, we robustly exploit the low-rankness of an HRSI in both the spatial and spectral domains from the perspective of local and global balance. We mathematically formulate GLB-LRA as an explicit optimization problem and propose an iterative algorithm to solve it efficiently. Experimental results on three commonly-used benchmark datasets demonstrate the significant superiority of our method over state-of-the-art methods.
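The global side of such a scheme can be illustrated with a plain truncated-SVD low-rank approximation of the pixels-by-bands matrix; this is a generic sketch of spectral low-rankness, not GLB-LRA's actual local/global balancing, and the synthetic mixing model below is an assumption for demonstration:

```python
import numpy as np

def low_rank_approx(cube, rank):
    """Truncated-SVD approximation of an (H, W, B) hyperspectral cube,
    treating it as an (H*W) x B matrix of pixel spectra."""
    h, w, b = cube.shape
    M = cube.reshape(h * w, b)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return approx.reshape(h, w, b)

# Synthetic cube whose spectra are mixtures of 3 endmembers,
# so the pixels-by-bands matrix has rank at most 3.
rng = np.random.default_rng(2)
abundances = rng.random((8, 8, 3))
endmembers = rng.random((3, 16))
cube = (abundances.reshape(-1, 3) @ endmembers).reshape(8, 8, 16)
approx = low_rank_approx(cube, rank=3)
```

Because the toy cube is exactly rank 3, the rank-3 approximation reproduces it; on real HRSIs the residual would carry the spectral variations the paper aims to suppress.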
- Published
- 2022
- Full Text
- View/download PDF
9. Deep Coarse-to-Fine Dense Light Field Reconstruction With Flexible Sampling and Geometry-Aware Fusion
- Author
-
Jie Chen, Sam Kwong, Jing Jin, Jingyi Yu, Junhui Hou, and Huanqiang Zeng
- Subjects
FOS: Computer and information sciences, Computer science, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Geometry, Iterative reconstruction, Artificial Intelligence, FOS: Electrical engineering, electronic engineering, information engineering, Angular resolution, Image resolution, Applied Mathematics, Deep learning, Image and Video Processing (eess.IV), Sampling (statistics), Electrical Engineering and Systems Science - Image and Video Processing, Image-based modeling and rendering, Computational Theory and Mathematics, Computer Vision and Pattern Recognition, Artificial intelligence, Parallax, Software, Light field - Abstract
A densely-sampled light field (LF) is highly desirable in various applications, such as 3-D reconstruction, post-capture refocusing and virtual reality. However, it is costly to acquire such data. Although many computational methods have been proposed to reconstruct a densely-sampled LF from a sparsely-sampled one, they still suffer from either low reconstruction quality, low computational efficiency, or the restriction on the regularity of the sampling pattern. To this end, we propose a novel learning-based method, which accepts sparsely-sampled LFs with irregular structures, and produces densely-sampled LFs with arbitrary angular resolution accurately and efficiently. We also propose a simple yet effective method for optimizing the sampling pattern. Our proposed method, an end-to-end trainable network, reconstructs a densely-sampled LF in a coarse-to-fine manner. Specifically, the coarse sub-aperture image (SAI) synthesis module first explores the scene geometry from an unstructured sparsely-sampled LF and leverages it to independently synthesize novel SAIs, in which a confidence-based blending strategy is proposed to fuse the information from different input SAIs, giving an intermediate densely-sampled LF. Then, the efficient LF refinement module learns the angular relationship within the intermediate result to recover the LF parallax structure. Comprehensive experimental evaluations demonstrate the superiority of our method on both real-world and synthetic LF images when compared with state-of-the-art methods. In addition, we illustrate the benefits and advantages of the proposed approach when applied in various LF-based applications, including image-based rendering and depth estimation enhancement.
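At each pixel, a confidence-based blending strategy like the one named above reduces to a confidence-weighted average of the per-view syntheses. A minimal sketch with made-up arrays (the real confidences would be predicted by the network):

```python
import numpy as np

def blend_views(views, confidences, eps=1e-8):
    """Fuse per-view synthesized images with per-pixel confidence weights.
    views, confidences: (n_views, H, W) arrays; weights are normalized
    across views so they sum to ~1 at every pixel."""
    w = confidences / (confidences.sum(axis=0, keepdims=True) + eps)
    return (views * w).sum(axis=0)

# Two synthesized views with equal confidence: the blend is their mean.
views = np.stack([np.full((4, 4), 1.0), np.full((4, 4), 3.0)])
conf = np.stack([np.full((4, 4), 0.5), np.full((4, 4), 0.5)])
blended = blend_views(views, conf)
```

Views that the network trusts more (higher confidence) dominate the blend at occlusion boundaries, which is the point of the strategy.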
- Published
- 2022
- Full Text
- View/download PDF
10. A Hybrid Compression Framework for Color Attributes of Static 3D Point Clouds
- Author
-
Sam Kwong, Hao Liu, Hui Yuan, Huanqiang Zeng, Junhui Hou, and Qi Liu
- Subjects
Rate–distortion optimization, Computer science, Media Technology, Discrete cosine transform, Redundancy (engineering), Point cloud, Graph (abstract data type), Sparse approximation, Electrical and Electronic Engineering, Algorithm, Block (data storage), Volume (compression) - Abstract
The emergence of 3D point clouds (3DPCs) is promoting the rapid development of immersive communication, autonomous driving, and other applications. Due to the huge data volume, the compression of 3DPCs is becoming increasingly attractive. We propose a novel and efficient color attribute compression method for static 3DPCs. First, a 3DPC is partitioned into several sub-point clouds by color distribution analysis. Each sub-point cloud is then decomposed into a set of 3D blocks by an improved k-d tree-based decomposition algorithm. Afterwards, a novel virtual adaptive sampling-based sparse representation strategy is proposed for each 3D block to remove the redundancy among points, in which the bases of the graph transform (GT) and the discrete cosine transform (DCT) are used as candidates for the complete dictionary. Experimental results over 10 common 3DPCs demonstrate that the proposed method achieves superior or comparable coding performance compared with current state-of-the-art methods.
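Of the two candidate bases named above, the DCT one can be generated in closed form; a sketch of an orthonormal DCT-II dictionary (the GT basis would instead come from the eigendecomposition of a block's graph Laplacian, which is omitted here):

```python
import numpy as np

def dct_dictionary(n):
    """Orthonormal DCT-II basis of size n x n: column u is the
    frequency-u atom, suitable as a dictionary for sparse coding."""
    x = np.arange(n)
    u = np.arange(n)
    D = np.sqrt(2.0 / n) * np.cos(
        np.pi * (2 * x[:, None] + 1) * u[None, :] / (2 * n))
    D[:, 0] /= np.sqrt(2.0)  # rescale the DC atom for orthonormality
    return D

D = dct_dictionary(8)
```

Orthonormality (`D.T @ D = I`) means any block attribute vector can be represented exactly, and smooth color signals concentrate their energy in the low-frequency atoms, which is what the sparse representation exploits.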
- Published
- 2022
- Full Text
- View/download PDF
11. A Spatial and Geometry Feature-Based Quality Assessment Model for the Light Field Images
- Author
-
Hailiang Huang, Huanqiang Zeng, Junhui Hou, Jing Chen, Jianqing Zhu, and Kai-Kuang Ma
- Subjects
Computer Graphics and Computer-Aided Design ,Software - Abstract
This paper proposes a new full-reference image quality assessment (IQA) model for performing perceptual quality evaluation on light field (LF) images, called the spatial and geometry feature-based model (SGFM). Considering that an LF image describes both spatial and geometry information of the scene, the spatial features are extracted over the sub-aperture images (SAIs) by using the contourlet transform and exploited to reflect the spatial quality degradation of the LF images, while the geometry features are extracted across the adjacent SAIs based on a 3D-Gabor filter and explored to describe the viewing consistency loss of the LF images. These schemes are motivated by the fact that the human eye is more sensitive to scale, direction, and contour from the spatial perspective, and to viewing-angle variations from the geometry perspective. These operations are applied to the reference and distorted LF images independently. The degree of similarity can be computed based on the above-measured quantities for jointly arriving at the final IQA score of the distorted LF image. Experimental results on three commonly-used LF IQA datasets show that the proposed SGFM agrees more closely with the quality of LF images as perceived by the human visual system (HVS), compared with multiple classical and state-of-the-art IQA models.
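Full-reference feature-similarity models of this family typically compare reference and distorted feature maps with a normalized-product form, as in SSIM/FSIM-style metrics. A generic sketch of that comparison (not SGFM's exact features or pooling; the constant `c` is a stabilizer, an assumption here):

```python
import numpy as np

def similarity_score(f_ref, f_dist, c=1e-4):
    """SSIM/FSIM-style similarity between two feature maps,
    pooled by the mean: 1.0 for identical maps, lower as they diverge."""
    sim = (2.0 * f_ref * f_dist + c) / (f_ref ** 2 + f_dist ** 2 + c)
    return float(sim.mean())

rng = np.random.default_rng(3)
ref = rng.random((16, 16))
same = similarity_score(ref, ref)
worse = similarity_score(ref, ref + 0.5)
```

In SGFM the same kind of comparison would be applied to the contourlet (spatial) and 3D-Gabor (geometry) feature maps before the joint score is formed.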
- Published
- 2022
- Full Text
- View/download PDF
12. Point Cloud Quality Assessment via 3D Edge Similarity Measurement
- Author
-
Zian Lu, Hailiang Huang, Huanqiang Zeng, Junhui Hou, and Kai-Kuang Ma
- Subjects
Applied Mathematics, Signal Processing, Electrical and Electronic Engineering - Published
- 2022
- Full Text
- View/download PDF
13. Occlusion-Aware Unsupervised Learning of Depth From 4-D Light Fields
- Author
-
Jing Jin and Junhui Hou
- Subjects
FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Computer Graphics and Computer-Aided Design, Software - Abstract
Depth estimation is a fundamental issue in 4-D light field processing and analysis. Although recent supervised learning-based light field depth estimation methods have significantly improved the accuracy and efficiency of traditional optimization-based ones, these methods rely on the training over light field data with ground-truth depth maps which are challenging to obtain or even unavailable for real-world light field data. Besides, due to the inevitable gap (or domain difference) between real-world and synthetic data, they may suffer from serious performance degradation when generalizing the models trained with synthetic data to real-world data. By contrast, we propose an unsupervised learning-based method, which does not require ground-truth depth as supervision during training. Specifically, based on the basic knowledge of the unique geometry structure of light field data, we present an occlusion-aware strategy to improve the accuracy on occlusion areas, in which we explore the angular coherence among subsets of the light field views to estimate initial depth maps, and utilize a constrained unsupervised loss to learn their corresponding reliability for final depth prediction. Additionally, we adopt a multi-scale network with a weighted smoothness loss to handle the textureless areas. Experimental results on synthetic data show that our method can significantly shrink the performance gap between the previous unsupervised method and supervised ones, and produce depth maps with comparable accuracy to traditional methods with obviously reduced computational cost. Moreover, experiments on real-world datasets show that our method can avoid the domain shift problem presented in supervised methods, demonstrating the great potential of our method.
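The weighted smoothness loss mentioned in this abstract is commonly implemented as an edge-aware penalty: depth gradients are penalized, but the penalty is attenuated wherever the image itself has edges, so depth discontinuities are allowed at object boundaries. A hedged numpy sketch of that common form (not necessarily the paper's exact loss):

```python
import numpy as np

def weighted_smoothness_loss(depth, image):
    """Edge-aware smoothness: penalize depth gradients, down-weighted
    by exp(-|image gradient|) so depth may change at image edges."""
    d_dx = np.abs(np.diff(depth, axis=1))
    d_dy = np.abs(np.diff(depth, axis=0))
    w_x = np.exp(-np.abs(np.diff(image, axis=1)))
    w_y = np.exp(-np.abs(np.diff(image, axis=0)))
    return float((d_dx * w_x).mean() + (d_dy * w_y).mean())

img = np.zeros((8, 8))
flat = np.ones((8, 8))                       # constant depth: zero loss
ramp = np.tile(np.arange(8.0), (8, 1))       # sloped depth: positive loss
loss_flat = weighted_smoothness_loss(flat, img)
loss_ramp = weighted_smoothness_loss(ramp, img)
```

In training, this term regularizes textureless regions where the photometric signal alone cannot constrain the depth.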
- Published
- 2022
- Full Text
- View/download PDF
14. Knowledge mapping and current trends of global research on snoRNA in the field of cancer
- Author
-
Runsen Xu, Junhui Hou, Xia Wang, Yuan Wang, and Kefeng Wang
- Abstract
Background: Cancer is a major hazard to human health. Recently, small nucleolar RNA (snoRNA) has been found to be involved in the occurrence and development of cancer, with potential diagnostic, prognostic, and therapeutic value. The purpose of this study is to use bibliometric methods to survey the previously published papers. Methods: We collected articles in the field of snoRNA and cancer from the Web of Science Core Collection database. We then used VOSviewer, CiteSpace, WPS, and other software to visualize the authors, institutions, countries/regions, journals, and keywords. Finally, we interpreted the data and analyzed the hotspots and frontiers of the research. Results: The number of articles in this field was low in the early period but has grown rapidly since 2008. According to Price's law, we believe that a stable cooperative group has formed in this field. Chu, Liang and Montanaro, Lorenzo published the most papers, while Jiang, Feng was cited the most. Three institutions published the most articles, namely Wuhan Univ, China Med Univ, and Guangxi Med Univ. The journal with the most articles was Oncotarget. Analysis by country/region found that the country with the most published articles was China. The analysis of keywords and burst words indicated that early studies mainly focused on molecular mechanisms, but in recent years the focus has gradually shifted toward diagnosis, prognosis, and therapy. Conclusion: Research on snoRNA and cancer has been a hot topic in recent years. Through this analysis, we found that snoRNA is involved in the molecular mechanisms of cancer development and can be used as a biomarker for clinical diagnosis and prognosis.
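The Price's-law criterion referenced above is commonly applied as follows: a stable core of productive authors is said to exist when authors producing at least m = 0.749·√(n_max) papers (n_max being the output of the most prolific author) account for half of all publications. A minimal sketch of the threshold computation, with an illustrative n_max (not a figure from this study):

```python
import math

def price_core_threshold(n_max):
    """Price's-law productivity threshold: authors with at least this
    many papers form the candidate core, where n_max is the paper
    count of the single most prolific author."""
    return math.ceil(0.749 * math.sqrt(n_max))

# Hypothetical example: if the top author has 49 papers,
# the candidate core is authors with >= 6 papers each.
threshold = price_core_threshold(49)
```

One would then check whether those core authors jointly account for at least half of the field's output.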
- Published
- 2023
- Full Text
- View/download PDF
15. Differentiable Deformation Graph-Based Neural Non-rigid Registration
- Author
-
Wanquan Feng, Hongrui Cai, Junhui Hou, Bailin Deng, and Juyong Zhang
- Subjects
Statistics and Probability, Computational Mathematics, Applied Mathematics - Published
- 2023
- Full Text
- View/download PDF
16. Functions and mechanisms of lncRNA MALAT1 in cancer chemotherapy resistance
- Author
-
Junhui Hou, Gong Zhang, Xia Wang, Yuan Wang, and Kefeng Wang
- Subjects
Biochemistry (medical), Clinical Biochemistry, Molecular Medicine - Abstract
Chemotherapy is one of the most important treatments for cancer. However, chemotherapy resistance is a major challenge in cancer treatment: because of it, drugs become less effective or stop working altogether. In recent years, the long non-coding RNA metastasis-associated lung adenocarcinoma transcript 1 (MALAT1) has been found to be associated with the development of chemotherapy resistance, suggesting that MALAT1 may be an important target for overcoming it. In this review, we introduce the main mechanisms of chemotherapy resistance associated with MALAT1, which may suggest new approaches for cancer treatment.
- Published
- 2023
- Full Text
- View/download PDF
17. Immune-related basement membrane genes predict the immunotherapy and prognosis of ccRCC patients
- Author
-
Junhui Hou, Chunming Zhu, Yuan Wang, Xia Wang, Xiaonan Chen, and Kefeng Wang
- Abstract
Background: Basement membrane (BM) genes are an important factor in the progression of clear cell renal cell carcinoma (ccRCC). Thus, identifying BM genes with prognostic value in ccRCC is critical. Methods: The samples from TCGA were separated randomly into two cohorts, the training cohort and the validation cohort. Univariate and multivariate Cox regression analyses were applied to identify prognostic BM genes. A nomogram was then applied to predict prognosis at different clinicopathological stages and risk scores. GO and KEGG analyses were applied to the differentially expressed genes. Moreover, CIBERSORT and ESTIMATE scores were calculated and compared between the high- and low-risk cohorts. Results: A prognostic risk model of four BM genes, comprising ADAMTS14, COL7A1, HSPG2, and TIMP3, was constructed. There were also significant differences in survival time between the high- and low-risk cohorts for both the validation cohort and the entire cohort. The risk model was validated as a new independent prognostic factor for ccRCC by univariate and multivariate Cox regression together with clinicopathological characteristics. The model can also be used to analyze the possibility of immune escape and the response to immunotherapy in ccRCC patients. In addition, the results of a pan-cancer analysis showed that these four model genes were associated with immune-related genes in a variety of cancers. Conclusion: The signature of four BM genes had significant prognostic value for ccRCC. These genes may be promising targets for therapy, especially immune therapy.
- Published
- 2023
- Full Text
- View/download PDF
18. Knowledge-map analysis of bladder cancer immunotherapy
- Author
-
Zongwei Lv, Junhui Hou, Yuan Wang, Xia Wang, Yibing Wang, and Kefeng Wang
- Abstract
Background: This study aims to conduct bibliometric and visual analyses of the field of bladder cancer (BC) immunotherapy and to explore the research trends, hotspots, and frontiers from 2000 to 2021. Methods: Data were obtained from the Web of Science Core Collection database, comprising 2,022 papers related to BC immunotherapy worldwide from 1 January 2000 to 31 December 2021. VOSviewer software was used to comprehensively analyze the collaborative relationships between authors, institutions, countries/regions, and journals through citation, co-authorship, and co-citation analyses, so as to identify research hotspots and frontiers in this field. Results: Literature output was relatively flat from 2000 to 2015 and has shown an overall upward trend since 2015. The United States of America published 643 papers with 27,241 citations, ranking first among the top 10 most active countries, and has the most extensive collaboration with other countries. The University of Texas MD Anderson Cancer Center published 62 articles, making it the most productive and most actively collaborating research institution. Kamat AM and Lamm DL were the most active and most co-cited authors, with 27 papers and 1,039 co-citations, respectively. Chang Yuan and Xu Le ranked first with a total link strength of 145, making them the most active collaborating authors. J UROLOGY was the most active and most frequently co-cited journal, with 106 papers and 6,764 co-citations. Studies of BC immunotherapy can be divided into three categories: "basic research", "clinical trial", and "prognosis". Conclusions: Our findings provide a comprehensive overview of the research priorities and future directions of BC immunotherapy. The tumor microenvironment and immune checkpoint inhibitors (ICIs) of BC, as well as the combination of ICIs with other drugs, may become the main directions of future research.
- Published
- 2023
- Full Text
- View/download PDF
19. Knowledge-map analysis of percutaneous nephrolithotomy (PNL) for urolithiasis
- Author
-
Junhui Hou, Zongwei Lv, Yuan Wang, Xia Wang, Yibing Wang, and Kefeng Wang
- Subjects
Urology - Abstract
Percutaneous nephrolithotomy (PNL) has been used in the treatment of urolithiasis for more than 20 years. However, bibliometric analyses of the global use of PNL for urolithiasis are rare. We retrieved the literature on PNL and urolithiasis from the Web of Science Core Collection database. VOSviewer was used to analyze keywords, citations, publications, co-authorship, themes, and trending topics. A total of 3,103 articles were analyzed, most of which were original ones. The most common keywords were "percutaneous nephrolithotomy" and "urolithiasis", both of which were closely related to "ureteroscopy". The Journal of Urology and Zeng Guohua from the First Affiliated Hospital of Guangzhou Medical University were the most prolific journal and author in this field, respectively. The most productive country was the United States, and its closest partners were Canada, China, and Italy. The five hot topics were the specific application methods and means, risk factors for urolithiasis, the development of treatment technology for urolithiasis, the characteristics, composition, and properties of stones, and the evaluation of curative effect. This study provides a new perspective on PNL treatment of urolithiasis and offers valuable information for urology researchers to understand its research hotspots, cooperative institutions, and research frontiers.
- Published
- 2023
- Full Text
- View/download PDF
20. PQA-Net: Deep No Reference Point Cloud Quality Assessment via Multi-View Projection
- Author
-
Huan Yang, Hui Yuan, Hao Liu, Honglei Su, Qi Liu, Junhui Hou, and Yu Wang
- Subjects
Computer science, Deep learning, Feature extraction, Point cloud, Multi-task learning, Communications system, Identification (information), Feature (computer vision), Media Technology, Data mining, Artificial intelligence, Electrical and Electronic Engineering, Projection (set theory) - Abstract
Recently, the 3D point cloud has become popular due to its capability to represent the real world as an advanced content modality in modern communication systems. In view of its wide applications, especially for immersive communication oriented toward human perception, quality metrics for point clouds are essential. Existing point cloud quality evaluations rely on a full or partial version of the original point cloud, which severely limits their applications. To overcome this problem, we propose a novel deep learning-based no-reference point cloud quality assessment method, namely PQA-Net. Specifically, PQA-Net consists of a multi-view-based joint feature extraction and fusion (MVFEF) module, a distortion type identification (DTI) module, and a quality vector prediction (QVP) module. The DTI and QVP modules share the features generated by the MVFEF module. Using the distortion type labels, the DTI and MVFEF modules are first pre-trained to initialize the network parameters, based on which the whole network is then jointly trained to evaluate the point cloud quality. Experimental results on the Waterloo Point Cloud dataset show that PQA-Net achieves better or equivalent performance compared with state-of-the-art quality assessment methods. The code of the proposed model will be made publicly available to facilitate reproducible research: https://github.com/qdushl/PQA-Net.
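The multi-view projection at the front of such a pipeline can be illustrated with a simple orthographic depth projection of the point cloud; the resolution, normalization, and single viewing direction below are illustrative choices, not PQA-Net's actual rendering setup:

```python
import numpy as np

def orthographic_depth(points, res=32):
    """Project a point cloud onto the XY plane: keep, per pixel, the
    smallest z (nearest surface, a z-buffer). Empty pixels become 0."""
    p = points - points.min(axis=0)
    p = p / (p.max(axis=0) + 1e-12)          # normalize to [0, 1]
    ix = np.minimum((p[:, 0] * res).astype(int), res - 1)
    iy = np.minimum((p[:, 1] * res).astype(int), res - 1)
    depth = np.full((res, res), np.inf)
    np.minimum.at(depth, (iy, ix), p[:, 2])  # scatter-min z-buffer
    depth[np.isinf(depth)] = 0.0
    return depth

rng = np.random.default_rng(4)
cloud = rng.random((2048, 3))
dmap = orthographic_depth(cloud)
```

Repeating this from several viewing directions yields the stack of 2D images that a multi-view feature extractor can consume.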
- Published
- 2021
- Full Text
- View/download PDF
21. Subjective Quality Database and Objective Study of Compressed Point Clouds With 6DoF Head-Mounted Display
- Author
-
Yun Zhang, Sam Kwong, Chunling Fan, Junhui Hou, and Xinju Wu
- Subjects
Database, Computer science, Distortion (optics), Image and Video Processing (eess.IV), Point cloud, Optical head-mounted display, Electrical Engineering and Systems Science - Image and Video Processing, Virtual reality, FOS: Electrical engineering, electronic engineering, information engineering, Media Technology, Electrical and Electronic Engineering, Focus (optics), Quantization (image processing), Projection (set theory), Encoder - Abstract
In this paper, we focus on subjective and objective Point Cloud Quality Assessment (PCQA) in an immersive environment and study the effect of geometry and texture attributes in compression distortion. Using a Head-Mounted Display (HMD) with six degrees of freedom, we establish a subjective PCQA database, named SIAT Point Cloud Quality Database (SIAT-PCQD). Our database consists of 340 distorted point clouds compressed by the MPEG point cloud encoder with the combination of 20 sequences and 17 pairs of geometry and texture quantization parameters. The impact of distorted geometry and texture attributes is further discussed in this paper. Then, we propose two projection-based objective quality evaluation methods, i.e., a weighted view projection based model and a patch projection based model. Our subjective database and findings can be used in point cloud processing, transmission, and coding, especially for virtual reality applications. The subjective dataset has been released in the public repository.
- Published
- 2021
- Full Text
- View/download PDF
22. Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation
- Author
-
Yuheng Jia, Junhui Hou, Qingfu Zhang, Sam Kwong, and Hui Liu
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Rank (linear algebra), Basis (linear algebra), Computer science, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Machine Learning (stat.ML), Spectral clustering, Machine Learning (cs.LG), Constraint (information theory), Statistics - Machine Learning, Norm (mathematics), Tensor (intrinsic definition), Media Technology, Tensor, Electrical and Electronic Engineering, Representation (mathematics), Cluster analysis, Algorithm - Abstract
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling. Unlike the existing methods, which all adopt an off-the-shelf tensor low-rank norm without considering the special characteristics of the tensor in MVSC, we design a novel structured tensor low-rank norm tailored to MVSC. Specifically, we explicitly impose a symmetric low-rank constraint and a structured sparse low-rank constraint on the frontal and horizontal slices of the tensor to characterize the intra-view and inter-view relationships, respectively. Moreover, the two constraints can be jointly optimized to achieve mutual refinement. On the basis of the novel tensor low-rank norm, we formulate MVSC as a convex low-rank tensor recovery problem, which is then iteratively solved with an efficient augmented Lagrange multiplier-based method. Extensive experimental results on seven commonly used benchmark datasets show that the proposed method outperforms state-of-the-art methods to a significant extent. Impressively, our method is able to produce perfect clustering. In addition, the parameters of our method can be easily tuned, and the proposed model is robust to different datasets, demonstrating its potential in practice. The code is available at https://github.com/jyh-learning/MVSC-TLRR.
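Downstream of the tensor recovery, the per-view affinities (the frontal slices) are typically reduced to one symmetric affinity and fed to spectral clustering. A simplified sketch of that final stage (plain averaging stands in for the learned tensor norm; the toy block-diagonal affinities are illustrative):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_from_views(view_affinities, n_clusters):
    """Average the per-view affinities (frontal slices of the tensor),
    symmetrize them, and run spectral clustering on the result."""
    A = np.mean(view_affinities, axis=0)
    A = 0.5 * (A + A.T)  # enforce the symmetry constraint explicitly
    model = SpectralClustering(n_clusters=n_clusters,
                               affinity="precomputed",
                               random_state=0)
    return model.fit_predict(A)

# Two "views" of a toy 2-cluster affinity with block structure.
block = np.kron(np.eye(2), np.ones((3, 3)))
noisy = block + 0.05
labels = cluster_from_views(np.stack([block, noisy]), n_clusters=2)
```

The paper's contribution is precisely in replacing the naive average with a structured low-rank tensor recovery before this clustering step.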
- Published
- 2021
- Full Text
- View/download PDF
23. Categorical Matrix Completion With Active Learning for High-Throughput Screening
- Author
-
Ka-Chun Wong, Junyi Chen, and Junhui Hou
- Subjects
Matrix completion, Active learning (machine learning), Computer science, Applied Mathematics, Computational Biology, Proteins, Sampling (statistics), Machine learning, Automation, High-Throughput Screening Assays, Data modeling, Margin (machine learning), Genetics, Computer Simulation, Supervised Machine Learning, Artificial intelligence, Categorical variable, Algorithms, Biotechnology, Sparse matrix - Abstract
Recent advances in wet-lab automation enable high-throughput experiments to be conducted seamlessly. In particular, high-throughput screening always involves the exhaustive enumeration of all possible conditions. Nonetheless, such a screening strategy is hardly optimal or cost-effective. By incorporating artificial intelligence, we design an open-source model based on categorical matrix completion and active machine learning to guide high-throughput screening experiments. Specifically, we narrow our scope to high-throughput screening for chemical compound effects on diverse protein sub-cellular locations. In the proposed model, we believe that exploration is more important than exploitation in the long run of a high-throughput screening experiment. Therefore, we design several innovations to circumvent the existing limitations: categorical matrix completion is designed to accurately impute the missing experiments, while margin sampling is implemented for uncertainty estimation. The model is systematically tested on both simulated and real data. The simulation results show that our model is robust under diverse scenarios, while the real-data results demonstrate the wet-lab applicability of our model for high-throughput screening experiments. Lastly, we attribute the model's success to its exploration ability by revealing the related matrix ranks and comparing distinct experiment coverage.
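The margin-sampling step for uncertainty estimation picks the entries whose top two predicted class probabilities are closest, i.e. where the model is least decided. A minimal sketch with toy class probabilities (the real probabilities would come from the categorical matrix-completion model):

```python
import numpy as np

def margin_sampling(probs, k):
    """Return indices of the k most uncertain samples: those with the
    smallest gap between the highest and second-highest class
    probabilities. probs: (n_samples, n_classes) array."""
    part = np.sort(probs, axis=1)
    margin = part[:, -1] - part[:, -2]
    return np.argsort(margin)[:k]

probs = np.array([
    [0.90, 0.05, 0.05],   # confident prediction
    [0.40, 0.35, 0.25],   # uncertain (margin 0.05)
    [0.34, 0.33, 0.33],   # most uncertain (margin 0.01)
])
picked = margin_sampling(probs, k=2)
```

In the active-learning loop, the `picked` experiments are the ones queued for the next wet-lab batch, steering the budget toward exploration.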
- Published
- 2021
- Full Text
- View/download PDF
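The margin-sampling step described in the abstract above can be illustrated with a short sketch. This is not the authors' code: the probability table and selection size are invented for illustration, and the classifier that would produce such probabilities is assumed to exist elsewhere.

```python
import numpy as np

def margin_sampling(probs, k):
    """Pick the k most uncertain items by margin sampling.

    probs: (n_items, n_classes) predicted class probabilities.
    The margin is the gap between the top-two class probabilities;
    a small margin means the classifier is unsure about the item.
    """
    sorted_p = np.sort(probs, axis=1)            # ascending per row
    margins = sorted_p[:, -1] - sorted_p[:, -2]  # best minus runner-up
    return np.argsort(margins)[:k]               # smallest margins first

# Toy example: item 1 has the most ambiguous prediction.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25],
                  [0.70, 0.20, 0.10]])
print(margin_sampling(probs, 1))  # -> [1]
```

In an active-learning loop, the selected items would be measured next, the matrix completion re-run, and the margins recomputed.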
24. Static analysis of single truss string and dynamic response of broken cable
- Author
-
Lulu Qian, Hongming Li, Baijian Tang, Junhui Hou, and Dan Xie
- Published
- 2022
- Full Text
- View/download PDF
25. Basement membrane genes can predict the prognosis of patients with clear cell renal cell carcinoma (ccRCC) and are correlated with immune status
- Author
-
Junhui Hou, Zongwei Lv, Yuan Wang, Xia Wang, Xiaonan Chen, and Kefeng Wang
- Abstract
Background: Basement membrane (BM) genes play an important role in the progression of clear cell renal cell carcinoma (ccRCC). Thus, identifying BM genes with prognostic value in ccRCC is critical. Methods: The samples from TCGA were randomly separated into two cohorts: a training cohort and a validation cohort. For the training cohort, univariate Cox, Lasso, and multivariate Cox regression analyses were applied to identify prognostic BM genes and then construct a prognostic BM-gene signature. A nomogram was applied to predict prognosis at different clinicopathological stages and risk scores. GO and KEGG analyses were applied to the differentially expressed genes. Moreover, the CIBERSORT and ESTIMATE scores were calculated and compared between the high-risk cohort and the low-risk cohort. Results: A prognostic risk model of four BM genes, including ADAMTS14, COL7A1, HSPG2, and TIMP3, was constructed. There were also significant differences in survival time between the high-risk and low-risk groups for the validation cohort and the entire cohort. The risk model was validated as a new independent prognostic factor for ccRCC by univariate and multivariate Cox regression together with clinicopathological characteristics. In addition, the nomogram showed good predictive performance. The model can also analyze the possibility of immune escape and response to immunotherapy in ccRCC patients. In addition, the results of a pan-cancer analysis showed that these four model genes were associated with immune-related genes in a variety of cancers. Conclusion: The signature of four BM genes had significant prognostic value for ccRCC. They may be promising targets for therapy, especially immune therapy.
- Published
- 2022
- Full Text
- View/download PDF
26. AEDNet: Asynchronous Event Denoising with Spatial-Temporal Correlation among Irregular Data
- Author
-
Huachen Fang, Jinjian Wu, Leida Li, Junhui Hou, Weisheng Dong, and Guangming Shi
- Published
- 2022
- Full Text
- View/download PDF
27. Occlusion-Resistant instance segmentation of piglets in farrowing pens using center clustering network
- Author
-
Endai Huang, Axiu Mao, Junhui Hou, Yongjian Wu, Weitao Xu, Maria Camila Ceballos, Thomas D. Parsons, and Kai Liu
- Subjects
FOS: Computer and information sciences ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,Forestry ,Horticulture ,Agronomy and Crop Science ,Computer Science Applications - Abstract
Computer vision enables the development of new approaches to monitor the behavior, health, and welfare of animals. Instance segmentation is a high-precision method in computer vision for detecting individual animals of interest. This method can be used for in-depth analysis of animals, such as examining their subtle interactive behaviors, from videos and images. However, existing deep-learning-based instance segmentation methods have mostly been developed on public datasets, which largely omit heavy occlusion problems; therefore, these methods have limitations in real-world applications involving object occlusions, such as the farrowing pen systems used on pig farms, in which the farrowing crates often occlude the sow and piglets. In this paper, we adapt a Center Clustering Network originally designed for counting to achieve instance segmentation, dubbed CClusnet-Inseg. Specifically, CClusnet-Inseg uses each pixel to predict object centers and traces these centers to form masks based on clustering results; it consists of a network that outputs a segmentation map and a center offset vector map, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, and the Centers-to-Mask (C2M) and Remain-Centers-to-Mask (RC2M) algorithms. In all, 4,600 images were extracted from six videos collected from three closed and three half-open farrowing crates to train and validate our method. CClusnet-Inseg achieves a mean average precision (mAP) of 84.1 and outperforms all other methods compared in this study. We conduct comprehensive ablation studies to demonstrate the advantages and effectiveness of the core modules of our method. In addition, we apply CClusnet-Inseg to multi-object tracking for animal monitoring, and the predicted object centers, which are a conjunct output, could serve as an occlusion-resistant representation of an object's location.
- Published
- 2023
- Full Text
- View/download PDF
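The center-clustering step described above (pixels vote for object centers, and DBSCAN groups the votes into individual animals) can be sketched as follows. This is a minimal stand-in, not the paper's implementation: the "predicted centers" are synthetic Gaussian blobs, the DBSCAN routine is a bare-bones re-implementation, and `eps`/`min_samples` are illustrative values rather than the paper's settings.

```python
import numpy as np

def dbscan(points, eps, min_samples):
    """Bare-bones DBSCAN: returns a cluster label per point, -1 = noise."""
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_samples:
            continue  # already assigned, or not a core point
        labels[i] = cluster
        queue = list(neighbors[i])
        while queue:  # expand the cluster from core point i
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_samples:  # j is also a core point
                    queue.extend(neighbors[j])
        cluster += 1
    return labels

# Synthetic "center votes": each pixel of an object predicts its object's
# center, so votes for the same animal pile up in a tight cluster.
rng = np.random.default_rng(0)
votes = np.vstack([
    rng.normal(loc=(50.0, 60.0), scale=1.0, size=(200, 2)),   # animal A
    rng.normal(loc=(120.0, 40.0), scale=1.0, size=(200, 2)),  # animal B
])
labels = dbscan(votes, eps=3.0, min_samples=10)
n_objects = len(set(labels.tolist()) - {-1})
print(n_objects)  # expected: 2
```

Because DBSCAN needs no preset number of clusters, the same grouping works whether a pen contains one sow or a dozen piglets.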
28. Joint Optimization for Pairwise Constraint Propagation
- Author
-
Sam Kwong, Yuheng Jia, Ran Wang, Wenhui Wu, and Junhui Hou
- Subjects
Matrix completion ,Computer Networks and Communications ,Computer science ,Symmetric graph ,02 engineering and technology ,Spectral clustering ,Computer Science Applications ,Matrix decomposition ,Artificial Intelligence ,Bounded function ,0202 electrical engineering, electronic engineering, information engineering ,Local consistency ,Symmetric matrix ,020201 artificial intelligence & image processing ,Pairwise comparison ,Algorithm ,Software - Abstract
Constrained spectral clustering (SC) based on pairwise constraint propagation has attracted much attention due to its good performance. All the existing methods can generally be cast as the following two steps: first, a small number of pairwise constraints are propagated to the whole data under the guidance of a predefined affinity matrix; the affinity matrix is then refined in accordance with the resulting propagation and finally adopted for SC. Such a stepwise manner, however, overlooks the fact that the two steps depend on each other, i.e., they form a "chicken-and-egg" problem, leading to suboptimal performance. To this end, we propose a joint pairwise constraint propagation (PCP) model for constrained SC that simultaneously learns a propagation matrix and an affinity matrix. Specifically, it is formulated as a bounded symmetric graph-regularized low-rank matrix completion problem. We also show that the affinity matrix optimized by our model exhibits an ideal appearance under some conditions. Extensive experimental results in terms of constrained SC, semisupervised classification, and propagation behavior validate the superior performance of our model compared with state-of-the-art methods.
- Published
- 2021
- Full Text
- View/download PDF
29. Screen Content Video Quality Assessment Model Using Hybrid Spatiotemporal Features
- Author
-
Huanqiang Zeng, Hailiang Huang, Junhui Hou, Jiuwen Cao, Yongtao Wang, and Kai-Kuang Ma
- Subjects
Databases, Factual ,Video Recording ,Humans ,Computer Graphics and Computer-Aided Design ,Software ,Algorithms - Abstract
In this paper, a full-reference video quality assessment (VQA) model is designed for the perceptual quality assessment of screen content videos (SCVs), called the hybrid spatiotemporal feature-based model (HSFM). SCVs have a hybrid structure comprising screen and natural scenes, which are perceived by the human visual system (HVS) with different visual effects. With this consideration, the three-dimensional Laplacian of Gaussian (3D-LOG) filter and three-dimensional natural scene statistics (3D-NSS) are exploited to extract screen and natural spatiotemporal features from the reference and distorted SCV sequences separately. The similarities of these extracted features are then computed independently, yielding quality scores for the screen and natural scenes of the distorted SCV. After that, an adaptive fusion scheme based on local video activity is developed to combine the screen and natural quality scores into the final VQA score of the distorted SCV under evaluation. Experimental results on the Screen Content Video Database (SCVD) and Compressed Screen Content Video Quality (CSCVQ) databases show that the proposed HSFM agrees more closely with the perceptual quality of SCVs as perceived by the HVS, compared with a variety of classic and recent IQA/VQA models.
- Published
- 2022
30. Deep Posterior Distribution-Based Embedding for Hyperspectral Image Super-Resolution
- Author
-
Jinhui Hou, Zhiyu Zhu, Junhui Hou, Huanqiang Zeng, Jinjian Wu, and Jiantao Zhou
- Subjects
FOS: Computer and information sciences ,Computer Vision and Pattern Recognition (cs.CV) ,Image and Video Processing (eess.IV) ,FOS: Electrical engineering, electronic engineering, information engineering ,Computer Science - Computer Vision and Pattern Recognition ,Electrical Engineering and Systems Science - Image and Video Processing ,Computer Graphics and Computer-Aided Design ,Software - Abstract
In this paper, we investigate the problem of hyperspectral (HS) image spatial super-resolution via deep learning. Particularly, we focus on how to embed the high-dimensional spatial-spectral information of HS images efficiently and effectively. Specifically, in contrast to existing methods adopting empirically-designed network modules, we formulate HS embedding as an approximation of the posterior distribution of a set of carefully-defined HS embedding events, including layer-wise spatial-spectral feature extraction and network-level feature aggregation. Then, we incorporate the proposed feature embedding scheme into a source-consistent super-resolution framework that is physically-interpretable, producing lightweight PDE-Net, in which high-resolution (HR) HS images are iteratively refined from the residuals between input low-resolution (LR) HS images and pseudo-LR-HS images degenerated from reconstructed HR-HS images via probability-inspired HS embedding. Extensive experiments over three common benchmark datasets demonstrate that PDE-Net achieves superior performance over state-of-the-art methods. Besides, the probabilistic characteristic of this kind of network can provide the epistemic uncertainty of the network outputs, which may bring additional benefits when used for other HS image-based applications. The code will be publicly available at https://github.com/jinnh/PDE-Net. (Accepted by IEEE Transactions on Image Processing.)
- Published
- 2022
31. A Light Field Image Quality Assessment Model Based on Symmetry and Depth Features
- Author
-
Huanqiang Zeng, Yu Tian, Kai-Kuang Ma, Jing Chen, Junhui Hou, and Jianqing Zhu
- Subjects
Similarity (geometry) ,business.industry ,Computer science ,Machine vision ,Image quality ,Distortion (optics) ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,02 engineering and technology ,Luminance ,Feature (computer vision) ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,020201 artificial intelligence & image processing ,Artificial intelligence ,Electrical and Electronic Engineering ,Symmetry (geometry) ,business ,Light field - Abstract
This paper presents a new full-reference image quality assessment (IQA) method for conducting the perceptual quality evaluation of light field (LF) images, called the symmetry and depth feature-based model (SDFM). Specifically, the radial symmetry transform is first employed on the luminance components of the reference and distorted LF images to extract their symmetry features for capturing the spatial quality of each view of an LF image. Second, a depth feature extraction scheme is designed to explore the geometry information inherent in an LF image for modeling its LF structural consistency across views. Similarity measurements are subsequently conducted to compare their symmetry and depth features separately, and the results are further combined to yield the quality score for the distorted LF image. Note that the proposed SDFM, which explores symmetry and depth features, conforms to the human visual system, which identifies objects by sensing their structures and geometries. Extensive simulation results on the dense light field dataset have clearly shown that the proposed SDFM outperforms multiple classical and recently developed IQA algorithms on quality evaluation of LF images.
- Published
- 2021
- Full Text
- View/download PDF
32. Patch Based Video Summarization With Block Sparse Representation
- Author
-
Shuai Wan, Zhiyong Wang, Shaohui Mei, Mingyang Ma, Junhui Hou, and David Dagan Feng
- Subjects
business.industry ,Computer science ,Deep learning ,Feature extraction ,Pattern recognition ,02 engineering and technology ,Sparse approximation ,Partition (database) ,Automatic summarization ,Matching pursuit ,Computer Science Applications ,Visualization ,Histogram ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,020201 artificial intelligence & image processing ,Artificial intelligence ,Electrical and Electronic Engineering ,business - Abstract
In recent years, sparse representation has been successfully utilized for video summarization (VS). However, most of the sparse representation based VS methods characterize each video frame with global features. As a result, some important local details can be neglected by global features, which may compromise the performance of summarization. In this paper, we propose to partition each video frame into a number of patches and characterize each patch with global features. Instead of concatenating the features of each patch and utilizing conventional sparse representation, we formulate the VS problem with such video frame representation as block sparse representation by considering each video frame as a block containing a number of patches. By taking the reconstruction constraint into account, we devise a simultaneous version of the block-based OMP (Orthogonal Matching Pursuit) algorithm, namely SBOMP, to solve the proposed model. The proposed model is further extended to a neighborhood-based model that considers temporally adjacent frames as a super block. This is one of the first sparse representation based VS methods to take both spatial and temporal contexts into account with blocks. Experimental results on two widely used VS datasets have demonstrated that our proposed methods present clear superiority over existing sparse representation based VS methods and are highly comparable to some deep learning ones that require supervision information for extra model training.
- Published
- 2021
- Full Text
- View/download PDF
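The simultaneous greedy pursuit underlying SBOMP can be sketched with plain simultaneous OMP (SOMP), i.e., without the block structure or the neighborhood extension of the paper; the dictionary below is a random orthonormal toy, not video-frame features.

```python
import numpy as np

def somp(D, Y, k):
    """Simultaneous OMP: greedily pick k atoms shared by all columns of Y.

    D: (d, n) dictionary; Y: (d, m) signals that share a common support.
    Returns the selected atom indices and coefficients X such that
    D[:, idx] @ X approximates Y.
    """
    residual = Y.copy()
    idx = []
    X = np.zeros((0, Y.shape[1]))
    for _ in range(k):
        # Atom most correlated with the residual, summed over all signals.
        scores = np.abs(D.T @ residual).sum(axis=1)
        scores[idx] = -np.inf                    # never reselect an atom
        idx.append(int(np.argmax(scores)))
        X, *_ = np.linalg.lstsq(D[:, idx], Y, rcond=None)
        residual = Y - D[:, idx] @ X
    return idx, X

# Toy check with an orthonormal dictionary: signals built from atoms 0
# and 2 should make SOMP select exactly those two atoms.
rng = np.random.default_rng(1)
D, _ = np.linalg.qr(rng.normal(size=(8, 5)))   # 5 orthonormal atoms in R^8
Y = D[:, [0, 2]] @ rng.normal(size=(2, 3))     # 3 signals, shared support
idx, X = somp(D, Y, 2)
print(sorted(idx))  # -> [0, 2]
```

Summing correlations over all signals is what makes the selection "simultaneous": every column of Y must agree on which atom to add next, which is the same idea as forcing all patches of a frame block to share keyframes.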
33. Reduced Reference Perceptual Quality Model With Application to Rate Control for Video-Based Point Cloud Compression
- Author
-
Hui Yuan, Honglei Su, Huan Yang, Raouf Hamzaoui, Qi Liu, and Junhui Hou
- Subjects
Computer science ,Subjective test ,media_common.quotation_subject ,Mean opinion score ,Point cloud ,Brute-force search ,computer.software_genre ,Computer Graphics and Computer-Aided Design ,Color quantization ,Point cloud compression ,Rate–distortion optimization ,Rate-distortion optimization ,Metric (mathematics) ,Feature extraction ,Quality (business) ,Data mining ,Perceptual quality metric ,computer ,Encoder ,Software ,media_common - Abstract
In rate-distortion optimization, the encoder settings are determined by maximizing a reconstruction quality measure subject to a constraint on the bitrate. One of the main challenges of this approach is to define a quality measure that can be computed with low computational cost and correlates well with perceptual quality. While several quality measures that fulfil these two criteria have been developed for images and videos, no such measure exists for point clouds. We address this limitation for the video-based point cloud compression (V-PCC) standard by proposing a linear perceptual quality model whose variables are the V-PCC geometry and color quantization step sizes and whose coefficients can easily be computed from two features extracted from the original point cloud. Subjective quality tests with 400 compressed point clouds show that the proposed model correlates well with the mean opinion score, outperforming state-of-the-art full-reference objective measures in terms of Spearman rank-order and Pearson linear correlation coefficients. Moreover, we show that for the same target bitrate, rate-distortion optimization based on the proposed model offers higher perceptual quality than rate-distortion optimization based on exhaustive search with a point-to-point objective quality metric. Our datasets are publicly available at https://github.com/qdushl/Waterloo-Point-Cloud-Database-2.0.
- Published
- 2021
- Full Text
- View/download PDF
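The idea of a linear quality model in the two quantization step sizes can be sketched with synthetic data; the coefficients and noise level below are invented for illustration and are unrelated to the model fitted in the paper.

```python
import numpy as np

# Invented ground truth: quality drops linearly with both step sizes.
rng = np.random.default_rng(2)
qg = rng.uniform(10, 40, size=50)    # geometry quantization step sizes
qc = rng.uniform(10, 40, size=50)    # color quantization step sizes
mos = 5.0 - 0.05 * qg - 0.04 * qc + rng.normal(0.0, 0.05, size=50)

# Fit MOS = b0 + b1*qg + b2*qc by ordinary least squares.
A = np.column_stack([np.ones_like(qg), qg, qc])
coef, *_ = np.linalg.lstsq(A, mos, rcond=None)
print(coef)  # recovered coefficients, close to (5.0, -0.05, -0.04)
```

Because such a model is linear in the two encoder parameters, evaluating it during rate-distortion optimization costs almost nothing compared with computing a full objective quality metric on each candidate reconstruction.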
34. miR-338-3p Plays a Significant Role in Casticin-Induced Suppression of Acute Myeloid Leukemia via Targeting PI3K/Akt Pathway
- Author
-
Kewei Yu, Juan Wang, Junhui Hou, Lei Zhang, and Hui Liang
- Subjects
Flavonoids ,General Immunology and Microbiology ,Article Subject ,Core Binding Factor Alpha 1 Subunit ,General Medicine ,General Biochemistry, Genetics and Molecular Biology ,Leukemia, Myeloid, Acute ,Mice ,MicroRNAs ,Phosphatidylinositol 3-Kinases ,Cell Line, Tumor ,Animals ,Heterografts ,Humans ,Proto-Oncogene Proteins c-akt ,Signal Transduction - Abstract
Objective. Casticin is widely used in traditional herbal medicine for its anti-inflammatory and anticarcinogenic pharmacological properties. Also, microRNAs are indispensable oncogenes or cancer suppressors that are dysregulated in various diseases. In this study, we aimed to elucidate the mechanisms underlying the effects of casticin on the progression of acute myeloid leukemia (AML). Methods. CCK-8 assays and flow cytometry were utilized to measure the proliferation and apoptosis of AML cell lines, respectively, after treatment with different concentrations of casticin. The alteration of several microRNA expressions in response to casticin treatment was detected by qRT-PCR, and the activity of the PI3K/Akt pathway was evaluated through immunoblotting. Afterwards, the potential target gene of miR-338-3p was investigated by dual-luciferase reporter assay. To evaluate the role of miR-338-3p in the casticin-induced cellular phenotype changes, AML cells were transfected with miR-338-3p mimics or inhibitor and then subjected to proliferation and apoptosis analysis. Finally, a mouse xenograft model system was employed to investigate the role of casticin in AML progression in vivo. Results. Suppressed cellular proliferation and enhanced apoptosis were observed in HL-60 and THP-1 cells after exposure to casticin, accompanied by remarkable upregulation of miR-338-3p expression as well as a decline in the phosphorylation of PI3K and Akt proteins. RUNX2 was identified as a direct target of miR-338-3p, which might account for the findings that miR-338-3p knockdown enhanced PI3K/Akt pathway activity, whereas miR-338-3p overexpression inactivated this signaling pathway. In addition, inhibition of miR-338-3p expression attenuated the casticin-induced cell apoptosis and suppression of the PI3K/Akt pathway. Furthermore, casticin treatment retarded the tumor growth rate in mouse models, while elevating miR-338-3p expression and repressing the activity of the PI3K/Akt pathway in vivo. However, miR-338-3p depletion could also abolish the phenotypic alterations caused by casticin treatment. Conclusion. Casticin promotes AML cell apoptosis but inhibits AML cell proliferation in vitro and tumor growth in vivo by upregulating miR-338-3p, which targets RUNX2 and thereby inactivates the PI3K/Akt signaling pathway. Our results provide insights into the mechanisms underlying the action of casticin in the control of AML progression.
- Published
- 2022
35. Correlation Filter Tracking via Distractor-Aware Learning and Multi-Anchor Detection
- Author
-
Junhui Hou, Guochun Chen, Wenxiong Kang, Feiqi Deng, Gengzheng Pan, and Yongxin Zhou
- Subjects
Computer science ,business.industry ,Process (computing) ,Boundary (topology) ,Context (language use) ,02 engineering and technology ,Tracking (particle physics) ,Filter (video) ,Video tracking ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business - Abstract
Correlation filters have demonstrated their power in object tracking, benefiting from superior speed and competitive performance. However, existing correlation filter based trackers (CFTs) are fragile due to inherent defects caused by the boundary effect. To address this issue, we propose a novel correlation filter based tracking framework that integrates three highly collaborative components: a fast target proposal module, a distractor-aware filter, and a correlation filter based refiner. Specifically, the target proposal module efficiently determines target-like regions in the context, which provide target-like patches for learning a distractor-aware filter and for detection. The multi-region strategy enlarges the spatial field for learning and prediction, and the filter learned from both the target and distractors enhances its ability to identify background. Therefore, our method is capable of evaluating multiple candidates in a wider context with less risk of drifting to distractors, namely multi-anchor detection. Besides, the proposed Proposal-Detect-Refine hierarchical searching process progressively achieves data alignment between testing and training samples, which benefits reliable model prediction. A refiner is used to fine-tune positions after multi-anchor detection, lessening error accumulation and preventing the model from drifting. Comprehensive experiments on five challenging datasets, i.e., OTB2013, OTB2015, VOT2017, VOT19, and TC128, demonstrate that the proposed method achieves superior performance against state-of-the-art methods.
- Published
- 2020
- Full Text
- View/download PDF
36. Long Non-Coding RNA DARS-AS1 Contributes to Prostate Cancer Progression Through Regulating the MicroRNA-628-5p/MTDH Axis
- Author
-
Siqing Liu, Junhui Hou, Haitao Fan, Zuomin Xiao, and Jia Cui
- Subjects
0301 basic medicine ,Gene knockdown ,Competing endogenous RNA ,Cell growth ,fungi ,MTDH ,Biology ,medicine.disease_cause ,Non-coding RNA ,Antisense RNA ,Cell biology ,03 medical and health sciences ,030104 developmental biology ,0302 clinical medicine ,Oncology ,030220 oncology & carcinogenesis ,parasitic diseases ,microRNA ,medicine ,Carcinogenesis - Abstract
Purpose DARS antisense RNA 1 (DARS-AS1) is a long non-coding RNA that has been validated as a critical regulator in several human cancer types. Our study aimed to determine the expression profile of DARS-AS1 in prostate cancer (PCa) tissues and cell lines. Functional experiments were conducted to explore the detailed roles of DARS-AS1 in regulating PCa carcinogenesis. Furthermore, the detailed mechanisms by which DARS-AS1 regulates the oncogenicity of PCa cells were uncovered. Methods Reverse transcription quantitative polymerase chain reaction was performed to analyze DARS-AS1 expression in PCa tissues and cell lines. Cell Counting Kit-8 assays, flow cytometry analyses, Transwell assays, and tumor xenograft experiments were conducted to determine the regulatory effects of DARS-AS1 knockdown on the malignant phenotype of PCa cells. Bioinformatics analysis was performed to identify putative microRNAs (miRNAs) targeting DARS-AS1, and the direct interaction between DARS-AS1 and miR-628-5p was verified using RNA immunoprecipitation and luciferase reporter assays. Results DARS-AS1 was highly expressed in PCa tissues and cell lines. In vitro functional experiments demonstrated that DARS-AS1 depletion suppressed PCa cell proliferation, promoted cell apoptosis, and restricted cell migration and invasion. In vivo studies revealed that the downregulation of DARS-AS1 inhibited PCa tumor growth in nude mice. Mechanistic investigation verified that DARS-AS1 functioned as an endogenous miR-628-5p sponge in PCa cells and consequently promoted the expression of metadherin (MTDH). Furthermore, the involvement of the miR-628-5p/MTDH axis in DARS-AS1-mediated regulatory actions in PCa cells was verified using rescue experiments. Conclusion DARS-AS1 functioned as a competing endogenous RNA in PCa by adsorbing miR-628-5p and thereby increasing the expression of MTDH, resulting in enhanced PCa progression. The identification of a novel DARS-AS1/miR-628-5p/MTDH regulatory network in PCa cells may offer a new theoretical basis for the development of promising therapeutic targets.
- Published
- 2020
- Full Text
- View/download PDF
37. Hyperspectral Image Classification via Sparse Representation With Incremental Dictionaries
- Author
-
Yuheng Jia, Shaohui Mei, Qian Du, Shujun Yang, and Junhui Hou
- Subjects
Pixel ,business.industry ,Computer science ,Hyperspectral imaging ,Pattern recognition ,Sparse approximation ,Geotechnical Engineering and Engineering Geology ,Discriminative model ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Representation (mathematics) ,Spatial analysis ,Sparse matrix - Abstract
In this letter, we propose a new sparse representation (SR)-based method for hyperspectral image (HSI) classification, namely SR with incremental dictionaries (SRID). Our SRID boosts existing SR-based HSI classification methods significantly, especially for tasks with extremely limited training samples. Specifically, by exploiting unlabeled pixels with spatial information and multiple-feature-based SR classifiers, we select and add some of them to the dictionaries in an iterative manner, such that the representation abilities of the dictionaries are progressively augmented and the resulting representations likewise become more discriminative. In addition, to deal with large-scale data sets, we use a certainty sampling strategy to control the sizes of the dictionaries, such that the computational complexity is well balanced. Experiments over two benchmark data sets show that our proposed method achieves higher classification accuracy than state-of-the-art methods, i.e., the overall classification accuracy can improve by more than 4%.
- Published
- 2020
- Full Text
- View/download PDF
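The class-wise residual rule at the heart of SR-based HSI classification can be sketched as follows; for brevity, an unconstrained least-squares fit stands in for the sparse coding, and the dictionaries are random toys rather than spectral signatures or the paper's incremental dictionaries.

```python
import numpy as np

def classify_by_residual(x, class_dicts):
    """Assign x to the class whose sub-dictionary reconstructs it best.

    class_dicts: one (d, n_c) dictionary per class. A plain least-squares
    fit stands in for the sparse coding step of a real SR classifier.
    """
    residuals = [np.linalg.norm(x - D @ np.linalg.lstsq(D, x, rcond=None)[0])
                 for D in class_dicts]
    return int(np.argmin(residuals))

# Toy example: two classes living in different random subspaces of R^10.
rng = np.random.default_rng(3)
D0 = rng.normal(size=(10, 3))
D1 = rng.normal(size=(10, 3))
x = D1 @ np.array([1.0, -2.0, 0.5])   # x lies in class 1's subspace
print(classify_by_residual(x, [D0, D1]))  # -> 1
```

Growing a dictionary with confidently classified unlabeled pixels, as SRID does, enlarges the subspace each class can span, which is why the residual rule becomes more reliable as iterations proceed.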
38. Going From RGB to RGBD Saliency: A Depth-Guided Transformation Model
- Author
-
Runmin Cong, Junhui Hou, Qingming Huang, Jianjun Lei, Sam Kwong, and Huazhu Fu
- Subjects
Computer science ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Initialization ,02 engineering and technology ,Depth map ,Salience (neuroscience) ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Electrical and Electronic Engineering ,ComputingMethodologies_COMPUTERGRAPHICS ,business.industry ,020206 networking & telecommunications ,Object detection ,Computer Science Applications ,Human-Computer Interaction ,Transformation (function) ,Control and Systems Engineering ,Feature (computer vision) ,RGB color model ,020201 artificial intelligence & image processing ,Artificial intelligence ,Focus (optics) ,business ,Software ,Information Systems - Abstract
Depth information has been demonstrated to be useful for saliency detection. However, the existing methods for RGBD saliency detection mainly focus on designing straightforward and comprehensive models, while ignoring the transferable ability of the existing RGB saliency detection models. In this article, we propose a novel depth-guided transformation model (DTM) going from RGB saliency to RGBD saliency. The proposed model includes three components, that is: 1) multilevel RGBD saliency initialization; 2) depth-guided saliency refinement; and 3) saliency optimization with depth constraints. The explicit depth feature is first utilized in the multilevel RGBD saliency model to initialize the RGBD saliency by combining the global compactness saliency cue and local geodesic saliency cue. The depth-guided saliency refinement is used to further highlight the salient objects and suppress the background regions by introducing the prior depth domain knowledge and prior refined depth shape. Benefiting from the consistency of the entire object in the depth map, we formulate an optimization model to attain more consistent and accurate saliency results via an energy function, which integrates the unary data term, color smooth term, and depth consistency term. Experiments on three public RGBD saliency detection benchmarks demonstrate the effectiveness and performance improvement of the proposed DTM from RGB to RGBD saliency.
- Published
- 2020
- Full Text
- View/download PDF
39. Video summarization via block sparse dictionary selection
- Author
-
Zhiyong Wang, Shaohui Mei, Junhui Hou, David Dagan Feng, Mingyang Ma, and Shuai Wan
- Subjects
0209 industrial biotechnology ,Computer science ,business.industry ,Cognitive Neuroscience ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,02 engineering and technology ,Video processing ,Sparse approximation ,Matching pursuit ,Automatic summarization ,Computer Science Applications ,020901 industrial engineering & automation ,Artificial Intelligence ,Robustness (computer science) ,Outlier ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Video browsing ,Artificial intelligence ,business - Abstract
The explosive growth of video data has raised new challenges for many video processing tasks such as video browsing and retrieval; hence, effective and efficient video summarization (VS) is urgently demanded to automatically condense a video into a succinct version. Recent years have witnessed advancements in sparse representation based approaches for VS. However, video frames are analyzed individually for keyframe selection in existing methods, which can lead to redundancy among selected keyframes and poor robustness to outlier frames. Because adjacent frames are visually similar, candidate keyframes often occur in temporal blocks, in addition to being sparsely present. Therefore, in this paper, the block-sparsity of candidate keyframes is taken into consideration, by which the VS problem is formulated as a block sparse dictionary selection model. Moreover, a simultaneous block version of the Orthogonal Matching Pursuit (SBOMP) algorithm is designed for model optimization. Two keyframe selection strategies are also explored for each block. Experimental results on two benchmark datasets, namely the VSumm and TVSum datasets, demonstrate that the proposed SBOMP based VS method clearly outperforms several state-of-the-art sparse representation based methods in terms of F-score, redundancy among keyframes, and robustness to outlier frames.
- Published
- 2020
- Full Text
- View/download PDF
40. 3D Point Cloud Attribute Compression Using Geometry-Guided Sparse Representation
- Author
-
Kai-Kuang Ma, Huanqiang Zeng, Hui Yuan, Junhui Hou, Shuai Gu, and School of Electrical and Electronic Engineering
- Subjects
Optimization problem ,Sparse Representation ,Computer science ,Point cloud ,3D Point Cloud ,02 engineering and technology ,Sparse approximation ,Computer Graphics and Computer-Aided Design ,Redundancy (information theory) ,Compression (functional analysis) ,Electrical and electronic engineering [Engineering] ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Entropy encoding ,Algorithm ,Software ,Block (data storage) ,Data compression - Abstract
3D point clouds associated with attributes are considered a promising paradigm for immersive communication. However, the corresponding compression schemes for this medium are still in their infancy. Moreover, in contrast to conventional image/video compression, compressing 3D point cloud data is a more challenging task, owing to the irregular structure of point clouds. In this paper, we propose a novel and effective compression scheme for the attributes of voxelized 3D point clouds. In the first stage, an input voxelized 3D point cloud is divided into blocks of equal size. Then, to deal with the irregular structure of 3D point clouds, a geometry-guided sparse representation (GSR) is proposed to eliminate the redundancy within each block, which is formulated as an ℓ0-norm regularized optimization problem. Also, an inter-block prediction scheme is applied to remove the redundancy between blocks. Finally, by quantitatively analyzing the characteristics of the transform coefficients produced by GSR, an effective entropy coding strategy tailored to our GSR is developed to generate the bitstream. Experimental results over various benchmark datasets show that the proposed compression scheme achieves better rate-distortion performance and visual quality compared with state-of-the-art methods.
This work was supported in part by the National Natural Science Foundation of China under Grant 61871434, Grant 61871342, and Grant 61571274, in part by the Natural Science Foundation for Outstanding Young Scholars of Fujian Province under Grant 2019J06017, in part by the Hong Kong RGC Early Career Scheme Funds under Grant 9048123, in part by the Shandong Provincial Key Research and Development Plan under Grant 2017CXGC150, in part by the Fujian-100 Talented People Program, in part by the High-level Talent Innovation Program of Quanzhou City under Grant 2017G027, in part by the Promotion Program for Young and Middle-aged Teacher in Science and Technology Research of Huaqiao University under Grant ZQN-YX403, and in part by the High-Level Talent Project Foundation of Huaqiao University under Grant 14BS201 and Grant 14BS204. Part of this article was presented at IEEE ICASSP 2017.
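The abstract's core step is an ℓ0-norm regularized sparse approximation of each block's attributes. The paper's geometry-guided dictionary is not specified in the abstract, so the following is only a minimal sketch of the generic ℓ0-constrained problem, solved greedily with orthogonal matching pursuit over a random dictionary; the function and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y using at most
    k atoms of dictionary D (a stand-in for the l0-norm regularized
    problem mentioned in the abstract)."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit of y on the selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
x_true = np.zeros(128)
x_true[[3, 40, 99]] = [1.5, -2.0, 0.7]      # a 3-sparse ground truth
y = D @ x_true
x_hat = omp(D, y, k=3)
print(np.count_nonzero(x_hat))              # sparse code with at most 3 nonzeros
```

In the paper's setting the block's attribute signal would play the role of `y`, with quantization and entropy coding of the resulting coefficients following afterwards.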
- Published
- 2020
- Full Text
- View/download PDF
41. Non-Negative Transfer Learning With Consistent Inter-Domain Distribution
- Author
-
Yuheng Jia, Junhui Hou, and Zhihao Peng
- Subjects
Optimization problem ,Inter-domain ,Iterative method ,Computer science ,Applied Mathematics ,Negative transfer ,020206 networking & telecommunications ,02 engineering and technology ,Kernel (linear algebra) ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,Transfer of learning ,Algorithm ,MNIST database - Abstract
In this letter, we propose a novel transfer learning approach that simultaneously exploits intra-domain differentiation and inter-domain correlation, addressing two drawbacks from which many existing transfer learning methods suffer: they either cannot handle negative samples or impose strict assumptions on the distribution. Specifically, a sample selection strategy is introduced to handle negative samples by using the local geometric structure and the label information of the source samples. Furthermore, pseudo target labels are imposed to relax the assumption on the inter-domain distribution while accounting for inter-domain correlation. An efficient alternating iterative algorithm is then proposed to solve the formulated optimization problem with multiple constraints. Extensive experiments on eleven real-world datasets show the superiority of our method over state-of-the-art approaches; for example, it achieves an 11.23% improvement on the MNIST dataset.
- Published
- 2020
- Full Text
- View/download PDF
42. 3D Point Cloud Attribute Compression via Graph Prediction
- Author
-
Hui Yuan, Shuai Gu, Huanqiang Zeng, and Junhui Hou
- Subjects
Computer science ,Applied Mathematics ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Point cloud ,Entropy (information theory) ,020206 networking & telecommunications ,02 engineering and technology ,Electrical and Electronic Engineering ,External Data Representation ,Algorithm ,Graph - Abstract
3D point clouds associated with attributes are considered a promising data representation for immersive communication. The large amount of data, however, poses great challenges to the subsequent transmission and storage processes. In this letter, we propose a new compression scheme for the color attribute of static voxelized 3D point clouds. Specifically, we first partition the colors of a 3D point cloud into clusters by applying a k-d tree to the geometry information; the clusters are then successively encoded. To eliminate redundancy, we propose a novel prediction module, namely graph prediction, in which a small number of representative points selected from previously encoded clusters are used to predict the points to be encoded, by exploring the underlying graph structure constructed from the geometry information. Furthermore, the prediction residuals are transformed with the graph transform, and the resulting transform coefficients are uniformly quantized and entropy encoded. Experimental results show that the proposed compression scheme achieves better rate-distortion performance at a lower computational cost than state-of-the-art methods.
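The first step the abstract describes, partitioning points into clusters with a k-d tree on the geometry, can be sketched as a recursive median split; the splitting rule and cluster-size bound below are illustrative assumptions, as the abstract does not give the exact tree parameters.

```python
import numpy as np

def kd_clusters(points, max_size):
    """Recursively split the point set along the axis of largest spread
    until every cluster holds at most max_size points (a sketch of the
    k-d tree partitioning step described in the abstract)."""
    if len(points) <= max_size:
        return [points]
    axis = int(np.argmax(np.ptp(points, axis=0)))   # widest dimension
    order = np.argsort(points[:, axis])
    mid = len(points) // 2                          # median split
    left, right = points[order[:mid]], points[order[mid:]]
    return kd_clusters(left, max_size) + kd_clusters(right, max_size)

rng = np.random.default_rng(1)
pts = rng.random((1000, 3))                         # toy voxelized geometry
clusters = kd_clusters(pts, max_size=64)
print(len(clusters), max(len(c) for c in clusters))
```

Each resulting cluster would then be encoded successively, with graph prediction and the graph transform applied per cluster.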
- Published
- 2020
- Full Text
- View/download PDF
43. Fire Resistance Analysis of Prestressed Steel Structure Based on the Study of High Temperature Mechanical Properties of Steel Strands
- Author
-
Junhui Hou, Hongming Li, Dan Xie, Lulu Qian, and Baijian Tang
- Subjects
Physics::Classical Physics - Abstract
On the basis of existing test data on the high-temperature mechanical properties of grade-1860 steel strands, a regression model of the high-temperature behavior of steel strands is obtained that can be used for theoretical analysis and numerical calculation. Based on a non-stationary temperature-field model for fires in tall, large-space buildings, a nonlinear finite element method that accounts for the time-integral effect is used to establish a numerical fire-resistance model of large-span prestressed steel structures. Through a fire-resistance calculation example of a beam string structure, the influence of different fire-source positions on the fire resistance of prestressed steel structures is discussed.
- Published
- 2022
- Full Text
- View/download PDF
44. WarpingGAN: Warping Multiple Uniform Priors for Adversarial 3D Point Cloud Generation
- Author
-
Yingzhi Tang, Yue Qian, Qijian Zhang, Yiming Zeng, Junhui Hou, and Xuefei Zhe
- Subjects
FOS: Computer and information sciences ,Artificial Intelligence (cs.AI) ,Computer Science - Artificial Intelligence ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition - Abstract
We propose WarpingGAN, an effective and efficient 3D point cloud generation network. Unlike existing methods that generate point clouds by directly learning mapping functions between latent codes and 3D shapes, WarpingGAN learns a unified local-warping function that warps multiple identical pre-defined priors (i.e., sets of points uniformly distributed on regular 3D grids) into 3D shapes driven by local structure-aware semantics. In addition, we ingeniously exploit the principle of the discriminator and tailor a stitching loss to eliminate the gaps between the partitions of a generated shape that correspond to different priors, boosting quality. Owing to this novel generating mechanism, WarpingGAN, a single lightweight network after one-time training, is capable of efficiently generating uniformly distributed 3D point clouds at various resolutions. Extensive experimental results demonstrate the superiority of WarpingGAN over state-of-the-art methods in terms of quantitative metrics, visual quality, and efficiency. The source code is publicly available at https://github.com/yztang4/WarpingGAN.git. (This paper has been accepted by CVPR 2022.)
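The pre-defined priors the abstract mentions are simply points uniformly spaced on a regular 3D grid. A minimal sketch of constructing one such prior (the unit-cube domain and resolution are illustrative assumptions):

```python
import numpy as np

def grid_prior(n_per_axis):
    """A regular 3D grid prior in [0, 1]^3, as the abstract describes:
    an identical set of uniformly spaced points that the generator
    would warp into a 3D shape."""
    axis = np.linspace(0.0, 1.0, n_per_axis)
    xx, yy, zz = np.meshgrid(axis, axis, axis, indexing="ij")
    return np.stack([xx, yy, zz], axis=-1).reshape(-1, 3)

prior = grid_prior(8)
print(prior.shape)   # 8**3 = 512 points, 3 coordinates each
```

In the paper's setting, several identical copies of such a prior would be fed to the learned local-warping function, and the stitching loss would close the seams between the resulting partitions.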
- Published
- 2022
- Full Text
- View/download PDF
45. Content-aware Warping for View Synthesis
- Author
-
Mantang Guo, Junhui Hou, Jing Jin, Hui Liu, Huanqiang Zeng, and Jiwen Lu
- Subjects
FOS: Computer and information sciences ,Computational Theory and Mathematics ,Artificial Intelligence ,Applied Mathematics ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,Computer Vision and Pattern Recognition ,Software - Abstract
Existing image-based rendering methods usually adopt a depth-based image warping operation to synthesize novel views. In this paper, we argue that the essential limitations of the traditional warping operation are its limited neighborhood and its purely distance-based interpolation weights. To this end, we propose content-aware warping, which adaptively learns the interpolation weights for pixels of a relatively large neighborhood from their contextual information via a lightweight neural network. Based on this learnable warping module, we propose a new end-to-end learning-based framework for novel view synthesis from a set of input source views, in which two additional modules, namely confidence-based blending and feature-assistant spatial refinement, are naturally introduced to handle the occlusion issue and to capture the spatial correlation among pixels of the synthesized view, respectively. We also propose a weight-smoothness loss term to regularize the network. Experimental results on light-field datasets with wide baselines and on multi-view datasets show that the proposed method significantly outperforms state-of-the-art methods both quantitatively and visually. The source code will be publicly available at https://github.com/MantangGuo/CW4VS.
- Published
- 2022
- Full Text
- View/download PDF
46. Effect of circular RNAs and N6-methyladenosine (m6A) modification on cancer biology
- Author
-
Gong Zhang, Junhui Hou, Chenxue Mei, Xia Wang, Yuan Wang, and Kefeng Wang
- Subjects
Pharmacology ,General Medicine - Published
- 2023
- Full Text
- View/download PDF
47. Semi-supervised adaptive kernel concept factorization
- Author
-
Wenhui Wu, Junhui Hou, Shiqi Wang, Sam Kwong, and Yu Zhou
- Subjects
Artificial Intelligence ,Signal Processing ,Computer Vision and Pattern Recognition ,Software - Published
- 2023
- Full Text
- View/download PDF
48. Task-Oriented Compact Representation of 3D Point Clouds via A Matrix Optimization-Driven Network
- Author
-
Yue Qian, Junhui Hou, Qijian Zhang, Yiming Zeng, Sam Kwong, and Ying He
- Subjects
Media Technology ,Electrical and Electronic Engineering - Published
- 2023
- Full Text
- View/download PDF
49. Attention-driven Graph Clustering Network
- Author
-
Junhui Hou, Hui Liu, Zhihao Peng, and Yuheng Jia
- Subjects
FOS: Computer and information sciences ,Computer science ,business.industry ,Computer Vision and Pattern Recognition (cs.CV) ,Node (networking) ,Computer Science - Computer Vision and Pattern Recognition ,Pattern recognition ,Topological graph ,Multimedia (cs.MM) ,Discriminative model ,Feature (computer vision) ,Graph (abstract data type) ,Artificial intelligence ,business ,Cluster analysis ,Feature learning ,Computer Science - Multimedia ,Clustering coefficient - Abstract
The combination of a traditional convolutional network (i.e., an auto-encoder) and a graph convolutional network has attracted much attention in clustering: the auto-encoder extracts node attribute features, while the graph convolutional network captures topological graph features. However, existing works (i) lack a flexible combination mechanism to adaptively fuse these two kinds of features into a discriminative representation and (ii) overlook the multi-scale information embedded at different layers for the subsequent cluster assignment, leading to inferior clustering results. To this end, we propose a novel deep clustering method named Attention-driven Graph Clustering Network (AGCN). Specifically, AGCN exploits a heterogeneity-wise fusion module to dynamically fuse the node attribute features and the topological graph features. Moreover, AGCN develops a scale-wise fusion module to adaptively aggregate the multi-scale features embedded at different layers. Based on a unified optimization framework, AGCN can jointly perform feature learning and cluster assignment in an unsupervised fashion. Compared with existing deep clustering methods, our method is more flexible and effective since it comprehensively considers the rich, discriminative information embedded in the network and directly produces the clustering results. Extensive quantitative and qualitative results on commonly used benchmark datasets validate that AGCN consistently outperforms state-of-the-art methods.
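The heterogeneity-wise fusion the abstract describes, adaptively mixing auto-encoder and GCN features per node, can be sketched with softmax-normalized attention weights. The scoring vectors and shapes below are illustrative assumptions standing in for AGCN's learned attention parameters, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse(h_ae, h_gcn, w_ae, w_gcn):
    """Attention-style fusion of an auto-encoder feature h_ae and a GCN
    feature h_gcn (both n x d). w_ae and w_gcn are (d,)-scoring vectors
    standing in for learned attention parameters; per-node weights are
    softmax-normalized, then used to mix the two feature types."""
    scores = np.stack([h_ae @ w_ae, h_gcn @ w_gcn], axis=1)  # (n, 2)
    alpha = softmax(scores, axis=1)                          # per-node weights
    return alpha[:, :1] * h_ae + alpha[:, 1:] * h_gcn

rng = np.random.default_rng(2)
h_ae, h_gcn = rng.standard_normal((5, 4)), rng.standard_normal((5, 4))
w_ae, w_gcn = rng.standard_normal(4), rng.standard_normal(4)
fused = fuse(h_ae, h_gcn, w_ae, w_gcn)
print(fused.shape)   # (5, 4): one fused feature per node
```

Because the softmax weights sum to one per node, each fused feature is a convex combination of the two inputs, which is what lets the network lean toward whichever feature type is more informative for a given node.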
- Published
- 2021
- Full Text
- View/download PDF
50. Learning Spatial-angular Fusion for Compressive Light Field Imaging in a Cycle-consistent Framework
- Author
-
Jing Jin, Junhui Hou, Xianqiang Lyu, Mantang Guo, Zhiyu Zhu, and Huanqiang Zeng
- Subjects
Fusion ,Computer science ,business.industry ,Deep learning ,Feature extraction ,Posterior probability ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Inverse problem ,Set (abstract data type) ,Artificial intelligence ,Coded aperture ,business ,Algorithm ,Light field - Abstract
This paper investigates 4-D light field (LF) reconstruction from 2-D measurements captured by a coded aperture camera. To tackle this ill-posed inverse problem, we propose a cycle-consistent reconstruction network (CR-Net). Specifically, based on the intrinsic linear imaging model of the coded aperture, CR-Net reconstructs an LF by progressively eliminating the residuals between the measurements projected from the reconstructed LF and the input measurements. Moreover, to address the crucial issue of extracting representative features from high-dimensional LF data efficiently and effectively, we formulate the problem in a probability space and propose to approximate a posterior distribution over a set of carefully defined LF processing events, including both layer-wise spatial-angular feature extraction and network-level feature aggregation. Through drop-path from a densely connected template network, we derive an adaptively learned spatial-angular fusion strategy, in sharp contrast to existing approaches that combine spatial and angular features empirically. Extensive experiments on both simulated measurements and measurements from a real coded aperture camera demonstrate the significant advantage of our method over state-of-the-art ones; e.g., our method improves the reconstruction quality by 4.5 dB.
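The linear imaging model of the coded aperture that the abstract relies on reduces to a transmittance-weighted sum of the light field's angular views. A minimal sketch of that forward model (the array shapes and toy sizes are illustrative assumptions):

```python
import numpy as np

def coded_aperture_measure(lf, mask):
    """Linear imaging model of a coded aperture camera: the 2-D sensor
    measurement is the transmittance-weighted sum of the LF's angular
    views. lf has shape (U, V, H, W); mask has shape (U, V)."""
    return np.tensordot(mask, lf, axes=([0, 1], [0, 1]))

rng = np.random.default_rng(3)
lf = rng.random((5, 5, 32, 32))        # toy 4-D light field (angular x spatial)
mask = rng.random((5, 5))              # aperture transmittance code
y = coded_aperture_measure(lf, mask)
print(y.shape)                         # (32, 32) measurement
```

A residual-elimination iteration in the abstract's sense would compare `coded_aperture_measure(lf_reconstructed, mask)` against the captured `y` and feed the difference back to refine the reconstruction.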
- Published
- 2021
- Full Text
- View/download PDF