35 results for "Re-parameterization"
Search Results
2. Residual trio feature network for efficient super-resolution.
- Author
- Chen, Junfeng, Mao, Mao, Guan, Azhu, and Ayush, Altangerel
- Abstract
Deep learning-based approaches have demonstrated impressive performance in single-image super-resolution (SISR). Efficient super-resolution trades some reconstructed-image quality for fewer parameters and FLOPs, so ensuring efficient reconstruction while improving the model's reconstruction quality is a significant challenge. This paper proposes a trio branch module (TBM) based on structural re-parameterization. TBM achieves an equivalence transformation through structural re-parameterization operations, which use a complex network structure in the training phase and convert it to a more lightweight structure in the inference phase, achieving efficient inference while maintaining accuracy. Based on the TBM, we further design a lightweight version of enhanced spatial attention (ESA-mini) and the residual trio feature block (RTFB). Multiple RTFBs are then combined to construct the residual trio feature network (RTFN). Finally, we introduce a localized contrast loss better suited to the super-resolution task, which enhances the reconstruction quality of the super-resolution model. Experiments show that the proposed RTFN outperforms other state-of-the-art efficient super-resolution methods in terms of inference speed and reconstruction quality. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
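The equivalence transformation described in the TBM abstract above (and reused by several later results) rests on a simple algebraic fact: parallel convolution branches applied to the same input can be folded into a single kernel once training is done. As a hedged, minimal single-channel sketch of that folding step — all function names (`conv3x3`, `fuse_branches`) are invented for illustration and are not taken from any of these papers:

```python
def conv3x3(img, k, bias=0.0):
    """'Same'-padded single-channel convolution with a 3x3 kernel."""
    h, w = len(img), len(img[0])
    out = [[bias] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            for di in range(3):
                for dj in range(3):
                    y, x = i + di - 1, j + dj - 1
                    if 0 <= y < h and 0 <= x < w:
                        out[i][j] += k[di][dj] * img[y][x]
    return out

def fuse_branches(k3, b3, k1, b1):
    """Fold parallel 1x1 and identity branches into the 3x3 kernel.

    Training-time block:  conv3x3(x) + conv1x1(x) + x
    Inference-time block: one conv3x3 with the fused kernel/bias.
    """
    fused = [row[:] for row in k3]
    fused[1][1] += k1 + 1.0   # 1x1 weight and identity both act on the centre tap
    return fused, b3 + b1     # biases simply add

# Tiny check that the two forms are numerically identical.
img = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
k3 = [[0.1, -0.2, 0.3], [0.0, 0.5, -0.1], [0.2, 0.1, 0.0]]
b3, k1, b1 = 0.25, -0.4, 0.1

branched = conv3x3(img, k3, b3)
for i in range(3):
    for j in range(3):
        branched[i][j] += k1 * img[i][j] + b1 + img[i][j]  # 1x1 branch + identity branch

fk, fb = fuse_branches(k3, b3, k1, b1)
fused_out = conv3x3(img, fk, fb)
assert all(abs(branched[i][j] - fused_out[i][j]) < 1e-9
           for i in range(3) for j in range(3))
```

Real re-parameterized blocks do this per output channel with BatchNorm folded in first, but the identity is the same: the multi-branch and single-branch forms compute exactly the same function, so inference pays for only one convolution.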
3. Re-Parameterization After Pruning: Lightweight Algorithm Based on UAV Remote Sensing Target Detection.
- Author
- Yang, Yang, Song, Pinde, Wang, Yongchao, and Cao, Lijia
- Abstract
Lightweight object detection algorithms play a paramount role in unmanned aerial vehicle (UAV) remote sensing, which requires target detection algorithms with high inference speed and high detection accuracy. At present, most lightweight object detection algorithms achieve fast inference, but their detection precision is not satisfactory. Consequently, this paper presents a refined iteration of a lightweight object detection algorithm to address these issues. MobileNetV3, augmented with the efficient channel attention (ECA) module, is used as the backbone network of the model. In addition, the focal and efficient intersection over union (FocalEIoU) loss is used to improve the regression performance of the algorithm and reduce the false-negative rate. Furthermore, the entire model is pruned using a convolution kernel pruning method. After pruning, model parameters and floating-point operations (FLOPs) on the VisDrone and DIOR datasets are reduced to 1.2 M and 1.5 M and 6.2 G and 6.5 G, respectively. The pruned model achieves inference speeds of 49 frames per second (FPS) and 44 FPS on Jetson AGX Xavier for the VisDrone and DIOR datasets, respectively. To fully exploit the performance of the pruned model, a plug-and-play structural re-parameterization fine-tuning method is proposed. The experimental results show that this fine-tuning method improves mAP@0.5 and mAP@0.5:0.95 by 0.4% on the VisDrone dataset and increases mAP@0.5:0.95 by 0.5% on the DIOR dataset. The proposed algorithm outperforms other mainstream lightweight object detection algorithms in terms of parameters, FLOPs, mAP@0.5, and mAP@0.5:0.95 (except that its FLOPs are higher than SSDLite's and its mAP@0.5 is below YOLOv7-tiny's). Furthermore, practical validation tests have also demonstrated that the proposed algorithm significantly reduces instances of missed detection and duplicate detection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
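The record above combines convolution kernel pruning with re-parameterized fine-tuning. The abstract does not state the pruning criterion; a common choice, shown here purely as an assumption (the helper names `l1_norm` and `prune_filters` are invented), is to rank filters by L1 norm and keep the largest:

```python
def l1_norm(filt):
    """Sum of absolute weights of one filter (any nesting depth of lists)."""
    return sum(l1_norm(v) if isinstance(v, list) else abs(v) for v in filt)

def prune_filters(filters, keep_ratio):
    """Keep the `keep_ratio` fraction of filters with the largest L1 norm.

    Returns the indices of surviving filters in their original order, so the
    next layer's input channels can be pruned to match.
    """
    n_keep = max(1, int(len(filters) * keep_ratio))
    ranked = sorted(range(len(filters)),
                    key=lambda i: l1_norm(filters[i]), reverse=True)
    return sorted(ranked[:n_keep])

filters = [
    [[0.1, -0.1], [0.0, 0.05]],   # small norm  -> pruned
    [[1.0, -2.0], [0.5, 0.25]],   # large norm  -> kept
    [[0.4, 0.4], [-0.4, 0.4]],    # medium norm -> kept at a 2/3 keep ratio
]
print(prune_filters(filters, 2 / 3))  # -> [1, 2]
```

After pruning, the surviving (smaller) network is what the paper's fine-tuning stage wraps in extra re-parameterizable branches, which are merged away again before deployment.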
4. Residual trio feature network for efficient super-resolution
- Author
- Junfeng Chen, Mao Mao, Azhu Guan, and Altangerel Ayush
- Subjects
- Image inpainting, Image super-resolution, Re-parameterization, Electronic computers. Computer science, QA75.5-76.95, Information technology, T58.5-58.64
- Abstract
Deep learning-based approaches have demonstrated impressive performance in single-image super-resolution (SISR). Efficient super-resolution trades some reconstructed-image quality for fewer parameters and FLOPs, so ensuring efficient reconstruction while improving the model's reconstruction quality is a significant challenge. This paper proposes a trio branch module (TBM) based on structural re-parameterization. TBM achieves an equivalence transformation through structural re-parameterization operations, which use a complex network structure in the training phase and convert it to a more lightweight structure in the inference phase, achieving efficient inference while maintaining accuracy. Based on the TBM, we further design a lightweight version of enhanced spatial attention (ESA-mini) and the residual trio feature block (RTFB). Multiple RTFBs are then combined to construct the residual trio feature network (RTFN). Finally, we introduce a localized contrast loss better suited to the super-resolution task, which enhances the reconstruction quality of the super-resolution model. Experiments show that the proposed RTFN outperforms other state-of-the-art efficient super-resolution methods in terms of inference speed and reconstruction quality.
- Published
- 2024
- Full Text
- View/download PDF
5. Lightweight detection model for coal gangue identification based on improved YOLOv5s.
- Author
- Shang, Deyong, Lv, Zhibin, Gao, Zehua, and Li, Yuntao
- Abstract
Focusing on the issues of complex models, high computational cost, and low identification speed in existing coal gangue image object detection algorithms, an optimized lightweight YOLOv5s detection model for coal gangue is proposed. ShuffleNetV2 is used as the backbone network, with a convolution pooling module at the input end replacing the original convolution module. Combining the re-parameterization idea of RepVGG and introducing depthwise separable convolution, a neck feature fusion network is constructed, and the WIoU function is used as the loss function. The experimental findings indicate that the improved model maintains the same accuracy while the number of parameters is only 5.1% of the original, the computational cost is reduced to 6.3% of the original, and identification speed is improved by 30.9% on GPU and 4 times on CPU. This method significantly reduces model complexity and improves detection speed while maintaining detection accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. RepDNet: A re-parameterization despeckling network for autonomous underwater side-scan sonar imaging with prior-knowledge customized convolution
- Author
- Zhuoyi Li, Zhisen Wang, Deshan Chen, Tsz Leung Yip, and Angelo P. Teixeira
- Subjects
- Side-scan sonar, Sonar image despeckling, Domain knowledge, Re-parameterization, Military Science
- Abstract
Side-scan sonar (SSS) is now a prevalent instrument for large-scale seafloor topography measurements, deployable on an autonomous underwater vehicle (AUV) to execute fully automated underwater acoustic scanning imaging along a predetermined trajectory. However, SSS images often suffer from speckle noise caused by mutual interference between echoes, and limited AUV computational resources further hinder noise suppression. Existing approaches for SSS image processing and speckle noise reduction rely heavily on complex network structures and fail to combine the benefits of deep learning and domain knowledge. To address the problem, RepDNet, a novel and effective despeckling convolutional neural network, is proposed. RepDNet introduces two re-parameterized blocks, the Pixel Smoothing Block (PSB) and the Edge Enhancement Block (EEB), preserving edge information while attenuating speckle noise. During training, PSB and EEB manifest as double-layered multi-branch structures, integrating first-order and second-order derivatives and smoothing functions. During inference, the branches are re-parameterized into a single 3 × 3 convolution, enabling efficient inference without sacrificing accuracy. RepDNet comprises three computational operations: 3 × 3 convolution, element-wise summation, and Rectified Linear Unit activation. Evaluations on benchmark datasets, a real SSS dataset, and data collected at Lake Mulan establish RepDNet as a well-balanced network that meets AUV computational constraints in terms of performance and latency.
- Published
- 2024
- Full Text
- View/download PDF
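Re-parameterized blocks like RepDNet's PSB/EEB are typically trained with BatchNorm inside each branch, and inference-time fusion starts by folding the BN statistics into the preceding convolution. Below is a minimal scalar sketch of that standard folding identity; the function name `fold_bn` is invented here, and none of the papers' own code is reproduced:

```python
import math

def fold_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold inference-mode BatchNorm into the preceding convolution.

    y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
    is an affine map of conv(x), so it collapses to a new weight/bias pair.
    A scalar `w` stands in for one output channel's kernel; every tap of a
    real kernel is scaled by the same factor.
    """
    scale = gamma / math.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta

# Check: conv followed by BN equals the fused conv for a few inputs.
w, b = 0.8, 0.2
gamma, beta, mean, var = 1.5, -0.3, 0.6, 4.0
fw, fb = fold_bn(w, b, gamma, beta, mean, var)
for x in (-2.0, 0.0, 1.0, 3.5):
    conv = w * x + b
    bn = gamma * (conv - mean) / math.sqrt(var + 1e-5) + beta
    assert abs((fw * x + fb) - bn) < 1e-9
```

Once each branch is a plain conv with a bias, the branches themselves can be summed into one kernel, which is the step sketched earlier in this listing.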
7. Image rectangling network based on reparameterized transformer and assisted learning
- Author
- Lichun Yang, Bin Tian, Tianyin Zhang, Jiu Yong, and Jianwu Dang
- Subjects
- Image rectangling, Single wrap, Re-parameterization, Assisted learning, Medicine, Science
- Abstract
Stitched images can offer a broader field of view, but their boundaries can be irregular and unpleasant. To address this issue, current methods for rectangling images start by distorting local grids multiple times to obtain rectangular images with regular boundaries. However, these methods can result in content distortion and missing boundary information. We have developed an image rectangling solution using the reparameterized transformer structure, focusing on a single distortion. Additionally, we have designed an assisted learning network to aid the image rectangling network. To improve the network's parallel efficiency, we have introduced a local thin-plate spline transform strategy to achieve efficient local deformation. Ultimately, the proposed method achieves state-of-the-art performance in stitched image rectangling with a low number of parameters while maintaining high content fidelity. The code is available at https://github.com/MelodYanglc/TransRectangling.
- Published
- 2024
- Full Text
- View/download PDF
8. Image rectangling network based on reparameterized transformer and assisted learning.
- Author
- Yang, Lichun, Tian, Bin, Zhang, Tianyin, Yong, Jiu, and Dang, Jianwu
- Subjects
- TEACHING aids
- Abstract
Stitched images can offer a broader field of view, but their boundaries can be irregular and unpleasant. To address this issue, current methods for rectangling images start by distorting local grids multiple times to obtain rectangular images with regular boundaries. However, these methods can result in content distortion and missing boundary information. We have developed an image rectangling solution using the reparameterized transformer structure, focusing on single distortion. Additionally, we have designed an assisted learning network to aid in the process of the image rectangling network. To improve the network's parallel efficiency, we have introduced a local thin-plate spline Transform strategy to achieve efficient local deformation. Ultimately, the proposed method achieves state-of-the-art performance in stitched image rectangling with a low number of parameters while maintaining high content fidelity. The code is available at https://github.com/MelodYanglc/TransRectangling. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. RepDDNet: a fast and accurate deforestation detection model with high-resolution remote sensing image
- Author
- Zhipan Wang, Zhongwu Wang, Dongmei Yan, Zewen Mo, Hua Zhang, and Qingling Zhang
- Subjects
- carbon neutral, deforestation detection, high-resolution remote sensing image, deep learning, re-parameterization, Mathematical geography. Cartography, GA1-1776
- Abstract
Forests are the largest carbon reservoir and carbon absorber on earth; thus, mapping forest cover change accurately is of great significance to achieving the global carbon neutrality goal. Accurate forest change information can be acquired with deep learning methods using high-resolution remote sensing images. However, deforestation detection based on deep learning over a large-scale region with high-resolution images requires huge computational resources, so there is an urgent need for a fast and accurate deforestation detection model. In this study, we propose an effective re-parameterization deforestation detection model, named RepDDNet. Unlike other existing models designed for deforestation detection, the main feature of RepDDNet is its decoupled design: the multi-branch structure used in the training stage can be converted into a plain structure in the inference stage, so computation efficiency at inference is significantly improved while accuracy remains unchanged. A large-scale experiment was carried out in Ankang city with 2-meter high-resolution remote sensing images (covering a total area of over 20,000 square kilometers), and the result indicates that computation efficiency can be improved by nearly 30% compared with the model without re-parameterization. Additionally, compared with other lightweight models, RepDDNet displays a favorable trade-off between accuracy and computation efficiency.
- Published
- 2023
- Full Text
- View/download PDF
10. Texture-Enhanced Framework by Differential Filter-Based Re-parameterization for Super-Resolution on PC/Mobile.
- Author
- Liu, Yongxu, Fu, Xiaoyan, Zhou, Lijuan, and Li, ChuanZhong
- Subjects
- PARAMETERIZATION, MOBILE apps, IMAGE reconstruction algorithms, HIGH resolution imaging, BLOCK designs
- Abstract
In this paper, we aim to improve the imaging quality of super-resolution (SR) without increasing the inference time, addressing the difficulty many existing methods have in trading off quality against inference time, and we design a deployment-friendly, lightweight model for mobile devices. Specifically, we propose a general RepDFSR framework to enhance the textures of SR images while avoiding additional inference-time overhead, which can be applied to existing SR networks. It incorporates an innovative convolutional block design, a loss function design, and the re-parameterizable technique. In RepDFSR, we propose a re-parameterizable texture-enhanced convolution based on differential filters, which extracts texture information more effectively and efficiently during training than regular convolution. Secondly, we propose a DF loss function to compel the model to super-resolve gradient mappings with high variance, thus reconstructing images with sharper textures. Moreover, we propose a TELNet network for mobile devices based on the RepDFSR framework to validate the effectiveness of RepDFSR and our thoughts on practical applications of SR on mobile devices. The experimental results demonstrate the successful integration of the RepDFSR framework with existing SR methods. Additionally, TELNet meets the hardware and quantization constraints of mobile devices, showcasing superior SR performance compared to classic and state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. TMS: Temporal multi-scale in time-delay neural network for speaker verification.
- Author
- Zhang, Ruiteng, Wei, Jianguo, Lu, Xugang, Lu, Wenhuan, Jin, Di, Zhang, Lin, Xu, Junhai, and Dang, Jianwu
- Subjects
- DELAY lines, COMPUTATIONAL complexity, VIDEO coding, AUTOMATIC speech recognition, TOPOLOGY
- Abstract
The speaker encoder is an important front-end module that explores discriminative speaker features for many speech applications requiring speaker information. Current speaker encoders aggregate multi-scale features from utterances using multi-branch network architectures. However, naively adding many branches through a fully convolutional operation cannot efficiently improve the capability to capture multi-scale features, due to the rapid increase in model parameters and computational complexity. Therefore, in current network architectures, only a few branches corresponding to a limited number of temporal scales are designed for capturing speaker features. To address this problem, this paper proposes an effective temporal multi-scale (TMS) model in which multi-scale branches can be designed efficiently in a speaker encoder with a negligible increase in computational cost. The TMS model is based on a time-delay neural network (TDNN), where the network architecture is separated into channel-modeling and temporal multi-branch modeling operators. In the TMS model, adding temporal multi-scale elements in the temporal multi-branch operator only slightly increases the model's parameters, thus saving more of the computational budget for adding branches with large temporal scales. After model training, we further develop a systemic re-parameterization method to convert the multi-branch network topology into a single-path topology to increase the inference speed. We conducted automatic speaker verification (ASV) experiments under in-domain (VoxCeleb) and out-of-domain (CNCeleb) conditions to investigate the proposed TMS model's performance. Experimental results show that the TMS-based model outperformed state-of-the-art ASV models (e.g., ECAPA-TDNN) and improved robustness. Moreover, the proposed model achieved a 29%–46% increase in inference speed compared to ECAPA-TDNN. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. A Fast and Robust Lane Detection via Online Re-Parameterization and Hybrid Attention.
- Author
- Xie, Tao, Yin, Mingfeng, Zhu, Xinyu, Sun, Jin, Meng, Cheng, and Bei, Shaoyi
- Subjects
- FEATURE extraction, PARAMETERIZATION, HYBRID systems, TRAFFIC safety
- Abstract
Lane detection is a vital component of intelligent driving systems, offering indispensable functionality to keep the vehicle within its designated lane, thereby reducing the risk of lane departure. However, the complexity of the traffic environment, coupled with the rapid movement of vehicles, creates many challenges for detection tasks. Current lane detection methods suffer from issues such as low feature extraction capability, poor real-time detection, and inadequate robustness. Addressing these issues, this paper proposes a lane detection algorithm that combines an online re-parameterization ResNet with a hybrid attention mechanism. Firstly, we replaced standard convolution with online re-parameterization convolution, simplifying the convolutional operations during the inference phase and subsequently reducing the detection time. In an effort to enhance the performance of the model, a hybrid attention module is incorporated to enhance the ability to focus on elongated targets. Finally, a row anchor lane detection method is introduced to analyze the existence and location of lane lines row by row in the image and output the predicted lane positions. The experimental outcomes illustrate that the model achieves F1 scores of 96.84% and 75.60% on the publicly available TuSimple and CULane lane datasets, respectively. Moreover, the inference speed reaches a notable 304 frames per second (FPS). The overall performance outperforms other detection models and fulfills the requirements of real-time responsiveness and robustness for lane detection tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. Construction and verification of machine vision algorithm model based on apple leaf disease images.
- Author
- Gao Ang, Ren Han, Song Yuepeng, Ren Longlong, Zhang Yue, and Han Xiang
- Subjects
- COMPUTER vision, LEAF anatomy, FRUIT quality, ALGORITHMS, APPLES, DEEP learning, FRUIT yield
- Abstract
Apple leaf diseases, if not controlled in time, affect fruit quality and yield, so intelligent detection of apple leaf diseases is especially important. This paper focuses on the apple leaf disease detection problem and proposes a machine vision algorithm model for fast apple leaf disease detection called LALNet (high-speed apple leaf network). First, an efficient stacked module for apple leaf detection, known as EALD (efficient apple leaf detection stacking module), was designed by utilizing a multi-branch structure and depth-separable modules. In the backbone network of LALNet, four layers of EALD modules were superimposed, and an SE (Squeeze-and-Excitation) module was added in the last layer of the model to improve its attention to important features. A structural re-parameterization technique was used to combine the outputs of the two layers of depth-separable convolutions in each branch during the inference phase to improve the model's operational speed. The results show that, on the test set, the detection accuracy of the model was 96.07%, the total precision was 95.79%, the total recall was 96.05%, the total F1 was 96.06%, the model size was 6.61 MB, and the detection speed for a single image was 6.68 ms. Therefore, the model ensures both high detection accuracy and fast execution speed, making it suitable for deployment on embedded devices. It supports precision spraying for the prevention and control of apple leaf disease. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. A High-Precision Plant Disease Detection Method Based on a Dynamic Pruning Gate Friendly to Low-Computing Platforms.
- Author
- Liu, Yufei, Liu, Jingxin, Cheng, Wei, Chen, Zizhi, Zhou, Junyu, Cheng, Haolan, and Lv, Chunli
- Subjects
- PLANT diseases, PATTERN recognition systems, CONVOLUTIONAL neural networks, COMPUTER vision, DATA augmentation, AGRICULTURAL technology
- Abstract
Simple Summary: Achieving automatic detection of plant diseases in real agricultural scenarios where low-computing-power platforms are deployed is a significant research topic. As fine-grained agriculture continues to expand and farming methods deepen, traditional manual detection methods demand high labor intensity. In recent years, the rapid advancement of computer vision has greatly enhanced computer-processing capabilities for pattern recognition problems across various industries. Consequently, a deep neural network based on an automatic pruning mechanism is proposed to enable high-accuracy plant disease detection even under limited computational power. Furthermore, an application is developed based on this method to expedite the translation of theoretical results into practical application scenarios. Timely and accurate detection of plant diseases is a crucial research topic. A dynamic-pruning-based method for automatic detection of plant diseases in low-computing situations is proposed. The main contributions of this research work include the following: (1) the collection of datasets for four crops with a total of 12 diseases over a three-year history; (2) the proposition of a re-parameterization method to improve the boosting accuracy of convolutional neural networks; (3) the introduction of a dynamic pruning gate to dynamically control the network structure, enabling operation on hardware platforms with widely varying computational power; (4) the implementation of the theoretical model and the development of the associated application. Experimental results demonstrate that the model can run on various computing platforms, including high-performance GPU platforms and low-power mobile terminal platforms, with an inference speed of 58 FPS, outperforming other mainstream models. In terms of model accuracy, subclasses with a low detection accuracy are enhanced through data augmentation and validated by ablation experiments. The model ultimately achieves an accuracy of 0.94. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. An Explainable Brain Tumor Detection Framework for MRI Analysis.
- Author
- Yan, Fei, Chen, Yunqing, Xia, Yiwen, Wang, Zhiliang, and Xiao, Ruoxiu
- Subjects
- BRAIN tumors, IMAGE analysis, MAGNETIC resonance imaging, DIAGNOSTIC imaging, TUMOR diagnosis
- Abstract
Explainability in medical image analysis plays an important role in the accurate diagnosis and treatment of tumors, as it can help medical professionals better understand image analysis results produced by deep models. This paper proposes an explainable brain tumor detection framework that can complete the tasks of segmentation, classification, and explainability. The re-parameterization method is applied to our classification network, and the effect of explainable heatmaps is improved by modifying the network architecture. Our classification model also has the advantage of post-hoc explainability. We used the BraTS-2018 dataset for training and verification. Experimental results show that our simplified framework has excellent performance and high calculation speed. Comparing the results of the segmentation and explainable neural networks helps researchers better understand the black-box method, increases trust in the deep model's output, and supports more accurate judgments in disease identification and diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Three-Dimensional Modeling of Heart Soft Tissue Motion.
- Author
- Liu, Mingzhe, Zhang, Xuan, Yang, Bo, Yin, Zhengtong, Liu, Shan, Yin, Lirong, and Zheng, Wenfeng
- Subjects
- THREE-dimensional modeling, DEFORMATION of surfaces, GEOMETRIC modeling, TISSUES, BIOLOGICAL models, HEART
- Abstract
The modeling and simulation of biological tissue is the core part of a virtual surgery system. In this study, geometric and physical methods related to soft tissue modeling were investigated. Regarding geometric modeling, the problem of repeated inverse calculations of control points in the Bezier method was solved via re-parameterization, which improved calculation speed. A base surface superposition method based on prior information was proposed so that the deformation model not only retains the advantages of the Bezier method but can also fit locally irregular deformation surfaces. Regarding physical modeling, the fitting ability of the particle spring model to the anisotropy of soft tissue was improved by optimizing the model's topological structure, and a dynamic elastic coefficient parameter gave the particle spring model a more extensive nonlinear fitting ability. Finally, secondary modeling of the elastic coefficient based on a virtual body spring enabled the model to fit the creep and relaxation characteristics of biological tissue according to the elongation of the virtual body spring. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
17. Crack instance segmentation using splittable transformer and position coordinates.
- Author
- Zhao, Yuanlin, Li, Wei, Ding, Jiangang, Wang, Yansong, Pei, Lili, and Tian, Aojia
- Subjects
- NECK
- Abstract
Vehicle and drone-mounted surveillance equipment face severe computational constraints, posing significant challenges for real-time, accurate crack segmentation. This paper introduces the crack location segmentation transformer (CLST) to address these issues. Images are processed to better resemble patches associated with cracks, enabling precise segmentation while significantly reducing the model's computational load. To handle varying segmentation challenges, a range of models with different computational demands has been designed to suit diverse needs. The most lightweight model can be deployed for real-time use on edge devices. A module in the neck of the pipeline encodes crack coordinate information, and end-to-end training has resulted in state-of-the-art performance across multiple datasets.
• Crack location segmentation transformer (CLST) is introduced.
• Focusing the field of view on the cracks reduces the amount of computation.
• Rep-Crackformer can select modes to cope with different scenarios.
• CLocation can encode crack spatial information with fewer parametric quantities.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. TRRHA: A two-stream re-parameterized refocusing hybrid attention network for synthesized view quality enhancement.
- Author
- Cao, Ziyi, Li, Tiansong, Wang, Guofen, Yin, Haibing, Wang, Hongkui, and Yu, Li
- Subjects
- SOURCE code, PYRAMIDS, VIDEO coding, VIDEOS
- Abstract
In multi-view video systems, the decoded texture video and its corresponding depth video are utilized to synthesize virtual views from different perspectives using depth-image-based rendering (DIBR) technology in 3D high efficiency video coding (3D-HEVC). However, the distortion of the compressed multi-view video and the disocclusion problem in DIBR can easily cause obvious holes and cracks in the synthesized views, degrading their visual quality. To address this problem, a novel two-stream re-parameterized refocusing hybrid attention (TRRHA) network is proposed to significantly improve the quality of synthesized views. Firstly, a global multi-scale residual information stream extracts global context information using a refocusing attention module (RAM), which detects contextual features and adaptively learns channel and spatial attention to selectively focus on different areas. Secondly, a local feature pyramid attention information stream fully captures complex local texture details using a re-parameterized refocusing attention module (RRAM). The RRAM can effectively capture multi-scale texture details with different receptive fields and adaptively adjust channel and spatial weights to adapt to information transformation at different sizes and levels. Finally, an efficient feature fusion module is proposed to effectively fuse the extracted global and local information streams. Extensive experimental results show that the proposed TRRHA achieves significantly better performance than state-of-the-art methods. The source code will be available at https://github.com/647-bei/TRRHA.
• A two-stream re-parameterized refocusing hybrid attention network (TRRHA) for SVQE.
• Design includes global multi-scale residual (GMR) and local feature pyramid attention (LFPA).
• Proposed re-parameterized refocusing attention module (RRAM) for local multi-scale texture.
• Captures multi-scale features with re-parameterized convolution (RC) branches.
• Efficient feature fusion module (EFFM) significantly enhances SVQE performance.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. RepDDNet: a fast and accurate deforestation detection model with high-resolution remote sensing image.
- Author
- Wang, Zhipan, Wang, Zhongwu, Yan, Dongmei, Mo, Zewen, Zhang, Hua, and Zhang, Qingling
- Subjects
- DEFORESTATION, CARBON offsetting, DEEP learning, FOREST mapping, REMOTE sensing
- Abstract
Forests are the largest carbon reservoir and carbon absorber on earth; thus, mapping forest cover change accurately is of great significance to achieving the global carbon neutrality goal. Accurate forest change information can be acquired with deep learning methods using high-resolution remote sensing images. However, deforestation detection based on deep learning over a large-scale region with high-resolution images requires huge computational resources, so there is an urgent need for a fast and accurate deforestation detection model. In this study, we propose an effective re-parameterization deforestation detection model, named RepDDNet. Unlike other existing models designed for deforestation detection, the main feature of RepDDNet is its decoupled design: the multi-branch structure used in the training stage can be converted into a plain structure in the inference stage, so computation efficiency at inference is significantly improved while accuracy remains unchanged. A large-scale experiment was carried out in Ankang city with 2-meter high-resolution remote sensing images (covering a total area of over 20,000 square kilometers), and the result indicates that computation efficiency can be improved by nearly 30% compared with the model without re-parameterization. Additionally, compared with other lightweight models, RepDDNet displays a favorable trade-off between accuracy and computation efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. A Novel Approach to Maritime Image Dehazing Based on a Large Kernel Encoder–Decoder Network with Multihead Pyramids.
- Author
-
Yang, Wei, Gao, Hongwei, Jiang, Yueqiu, and Zhang, Xin
- Subjects
SEA stories ,GENERATIVE adversarial networks ,PYRAMIDS ,DIGITAL twins ,CONVOLUTIONAL neural networks ,REMOTELY piloted vehicles - Abstract
With the continuous increase in human–robot integration, battlefield formation is experiencing a revolutionary change. Unmanned aerial vehicles, unmanned surface vessels, combat robots, and other new intelligent weapons and equipment will play an essential role on future battlefields by performing various tasks, including situational reconnaissance, monitoring, attack, and communication relay. Real-time monitoring of maritime scenes is the basis of battle-situation and threat estimation in naval battlegrounds. However, images of maritime scenes are usually accompanied by haze, clouds, and other disturbances, which blur the images and diminish the validity of their contents. This will have a severe adverse impact on many downstream tasks. A novel large kernel encoder–decoder network with multihead pyramids (LKEDN-MHP) is proposed to address some maritime image dehazing-related issues. The LKEDN-MHP adopts a multihead pyramid approach to form a hybrid representation space comprising reflection, shading, and semanteme. Unlike standard convolutional neural networks (CNNs), the LKEDN-MHP uses many kernels with a 7 × 7 or larger scale to extract features. To reduce the computational burden, depthwise (DW) convolution combined with re-parameterization is adopted to form a hybrid model stacked by a large number of different receptive fields, further enhancing the hybrid receptive fields. To restore the natural hazy maritime scenes as much as possible, we apply digital twin technology to build a simulation system in virtual space. The final experimental results based on the evaluation metrics of the peak signal-to-noise ratio, structural similarity index measure, Jaccard index, and Dice coefficient show that our LKEDN-MHP significantly enhances dehazing and real-time performance compared with those of state-of-the-art approaches based on vision transformers (ViTs) and generative adversarial networks (GANs). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
21. A High-Precision Plant Disease Detection Method Based on a Dynamic Pruning Gate Friendly to Low-Computing Platforms
- Author
-
Yufei Liu, Jingxin Liu, Wei Cheng, Zizhi Chen, Junyu Zhou, Haolan Cheng, and Chunli Lv
- Subjects
dynamic pruning ,low-computing-platform friendly ,re-parameterization ,deep learning ,Botany ,QK1-989 - Abstract
Timely and accurate detection of plant diseases is a crucial research topic. A dynamic-pruning-based method for automatic detection of plant diseases in low-computing situations is proposed. The main contributions of this research work include the following: (1) the collection of datasets for four crops, covering a total of 12 diseases over a three-year period; (2) a re-parameterization method that improves the accuracy of convolutional neural networks; (3) a dynamic pruning gate that dynamically controls the network structure, enabling operation on hardware platforms with widely varying computational power; (4) an implementation of the theoretical model from this paper and the development of the associated application. Experimental results demonstrate that the model can run on various computing platforms, including high-performance GPU platforms and low-power mobile terminal platforms, with an inference speed of 58 FPS, outperforming other mainstream models. In terms of model accuracy, subclasses with low detection accuracy are enhanced through data augmentation and validated by ablation experiments. The model ultimately achieves an accuracy of 0.94.
- Published
- 2023
- Full Text
- View/download PDF
22. A Deep Learning Quantification Algorithm for HER2 Scoring of Gastric Cancer.
- Author
-
Han, Zixin, Lan, Junlin, Wang, Tao, Hu, Ziwei, Huang, Yuxiu, Deng, Yanglin, Zhang, Hejun, Wang, Jianchao, Chen, Musheng, Jiang, Haiyan, Lee, Ren-Guey, Gao, Qinquan, Du, Ming, Tong, Tong, and Chen, Gang
- Subjects
STOMACH cancer ,MACHINE learning ,DEEP learning ,EPIDERMAL growth factor receptors ,SIGNAL convolution ,COMPUTER-aided diagnosis - Abstract
Gastric cancer is the third most common cause of cancer-related death in the world. Human epidermal growth factor receptor 2 (HER2)-positive is an important subtype of gastric cancer, and HER2 status provides significant diagnostic information for gastric cancer pathologists. However, pathologists usually use a semi-quantitative assessment method to assign HER2 scores for gastric cancer by repeatedly comparing hematoxylin and eosin (H&E) whole slide images (WSIs) with their HER2 immunohistochemical WSIs one by one under the microscope. It is a repetitive, tedious, and highly subjective process. Additionally, WSIs have billions of pixels per image, which poses computational challenges to Computer-Aided Diagnosis (CAD) systems. This study proposes a deep learning algorithm for quantitative HER2 evaluation of gastric cancer. Unlike other studies that use convolutional neural networks for extracting feature maps or pre-processing on WSIs, we propose a novel automatic HER2 scoring framework. To accelerate computation, we use a re-parameterization scheme to separate the training model from the deployment model, which significantly speeds up the inference process. To the best of our knowledge, this is the first study to provide a deep learning quantification algorithm for HER2 scoring of gastric cancer to assist the pathologist's diagnosis. Experimental results demonstrate the effectiveness of the proposed method, with an accuracy of 0.94 for HER2 score prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
23. A Deep Learning Quantification Algorithm for HER2 Scoring of Gastric Cancer
- Author
-
Zixin Han, Junlin Lan, Tao Wang, Ziwei Hu, Yuxiu Huang, Yanglin Deng, Hejun Zhang, Jianchao Wang, Musheng Chen, Haiyan Jiang, Ren-Guey Lee, Qinquan Gao, Ming Du, Tong Tong, and Gang Chen
- Subjects
CNN ,deep learning ,gastric cancer ,HER2 score prediction ,re-parameterization ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Gastric cancer is the third most common cause of cancer-related death in the world. Human epidermal growth factor receptor 2 (HER2)-positive is an important subtype of gastric cancer, and HER2 status provides significant diagnostic information for gastric cancer pathologists. However, pathologists usually use a semi-quantitative assessment method to assign HER2 scores for gastric cancer by repeatedly comparing hematoxylin and eosin (H&E) whole slide images (WSIs) with their HER2 immunohistochemical WSIs one by one under the microscope. It is a repetitive, tedious, and highly subjective process. Additionally, WSIs have billions of pixels per image, which poses computational challenges to Computer-Aided Diagnosis (CAD) systems. This study proposes a deep learning algorithm for quantitative HER2 evaluation of gastric cancer. Unlike other studies that use convolutional neural networks for extracting feature maps or pre-processing on WSIs, we propose a novel automatic HER2 scoring framework. To accelerate computation, we use a re-parameterization scheme to separate the training model from the deployment model, which significantly speeds up the inference process. To the best of our knowledge, this is the first study to provide a deep learning quantification algorithm for HER2 scoring of gastric cancer to assist the pathologist's diagnosis. Experimental results demonstrate the effectiveness of the proposed method, with an accuracy of 0.94 for HER2 score prediction.
- Published
- 2022
- Full Text
- View/download PDF
24. A Saliency Prediction Model Based on Re-Parameterization and Channel Attention Mechanism.
- Author
-
Yan, Fei, Wang, Zhiliang, Qi, Siyu, and Xiao, Ruoxiu
- Subjects
PREDICTION models ,MACHINE learning - Abstract
Deep saliency models can effectively imitate the attention mechanism of human vision, and they perform considerably better than classical models that rely on handcrafted features. However, deep models also require higher-level information, such as context or emotional content, to further approach human performance. Therefore, this study proposes a multilevel saliency prediction network that aims to use a combination of spatial and channel information to find possible high-level features, further improving the performance of a saliency model. Firstly, we use a VGG style network with an identity block as the primary network architecture. With the help of re-parameterization, we can obtain rich features similar to multiscale networks and effectively reduce computational cost. Secondly, a subnetwork with a channel attention mechanism is designed to find potential saliency regions and possible high-level semantic information in an image. Finally, image spatial features and a channel enhancement vector are combined after quantization to improve the overall performance of the model. Compared with classical models and other deep models, our model exhibits superior overall performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
25. An Explainable Brain Tumor Detection Framework for MRI Analysis
- Author
-
Fei Yan, Yunqing Chen, Yiwen Xia, Zhiliang Wang, and Ruoxiu Xiao
- Subjects
explainable AI ,brain tumor detection ,deep learning ,MRI ,re-parameterization ,Technology ,Engineering (General). Civil engineering (General) ,TA1-2040 ,Biology (General) ,QH301-705.5 ,Physics ,QC1-999 ,Chemistry ,QD1-999 - Abstract
Explainability in medical image analysis plays an important role in the accurate diagnosis and treatment of tumors, as it can help medical professionals better understand analysis results produced by deep models. This paper proposes an explainable brain tumor detection framework that can complete the tasks of segmentation, classification, and explanation. The re-parameterization method is applied to our classification network, and the quality of the explainable heatmaps is improved by modifying the network architecture. Our classification model also has the advantage of post-hoc explainability. We used the BraTS-2018 dataset for training and verification. Experimental results show that our simplified framework achieves excellent performance at high computation speed. Comparing the results of the segmentation and explainable neural networks helps researchers better understand the process of the black-box method, increases trust in the deep model's output, and supports more accurate judgments in disease identification and diagnosis.
- Published
- 2023
- Full Text
- View/download PDF
26. Three-Dimensional Modeling of Heart Soft Tissue Motion
- Author
-
Mingzhe Liu, Xuan Zhang, Bo Yang, Zhengtong Yin, Shan Liu, Lirong Yin, and Wenfeng Zheng
- Subjects
soft tissue modeling ,geometric modeling ,re-parameterization ,Bezier method ,mass-spring model ,virtual body spring ,Technology ,Engineering (General). Civil engineering (General) ,TA1-2040 ,Biology (General) ,QH301-705.5 ,Physics ,QC1-999 ,Chemistry ,QD1-999 - Abstract
The modeling and simulation of biological tissue is the core part of a virtual surgery system. In this study, the geometric and physical methods related to soft tissue modeling were investigated. Regarding geometric modeling, the problem of repeatedly inverting for control points in the Bezier method was solved via re-parameterization, which improved the calculation speed. A base-surface superposition method based on prior information was proposed so that the deformation model retains the advantages of the Bezier method while also being able to fit locally irregular deformation surfaces. Regarding physical modeling, the ability of the mass-spring model to fit the anisotropy of soft tissue was improved by optimizing its topological structure, and a dynamic elastic coefficient gave the model a more extensive nonlinear fitting ability. Finally, secondary modeling of the elastic coefficient based on a virtual body spring enabled the model to fit the creep and relaxation characteristics of biological tissue according to the elongation of the virtual body spring.
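The Bezier re-parameterization idea can be illustrated with a generic de Casteljau subdivision sketch (this is a standard textbook construction, not the paper's base-surface superposition method; the control points below are made up). Computing the control points of the curve restricted to a sub-interval [a, b] once lets later evaluations reuse them directly, with no repeated inverse calculations:

```python
import numpy as np

def de_casteljau(pts, t):
    """Return all de Casteljau levels; the last level holds the curve point."""
    levels = [np.asarray(pts, dtype=float)]
    while len(levels[-1]) > 1:
        p = levels[-1]
        levels.append((1 - t) * p[:-1] + t * p[1:])
    return levels

def split(pts, t):
    """Control points of the curve restricted to [0, t] and [t, 1]."""
    levels = de_casteljau(pts, t)
    left = np.array([lv[0] for lv in levels])          # first point of each level
    right = np.array([lv[-1] for lv in levels][::-1])  # last point of each level
    return left, right

def reparam(pts, a, b):
    """Control points of the same curve re-parameterized so u in [0,1] covers t in [a,b]."""
    _, right = split(pts, a)                  # keep the part on [a, 1]
    sub, _ = split(right, (b - a) / (1 - a))  # then keep [a, b] of the original
    return sub

pts = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 3.0], [4.0, 0.0]])  # a cubic Bezier
sub = reparam(pts, 0.2, 0.7)

# The re-parameterized control points reproduce the original curve segment:
u = 0.4
t = 0.2 + u * (0.7 - 0.2)
print(np.allclose(de_casteljau(sub, u)[-1][0], de_casteljau(pts, t)[-1][0]))  # True
```

Because subdivision yields exact control points for the sub-curve, the re-parameterized segment is the same polynomial curve, just traversed over [0, 1].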
- Published
- 2023
- Full Text
- View/download PDF
27. Rep-MCA-former: An efficient multi-scale convolution attention encoder for text-independent speaker verification.
- Author
-
Liu, Xiaohu, Chen, Defu, Wang, Xianbao, Xiang, Sheng, and Zhou, Xuwen
- Subjects
- *
AUTOMATIC speech recognition , *MULTI-factor authentication , *DATA extraction , *ARTIFICIAL neural networks , *NATURAL language processing , *PARAMETERIZATION - Abstract
In many speaker verification tasks, the quality of speaker embeddings is an important factor affecting speaker verification systems. Advanced speaker embedding extraction networks aim to capture richer speaker features through multi-branch network architectures. Recently, speaker verification systems based on transformer encoders have received much attention and achieved many satisfactory results, because transformer encoders can efficiently extract the global features of the speaker (e.g., MFA-Conformer). However, the large number of model parameters and high computational latency are common problems faced by the above approaches, which make them difficult to apply to resource-constrained edge terminals. To address this issue, this paper proposes an effective, lightweight transformer model (MCA-former) with multi-scale convolutional self-attention (MCA), which can perform multi-scale modeling and channel modeling in the temporal direction of the input at low computational cost. In addition, for the inference phase of the model, we develop a systematic re-parameterization method that converts the multi-branch network structure into a single-path topology, effectively improving inference speed. We investigate the performance of the MCA-former for speaker verification on the VoxCeleb1 test set. The results show that the MCA-based transformer model is advantageous in terms of parameter count and inference efficiency. By applying re-parameterization, the inference speed of the model is increased by about 30%, and memory consumption is significantly reduced. • Designing a lightweight multi-scale convolutional self-attention module. • An efficient transformer encoder for speaker verification. • Using the re-parameterization method improves the model's inference efficiency. [ABSTRACT FROM AUTHOR]
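The multi-branch-to-single-path conversion can be sketched in one dimension, matching the temporal direction the abstract describes (an illustrative example of merging parallel multi-scale kernels, not the MCA-former's actual module; sizes and values are assumptions):

```python
import numpy as np

def conv1d(x, k):
    """'Same' 1D cross-correlation with zero padding."""
    r = len(k) // 2
    xp = np.pad(x, (r, r))
    return np.array([np.dot(xp[i:i + len(k)], k) for i in range(len(x))])

rng = np.random.default_rng(1)
x = rng.normal(size=32)   # a toy temporal feature sequence
k5 = rng.normal(size=5)   # large-scale branch
k3 = rng.normal(size=3)   # small-scale branch

# Training: parallel multi-scale branches, summed
y_train = conv1d(x, k5) + conv1d(x, k3)

# Inference: zero-pad the 3-tap kernel to 5 taps and add -> a single path
k_merged = k5 + np.pad(k3, (1, 1))
y_infer = conv1d(x, k_merged)
print(np.allclose(y_train, y_infer))  # True
```

Only one kernel is stored and applied at inference, which is where the reported speed and memory gains of this kind of re-parameterization come from.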
- Published
- 2024
- Full Text
- View/download PDF
28. A re-formulation of generalized linear mixed models to fit family data in genetic association studies
- Author
-
Tao Wang, Peng He, Kwang Woo Ahn, Xujing Wang, Soumitra Ghosh, and Purushottam Laud
- Subjects
genetic correlation ,family data ,genetic variance components ,Cholesky decomposition ,Re-parameterization ,Bayesian methods ,Genetics ,QH426-470 - Abstract
The generalized linear mixed model (GLMM) is a useful tool for modeling genetic correlation among family data in genetic association studies. However, when dealing with families of varied sizes and diverse genetic relatedness, the GLMM has a special correlation structure which often makes it difficult to specify using standard statistical software. In this study, we propose a Cholesky decomposition based re-formulation of the GLMM so that the re-formulated GLMM can be specified conveniently in SAS via `proc nlmixed' and `proc glimmix', or in OpenBUGS via the R package BRugs. The performance of these procedures in fitting the re-formulated GLMM is examined through simulation studies. We also apply the re-formulated GLMM to analyze a real data set from the Type 1 Diabetes Genetics Consortium (T1DGC).
- Published
- 2015
- Full Text
- View/download PDF
29. A Novel Approach to Maritime Image Dehazing Based on a Large Kernel Encoder–Decoder Network with Multihead Pyramids
- Author
-
Wei Yang, Hongwei Gao, Yueqiu Jiang, and Xin Zhang
- Subjects
image dehazing ,large kernel encoder–decoder network ,multihead pyramids ,re-parameterization ,digital twin ,Computer Networks and Communications ,Hardware and Architecture ,Control and Systems Engineering ,Signal Processing ,Electrical and Electronic Engineering - Abstract
With the continuous increase in human–robot integration, battlefield formation is experiencing a revolutionary change. Unmanned aerial vehicles, unmanned surface vessels, combat robots, and other new intelligent weapons and equipment will play an essential role on future battlefields by performing various tasks, including situational reconnaissance, monitoring, attack, and communication relay. Real-time monitoring of maritime scenes is the basis of battle-situation and threat estimation in naval battlegrounds. However, images of maritime scenes are usually accompanied by haze, clouds, and other disturbances, which blur the images and diminish the validity of their contents. This will have a severe adverse impact on many downstream tasks. A novel large kernel encoder–decoder network with multihead pyramids (LKEDN-MHP) is proposed to address some maritime image dehazing-related issues. The LKEDN-MHP adopts a multihead pyramid approach to form a hybrid representation space comprising reflection, shading, and semanteme. Unlike standard convolutional neural networks (CNNs), the LKEDN-MHP uses many kernels with a 7 × 7 or larger scale to extract features. To reduce the computational burden, depthwise (DW) convolution combined with re-parameterization is adopted to form a hybrid model stacked by a large number of different receptive fields, further enhancing the hybrid receptive fields. To restore the natural hazy maritime scenes as much as possible, we apply digital twin technology to build a simulation system in virtual space. The final experimental results based on the evaluation metrics of the peak signal-to-noise ratio, structural similarity index measure, Jaccard index, and Dice coefficient show that our LKEDN-MHP significantly enhances dehazing and real-time performance compared with those of state-of-the-art approaches based on vision transformers (ViTs) and generative adversarial networks (GANs).
- Published
- 2022
- Full Text
- View/download PDF
30. Mapped B-spline basis functions for shape design and isogeometric analysis over an arbitrary parameterization.
- Author
-
Yuan, Xiaoyun and Ma, Weiyin
- Subjects
- *
SPLINE theory , *ISOGEOMETRIC analysis , *ARBITRARY constants , *PARAMETERIZATION , *QUADRILATERALS , *TOPOLOGICAL spaces - Abstract
Highlights: [•] Presents a novel method for both shape design and isogeometric analysis using mapped basis functions. [•] The space spanned by mapped B-spline basis functions is an extension of uniform B-splines over an arbitrary parameterization. [•] The continuity of the resulting surfaces can be arbitrary higher order, including at extraordinary points. [•] The proposed method can be further extended to other basis functions as well as to non-quadrilateral meshes. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
31. A Saliency Prediction Model Based on Re-Parameterization and Channel Attention Mechanism
- Author
-
Fei Yan, Zhiliang Wang, Siyu Qi, and Ruoxiu Xiao
- Subjects
Computer Networks and Communications ,Hardware and Architecture ,Control and Systems Engineering ,Signal Processing ,Electrical and Electronic Engineering ,visual attention ,visual saliency ,saliency prediction ,deep learning ,re-parameterization - Abstract
Deep saliency models can effectively imitate the attention mechanism of human vision, and they perform considerably better than classical models that rely on handcrafted features. However, deep models also require higher-level information, such as context or emotional content, to further approach human performance. Therefore, this study proposes a multilevel saliency prediction network that aims to use a combination of spatial and channel information to find possible high-level features, further improving the performance of a saliency model. Firstly, we use a VGG style network with an identity block as the primary network architecture. With the help of re-parameterization, we can obtain rich features similar to multiscale networks and effectively reduce computational cost. Secondly, a subnetwork with a channel attention mechanism is designed to find potential saliency regions and possible high-level semantic information in an image. Finally, image spatial features and a channel enhancement vector are combined after quantization to improve the overall performance of the model. Compared with classical models and other deep models, our model exhibits superior overall performance.
- Published
- 2022
- Full Text
- View/download PDF
32. Re-parameterization reduces irreducible geometric constraint systems
- Author
-
Hichem Barki, Dominique Michelucci, Lincong Fang, and Sebti Foufou (CSE Department, College of Engineering, Qatar University, Doha, Qatar; Laboratoire Electronique, Informatique et Image (Le2i), Université de Bourgogne; School of Information Technology, Zhejiang University of Finance & Economics, Hangzhou, China)
- Subjects
Geometric constraint solving ,Geometric modeling with constraints ,Re-parameterization ,Reduction ,Decomposition ,Newton's method ,Homotopy ,Linear algebra ,Numerical analysis ,Computer Graphics and Computer-Aided Design ,Computer Science Applications ,Industrial and Manufacturing Engineering - Abstract
You recklessly told your boss that solving a non-linear system of size n (n unknowns and n equations) requires a time proportional to n, as you were not very attentive during algorithmic complexity lectures. So now, you have only one night to solve a problem of big size (e.g., 1000 equations/unknowns), otherwise you will be fired in the next morning. The system is well-constrained and structurally irreducible: it does not contain any strictly smaller well-constrained subsystems. Its size is big, so the Newton-Raphson method is too slow and impractical. The most frustrating thing is that if you knew the values of a small number k ≪ n of key unknowns, then the system would be reducible to small square subsystems and easily solved. You wonder if it would be possible to exploit this reducibility, even without knowing the values of these few key unknowns. This article shows that it is indeed possible. This is done at the lowest level, at the linear algebra routines level, so that numerous solvers (Newton-Raphson, homotopy, and also p-adic methods relying on Hensel lifting) widely involved in geometric constraint solving and CAD applications can benefit from this decomposition with minor modifications. For instance, with k ≪ n key unknowns, the cost of a Newton iteration becomes O(kn²) instead of O(n³). Several experiments showing a significant performance gain of our re-parameterization technique are reported in this paper to consolidate our theoretical findings and to motivate its practical usage for bigger systems. Highlights: [•] A new re-parameterization for reducing and unlocking irreducible geometric systems. [•] No need for the values of the key unknowns and no limit on their number. [•] Enabling the usage of decomposition methods on irreducible re-parameterized systems. [•] Usage at the lowest linear algebra level and significant performance improvement. [•] Benefits for numerous solvers (Newton-Raphson, homotopy, p-adic methods, etc.)
- Published
- 2016
- Full Text
- View/download PDF
33. A re-formulation of generalized linear mixed models to fit family data in genetic association studies
- Author
-
Kwang Woo Ahn, Xujing Wang, Tao Wang, Soumitra Ghosh, Purushottam W. Laud, and Peng He
- Subjects
genetic correlation ,family data ,genetic variance components ,random genetic effects ,generalized linear mixed models (GLMM) ,Cholesky decomposition ,re-parameterization ,Bayesian methods ,genetic association ,Genetics ,Genetics (clinical) ,Molecular Medicine ,lcsh:Genetics ,lcsh:QH426-470 - Abstract
The generalized linear mixed model (GLMM) is a useful tool for modeling genetic correlation among family data in genetic association studies. However, when dealing with families of varied sizes and diverse genetic relatedness, the GLMM has a special correlation structure which often makes it difficult to specify using standard statistical software. In this study, we propose a Cholesky decomposition based re-formulation of the GLMM so that the re-formulated GLMM can be specified conveniently in SAS via `proc nlmixed' and `proc glimmix', or in OpenBUGS via the R package BRugs. The performance of these procedures in fitting the re-formulated GLMM is examined through simulation studies. We also apply the re-formulated GLMM to analyze a real data set from the Type 1 Diabetes Genetics Consortium (T1DGC).
- Published
- 2014
34. Multi-scale reservoir data integration and uncertainty quantification
- Author
-
Gentilhomme, Théophile (GeoRessources, Université de Lorraine, CNRS, CREGU); thesis advisors: Guillaume Caumon and Jean-Jacques Royer, Université de Lorraine
- Subjects
Inverse problems ,Optimization ,Earth Sciences ,Reservoir characterization ,Seismic ,Lifting scheme ,Production data ,Multiple-point geostatistics ,History matching ,Multi-scale ,Multi-scale analysis ,Second-generation wavelets ,Re-parameterization ,Oil reservoirs - Abstract
In this work, we propose to follow a multi-scale approach for spatial reservoir property characterization, integrating direct data (well observations) and indirect data (seismic and production history) at different resolutions. Two decompositions are used to parameterize the problem: wavelets and Gaussian pyramids. Using these parameterizations, we show the advantages of the multi-scale approach on two minimization-based uncertainty quantification problems. The first concerns the simulation of property fields with a multiple-point geostatistics algorithm; the multi-scale approach based on Gaussian pyramids improves the quality of the output realizations, the match of the conditioning data, and the computational time compared to the standard approach. The second concerns the preservation of the prior models during the assimilation of the production history. To re-parameterize the problem, we develop a new 3D grid-adaptive wavelet transform, which can be used on complex reservoir grids containing dead or zero-volume cells. An ensemble-based optimization method is integrated into the multi-scale history-matching approach, so that an estimate of the uncertainty is obtained at the end of the optimization. This method is applied to several examples, where we observe that the final realizations better preserve the spatial distribution of the prior models and are less noisy than realizations updated using a standard approach, while matching the production data equally well.
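The lifting scheme named in this record can be illustrated with its simplest instance, a one-level Haar transform (a generic sketch, not the thesis's 3D grid-adaptive transform): a predict step forms detail coefficients from neighboring samples, an update step forms the coarse scale, and running the two steps backwards gives exact reconstruction.

```python
import numpy as np

def haar_lift(x):
    """One level of the Haar wavelet via the lifting scheme."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even        # predict: detail = odd minus its prediction from even
    s = even + d / 2      # update: coarse scale = pairwise mean
    return s, d

def haar_unlift(s, d):
    """Invert by undoing the update step, then the predict step."""
    even = s - d / 2
    odd = even + d
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([4.0, 6.0, 1.0, 3.0, 8.0, 8.0, 2.0, 0.0])
s, d = haar_lift(x)
print(s)                                  # [5. 2. 8. 1.]  (pairwise means)
print(np.allclose(haar_unlift(s, d), x))  # True: perfect reconstruction
```

Lifting operates fully in place with integer-friendly steps, which is what makes it adaptable to irregular grids such as reservoir grids with dead cells.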
- Published
- 2014
35. Estimating m-regimes STAR-GARCH model using QMLE with parameter transformation
- Author
-
Chan, Felix, Theoharakis, Billy, Chan, Felix, and Theoharakis, Billy
- Abstract
It is well known in the literature that obtaining parameter estimates for the Smooth Transition Autoregressive-Generalized Autoregressive Conditional Heteroskedasticity (STAR-GARCH) model can be problematic due to computational difficulties: conventional optimization algorithms do not seem to perform well in locating the global optimum of the associated likelihood function, which makes the Quasi-Maximum Likelihood Estimator (QMLE) difficult to obtain for STAR-GARCH models in practice. Curiously, there has been very little research investigating the cause of these numerical difficulties. The aim of this paper is to investigate their nature using Monte Carlo simulation. By examining the surface of the log-likelihood function based on simulated data, the results provide several insights into the difficulties of obtaining the QMLE for STAR-GARCH models. Based on the findings, the paper also proposes a simple transformation of the parameters to alleviate these difficulties, and Monte Carlo simulation results show promising signs for the proposed transform. The asymptotic and robust variance-covariance matrices of the original parameter estimates are derived as functions of the transformed parameter estimates, which greatly facilitates inference on the original parameters.
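A parameter transformation of this general kind can be sketched as follows. This is an illustrative textbook mapping of GARCH(1,1)-style constraints to an unconstrained space, not necessarily the authors' exact transform, and the parameter values and variance are made up: positivity and stationarity constraints are absorbed by log and additive-log-ratio maps, and the delta method carries the variance of a transformed estimate back to the original scale.

```python
import numpy as np

def to_unconstrained(omega, alpha, beta):
    """Map omega > 0 and alpha, beta > 0 with alpha + beta < 1
    to an unconstrained vector via log and additive log-ratio transforms."""
    rest = 1.0 - alpha - beta
    return np.array([np.log(omega), np.log(alpha / rest), np.log(beta / rest)])

def to_constrained(t):
    """Inverse map: any real vector t yields admissible parameters."""
    e1, e2 = np.exp(t[1]), np.exp(t[2])
    denom = 1.0 + e1 + e2
    return np.exp(t[0]), e1 / denom, e2 / denom

t = to_unconstrained(0.1, 0.05, 0.9)
omega, alpha, beta = to_constrained(t)
print(np.allclose([omega, alpha, beta], [0.1, 0.05, 0.9]))  # True: exact round trip

# Delta method back to the original scale, e.g. for omega = exp(t0):
# Var(omega_hat) ~= (d omega / d t0)^2 * Var(t0_hat) = omega^2 * Var(t0_hat)
var_t0 = 0.04                # hypothetical variance of the transformed estimate
print(omega**2 * var_t0)     # approximate variance of omega_hat on the original scale
```

The optimizer then searches an unconstrained space where the likelihood surface is often better behaved, while every candidate maps back to an admissible parameter set.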
- Published
- 2011