1,787 results for "Ren, Hongliang"
Search Results
52. Motor-free telerobotic endomicroscopy for steerable and programmable imaging in complex curved and localized areas
- Author
-
Yuan, Sishen, Xu, Chao, Cui, Beilei, Zhang, Tinghua, Liang, Baijia, Yuan, Wu, and Ren, Hongliang
- Published
- 2024
- Full Text
- View/download PDF
53. Bioinspired soft robotics: How do we learn from creatures?
- Author
-
Yang, Yang, He, Zhiguo, Jiao, Pengcheng, and Ren, Hongliang
- Subjects
Computer Science - Robotics
- Abstract
Soft robotics has opened a unique path to flexibility and environmental adaptability, learning from nature and reproducing biological behaviors. Nature implies answers for how to apply robots to real life. To find out how we learn from creatures to design and apply soft robots, in this review we propose a classification method that summarizes soft robots according to different functions of biological systems: self-growing, self-healing, self-responsive, and self-circulatory. This bio-function-based classification logic is presented to explain why we learn from creatures. State-of-the-art technologies, characteristics, pros, cons, challenges, and potential applications of these categories are analyzed to illustrate what we have learned from creatures. By intersecting these categories, existing and potential bio-inspired applications are surveyed and projected to finally answer the question of how we learn from creatures., Comment: 15 pages, 6 figures, journal article, IEEE Reviews in Biomedical Engineering (2022), Early Access
- Published
- 2023
- Full Text
- View/download PDF
54. Paced-Curriculum Distillation with Prediction and Label Uncertainty for Image Segmentation
- Author
-
Islam, Mobarakol, Seenivasan, Lalithkumar, Sharan, S. P., Viekash, V. K., Gupta, Bhavesh, Glocker, Ben, and Ren, Hongliang
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Purpose: In curriculum learning, the idea is to train on easier samples first and gradually increase the difficulty, while in self-paced learning, a pacing function defines the speed at which training progresses. While both methods rely heavily on the ability to score the difficulty of data samples, an optimal scoring function is still under exploration. Methodology: Distillation is a knowledge transfer approach in which a teacher network guides a student network by feeding it a sequence of random samples. We argue that guiding student networks with an efficient curriculum strategy can improve model generalization and robustness. For this purpose, we design uncertainty-based paced curriculum learning in self-distillation for medical image segmentation. We fuse prediction uncertainty and annotation boundary uncertainty to develop a novel paced-curriculum distillation (P-CD). We utilize the teacher model to obtain prediction uncertainty and spatially varying label smoothing with a Gaussian kernel to generate segmentation boundary uncertainty from the annotation. We also investigate the robustness of our method by applying various types and severities of image perturbation and corruption. Results: The proposed technique is validated on two medical datasets, breast ultrasound image segmentation and robot-assisted surgical scene segmentation, and achieves significantly better performance in terms of segmentation quality and robustness. Conclusion: P-CD improves performance and obtains better generalization and robustness under dataset shift. While curriculum learning requires extensive tuning of the pacing function's hyper-parameters, the resulting performance improvement outweighs this limitation., Comment: 15 pages
- Published
- 2023
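The spatially varying label smoothing (SVLS) step described in the abstract above can be sketched as follows: convolving the one-hot annotation with a small Gaussian kernel keeps interior pixels confident while softening the targets near segmentation boundaries. This is a minimal illustrative sketch, not the paper's implementation; the 3×3 kernel size, the sigma value, and all function names are assumptions.

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """Normalized 2D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def svls_targets(mask, num_classes, sigma=1.0):
    """Soften a hard segmentation mask: one-hot encode it, then blur each
    class channel with a Gaussian kernel (zero padding at the border)."""
    k = gaussian_kernel(3, sigma)
    h, w = mask.shape
    onehot = (np.arange(num_classes)[:, None, None] == mask[None]).astype(float)
    padded = np.pad(onehot, ((0, 0), (1, 1), (1, 1)))
    soft = np.zeros((num_classes, h, w))
    for c in range(num_classes):
        for i in range(h):
            for j in range(w):
                soft[c, i, j] = (padded[c, i:i + 3, j:j + 3] * k).sum()
    return soft

# Toy mask with a vertical boundary between class 0 and class 1:
mask = np.array([[0, 0, 1, 1]] * 4)
soft = svls_targets(mask, num_classes=2)
```

Pixels adjacent to the class boundary receive mixed targets (which, away from the image border, still sum to one), while pixels well inside a region keep near-hard labels.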
55. Confidence-Aware Paced-Curriculum Learning by Label Smoothing for Surgical Scene Understanding
- Author
-
Xu, Mengya, Islam, Mobarakol, Glocker, Ben, and Ren, Hongliang
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Curriculum learning and self-paced learning are training strategies that gradually feed samples from easy to more complex. They have captivated increasing attention due to their excellent performance in robotic vision. Most recent works focus on designing curricula based on the difficulty levels of input samples or on smoothing the feature maps. However, smoothing labels to control the learning utility in a curriculum manner is still unexplored. In this work, we design a paced curriculum by label smoothing (P-CBLS) using paced learning with uniform label smoothing (ULS) for classification tasks, and fuse uniform and spatially varying label smoothing (SVLS) for semantic segmentation tasks in a curriculum manner. In ULS and SVLS, a bigger smoothing factor enforces a heavier smoothing penalty on the true label and limits the information that can be learned from it. We therefore design the curriculum by label smoothing (CBLS): we set a bigger smoothing value at the beginning of training and gradually decrease it to zero, controlling the model's learning utility from lower to higher. We also design a confidence-aware pacing function and combine it with CBLS to investigate the benefits of various curricula. The proposed techniques are validated on four robotic surgery datasets covering multi-class classification, multi-label classification, captioning, and segmentation tasks. We also investigate the robustness of our method by corrupting validation data at different severity levels. Our extensive analysis shows that the proposed method improves prediction accuracy and robustness., Comment: 12 pages
- Published
- 2022
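The curriculum-by-label-smoothing idea described in the abstract above, a large uniform smoothing factor at the start of training that decays to zero, can be sketched as a small schedule. This is a hypothetical minimal example: the linear decay rule, the starting factor of 0.3, and all names are illustrative assumptions, not taken from the paper.

```python
def smoothing_factor(epoch, total_epochs, eps_start=0.3):
    """Linearly decay the uniform smoothing factor from eps_start to 0."""
    return eps_start * max(0.0, 1.0 - epoch / total_epochs)

def smooth_labels(one_hot, eps):
    """Uniform label smoothing (ULS): blend the one-hot target with a
    uniform distribution over the K classes."""
    k = len(one_hot)
    return [(1.0 - eps) * p + eps / k for p in one_hot]

# Early in training the target is heavily smoothed; by the final epoch the
# factor reaches zero and the true one-hot label is recovered.
early = smooth_labels([0.0, 1.0, 0.0, 0.0], smoothing_factor(0, 100))
late = smooth_labels([0.0, 1.0, 0.0, 0.0], smoothing_factor(100, 100))
```

The decaying factor lets the model learn soft, low-penalty targets first and progressively sharper ones later, which is the "lower to higher learning utility" pacing the abstract describes.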
56. Two-stage Contextual Transformer-based Convolutional Neural Network for Airway Extraction from CT Images
- Author
-
Wu, Yanan, Zhao, Shuiqing, Qi, Shouliang, Feng, Jie, Pang, Haowen, Chang, Runsheng, Bai, Long, Li, Mengqi, Xia, Shuyue, Qian, Wei, and Ren, Hongliang
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning
- Abstract
Accurate airway extraction from computed tomography (CT) images is a critical step for planning navigation bronchoscopy and for quantitative assessment of airway-related chronic obstructive pulmonary disease (COPD). Existing methods struggle to segment the airway sufficiently, especially the high-generation airways, under the constraint of limited labels, and cannot meet clinical needs in COPD. We propose a novel two-stage 3D contextual transformer-based U-Net for airway segmentation from CT images. The method consists of two stages that perform initial and refined airway segmentation; the two stages share the same subnetwork, with different airway masks as input. A contextual transformer block is applied in both the encoder and decoder paths of the subnetwork to produce high-quality airway segmentation effectively. In the first stage, the total airway mask and CT images are provided to the subnetwork; in the second stage, the intrapulmonary airway mask and corresponding CT scans are provided. The predictions of the two stages are then merged into the final prediction. Extensive experiments were performed on in-house and multiple public datasets. Quantitative and qualitative analyses demonstrate that our proposed method extracts many more branches and greater tree length while achieving state-of-the-art airway segmentation performance. The code is available at https://github.com/zhaozsq/airway_segmentation.
- Published
- 2022
57. Task-Aware Asynchronous Multi-Task Model with Class Incremental Contrastive Learning for Surgical Scene Understanding
- Author
-
Seenivasan, Lalithkumar, Islam, Mobarakol, Xu, Mengya, Lim, Chwee Ming, and Ren, Hongliang
- Subjects
Computer Science - Artificial Intelligence, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Robotics, Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Purpose: Surgical scene understanding with tool-tissue interaction recognition and automatic report generation can play an important role in intra-operative guidance, decision-making, and postoperative analysis in robotic surgery. However, domain shifts between different surgeries, with inter- and intra-patient variation and the appearance of novel instruments, degrade model prediction performance. Moreover, the task requires output from multiple models, which can be computationally expensive and affect real-time performance. Methodology: A multi-task learning (MTL) model is proposed for surgical report generation and tool-tissue interaction prediction that deals with domain shift problems. The model consists of a shared feature extractor, a mesh-transformer branch for captioning, and a graph attention branch for tool-tissue interaction prediction. The shared feature extractor employs class incremental contrastive learning (CICL) to tackle intensity shift and the appearance of novel classes in the target domain. We design Laplacian-of-Gaussian (LoG)-based curriculum learning into both the shared and task-specific branches to enhance model learning. We incorporate a task-aware asynchronous MTL optimization technique to fine-tune the shared weights and converge both tasks optimally. Results: The proposed MTL model trained using task-aware optimization and fine-tuning techniques reported a balanced performance (BLEU score of 0.4049 for scene captioning and accuracy of 0.3508 for interaction detection) for both tasks on the target domain and performed on par with single-task models in domain adaptation. Conclusion: The proposed multi-task model was able to adapt to domain shifts, incorporate novel instruments in the target domain, and perform tool-tissue interaction detection and report generation on par with single-task models., Comment: Manuscript accepted in the International Journal of Computer Assisted Radiology and Surgery. Code available: https://github.com/lalithjets/Domain-adaptation-in-MTL
- Published
- 2022
58. A Miniature 3-DoF Flexible Parallel Robotic Wrist Using NiTi Wires for Gastrointestinal Endoscopic Surgery
- Author
-
Gao, Huxin, Xiao, Xiao, Yang, Xiaoxiao, Zhang, Tao, Zuo, Xiuli, Li, Yanqing, and Ren, Hongliang
- Subjects
Computer Science - Robotics
- Abstract
Gastrointestinal endoscopic surgery (GES) has high requirements for instruments' size and distal dexterity, because of the narrow endoscopic channel and long, tortuous human gastrointestinal tract. This paper utilized Nickel-Titanium (NiTi) wires to develop a miniature 3-DoF (pitch-yaw-translation) flexible parallel robotic wrist (FPRW). Additionally, we assembled an electric knife on the wrist's connection interface and then teleoperated it to perform an endoscopic submucosal dissection (ESD) on porcine stomachs. The effective performance in each ESD workflow proves that the designed FPRW has sufficient workspace, high distal dexterity, and high positioning accuracy., Comment: IEEE International Conference on Robotics and Automation (ICRA) 2022 workshop: Frontiers of Endoluminal Intervention: Clinical opportunities and technical challenges
- Published
- 2022
59. Transformer-based 3D U-Net for pulmonary vessel segmentation and artery-vein separation from CT images
- Author
-
Wu, Yanan, Qi, Shouliang, Wang, Meihuan, Zhao, Shuiqing, Pang, Haowen, Xu, Jiaxuan, Bai, Long, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
60. Rethinking Surgical Captioning: End-to-End Window-Based MLP Transformer Using Patches
- Author
-
Xu, Mengya, Islam, Mobarakol, and Ren, Hongliang
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Surgical captioning plays an important role in surgical instruction prediction and report generation. However, the majority of captioning models still rely on a computationally heavy object detector or feature extractor to extract regional features. In addition, the detection model requires extra bounding-box annotation, which is costly and needs skilled annotators. These factors cause inference delay and prevent the captioning model from being deployed in real-time robotic surgery. For this purpose, we design an end-to-end, detector- and feature-extractor-free captioning model by utilizing the patch-based shifted-window technique. We propose the Shifted Window-Based Multi-Layer Perceptrons Transformer Captioning model (SwinMLP-TranCAP), with faster inference speed and less computation. SwinMLP-TranCAP replaces the multi-head attention module with window-based multi-head MLPs. Such designs have primarily been deployed for image understanding tasks, but very few works investigate the caption generation task. SwinMLP-TranCAP is also extended into a video version for video captioning tasks using 3D patches and windows. Compared with previous detector-based or feature extractor-based models, our models greatly simplify the architecture design while maintaining performance on two surgical datasets. The code is publicly available at https://github.com/XuMengyaAmy/SwinMLP_TranCAP., Comment: 10 pages
- Published
- 2022
61. Rethinking Surgical Instrument Segmentation: A Background Image Can Be All You Need
- Author
-
Wang, An, Islam, Mobarakol, Xu, Mengya, and Ren, Hongliang
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Data diversity and volume are crucial to the success of training deep learning models, while in the medical imaging field, the difficulty and cost of data collection and annotation are especially high. Specifically, in robotic surgery, data scarcity and imbalance have heavily affected model accuracy and limited the design and deployment of deep learning-based surgical applications such as surgical instrument segmentation. Considering this, we rethink the surgical instrument segmentation task and propose a one-to-many data generation solution that gets rid of the complicated and expensive process of data collection and annotation from robotic surgery. In our method, we only utilize a single surgical background tissue image and a few open-source instrument images as seed images, and apply multiple augmentation and blending techniques to synthesize a large number of image variations. In addition, we introduce chained augmentation mixing during training to further enhance data diversity. The proposed approach is evaluated on the real datasets of the EndoVis-2018 and EndoVis-2017 surgical scene segmentation challenges. Our empirical analysis suggests that, without the high cost of data collection and annotation, we can achieve decent surgical instrument segmentation performance. Moreover, we also observe that our method can deal with novel instrument prediction in the deployment domain. We hope our inspiring results will encourage researchers to emphasize data-centric methods to overcome demanding deep learning limitations beyond data shortage, such as class imbalance, domain adaptation, and incremental learning. Our code is available at https://github.com/lofrienger/Single_SurgicalScene_For_Segmentation., Comment: 10 pages, MICCAI 2022
- Published
- 2022
62. Surgical-VQA: Visual Question Answering in Surgical Scenes using Transformer
- Author
-
Seenivasan, Lalithkumar, Islam, Mobarakol, Krishna, Adithya K, and Ren, Hongliang
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence, Computer Science - Machine Learning, Computer Science - Robotics, Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Visual question answering (VQA) in surgery is largely unexplored. Expert surgeons are scarce and are often overloaded with clinical and academic workloads. This overload often limits their time for answering questionnaires from patients, medical students, or junior residents related to surgical procedures. At times, students and junior residents also refrain from asking too many questions during classes to reduce disruption. While computer-aided simulators and recordings of past surgical procedures have been made available for them to observe and improve their skills, they still hugely rely on medical experts to answer their questions. Having a Surgical-VQA system as a reliable 'second opinion' could act as a backup and ease the load on the medical experts in answering these questions. The lack of annotated medical data and the presence of domain-specific terms have limited the exploration of VQA for surgical procedures. In this work, we design a Surgical-VQA task that answers questionnaires on surgical procedures based on the surgical scene. Extending the MICCAI Endoscopic Vision Challenge 2018 dataset and a workflow recognition dataset, we introduce two Surgical-VQA datasets with classification-based and sentence-based answers. To perform Surgical-VQA, we employ vision-text transformer models. We further introduce a residual MLP-based VisualBert encoder model that enforces interaction between visual and text tokens, improving performance in classification-based answering. Furthermore, we study the influence of the number of input image patches and of temporal visual features on model performance in both classification- and sentence-based answering., Comment: Code: https://github.com/lalithjets/Surgical_VQA.git
- Published
- 2022
63. Class Balanced PixelNet for Neurological Image Segmentation
- Author
-
Islam, Mobarakol and Ren, Hongliang
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Computer Vision and Pattern Recognition
- Abstract
In this paper, we propose an automatic brain tumor segmentation approach (PixelNet) using a pixel-level convolutional neural network (CNN). The model extracts features from multiple convolutional layers and concatenates them to form a hyper-column, from which a modest number of pixels is sampled for optimization. The hyper-column ensures both local and global contextual information for pixel-wise prediction. Sampling only a few pixels in the training phase provides statistical efficiency, since spatial redundancy limits the information learned from neighboring pixels in conventional pixel-level semantic segmentation approaches. In addition, label skewness in training data often causes the convolutional model to converge to certain classes, a common problem in medical datasets. We deal with this problem by selecting an equal number of pixels for all classes at sampling time. The proposed model has achieved promising results on brain tumor and ischemic stroke lesion segmentation datasets.
- Published
- 2022
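The class-balanced sampling step described in the abstract above, drawing an equal number of training pixels from every class instead of mirroring the skewed label distribution, can be sketched as follows. The function name and the sampling budget are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def balanced_pixel_sample(label_map, pixels_per_class, rng):
    """Return flat pixel indices with an equal count for every class
    present in the label map."""
    flat = label_map.ravel()
    chosen = []
    for cls in np.unique(flat):
        idx = np.flatnonzero(flat == cls)
        # Sample with replacement when a class has fewer pixels than the budget.
        replace = len(idx) < pixels_per_class
        chosen.append(rng.choice(idx, size=pixels_per_class, replace=replace))
    return np.concatenate(chosen)

rng = np.random.default_rng(0)
labels = np.zeros((64, 64), dtype=int)
labels[:2, :2] = 1  # rare "lesion" class: only 4 of 4096 pixels
sample = balanced_pixel_sample(labels, pixels_per_class=100, rng=rng)
```

Sampling with replacement for rare classes keeps the per-class counts equal even when a class, such as a small tumor, has fewer pixels than the budget, so the loss sees a balanced batch despite the label skew.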
64. Ischemic Stroke Lesion Segmentation Using Adversarial Learning
- Author
-
Islam, Mobarakol, Vaidyanathan, N Rajiv, Jose, V Jeya Maria, and Ren, Hongliang
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning
- Abstract
Ischemic stroke occurs when clogged blood vessels block the blood supply to the brain. Segmentation of the stroke lesion is vital to improve diagnosis, outcome assessment, and treatment planning. In this work, we propose a segmentation model with adversarial learning for ischemic lesion segmentation. We adopt a U-Net with skip connections and dropout as the baseline segmentation network and a fully connected network (FCN) as the discriminator network. The discriminator network consists of five convolution layers followed by leaky ReLU activations and an upsampling layer that rescales the output to the size of the input map. Training a segmentation network along with an adversarial network can detect and correct higher-order inconsistencies between the ground truth and the segmentation maps produced by the segmentor. We exploit three modalities (CT, DPWI, CBF) of acute computed tomography (CT) perfusion data provided in ISLES 2018 (Ischemic Stroke Lesion Segmentation) for ischemic lesion segmentation. Our model achieved a Dice score of 42.10% under cross-validation on the training data and 39% on the test data., Comment: Published in MICCAI ISLES Challenge 2018
- Published
- 2022
65. CholecTriplet2021: A benchmark challenge for surgical action triplet recognition
- Author
-
Nwoye, Chinedu Innocent, Alapatt, Deepak, Yu, Tong, Vardazaryan, Armine, Xia, Fangfang, Zhao, Zixuan, Xia, Tong, Jia, Fucang, Yang, Yuxuan, Wang, Hao, Yu, Derong, Zheng, Guoyan, Duan, Xiaotian, Getty, Neil, Sanchez-Matilla, Ricardo, Robu, Maria, Zhang, Li, Chen, Huabin, Wang, Jiacheng, Wang, Liansheng, Zhang, Bokai, Gerats, Beerend, Raviteja, Sista, Sathish, Rachana, Tao, Rong, Kondo, Satoshi, Pang, Winnie, Ren, Hongliang, Abbing, Julian Ronald, Sarhan, Mohammad Hasan, Bodenstedt, Sebastian, Bhasker, Nithya, Oliveira, Bruno, Torres, Helena R., Ling, Li, Gaida, Finn, Czempiel, Tobias, Vilaça, João L., Morais, Pedro, Fonseca, Jaime, Egging, Ruby Mae, Wijma, Inge Nicole, Qian, Chen, Bian, Guibin, Li, Zhen, Balasubramanian, Velmurugan, Sheet, Debdoot, Luengo, Imanol, Zhu, Yuanbo, Ding, Shuai, Aschenbrenner, Jakob-Anton, van der Kar, Nicolas Elini, Xu, Mengya, Islam, Mobarakol, Seenivasan, Lalithkumar, Jenke, Alexander, Stoyanov, Danail, Mutter, Didier, Mascagni, Pietro, Seeliger, Barbara, Gonzalez, Cristians, and Padoy, Nicolas
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps, or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as ⟨instrument, verb, target⟩ triplet combinations delivers comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and an assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms from competing teams are presented, recognizing surgical action triplets directly from surgical videos and achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison between them and an in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved and highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery., Comment: CholecTriplet2021 challenge report. Paper accepted at the Elsevier journal Medical Image Analysis. 22 pages, 8 figures, 11 tables. Challenge website: https://cholectriplet2021.grand-challenge.org
- Published
- 2022
- Full Text
- View/download PDF
66. Global-Reasoned Multi-Task Learning Model for Surgical Scene Understanding
- Author
-
Seenivasan, Lalithkumar, Mitheran, Sai, Islam, Mobarakol, and Ren, Hongliang
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Robotics
- Abstract
Global and local relational reasoning enable scene understanding models to perform human-like scene analysis and understanding. Scene understanding enables better semantic segmentation and object-to-object interaction detection. In the medical domain, a robust surgical scene understanding model allows the automation of surgical skill evaluation, real-time monitoring of surgeons' performance, and post-surgical analysis. This paper introduces a globally-reasoned multi-task surgical scene understanding model capable of performing instrument segmentation and tool-tissue interaction detection. Here, we incorporate global relational reasoning in the latent interaction space and introduce multi-scale local (neighborhood) reasoning in the coordinate space to improve segmentation. Utilizing the multi-task model setup, the performance of the visual-semantic graph attention network in interaction detection is further enhanced through global reasoning. The global interaction space features from the segmentation module are introduced into the graph network, allowing it to detect interactions based on both node-to-node and global interaction reasoning. Our model reduces the computation cost compared to running two independent single-task models by sharing common modules, which is indispensable for practical applications. Using a sequential optimization technique, the proposed multi-task model outperforms other state-of-the-art single-task models on the MICCAI Endoscopic Vision Challenge 2018 dataset. Additionally, we also observe the performance of the multi-task model when trained using the knowledge distillation technique. The official code implementation is made available on GitHub., Comment: Code available at: https://github.com/lalithjets/Global-reasoned-multi-task-model
- Published
- 2022
- Full Text
- View/download PDF
67. Conceptual Origami Bending and Bistability for Transoral Mechanisms
- Author
-
Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
68. Preface and A Brief Guide to the Chapters
- Author
-
Ren, Hongliang, Yeow, Bok Seng, and Cai, Catherine Jiayi
- Published
- 2023
- Full Text
- View/download PDF
69. Orimimetic Folds into Deployable Mechanisms with Potential Functionalities in Biomedical Robotics
- Author
-
Liu, Hannah, Yeow, Bok Seng, Cai, Catherine Jiayi, Tse, Zion Tsz Ho, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
70. Unsupervised Intelligent Pose Estimation of Origami-Inspired Deployable Robots
- Author
-
Lal, Rohit, Ruphan, S., Sifan, C. A. O., Yuan, Sishen, Lalith, Liang, Qui, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
71. Deployable Kirigami for Intra-Abdominal Monitoring
- Author
-
Xu, Zongyuan, Ng, Kai Li, Ow, Valerie, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
72. Multi-DOF Proprioceptive Origami Structures with Fiducial Markers
- Author
-
Ting, Joelle Chua Shu, Cheng, Royston Long Kun, Dinata, Noor Ezza Varhana Binti Surya, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
73. Deployable Compression Generating and Sensing for Wearable Compression-Aware Force Rendering
- Author
-
Qi, Jiaming, Song, Xiao, Fan, Shicheng, Xu, Chenjie, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
74. Flat Foldable Kirigami for Chipless Wireless Sensing
- Author
-
Le, Yeo Wei, Ponraj, Godwin, Cai, Catherine, Kumar, Kirthika, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
75. Stretchable Strain Sensors by Kirigami Deployable on Balloons with Temporary Tattoo Paper
- Author
-
Jia, Li and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
76. Wormigami and Tippysaurus: Magnetically Actuated Origami Structures
- Author
-
Lew, Serena, Syahindah, Amirah, Kiat, Chiang Soon, Jie, Yeo Ying, Ye, Yang Wei, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
77. Untethered Soft Ferromagnetic Quad-Jaws Cootie Catcher with Selectively Coupled Degrees of Freedom
- Author
-
Cai, Xinchen, Cai, Catherine Jiayi, Seenivasan, Lalithkumar, Tse, Zion, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
78. Wearable Origami Rendering Mechanism Towards Haptic Illusion
- Author
-
Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
79. Kinesthesia Sensorization of Foldable Designs Using Soft Sensors
- Author
-
Qing, Lim, Kumar, Kirthika, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
80. Untethered Motion Generation and Characterization of Multi-Leg Insect-Size Soft Foldable Robots Under Magnetic Actuation
- Author
-
Sawalani, Kashish Sunil, Gupta, Himanshi, Cai, Xin Chen, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
81. Magnetically Actuated Luminal Origami
- Author
-
Mugilvannan, Arjun Kesav, Han, Tan Jing, Elaine, Chen Shi An, Jun, Ignatius Lee Jia, Aung, Thet Htet Win Naing, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
82. Tactile Sensitive Origami Trihexaflexagon Gripper Actuated by Foldable Pneumatic Bellows
- Author
-
Prituja, A. V., Cheng, Bryna Tan, Banerjee, Hritwick, Seng, Yeow Bok, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
83. Compressable and Steerable Slinky Motions
- Author
-
Jiaqi, Lim, Jun, Tan Yong, Wei, Moh Chin, Ning, Lee Hui, Ahamed, Faizah Hairun Sabir, Yi, Oh Zhong, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
84. Biomimetic Untethered Inflatable Origami
- Author
-
Clarice, Low Rae-Yin, Shiying, Cai, Yang, Chee Jenn, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
85. Magnetically Actuated Origami Structures for Untethered Optical Steering in Remote Set-up: Preliminary Designs and Characterisations
- Author
-
Ong, Lydia Si Jia, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
86. Deployable and Interchangeable Telescoping Tubes
- Author
-
Hyojin, Ae, Meng, Tong, Han, Tan Yong, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
87. Deployable Parallelogram Mechanism for Generating Remote Centre of Motion Towards Ocular Procedures
- Author
-
Heng, Tan Tat, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
88. SimCol3D — 3D reconstruction during colonoscopy challenge
- Author
-
Rau, Anita, Bano, Sophia, Jin, Yueming, Azagra, Pablo, Morlana, Javier, Kader, Rawen, Sanderson, Edward, Matuszewski, Bogdan J., Lee, Jae Young, Lee, Dong-Jae, Posner, Erez, Frank, Netanel, Elangovan, Varshini, Raviteja, Sista, Li, Zhengwen, Liu, Jiquan, Lalithkumar, Seenivasan, Islam, Mobarakol, Ren, Hongliang, Lovat, Laurence B., Montiel, José M.M., and Stoyanov, Danail
- Published
- 2024
- Full Text
- View/download PDF
89. ST-MTL: Spatio-Temporal Multitask Learning Model to Predict Scanpath While Tracking Instruments in Robotic Surgery
- Author
-
Islam, Mobarakol, VS, Vibashan, Lim, Chwee Ming, and Ren, Hongliang
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Robotics ,Electrical Engineering and Systems Science - Image and Video Processing - Abstract
Representation learning of task-oriented attention while tracking instruments holds vast potential in image-guided robotic surgery. Incorporating the cognitive ability to automate camera control enables the surgeon to concentrate more on handling surgical instruments. The objective is to reduce operation time and facilitate the surgery for both surgeons and patients. We propose an end-to-end trainable Spatio-Temporal Multi-Task Learning (ST-MTL) model with a shared encoder and spatio-temporal decoders for real-time surgical instrument segmentation and task-oriented saliency detection. In MTL models with shared parameters, optimizing multiple loss functions toward a single convergence point is still an open challenge. We tackle the problem with a novel asynchronous spatio-temporal optimization (ASTO) technique that calculates independent gradients for each decoder. We also design a competitive squeeze-and-excitation unit by casting a skip connection that retains weak features, excites strong features, and performs dynamic spatial and channel-wise feature recalibration. To capture better long-term spatio-temporal dependencies, we enhance the long short-term memory (LSTM) module by concatenating high-level encoder features of consecutive frames. We also introduce a Sinkhorn-regularized loss to enhance task-oriented saliency detection while preserving computational efficiency. We generate the task-aware saliency maps and scanpaths of the instruments on the dataset of the MICCAI 2017 robotic instrument segmentation challenge. Compared to state-of-the-art segmentation and saliency methods, our model outperforms most of them across the evaluation metrics and produces outstanding performance in the challenge., Comment: 12 pages
- Published
- 2021
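The channel-wise feature recalibration mentioned in the abstract above can be illustrated with a minimal squeeze-and-excitation step. This is a generic NumPy sketch of the standard SE mechanism, not the paper's competitive variant; the weight matrices `w1` and `w2` stand in for learned fully connected layers:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_recalibrate(feat, w1, w2):
    """Channel squeeze-and-excitation for a feature map of shape (C, H, W)."""
    z = feat.mean(axis=(1, 2))                 # squeeze: global average pool -> (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))  # excite: FC -> ReLU -> FC -> sigmoid
    return feat * s[:, None, None]             # recalibrate each channel by its gate
```

Because the gates lie in (0, 1), this sketch only attenuates channels; the "competitive" unit in the paper additionally routes a skip connection so that weak features are retained rather than suppressed.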
90. RASEC: Rescaling Acquisition Strategy with Energy Constraints under SE-OU Fusion Kernel for Active Trachea Palpation and Incision Recommendation in Laryngeal Region
- Author
-
Yue, Wenchao, Bai, Fan, Liu, Jianbang, Ju, Feng, Meng, Max Q-H, Lim, Chwee Ming, and Ren, Hongliang
- Subjects
Computer Science - Robotics - Abstract
A novel palpation-based incision detection strategy in the laryngeal region, potentially for robotic tracheotomy, is proposed in this letter. A tactile sensor is introduced to measure tissue hardness in the specific laryngeal region by gentle contact. A kernel fusion method is proposed to combine the Squared Exponential (SE) kernel with the Ornstein-Uhlenbeck (OU) kernel, overcoming the limitation that existing kernel functions are not sufficiently optimal in this scenario. Moreover, we further regularize the exploration factor and greed factor, and the tactile sensor's moving distance and the robotic base link's rotation angle during the incision localization process are considered as new factors in the acquisition strategy. We conducted simulation and physical experiments to compare the newly proposed algorithm, Rescaling Acquisition Strategy with Energy Constraints (RASEC), in trachea detection with current palpation-based acquisition strategies. The results indicate that the proposed acquisition strategy with the fusion kernel successfully localizes the incision with the highest algorithm performance (average precision 0.932, average recall 0.973, average F1 score 0.952). During the robotic palpation process, the cumulative moving distance is reduced by 50%, and the cumulative rotation angle is reduced by 71.4%, with no sacrifice in comprehensive performance. Therefore, RASEC can efficiently suggest the incision zone in the laryngeal region and greatly reduces energy loss., Comment: Submitted to RA-L
- Published
- 2021
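The SE and OU kernels named above have standard closed forms. A minimal sketch of a fused kernel follows; the convex-combination weight `w` is an illustrative assumption, since the letter's exact fusion rule is not reproduced here:

```python
import numpy as np

def k_se(x1, x2, ell=1.0):
    """Squared Exponential kernel: smooth, infinitely differentiable samples."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-d ** 2 / (2.0 * ell ** 2))

def k_ou(x1, x2, ell=1.0):
    """Ornstein-Uhlenbeck kernel: rough samples, exponential in |distance|."""
    d = np.abs(x1[:, None] - x2[None, :])
    return np.exp(-d / ell)

def k_fused(x1, x2, w=0.5, ell=1.0):
    """Convex combination of the two kernels."""
    return w * k_se(x1, x2, ell) + (1.0 - w) * k_ou(x1, x2, ell)
```

A non-negative weighted sum of valid (positive-semidefinite) kernels is itself a valid kernel, so the fused kernel can be dropped directly into a Gaussian-process regressor for hardness-map estimation.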
91. Class-Distribution-Aware Calibration for Long-Tailed Visual Recognition
- Author
-
Islam, Mobarakol, Seenivasan, Lalithkumar, Ren, Hongliang, and Glocker, Ben
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Despite impressive accuracy, deep neural networks are often miscalibrated and tend to make overly confident predictions. Recent techniques like temperature scaling (TS) and label smoothing (LS) show effectiveness in obtaining a well-calibrated model by smoothing logits and hard labels with scalar factors, respectively. However, a uniform TS or LS factor may not be optimal for calibrating models trained on a long-tailed dataset, where the model produces overly confident probabilities for high-frequency classes. In this study, we propose class-distribution-aware TS (CDA-TS) and LS (CDA-LS) by incorporating class-frequency information into model calibration in the context of long-tailed distributions. In CDA-TS, the scalar temperature value is replaced with a CDA temperature vector encoded with class frequency to compensate for over-confidence. Similarly, CDA-LS uses a vector smoothing factor and flattens the hard labels according to their corresponding class distribution. We also integrate the CDA optimal temperature vector with the distillation loss, which reduces miscalibration in self-distillation (SD). We empirically show that class-distribution-aware TS and LS can accommodate imbalanced data distributions, yielding superior performance in both calibration error and predictive accuracy. We also observe that SD with an extremely imbalanced dataset is less effective in terms of calibration performance. Code is available at https://github.com/mobarakol/Class-Distribution-Aware-TS-LS., Comment: Presented at the ICML 2021 Workshop on Uncertainty and Robustness in Deep Learning
- Published
- 2021
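The vector temperature idea above can be sketched in a few lines. The specific frequency encoding below (scaling a base temperature by normalized class counts so that head classes are smoothed more) is a hypothetical choice for illustration, not the paper's formula:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cda_temperatures(class_counts, t_base=1.5):
    """One temperature per class: frequent (head) classes get higher T."""
    freq = np.asarray(class_counts, dtype=float)
    return t_base * (0.5 + freq / freq.max())

def cda_ts(logits, class_counts, t_base=1.5):
    """Replace the scalar temperature with a per-class vector."""
    return softmax(logits / cda_temperatures(class_counts, t_base))
```

Dividing head-class logits by a larger temperature flattens their probabilities, directly countering the over-confidence on high-frequency classes described in the abstract.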
92. Class-Incremental Domain Adaptation with Smoothing and Calibration for Surgical Report Generation
- Author
-
Xu, Mengya, Islam, Mobarakol, Lim, Chwee Ming, and Ren, Hongliang
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Electrical Engineering and Systems Science - Image and Video Processing - Abstract
Generating surgical reports aimed at surgical scene understanding in robot-assisted surgery can contribute to document entry tasks and post-operative analysis. Despite impressive results, deep learning models degrade in performance when applied to different domains under domain shift. In addition, new instruments and variations in surgical tissue appear in robotic surgery. In this work, we propose class-incremental domain adaptation (CIDA) with a multi-layer transformer-based model to tackle new classes and domain shift in the target domain when generating surgical reports during robotic surgery. To adapt to incremental classes and extract domain-invariant features, a class-incremental (CI) learning method with a supervised contrastive (SupCon) loss is incorporated with a feature extractor. To generate captions from the extracted features, curriculum by one-dimensional Gaussian smoothing (CBS) is integrated with a multi-layer transformer-based caption prediction model. CBS smooths the feature embeddings using anti-aliasing and helps the model learn domain-invariant features. We also adopt label smoothing (LS) to calibrate prediction probabilities and obtain better feature representations with both the feature extractor and the captioning model. The proposed techniques are empirically evaluated on datasets from two surgical domains: nephrectomy operations and transoral robotic surgery. We observe that domain-invariant feature learning and the well-calibrated network improve surgical report generation performance in both the source and target domains under domain shift and unseen classes, in one-shot and few-shot learning settings. The code is publicly available at https://github.com/XuMengyaAmy/CIDACaptioning., Comment: Accepted in MICCAI 2021
- Published
- 2021
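Curriculum by one-dimensional Gaussian smoothing can be sketched as a convolution whose kernel width anneals over training. The schedule parameters (`sigma0`, `decay`) below are hypothetical choices for illustration:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius=3):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()  # normalize so the kernel preserves total mass

def cbs_smooth(features, epoch, sigma0=1.0, decay=0.9):
    """Smooth a 1-D feature embedding; sigma anneals as training progresses."""
    sigma = max(sigma0 * decay ** epoch, 1e-3)
    return np.convolve(features, gaussian_kernel_1d(sigma), mode="same")
```

Early epochs therefore see heavily smoothed (anti-aliased) features, while later epochs see nearly unsmoothed ones, realizing the easy-to-hard curriculum described in the abstract.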
93. Brain Tumor Segmentation and Survival Prediction using 3D Attention UNet
- Author
-
Islam, Mobarakol, VS, Vibashan, Jose, V Jeya Maria, Wijethilake, Navodini, Utkarsh, Uppal, and Ren, Hongliang
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing ,Computer Science - Computer Vision and Pattern Recognition - Abstract
In this work, we develop an attention convolutional neural network (CNN) to segment brain tumors from magnetic resonance images (MRI). Further, we predict the survival rate using various machine learning methods. We adopt a 3D UNet architecture and integrate channel and spatial attention with the decoder network to perform segmentation. For survival prediction, we extract novel radiomic features based on the geometry, location, and shape of the segmented tumor and combine them with clinical information to estimate the survival duration for each patient. We also perform extensive experiments to show the effect of each feature on overall survival (OS) prediction. The experimental results indicate that radiomic features such as the histogram, location, and shape of the necrosis region and clinical features like age are the most critical parameters for estimating OS., Comment: MICCAI-BrainLes Workshop
- Published
- 2021
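Simple geometry and location radiomic features like those listed above can be computed directly from a segmentation mask. This sketch shows a few generic examples (voxel volume, centroid, and a crude surface proxy), not the paper's exact feature set:

```python
import numpy as np

def radiomic_features(mask):
    """Compute basic features from a binary 3-D tumor segmentation mask."""
    voxels = np.argwhere(mask)
    volume = len(voxels)                # geometry: voxel count
    centroid = voxels.mean(axis=0)      # location: center of mass
    # shape proxy: foreground voxels with at least one background 6-neighbor
    padded = np.pad(mask, 1)
    core = (
        padded[2:, 1:-1, 1:-1] & padded[:-2, 1:-1, 1:-1]
        & padded[1:-1, 2:, 1:-1] & padded[1:-1, :-2, 1:-1]
        & padded[1:-1, 1:-1, 2:] & padded[1:-1, 1:-1, :-2]
    )
    surface = int((mask & ~core).sum())
    return {"volume": volume, "centroid": centroid, "surface": surface}
```

Feature vectors of this kind, concatenated with clinical variables such as age, are what a downstream regressor consumes for survival estimation.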
94. Glioma Prognosis: Segmentation of the Tumor and Survival Prediction using Shape, Geometric and Clinical Information
- Author
-
Islam, Mobarakol, Jose, V Jeya Maria, and Ren, Hongliang
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing ,Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Machine Learning - Abstract
Segmentation of brain tumors from magnetic resonance imaging (MRI) is a vital process for improving diagnosis and treatment planning and for studying the difference between subjects with tumors and healthy subjects. In this paper, we exploit a convolutional neural network (CNN) with the hypercolumn technique to segment tumor from healthy brain tissue. A hypercolumn is the concatenation of a set of vectors formed by extracting convolutional features from multiple layers. The proposed model integrates a batch normalization (BN) approach with hypercolumns. BN layers help to alleviate the internal covariate shift during stochastic gradient descent (SGD) training by normalizing each mini-batch to zero mean and unit variance. Survival prediction is done by first extracting features (geometric, fractal, and histogram) from the segmented brain tumor data. Then, the number of days of overall survival is predicted by regressing on the extracted features with an artificial neural network (ANN). Our model achieves mean dice scores of 89.78%, 82.53%, and 76.54% for the whole tumor, tumor core, and enhancing tumor, respectively, in the segmentation task and 67.90% in the overall survival prediction task on the validation set of the BraTS 2018 challenge. It obtains mean dice accuracies of 87.315%, 77.04%, and 70.22% for the whole tumor, tumor core, and enhancing tumor, respectively, in the segmentation task and 46.80% in the overall survival prediction task on the BraTS 2018 test data set., Comment: MICCAI-BrainLes Workshop
- Published
- 2021
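The hypercolumn construction described above, concatenating per-pixel feature vectors from multiple layers, can be sketched as follows. Nearest-neighbor upsampling is an illustrative choice, as the paper's interpolation scheme is not reproduced here:

```python
import numpy as np

def upsample_nearest(f, H, W):
    """Nearest-neighbor upsampling of a (C, h, w) feature map to (C, H, W)."""
    c, h, w = f.shape
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    return f[:, rows][:, :, cols]

def hypercolumn(feature_maps):
    """Stack features from several conv layers at every pixel location."""
    H, W = feature_maps[0].shape[1:]
    return np.concatenate(
        [upsample_nearest(f, H, W) for f in feature_maps], axis=0
    )  # shape: (sum of channels, H, W)
```

Each output pixel thus carries a column of features from coarse and fine layers at once, which is what lets a per-pixel classifier separate tumor from healthy tissue.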
95. Flexible Needle Steering with Tethered and Untethered Actuation: Current States, Targeting Errors, Challenges and Opportunities
- Author
-
Lu, Mingyue, Zhang, Yongde, Lim, Chwee Ming, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
96. Task-aware asynchronous multi-task model with class incremental contrastive learning for surgical scene understanding
- Author
-
Seenivasan, Lalithkumar, Islam, Mobarakol, Xu, Mengya, Lim, Chwee Ming, and Ren, Hongliang
- Published
- 2023
- Full Text
- View/download PDF
97. Novel miniature transendoscopic telerobotic system for endoscopic submucosal dissection (with videos)
- Author
-
Yang, Xiaoxiao, Gao, Huxin, Fu, Shichen, Ji, Rui, Hou, Cheng, Liu, Huicong, Luan, Nan, Ren, Hongliang, Sun, Liping, Yang, Jialin, Zhou, Zhifeng, Yang, Xiaoyun, Sun, Lining, Li, Yanqing, and Zuo, Xiuli
- Published
- 2024
- Full Text
- View/download PDF
98. Learning Domain Adaptation with Model Calibration for Surgical Report Generation in Robotic Surgery
- Author
-
Xu, Mengya, Islam, Mobarakol, Lim, Chwee Ming, and Ren, Hongliang
- Subjects
Computer Science - Robotics - Abstract
Generating a surgical report in robot-assisted surgery, in the form of a natural language expression of surgical scene understanding, can play a significant role in document entry tasks, surgical training, and post-operative analysis. Despite the state-of-the-art accuracy of deep learning algorithms, deployment performance often drops when applied to Target Domain (TD) data. For this purpose, we develop a multi-layer transformer-based model with gradient reversal adversarial learning to generate a caption for multi-domain surgical images that describes the semantic relationship between instruments and the surgical Region of Interest (ROI). In the gradient reversal adversarial learning scheme, the gradient is multiplied by a negative constant and updated adversarially in backward propagation, discriminating between the source and target domains and encouraging the emergence of domain-invariant features. We also investigate model calibration with the label smoothing technique and the effect of a well-calibrated model on the penultimate layer's feature representation and Domain Adaptation (DA). We annotate two robotic surgery datasets, MICCAI robotic scene segmentation and Transoral Robotic Surgery (TORS), with captions of the procedures and empirically show that our proposed method improves surgical report generation in both the source and target domains under unsupervised, zero-shot, one-shot, and few-shot learning., Comment: Accepted in ICRA 2021
- Published
- 2021
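The gradient reversal trick described above is framework-agnostic: identity in the forward pass, negated and scaled gradient in the backward pass. A minimal framework-free sketch, where `lam` is the usual adversarial weighting constant:

```python
import numpy as np

class GradientReversal:
    """Forward: identity. Backward: multiply the incoming gradient by -lam,
    so the shared encoder is trained to confuse the domain discriminator."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x

    def backward(self, grad_output):
        return -self.lam * grad_output
```

In practice this is implemented as a custom autograd function; wherever gradients flow from the domain classifier back into the shared encoder, their sign is flipped, pushing the encoder toward domain-invariant features.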
99. Glioblastoma Multiforme Prognosis: MRI Missing Modality Generation, Segmentation and Radiogenomic Survival Prediction
- Author
-
Islam, Mobarakol, Wijethilake, Navodini, and Ren, Hongliang
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing ,Computer Science - Computer Vision and Pattern Recognition - Abstract
The accurate prognosis of Glioblastoma Multiforme (GBM) plays an essential role in planning related surgeries and treatments. Conventional survival prediction models rely on radiomic features from magnetic resonance imaging (MRI). In this paper, we propose a radiogenomic overall survival (OS) prediction approach by incorporating gene expression data with radiomic features such as shape, geometry, and clinical information. We exploit the TCGA (The Cancer Genome Atlas) dataset and synthesize the missing MRI modalities using a fully convolutional network (FCN) in a conditional Generative Adversarial Network (cGAN). Meanwhile, the same FCN architecture enables tumor segmentation from the available and synthesized MRI modalities. The proposed FCN architecture comprises octave convolution (OctConv) and a novel decoder with skip connections in a spatial and channel squeeze & excitation (skip-scSE) block. OctConv can process low- and high-frequency features individually and improve model efficiency by reducing channel-wise redundancy. Skip-scSE applies spatial and channel-wise excitation to highlight the essential features and reduces the sparsity of learned parameters in deeper layers using skip connections. The proposed approaches are evaluated through comparative experiments with state-of-the-art models in synthesis, segmentation, and overall survival (OS) prediction. We observe that adding the missing MRI modality improves segmentation, that the expression levels of gene markers contribute strongly to GBM prognosis prediction, and that fused radiogenomic features boost OS estimation., Comment: Accepted for a journal
- Published
- 2021
100. Radiogenomics of Glioblastoma: Identification of Radiomics associated with Molecular Subtypes
- Author
-
Wijethilake, Navodini, Islam, Mobarakol, Meedeniya, Dulani, Chitraranjan, Charith, Perera, Indika, and Ren, Hongliang
- Subjects
Quantitative Biology - Quantitative Methods ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
Glioblastoma is the most malignant type of central nervous system tumor, with GBM subtypes defined by molecular-level gene alterations. These alterations also affect the histology and can therefore cause visible changes in images, such as enhancement and edema development. In this study, we extract intensity, volume, and texture features from the tumor subregions to identify correlations with gene expression features and overall survival. We then utilize radiomics to find associations with the subtypes of glioblastoma. The fractal dimensions of the whole tumor, tumor core, and necrosis regions show a significant difference between the Proneural, Classical, and Mesenchymal subtypes. Additionally, the subtypes of GBM are predicted with an average accuracy of 79% utilizing radiomics and an accuracy of over 90% utilizing gene expression profiles., Comment: 2nd MICCAI workshop on Radiomics and Radiogenomics in Neuro-oncology using AI, Springer, LNCS, (to appear)
- Published
- 2020
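The fractal dimension of a tumor subregion can be estimated by box counting on its binary mask. This is a generic sketch of the technique (the box scales are illustrative), not the study's exact pipeline:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate fractal dimension as the slope of log N(s) vs log (1/s),
    where N(s) is the number of boxes of side s containing foreground."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        grid = mask[: h - h % s, : w - w % s].reshape(h // s, s, w // s, s)
        counts.append(int(grid.any(axis=(1, 3)).sum()))
    logs_inv = np.log(1.0 / np.asarray(sizes, dtype=float))
    slope, _ = np.polyfit(logs_inv, np.log(counts), 1)
    return slope
```

A filled planar region yields a dimension near 2, while rougher, more infiltrative boundaries push the estimate between 1 and 2, which is the kind of shape signal the study correlates with molecular subtypes.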