2,665 results for "POST-PROCESSING"
Search Results
2. Restoring Connectivity in Vascular Segmentations Using a Learned Post-processing Model
- Author
-
Carneiro-Esteves, Sophie, Vacavant, Antoine, Merveille, Odyssée, Chen, Chao, editor, Singh, Yash, editor, and Hu, Xiaoling, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Refining Deep Learning Segmentation Maps with a Local Thresholding Approach: Application to Liver Surface Nodularity Quantification in CT
- Author
-
Yang, Sisi, Bône, Alexandre, Decaens, Thomas, Glaunes, Joan Alexis, Ali, Sharib, editor, van der Sommen, Fons, editor, Papież, Bartłomiej Władysław, editor, Ghatwary, Noha, editor, Jin, Yueming, editor, and Kolenbrander, Iris, editor
- Published
- 2025
- Full Text
- View/download PDF
4. A review on microstructure, mechanical behavior and post processing of additively manufactured Ni-based superalloys
- Author
-
Kuntoğlu, Mustafa, Salur, Emin, Gupta, Munish Kumar, Waqar, Saad, Szczotkarz, Natalia, Vashishtha, Govind, Korkmaz, Mehmet Erdi, and Krolczyk, Grzegorz M.
- Published
- 2024
- Full Text
- View/download PDF
5. Kfd-net: a knowledge fusion decision method for post-processing brain glioma MRI segmentation.
- Author
-
Wang, Guizeng, Lu, Huimin, Li, Niya, Xue, Han, and Sang, Pengcheng
- Abstract
The automatic segmentation of brain glioma in MRI images is of great significance for clinical diagnosis and treatment planning. However, achieving precise segmentation requires effective post-processing of the segmentation results. Current post-processing methods fail to differentiate processing based on the glioma category, limiting the improvement of MRI segmentation accuracy. This paper proposes a novel knowledge fusion decision method for post-processing brain glioma MRI segmentation. The method takes grading information and the area ratio from the initial segmentation as input, performs fuzzy reasoning based on formulated rules, and generates decision coefficients for different segmentation regions. To address class imbalance in the segmentation network, a Boundary Region Voxel Dynamic Weighted Loss Function is introduced. On the BraTS2019 validation set, our method achieves DSC values of 0.756, 0.990, and 0.805 for the ET, WT, and TC regions, respectively, along with HD values of 4.02 mm, 10.73 mm, and 9.52 mm. Compared to state-of-the-art methods, our proposed approach demonstrates superior segmentation performance. Validation on the BraTS2020 dataset further confirms the stability and reliability of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Electrochemical machining post-processing of additively manufactured nickel-based superalloys via laser directed energy deposition.
- Author
-
Zhang, Shaoli, Zhang, Dan, Li, Xiangyang, Luo, Kai, Zhang, Yaping, Ma, Yaming, and Yan, Lei
- Subjects
LASER machining, MANUFACTURING processes, SURFACE finishing, HEAT resistant alloys, MACHINING
- Abstract
The electrochemical machining post-processing of additively manufactured nickel-based superalloys was studied to resolve the trade-off between material processing efficiency and accuracy. The results show that the surfaces of the fabricated specimens presented an uneven, undulating morphology due to the overlapping of cladding tracks and the stacking of cladding layers during laser directed energy deposition. After electrochemical machining post-processing, the levelling of the specimen surface gradually increased as processing proceeded. At the same time, however, the levelling efficiency gradually decreased because the current density difference between the peaks and valleys on the specimen surface diminished. The current efficiency remained essentially unchanged at 67.6% throughout the electrochemical machining post-processing. This study provides an experimental basis and theoretical foundation for developing a combined machining technology for nickel-based superalloys based on laser directed energy deposition and electrochemical machining. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Simulation of Motion Nonlinear Error Compensation of CNC Machine Tools with Multi-Axis Linkage.
- Author
-
Li, Xianyi
- Subjects
NUMERICAL control of machine tools, HARMONIC functions, NONLINEAR analysis, MACHINE tools, MACHINE parts, PROBLEM solving
- Abstract
To address the nonlinear error of a dual rotary table five-axis CNC machine tool caused by the linkage of its rotary and translational axes, a simulation of motion nonlinear error compensation for a multi-axis linkage CNC machine tool is proposed. Adjacent points in the tool position file are selected as the tool position points for building the model, and a nonlinear error model resolved by a harmonic function is established according to the error distribution in classical post-processing. The nonlinear error between two tool position points is quickly predicted by the analytical expression of this model, and real-time error compensation of the intermediate interpolation points is realized. Finally, MATLAB simulation analysis is performed on the tool position file of an impeller part machining to verify the effectiveness of the proposed algorithm. The experimental results show, from the distribution curve of the nonlinear error, that the error after compensation is about 10% of that before compensation, verifying the effectiveness of the nonlinear error compensation mechanism, the correctness of the nonlinear error analysis and compensation method, and the effectiveness of the post-processing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. An enhanced deep learning method for the quantification of epicardial adipose tissue.
- Author
-
Tang, Ke-Xin, Liao, Xiao-Bo, Yuan, Ling-Qing, He, Sha-Qi, Wang, Min, Mei, Xi-Long, Zhou, Zhi-Ang, Fu, Qin, Lin, Xiao, and Liu, Jun
- Subjects
EPICARDIAL adipose tissue, DEEP learning, COMPUTED tomography, HUMAN error, DISEASE progression
- Abstract
Epicardial adipose tissue (EAT) significantly contributes to the progression of cardiovascular diseases (CVDs). However, manually quantifying EAT volume is labor-intensive and susceptible to human error. Although there have been some deep learning-based methods for automatic quantification of EAT, they are mostly uninterpretable and fail to harness the complete anatomical characteristics. In this study, we proposed an enhanced deep learning method designed for EAT quantification on coronary computed tomography angiography (CCTA) scans, which integrated both a data-driven method and specific morphological information. A total of 108 patients who underwent routine CCTA examinations were included in this study. They were randomly assigned to a training set (n = 60), validation set (n = 8), and test set (n = 40). We quantified and calculated the EAT volume based on the CT attenuation values within the predicted pericardium. The automatic method demonstrated strong agreement with expert manual quantification, yielding a median Dice similarity coefficient (DSC) of 0.916 (Interquartile Range (IQR): 0.846–0.948) for 2D slices. Meanwhile, the median DSC for the 3D volume was 0.896 (IQR: 0.874–0.908) between these two measures, with an excellent correlation of 0.980 (p < 0.001) for EAT volumes. Additionally, our model's Bland-Altman analysis revealed a low bias of -2.39 cm³. The incorporation of pericardial anatomical structures into deep learning methods can effectively enhance the automatic quantification of EAT. The promising results demonstrate its potential for clinical application. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
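The agreement in the entry above is reported as a Dice similarity coefficient. For readers unfamiliar with the metric, a minimal sketch of how a DSC can be computed from two binary segmentation masks (illustrative code, not taken from the paper):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient, 2|A∩B| / (|A| + |B|), between binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks count as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# toy 2D masks: the prediction recovers 3 of the 4 ground-truth pixels
gt = np.array([[1, 1],
               [1, 1]])
pred = np.array([[1, 1],
                 [1, 0]])
print(dice(gt, pred))  # 2*3 / (4 + 3) ≈ 0.857
```

The same formula applies per 2D slice or over a whole 3D volume; only the shape of the arrays changes.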
9. Dual-Branch Dynamic Object Segmentation Network Based on Spatio-Temporal Information Fusion.
- Author
-
Huang, Fei, Wang, Zhiwen, Zheng, Yu, Wang, Qi, Hao, Bingsen, and Xiang, Yangkai
- Subjects
POINT cloud, PLURALITY voting, AUTONOMOUS vehicles, FEATURE extraction, ALGORITHMS
- Abstract
To address the low accuracy of dynamic object segmentation by semantic segmentation networks, a dual-branch dynamic object segmentation network based on the fusion of spatio-temporal information is proposed. First, an appearance–motion feature fusion module is designed, which characterizes the motion information of objects by introducing a residual graph. This module combines a co-attention mechanism and a motion correction method to enhance the extraction of appearance features for dynamic objects. Furthermore, to mitigate boundary blurring and misclassification when 2D semantic information is projected back into 3D point clouds, a majority voting strategy based on time-series point cloud information is proposed, aiming to overcome the limitations of post-processing on single-frame point clouds. This method can significantly enhance the accuracy of segmenting moving objects in practical scenarios. Test results on the SemanticKITTI public dataset demonstrate that the improved method outperforms mainstream dynamic object segmentation networks such as LMNet and MotionSeg3D. Specifically, it achieves an Intersection over Union (IoU) of 72.19%, an improvement of 9.68% and 4.86% over LMNet and MotionSeg3D, respectively. The proposed method has practical applications in autonomous driving perception. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. An In-vitro Evaluation of Grinding and Polishing on Surface Roughness and Flexural Strength of Monolithic Zirconia.
- Author
-
Ranjan, Rishabh, Mittal, Sanjeev, Sharma, Prabal, Sharma, Bhumika, Singh, Ankita, and Patel, Sneha
- Subjects
FLEXURAL strength, SURFACE roughness, PEARSON correlation (Statistics), ZIRCONIUM oxide, SURFACE finishing
- Abstract
Introduction: For a dentist, it is a matter of concern to restore the original luster or glaze on a Monolithic Zirconia (MZ) restoration after clinical adjustments. For a long time, the gold standard for surface restoration was reglazing; however, with advancements in technology, new polishing kits optimised for zirconia have become available for chairside polishing. Aim: To examine the effects of grinding, reglazing, and polishing techniques on the surface roughness and flexural strength of MZ specimens. Materials and Methods: This in-vitro study was conducted in the Department of Prosthodontics at MM College of Dental Sciences and Research in Mullana, Haryana, India from April to December 2019. Thirty-two specimens of MZ, each measuring 20 mm × 5 mm × 3 mm, were fabricated and divided into four groups of eight specimens each. Group C was the Control group; specimens in Group G were only Ground, specimens in Group GR were Ground and Reglazed, and specimens in Group GP were Ground and Polished using a zirconia polishing kit. All specimens were then analysed for surface roughness and flexural strength using a profilometer and a Universal Testing Machine (UTM), respectively. Statistical analysis was performed using Analysis of Variance (ANOVA), Tukey's Honestly Significant Difference (HSD) post-hoc test, Pearson's correlation, and other methods using IBM SPSS Statistics version 25.0 (Armonk, USA). Results: The surface roughness (Ra) of the control group (C) was 0.4403 μm, followed by the Polished group (GP) at 0.656 μm and the Reglazed group (GR) at 0.809 μm. The difference between the polished (GP) and reglazed (GR) groups was statistically insignificant (p=0.53). There was a statistically significant increase in flexural strength in the reglazed samples (GR) compared to the polished samples (GP). No significant correlation (p=0.58, r=-0.1) was found between surface roughness and flexural strength.
Conclusion: Chairside polishing can be an effective alternative to reglazing for restoring the surface finish of MZ. Additionally, polishing increases the strength of zirconia after adjustments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. LL-Diff: Low-Light Image Enhancement Utilizing Langevin Sampling Diffusion.
- Author
-
Ding, Boren, Zhang, Xiaofeng, Yu, Zekun, and Hui, Zheng
- Subjects
IMAGE intensifiers, PARTICLE motion, SAMPLING methods, SPEED, NOISE
- Abstract
In this paper, we propose a new algorithm called LL-Diff, which is innovative compared to traditional augmentation methods in that it introduces the sampling method of Langevin dynamics. This sampling approach simulates the motion of particles in complex environments and can better handle noise and details in low-light conditions. We also incorporate a causal attention mechanism to achieve causality and address the issue of confounding effects. This attention mechanism enables us to better capture local information while avoiding over-enhancement. We have conducted experiments on the LOL-V1 and LOL-V2 datasets, and the results show that LL-Diff significantly improves computational speed and several evaluation metrics, demonstrating the superiority and effectiveness of our method for low-light image enhancement tasks. The code will be released on GitHub when the paper has been accepted. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Ensemble weather forecast post‐processing with a flexible probabilistic neural network approach.
- Author
-
Mlakar, Peter, Merše, Janko, and Faganeli Pucer, Jana
- Subjects
ARTIFICIAL neural networks, DISTRIBUTION (Probability theory), LEAD time (Supply chain management), WEATHER forecasting, MACHINE learning
- Abstract
Ensemble forecast post‐processing is a necessary step in producing accurate probabilistic forecasts. Many post‐processing methods operate by estimating the parameters of a predetermined probability distribution; others operate on a per‐lead‐time or per‐station basis. All of the aforementioned factors either limit the expressive power of the methods in question or require additional models, one for each lead time and station. We propose a novel, neural network‐based method that produces forecasts for all lead times jointly and requires a single model for all stations. We incorporate normalizing spline flows as flexible parametric distribution estimators, which enables us to model complex forecast distributions. Furthermore, we demonstrate the effectiveness of our method in the context of the EUPPBench benchmark, where we conduct 2‐m temperature forecast post‐processing for stations in a subregion of Europe. We show that our novel method exhibits state‐of‐the‐art performance on the benchmark, improving upon other well‐performing entries. Additionally, by providing a detailed comparison of three variants of our novel post‐processing method, we elucidate the reasons why our method outperforms per‐lead‐time‐based approaches and approaches with distributional assumptions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
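Entry 12 argues against post-processing approaches fitted separately per station and lead time. As a point of reference for what such a baseline looks like, here is a minimal sketch of a least-squares correction of the ensemble mean on synthetic data; this is the kind of simple per-station model the paper's joint neural network generalizes, and every number and name below is illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic training data: a raw ensemble that is biased (+1.5)
# relative to the observations it is supposed to predict
n, members = 500, 10
truth = rng.normal(15.0, 3.0, n)                                # observed 2-m temperature
ens = truth[:, None] + 1.5 + rng.normal(0, 1.0, (n, members))   # biased ensemble members

ens_mean = ens.mean(axis=1)

# simplest post-processing step: a linear correction of the ensemble
# mean, fitted by ordinary least squares on past (forecast, obs) pairs
a, b = np.polyfit(ens_mean, truth, 1)
corrected = a * ens_mean + b

bias_raw = np.mean(ens_mean - truth)
bias_corr = np.mean(corrected - truth)
print(bias_raw, bias_corr)  # the fitted correction removes the systematic bias
```

A full method would also calibrate the forecast spread (e.g. by predicting distribution parameters), which is where flexible distribution estimators such as the paper's normalizing spline flows come in.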
13. Effects of Post-Processing Parameters on 3D-Printed Dental Appliances: A Review.
- Author
-
Hassanpour, Mana, Narongdej, Poom, Alterman, Nicolas, Moghtadernejad, Sara, and Barjasteh, Ehsan
- Subjects
ACQUISITION of property, RESEARCH personnel, CUSTOMIZATION, BIOCOMPATIBILITY, DENTISTRY, THREE-dimensional printing
- Abstract
In recent years, additive manufacturing (AM) has been recognized as a transformative force in the dental industry, with the ability to address escalating demand, expedite production timelines, and reduce labor-intensive processes. Despite the proliferation of three-dimensional printing technologies in dentistry, the absence of well-established post-processing protocols has posed formidable challenges. This comprehensive review underscores the critical importance of precision in post-processing techniques for ensuring vital properties, encompassing mechanical strength, biocompatibility, dimensional accuracy, durability, stability, and aesthetic refinement in 3D-printed dental devices. Given that digital light processing (DLP) is the predominant 3D printing technology in dentistry, the main post-processing techniques and effects discussed in this review primarily apply to DLP printing. The four sequential stages of post-processing (support removal, washing, secondary polymerization, and surface treatments) are systematically navigated, with each phase requiring meticulous evaluation and parameter determination to attain optimal outcomes. From the careful selection of support removal tools to the consideration of solvent choice, washing methodology, and post-curing parameters, this review provides a comprehensive guide for practitioners and researchers. Additionally, the customization of post-processing approaches to suit the distinct characteristics of different resin materials is highlighted. A comprehensive understanding of post-processing techniques is offered, setting the stage for informed decision-making and guiding future research in dental additive manufacturing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Transformer-Based Joint Learning Approach for Text Normalization in Vietnamese Automatic Speech Recognition Systems.
- Author
-
Bui, Viet The, Luong, Tho Chi, and Tran, Oanh Thi
- Abstract
In this article, we investigate the task of normalizing transcribed texts in Vietnamese Automatic Speech Recognition (ASR) systems in order to improve user readability and the performance of downstream tasks. This task usually consists of two main sub-tasks: predicting and inserting punctuation (i.e., periods, commas), and detecting and standardizing named entities (i.e., numbers, person names) from spoken forms to their appropriate written forms. To achieve these goals, we introduce a complete corpus of 87,700 sentences and investigate conditional joint learning approaches which globally optimize the two sub-tasks simultaneously. The experimental results are quite promising. Overall, the proposed architecture outperformed the conventional architecture, which trains individual models on the two sub-tasks separately. The joint models are further improved when integrated with the surrounding contexts (SCs). Specifically, the best model obtained F1 scores of 81.13% on the first sub-task and 94.41% on the second sub-task. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. A novel methodology for developing dense and porous implants on single generic optimized setting for excellent bio-mechanical characteristics
- Author
-
Mudassar Rehman, Yanen Wang, Kashif Ishfaq, Ray Tahir Mushtaq, and Mohammed Alkahtani
- Subjects
Laser powder bed fusion (L-PBF), Bio-implants, Post-processing, Multi-stage heat treatment, Annealing plus aging, Biomedical Ti alloys, Mining engineering. Metallurgy, TN1-997
- Abstract
This work describes an optimization strategy addressing the critical need for bio-implants with mechanical properties that closely resemble natural bone (cortical and trabecular), aiming to reduce stress-shielding effects and improve implant efficacy. An investigation was conducted on the fracture mechanics, surface integrity, porosity, and cytotoxicity of bio-implants fabricated using Laser Powder Bed Fusion (L-PBF) technology. By varying the laser energy density and applying a post-processing multi-stage heat treatment (Annealing plus Aging), the bio-mechanical performance of dense and porous implants was optimized and tuned. The materials used include biomedical titanium alloys, selected for their superior biocompatibility and mechanical strength. This approach enhanced bone healing, with growth rates of 87% and 87.7%, and a significant increase in compressive strength of approximately 84.62% post-treatment. These improvements are attributed to densification and the elimination of microstructural defects, leading to increased biocompatibility and accelerated osseointegration, essential for the success of orthopedic implants.
- Published
- 2024
- Full Text
- View/download PDF
16. An enhanced deep learning method for the quantification of epicardial adipose tissue
- Author
-
Ke-Xin Tang, Xiao-Bo Liao, Ling-Qing Yuan, Sha-Qi He, Min Wang, Xi-Long Mei, Zhi-Ang Zhou, Qin Fu, Xiao Lin, and Jun Liu
- Subjects
Deep learning, Epicardial adipose tissue, Coronary computed tomography angiography (CCTA), Segmentation, Post-processing, Medicine, Science
- Abstract
Abstract Epicardial adipose tissue (EAT) significantly contributes to the progression of cardiovascular diseases (CVDs). However, manually quantifying EAT volume is labor-intensive and susceptible to human error. Although there have been some deep learning-based methods for automatic quantification of EAT, they are mostly uninterpretable and fail to harness the complete anatomical characteristics. In this study, we proposed an enhanced deep learning method designed for EAT quantification on coronary computed tomography angiography (CCTA) scans, which integrated both a data-driven method and specific morphological information. A total of 108 patients who underwent routine CCTA examinations were included in this study. They were randomly assigned to a training set (n = 60), validation set (n = 8), and test set (n = 40). We quantified and calculated the EAT volume based on the CT attenuation values within the predicted pericardium. The automatic method demonstrated strong agreement with expert manual quantification, yielding a median Dice similarity coefficient (DSC) of 0.916 (Interquartile Range (IQR): 0.846–0.948) for 2D slices. Meanwhile, the median DSC for the 3D volume was 0.896 (IQR: 0.874–0.908) between these two measures, with an excellent correlation of 0.980 (p < 0.001) for EAT volumes.
- Published
- 2024
- Full Text
- View/download PDF
17. An In-vitro Evaluation of Grinding and Polishing on Surface Roughness and Flexural Strength of Monolithic Zirconia
- Author
-
Rishabh Ranjan, Sanjeev Mittal, Prabal Sharma, Bhumika Sharma, Ankita Singh, and Sneha Patel
- Subjects
glaze, mechanical properties, post-processing, Medicine
- Abstract
Introduction: For a dentist, it is a matter of concern to restore the original luster or glaze on a Monolithic Zirconia (MZ) restoration after clinical adjustments. For a long time, the gold standard for surface restoration was reglazing; however, with advancements in technology, new polishing kits optimised for zirconia have become available for chairside polishing. Aim: To examine the effects of grinding, reglazing, and polishing techniques on the surface roughness and flexural strength of MZ specimens. Materials and Methods: This in-vitro study was conducted in the Department of Prosthodontics at MM College of Dental Sciences and Research in Mullana, Haryana, India from April to December 2019. Thirty-two specimens of MZ, each measuring 20 mm × 5 mm × 3 mm, were fabricated and divided into four groups of eight specimens each. Group C was the Control group; specimens in Group G were only Ground, specimens in Group GR were Ground and Reglazed, and specimens in Group GP were Ground and Polished using a zirconia polishing kit. All specimens were then analysed for surface roughness and flexural strength using a profilometer and a Universal Testing Machine (UTM), respectively. Statistical analysis was performed using Analysis of Variance (ANOVA), Tukey's Honestly Significant Difference (HSD) post-hoc test, Pearson's correlation, and other methods using IBM SPSS Statistics version 25.0 (Armonk, USA). Results: The surface roughness (Ra) of the control group (C) was 0.4403 μm, followed by the Polished group (GP) at 0.656 μm and the Reglazed group (GR) at 0.809 μm. The difference between the polished (GP) and reglazed (GR) groups was statistically insignificant (p=0.53). There was a statistically significant increase in flexural strength in the reglazed samples (GR) compared to the polished samples (GP). No significant correlation (p=0.58, r=-0.1) was found between surface roughness and flexural strength.
Conclusion: Chairside polishing can be an effective alternative to reglazing for restoring the surface finish of MZ. Additionally, polishing increases the strength of zirconia after adjustments.
- Published
- 2024
- Full Text
- View/download PDF
18. Historical reconstruction dataset of hourly expected wind generation based on dynamically downscaled atmospheric reanalysis for assessing spatio-temporal impact of on-shore wind in Japan
- Author
-
Yu Fujimoto, Masamichi Ohba, Yujiro Tanno, Daisuke Nohara, Yuki Kanno, Akihisa Kaneko, Yasuhiro Hayashi, Yuki Itoda, and Wataru Wayama
- Subjects
On-shore wind power, numerical weather prediction, machine learning, post-processing, dataset, Geography. Anthropology. Recreation, Geology, QE1-996.5
- Abstract
Wind power is crucial for achieving carbon neutrality, but its output can vary due to local wind conditions. The spatio-temporal behavior of wind power generation connected to the power grid can have a significant impact on system operations. To assess this impact, the use of long-term reanalysis results of wind data based on a numerical weather prediction (NWP) model is considered valid. However, in Japan, the behavior of on-shore wind power generation is influenced by diverse topographical and meteorological features (TMFs) of the installation site, making it challenging to assess possible operational impacts based solely on power curve-based estimates using a popular conversion equation. In this study, a nonparametric machine learning-based post-processing model that learns the statistical relationship between the TMFs at the target location and the actual wind farm (WF) output was developed to represent the expected per-unit output at each location. By focusing on historical reconstruction results and using this post-processing model to reproduce real-world WF output behavior, a set of expected wind power generation profiles was created. The dataset includes hourly long-term (1958–2012) wind power generation profiles expected under the WF installation assumptions at various on-shore locations in Japan with a 5 km spatial resolution and is expected to contribute to an accurate understanding of the impact of spatio-temporal wind power behavior. The dataset is publicly accessible at https://doi.org/10.5281/zenodo.11496867 (Fujimoto et al., 2024).
- Published
- 2024
- Full Text
- View/download PDF
19. Functional post-processing of extrusion-based 3D printed parts: polyaniline (PAni) as a coating for thermoplastics components
- Author
-
Cruzeiro, Arthur de Carvalho, Santana, Leonardo, Manzo Jaime, Danay, Ramoa, Sílvia, Alves, Jorge Lino, and Barra, Guilherme Mariz de Oliveira
- Published
- 2024
- Full Text
- View/download PDF
20. Research landscape and trending topics on 3D food printing – a bibliometric review
- Author
-
Bi, Siwei, Pi, Jinkui, Chen, Haohan, Zhou, Yannan, Liu, Ruiqi, Chen, Yuanyuan, Che, Qianli, Li, Wei, Gu, Jun, and Zhang, Yi
- Published
- 2024
- Full Text
- View/download PDF
21. Application and prospective of sand-type 3D printing material in rock mechanics: a review
- Author
-
Yu, Chen and Tian, Wei
- Published
- 2024
- Full Text
- View/download PDF
22. Reslice3Dto2D: Introduction of a software tool to reformat 3D volumes into reference 2D slices in cardiovascular magnetic resonance imaging
- Author
-
Darian Viezzer, Maximilian Fenski, Thomas Hiroshi Grandy, Johanna Kuhnt, Thomas Hadler, Steffen Lange, and Jeanette Schulz-Menger
- Subjects
3D, 2D, Cardiovascular Magnetic Resonance, CMR, Reference slice position, Post-processing, Medicine, Biology (General), QH301-705.5, Science (General), Q1-390
- Abstract
Abstract Objective Cardiovascular magnetic resonance enables the quantification of functional and morphological parameters with an impact on therapeutic decision making. While quantitative assessment is established in 2D, novel 3D techniques lack a standardized approach. Multi-planar reformatting functionality in available software relies on visually matching the location and often lacks the functionality needed for further post-processing. Therefore, the easy-to-use Reslice3Dto2D software tool was developed as part of another research project to fill this gap and is introduced with this work. Results The Reslice3Dto2D tool reformats 3D data at the exact location of a reference slice with a two-step interpolation that reflects the in-plane discretization and through-plane slice thickness, including a slice profile selection. The tool was successfully validated on an artificial dataset and tested on 119 subjects with different underlying pathologies. The exported reformatted data could be imported into three different post-processing software tools. The image sharpness quantified by the Frequency Domain Image Blur Measure was significantly decreased by around 40% on rectangular slice profiles with 7 mm slice thickness compared to 0 mm due to partial volume effects. Consequently, Reslice3Dto2D enables the quantification of 3D data with conventional post-processing tools as well as the comparison of 3D acquisitions with their established 2D versions.
- Published
- 2024
- Full Text
- View/download PDF
23. Influence of various cleaning solutions on the geometry, roughness, gloss, hardness, and flexural strength of 3D-printed zirconia
- Author
-
HongXin Cai, Min-Yong Lee, Heng Bo Jiang, and Jae-Sung Kwon
- Subjects
Additive manufacturing, Ceramic, Post-processing, Cleaning solution, Medicine, Science
- Abstract
Abstract This study aimed to investigate the impact of various cleaning solutions on the geometry, roughness, gloss, hardness, and flexural strength of 3D-printed zirconia. Cleaning solutions, including isopropyl alcohol (IPA, 99.9%), ethyl alcohol (EtOH, 99.9%), and tripropylene glycol monomethyl ether (TPM, ≥ 97.5%), were diluted to a concentration of 70% and categorized into six groups: IPA99, EtOH99, TPM97, IPA70, EtOH70, and TPM70. Zirconia discs, printed via digital light processing, were sintered after cleaning. The geometry, roughness, gloss, hardness, and flexural strength were analyzed. Statistical analysis was performed using one-way ANOVA with Tukey's post hoc test (p < 0.05). The thickness of TPM70 was the highest. The diameter of TPM70 was significantly larger than that of EtOH99 and IPA70 (p < 0.05). The weight of the TPM groups was significantly higher than that of IPA70 (p < 0.05). The roughness Ra of TPM70 was significantly greater than that of IPA99, EtOH99, and EtOH70 (p < 0.05). The differences in surface gloss, hardness, and flexural strength among the groups were not statistically significant (p > 0.05). Different cleaning solutions did not affect the surface gloss, hardness, and flexural strength of 3D-printed zirconia. High and low concentrations of the same cleaning solution did not affect the surface gloss, hardness, and flexural strength. IPA70, TPM97, and EtOH can be considered viable post-printing cleaning alternatives to the traditional gold standard, IPA99.
- Published
- 2024
- Full Text
- View/download PDF
24. An Improved Postprocessing Method to Mitigate the Macroscopic Cross-Slice B0 Field Effect on R2* Measurements in the Mouse Brain at 7T
- Author
-
Chu-Yu Lee, Daniel R. Thedens, Olivia Lullmann, Emily J. Steinbach, Michelle R. Tamplin, Michael S. Petronek, Isabella M. Grumbach, Bryan G. Allen, Lyndsay A. Harshman, and Vincent A. Magnotta
- Subjects
background gradients, R2*, T2*, post-processing, noise, gradient-echo, Computer applications to medicine. Medical informatics, R858-859.7
- Abstract
The MR transverse relaxation rate, R2*, has been widely used to detect iron and myelin content in tissue. However, it is also sensitive to macroscopic B0 inhomogeneities. One approach to correct for the B0 effect is to fit gradient-echo signals with the three-parameter model, a sinc function-weighted monoexponential decay. However, such three-parameter models are subject to increased noise sensitivity. To address this issue, this study presents a two-stage fitting procedure based on the three-parameter model to mitigate the B0 effect and reduce the noise sensitivity of R2* measurement in the mouse brain at 7T. MRI scans were performed on eight healthy mice. The gradient-echo signals were fitted with the two-stage fitting procedure to generate R2corr_t*. The signals were also fitted with the monoexponential and three-parameter models to generate R2nocorr* and R2corr*, respectively. Regions of interest (ROIs), including the corpus callosum, internal capsule, somatosensory cortex, caudo-putamen, thalamus, and lateral ventricle, were selected to evaluate the within-ROI mean and standard deviation (SD) of the R2* measurements. The results showed that the Akaike information criterion of the monoexponential model was significantly reduced by using the three-parameter model in the selected ROIs (p = 0.0039–0.0078). However, the within-ROI SD of R2corr* using the three-parameter model was significantly higher than that of R2nocorr* in the internal capsule, caudo-putamen, and thalamus regions (p = 0.0039), partially due to the increased noise sensitivity of the three-parameter model. With the two-stage fitting procedure, the within-ROI SD of R2corr_t* was significantly reduced by 7.7–30.2% in all ROIs, except for the somatosensory cortex region with a fast in-plane variation of the B0 gradient field (p = 0.0039–0.0078).
These results support the utilization of the two-stage fitting procedure to mitigate the B0 effect and reduce noise sensitivity for R2* measurement in the mouse brain.
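As a rough illustration of the two-stage idea described above, the sketch below fits synthetic gradient-echo decays with a sinc-weighted monoexponential and then re-fits with the B0 term fixed; the model form, parameter names, and values are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed three-parameter model: S(TE) = S0 * exp(-R2* TE) * |sinc(g * TE)|,
# where g stands in for the macroscopic B0 gradient term.
def three_param_model(te, s0, r2star, g):
    return s0 * np.exp(-r2star * te) * np.abs(np.sinc(g * te))

rng = np.random.default_rng(0)
te = np.linspace(0.002, 0.04, 12)        # echo times (s)
signal = three_param_model(te, 100.0, 40.0, 10.0) + rng.normal(0, 0.5, te.size)

# Stage 1: full three-parameter fit (noise-sensitive)
popt, _ = curve_fit(three_param_model, te, signal, p0=(90.0, 30.0, 5.0))

# Stage 2 (mirrors the two-stage idea): fix the fitted B0 term and re-fit
# only S0 and R2*, reducing the noise sensitivity of the R2* estimate
g_fit = popt[2]
popt2, _ = curve_fit(lambda t, s0, r2: three_param_model(t, s0, r2, g_fit),
                     te, signal, p0=(popt[0], popt[1]))
r2star_corr = popt2[1]
```

Both stages recover R2* near the simulated 40 1/s here; the benefit of the second stage shows up as a smaller spread across noisy voxels.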
- Published
- 2024
- Full Text
- View/download PDF
25. Influence of various cleaning solutions on the geometry, roughness, gloss, hardness, and flexural strength of 3D-printed zirconia.
- Author
-
Cai, HongXin, Lee, Min-Yong, Jiang, Heng Bo, and Kwon, Jae-Sung
- Subjects
- *
FLEXURAL strength, *ETHANOL, *ONE-way analysis of variance, *HARDNESS, *ETHYLENE glycol - Abstract
This study aimed to investigate the impact of various cleaning solutions on the geometry, roughness, gloss, hardness, and flexural strength of 3D-printed zirconia. Cleaning solutions, including isopropyl alcohol (IPA, 99.9%), ethyl alcohol (EtOH, 99.9%), and tripropylene glycol monomethyl ether (TPM, ≥ 97.5%), were diluted to a concentration of 70% and categorized into six groups: IPA99, EtOH99, TPM97, IPA70, EtOH70, and TPM70. Zirconia discs, printed via digital light processing, were sintered after cleaning. The geometry, roughness, gloss, hardness, and flexural strength were analyzed. Statistical analysis was performed using one-way ANOVA with Tukey's post hoc test (p < 0.05). The thickness of TPM70 was the highest. The diameter of TPM70 was significantly larger than that of EtOH99 and IPA70 (p < 0.05). The weight of the TPM groups was significantly higher than that of IPA70 (p < 0.05). The roughness Ra of TPM70 was significantly greater than that of IPA99, EtOH99, and EtOH70 (p < 0.05). The differences in surface gloss, hardness, and flexural strength among the different groups were not statistically significant (p > 0.05). Different cleaning solutions did not affect the surface gloss, hardness, and flexural strength of 3D-printed zirconia. High and low concentrations of the same cleaning solution did not affect the surface gloss, hardness, and flexural strength. IPA70, TPM97, and EtOH can be considered viable post-printing cleaning alternatives to the traditional gold standard, IPA99. [ABSTRACT FROM AUTHOR]
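The one-way ANOVA used in this study can be sketched with SciPy on synthetic roughness data (the group means below are invented; a Tukey post hoc test, e.g. scipy.stats.tukey_hsd in SciPy 1.11+, would then identify which pairs differ):

```python
import numpy as np
from scipy.stats import f_oneway

# Synthetic Ra values for three cleaning-solution groups; the TPM70 mean is
# deliberately shifted so the ANOVA flags a difference.
rng = np.random.default_rng(1)
groups = {
    "IPA99":  rng.normal(1.0, 0.1, 10),
    "EtOH99": rng.normal(1.0, 0.1, 10),
    "TPM70":  rng.normal(1.4, 0.1, 10),
}
f_stat, p_value = f_oneway(*groups.values())
significant = p_value < 0.05   # reject equal group means at the 5% level
```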
- Published
- 2024
- Full Text
- View/download PDF
26. Reslice3Dto2D: Introduction of a software tool to reformat 3D volumes into reference 2D slices in cardiovascular magnetic resonance imaging.
- Author
-
Viezzer, Darian, Fenski, Maximilian, Grandy, Thomas Hiroshi, Kuhnt, Johanna, Hadler, Thomas, Lange, Steffen, and Schulz-Menger, Jeanette
- Subjects
- *
CARDIAC magnetic resonance imaging, *SOFTWARE development tools, *MAGNETIC resonance, *DECISION making, *INTERPOLATION - Abstract
Objective: Cardiovascular magnetic resonance enables the quantification of functional and morphological parameters with an impact on therapeutical decision making. While quantitative assessment is established in 2D, novel 3D techniques lack a standardized approach. Multi-planar-reformatting functionality in available software relies on visually matching the location and often lacks the functionalities necessary for further post-processing. Therefore, the easy-to-use Reslice3Dto2D software tool was developed as part of another research project to fill this gap and is now introduced with this work. Results: The Reslice3Dto2D tool reformats 3D data at the exact location of a reference slice with a two-step interpolation in order to reflect in-plane discretization and through-plane slice thickness, including a slice profile selection. The tool was successfully validated on an artificial dataset and tested on 119 subjects with different underlying pathologies. The exported reformatted data could be imported into three different post-processing software tools. The quantified image sharpness, measured by the Frequency Domain Image Blur Measure, was significantly decreased by around 40% for rectangular slice profiles with 7 mm slice thickness compared to 0 mm due to partial volume effects. Consequently, Reslice3Dto2D enables the quantification of 3D data with conventional post-processing tools as well as the comparison of 3D acquisitions with their established 2D version. [ABSTRACT FROM AUTHOR]
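The two-step reformatting (through-plane interpolation, then slice-profile averaging) can be sketched as follows; the rectangular profile and sub-slice count are illustrative assumptions, not the tool's exact algorithm:

```python
import numpy as np

# Reformat a 3D volume at a reference slice location: sample sub-slices
# across the slice thickness (linear through-plane interpolation), then
# average them to emulate a rectangular slice profile.
def reslice_rect(volume, z_center, thickness_vox, n_sub=5):
    zs = np.linspace(z_center - thickness_vox / 2,
                     z_center + thickness_vox / 2, n_sub)
    zs = np.clip(zs, 0, volume.shape[0] - 1)
    lo = np.floor(zs).astype(int)
    hi = np.minimum(lo + 1, volume.shape[0] - 1)
    w = zs - lo
    sub = (1 - w)[:, None, None] * volume[lo] + w[:, None, None] * volume[hi]
    return sub.mean(axis=0)          # rectangular-profile average

# toy volume whose values increase linearly with slice index
vol = np.arange(4 * 2 * 2, dtype=float).reshape(4, 2, 2)
slice2d = reslice_rect(vol, z_center=1.5, thickness_vox=1.0)
```

A non-rectangular slice profile would replace the plain mean with a weighted average over the sub-slices.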
- Published
- 2024
- Full Text
- View/download PDF
27. Impact of Annealing in Various Atmospheres on Characteristics of Tin-Doped Indium Oxide Layers towards Thermoelectric Applications.
- Author
-
Kaźmierczak-Bałata, Anna, Bodzenta, Jerzy, Szperlich, Piotr, Jesionek, Marcin, Michalewicz, Anna, Domanowska, Alina, Mayandi, Jeyanthinath, Venkatachalapathy, Vishnukanthan, and Kuznetsov, Andrej
- Subjects
- *
ATMOSPHERIC carbon dioxide, *INDIUM tin oxide, *ANNEALING of crystals, *CARBON dioxide, *THERMOELECTRIC materials - Abstract
The aim of this work was to investigate the possibility of modifying the physical properties of indium tin oxide (ITO) layers by annealing them in different atmospheres and temperatures. Samples were annealed in vacuum, air, oxygen, nitrogen, carbon dioxide and a mixture of nitrogen with hydrogen (NHM) at temperatures from 200 °C to 400 °C. Annealing impact on the crystal structure, optical, electrical, thermal and thermoelectric properties was examined. It has been found from XRD measurements that for samples annealed in air, nitrogen and NHM at 400 °C, the In2O3/In4Sn3O12 share ratio decreased, resulting in a significant increase of the In4Sn3O12 phase. The annealing at the highest temperature in air and nitrogen resulted in larger grains and an increase in the mean grain size, while vacuum, NHM and carbon dioxide atmospheres caused a decrease in the mean grain size. The post-processing in vacuum and oxidizing atmospheres resulted in a drop in optical bandgap and poor electrical properties. Carbon dioxide seems to be an optimal atmosphere to obtain good TE generator parameters—high ZT. The general conclusion is that annealing in different atmospheres allows for controlled changes in the structure and physical properties of ITO layers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Versatile Video Coding-Post Processing Feature Fusion: A Post-Processing Convolutional Neural Network with Progressive Feature Fusion for Efficient Video Enhancement.
- Author
-
Das, Tanni, Liang, Xilong, and Choi, Kiho
- Subjects
CONVOLUTIONAL neural networks, STREAMING video & television, VIDEO codecs, INTERNET content, DEEP learning, VIDEO coding - Abstract
Advanced video codecs such as High Efficiency Video Coding/H.265 (HEVC) and Versatile Video Coding/H.266 (VVC) are vital for streaming high-quality online video content, as they compress and transmit data efficiently. However, these codecs can occasionally degrade video quality by adding undesirable artifacts such as blockiness, blurriness, and ringing, which can detract from the viewer's experience. To ensure a seamless and engaging video experience, it is essential to remove these artifacts, which improves viewer comfort and engagement. In this paper, we propose a deep feature fusion based convolutional neural network (CNN) architecture (VVC-PPFF) as a post-processing approach to further enhance the performance of VVC. The proposed network, VVC-PPFF, harnesses the power of CNNs to enhance decoded frames, significantly improving the coding efficiency of the state-of-the-art VVC video coding standard. By combining deep features from early and later convolution layers, the network learns to extract both low-level and high-level features, resulting in more generalized outputs that adapt to different quantization parameter (QP) values. The proposed VVC-PPFF network achieves outstanding performance, with Bjøntegaard Delta Rate (BD-Rate) improvements of 5.81% and 6.98% for luma components in random access (RA) and low-delay (LD) configurations, respectively, while also boosting peak signal-to-noise ratio (PSNR). [ABSTRACT FROM AUTHOR]
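PSNR, the quality metric reported above, is straightforward to compute for a decoded frame against its reference; the toy 8-bit frames below are stand-ins, no codec is involved:

```python
import numpy as np

# Peak signal-to-noise ratio between a reference frame and a test frame.
def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (64, 64)).astype(np.uint8)
# "decoded" frame with heavy distortion vs. a post-processed frame with less
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
enhanced = np.clip(ref + rng.normal(0, 2, ref.shape), 0, 255).astype(np.uint8)
```

A post-processing network is judged by exactly this kind of gap: `psnr(ref, enhanced)` exceeding `psnr(ref, noisy)`.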
- Published
- 2024
- Full Text
- View/download PDF
29. Acrylonitrile Butadiene Styrene-ZrO2 Composites for Roller Burnishing as Post-processing of 3D Printed Parts: Machine Learning Modeling Using Classification and Regression Trees.
- Author
-
Badogu, Ketan, Thakur, Vishal, Kumar, Raman, Kumar, Ranvijay, and Singh, Sunpreet
- Subjects
MACHINE learning, RAPID prototyping, RAPID tooling, SURFACE roughness, METALLURGICAL analysis, ACRYLONITRILE butadiene styrene resins - Abstract
Parts manufactured by additive manufacturing (AM) are criticized due to their high surface roughness. Roller burnishing is one of the post-processing processes that may be applied to reduce the roughness and asperities on surfaces of 3D printed parts. In this study, innovative zirconium oxide (ZrO2)-reinforced acrylonitrile butadiene styrene (ABS) based composites were developed in filament form for 3D printing of roller burnishing rapid tools. The ABS-ZrO2 filaments were extruded at barrel temperatures of 230-240 °C and screw speeds of 4-6 RPM under heat treatment as per Taguchi L9 based experimentation. Regarding the mechanical properties of the composite filament material, the combination of preheat treatment, 235 °C barrel temperature, and 6 RPM screw speed was found to be the best setting (Young's modulus: 886.00 MPa). The filament preparation was supported by x-ray diffraction (XRD), Fourier transform infrared (FTIR), and microstructural analysis using metallurgical image analysis software (MIAS). A machine learning (ML) approach based on classification and regression trees (CART) was utilized to predict the tensile peak and break strength. Finally, the 3D printed roller burnishing tool significantly reduced the surface roughness after burnishing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
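The core step of the CART approach used in the study above is a split that minimizes the children's squared error; the sketch below performs one such split on toy extrusion data (all values invented):

```python
import numpy as np

# One split of a regression tree: choose the threshold minimizing the summed
# squared error of the two child means (the greedy CART criterion).
def best_split(x, y):
    best = (None, np.inf)
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        sse = (((left - left.mean()) ** 2).sum()
               + ((right - right.mean()) ** 2).sum())
        if sse < best[1]:
            best = (t, sse)
    return best

# toy data: barrel temperature vs. tensile strength, with a jump above 234 °C
x = np.array([230, 232, 234, 236, 238, 240], dtype=float)
y = np.array([20.0, 21.0, 20.5, 30.0, 31.0, 30.5])
threshold, sse = best_split(x, y)
```

A full CART model recurses this split on each child until a stopping rule is met.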
30. Potential of Dual-Energy CT-Based Collagen Maps for the Assessment of Disk Degeneration in the Lumbar Spine.
- Author
-
Mahmoudi, Scherwin, Gruenewald, Leon D., Koch, Vitali, Bernatz, Simon, Martin, Simon S., Engelskirchen, Lara, Radic, Ivana, Bucolo, Giuseppe, D'Angelo, Tommaso, Gotta, Jennifer, Mader, Christoph, dos Santos, Daniel Pinto, Scholtz, Jan-Erik, Gruber-Rouh, Tatjana, Eichler, Katrin, Vogl, Thomas J., Booz, Christian, and Yel, Ibrahim
- Abstract
Lumbar disk degeneration is a common condition contributing significantly to back pain. The objective of the study was to evaluate the potential of dual-energy CT (DECT)-derived collagen maps for the assessment of lumbar disk degeneration. We conducted a retrospective analysis of 127 patients who underwent dual-source DECT and MRI of the lumbar spine between 07/2019 and 10/2022. The level of lumbar disk degeneration was categorized by three radiologists as follows: no/mild (Pfirrmann 1&2), moderate (Pfirrmann 3&4), and severe (Pfirrmann 5). Recall (sensitivity) and accuracy of DECT collagen maps were calculated. Intraclass correlation coefficient (ICC) was used to evaluate inter-reader reliability. Subjective evaluations were performed using 5-point Likert scales for diagnostic confidence and image quality. We evaluated a total of 762 intervertebral disks from 127 patients (median age, 69.7 (range, 23.0–93.7), female, 56). MRI identified 230 non/mildly degenerated disks (30.2%), 484 moderately degenerated disks (63.5%), and 48 severely degenerated disks (6.3%). DECT collagen maps yielded an overall accuracy of 85.5% (1955/2286). Recall (sensitivity) was 79.3% (547/690) for the detection of no/mild lumbar disk degeneration, 88.7% (1288/1452) for the detection of moderate disk degeneration, and 83.3% (120/144) for the detection of severe disk degeneration (ICC = 0.9). Subjective evaluations of DECT collagen maps showed high diagnostic confidence (median 4) and good image quality (median 4). The use of DECT collagen maps to distinguish different stages of lumbar disk degeneration may have clinical significance in the early diagnosis of disk-related pathologies in patients with contraindications for MRI or in cases of unavailability of MRI. [ABSTRACT FROM AUTHOR]
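The reported recall and accuracy values follow directly from the stated counts (three readers rating 762 disks gives 2286 ratings):

```python
# Counts taken from the abstract: correctly classified / total per grade.
correct = {"no_mild": 547, "moderate": 1288, "severe": 120}
total   = {"no_mild": 690, "moderate": 1452, "severe": 144}

recall = {grade: correct[grade] / total[grade] for grade in correct}
accuracy = sum(correct.values()) / sum(total.values())   # 1955 / 2286
```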
- Published
- 2024
- Full Text
- View/download PDF
31. 3D-Printed Meat Paste Using Minimal Additive: Assessment of Rheological and Printing Behavior with Post-Processing Stability.
- Author
-
Yatmaz, Hanife Aydan
- Abstract
Printing foods in the desired shape with minimal additives and their stability after printing are the most important points for 3D food technology. In this study, the effects of water (5%, 10%, 15%, and 20%) and salt (0.5%, 1%, 1.5%, and 2%) on the printability of meat paste were evaluated to achieve improved textural and rheological properties. The printing parameters were examined at every stage, starting from the line thickness of the printed product, until the final 3D printed product was obtained. Accordingly, meat printability was determined using different ingredient flow speeds (3, 3.5, 4, 4.5, and 5), fill factors (1.2%, 1.3%, 1.4%, 1.5%, and 1.6%), and distances between layers (1.2, 1.4, and 1.6 mm). Salt addition increased the firmness and consistency of the samples, while the viscosity, storage modulus, and loss modulus decreased with the addition of water. Considering the line thickness and outer length, the most appropriate shape was obtained with 10% water and 1.5% salt. The optimal ingredient flow speed, fill factor, and distance between layers at a constant printing speed (2500 mm/min) were 3, 1.2%, and 1.4 mm, respectively. Four-layer-infilled 3D-printed samples maintained their initial shape after cooking, regardless of the cooking method. However, among full-infilled samples, only baked products maintained their initial shapes. Although water and salt have different functions in meat, the appropriate ratio is necessary for 3D-printed meat-based products to ensure printability and post-production stability. To sum up, the optimum parameters and a road map for printing meat and meat products, including leftover meats and low-value by-products, were revealed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Influence of post-processing treatment on the surface roughness of polyamide PA12 samples manufactured using additive methods in the context of the production of orthoses.
- Author
-
Turek, Paweł, Bazan, Anna, and Zakrecki, Andrzej
- Abstract
Additive techniques are gaining popularity, primarily due to the emergence of new 3D printing methods, advancements in 3D printers, and the availability of innovative materials. Models produced using additive processes can undergo additional post-processing and dyeing to modify their functional and visual properties. This article presents the results of surface roughness tests conducted on samples made of polyamide PA12, using the Selective Laser Sintering (SLS) and HP MultiJet Fusion (MJF) methods. Regarding the processing methods, chemical surface treatment contributed to reducing Ra and Rz parameters by about 80% for both analyzed printing methods, while mechanical surface treatment resulted in a reduction of approximately 40% for SLS samples and 30% for MJF samples. On the other hand, dyeing and applying an antibacterial coating did not significantly affect the Ra and Rz parameter values. Considering the obtained results, the recommended manufacturing method for orthosis is the MJF method, and the finishing process should include mechanical treatment followed by dyeing. [ABSTRACT FROM AUTHOR]
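The Ra and Rz parameters discussed above can be computed from a measured surface profile; the sketch below uses synthetic profiles, with the "treated" profile simply given a lower amplitude (Rz is simplified here to peak-to-valley of the levelled profile):

```python
import numpy as np

# Ra: arithmetic mean deviation from the mean line.
# Rz (simplified): peak-to-valley height of the levelled profile.
def ra_rz(profile):
    z = profile - profile.mean()      # remove the mean line
    return np.abs(z).mean(), z.max() - z.min()

x = np.linspace(0, 4 * np.pi, 400)
rough = np.sin(x) + 0.3 * np.sin(7 * x)   # as-printed profile (illustrative)
smooth = 0.2 * np.sin(x)                  # profile after hypothetical treatment
ra_rough, rz_rough = ra_rz(rough)
ra_smooth, rz_smooth = ra_rz(smooth)
```

The ~80% Ra reduction reported for chemical treatment corresponds to `ra_smooth / ra_rough` of roughly 0.2 on real profilometer data.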
- Published
- 2024
- Full Text
- View/download PDF
33. Wettability Behaviour of Metal Surfaces after Sequential Nanosecond and Picosecond Laser Texturing.
- Author
-
Tang, Yin, Fang, Zheng, Fei, Yang, Wang, Shuai, Perrie, Walter, Edwardson, Stuart, and Dearden, Geoff
- Subjects
OPTICAL interferometers, HYDROPHOBIC surfaces, SURFACE texture, CONTACT angle, METALLIC surfaces - Abstract
This study examines the wettability behaviour of 304 stainless steel (304SS) and Ti-6Al-4V (Ti64) surfaces after sequential nanosecond (ns) and picosecond (ps) laser texturing; in particular, how the multi-scale surface structures created influence the lifecycle of surface hydrophobicity. The effect of different post-process treatments is also examined. Surfaces were analysed using Scanning Electron Microscopy (SEM), a white light interferometer optical profiler, and Energy Dispersive X-ray (EDX) spectroscopy. Wettability was assessed through sessile drop contact angle (CA) measurements, conducted at regular intervals over periods of up to 12 months, while EDX scans monitored elemental chemical changes. The results show that sequential (ns + ps) laser processing produced multi-scale surface texture with laser-induced periodic surface structures (LIPSS). Compared to the ns laser case, the (ns + ps) laser processed surfaces transitioned more rapidly to a hydrophobic state and maintained this property for much longer, especially when the single post-process treatment was ultrasonic cleaning. Some interesting features in CA development over these extended timescales are revealed. For 304SS, hydrophobicity was reached in 1–2 days, with the CA then remaining in the range of 120 to 140° for up to 180 days; whereas the ns laser-processed surfaces took longer to reach hydrophobicity and only maintained the condition for up to 30 days. Similar results were found for the case of Ti64. The findings show that such multi-scale structured metal surfaces can offer relatively stable hydrophobic properties, the lifetime of which can be extended significantly through the appropriate selection of laser process parameters and post-process treatment. The addition of LIPSS appears to help extend the longevity of the hydrophobic property. 
In seeking to identify other factors influencing wettability, from our EDX results, we observed a significant and steady rate of increase in the carbon content at the surface over the study period. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Improvement of Fatigue Strength in Additively Manufactured Aluminum Alloy AlSi10Mg via Submerged Laser Peening.
- Author
-
Soyama, Hitoshi
- Subjects
METAL fatigue, FATIGUE limit, LASER peening, PULSED lasers, YAG lasers - Abstract
As the fatigue properties of as-built components of additively manufactured (AM) metals are considerably weaker than those of wrought metals because of their rougher surface, post-processing is necessary to improve the fatigue properties. To demonstrate the improvement in the fatigue properties of AM metals via post-processing methods, the fabrication of AlSi10Mg, i.e., PBF-LS/AlSi10Mg, through powder bed fusion (PBF) using laser sintering (LS) and its treatment via submerged laser peening (SLP), using a fiber laser and/or a Nd:YAG laser, was evaluated via plane bending fatigue tests. In SLP, laser ablation (LA) is generated by a pulsed laser and a bubble is generated after LA, which behaves like a cavitation bubble that is referred to as "laser cavitation (LC)". In this paper, LA-dominated SLP is referred to as "laser treatment (LT)", while LC collapse-dominated SLP is referred to as "laser cavitation peening (LCP)", as the impact of LC collapse is used for peening. It was revealed that SLP using a fiber laser corresponded with LT rather than LCP. It was demonstrated that the fatigue strength at N = 10^7 was 85 MPa for LCP and 103 MPa for the combined process of blasting (B) + LT + LCP, whereas the fatigue strength of the as-built specimen was 54 MPa. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Post-processing for Bayesian analysis of reduced rank regression models with orthonormality restrictions.
- Author
-
Aßmann, Christian, Boysen-Hogrefe, Jens, and Pape, Markus
- Abstract
Orthonormality constraints are common in reduced rank models. They imply that matrix-variate parameters are given as orthonormal column vectors. However, these orthonormality restrictions do not provide identification for all parameters. For this setup, we show how the remaining identification issue can be handled in a Bayesian analysis via post-processing the sampling output according to an appropriately specified loss function. This extends the possibilities for Bayesian inference in reduced rank regression models with a part of the parameter space restricted to the Stiefel manifold. Besides inference, we also discuss model selection in terms of posterior predictive assessment. We illustrate the proposed approach with a simulation study and an empirical application. [ABSTRACT FROM AUTHOR]
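One common way to post-process the rotational non-identification of orthonormal matrix draws is to rotate each posterior draw onto a reference draw via an orthogonal Procrustes problem; this is a generic illustration of the idea, not necessarily the loss function the authors specify:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

# Simulate the identification problem: a "posterior draw" equal to the
# reference orthonormal matrix up to an arbitrary orthogonal rotation.
rng = np.random.default_rng(3)
ref, _ = np.linalg.qr(rng.normal(size=(6, 2)))   # reference draw (Stiefel point)
q, _ = np.linalg.qr(rng.normal(size=(2, 2)))     # arbitrary orthogonal rotation
draw = ref @ q                                   # rotated, observationally equivalent

# Post-processing: find the orthogonal R minimizing ||draw @ R - ref||_F
r, _ = orthogonal_procrustes(draw, ref)
aligned = draw @ r
```

Applied to every draw in the sampling output, this makes the matrix-variate parameter comparable across iterations.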
- Published
- 2024
- Full Text
- View/download PDF
36. Post-processing and improved error estimates of numerical methods for evolutionary systems.
- Author
-
Franz, Sebastian
- Abstract
We consider evolutionary systems, i.e. systems of linear partial differential equations arising from mathematical physics. For these systems, there exists a general solution theory in exponentially weighted spaces, which can be exploited in the analysis of numerical methods. The numerical method considered in this paper is a discontinuous Galerkin method in time combined with a conforming Galerkin method in space. Building on our recent paper (Franz, S., Trostorff, S. & Waurick, M. (2019) Numerical methods for changing type systems. IMAJNA, 39, 1009–1038), we improve some of the results, study the dependence of the numerical solution on the weight parameter and consider a reformulation and post-processing of its numerical solution. As a by-product, we provide error estimates for the dG-C0 method. Numerical simulations support the theoretical findings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Evaluation of precipitation temporal distribution pattern of post-processed sub-daily ECMWF forecasts.
- Author
-
Hoghoughinia, Kousha, Saghafian, Bahram, and Aminyavari, Saleh
- Subjects
- *
PRECIPITATION forecasting, *RANDOM forest algorithms, *FLOOD forecasting, *SUPPORT vector machines, *FLOOD control - Abstract
Accurate forecasting of the temporal distribution pattern of sub-daily precipitation is of paramount importance for effective flood control design and early warning systems. This study focuses on improving the accuracy of such forecasts by employing post-processing techniques. The European Centre for Medium-Range Weather Forecasts (ECMWF) precipitation product over Iran was adopted along with three post-processing methods: Quantile Mapping (QM), Support Vector Machine (SVM), and Random Forest (RF). The accuracy of the forecasts for various precipitation temporal characteristics, including the start, duration, and end of precipitation events, was evaluated. The RF method proved to be the most effective in improving forecast accuracy, especially in regions with higher precipitation rates. Additionally, RF corrected the first quartile of precipitation forecasts across all precipitation regions, significantly enhancing forecast accuracy in regions 3 and 5 of Iran. As for the temporal distribution pattern, post-processing methods improved the accuracy of the forecasts across all regions. The QM method performed better in terms of distributing precipitation amounts among quartiles. Moreover, all post-processing methods showed a high degree of similarity between observed and forecasted temporal distribution patterns. The deterministic evaluation showed that RF outperforms other methods in enhancing the accuracy of most precipitation quartiles, particularly that of the third quartile. The SVM and QM methods showed mixed performances, improving accuracy in some quartiles but degrading it in others. Overall, this research highlighted the importance of data post-processing in enhancing the accuracy of precipitation forecasts and their temporal distribution patterns. The RF method proved to be the most effective post-processing technique. 
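Empirical quantile mapping, the QM method above, can be sketched in a few lines: map each forecast to its empirical quantile in the training forecasts, then read off the observed value at that quantile (applied in-sample here for brevity; operationally it is applied to new forecasts):

```python
import numpy as np

# Empirical quantile mapping: forecast value -> empirical quantile in the
# training forecasts -> observed value at the same quantile.
def quantile_map(forecast, obs_train, fcst_train):
    q = np.interp(forecast, np.sort(fcst_train),
                  np.linspace(0, 1, fcst_train.size))
    return np.interp(q, np.linspace(0, 1, obs_train.size),
                     np.sort(obs_train))

rng = np.random.default_rng(4)
fcst_train = rng.gamma(2.0, 2.0, 1000)   # biased (too dry) forecasts
obs_train = rng.gamma(2.0, 3.0, 1000)    # observations on a wetter scale
corrected = quantile_map(fcst_train, obs_train, fcst_train)
```

After mapping, the corrected forecasts share the climatological distribution of the observations, which is exactly the bias QM is designed to remove.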
These findings have significant implications for flood forecasting and management in regions prone to extreme precipitation events. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. The Effect of Pre-Treatment and the Drying Method on the Nutritional and Bioactive Composition of Sea Cucumbers—A Review.
- Author
-
Das, Amit, Hossain, Abul, and Dave, Deepika
- Subjects
SEA cucumbers, MICROBIAL enzymes, NUTRITIONAL value, HIGH temperatures, BIOACTIVE compounds - Abstract
Sea cucumbers are well demarcated for their valuable role in the food, pharmaceutical, nutraceutical, and cosmeceutical sectors. The demand for well-processed dried sea cucumber retaining quality is prioritized by local markets and industries. There are several techniques for the pre-processing of fresh sea cucumbers, including traditional and modern methods, such as salting, boiling, high-pressure processing, high-pressure steaming, and vacuum cooking, among others, in order to inactivate enzymes and microbial attacks. Further, pre-treated sea cucumbers require post-processing before human consumption, transportation, or industry uses such as hot air, freeze, cabinet, sun, or smoke drying. However, despite the ease, traditional processing is associated with several challenges hampering the quality of processed products. For instance, due to high temperatures in boiling and drying, there is a higher chance of disrupting valuable nutrients, resulting in low-quality products. Therefore, the integration of traditional and modern methods is a crucial approach to optimizing sea cucumber processing to obtain valuable products with high nutritional values and retain bioactive compounds. The value of dried sea cucumbers relies not only on species and nutritional value but also on the processing methods in terms of retaining sensory attributes, including colour, appearance, texture, taste, and odour. Therefore, this review, for the first time, provides insight into different pre- and post-treatments, their perspective, challenges, and how these methods can be optimized for industry use to obtain better-quality products and achieve economic gains from sea cucumber. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Machine Learning-Based Temperature and Wind Forecasts in the Zhangjiakou Competition Zone during the Beijing 2022 Winter Olympic Games.
- Author
-
Sun, Zhuo, Li, Jiangbo, Guo, Ruiqiang, Zhang, Yiran, Zhu, Gang, and Yang, Xiaoliang
- Abstract
Weather forecasting for the Zhangjiakou competition zone of the Beijing 2022 Winter Olympic Games is a challenging task due to its complex terrain. Numerical weather prediction models generally perform poorly for cold air pools and winds over complex terrains, due to their low spatiotemporal resolution and limitations in the description of dynamics, thermodynamics, and microphysics in mountainous areas. This study proposes an ensemble-learning model, named ENSL, for surface temperature and wind forecasts at the venues of the Zhangjiakou competition zone, by integrating five individual models—linear regression, random forest, gradient boosting decision tree, support vector machine, and artificial neural network (ANN)—with ridge regression as the meta-model. The ENSL employs predictors from the high-resolution ECMWF model forecast (ECMWF-HRES) data and topography data, and targets from automatic weather station observations. Four categories of predictors (synoptic-pattern related fields, surface element fields, terrain, and temporal features) are fed into ENSL. The results demonstrate that ENSL achieves better performance and generalization than the individual models. The root-mean-square error (RMSE) for the temperature and wind speed predictions is reduced by 48.2% and 28.5%, respectively, relative to ECMWF-HRES. For the gust speed, the performance of ENSL is consistent with ANN (the best individual model) on the whole dataset, whereas ENSL outperforms it on extreme gust samples (42.7% compared with 38.7% RMSE reduction relative to ECMWF-HRES). Sensitivity analysis of predictors in the four categories shows that ENSL reflects their feature importance rankings and physical explanations effectively. [ABSTRACT FROM AUTHOR]
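The stacking structure of ENSL, base-model predictions fed to a ridge meta-model, can be sketched with two toy base models standing in for the paper's five learners (data and models are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=(200, 3))                 # toy NWP-derived predictors
y = x @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 200)  # station target

# Base model 1: ordinary least squares; base model 2: a weak feature average.
beta, *_ = np.linalg.lstsq(x, y, rcond=None)
p1 = x @ beta
p2 = x.mean(axis=1)

# Ridge meta-model (closed form) on the stacked base predictions.
z = np.column_stack([p1, p2])
lam = 1e-2
w = np.linalg.solve(z.T @ z + lam * np.eye(2), z.T @ y)
stacked = z @ w

rmse_stacked = np.sqrt(np.mean((stacked - y) ** 2))
rmse_base2 = np.sqrt(np.mean((p2 - y) ** 2))
```

In practice the base predictions fed to the meta-model come from out-of-fold forecasts to avoid leakage; that cross-validation layer is omitted here for brevity.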
- Published
- 2024
- Full Text
- View/download PDF
40. RIRNet: A Direction-Guided Post-Processing Network for Road Information Reasoning.
- Author
-
Zhou, Guoyuan, He, Changxian, Wang, Hao, Xie, Qiuchang, Chen, Qiong, Hong, Liang, and Chen, Jie
- Subjects
- *
ROAD maintenance, *CONVOLUTIONAL neural networks, *DEEP learning, *IMAGE analysis, *REMOTE sensing - Abstract
Road extraction from high-resolution remote sensing images (HRSIs) is one of the tasks in image analysis. Deep convolutional neural networks have become the primary method for road extraction due to their powerful feature representation capability. However, roads are often obscured by vegetation, buildings, and shadows in HRSIs, resulting in incomplete and discontinuous road extraction results. To address this issue, we propose a lightweight post-processing network called RIRNet in this study, which includes an information inference module and a road direction inference task branch. The information inference module can infer spatial information relationships between different rows or columns of feature images from different directions, effectively inferring and repairing road fractures. The road direction inference task branch performs the road direction prediction task, which can constrain and promote the road extraction task, thereby indirectly enhancing the inference ability of the post-processing model and realizing the optimization of the initial road extraction results. Experimental results demonstrate that the RIRNet model can achieve an excellent post-processing effect, which is manifested in the effective repair of broken road segments, as well as the handling of errors such as omission, misclassification, and noise, proving the effectiveness and generalization of the model in post-processing optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Towards a Customizable, SLA 3D-Printed Biliary Stent: Optimizing a Commercially Available Resin and Predicting Stent Behavior with Accurate In Silico Testing.
- Author
-
Cordista, Victoria, Patel, Sagar, Lawson, Rebecca, Lee, Gunhee, Verheyen, Morgan, Westbrook, Ainsley, Shelton, Nathan, Sapkota, Prakriti, Zabala Valencia, Isabella, Gaddam, Cynthia, and Thomas, Joanna
- Subjects
- *
BILE ducts, *BEND testing, *YIELD stress, *THREE-dimensional printing, *MEDICAL equipment - Abstract
Inflammation of the bile ducts and surrounding tissues can impede bile flow from the liver into the intestines. If this occurs, a plastic or self-expanding metal (SEM) stent is placed to restore bile drainage. United States (US) Food and Drug Administration (FDA)-approved plastic biliary stents are less expensive than SEMs but have limited patency and can occlude bile flow if placed spanning a duct juncture. Recently, we investigated the effects of variations to post-processing and autoclaving on a commercially available stereolithography (SLA) resin in an effort to produce a suitable material for use in a biliary stent, an FDA Class II medical device. We tested six variations from the manufacturer's recommended post-processing and found that tripling the isopropanol (IPA) wash time to 60 min and reducing the time and temperature of the UV cure to 10 min at 40 °C, followed by a 30 min gravity autoclave cycle, yielded a polymer that was flexible and non-cytotoxic. In turn, we designed and fabricated customizable, SLA 3D-printed polymeric biliary stents that permit bile flow at a duct juncture and can be deployed via catheter. Next, we generated an in silico stent 3-point bend test to predict displacements and peak stresses in the stent designs. We confirmed our simulation accuracy with experimental data from 3-point bend tests on SLA 3D-printed stents. Unfortunately, our 3-point bend test simulation indicates that, when bent to the degree needed for placement via catheter (~30°), the peak stress the stents are predicted to experience would exceed the yield stress of the polymer. Thus, the risk of permanent deformation or damage during placement via catheter to a stent printed and post-processed as we have described would be significant. Moving forward, we will test alternative resins and post-processing parameters that have increased elasticity but would still be compatible with use in a Class II medical device. [ABSTRACT FROM AUTHOR]
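For a rectangular specimen, the peak stress in a 3-point bend test follows the standard beam formula sigma_max = 3FL/(2bd^2); the geometry and load below are illustrative, not the stent's actual dimensions:

```python
# Peak bending stress for a rectangular beam in 3-point bending.
# With force in N and lengths in mm, the result is in MPa.
def three_point_bend_peak_stress(force_n, span_mm, width_mm, depth_mm):
    return 3 * force_n * span_mm / (2 * width_mm * depth_mm ** 2)

sigma = three_point_bend_peak_stress(force_n=10.0, span_mm=40.0,
                                     width_mm=5.0, depth_mm=2.0)
# If sigma exceeds the polymer's yield stress, permanent deformation is
# expected, which is the failure mode the simulation above predicts.
```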
- Published
- 2024
- Full Text
- View/download PDF
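The record above compares the simulated peak bending stress of a stent against the polymer's yield stress. As a hedged illustration only (the study's own geometry and loads are not given here), the textbook peak-stress estimate for a 3-point bend test of a rectangular beam can be sketched as follows; all numbers are illustrative, and a real stent is a tube, not a rectangular beam.

```python
# Hypothetical sketch: peak flexural stress at mid-span in a 3-point bend
# test of a rectangular beam, sigma = 3*F*L / (2*b*d^2).
# All values below are illustrative, not from the study.

def peak_flexural_stress(force_n: float, span_mm: float,
                         width_mm: float, depth_mm: float) -> float:
    """Peak stress (MPa) for a rectangular cross-section under 3-point bending."""
    return 3.0 * force_n * span_mm / (2.0 * width_mm * depth_mm ** 2)

# Compare against an assumed yield stress to flag risk of permanent deformation.
sigma = peak_flexural_stress(force_n=5.0, span_mm=40.0, width_mm=2.0, depth_mm=1.5)
yield_stress_mpa = 60.0  # assumed resin property, illustrative only
exceeds_yield = sigma > yield_stress_mpa
```

This mirrors the abstract's pass/fail logic: if the predicted peak stress exceeds the material's yield stress, placement via catheter risks permanent deformation.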
42. Robust nuclei segmentation with encoder‐decoder network from the histopathological images.
- Author
-
Gour, Mahesh, Jain, Sweta, and Kumar, T. Sunil
- Subjects
- *
IMAGE segmentation, *EARLY detection of cancer, *CANCER prognosis, *HISTOPATHOLOGY, *MORPHOLOGY - Abstract
Nuclei segmentation is a prerequisite and an essential step in cancer detection and prognosis. Automatic nuclei segmentation from histopathological images is challenging due to nuclear overlap, disease types, chromatic stain variability, and cytoplasmic morphology differences. Furthermore, it is demanding to develop a single accurate method for segmenting nuclei of different organs because of the diversity in nuclei size, shape, and appearance across the various organs. To address these challenges, we developed a robust Encoder‐Decoder network for nuclei segmentation from multi‐organ histopathological images. In this approach, we utilize a pre‐trained EfficientNet‐B4 as an Encoder subnetwork and design a new Decoder subnetwork architecture. Additionally, we have applied morphological operation‐based post‐processing to improve the segmentation results. The performance of our approach has been evaluated on three public datasets, namely, the Kumar, TNBC, and CPM‐17 datasets, which contain histopathological images of seven organs, one organ, and four organs, respectively. The proposed method achieved an aggregated Jaccard index of 0.636, 0.611, and 0.706 on the Kumar, TNBC, and CPM‐17 datasets, respectively. Our approach also outperforms existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
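The abstract above mentions morphological operation-based post-processing of the segmentation maps. As a hedged sketch of one common such step (not the paper's exact pipeline), small connected components can be removed from a binary nuclei mask; a real pipeline would use a library such as scikit-image, but a pure-Python flood fill shows the idea. The function name and size threshold are illustrative.

```python
# Minimal sketch of one common morphological post-processing step:
# removing 4-connected components below a size threshold from a binary mask.
from collections import deque

def remove_small_objects(mask, min_size):
    """Return a copy of `mask` with components smaller than min_size zeroed."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    seen = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if out[i][j] and not seen[i][j]:
                comp, q = [], deque([(i, j)])
                seen[i][j] = True
                while q:                       # BFS over one component
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and out[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) < min_size:       # too small: treat as noise
                    for y, x in comp:
                        out[y][x] = 0
    return out

mask = [[1, 1, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
clean = remove_small_objects(mask, min_size=2)  # drops the lone pixel at (1, 3)
```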
43. Research on Three-Dimensional Reconstruction of Ribs Based on Point Cloud Adaptive Smoothing Denoising.
- Author
-
Zhu, Darong, Wang, Diao, Chen, Yuanjiao, Xu, Zhe, and He, Bishi
- Subjects
- *
POINT cloud, *IMAGE denoising, *DEEP learning, *IMAGE processing, *COMPUTED tomography - Abstract
The traditional methods for 3D reconstruction mainly involve using image processing techniques or deep learning segmentation models for rib extraction. After post-processing, voxel-based rib reconstruction is achieved. However, these methods suffer from limited reconstruction accuracy and low computational efficiency. To overcome these limitations, this paper proposes a 3D rib reconstruction method based on point cloud adaptive smoothing and denoising. We converted voxel data from CT images to multi-attribute point cloud data. Then, we applied point cloud adaptive smoothing and denoising methods to eliminate noise and non-rib points in the point cloud. Additionally, efficient 3D reconstruction and post-processing techniques were employed to achieve high-accuracy and comprehensive 3D rib reconstruction results. Experiments demonstrated that, compared to voxel-based 3D rib reconstruction methods, the 3D rib models generated by the proposed method achieved a 40% improvement in reconstruction accuracy and were generated twice as efficiently. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
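The abstract's adaptive smoothing and denoising method is not specified in detail here; as a hedged stand-in, the sketch below shows statistical outlier removal, a standard point-cloud denoising step: drop points whose mean distance to their k nearest neighbours is more than a set number of standard deviations above the cloud-wide average. The data and parameters are illustrative, and the O(n²) neighbour search is for clarity only.

```python
# Hedged sketch of statistical outlier removal for point-cloud denoising
# (a standard technique, not the paper's exact adaptive method).
import math

def remove_outliers(points, k=2, std_ratio=1.0):
    """Keep points whose mean k-NN distance is within mu + std_ratio * sigma."""
    mean_knn = []
    for p in points:
        d = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(d[:k]) / k)
    mu = sum(mean_knn) / len(mean_knn)
    sigma = math.sqrt(sum((d - mu) ** 2 for d in mean_knn) / len(mean_knn))
    return [p for p, d in zip(points, mean_knn) if d <= mu + std_ratio * sigma]

cloud = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0), (10, 10, 10)]
kept = remove_outliers(cloud)  # the isolated point far from the cluster is dropped
```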
44. Non-crossing Quantile Regression Neural Network as a Calibration Tool for Ensemble Weather Forecasts.
- Author
-
Song, Mengmeng, Yang, Dazhi, Lerch, Sebastian, Xia, Xiang'ao, Yagli, Gokhan Mert, Bright, Jamie M., Shen, Yanbo, Liu, Bai, Liu, Xingli, and Mayer, Martin János
- Subjects
- *
QUANTILE regression, *NUMERICAL weather forecasting, *CALIBRATION - Abstract
Despite the maturity of ensemble numerical weather prediction (NWP), the resulting forecasts are still, more often than not, under-dispersed. As such, forecast calibration tools have become popular. Among those tools, quantile regression (QR) is highly competitive in terms of both flexibility and predictive performance. Nevertheless, a long-standing problem of QR is quantile crossing, which greatly limits the interpretability of QR-calibrated forecasts. On this point, this study proposes a non-crossing quantile regression neural network (NCQRNN) for calibrating ensemble NWP forecasts into a set of reliable quantile forecasts without crossing. The overarching design principle of NCQRNN is to add, on top of the conventional QRNN structure, another hidden layer that imposes a non-decreasing mapping from the combined outputs of the last hidden layer's nodes to the nodes of the output layer, through a triangular weight matrix with positive entries. The empirical part of the work considers a solar irradiance case study, in which four years of ensemble irradiance forecasts at seven locations, issued by the European Centre for Medium-Range Weather Forecasts, are calibrated via NCQRNN, as well as via an eclectic mix of benchmarking models, ranging from the naïve climatology to the state-of-the-art deep-learning and other non-crossing models. Formal and stringent forecast verification suggests that the forecasts post-processed via NCQRNN attain the maximum sharpness subject to calibration, amongst all competitors. Furthermore, the proposed conception to resolve quantile crossing is remarkably simple yet general, and thus has broad applicability as it can be integrated with many shallow- and deep-learning-based neural networks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
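The non-crossing construction described in the abstract (a triangular positive weight matrix enforcing non-decreasing quantiles) can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: positivity comes from a softplus, the first quantile is left unconstrained, and the network producing the raw head outputs `z` is assumed rather than shown.

```python
# Sketch of the non-crossing idea: cumulative sums of positive increments,
# equivalent to multiplying by a lower-triangular matrix with positive entries,
# yield quantile outputs that are non-decreasing by construction.
import math

def softplus(x):
    return math.log1p(math.exp(x))

def non_crossing_quantiles(z):
    """Map raw head outputs z (one per quantile level) to ordered quantiles.

    q_1 = z_1; q_i = q_{i-1} + softplus(z_i) for i > 1, so q_1 <= q_2 <= ...
    """
    q = [z[0]]
    for zi in z[1:]:
        q.append(q[-1] + softplus(zi))
    return q

q = non_crossing_quantiles([2.0, -1.3, 0.4, -5.0])
assert all(a <= b for a, b in zip(q, q[1:]))  # no quantile crossing
```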
45. Generating synthetic rainfall fields by R‐vine copulas applied to seamless probabilistic predictions.
- Author
-
Schaumann, Peter, Rempel, Martin, Blahak, Ulrich, and Schmidt, Volker
- Subjects
- *
MARGINAL distributions, *FLOOD forecasting, *METEOROLOGICAL precipitation, *WATERSHEDS, *PRECIPITATION forecasting - Abstract
Many post‐processing methods improve forecasts at individual locations but remove their correlation structure. However, this information is essential for forecasting larger‐scale events, such as the total precipitation amount over areas like river catchments, which are relevant for weather warnings and flood predictions. We propose a method to reintroduce spatial correlation into a post‐processed forecast using an R‐vine copula fitted to historical observations. The method rearranges predictions at individual locations and ensures that they still exhibit the post‐processed marginal distributions. It works similarly to well‐known approaches, like the "Schaake shuffle" and "ensemble copula coupling." However, compared to these methods, which rely on a ranking with no ties at each considered location in their source for spatial correlation, the copula serves as a measure of how well a given arrangement compares with the observed historical distribution. Therefore, no close relationship is required between the post‐processed marginal distributions and the spatial correlation source. This is advantageous for post‐processed seamless forecasts in two ways. First, meteorological parameters such as the precipitation amount, whose distribution has an atom at zero, have rankings with ties. Second, seamless forecasts represent an optimal combination of their input forecasts and may be spatially shifted from them at scales larger than the areas considered herein, leading to non‐reasonable spatial correlation sources for the well‐known methods. Our results indicate that the calibration of the combination model carries over to the output of the proposed model, that is, the evaluation of area predictions shows a similar improvement in forecast quality as the predictions for individual locations.
Additionally, the spatial correlation of the forecast is evaluated with the help of object‐based metrics, for which the proposed model also shows an improvement compared to both input forecasts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
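The "Schaake shuffle" that the abstract uses as a reference point can itself be sketched compactly: at each location, the post-processed ensemble values are reordered to follow the rank structure of a historical template field, restoring spatial correlation while leaving each marginal distribution untouched. The data below are illustrative, and the template is assumed tie-free (the very limitation the abstract's copula approach relaxes).

```python
# Sketch of the Schaake shuffle: rank-based rearrangement of ensemble members.

def schaake_shuffle(forecast, template):
    """Reorder `forecast` so its ranks match those of `template`, per location.

    forecast, template: lists of locations, each a list of n ensemble values.
    """
    out = []
    for f_loc, t_loc in zip(forecast, template):
        f_sorted = sorted(f_loc)
        # template members in increasing order (stable sort; assumed tie-free)
        order = sorted(range(len(t_loc)), key=lambda i: t_loc[i])
        shuffled = [None] * len(f_loc)
        for rank, member in enumerate(order):
            shuffled[member] = f_sorted[rank]
        out.append(shuffled)
    return out

forecast = [[3.0, 1.0, 2.0], [0.5, 0.0, 1.5]]       # post-processed members
template = [[10.0, 30.0, 20.0], [5.0, 15.0, 10.0]]  # historical field
shuffled = schaake_shuffle(forecast, template)
# each location keeps its marginal: sorted(shuffled[i]) == sorted(forecast[i])
```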
46. Application of Machine Learning and Deep Learning in Finite Element Analysis: A Comprehensive Review.
- Author
-
Nath, Dipjyoti, Ankit, Neog, Debanga Raj, and Gautam, Sachin Singh
- Abstract
Machine learning (ML) has evolved as a technology used in even broader domains, ranging from spam detection to space exploration, as a result of the boom in available data and affordable computing power in recent years. To find field variables in a domain under investigation, partial differential equations (PDEs) are solved using the numerical method known as finite element method (FEM). Problems in a variety of fields, including solid and fluid mechanics, material science, biomechanics, electronics, and geomechanics, have been solved using FEM. There are initiatives to apply ML approaches to the field of finite element analysis (FEA) due to the broad applicability of ML to numerous fields. The field of FEA is constrained by the length of time needed for modeling, the expense and length of time required for computing to solve the problem, and the necessity of considerable expert participation to understand the findings. These problems are frequently solved using ML approaches, according to evidence from ML applications. This work provides a thorough analysis of how ML has been applied in solid mechanics as an additional and beneficial tool to FEA. The goal is to demonstrate ML's effectiveness in the FEA sector and to pinpoint areas that might use improvement. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Improving Flood Forecasting Skill by Combining Ensemble Precipitation Forecasts and Multiple Hydrological Models in a Mountainous Basin.
- Author
-
Xiang, Yiheng, Peng, Tao, Qi, Haixia, Yin, Zhiyuan, and Shen, Tieyuan
- Subjects
FLOOD forecasting, PRECIPITATION forecasting, HYDROLOGIC models, NUMERICAL weather forecasting, HYDROLOGICAL forecasting - Abstract
Ensemble precipitation forecasts (EPFs) derived from single numerical weather predictions (NWPs) often miss extreme events, and individual hydrological models (HMs) often fail to accurately capture all types of flows, including flood peaks. To address these shortcomings, this study introduced four "EPF + HM" schemes for ensemble flood forecasting (EFF) by combining two EPFs and two HMs. A generator-based post-processing (GPP) method was applied to correct biases and under-dispersion within the raw EPF data. The effectiveness of these schemes in delivering high-quality flood forecasts was assessed using both deterministic and probabilistic metrics. The results indicate that, once post-processed by GPP, all proposed schemes show improvements in both deterministic and probabilistic performances, with skillful flood forecasts for 1–7 lead days. The deterioration in forecast performance with extended lead times is also lessened. Notably, the results indicate that uncertainty within hydrological models has a more pronounced impact on capturing flood peaks than uncertainty in precipitation inputs. This study recommends combining individual EPF with multiple hydrological models for reliable flood forecasting. In conclusion, effective flood forecasting necessitates employing post-processing techniques to correct EPFs and accounting for the uncertainty inherent in hydrological models, rather than relying solely on the uncertainty of the input data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
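The generator-based post-processing (GPP) method named in the abstract is not specified in detail here. As a hedged stand-in, the sketch below shows empirical quantile mapping, a standard way to correct systematic bias in ensemble precipitation forecasts: each forecast value is replaced by the observed climatological quantile at that value's own rank. All data are illustrative.

```python
# Hedged sketch of empirical quantile mapping for forecast bias correction
# (a standard technique, not the abstract's GPP method).
import bisect

def quantile_map(value, fcst_climatology, obs_climatology):
    """Map `value` through sorted forecast/observed climatologies."""
    fc = sorted(fcst_climatology)
    ob = sorted(obs_climatology)
    # empirical CDF position of the value within the forecast climatology
    pos = bisect.bisect_left(fc, value) / len(fc)
    idx = min(int(pos * len(ob)), len(ob) - 1)
    return ob[idx]

# Illustrative data: forecasts systematically ~2 mm too wet
fcst_clim = [2, 4, 6, 8, 10]
obs_clim = [0, 2, 4, 6, 8]
corrected = quantile_map(6, fcst_clim, obs_clim)
```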
48. Improved Real‐Time Post‐Processing for Quantum Random Number Generators.
- Author
-
Li, Qian, Sun, Xiaoming, Zhang, Xingjian, and Zhou, Hongyi
- Subjects
RANDOM number generators, QUANTUM numbers, QUANTUM cryptography, ONLINE algorithms, COMPUTER science, HUFFMAN codes - Abstract
Randomness extraction is a key problem in cryptography and theoretical computer science. With the recent rapid development of quantum cryptography, quantum‐proof randomness extraction has also been widely studied, addressing the security issues in the presence of a quantum adversary. In contrast with conventional quantum‐proof randomness extractors characterizing the input raw data as min‐entropy sources, it is found that the input raw data generated by a large class of trusted‐device quantum random number generators can be characterized as the so‐called reverse block source. This fact enables us to design improved extractors. Specifically, two novel quantum‐proof randomness extractors for reverse block sources are proposed that realize real‐time block‐wise extraction. In comparison with general min‐entropy randomness extractors, the designs achieve a significantly higher extraction speed and a longer output data length with the same seed length. In addition, they enjoy the property of online algorithms, which process the raw data on the fly without waiting for the entire input raw data to be available. These features make the designs an adequate choice for the real‐time post‐processing of practical quantum random number generators. Applying the extractors to the raw data generated by a widely used quantum random number generator, a simulated extraction speed as high as 300 Gbps is achieved. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
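The paper's extractors are not reproduced here; as a hedged illustration of the underlying machinery, the sketch below shows Toeplitz hashing, the standard seeded, quantum-proof strong extractor that block-wise post-processing designs typically build on: an n-bit raw block is compressed to m nearly uniform bits via a Toeplitz matrix over GF(2) defined by an (n + m − 1)-bit seed. The bit values and the row-construction convention are illustrative.

```python
# Sketch of Toeplitz-hashing randomness extraction over GF(2).

def toeplitz_extract(raw_bits, seed_bits, m):
    """Extract m bits from raw_bits using an m x n Toeplitz matrix.

    seed_bits must have length n + m - 1; row i of the matrix is
    seed_bits[i : i + n] read right-to-left (one common convention).
    """
    n = len(raw_bits)
    assert len(seed_bits) == n + m - 1
    out = []
    for i in range(m):
        row = seed_bits[i:i + n][::-1]
        # inner product mod 2 of the row with the raw block
        out.append(sum(r & b for r, b in zip(row, raw_bits)) % 2)
    return out

raw = [1, 0, 1, 1, 0, 1, 0, 1]         # 8-bit raw block (illustrative)
seed = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1]  # 8 + 3 - 1 = 10 seed bits
key = toeplitz_extract(raw, seed, m=3)  # 3 extracted output bits
```

Because each output bit depends only on the current block, blocks can be hashed as they arrive, which is what makes this style of extraction compatible with the online, real-time processing the abstract emphasizes.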
49. Capturing Photons.
- Author
-
RIDDLE, BOB
- Subjects
PHOTONS, METEOR showers, OPTICAL telescopes, FOCAL length, APPLICATION software, CERES (Dwarf planet) - Abstract
The article discusses the use of smart telescopes and digital imaging technology in astronomy education. It highlights the affordability and accessibility of smart telescopes, which can be operated using computer technology and offer features such as automatic tracking of celestial objects and access to online databases. The article also mentions the challenges of teaching astronomy and suggests that introducing astrophotography can make the subject more engaging for students. It provides information on the features to consider when choosing a smart telescope and lists some smart telescopes available on the market. The article concludes by discussing the benefits of smart telescopes in collecting and analyzing data, as well as participating in citizen science projects. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
50. POSMind: developing a hierarchical GNSS/SINS post-processing service system for precise position and attitude determination.
- Author
-
Zhu, Feng, Cai, Qinqing, Tao, Xianlu, Zhang, Xiaohong, and Liu, Wanke
- Abstract
The School of Geodesy and Geomatics at Wuhan University has developed a GNSS/SINS post-processing service system named POSMind. With the growing demand for mobile mapping in scientific and engineering applications, such as earth observation and high-precision mapping, there is a crucial need for efficient and accurate direct georeferencing based on GNSS/SINS integration. To accommodate diverse applications, an analysis of existing service forms was conducted, culminating in the development of a hierarchical post-processing service system. This system consists of three service forms: a module for interface calls, software for fine processing, and a web service for efficient cluster processing. POSMind has assimilated existing excellent methodologies and constructed a high-precision GNSS/SINS integration algorithm framework through theoretical derivations and experimental tests. Refinements have been introduced in several facets, including pre-processing, quality control, ambiguity resolution, and smoothing schemes. To assess the performance of POSMind, a series of experiments and analyses were conducted. The first experiment was conducted in open-sky environments (carborne, airborne, and shipborne) to evaluate the consistency between POSMind and Inertial Explorer. Additionally, an experiment in urban environments was carried out to assess POSMind's performance in realistic cases. Moreover, the practical performance of POSMind was also demonstrated with two mobile mapping cases, evaluating the accuracy of the resulting point clouds. Looking forward, we plan to enhance POSMind by introducing reliable filters or optimizers, integrating observations from other sensors, and utilizing the benefits of post-processing in existing powerful GNSS/SINS processing platforms. The goal is to provide a powerful GNSS/SINS post-processing service that delivers high precision, excellent availability, and utmost reliability for diverse scenes and applications.
The POSMind web and software can be freely accessed at posmind-web.com and on Kaggle website at kaggle.com/datasets/fengzhusgg/smartpnt-pos. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
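The smoothing schemes mentioned in the abstract are a hallmark of GNSS/SINS post-processing: unlike real-time navigation, post-processing can run a backward pass over the whole trajectory. As a hedged, scalar illustration of the structure only (real systems estimate many coupled states), the sketch below runs a forward Kalman filter on a 1-D random-walk model followed by a Rauch-Tung-Striebel (RTS) backward smoothing pass. All model parameters and measurements are illustrative.

```python
# Hedged 1-D sketch of forward Kalman filtering + RTS backward smoothing,
# the canonical post-processing smoother structure (not POSMind's algorithm).

def kalman_rts(zs, q=0.01, r=1.0, x0=0.0, p0=1.0):
    # forward filter (random-walk model: x_k = x_{k-1} + w, z_k = x_k + v)
    xs_f, ps_f, xs_p, ps_p = [], [], [], []
    x, p = x0, p0
    for z in zs:
        xp, pp = x, p + q                        # predict
        k = pp / (pp + r)                        # Kalman gain
        x, p = xp + k * (z - xp), (1 - k) * pp   # update
        xs_p.append(xp); ps_p.append(pp)
        xs_f.append(x);  ps_f.append(p)
    # backward RTS pass: blend each filtered state with the smoothed future
    xs_s = xs_f[:]
    for k in range(len(zs) - 2, -1, -1):
        c = ps_f[k] / ps_p[k + 1]                # smoother gain
        xs_s[k] = xs_f[k] + c * (xs_s[k + 1] - xs_p[k + 1])
    return xs_s

smoothed = kalman_rts([0.9, 1.1, 1.0, 0.8, 1.2])
```

The backward pass is what post-processing buys: early epochs are corrected using information from later measurements, which a real-time filter can never see.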