20 results for "Phienphanich P"
Search Results
2. Donut: Augmentation Technique for Enhancing The Efficacy of Glaucoma Suspect Screening
- Author
- Sangchocanonta, S., primary, Ingpochai, S., additional, Puangarom, S., additional, Munthuli, A., additional, Phienphanich, P., additional, Itthipanichpong, R., additional, Chansangpetch, S., additional, Manassakorn, A., additional, Ratanawongphaibul, K., additional, Tantisevi, V., additional, Rojanapongpun, P., additional, and Tantibundhit, C., additional
- Published
- 2023
- Full Text
- View/download PDF
3. Alzheimer’s Together with Mild Cognitive Impairment Screening Using Polar Transformation of Middle Zone of Fundus Images Based Deep Learning
- Author
- Luengnaruemitchai, G., primary, Kaewmahanin, W., additional, Munthuli, A., additional, Phienphanich, P., additional, Puangarom, S., additional, Sangchocanonta, S., additional, Jariyakosol, S., additional, Hirunwiwatkul, P., additional, and Tantibundhit, C., additional
- Published
- 2023
- Full Text
- View/download PDF
4. Extravasation Screening and Severity Prediction from Skin Lesion Image using Deep Neural Networks
- Author
- Munthuli, A., primary, Intanai, J., additional, Tossanuch, P., additional, Pooprasert, P., additional, Ingpochai, P., additional, Boonyasatian, S., additional, Kittithammo, K., additional, Thammarach, P., additional, Boonmak, T., additional, Khaengthanyakan, S., additional, Yaemsuk, A., additional, Vanichvarodom, P., additional, Phienphanich, P., additional, Pongcharoen, P., additional, Sakonlaya, D., additional, Sitthiwatthanawong, P., additional, Wetchawalit, S., additional, Chakkavittumrong, P., additional, Thongthawee, B., additional, Pathomjaruwat, T., additional, and Tantibundhit, C., additional
- Published
- 2022
- Full Text
- View/download PDF
5. GlauCUTU: Virtual Reality Visual Field Test
- Author
- Kunumpol, P., primary, Lerthirunvibul, N., additional, Phienphanich, P., additional, Munthuli, A., additional, Tantisevi, V., additional, Manassakorn, A., additional, Chansangpetch, S., additional, Itthipanichpong, R., additional, Ratanawongphaibol, K., additional, Rojanapongpun, P., additional, and Tantibundhit, C., additional
- Published
- 2021
- Full Text
- View/download PDF
6. AI Chest 4 All
- Author
- Thammarach, P., primary, Khaengthanyakan, S., additional, Vongsurakrai, S., additional, Phienphanich, P., additional, Pooprasert, P., additional, Yaemsuk, A., additional, Vanichvarodom, P., additional, Munpolsri, N., additional, Khwayotha, S., additional, Lertkowit, M., additional, Tungsagunwattana, S., additional, Vijitsanguan, C., additional, Lertrojanapunya, S., additional, Noisiri, W., additional, Chiawiriyabunya, I., additional, Aphikulvanich, N., additional, and Tantibundhit, C., additional
- Published
- 2020
- Full Text
- View/download PDF
7. Automatic Stroke Screening on Mobile Application: Features of Gyroscope and Accelerometer for Arm Factor in FAST
- Author
- Phienphanich, P., primary, Tankongchamruskul, N., additional, Akarathanawat, W., additional, Chutinet, A., additional, Nimnual, R., additional, Tantibundhit, C., additional, and Suwanwela, N. C., additional
- Published
- 2019
- Full Text
- View/download PDF
8. Modeling predictive perceptual representation of Thai initial consonants
- Author
- Phienphanich, P., primary, Onsuwan, C., additional, Tantibundhit, C., additional, Saimai, N., additional, and Saimai, T., additional
- Published
- 2014
- Full Text
- View/download PDF
9. Lexical tone perception in Thai normal-hearing adults and those using hearing aids: a case study
- Author
- Tantibundhit, C., primary, Onsuwan, C., additional, Klangpornkun, N., additional, Phienphanich, P., additional, Saimai, T., additional, Saimai, N., additional, Pitathawatchai, P., additional, and Wutiwiwatchai, Chai, additional
- Published
- 2013
- Full Text
- View/download PDF
10. Methodological issues in assessing perceptual representation of consonant sounds in Thai
- Author
- Tantibundhit, Charturong, primary, Onsuwan, Chutamanee, additional, Phienphanich, P., additional, and Wutiwiwatchai, Chai, additional
- Published
- 2012
- Full Text
- View/download PDF
11. Alzheimer's Together with Mild Cognitive Impairment Screening Using Polar Transformation of Middle Zone of Fundus Images Based Deep Learning.
- Author
- Luengnaruemitchai G, Kaewmahanin W, Munthuli A, Phienphanich P, Puangarom S, Sangchocanonta S, Jariyakosol S, Hirunwiwatkul P, and Tantibundhit C
- Subjects
- Humans, Magnetic Resonance Imaging methods, Retina, Deep Learning, Alzheimer Disease diagnostic imaging, Cognitive Dysfunction diagnostic imaging
- Abstract
Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI) are an increasingly major health problem in the elderly. However, current clinical methods of Alzheimer's detection are expensive and difficult to access, making detection inconvenient and unsuitable for developing countries such as Thailand. Thus, we developed a method for screening AD together with MCI by fine-tuning a pre-trained Densely Connected Convolutional Network (DenseNet-121) model on the middle zone of polar-transformed fundus images. The polar transformation of the middle zone of the fundus is a key factor that helps the model extract features more effectively, which enhances model accuracy. The dataset was divided into 2 groups: normal and abnormal (AD and MCI). The method can classify normal and abnormal patients with 96% accuracy, 99% sensitivity, 90% specificity, 95% precision, and 97% F1 score. The parts of both MCI and AD input images that most impact the classification score, as visualized by Grad-CAM++, focus on the superior and inferior retinal quadrants. Clinical relevance- The parts of both MCI and AD input images that have the most impact on the classification score (visualized by Grad-CAM++) are the superior and inferior retinal quadrants. Polar transformation of the middle zone of retinal fundus images is a key factor that enhances the classification accuracy. (A minimal code sketch of this approach appears below.)
- Published
- 2023
- Full Text
- View/download PDF
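The record above describes a polar transform of the fundus "middle zone" followed by DenseNet-121 fine-tuning. Below is a minimal Python sketch of that pipeline; the zone radii (0.33-0.66 of the image radius), the 224x224 input size, and the helper name polar_middle_zone are illustrative assumptions, not the paper's published values.

import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def polar_middle_zone(img, inner_frac=0.33, outer_frac=0.66, size=(224, 224)):
    # Unroll the fundus disc to polar coordinates (radius along x, angle
    # along y), then keep the columns of the assumed middle annulus.
    h, w = img.shape[:2]
    center = (w / 2.0, h / 2.0)
    max_r = min(h, w) / 2.0
    polar = cv2.warpPolar(img, (w, h), center, max_r, cv2.WARP_POLAR_LINEAR)
    zone = polar[:, int(w * inner_frac):int(w * outer_frac)]
    return cv2.resize(zone, size)

# Fine-tune DenseNet-121 with a 2-class head (normal vs. abnormal = AD/MCI).
# weights=None keeps the sketch offline; in practice the pre-trained
# ImageNet weights (models.DenseNet121_Weights.IMAGENET1K_V1) would be used.
model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, 2)

# Smoke test on a synthetic image standing in for a fundus photograph.
img = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
x = torch.from_numpy(polar_middle_zone(img)).permute(2, 0, 1).float().unsqueeze(0) / 255.0
print(model(x).shape)  # torch.Size([1, 2])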
12. 3-LbNets: Tri-Labeling Deep Convolutional Neural Network for the Automated Screening of Glaucoma, Glaucoma Suspect, and No Glaucoma in Fundus Images.
- Author
- Puangarom S, Twinvitoo A, Sangchocanonta S, Munthuli A, Phienphanich P, Itthipanichpong R, Ratanawongphaibul K, Chansangpetch S, Manassakorn A, Tantisevi V, Rojanapongpun P, and Tantibundhit C
- Subjects
- Humans, Fundus Oculi, Neural Networks, Computer, Optic Disk, Deep Learning, Glaucoma diagnostic imaging
- Abstract
Early detection of glaucoma, a widespread visual disease, can prevent vision loss. Unfortunately, ophthalmologists are scarce and clinical diagnosis requires much time and cost. Therefore, we developed a screening Tri-Labeling deep convolutional neural network (3-LbNets) to identify no glaucoma, glaucoma suspect, and glaucoma cases in global fundus images. 3-LbNets extracts important features from 3 different labeling modals and feeds them into an artificial neural network (ANN) to produce the final result. The method was effective, with an AUC of 98.66% for no glaucoma, 97.54% for glaucoma suspect, and 97.19% for glaucoma when analysing 206 fundus images evaluated with unanimous agreement from 3 well-trained ophthalmologists (3/3). When analysing 178 difficult-to-interpret fundus images (with majority agreement (2/3)), the method had an AUC of 80.80% for no glaucoma, 69.52% for glaucoma suspect, and 82.74% for glaucoma cases. Clinical relevance- This establishes a robust global fundus image screening network based on the ensemble method that can optimize glaucoma screening to alleviate the toll on those with glaucoma and prevent glaucoma suspects from developing the disease. (A minimal sketch of the tri-labeling fusion appears below.)
- Published
- 2023
- Full Text
- View/download PDF
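The abstract above outlines an ensemble in which three labeling-specific networks feed an ANN that makes the final three-way decision. A minimal PyTorch sketch of that fusion follows; the ResNet-18 branch backbone and the fusion-head sizes are stand-in assumptions, since the record does not fix them.

import torch
import torch.nn as nn
from torchvision import models

def make_branch(num_classes=3):
    # One CNN per labeling modal; ResNet-18 is only a stand-in backbone.
    m = models.resnet18(weights=None)
    m.fc = nn.Linear(m.fc.in_features, num_classes)
    return m

branches = nn.ModuleList([make_branch() for _ in range(3)])

# Fusion ANN: concatenated per-branch probabilities -> final 3-class logits
# (no glaucoma / glaucoma suspect / glaucoma).
fusion = nn.Sequential(nn.Linear(3 * 3, 16), nn.ReLU(), nn.Linear(16, 3))

def predict(x):
    probs = [b(x).softmax(dim=1) for b in branches]  # one 3-vector per modal
    return fusion(torch.cat(probs, dim=1))

print(predict(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 3])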
13. Donut: Augmentation Technique for Enhancing The Efficacy of Glaucoma Suspect Screening.
- Author
- Sangchocanonta S, Ingpochai S, Puangarom S, Munthuli A, Phienphanich P, Itthipanichpong R, Chansangpetch S, Manassakorn A, Ratanawongphaibul K, Tantisevi V, Rojanapongpun P, and Tantibundhit C
- Subjects
- Humans, Mass Screening, Diagnostic Techniques, Ophthalmological, Sensitivity and Specificity, Optic Disk diagnostic imaging, Glaucoma diagnosis
- Abstract
Glaucoma is the second most common cause of blindness. A glaucoma suspect has risk factors that increase the possibility of developing glaucoma. Evaluating a patient with suspected glaucoma is challenging. The "donut method" was developed in this study as an augmentation technique for obtaining high-quality fundus images for training a ConvNeXt-Small model. Fundus images from GlauCUTU-DATA, labelled by at least 3 randomly assigned well-trained ophthalmologists (4 in cases of no majority agreement) with unanimous agreement (3/3) and majority agreement (2/3), were used in the experiment. The experimental results showed that training with the "donut method" increased the sensitivity for glaucoma suspects from 52.94% to 70.59% on the 3/3 data and from 37.78% to 42.22% on the 2/3 data. The method sufficiently enhanced the efficacy of classifying glaucoma suspects while balancing sensitivity and specificity. Furthermore, three well-trained ophthalmologists agreed that the Grad-CAM++ heatmaps obtained from the model trained with the proposed method highlighted the clinical criteria. Clinical relevance- The donut method for augmenting fundus images focuses on the optic nerve head region to enhance the efficacy of glaucoma suspect screening, and uses Grad-CAM++ to highlight the clinical criteria. (A minimal sketch of a donut-style augmentation appears below.)
- Published
- 2023
- Full Text
- View/download PDF
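The abstract does not spell out the donut recipe beyond its focus on the optic nerve head (ONH) region, so the sketch below is only one plausible reading: keep an annulus around an assumed ONH center and zero out the rest before training. The function name donut_augment, the center coordinates, and the radii are all placeholders.

import numpy as np

def donut_augment(img, onh_center, r_inner=40, r_outer=160):
    # Zero out pixels outside the [r_inner, r_outer] ring around onh_center.
    h, w = img.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((xx - onh_center[0]) ** 2 + (yy - onh_center[1]) ** 2)
    ring = (dist >= r_inner) & (dist <= r_outer)
    out = img.copy()
    out[~ring] = 0
    return out

fundus = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)  # synthetic stand-in
augmented = donut_augment(fundus, onh_center=(300, 256))
print(augmented.shape, int((augmented > 0).sum()))  # only ring pixels survive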
14. Classification and analysis of text transcription from Thai depression assessment tasks among patients with depression.
- Author
- Munthuli A, Pooprasert P, Klangpornkun N, Phienphanich P, Onsuwan C, Jaisin K, Pattanaseri K, Lortrakul J, and Tantibundhit C
- Subjects
- Humans, Thailand, Southeast Asian People, Language, Depression diagnosis, Depression psychology, Suicide
- Abstract
Depression is a serious mental health disorder that poses a major public health concern in Thailand and has a profound impact on individuals' physical and mental health. In addition, limited access to mental health services and the limited number of psychiatrists in Thailand make depression particularly challenging to diagnose and treat, leaving many individuals with the condition untreated. Recent studies have explored the use of natural language processing to enable access to the classification of depression, particularly with a trend toward transfer learning from pre-trained language models. In this study, we evaluated the effectiveness of using XLM-RoBERTa, a pre-trained multilingual language model supporting the Thai language, for the classification of depression from a limited set of text transcripts of speech responses. Twelve Thai depression assessment questions were developed to collect text transcripts of speech responses for transfer learning with XLM-RoBERTa. The results of transfer learning with text transcripts of speech responses from 80 participants (40 with depression and 40 normal controls) showed that when only one question (Q1), "How are you these days?", was used, the recall, precision, specificity, and accuracy were 82.5%, 84.65%, 85.00%, and 83.75%, respectively. When utilizing the first three questions from the Thai depression assessment tasks (Q1-Q3), the values increased to 87.50%, 92.11%, 92.50%, and 90.00%, respectively. The local interpretable model explanations were analyzed to determine which words contributed the most to the model's predictions, visualized as word clouds. Our findings were consistent with previously published literature and provide a similar explanation for clinical settings. It was discovered that the classification model for individuals with depression relied heavily on negative terms such as 'not', 'sad', 'mood', 'suicide', 'bad', and 'bore', whereas normal control participants used neutral to positive terms such as 'recently', 'fine', 'normally', 'work', and 'working'. The findings of the study suggest that screening for depression can be facilitated by asking patients just three questions, making the process more accessible and less time-consuming while reducing the already huge burden on healthcare workers. (A minimal transfer-learning sketch appears below.) Competing Interests: The authors have declared that no competing interests exist. (Copyright: © 2023 Munthuli et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.)
- Published
- 2023
- Full Text
- View/download PDF
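A minimal sketch of transfer learning with XLM-RoBERTa for binary depression classification from Thai transcripts, using the Hugging Face transformers API. The toy transcripts, labels, and the absence of an optimizer loop are simplifying assumptions; only the model and tokenizer names come from the abstract.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2)  # 0 = control, 1 = depression

# Toy transcripts standing in for responses to Q1 ("How are you these days?").
texts = ["ช่วงนี้สบายดี ทำงานตามปกติ", "ช่วงนี้เศร้า ไม่อยากทำอะไรเลย"]
labels = torch.tensor([0, 1])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
out = model(**batch, labels=labels)  # forward pass returns loss and logits

out.loss.backward()  # gradients for one fine-tuning step (optimizer omitted)
print(out.logits.softmax(dim=1))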
15. Extravasation Screening and Severity Prediction from Skin Lesion Image using Deep Neural Networks.
- Author
- Munthuli A, Intanai J, Tossanuch P, Pooprasert P, Ingpochai P, Boonyasatian S, Kittithammo K, Thammarach P, Boonmak T, Khaengthanyakan S, Yaemsuk A, Vanichvarodom P, Phienphanich P, Pongcharoen P, Sakonlaya D, Sitthiwatthanawong P, Wetchawalit S, Chakkavittumrong P, Thongthawee B, Pathomjaruwat T, and Tantibundhit C
- Subjects
- Humans, Research, Sensitivity and Specificity, Skin diagnostic imaging, Neural Networks, Computer, Skin Diseases
- Abstract
Extravasation occurs secondary to the leakage of medication from blood vessels into the surrounding tissue during intravenous administration, resulting in significant soft tissue injury and necrosis. If treatment is delayed, invasive management such as surgical debridement, skin grafting, and even amputation may be required. Thus, it is imperative to develop a smartphone application for predicting extravasation severity from skin images. Two Deep Neural Network (DNN) architectures, U-Net and DenseNet-121, were used to segment skin and lesion and to classify extravasation severity. Sensitivity and specificity for distinguishing between asymptomatic and abnormal cases were 77.78% and 90.24%, respectively. Among the abnormal cases, mild extravasation attained the highest F1-score of 0.8049, followed by severe extravasation at 0.6429 and moderate extravasation at 0.6250. The F1-score of moderate-to-severe extravasation classification can be improved by applying our proposed rule-based method for multi-class classification. These findings demonstrate a novel and feasible DNN approach for screening extravasation from skin images. The implementation of DNN-based applications on mobile devices has strong potential for clinical application in low-resource countries. Clinical relevance- The application can serve as a valuable tool for monitoring when extravasation occurs during intravenous administration. It can also help in the scheduling process across worksites to reduce the risks associated with working shifts. (A minimal two-stage sketch appears below.)
- Published
- 2022
- Full Text
- View/download PDF
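A minimal two-stage sketch of the segment-then-classify idea: a tiny U-Net-style encoder-decoder stands in for the paper's U-Net to mask lesion pixels, and a DenseNet-121 head grades severity on the masked image. The TinyUNet architecture, the four severity classes, and the masking strategy are simplified assumptions.

import torch
import torch.nn as nn
from torchvision import models

class TinyUNet(nn.Module):
    # Minimal encoder-decoder stand-in for the paper's U-Net segmenter.
    def __init__(self, out_ch=2):  # background / lesion
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2),
                                 nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.Upsample(scale_factor=2, mode="bilinear",
                                             align_corners=False),
                                 nn.Conv2d(32, out_ch, 3, padding=1))
    def forward(self, x):
        return self.dec(self.enc(x))

segmenter = TinyUNet()
classifier = models.densenet121(weights=None)
classifier.classifier = nn.Linear(classifier.classifier.in_features, 4)  # none/mild/moderate/severe

x = torch.randn(1, 3, 224, 224)                      # a skin-photo batch
mask = segmenter(x).argmax(1, keepdim=True).float()  # predicted lesion mask
severity_logits = classifier(x * mask)               # classify masked pixels only
print(severity_logits.shape)  # torch.Size([1, 4])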
16. GlauCUTU: Virtual Reality Visual Field Test.
- Author
- Kunumpol P, Lerthirunvibul N, Phienphanich P, Munthuli A, Tantisevi V, Manassakorn A, Chansangpetch S, Itthipanichpong R, Ratanawongphaibol K, Rojanapongpun P, and Tantibundhit C
- Subjects
- Humans, Time Factors, Visual Field Tests, Visual Fields, Glaucoma diagnosis, Virtual Reality
- Abstract
This study proposed a virtual reality (VR) head-mounted visual field (VF) test system, also known as the GlauCUTU VF test, for reaction time (RT) perimetry with moving visual stimuli that progressively increase in intensity. The test followed the 24-2 VF protocol and was examined on 2 study groups: controls with normal fields and subjects with glaucoma. To collect reaction times, participants were asked to respond to the stimulus by pressing the clicker as fast as possible. Performance of the GlauCUTU VF test was compared to the gold-standard Humphrey Visual Field Analyzer (HFA). Mean test duration differed significantly between GlauCUTU and HFA, at 254.41 and 609 seconds, respectively [t(16) = 15.273, p<0.05]. Likewise, our system effectively differentiated glaucomatous eyes from normal eyes for both the left and right eyes. Compared to the HFA, the GlauCUTU test produced an average test duration 354 seconds shorter, which reduced test-induced eye fatigue. The portable and inexpensive GlauCUTU perimetry system proves to be a promising method for increasing accessibility to glaucoma screening. Clinical relevance- GlauCUTU, an automated head-mounted VR perimetry device for the VF test, is portable, cost-effective, and suitable for low-resource settings. Unlike the conventional HFA test, the GlauCUTU VF test reports subjects' RT, which is reportedly higher in glaucoma patients. (A minimal RT-perimetry sketch appears below.)
- Published
- 2021
- Full Text
- View/download PDF
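A minimal simulation of the RT-perimetry idea described above: at each 24-2 test point a stimulus ramps up in intensity until the subject clicks, and the elapsed time is recorded as the RT for that location. The ramp rate, timing constants, and the simulated observer are illustrative assumptions, not the device's parameters.

import random
import time

def present_point(threshold_db, ramp_db_per_s=2.0, max_seconds=5.0):
    # Ramp stimulus intensity; return RT in seconds, or None if unseen.
    start = time.monotonic()
    while time.monotonic() - start < max_seconds:
        intensity = (time.monotonic() - start) * ramp_db_per_s
        if intensity >= threshold_db:  # simulated observer "clicks"
            return time.monotonic() - start + random.uniform(0.2, 0.4)  # motor delay
        time.sleep(0.01)
    return None  # no response within the window

# Glaucomatous points have higher thresholds, hence longer RTs -- the
# effect the GlauCUTU test reports.
for loc, thr in [("normal point", 2.0), ("glaucomatous point", 6.0)]:
    rt = present_point(thr)
    print(loc, "RT = %.2f s" % rt if rt is not None else "not seen")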
17. AI Chest 4 All.
- Author
- Thammarach P, Khaengthanyakan S, Vongsurakrai S, Phienphanich P, Pooprasert P, Yaemsuk A, Vanichvarodom P, Munpolsri N, Khwayotha S, Lertkowit M, Tungsagunwattana S, Vijitsanguan C, Lertrojanapunya S, Noisiri W, Chiawiriyabunya I, Aphikulvanich N, and Tantibundhit C
- Subjects
- Humans, Mass Screening, Sensitivity and Specificity, Thailand, Lung Neoplasms diagnostic imaging, Tuberculosis
- Abstract
AIChest4All is the name of the model used to label and screen diseases in our area of focus, Thailand, including heart disease, lung cancer, and tuberculosis. It is aimed at aiding radiologists in Thailand, especially in rural areas, where there are immense staff shortages. Deep learning is used in our methodology to classify chest X-ray images from several datasets: the NIH set, which is separated into 14 observations, and the Montgomery and Shenzhen sets, which contain chest X-ray images of patients with tuberculosis, further supplemented by datasets from Udonthani Cancer Hospital and the National Chest Institute of Thailand. The images are classified into six categories: no finding, suspected active tuberculosis, suspected lung malignancy, abnormal heart and great vessels, intrathoracic abnormal findings, and extrathoracic abnormal findings. A total of 201,527 images were used. Testing showed that the accuracy values for the categories heart disease, lung cancer, and tuberculosis were 94.11%, 93.28%, and 92.32%, respectively, with sensitivity values of 90.07%, 81.02%, and 82.33%, respectively, and specificity values of 94.65%, 94.04%, and 93.54%, respectively. In conclusion, the results have sufficient accuracy, sensitivity, and specificity values for use. Currently, AIChest4All is being used to help several of Thailand's government-funded hospitals, free of charge. Clinical relevance- AIChest4All is aimed at aiding radiologists in Thailand, especially in rural areas, where there are immense staff shortages. It is being used, free of charge, to help several of Thailand's government-funded hospitals screen for heart disease, lung cancer, and tuberculosis with 94.11%, 93.28%, and 92.32% accuracy, respectively. (A minimal per-category metrics sketch appears below.)
- Published
- 2020
- Full Text
- View/download PDF
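A minimal sketch of the per-category screening metrics the record reports: one-vs-rest sensitivity and specificity for each of the six findings. The category names come from the abstract; the predictions are toy data, not AIChest4All outputs.

import numpy as np

CATEGORIES = ["no finding", "suspected active tuberculosis",
              "suspected lung malignancy", "abnormal heart and great vessels",
              "intrathoracic abnormal findings", "extrathoracic abnormal findings"]

def per_class_metrics(y_true, y_pred, n_classes=6):
    for c in range(n_classes):
        tp = np.sum((y_true == c) & (y_pred == c))
        fn = np.sum((y_true == c) & (y_pred != c))
        tn = np.sum((y_true != c) & (y_pred != c))
        fp = np.sum((y_true != c) & (y_pred == c))
        sens = tp / (tp + fn) if tp + fn else float("nan")
        spec = tn / (tn + fp) if tn + fp else float("nan")
        print(f"{CATEGORIES[c]}: sensitivity={sens:.2%}, specificity={spec:.2%}")

rng = np.random.default_rng(0)
y_true = rng.integers(0, 6, 1000)  # toy ground-truth labels
y_pred = np.where(rng.random(1000) < 0.8, y_true, rng.integers(0, 6, 1000))
per_class_metrics(y_true, y_pred)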
18. Automatic Stroke Screening on Mobile Application: Features of Gyroscope and Accelerometer for Arm Factor in FAST.
- Author
- Phienphanich P, Tankongchamruskul N, Akarathanawat W, Chutinet A, Nimnual R, Tantibundhit C, and Suwanwela NC
- Subjects
- Arm, Case-Control Studies, Humans, Stroke Rehabilitation, Accelerometry instrumentation, Mobile Applications, Movement, Stroke diagnosis
- Abstract
This study focuses on automatic stroke screening of the arm factor in the FAST (Face, Arm, Speech, and Time) stroke screening method. The study provides a methodology for collecting data on specific arm movements, using signals from the gyroscope and accelerometer in mobile devices. Fifty-two subjects were enrolled in this study (20 stroke patients and 32 healthy subjects). Following the instructions in the application, the patients were asked to perform two arm movements, Curl Up and Raise Up. The two exercises were divided into three parts: a curl part, a raise part, and a stable part. Stroke patients were expected to experience difficulty in performing both exercises efficiently on the same arm. We proposed 20 handcrafted features from these three parts. Our study achieved an average accuracy of 61.7%-74.2% and an average area under the ROC curve (AUC) of 66.2%-81.5% from the combination of both exercises. Compared to the FAST method used by examiners in a previous study (Kapes et al., 2014), which showed an accuracy of 69%-77% for every age group, our study showed promising results for early stroke identification, given that it is based only on the arm factor. (A minimal feature-extraction sketch appears below.)
- Published
- 2019
- Full Text
- View/download PDF
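A minimal sketch of handcrafted gyroscope/accelerometer features for the arm factor: simple per-window statistics plus a left-right asymmetry score. The paper defines 20 features over the curl, raise, and stable parts; the few stand-ins below (and the helper names window_features and arm_asymmetry) only illustrate the flavor.

import numpy as np

def window_features(acc, gyro):
    # acc, gyro: (n_samples, 3) arrays from one exercise part.
    feats = []
    for sig in (acc, gyro):
        mag = np.linalg.norm(sig, axis=1)
        feats += [mag.mean(), mag.std(),        # movement intensity, variability
                  np.abs(np.diff(mag)).mean()]  # jerkiness proxy
    return np.array(feats)

def arm_asymmetry(left, right):
    # Relative difference between arms; stroke is expected to inflate this.
    return np.abs(left - right) / (np.abs(left) + np.abs(right) + 1e-9)

rng = np.random.default_rng(1)
healthy_arm = window_features(rng.normal(0, 1, (200, 3)), rng.normal(0, 1, (200, 3)))
weak_arm = window_features(rng.normal(0, 0.3, (200, 3)), rng.normal(0, 0.3, (200, 3)))
print(arm_asymmetry(healthy_arm, weak_arm))  # feature-wise asymmetry scores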
19. Automated embolic signal detection using Deep Convolutional Neural Network.
- Author
- Sombune P, Phienphanich P, Phuechpanpaisal S, Muengtaweepongsa S, Ruamthanthong A, and Tantibundhit C
- Subjects
- Artifacts, Humans, Neural Networks, Computer, Ultrasonography, Doppler, Transcranial, Embolism
- Abstract
This work investigated the potential of Deep Neural Networks for detection of cerebral embolic signals (ES) from transcranial Doppler ultrasound (TCD). The resulting system is intended to couple with TCD devices to diagnose stroke risk in real time with high accuracy. The Adaptive Gain Control (AGC) approach developed in our previous study is employed to capture suspected ESs in real time. Using spectrograms of the same TCD signal dataset as our previous work as inputs, with the same experimental setup, a Deep Convolutional Neural Network (CNN), which learns features during training, was investigated for its ability to bypass the traditional handcrafted feature extraction and selection process. Feature vectors extracted from the suspected ESs are then classified as an ES, artifact (AF), or normal (NR) interval. The effectiveness of the developed system was evaluated over 19 subjects undergoing procedures that generate emboli. The CNN-based system achieved an average of 83.0% sensitivity, 80.1% specificity, and 81.4% accuracy, with considerably less development time. A growing set of training samples and computational resources will contribute to high performance. Besides having potential use in various clinical ES monitoring settings, continuation of this promising study will benefit the development of wearable applications by leveraging learnable features to serve demographic differences. (A minimal spectrogram-CNN sketch appears below.)
- Published
- 2017
- Full Text
- View/download PDF
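A minimal sketch of the spectrogram-plus-CNN idea: turn a Doppler segment into a spectrogram and classify it into ES / artifact / normal. The toy sinusoid and the small network are stand-ins for the paper's TCD recordings and architecture.

import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

fs = 8000
t = np.arange(0, 0.5, 1 / fs)
doppler = np.sin(2 * np.pi * 600 * t) + 0.1 * np.random.randn(t.size)  # toy signal

f, tt, sxx = spectrogram(doppler, fs=fs, nperseg=128, noverlap=64)
x = torch.tensor(np.log1p(sxx), dtype=torch.float32)[None, None]  # (1, 1, F, T)

cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 3),  # ES / AF / NR
)
print(cnn(x).softmax(dim=1))  # untrained class probabilities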
20. Automated embolic signal detection using adaptive gain control and classification using ANFIS.
- Author
- Sombune P, Phienphanich P, Muengtaweepongsa S, Ruamthanthong A, and Tantibundhit C
- Subjects
- Algorithms, Fuzzy Logic, Humans, Sensitivity and Specificity, Wavelet Analysis, Embolism diagnostic imaging, Signal Processing, Computer-Assisted, Ultrasonography, Doppler, Transcranial methods
- Abstract
This work proposes an automated system for real-time, high-accuracy detection of cerebral embolic signals (ES) to couple with transcranial Doppler ultrasound (TCD) devices in diagnosing stroke risk. The algorithm employs an Adaptive Gain Control (AGC) approach to capture suspected ESs in real time. Then, the Adaptive Wavelet Packet Transform (AWPT) and Fast Fourier Transform (FFT) are used to extract the features that most efficiently represent ES, as determined by a Sequential Feature Selection technique. Feature vectors extracted from the suspected ESs are then classified as an ES or non-ES interval by an Adaptive Neuro-Fuzzy Inference System (ANFIS) based classifier. The effectiveness of the developed system was evaluated over 19 subjects undergoing procedures generating solid and gaseous emboli. The results showed that the proposed algorithm yielded 91.5% sensitivity, 90.0% specificity, and 90.5% accuracy. Cross-validations were performed 20 times on both the proposed algorithm and the High Dimensional Model Representation (HDMR) method (the most efficient algorithm to date) and their performances were compared. A paired t-test showed that the proposed algorithm outperformed the HDMR method in both detection accuracy [t(19, 0.01) = 132.2073, p ~ 0] and sensitivity [t(19, 0.01) = 131.4676, p ~ 0] at 90.0% specificity, suggesting promising potential as a medical support system for ES monitoring in various clinical settings. (A minimal AGC capture sketch appears below.)
- Published
- 2016
- Full Text
- View/download PDF
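A minimal sketch of one plausible AGC capture stage, under stated assumptions: track a slowly adapting estimate of background power and flag samples whose instantaneous power exceeds it by a gain threshold as suspected embolic events. The adaptation rate and threshold are illustrative, and the AWPT/FFT feature extraction and ANFIS classifier are omitted entirely.

import numpy as np

def agc_detect(signal, alpha=0.999, gain_db=13.0):
    # Return sample indices whose power exceeds the adaptive background floor.
    power = np.asarray(signal, dtype=float) ** 2
    floor = power[:200].mean()      # seed the background estimate
    thresh = 10 ** (gain_db / 10)   # linear power ratio for the dB threshold
    hits = []
    for i, p in enumerate(power):
        if p > thresh * floor:
            hits.append(i)          # suspected ES sample; do not adapt to it
        else:
            floor = alpha * floor + (1 - alpha) * p  # slow background update
    return hits

rng = np.random.default_rng(2)
x = rng.normal(0, 1, 5000)
x[2000:2040] += 8.0  # synthetic high-intensity transient standing in for an ES
idx = agc_detect(x)
print(len(idx), "suspected samples, first at", idx[0] if idx else None)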