Search Results (6,923 results)
2. An Overview of Machine Learning in Orthopedic Surgery: An Educational Paper.
- Author
- Padash, Sirwa, Mickley, John P., Vera Garcia, Diana V., Nugen, Fred, Khosravi, Bardia, Erickson, Bradley J., Wyles, Cody C., and Taunton, Michael J.
- Abstract
The growth of artificial intelligence, combined with the collection and storage of large amounts of data in the electronic medical record, has created an opportunity for orthopedic research and translation into the clinical environment. Machine learning (ML) is a type of artificial intelligence tool well suited for processing the large amount of available data. Specific areas of ML frequently used by orthopedic surgeons performing total joint arthroplasty include tabular data analysis (spreadsheets), medical imaging processing, and natural language processing (extracting concepts from text). Previous studies have discussed models able to identify fractures in radiographs, identify implant type in radiographs, and determine the stage of osteoarthritis based on walking analysis. Despite the growing popularity of ML, there are limitations, including its reliance on "good" data, potential for overfitting, long life cycle for creation, and ability to perform only one narrow task. This educational article provides a general overview of ML, discussing these challenges and including examples of successfully published models. [Display omitted] [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
3. State of the art paper: Cardiac computed tomography of the left atrium in atrial fibrillation.
- Author
- Bodagh, Neil, Williams, Michelle C., Vickneson, Keeran, Gharaviri, Ali, Niederer, Steven, and Williams, Steven E.
- Abstract
The clinical spectrum of atrial fibrillation means that a patient-individualized approach is required to ensure optimal treatment. Cardiac computed tomography can accurately delineate atrial structure and function and could contribute to a personalized care pathway for atrial fibrillation patients. The imaging modality offers excellent spatial resolution and has been utilised in pre-, peri- and post-procedural care for patients with atrial fibrillation. Advances in temporal resolution, acquisition times and analysis techniques suggest potential expanding roles for cardiac computed tomography in the future management of patients with atrial fibrillation. The aim of the current review is to discuss the use of cardiac computed tomography in atrial fibrillation in pre-, peri- and post-procedural settings. Potential future applications of cardiac computed tomography including atrial wall thickness assessment and epicardial fat volume quantification are discussed together with emerging analysis techniques including computational modelling and machine learning with attention paid to how these developments may contribute to a personalized approach to atrial fibrillation management. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. SMIAltmetric: A comprehensive metric for evaluating social media impact of scientific papers on Twitter (X).
- Author
- Wang, Zuzheng, Lu, Yongxu, Zhou, Yuanyuan, and Ji, Jiaojiao
- Subjects
- MACHINE learning, SOCIAL impact, SCHOLARLY communication, SOCIAL classes, ALTMETRICS, SOCIAL media
- Abstract
• A new indicator, SMIAltmetric, for measuring a scientific paper's social media impact. • Key impact factors: followers, retweets, mentions, and citations. • SMIAltmetric outperforms Altmetric in finer differentiation. The rise of social media has significantly influenced scholarly communication, knowledge dissemination, and research evaluation, leading to the enrichment of alternative metrics (altmetrics) that assess the social impact of academic papers through online activities, including reading, bookmarking, downloading, and commenting. However, these altmetrics often focus on the number of mentions on social media rather than thoroughly evaluating the source, content, and dissemination of these mentions. To address this gap, this study introduces the social media impact altmetric (SMIAltmetric), a comprehensive scoring system for evaluating scientific papers on Twitter (now "X"), built from 44,087 publications and 860,680 tweets (now "posts") and using diverse features, including literature-related, social media engagement-related, user-related, and content-related features. Employing Altmetric Attention Scores (AAS) as labels, we tested eight machine learning algorithms, with XGBoost demonstrating the highest accuracy at 0.8672. Crucial factors influencing SMIAltmetric, as identified by the SHAP value, were followers, retweets, mentions, and citations. Furthermore, consistency analysis and convergent validation between the proposed SMIAltmetric and AAS confirm the reliability and finer differentiation of SMIAltmetric. The proposed SMIAltmetric provides a more comprehensive understanding of a paper's social media impact, enhancing the evaluation of scientific discourse and its engagement with society. [ABSTRACT FROM AUTHOR]
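The scoring pipeline this abstract describes, a gradient-boosted model trained on engagement features with an importance analysis on top, can be sketched as follows. This is a minimal illustration on synthetic data: scikit-learn's GradientBoostingRegressor and permutation_importance stand in for the paper's XGBoost and SHAP, and the four feature names simply mirror the factors the abstract highlights.

```python
# Sketch of an SMIAltmetric-style pipeline: predict an attention score from
# social-media features, then rank the features by importance.
# GradientBoostingRegressor and permutation_importance are stand-ins for the
# paper's XGBoost and SHAP; data and feature names are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
features = ["followers", "retweets", "mentions", "citations"]
X = rng.normal(size=(n, len(features)))
# assumed linear ground truth: followers matter most, citations least
y = (3.0 * X[:, 0] + 2.0 * X[:, 1] + 1.0 * X[:, 2] + 0.5 * X[:, 3]
     + rng.normal(scale=0.1, size=n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("test R2:", round(model.score(X_te, y_te), 3))

imp = permutation_importance(model, X_te, y_te, n_repeats=5, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:10s} {score:.3f}")
```

On this synthetic setup the importance ranking recovers the coefficient order; on real data it would reflect whatever the fitted model actually uses.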
- Published
- 2024
- Full Text
- View/download PDF
5. Late Middle Ages watermarked Italian paper: A Machine Learning spatial-temporal approach.
- Author
- Teodonio, Lorenzo, Scatigno, Claudia, Missori, Mauro, and Festa, Giulia
- Subjects
- WATERMARKS, CATALOGS, MACHINE learning, MIDDLE Ages, FOURIER transform infrared spectroscopy, PAPER mills
- Abstract
• Machine Learning to classify watermarked Italian paper. • XRF and FTIR spectroscopies to study Late Middle Ages Italian watermarked paper. • A new spatial-temporal approach to trace the local recipes and the geographical paper mills' production. • Bergamo and Bologna stand out as marked watermarked Italian paper mills. Manuscripts, illuminated codices, books, documents and letters are composite materials, traces of the past dating back to the invention of writing. In this context, dating is one of the most important pieces of information for document attribution, and watermarked papers can serve as markers for studying their spatial-temporal distribution. In the Late Middle Ages, Italy's most important paper mill centre was located in the town of Fabriano (Marche region, Italy). Here, a selection of ten Italian Late Middle Ages watermarked papers belonging to the Corpus Chartarum Italicarum (Corpus of Italian papers) is characterised by elemental and molecular spectroscopies, and the collected data are analysed by Machine Learning (ML) to trace the local fabrication recipes and the geographical paper mills' production. Data from portable X-ray fluorescence and Fourier transform infrared spectroscopy were analysed through Support Vector Machine, Soft Independent Modeling of Class Analogy and Moving Blocks methods. This innovative ML spatial-temporal approach, based on keeping the temporal variable fixed, is used to find elemental benchmarks for classifying the watermarked Italian notarial catalogue of the Late Middle Ages, finding differences in the local recipes and studying the homogeneity in the paper mills' production. Results show that watermarked paper from Northern Italy, from the towns of Strozza, Piacenza and Bergamo, as well as Bologna, presents a high elemental and molecular homogeneity, which indicates that the hand-made processing technique could be the same, helped by the proximity of the three cities, a starting point for technology exchange or influence.
No heavy metals are found in the watermarked paper and K, Ca, Fe and Zn are identified as elemental benchmarks. Finally, Ca, Ti, Mn, Cr and Fe are particularly present on the edges of the watermarked papers. [Display omitted] [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
6. Content-based quality evaluation of scientific papers using coarse feature and knowledge entity network.
- Author
- Wang, Zhongyi, Zhang, Haoxuan, Chen, Haihua, Feng, Yunhe, and Ding, Junhua
- Subjects
- MACHINE learning, SCIENCE education, COMPUTER science, PEER pressure, RANDOM forest algorithms
- Abstract
Pre-evaluating scientific paper quality aids in alleviating peer review pressure and fostering scientific advancement. Although prior studies have identified numerous quality-related features, their effectiveness and representativeness of paper content remain to be comprehensively investigated. Addressing this issue, we propose a content-based interpretable method for pre-evaluating the quality of scientific papers. Firstly, we define quality attributes of computer science (CS) papers as integrity, clarity, novelty, and significance, based on peer review criteria from 11 top-tier CS conferences. We formulate the problem as two classification tasks: Accepted/Disputed/Rejected (ADR) and Accepted/Rejected (AR). Subsequently, we construct fine-grained features from metadata and knowledge entity networks, including text structure, readability, references, citations, semantic novelty, and network structure. We empirically evaluate our method using the ICLR paper dataset, achieving optimal performance with the Random Forest model, yielding F1 scores of 0.715 and 0.762 for the two tasks, respectively. Through feature analysis and case studies employing SHAP interpretable methods, we demonstrate that the proposed features enhance the performance of machine learning models in scientific paper quality evaluation, offering interpretable evidence for model decisions. • Define four criteria for quality evaluation of scientific papers: integrity, clarity, novelty, and significance. • Propose a framework for quality evaluation of scientific papers based on coarse features and knowledge entity network. • An effective algorithm for measuring the novelty and significance of scientific papers based on knowledge entity networks. • Create and release a rigorous dataset, which could serve as the gold standard for quality evaluation of scientific papers. • Conduct extensive experiments to validate the effectiveness of the proposed framework. [ABSTRACT FROM AUTHOR]
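As a rough sketch of the Accepted/Rejected (AR) task described above, the snippet below trains a Random Forest on numeric paper features and scores it with F1, matching the model and metric named in the abstract. The five features and the acceptance rule are synthetic placeholders, not the ICLR dataset.

```python
# Sketch of an Accepted/Rejected (AR) classifier: a Random Forest over
# numeric paper features, scored with F1 as in the abstract.
# Features and labels here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 5))  # e.g. readability, reference count, novelty, ...
logits = 2.0 * X[:, 0] - 1.5 * X[:, 1] + X[:, 2]
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)  # 1 = accepted

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
print("F1:", round(f1_score(y_te, clf.predict(X_te)), 3))
```

The three-class ADR variant would only change the label column and use a macro-averaged F1.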
- Published
- 2024
- Full Text
- View/download PDF
7. Textual features of peer review predict top-cited papers: An interpretable machine learning perspective.
- Author
- Sun, Zhuanlan
- Subjects
- MACHINE learning, ARTIFICIAL intelligence, TECHNICAL reports, PEER communication, THEATER reviews, EMOTION recognition
- Abstract
• A framework combining machine learning models and SHAP to interpret how peer review improves research impact. • The importance of key linguistic, sentiment, and peer review features from peer review reports in determining the scientific significance of papers. • Valuable insights for authors to improve the quality of their work and increase academic influence by paying closer attention to peer review characteristics. • Textual features of peer review reports play an important role in predicting post-publication scientific impact. Peer review is crucial in improving the quality and reliability of scientific research. However, the mechanisms through which peer review practices ensure papers become top-cited papers (TCPs) after publication are not well understood. In this study, by collecting a data set containing 13,066 papers published between 2016 and 2020 in Nature Communications with open peer review reports, we aim to examine how textual features embedded within the peer review reports of papers that reflect the reviewers' emotions may predict the papers to be TCPs. We compiled a list of 15 textual features and classified them into three categories: peer review features, linguistic features, and sentiment features. We then chose the XGBoost machine learning model with the best performance in predicting TCPs, and utilized the explainable artificial intelligence technique SHAP to interpret the role of feature importance in the prediction results. The distribution of feature importance ranking results demonstrates that sentiment features play a crucial role in determining papers' potential to be highly cited. This conclusion still holds even when the ranking of the feature importance changes in the subgroup analysis of dividing the samples into four disciplines (biological sciences, health sciences, physical sciences, and earth and environmental sciences), as well as two groups based on whether reviewers' identities were revealed. This research emphasizes that textual features retrieved from peer review reports, which play a role in improving manuscript quality, can predict post-publication research impact. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Cigarette paper as evidence: Forensic profiling using ATR-FTIR spectroscopy and machine learning algorithms.
- Author
- Kapoor, Muskaan, Sharma, Akanksha, and Sharma, Vishal
- Subjects
- CIGARETTES, FORENSIC sciences, FOURIER transform infrared spectroscopy, MACHINE learning, ALGORITHMS
- Abstract
This research highlights the underestimated significance of cigarette paper as evidence at crime scenes. The primary objective is to distinguish cigarette paper from similar-looking alternatives, addressing the first research objective. The second objective involves identifying cigarette paper brands using attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy and machine learning (ML) algorithms. Accurate differentiation of cigarette paper from normal paper is emphasized. ATR-FTIR spectroscopy, coupled with principal component analysis (PCA) for dimensionality reduction, is employed for brand identification. Among fifteen ML algorithms compared, the CatBoost classifier excels for both objectives. This research presents a non-destructive, effective method for studying cigarette paper, contributing valuable insights to crime scene investigations. [Display omitted] • Forensic evaluation of cigarette paper utilizing ATR-FTIR spectroscopy and Machine learning algorithms. • Peak characterization and differentiation-distinguishing cigarette paper from other types. • Machine learning algorithm comparison: assessing discrimination across nine cigarette brands. • External validation of the dominant algorithm using unknown samples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Machine learning-assisted wide-gamut fluorescence visual test paper for propazine determination in fish and seawater samples.
- Author
- Liu, Hua, You, Jinjie, Liu, Chenxi, Zhang, Zeming, Sun, Aili, Hao, Guijie, and Shi, Xizhi
- Subjects
- FLUORESCENCE, SEAWATER, SUPPORT vector machines, FLUORESCENCE quenching, IMPRINTED polymers
- Abstract
Molecularly imprinted polymer quantum dots (MIP-QDs) with fluorescence quenching ability toward propazine were synthesized for propazine detection. b(blue)-MIP-QDs were prepared using ZnCdS/ZnS QDs via reverse micro-emulsion, whereas r(red)-MIP-QDs were synthesized using CdSe/ZnS QDs. By utilizing graphene quantum dots (GQDs) as a stable fluorescence intensity reference, the wide-gamut fluorescence test paper was constructed on the basis of mixing b-MIP-QDs, r-MIP-QDs, and GQDs under the optimal ratio. When analyzing spiked propazine in fish and seawater samples using the test paper, satisfactory recoveries of 104.0 %–114.6 % and 92.0 %–96.4 % were obtained, with corresponding limits of detection of 5.0 μg/kg and 1.0 μg/L, respectively. The RGB extractor was utilized to extract the actual fluorescence color and construct a dataset consisting of R, G, and B values, as well as concentration data from 400 samples. An SVR model (Python 3.9.7) was used to analyze the concentration and feature data. After optimization, the constructed model achieved a correlation coefficient of 0.98 and an RMSE of only 1.81, indicating high prediction accuracy and excellent generalization ability that meet quenching prediction requirements. As an intelligent and rapid detection method, this model holds significant practical value. [Display omitted] • A wide-gamut fluorescence visual test paper was fabricated. • The visual test paper demonstrates excellent linearity, accuracy, precision, and sensitivity for propazine detection. • An RGB model based on support vector regression was constructed. • The model exhibits high accuracy and strong generalization ability. [ABSTRACT FROM AUTHOR]
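The final modelling step, regressing analyte concentration on the extracted R, G, and B values with support vector regression, can be sketched as below. The linear colour response and noise levels are assumptions made purely for illustration; only the SVR-on-RGB approach follows the abstract.

```python
# Sketch of the RGB-to-concentration step: support vector regression maps
# (R, G, B) readings to analyte concentration. The linear colour response
# simulated below is an assumption for illustration only.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
conc = rng.uniform(0, 100, size=300)                    # synthetic, in ug/L
R = 200 - 1.2 * conc + rng.normal(scale=1.0, size=300)  # red channel quenches
G = 80 + 0.4 * conc + rng.normal(scale=1.0, size=300)
B = 150 - 0.6 * conc + rng.normal(scale=1.0, size=300)
X = np.column_stack([R, G, B])

model = SVR(kernel="linear", C=100.0, epsilon=0.1).fit(X, conc)
pred = model.predict(X)
rmse = float(np.sqrt(np.mean((pred - conc) ** 2)))
corr = float(np.corrcoef(pred, conc)[0, 1])
print("correlation:", round(corr, 3), "RMSE:", round(rmse, 2))
```

In practice the fit would be evaluated on a held-out split; it is done in-sample here only to keep the sketch short.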
- Published
- 2024
- Full Text
- View/download PDF
10. Nonparametric functional analysis under joint estimation with applications to identifying highly cited papers.
- Author
- Chowdhury, K.P.
- Subjects
- MACHINE learning, ARTIFICIAL intelligence, FUNCTIONAL analysis, ELECTRONIC data processing, STATISTICAL significance, CITATION indexes, SCIENTOMETRICS
- Abstract
This article introduces a nonparametric methodology combining the strengths of binary regression and latent variable formulations, while overcoming their disadvantages. The mathematical results are implemented through a novel Bayesian hierarchical estimation methodology called the Latent Adaptive Hierarchical Expectation Maximization Like algorithm. Requiring minimal assumptions, it extends extant methodologies, and in simulation studies gives better prediction and inference performance for asymmetric data generating processes. A new classification statistic, called the Adjusted Receiver Operating Curve Statistic, is also introduced. Utilizing it, we demonstrate better overall model fit, inference and prediction performance of the proposed methodology over widely used existing methods in the sciences. In addition, the methodology can be used to perform model diagnostics for any model specification. This is a highly useful result, and it extends existing work for categorical model diagnostics broadly across the sciences. Furthermore, the mathematical results also highlight important new findings regarding the interplay of statistical significance and scientific significance. Finally, the methodology is applied to identifying highly-cited papers in the social sciences in a joint estimation framework. The results indicate that the methodology outperforms widely used existing artificial intelligence and machine learning models with very few Monte Carlo iterations. In the scientometric application, it finds the Journal Impact Factor to be more important than Keyword Popularity parameters for explaining citation outcomes in select social science fields. It further finds that the percentage change in Published Popularity may also help to explain citation outcomes in the field. The findings appear to be new to the scientometric field. • Introduces a new statistical method for regressions requiring few assumptions. • Outperforms existing artificial intelligence/machine learning algorithms. • Introduces ARS: a general statistic for prediction/comparison of models. • Applies methodology to identify highly-cited papers in the social sciences. • JIF more important than keywords for getting cited in some social science fields. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. Primate eye tracking with carbon-nanotube-paper-composite based capacitive sensors and machine learning algorithms.
- Author
- Li, Tianyi, Sakthivelpathi, Vigneshwar, Qian, Zhongjie, Soetedjo, Robijanto, and Chung, Jae-Hyun
- Subjects
- PROXIMITY detectors, EYE tracking, CAPACITIVE sensors, MACHINE learning, EYE movements
- Abstract
Accurate real-time eye tracking is crucial in oculomotor system research. While the scleral search coil system is the gold standard, its implantation procedure and bulkiness pose challenges. Camera-based systems are affected by ambient lighting and require high computational and electric power. This study presents a novel eye tracker using proximity capacitive sensors made of carbon-nanotube-paper-composite (CPC). These sensors detect femtofarad-level capacitance changes caused by primate corneal movement during horizontal and vertical eye rotations. Data processing and machine learning algorithms are evaluated to enhance the accuracy of gaze angle prediction. The system performance is benchmarked against the scleral coil during smooth pursuits, saccades tracking, and fixations. The eye tracker demonstrates up to 0.97 correlation with the coil in eye tracking and is capable of estimating gaze angle with a median absolute error as low as 0.30°. The capacitive eye tracker demonstrates good consistency and accuracy in comparison to the gold-standard scleral search coil method. This lightweight, non-invasive capacitive eye tracker offers potential as an alternative to traditional coil and camera-based systems in oculomotor research and vision science. [Display omitted] • Novel capacitive proximity sensors are evaluated for non-human primate eye tracking. • The sensors demonstrate a high correlation (up to 0.97) with the scleral coil system. • Machine learning estimates the gaze with a median absolute error as low as 0.30°. • Show promise as a non-contact, low-cost, and low-power eye tracking tool. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Detection of SARS-CoV-2 using machine learning-enabled paper-assisted ratiometric fluorescent sensors based on target-induced magnetic DNAzyme.
- Author
- Wang, Wenhai, Luo, Lun, Li, Yanmei, Hong, Bin, Ma, Yi, Kang, Keren, and Wang, Jufang
- Subjects
- DEOXYRIBOZYMES, SARS-CoV-2, ARTIFICIAL vision, MACHINE learning, DETECTORS, TARGETED drug delivery
- Abstract
The development of an advanced analytical platform with regard to SARS-CoV-2 is crucial for public health. Herein, we present a machine learning platform based on paper-assisted ratiometric fluorescent sensors for highly sensitive detection of the SARS-CoV-2 RdRp gene. The assay involves target-induced rolling circle amplification to generate magnetic DNAzyme, which is then detectable using the paper-assisted ratiometric fluorescent sensor. This sensor detects the SARS-CoV-2 RdRp gene with a visible-fluorescence color response. Moreover, leveraging different fluorescence responses, the ResNet algorithm of machine learning assists in accurately identifying fluorescence images and differentiating the concentration of the SARS-CoV-2 RdRp gene with over 99% recognition accuracy. The machine learning platform exhibits exceptional sensitivity and color responsiveness, achieving a limit of detection of 30 fM for the SARS-CoV-2 RdRp gene. The integration of intelligent artificial vision with the paper-assisted ratiometric fluorescent sensor presents a novel approach for the on-site detection of COVID-19 and holds potential for broader use in disease diagnostics in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Enhancing pathogen identification in cheese with high background microflora using an artificial neural network-enabled paper chromogenic array sensor approach.
- Author
- Jia, Zhen, Lin, Zhuangsheng, Luo, Yaguang, Cardoso, Zachary A., Wang, Dayang, Flock, Genevieve H., Thompson-Witrick, Katherine A., Yu, Hengyong, and Zhang, Boce
- Subjects
- SENSOR arrays, ARRAIGNMENT, ESCHERICHIA coli O157:H7, PATHOGENIC bacteria, SALMONELLA enteritidis, IDENTIFICATION
- Abstract
Biohazards, which may occur at all supply chain stages, pose significant threats to food safety and public health. Addressing these concerns and enhancing food safety necessitates a nondestructive pathogen surveillance approach capable of continuously and simultaneously detecting multiple pathogens. Detecting and differentiating low concentrations of pathogenic bacteria amid high background microflora levels in foods is challenging, requiring technology with high sensitivity and robust discriminatory capability. This study introduces an artificial neural network-driven paper chromogenic array sensor (ANN-PCA) technique developed for the nondestructive, continuous, and simultaneous detection of Salmonella Enteritidis (SE) and Escherichia coli O157:H7 (Ec) from a high background microflora in shredded cheddar cheese. This method enables accurate detection of SE and Ec in monoculture and cocktail culture while distinguishing them from a high level of background microflora (∼7.5 log CFU/g), with accuracies ranging from 72 ± 11% to 92 ± 3%. In addition, SE and Ec were successfully identified at concentrations as low as 1 log CFU/g within one day, with an accuracy of 72 ± 11%. This approach exhibits promising potential for integration into a digitalized, smart, and resilient nondestructive surveillance system for real-time pathogen detection in foods throughout the supply chain without enrichment, incubation, or other sample preparation steps. [Display omitted] • Development of a machine learning-driven paper chromogenic array sensor approach. • Non-destructive, continuous, and simultaneous detection of multiple pathogens. • Detection of pathogenic bacteria from a high level of background microflora. • The approach provided potential detection for pathogens throughout the food supply chain. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Versatile photo-sensing ability of paper based flexible 2D-Sb0.3Sn0.7Se2 photodetector and performance prediction with machine learning algorithm.
- Author
- Rawal, Kuntesh, Devendrabhai, Patel Dixita, Pataniya, Pratik, Jain, Prince, Joshi, Anand, Solanki, G.K., and Tannarana, Mohit
- Subjects
- PHOTODETECTORS, K-nearest neighbor classification, STANDARD deviations, MACHINE performance, MACHINE learning, CHEMICAL peel
- Abstract
The present report demonstrates the application of Sb0.3Sn0.7Se2 single crystals as a paper-based flexible photodetector. Direct vapour transport-grown bulk crystals of Sb0.3Sn0.7Se2 have been converted to nanosheets by a chemical-assisted exfoliation process. The paper-based photodetector is fabricated and its switching action is studied. Tuning of the photodetector has been carried out by 670 nm laser illumination at different bias voltages. Temporal photo-response is also studied under polychromatic light with different intensities in vacuum and in the open environment. The low-temperature stability of the photodetector has been studied for the temperature range 300 K–180 K. Experimental results are obtained in terms of time-resolved photocurrent under different illumination and atmospheric conditions. The flexibility and stability of the fabricated detector are also examined in detail. Overall, the results suggest the application of Sb0.3Sn0.7Se2 as a versatile flexible photodetector. Additionally, a machine learning (ML) model is trained and tested using an experimental photocurrent dataset that has a complex material design with variations in time, bias voltage, intensity, and temperature. The k-nearest neighbor algorithm exhibited outstanding performance, achieving the highest R2 value of 0.9986 when applied to a temperature dataset, with a test size of 0.4. Performance metrics such as mean absolute error and root mean squared error at various test sizes ranging from 0.4 to 0.6 are used to assess the model's accuracy and robustness under changing conditions. This comprehensive analysis not only establishes a platform for future experimental optimization of photodetector materials but also underscores the efficacy of ML regression techniques in developing high-performance photodetectors. [Display omitted] • A chemical-assisted exfoliation process was used for the synthesis of Sb0.3Sn0.7Se2 nanosheets. • Synthesized nanosheets were deposited on paper for the fabrication of the flexible photodetector. • The photodetection application was demonstrated under different atmospheric conditions and illuminations. • Low-temperature stability of the detector was also studied. • A KNN machine learning model was applied to the experimental data for the prediction of photodetection behavior. [ABSTRACT FROM AUTHOR]
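The regression step described above, k-nearest neighbours predicting photocurrent from time, bias voltage, intensity, and temperature with a 0.4 test split, can be sketched as follows. The response surface is synthetic, and the feature standardisation is an added assumption: KNN is distance-based, so unscaled inputs would let the kelvin-scale temperature dominate the neighbour search.

```python
# Sketch of KNN regression on a photocurrent-style dataset with a 0.4 test
# split, as in the abstract. The smooth response surface below is synthetic;
# features are standardised so no single unit scale dominates the distances.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
time_s = rng.uniform(0, 60, n)
bias_v = rng.uniform(0, 5, n)
intensity = rng.uniform(0, 10, n)
temp_k = rng.uniform(180, 300, n)
X = np.column_stack([time_s, bias_v, intensity, temp_k])
# assumed dependence of photocurrent on drive parameters (arbitrary units)
y = bias_v * intensity * (1 + 0.002 * (temp_k - 180)) + 0.01 * time_s

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=3)
knn = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))
knn.fit(X_tr, y_tr)
print("test R2:", round(knn.score(X_te, y_te), 4))
```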
- Published
- 2024
- Full Text
- View/download PDF
15. Surveillance of pathogenic bacteria on a food matrix using machine-learning-enabled paper chromogenic arrays.
- Author
- Jia, Zhen, Luo, Yaguang, Wang, Dayang, Holliday, Emma, Sharma, Arnav, Green, Madison M., Roche, Michelle R., Thompson-Witrick, Katherine, Flock, Genevieve, Pearlstein, Arne J., Yu, Hengyong, and Zhang, Boce
- Subjects
- PATHOGENIC bacteria, SALMONELLA, ESCHERICHIA coli, FOOD pathogens, SENSOR arrays, FOOD safety, MACHINE learning, ESCHERICHIA coli O157:H7, FOOD microbiology
- Abstract
Global food systems can benefit significantly from continuous monitoring of microbial food safety, a task for which tedious operations, destructive sampling, and the inability to monitor multiple pathogens remain challenging. This study reports significant improvements to a paper chromogenic array sensor - machine learning (PCA-ML) methodology that senses concentrations of volatile organic compounds (VOCs) emitted on a species-specific basis by pathogens, streamlining dye selection, sensor fabrication, database construction, and machine learning and validation. This approach enables noncontact, time-dependent, simultaneous monitoring of multiple pathogens (Listeria monocytogenes, Salmonella, and E. coli O157:H7) at levels as low as 1 log CFU/g with over 90% accuracy. The report provides theoretical and practical frameworks demonstrating that chromogenic response, including limits of detection, depends on time integrals of VOC concentrations. The paper also discusses the potential for implementing PCA-ML in the food supply chain for different food matrices and pathogens, with species- and strain-specific identification. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Comment on papers using machine learning for significant wave height time series prediction: Complex models do not outperform auto-regression.
- Author
- Jiang, Haoyu, Zhang, Yuan, Qian, Chengcheng, and Wang, Xuan
- Subjects
- ARTIFICIAL neural networks, TIME series analysis, PREDICTION models, ARTIFICIAL intelligence, MACHINE learning, DECOMPOSITION method
- Abstract
• Five Machine Learning (ML) models compared for wave height time series prediction. • Complex ML models do not outperform simple AR in wave height time series prediction. • Comment on related papers: signal decomposition applied to test set series is WRONG. Significant Wave Height (SWH) is crucial in many aspects of ocean engineering. Accurate prediction of SWH is therefore of immense practical value. Recently, Artificial Intelligence (AI) time series prediction methods have been widely used for single-point short-term SWH time-series forecasting, resulting in many AI-based models claiming to achieve good results. However, the extent to which these complex AI models can outperform traditional methods has largely been overlooked. This study compared five different models - AutoRegressive (AR), eXtreme Gradient Boosting (XGB), Artificial Neural Network (ANN), Long Short-Term Memory (LSTM), and WaveNet - for their performance on SWH time series prediction at 16 buoy locations. Surprisingly, the results suggest that the differences in performance among the models are negligible, indicating that all these AI models have only "learned" linear auto-regression from the data. Additionally, we noticed that many recent studies used signal decomposition methods for such time series prediction, and most of them decomposed the test sets, which is WRONG. [ABSTRACT FROM AUTHOR]
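The comment's baseline, a plain auto-regressive model fit by least squares, is simple enough to write out directly. The sketch below fits an AR(p) model with numpy and recovers the coefficients of a synthetic AR(2) series standing in for a wave-height record; nothing here uses the buoy data from the study.

```python
# Minimal AR(p) baseline of the kind the comment advocates: fit the
# coefficients by ordinary least squares on lagged values, then forecast
# one step ahead. The series is a synthetic AR(2) process, not buoy data.
import numpy as np

def fit_ar(y, p):
    """Least-squares AR(p) fit; returns [intercept, a1, ..., ap]."""
    Y = y[p:]
    lags = [y[p - i: len(y) - i] for i in range(1, p + 1)]
    X = np.column_stack([np.ones(len(Y))] + lags)
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef

def forecast_one_step(y, coef):
    """Predict the next value from the last p observations."""
    p = len(coef) - 1
    return float(coef[0] + np.dot(coef[1:], y[-1: -p - 1: -1]))

# Synthetic stand-in for a wave-height series: AR(2) with known coefficients.
rng = np.random.default_rng(4)
n = 5000
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] + 0.3 * y[t - 2] + rng.normal(scale=0.1)

coef = fit_ar(y, p=2)
print("estimated AR coefficients:", np.round(coef[1:], 3))
print("one-step forecast:", round(forecast_one_step(y, coef), 3))
```

A fair comparison against LSTM-style models would fit this on the training span only and score one-step forecasts on an untouched test span, which is exactly the protocol the comment says decomposition-based studies violate.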
- Published
- 2024
- Full Text
- View/download PDF
17. Addressing the sample volume dependency of the colorimetric glucose measurement on microfluidic paper-based and thread/paper-based analytical devices using a novel low-cost analytical viewpoint.
- Author
- Derakhshani, Mohammad, Jahanshahi, Amir, and Ghourchian, Hedayatollah
- Subjects
- IMAGE processing, TIME measurements
- Abstract
[Display omitted] • A novel analytical technique for colorimetric glucose determination on µPADs. • Non-invasive sweat glucose detection independent of sample volume. • Seamless operation: no extra gadgets nor smartphone camera is required. • Computationally inexpensive readout on portable Arm platforms demonstrated. The colorimetric method is widely exploited for glucose measurement on microfluidic paper-based and microfluidic thread/paper-based analytical devices, μPADs and μTPADs, respectively. However, two significant challenges still hinder the real-world applications: the variation of the generated color based on sample volume variation, and the consistent dependency on an imaging camera - usually a smartphone camera - for color readout. The latter also suffers from multiple limitations, such as the variable ambient light conditions and imaging variation among different smartphone brands and models. This manuscript suggests a new device demonstrating a novel viewpoint to the colorimetric method to address both aforementioned challenges. The presented device, Volume-Independent Autonomous box (VIA box), continuously monitors the transient color of the µPAD upon the introduction of the sample till color stabilization, in contrast to similar studies where merely the final stabilized color is read. The interpretation of the transient color profile with respect to time results in the measurement of the glucose level independent of sample volume. In addition, the VIA box monitors µPAD's color using an ordinary RGB sensor instead of an imaging camera in similar studies. Since merely RGB values are recorded in any measurement instance, the computational cost is extremely low compared to the relevant literature where image processing techniques are used. VIA box exploits a simple low-end microcontroller to continuously monitor the transient color, interpret the color profile, and show the glucose level. 
The samples used in this manuscript are artificial sweat samples covering the glucose range known to occur in real sweat. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. Predicting moisture penetration dynamics in paper with machine learning approach.
- Author
-
Alzweighi, Mossab, Mansour, Rami, Maass, Alexander, Hirn, Ulrich, and Kulachenko, Artem
- Subjects
- *MACHINE learning, *FEEDFORWARD neural networks, *HYGROTHERMOELASTICITY, *MACHINE dynamics, *RECURRENT neural networks, *MOISTURE - Abstract
• A machine learning approach was used to investigate moisture penetration in paper materials. • The study evaluated the capabilities of both the Feedforward Neural Network (FNN) and the Recurrent Neural Network (RNN). • Numerically generated data was employed for network training. • A continuum model incorporating anisotropic properties, creep behavior, viscoelasticity, and moisture dependency was used for simulations. • The RNN demonstrated superior predictive capability compared to the FNN. In this work, we predicted the gradient of the deformational moisture dynamics in a sized commercial paper by observing the curl deformation in response to one-sided water application. The deformational moisture is the part of the applied liquid that ends up in the fibers, causing swelling and a subsequent mechanical response of the entire fiber network structure. The adopted approach combines traditional experimental procedures, advanced machine learning techniques, and continuum modeling to provide insights into a complex phenomenon relevant to ink-jet digital printing, where sized and coated paper is often used and thus not all of the applied moisture reaches the fibers. Key material properties, including elasticity, plastic parameters, viscoelasticity, creep, moisture-dependent behavior, and hygroexpansion coefficients, are identified through extensive testing, providing vital data for subsequent simulation using a continuum model. Two machine learning models, a Feedforward Neural Network (FNN) and a Recurrent Neural Network (RNN), are probed in this study. Both models are trained exclusively on numerically generated moisture profile histories, showcasing the value of such data in contexts where experimental data acquisition is challenging.
These two models are subsequently utilized to predict moisture profile history from curl experimental measurements, with the RNN demonstrating superior accuracy due to its ability to account for temporal dependencies. The predicted moisture profiles are used as inputs for the continuum model to simulate the associated curl response, which is compared to an experiment representing "never seen" data. The comparison demonstrates the highly predictive capability of the RNN. This study melds traditional experimental methods and innovative machine learning techniques, providing a robust technique for predicting moisture gradient dynamics that can be used to optimize both the ink solution and the paper structure, achieving desirable printing quality with the lowest curl propensity during printing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Visible detection of chilled beef freshness using a paper-based colourimetric sensor array combining with deep learning algorithms.
- Author
-
Lin, Yuandong, Ma, Ji, Cheng, Jun-Hu, and Sun, Da-Wen
- Subjects
- *MACHINE learning, *DEEP learning, *SENSOR arrays, *PATTERN recognition systems, *MULTIVARIATE analysis, *FEATURE extraction - Abstract
• Qualitative and quantitative detection of amine gases could be achieved by CSA. • A visible detection of beef freshness using the amine-responsive CSA was proposed. • ResNet34 had the best performance for beef freshness detection based on CSA. • t-SNE could further visualize and help understand the classification process of DL. This study developed an innovative approach that combines a colourimetric sensor array (CSA) composed of twelve pH-responsive dyes with advanced algorithms, aiming to detect amine gases and assess the freshness of chilled beef. With the assistance of multivariate statistical analysis, the sensor array can effectively distinguish five amine gases, enable rapid quantification of trimethylamine vapour with a limit of detection (LOD) of 8.02 ppb, and visually monitor the freshness levels of chilled beef. Moreover, the utilization of deep learning models (ResNet34, VGG16, and GoogleNet) for chilled beef freshness evaluation achieved an overall accuracy of 98.0 %. Furthermore, t-distributed stochastic neighbour embedding (t-SNE) visualized the feature extraction process and provided explanations to understand the classification process of deep learning. The results demonstrated that applying deep learning techniques to the pattern recognition process of CSA can help realize the rapid, robust, and accurate assessment of chilled beef freshness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Enhanced 3-D asynchronous correlation data preprocessing method for Raman spectroscopy of Chinese handmade paper.
- Author
-
Yan, Chunsheng, Cheng, Zhongyi, Cao, Linquan, and Wen, Yingke
- Subjects
- *MACHINE learning, *RAMAN spectroscopy, *CONVOLUTIONAL neural networks, *HILBERT transform, *DATA structures - Abstract
[Display omitted] • 3D-ACM involves two rounds of Hilbert transform and tensor product operations. • It significantly enhances the equivalent frequency points and sample numbers. • The R-squared values for PLS-LR, KNN, RF and CNN models approach or equal 1. • These supervised models are comparable to unsupervised models such as PCA-LR. We have developed a novel 3D asynchronous correlation method (3D-ACM) designed for the classification and identification of Chinese handmade paper samples using Raman spectra and machine learning. The 3D-ACM approach involves two rounds of tensor product and Hilbert transform operations. In the tensor product process, the outer product of the spectral data from different samples within the same category is computed, establishing inner connections among all samples within that category. The Hilbert transform introduces a 90-degree phase shift, resulting in a true three-dimensional spectral data structure. This expansion significantly increases the number of equivalent frequency points and samples within each category, substantially boosting spectral resolution and revealing more hidden information within the spectral data. To maximize the potential of 3D-ACM, we employed six machine learning models: principal component analysis (PCA) with linear regression (LR), partial least squares (PLS) with LR, support vector machine (SVM) with LR, k-Nearest Neighbors (KNN), random forest (RF), and convolutional neural network (CNN). When applied to data preprocessed with 3D-ACM, the R-squared values of the PLS-LR, KNN, RF, and CNN supervised models approached or equaled 1, indicating exceptional performance comparable to unsupervised models such as PCA-LR. 3D-ACM stands as a versatile mathematical technique not confined to spectral data. It also eliminates the necessity for additional experimental setups or external control conditions, distinct from traditional two-dimensional correlation spectroscopy.
Moreover, it preserves the original experimental data, setting it apart from conventional data preprocessing methods. This positions 3D-ACM as a promising tool for future material classification and identification in conjunction with machine learning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Identifying potential breakthrough research: A machine learning method using scientific papers and Twitter data.
- Author
-
Li, Xin, Wen, Yang, Jiang, Jiaojiao, Daim, Tugrul, and Huang, Lucheng
- Subjects
MACHINE learning, RESOURCE allocation, DATA mining, DISRUPTIVE innovations, GREEN technology - Abstract
Breakthrough research may signal shifts in science, technology, and innovation systems. Early identification of breakthrough research is important not only for scientists, but also for policy makers and R&D experts in developing R&D strategies and allocating R&D resources. Researchers mostly use scientific paper data to identify potential breakthrough research, but they rarely make use of Twitter data related to scientific research or machine learning methods. Analysis of Twitter data is of great significance for understanding the public's perception of potential breakthrough research and for identifying it. Machine learning methods can assist us in predicting the trend of events by utilizing prior knowledge and experience. Therefore, this paper proposes a framework for identifying potential breakthrough research using machine learning methods with scientific papers and Twitter data. We select solar cells as a case study to verify the validity and flexibility of this framework. In this case, we use a machine learning method to discover potential breakthrough research from scientific papers, and we use Twitter data mining to analyze Twitter users' awareness of and response to the discovered potential breakthrough research, aiming to achieve a more extensive and diverse assessment of it. This paper contributes to identifying potential breakthrough research, as well as understanding the emergence and development of breakthrough research. It will be of interest to R&D experts in the field of solar cell technology. • We proposed a framework for identifying potential breakthrough research using machine learning methods. • We found 8 potential breakthrough research topics in the field of solar cell technology in 2015. • Twitter data mining could be used to assist in identifying potential breakthrough research. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
22. Machine learning-assisted photoluminescent sensor array based on gold nanoclusters for the discrimination of antibiotics with test paper.
- Author
-
Xu, Jinming, Chen, Xihang, Zhou, Huangmei, Zhao, Yu, Cheng, Yuchi, Wu, Ying, Zhang, Jie, Chen, Jinquan, and Zhang, Sanjun
- Subjects
- *GOLD clusters, *PHOTOLUMINESCENT polymers, *SENSOR arrays, *FISHER discriminant analysis, *ANTIBIOTIC residues, *ANTIBIOTICS - Abstract
Antibiotic residue accumulation in the environment endangers ecosystems and human health, so there is an urgent need for a facile and efficient strategy to detect antibiotics. Here, we report a photoluminescent sensor array based on protein-stabilized gold nanoclusters (AuNCs) for the detection of two families of antibiotics, tetracyclines and quinolones. The nanoclusters were synthesized with bovine serum albumin (BSA) and ovalbumin (OVA), respectively. They had different interactions with seven kinds of antibiotics and exhibited diverse photoluminescence (PL) responses, which were analyzed by linear discriminant analysis and ExtraTrees algorithms. The sensor array performed well in both classification and quantification of the seven antibiotics, and the quantitative results for all antibiotics achieved an R2 of no less than 0.99 at 0–100 μM when using suitable regression models. Additionally, the sensor array was able to distinguish antibiotic mixtures and multiple interfering substances, and it also kept 100% classification accuracy in river water samples. Moreover, test paper assisted by a smartphone was applied for quick detection of antibiotics, with good performance in both HEPES buffer and river water. These studies reveal great potential for the point-of-use analysis of antibiotics in environmental monitoring. [Display omitted] • The first nanocluster-based photoluminescent sensor array for quinolones is developed. • Seven antibiotics of the tetracycline and quinolone families are successfully identified. • The ExtraTrees algorithm is employed for antibiotics detection for the first time. • Test paper assisted by a smartphone is used to identify antibiotics in river water. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Paper-based multiplexed colorimetric biosensing of cardiac and lipid biomarkers integrated with machine learning for accurate acute myocardial infarction early diagnosis and prognosis.
- Author
-
Low, Joyce Siew Yong, Thevarajah, T. Malathi, Chang, Siow Wee, and Khor, Sook Mei
- Subjects
- *MYOCARDIAL infarction, *EARLY diagnosis, *LIPIDS, *MACHINE learning, *HIGH density lipoproteins, *PROGNOSIS - Abstract
This study demonstrates how a colorimetric biosensor based on microfluidic paper can swiftly diagnose a disease and predict its prognosis to triage patients effectively. This was the first biosensor to simultaneously quantify the gold-standard cardiac troponin (cTnI) and lipid biomarkers, including high-density lipoprotein (HDL) and low-density lipoprotein (LDL). Prior research encountered obstacles or limitations, such as measuring a single biomarker or total cholesterol, which cannot distinguish between LDL and HDL. CatBoost, an advanced machine learning (ML) ensemble technique used here for diagnosis, obtained an impressive area under the receiver operating characteristic curve (AUROC) of 0.97 ± 0.018 across all possible classification thresholds. Based on the interaction of multiple health parameters, the CatBoost framework also generated an AUROC of 0.897 ± 0.047 for the prognosis of recurrent acute myocardial infarction (AMI), demonstrating remarkably accurate AMI diagnosis and prognosis. In addition, this paper-based analytical device (µPAD) biosensor employs an electrophoretic method to overcome the challenges posed by non-specific adsorption. This is accomplished by isolating non-specific biomolecules based on differences in their isoelectric points and removing non-specifically adsorbed colorimetric markers. The limits of detection (LoD) for cTnI in AMI were lower than the respective clinical cutoff values. This study also demonstrated that the proposed ML framework produced significantly better results than conventional statistical analysis. High-correlation filtering and t-SNE dimensionality reduction were utilized for the limited number of data points. The respectable accuracy and AUROC of this method were also validated using cross-validation. [Display omitted] • First ML-integrated colorimetric biosensing for accurate AMI diagnosis and prognosis.
• Cardiac and lipid biomarkers simultaneous detection improves diagnostic reliability. • The electrophoretic method omits the washing steps to allow rapid diagnosis. • The new ML framework significantly enhances outcomes compared to statistical analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. An explainable artificial-intelligence-based approach to investigating factors that influence the citation of papers.
- Author
-
Ha, Taehyun
- Subjects
MACHINE learning, ARTIFICIAL intelligence, CITATION analysis, STRATEGIC planning, BIG data, BIBLIOMETRICS - Abstract
The number of citations is often used to estimate the impact of a study. Previous studies have investigated which factors of publications affect citations and how, but their findings failed to reach a consensus because of limited sample sizes, domains, and measurements. This study reviewed previous studies that addressed factors influencing citations and then identified 14 measurable factors. Approximately 33 million publications from the Scopus database were used to train and validate a CatBoost model. The SHAP framework was used to interpret the trained model, focusing on how salient factors affect the number of citations. The results showed that the publication year is a significant factor affecting citations, but not the top-priority one; the publication source emerged as the most important factor contributing to citations. Several implications and strategic approaches to maximizing the impact of a study were discussed. • This study examines 14 factors that can influence the citation of papers. • A CatBoost model and the Scopus dataset are used to examine the influences. • SHAP interprets the model and suggests how the factors contribute to citations. • The results show that selecting the right journal/conference is the most important factor. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
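The recipe this abstract reports (train a gradient-boosted model on publication factors, then attribute citation counts to individual factors) can be sketched in a few lines. This is not the study's code: the data is synthetic, scikit-learn's GradientBoostingRegressor stands in for CatBoost, and permutation importance stands in for SHAP values; the two factor names are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
n = 500
# Synthetic stand-ins for two of the study's factors (hypothetical data):
# publication-source quality and publication year.
source_quality = rng.random(n)
year = rng.integers(2000, 2022, size=n).astype(float)
noise = rng.normal(0, 0.1, n)
# Citation counts driven mostly by source quality, weakly by recency.
citations = 5.0 * source_quality + 0.05 * (year - 2000) + noise

X = np.column_stack([source_quality, year])
model = GradientBoostingRegressor(random_state=0).fit(X, citations)

# Attribute the model's predictions to each factor by permutation importance.
imp = permutation_importance(model, X, citations, n_repeats=10, random_state=0)
print(dict(zip(["source_quality", "year"], imp.importances_mean)))
```

On this synthetic setup the source-quality factor dominates, mirroring the abstract's finding that the publication source outweighs the year.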
25. Scientific papers citation analysis using textual features and SMOTE resampling techniques.
- Author
-
Umer, Muhammad, Sadiq, Saima, Missen, Malik Muhammad Saad, Hameed, Zahid, Aslam, Zahid, Siddique, Muhammad Abubakar, and NAPPI, Michele
- Subjects
- *CITATION analysis, *CONTENT analysis, *MACHINE learning, *SENTIMENT analysis, *PATTERN recognition systems, *USER-generated content - Abstract
• Explore qualitative aspects of citations to measure the influence of a research article. • Apply feature representation techniques in combination with machine learning models to find the sentiment of citations. • Classify citation instances as positive, negative, or neutral. • Analyze the efficacy of SMOTE in balancing the citation sentiment dataset. Ascertaining the impact of research is significant for the research community and academia across all disciplines. The only prevalent measure associated with the quantification of research quality is the citation count. Although the number of citations plays a significant role in academic research, citations can sometimes be biased or made only to discuss the weaknesses and shortcomings of the research. Considering the sentiment of citations and recognizing patterns in text can aid in understanding the opinion of the peer research community and will also help in quantifying the quality of research articles. Efficient feature representation combined with machine learning classifiers has yielded significant improvement in text classification, but the effectiveness of such combinations has not been analyzed for citation sentiment analysis. This study investigates pattern recognition using machine learning models in combination with frequency-based and prediction-based feature representation techniques, with and without the Synthetic Minority Oversampling Technique (SMOTE), on a publicly available citation sentiment dataset. The sentiment of citation instances is classified as positive, negative, or neutral. Results indicate that the Extra Trees classifier in combination with Term Frequency-Inverse Document Frequency achieved 98.26% accuracy on the SMOTE-balanced dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
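The pipeline this abstract reports (TF-IDF features, SMOTE balancing, an Extra Trees classifier) can be sketched as below. This is a minimal illustration on invented toy sentences, not the study's dataset or code, and the SMOTE step is hand-rolled (interpolating between minority samples) rather than taken from a library.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import ExtraTreesClassifier

# Toy citation sentences (hypothetical examples, not from the study's corpus).
texts = [
    "This seminal work greatly improved accuracy",      # positive
    "Building on this excellent method we extend it",   # positive
    "The approach fails on noisy data",                 # negative
    "Results are described in Section 2",               # neutral
    "Details follow the protocol of prior work",        # neutral
    "We adopt the dataset introduced earlier",          # neutral
]
labels = np.array([1, 1, -1, 0, 0, 0])  # minority class: -1 (negative)

X = TfidfVectorizer().fit_transform(texts).toarray()

def smote(X_min, n_new, rng):
    """Minimal SMOTE: interpolate between random pairs of minority samples
    (degenerates to duplicates when only one minority sample exists)."""
    idx = rng.integers(0, len(X_min), size=(n_new, 2))
    lam = rng.random((n_new, 1))
    return X_min[idx[:, 0]] + lam * (X_min[idx[:, 1]] - X_min[idx[:, 0]])

rng = np.random.default_rng(0)
X_new = smote(X[labels == -1], n_new=2, rng=rng)  # balance the negative class
X_bal = np.vstack([X, X_new])
y_bal = np.concatenate([labels, [-1, -1]])

clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X_bal, y_bal)
print(clf.score(X_bal, y_bal))  # training accuracy on the balanced set
```

In practice one would use a held-out test split and a library SMOTE implementation; the point here is only the shape of the feature-representation-plus-resampling pipeline.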
26. Tensor product based 2-D correlation data preprocessing methods for Raman spectroscopy of Chinese handmade paper.
- Author
-
Yan, Chunsheng, Luo, Si, Cao, Linquan, Cheng, Zhongyi, and Zhang, Hui
- Subjects
- *TENSOR products, *RAMAN spectroscopy, *SUPPORT vector machines, *K-nearest neighbor classification, *PRINCIPAL components analysis, *MACHINE learning - Abstract
[Display omitted] • The 2-D correlation methods do not require external perturbation variables. • They are pure mathematical methods that utilize the tensor product of spectral data. • The R2 values of KNN and RF for TDACM are close to 1, indicating nearly 100% improvement. The paper introduces two new methods, namely the cross-correlation method (CCM) and the two-dimensional correlation method (TDCM), for preprocessing Raman spectroscopy data for analyzing Chinese handmade paper samples. CCM expands the spectral dimension from 1 × N to 1 × (2N − 1) by taking the cross-correlation of two spectra of the same category. TDCM includes the two-dimensional synchronous correlation method (TDSCM) and the two-dimensional asynchronous correlation method (TDACM), which expand the spectral dimension from 1 × N to N × N by taking the tensor product of two spectra of the same category, and of one spectrum with the Hilbert transform of the other, respectively. The experimental data were preprocessed using baseline removal, CCM, TDSCM, and TDACM. Four machine learning models were employed to evaluate the effects of these methods: principal component analysis (PCA) combined with linear regression (LR), support vector machine (SVM) combined with LR, k-Nearest Neighbors (KNN), and random forest (RF). The results show that the R-squared values for the PCA model were nearly 1 for all types of data, indicating high accuracy. For the SVM-LR, KNN, and RF models, however, the R-squared values increased in the order of raw data, baseline-removal data, CCM, TDSCM, and TDACM preprocessed data. The R-squared values of the KNN and RF models for TDACM-preprocessed data approached 1, indicating that the accuracy of machine learning was significantly improved, by nearly 100%. This has led to a remarkable improvement in the accuracy of supervised models such as KNN and RF, bringing them closer to the level of unsupervised models such as PCA.
[ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
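The dimension expansions this abstract describes are plain tensor operations, sketched here on synthetic spectra (the Gaussian "spectra" and array sizes are illustrative assumptions, not the study's data): CCM takes 1×N to 1×(2N−1) via cross-correlation, and the synchronous/asynchronous maps take 1×N to N×N via outer products, the asynchronous variant using the Hilbert transform of one spectrum.

```python
import numpy as np
from scipy.signal import hilbert

# Two toy Raman spectra (1 x N) from the same sample category
# (synthetic Gaussian bands standing in for real measurements).
x = np.linspace(0, 1, 64)
s1 = np.exp(-((x - 0.3) ** 2) / 0.01)
s2 = np.exp(-((x - 0.7) ** 2) / 0.02)

# TDSCM: synchronous 2-D correlation = tensor (outer) product, 1xN -> NxN.
sync = np.outer(s1, s2)

# TDACM: asynchronous variant = outer product of one spectrum with the
# 90-degree phase-shifted (Hilbert-transformed) other spectrum.
async_ = np.outer(s1, np.imag(hilbert(s2)))

# CCM: cross-correlation expands 1xN to 1x(2N-1).
ccm = np.correlate(s1, s2, mode="full")

print(sync.shape, async_.shape, ccm.shape)  # (64, 64) (64, 64) (127,)
```

The expanded N×N arrays would then be flattened and fed to the classifiers named in the abstract (PCA-LR, SVM-LR, KNN, RF).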
27. Detection of poor controller tuning with Gramian Angular Field (GAF) and StackAutoencoder (SAE).
- Author
-
Memarian, Amirreza, Damarla, Seshu Kumar, Memarian, Alireza, and Huang, Biao
- Subjects
- *PAPER pulp, *PRODUCT quality, *OSCILLATIONS - Abstract
Efficient control loop performance is pivotal in process industries to ensure optimal production, maintain product quality, and adhere to regulatory standards. Poorly tuned controllers can disrupt these objectives, necessitating accurate detection methods. This paper introduces a novel approach for detecting poor controller tuning through advanced techniques: the Gramian Angular Field (GAF) and Stack Auto-Encoder (SAE). Unlike manual methods, this automated system promptly identifies poorly tuned controllers, offering real-time monitoring and timely alerts to operators. The proposed methodology is substantiated through two case studies: the ISDB dataset and the pulp and paper dataset. The outcomes illustrate that the proposed approach correctly determines the appropriate outcome for the majority of the analyzed control loops across diverse industries. • New method detects poorly tuned controllers via Gramian angular field and SAE. • PV and OP images help SAE distinguish poor tuning from other oscillation causes. • Transfer learning has been used to improve methodology's effectiveness. • Tested on benchmark control loops, yielding accurate verdicts for most cases. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. 3D plasmonic hexaplex paper sensor for label-free human saliva sensing and machine learning-assisted early-stage lung cancer screening.
- Author
-
Linh, Vo Thi Nhat, Kim, Hongyoon, Lee, Min-Young, Mun, Jungho, Kim, Yeseul, Jeong, Byeong-Ho, Park, Sung-Gyu, Kim, Dong-Ho, Rho, Junsuk, and Jung, Ho Sang
- Subjects
- *PLASMONICS, *MACHINE learning, *EARLY detection of cancer, *MEDICAL screening, *LUNG cancer, *SERS spectroscopy - Abstract
A label-free detection method for noninvasive biofluids enables rapid on-site disease screening and early-stage cancer diagnosis by analyzing metabolic alterations. Herein, we develop three-dimensional plasmonic hexaplex nanostructures coated on a paper substrate (3D-PHP). This flexible and highly absorptive 3D-PHP sensor is integrated with a commercial saliva collection tube to create an efficient on-site sensing platform for lung cancer screening via surface-enhanced Raman scattering (SERS) measurement of human saliva. The multispike hexaplex-shaped gold nanostructure enhances contact with the viscous saliva, enabling effective sampling and SERS enhancement. Through testing of patient salivary samples, the 3D-PHP sensor demonstrates successful lung cancer detection and diagnosis. A logistic regression-based machine learning model successfully classifies benign and malignant patients, exhibiting high clinical sensitivity and specificity. Additionally, important Raman peak positions related to different lung cancer stages are investigated, suggesting insights for early-stage cancer diagnosis. Integrating the 3D-PHP sensor with the conventional saliva collection tube platform is expected to offer promising practicality for rapid on-site disease screening and diagnosis, and significant advancements in cancer detection and patient care. [Display omitted] [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Choice modelling in the age of machine learning - Discussion paper.
- Author
-
van Cranenburgh, Sander, Wang, Shenhao, Vij, Akshay, Pereira, Francisco, and Walker, Joan
- Subjects
MACHINE learning, POLLINATION - Abstract
Since its inception, the choice modelling field has been dominated by theory-driven modelling approaches. Machine learning offers an alternative data-driven approach for modelling choice behaviour and is increasingly drawing interest in our field. Cross-pollination of machine learning models, techniques and practices could help overcome problems and limitations encountered in the current theory-driven modelling paradigm, such as subjective labour-intensive search processes for model selection, and the inability to work with text and image data. However, despite the potential benefits of using the advances of machine learning to improve choice modelling practices, the choice modelling field has been hesitant to embrace machine learning. This discussion paper aims to consolidate knowledge on the use of machine learning models, techniques and practices for choice modelling, and discuss their potential. Thereby, we hope not only to make the case that further integration of machine learning in choice modelling is beneficial, but also to further facilitate it. To this end, we clarify the similarities and differences between the two modelling paradigms; we review the use of machine learning for choice modelling; and we explore areas of opportunities for embracing machine learning models and techniques to improve our practices. To conclude this discussion paper, we put forward a set of research questions which must be addressed to better understand if and how machine learning can benefit choice modelling. • Clarifies the similarities and differences between theory and data-driven paradigms. • Reviews the use of machine learning for choice modelling. • Explores opportunities for embracing machine learning to benefit choice modelling. • Puts forward research agenda. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
30. Capillary flow velocity profile analysis on paper-based microfluidic chips for screening oil types using machine learning.
- Author
-
Chung, Soo, Loh, Andrew, Jennings, Christian M., Sosnowski, Katelyn, Ha, Sung Yong, Yim, Un Hyuk, and Yoon, Jeong-Yeol
- Subjects
- *CAPILLARY flow, *FLOW velocity, *MICROFLUIDIC analytical techniques, *MACHINE learning, *FISHER discriminant analysis, *HEAVY oil - Abstract
We conceived a novel approach to screen oil types on a wax-printed paper-based microfluidic platform. Various oil samples spontaneously flowed through a micrometer-scale channel via capillary action while their components were filtered and partitioned. The resulting capillary flow velocity profile fluctuated during the flow, which was used to screen oil types. Raspberry Pi camera captured the video clips, and a custom Python code analyzed them to obtain the capillary flow velocity profiles. 106 velocity profiles (each with 125 frames for 5 s) were recorded from various oil samples to build a training database. Principal component analysis (PCA), support vector machine (SVM), and linear discriminant analysis (LDA) were used to classify the oil types into heavy-to-medium crude, light crude, marine fuel, lubricant, and diesel oils. The second-order polynomial SVM model with PCA as a pre-processing step showed the highest accuracy: 90% in classifying crude oils and 81% in classifying non-crude oils. The assay took less than 30 s from the sample to answer, with 5 s of the capillary action-driven flow. This simple and effective assay will allow rapid preliminary screening of oil types, enable early tracking, and reduce the number of suspect samples to be analyzed by laboratory fingerprinting analysis. [Display omitted] • A novel approach to screen oil types on a paper microfluidic platform. • Raspberry Pi camera acquired capillary flow velocity profiles of diverse oil samples. • Various machine learning based classifications were tested, including PCA, SVM, and LDA. • 90% accuracy in classifying crude oil samples and 81% in non-crude oil samples. • < 30 s from the sample to answer without the need for laboratory equipment. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
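The best-performing model family reported above, PCA pre-processing followed by a second-order polynomial SVM, can be sketched with scikit-learn. The synthetic 125-frame velocity profiles and the three oil classes below are invented stand-ins for the paper's measurements.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_per_class, n_frames = 20, 125  # 125-frame velocity profiles, as in the study

def profiles(level, decay):
    """Synthetic capillary-flow velocity profiles (hypothetical stand-ins):
    velocity decays over time at an oil-type-dependent rate."""
    t = np.arange(n_frames)
    return level * np.exp(-decay * t) + rng.normal(0, 0.02, (n_per_class, n_frames))

# Three hypothetical oil classes with distinct flow signatures.
X = np.vstack([profiles(1.0, 0.01), profiles(0.6, 0.03), profiles(0.3, 0.08)])
y = np.repeat([0, 1, 2], n_per_class)

# PCA pre-processing followed by a second-order polynomial SVM,
# mirroring the model family the abstract reports as most accurate.
clf = make_pipeline(StandardScaler(), PCA(n_components=5),
                    SVC(kernel="poly", degree=2))
clf.fit(X, y)
print(clf.score(X, y))
```

PCA here plays the same role as in the paper: compressing the 125-frame profile into a handful of components before the polynomial decision boundary is fit.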
31. Deep learning-assisted ultra-accurate smartphone testing of paper-based colorimetric ELISA assays.
- Author
-
Duan, Sixuan, Cai, Tianyu, Zhu, Jia, Yang, Xi, Lim, Eng Gee, Huang, Kaizhu, Hoettges, Kai, Zhang, Quan, Fu, Hao, Guo, Qiang, Liu, Xinyu, Yang, Zuming, and Song, Pengfei
- Subjects
- *DEEP learning, *MACHINE learning, *SMARTPHONES, *ENZYME-linked immunosorbent assay, *MEDICAL screening, *MOBILE apps - Abstract
The smartphone has long been considered an excellent platform for disease screening and diagnosis, especially when combined with microfluidic paper-based analytical devices (μPADs) that feature low cost, ease of use, and pump-free operation. In this paper, we report a deep learning-assisted smartphone platform for ultra-accurate testing of paper-based microfluidic colorimetric enzyme-linked immunosorbent assays (c-ELISA). Different from existing smartphone-based μPAD platforms, whose sensing reliability suffers from uncontrolled ambient lighting conditions, our platform is able to eliminate those random lighting influences for enhanced sensing accuracy. We first constructed a dataset that contains c-ELISA results (n = 2048) for rabbit IgG as the model target on μPADs under eight controlled lighting conditions. These images are then used to train four different mainstream deep learning algorithms, which thereby learn to eliminate the influence of lighting conditions. Among them, the GoogLeNet algorithm gives the highest accuracy (>97%) in quantitative rabbit IgG concentration classification/prediction, and also provides a 4% higher area under the curve (AUC) value than the traditional curve-fitting analysis method. In addition, we fully automate the whole sensing process, achieving "image in, answer out" operation to maximize the convenience of the smartphone. A simple and user-friendly smartphone application has been developed that controls the whole process. This newly developed platform further enhances the sensing performance of μPADs for use by laypersons in low-resource areas and can be facilely adapted to the detection of real disease protein biomarkers by c-ELISA on μPADs. [Display omitted] • This deep learning-assisted smartphone platform is unaffected by ambient lighting. • A fully automated "image in, answer out" operation fashion.
• A custom dataset of 2048 images is used to test 4 mainstream deep learning algorithms. • GoogLeNet provides >97% accuracy in quantitative rabbit IgG testing. • The area under the curve (AUC) is 4% higher than that of conventional methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
32. Rapid analysis the type of customs paper using Micro-NIR spectrometers and machine learning algorithms.
- Author
-
Xia, Jingjing, Min, Shungeng, and Li, Jinyao
- Subjects
- *MACHINE learning, *SPECTROMETERS, *K-nearest neighbor classification, *DISCRIMINANT analysis, *CLASSIFICATION algorithms, *ATTENUATED total reflectance - Abstract
[Display omitted] • A Micro-NIR spectrometer was proposed to identify paper types. • Four machine learning algorithms were compared on both Micro-NIR and ATR-FTIR data. • The performance of the Micro-NIR models was better than that of ATR-FTIR. Quick identification of paper types is extremely crucial for customs. Although a variety of studies focus on the discrimination of paper, these techniques either require complex preprocessing or large-scale instruments, which are not suitable for customs environments. In this study, we predicted the type of customs paper using a Micro-NIR spectrometer and compared the results with Attenuated Total Reflection-Fourier Transform Infrared spectroscopy (ATR-FTIR). Four different classification algorithms, including linear and non-linear classifiers, were employed to classify the paper type: K-nearest neighbor (KNN), soft independent modeling of class analogy (SIMCA), partial least squares discriminant analysis (PLS-DA), and least squares-support vector machine (LS-SVM). 20 groups of datasets were selected by Monte Carlo sampling. For the Micro-NIR data, KNN and LS-SVM outperformed SIMCA and PLS-DA, with average accuracies of 96.06% and 98.91%, respectively; the outcomes of SIMCA and PLS-DA were similar, with average accuracies of 93.00% and 93.97%. Based on the standard deviation, the most stable model was LS-SVM (1.06%), followed by PLS-DA (1.12%), KNN (1.22%), and SIMCA (3.07%). Compared with ATR-FTIR, Micro-NIR performed better, as embodied in the better KNN and SIMCA models and the comparable LS-SVM model. The results demonstrated that Micro-NIR combined with machine learning algorithms is an effective method to classify the type of customs paper efficiently and quickly, even better than ATR-FTIR. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
33. Optimizing feature selection in intrusion detection systems: Pareto dominance set approaches with mutual information and linear correlation.
- Author
-
Barbosa, Guilherme Nunes Nasseh, Andreoni, Martin, and Mattos, Diogo Menezes Ferrazani
- Subjects
FEATURE selection ,INTRUSION detection systems (Computer security) ,MACHINE learning ,SOCIAL dominance ,PEARSON correlation (Statistics) ,FILTER paper - Abstract
In the realm of network intrusion detection using machine learning, feature selection aims for computational efficiency, enhanced performance, and model interpretability, preventing overfitting and optimizing data visualization. This paper proposes a filtering method for feature selection, which optimizes information quantity and linear correlation between resultant features. The method identifies Pareto-dominant pairs of informative and correlated features, constructs a graph, and selects key features based on betweenness centrality in its connected components. The proposal yields a more concise and informative dataset representation. Experimental results, using three diverse datasets, demonstrate that the proposal achieves more than 95% accuracy in classifying network attacks with just 14% of the total number of features in the original datasets. [ABSTRACT FROM AUTHOR]
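The core idea — trading off informativeness (mutual information with the label) against redundancy (linear correlation with other features) via Pareto dominance — can be made concrete with a simplified stand-in. This sketch scores each feature individually and keeps the Pareto-nondominated ones; the paper's pairwise graph and betweenness-centrality step are omitted, and all feature names and the binning scheme are illustrative assumptions.

```python
import math
import random


def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    vy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (vx * vy) if vx and vy else 0.0


def mutual_information(xs, labels, bins=4):
    """MI (nats) between a histogram-discretised feature and a label."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / bins or 1.0
    bx = [min(int((v - lo) / width), bins - 1) for v in xs]
    n, mi = len(xs), 0.0
    for b in range(bins):
        for c in set(labels):
            pxy = sum(1 for u, v in zip(bx, labels) if u == b and v == c) / n
            px, py = bx.count(b) / n, labels.count(c) / n
            if pxy > 0:
                mi += pxy * math.log(pxy / (px * py))
    return mi


def pareto_select(features, labels):
    """Keep features not dominated by another feature that is both
    strictly more informative and strictly less redundant."""
    names = list(features)
    rel = {f: mutual_information(features[f], labels) for f in names}
    red = {
        f: max(abs(pearson(features[f], features[g])) for g in names if g != f)
        for f in names
    }
    return [
        f for f in names
        if not any(rel[g] > rel[f] and red[g] < red[f] for g in names if g != f)
    ]


rng = random.Random(3)
labels = [0] * 20 + [1] * 20
f1 = [l + rng.gauss(0, 0.1) for l in labels]          # informative
features = {"f1": f1, "f2": list(f1), "f3": [rng.gauss(0, 1) for _ in labels]}
selected = pareto_select(features, labels)
print("Pareto-selected features:", selected)
```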
- Published
- 2024
- Full Text
- View/download PDF
34. Nondestructive and multiplex differentiation of pathogenic microorganisms from spoilage microflora on seafood using paper chromogenic array and neural network.
- Author
-
Yang, Manyun, Luo, Yaguang, Sharma, Arnav, Jia, Zhen, Wang, Shilong, Wang, Dayang, Lin, Sophia, Perreault, Whitney, Purohit, Sonia, Gu, Tingting, Dillow, Hyden, Liu, Xiaobo, Yu, Hengyong, and Zhang, Boce
- Subjects
- *
SEAFOOD , *PATHOGENIC microorganisms , *SHEWANELLA putrefaciens , *FOOD pathogens , *FOOD safety , *VOLATILE organic compounds , *PATHOGENIC bacteria - Abstract
[Display omitted] • Paper chromogenic array (PCA) integrated with machine learning (ML) was developed. • PCA exhibits distinguishable pattern shifts when reacting with volatile metabolites. • PCA pattern recognition was achieved using a cross-validated neural network. • PCA accurately identifies multiplexed pathogens from indigenous microflora. • The nondestructive PCA-ML holds great potential as a smart food safety system. Non-destructive detection of human foodborne pathogens is critical to ensuring food safety and public health. Here, we report a new method using a paper chromogenic array coupled with a machine learning neural network (PCA-NN) to detect viable pathogens in the presence of background microflora and spoilage microbes in seafood via volatile organic compound sensing. Morganella morganii and Shewanella putrefaciens were used as the model pathogen and spoilage bacteria. The study evaluated microbial detection in monoculture and cocktail multiplex detection. The accuracy of PCA-NN detection was first assessed on standard media and later validated on cod and salmon as real seafood models with pathogenic and spoilage bacteria, as well as background microflora. In this study, the PCA-NN method successfully identified pathogenic microorganisms from microflora, with or without the prevalent spoilage microbe Shewanella putrefaciens, in seafood, with accuracies ranging from 90% to 99%. This approach has the potential to advance smart packaging by achieving nondestructive pathogen surveillance on food without enrichment, incubation, or other sample preparation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. Imitation learning for aerobatic maneuvering in fixed-wing aircraft.
- Author
-
Freitas, Henrique, Camacho, Rui, and Castro Silva, Daniel
- Subjects
MACHINE learning ,MODEL airplanes ,TRANSFER of training ,CONFERENCE papers ,AUTOMATIC pilot (Airplanes) - Abstract
This study focuses on the task of developing automated models for complex aerobatic aircraft maneuvers. The approach employed here utilizes Behavioral Cloning, a technique in which human pilots supply a series of sample maneuvers. These maneuvers serve as training data for a Machine Learning algorithm, enabling the system to generate control models for each maneuver. The optimal instances for each maneuver were chosen based on a set of objective evaluation criteria. By utilizing these selected sets of examples, resilient models were developed, capable of reproducing the maneuvers performed by the human pilots who supplied the examples. In certain instances, these models even exhibited superior performance compared to the pilots themselves, a phenomenon referred to as the "clean-up effect". We also explore the application of transfer learning to adapt the developed controllers to various airplane models, revealing compelling evidence that transfer learning is effective for refining them for targeted aircraft. A comprehensive set of intricate maneuvers was executed through a meta-controller capable of orchestrating the fundamental maneuvers acquired through imitation. This undertaking yielded promising outcomes, demonstrating the proficiency of several Machine Learning models in successfully executing highly intricate aircraft maneuvers. This paper is an extended version of a conference paper previously published at ICCS 2023 [1]. [ABSTRACT FROM AUTHOR]
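At its simplest, Behavioral Cloning is supervised learning on demonstration pairs: recorded states become inputs and the pilot's commands become targets. A minimal sketch with a one-dimensional linear policy fitted by least squares — the state variable, gains, and noise levels are invented for illustration and bear no relation to the paper's models:

```python
import random


def fit_linear_policy(states, controls):
    """Least-squares fit of control = a * state + b from demonstration
    pairs -- a minimal stand-in for cloning one maneuver's controller."""
    n = len(states)
    ms = sum(states) / n
    mc = sum(controls) / n
    var = sum((s - ms) ** 2 for s in states)
    a = sum((s - ms) * (c - mc) for s, c in zip(states, controls)) / var
    return a, mc - a * ms


# Hypothetical demonstrations: the pilot commands elevator roughly
# proportionally to pitch error, plus small pilot noise. Averaging over
# many noisy demonstrations is one intuition for the "clean-up effect".
rng = random.Random(5)
states = [rng.uniform(-1, 1) for _ in range(50)]       # pitch error
controls = [0.8 * s + rng.gauss(0, 0.05) for s in states]

a, b = fit_linear_policy(states, controls)
print(f"cloned policy: u = {a:.2f} * error + {b:.2f}")
```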
- Published
- 2024
- Full Text
- View/download PDF
36. Explainable proactive control of industrial processes.
- Author
-
Kuk, Edyta, Bobek, Szymon, and Nalepa, Grzegorz J.
- Subjects
PROCESS control systems ,ARTIFICIAL intelligence ,MANUFACTURING processes ,CONFERENCE papers ,INDUSTRY 4.0 - Abstract
One of the goals of Industry 4.0 is the adoption of data-driven models to enhance various aspects of the manufacturing process, such as monitoring equipment conditions, ensuring product quality, detecting failures, and preparing optimal maintenance plans. However, many machine-learning algorithms require a large amount of training data to reach desired performance. In numerous industrial applications, such data is either not available or its acquisition is a costly process. In such cases, simulation frameworks are employed to replicate the behavior of real-world facilities and generate data for further analysis. Simulation frameworks typically provide high-quality data but are often slow which can be problematic when real-time decision-making is required. Control approaches based on simulation-based data commonly face challenges related to inflexibility, particularly in dynamic production environments undergoing frequent reconfiguration and upgrades. This paper introduces a method that seeks to strike a balance between the reliance on simulated data and the limited robustness of simulation-based control methods. This balance is achieved by supplementing available data with additional expert knowledge, enabling the matching of similar data sources and their combination for reuse. Furthermore, we augment the methods with an explainability layer, facilitating collaboration between the human expert and the AI system, leading to informed and actionable decisions. The performance of the proposed solution is demonstrated through a case study on gas production from an underground reservoir resulting in reduced downtime, heightened process reliability, and enhanced overall performance. This paper builds upon our conference paper (Kuk et al., 2023), addressing the same problem with an extended, more generic methodology, and presenting entirely new results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Identification of organic chemical indicators for tracking pollution sources in groundwater by machine learning from GC-HRMS-based suspect and non-target screening data.
- Author
-
Ekpe, Okon Dominic, Choo, Gyojin, Kang, Jin-Kyu, Yun, Seong-Taek, and Oh, Jeong-Eun
- Subjects
- *
INDICATORS & test-papers , *GROUNDWATER pollution , *MICROPOLLUTANTS , *MACHINE learning , *ORGANIC compounds , *DATA integrity , *FECAL contamination , *FEATURE selection - Abstract
• 252 chemicals were identified by SNTS in groundwater from four regions with diverse contamination histories. • A novel and robust systematic machine learning-based workflow for predicting chemical indicators was proposed. • The proposed workflow showed good predictive ability (Q2) of 0.897. • 51 chemical indicators for tracking groundwater contamination sources were suggested. In this study, the strong analytical power of gas chromatography coupled to high-resolution mass spectrometry (GC-HRMS) in suspect and non-target screening (SNTS) of organic micropollutants was combined with machine learning tools to propose a novel and robust systematic environmental forensics workflow, focusing on groundwater contamination. Groundwater samples were collected from four different regions with diverse contamination histories (namely oil [OC], agricultural [AGR], industrial [IND], and landfill [LF]), and a total of 252 organic micropollutants were identified, including pharmaceuticals, personal care products, pesticides, polycyclic aromatic hydrocarbons, plasticizers, phenols, organophosphate flame retardants, transformation products, and others, with detection frequencies ranging from 3% to 100%. Amongst the SNTS-identified compounds, a total of 51 chemical indicators (i.e., OC: 13, LF: 12, AGR: 19, IND: 7), which included level 1 and 2 SNTS-identified chemicals, were pinpointed across all sampling regions by integrating a bootstrapped feature selection method involving the bootfs algorithm and a partial least squares discriminant analysis (PLS-DA) model to determine potential prevalent contamination sources. The proposed workflow showed good predictive ability (Q2) of 0.897, and the suggested contamination sources were gasoline, diesel, and/or other light petroleum products for the OC region, anthropogenic activities for the LF region, agricultural and human activities for the AGR region, and industrial/human activities for the IND region.
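The bootstrapped feature-selection step can be illustrated with a heavily simplified stand-in: resample the samples with replacement, rank "compounds" by a crude class-separation score (used here in place of PLS-DA importance, which the paper actually uses), and keep features that land in the top ranks on most resamples. Feature names, thresholds, and the scoring function are all illustrative assumptions, not the bootfs algorithm itself.

```python
import random
import statistics


def separation_score(values, labels):
    """|class-mean difference| / pooled sd between two source classes --
    a crude stand-in for a PLS-DA importance measure."""
    a = [v for v, l in zip(values, labels) if l == 0]
    b = [v for v, l in zip(values, labels) if l == 1]
    sd = statistics.pstdev(a + b) or 1.0
    return abs(statistics.mean(a) - statistics.mean(b)) / sd


def bootstrap_select(features, labels, n_boot=200, top_k=2, freq=0.8, seed=1):
    """Keep features ranked in the top-k on at least `freq` of bootstrap
    resamples -- candidate 'chemical indicators'."""
    rng = random.Random(seed)
    names = list(features)
    pool = list(range(len(labels)))
    hits = dict.fromkeys(names, 0)
    for _ in range(n_boot):
        sample = [rng.choice(pool) for _ in pool]
        ys = [labels[i] for i in sample]
        if len(set(ys)) < 2:          # need both classes to score
            continue
        scores = {
            f: separation_score([features[f][i] for i in sample], ys)
            for f in names
        }
        for f in sorted(scores, key=scores.get, reverse=True)[:top_k]:
            hits[f] += 1
    return [f for f in names if hits[f] / n_boot >= freq]


# Toy "compound intensities" for two contamination-source classes.
rng = random.Random(7)
labels = [0] * 20 + [1] * 20
features = {
    "c1": [rng.gauss(5 * l, 0.5) for l in labels],    # discriminative
    "c2": [rng.gauss(3 * l, 0.5) for l in labels],    # discriminative
    "noise": [rng.gauss(0, 1) for _ in labels],       # uninformative
}
selected = bootstrap_select(features, labels)
print("stable indicators:", selected)
```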
These results suggest that the proposed workflow can select a subset of the most diagnostic features in the chemical space that can best distinguish a specific contamination source class. [Display omitted] [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Combination of cellulose tissue paper and bleach-treated graphene in stiffness reinforcement of polyvinyl alcohol film.
- Author
-
Abdullah, Abu Hannifa, Ismail, Zulhelmi, Idris, Wan Farhana W., Khusairi, Zulsyazwan Ahmad, and Zuhan, Mohd Khairul Nizam Mohd
- Subjects
- *
GRAPHENE , *POLYVINYL alcohol , *CELLULOSE , *POLYMER films , *MACHINE learning , *ELASTIC modulus - Abstract
A pre-treatment of graphene with bleach is considered one of the possible purification methods after liquid-phase exfoliation. However, the effect of this treatment on the mechanical reinforcement strategy for polymer films has yet to be investigated. In this work, the influence of the C/O ratio, I D /I G , and volume of graphene after combination with cellulose tissue on the resulting stiffness of polyvinyl alcohol (PVA) composite film has been extensively studied. We observed that the incorporation of 30 ml of graphene that had been pre-treated for 3 h into PVA produced the best increase in elastic modulus (1.6 GPa against 0.4 GPa), while a shorter pre-treatment duration (1 h) required a larger graphene volume (40 ml) to match the same level of stiffness improvement. Using the collected experimental data (90 samples), we further modeled the effect of tissue and PVA mass, C/O ratio, I D /I G , and graphene volume on modulus using machine learning (ML) algorithms. [Display omitted] • Combination of cellulose tissue and graphene as a filler hybrid to combat the poor dispersibility of bleach-treated graphene • A mechanical reinforcement effect was observed for graphene treated for 3 h due to the well-balanced C/O and I D /I G. • Addition of more tissue/graphene mass is required for graphene with a lower C/O to enhance the stiffness. • A machine learning study shows k-nearest neighbours with k = 1 is the best prediction model for composite stiffness. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
39. PSRMTE: Paper submission recommendation using mixtures of transformer.
- Author
-
Nguyen, Dac Huu, Huynh, Son Thanh, Dinh, Cuong Viet, Huynh, Phong Tan, and Nguyen, Binh Thanh
- Subjects
- *
COMPUTATIONAL mathematics , *RECOMMENDER systems , *MACHINE learning , *COMPUTER science , *ELECTRONIC journals , *APPLIED mathematics - Abstract
Nowadays, there has been a rapidly increasing number of scientific submissions in multiple research domains. A large number of journals have various acceptance rates, impact factors, and rankings across different publishers. It becomes time-consuming for many researchers to select the most suitable journal, with the highest acceptance rate, for their work. A paper submission recommendation system is therefore valuable for the research community and publishers, as it gives scientists additional support in completing their submissions conveniently. This paper investigates a submission recommendation system for two main research topics: computer science and applied mathematics. Unlike previous works (Wang et al., 2018; Son et al., 2020) that extract TF–IDF and statistical features and utilize machine learning algorithms (logistic regression and multilayer perceptrons) for building the recommendation engine, we present an efficient paper submission recommendation algorithm using different bidirectional transformer encoders and the Mixture of Transformer Encoders technique. We compare the performance between our methodology and other approaches using one dataset from Wang et al. (2018) with 14,012 papers in computer science and another dataset collected by us with 223,782 articles in 178 Springer applied mathematics journals, in terms of top-K accuracy (K = 1, 3, 5, 10). The experimental results show that our proposed method outperforms other state-of-the-art techniques by a significant margin in top-K accuracy on both datasets. We publish all collected datasets and our implementation code for further reference. 1 1 https://github.com/BinhMisfit/PSRMTE. • Bidirectional transformer encoders can improve the performance of the paper submission recommendation system. • The Mixture of Transformer Encoders framework is efficient for the paper submission recommendation problem. 
• The proposed techniques surpass other recent techniques on the two related datasets. [ABSTRACT FROM AUTHOR]
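Top-K accuracy, the metric the abstract reports, simply checks whether the true journal appears among the K highest-scored recommendations. A small self-contained sketch — the journal names and scores are made up:

```python
def top_k_accuracy(scores, targets, k):
    """Fraction of samples whose true journal is among the k
    highest-scored candidates."""
    hits = 0
    for row, target in zip(scores, targets):
        ranked = sorted(row, key=row.get, reverse=True)[:k]
        hits += target in ranked
    return hits / len(targets)


# Toy scores for three submissions over four hypothetical journals.
scores = [
    {"J1": 0.7, "J2": 0.2, "J3": 0.05, "J4": 0.05},
    {"J1": 0.1, "J2": 0.3, "J3": 0.4, "J4": 0.2},
    {"J1": 0.25, "J2": 0.25, "J3": 0.2, "J4": 0.3},
]
targets = ["J1", "J2", "J4"]

for k in (1, 3):
    print(f"top-{k} accuracy: {top_k_accuracy(scores, targets, k):.2f}")
```

Raising K can only keep accuracy the same or increase it, which is why papers in this area report a sweep such as K = 1, 3, 5, 10.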
- Published
- 2022
- Full Text
- View/download PDF
40. Machine learning based urinary pH sensing using polyaniline deposited paper device and integration of smart web app interface: Theory to application.
- Author
-
Biswas, Souvik, Pal, Arijit, Chakraborty, Pratip, Chaudhury, Koel, and Das, Soumen
- Subjects
- *
WEB-based user interfaces , *SMART devices , *POLYANILINES , *MACHINE learning , *MACHINE theory , *ELECTRON transport , *LOCAL area networks , *STANDARD deviations - Abstract
The present study employs density functional theory-based first-principles calculations to investigate the electron transport properties of polyaniline following exposure to acidic and alkaline pH. The in-situ deposited polyaniline-based paper device maintains the emeraldine salt form while exposed to acidic pH and converts to the emeraldine base when subjected to alkaline pH solutions. These structural changes at acidic and alkaline pH are validated experimentally by Raman spectra. Furthermore, the Raman spectra computed from density functional theory are validated against the experimental spectra. The changes in the theoretical energy band gap of polyaniline obtained from first-principles calculations were correlated with the changes in the experimental impedimetric response of the sensor after exposure to acidic and alkaline solutions. Finally, the impedimetric responses were used to predict urine pH through a machine learning based smart and interactive web application. Different machine learning based regression models were implemented to acquire the best possible outcome. A Gradient Boosting Regressor with least-squares loss was selected as it showed the lowest mean squared, mean absolute, and root mean squared errors among the models. The smart sensing platform successfully predicts the unknown pH of urine samples with an average accuracy of more than 98%. The locally deployed smart web app can be accessed within a local area network by the end-user, which holds promise towards effective detection of urinary pH. [Display omitted] [ABSTRACT FROM AUTHOR]
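The model-selection criterion described — picking the regressor with the lowest MSE, MAE, and RMSE on held-out data — reduces to computing three error metrics per candidate and taking the minimum. A sketch with made-up pH values and model names (not the study's data or models):

```python
import math


def regression_errors(y_true, y_pred):
    """MSE, MAE and RMSE -- the criteria used to compare regressors."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    return {"mse": mse, "mae": mae, "rmse": math.sqrt(mse)}


# Hypothetical held-out urine pH values and two candidate models' predictions.
y_true = [5.5, 6.0, 6.5, 7.0, 7.5]
candidates = {
    "gradient_boosting": [5.6, 6.0, 6.4, 7.1, 7.4],
    "linear_baseline": [5.9, 6.1, 6.2, 7.4, 7.1],
}

errors = {name: regression_errors(y_true, p) for name, p in candidates.items()}
best = min(errors, key=lambda name: errors[name]["rmse"])
print(best, errors[best])
```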
- Published
- 2022
- Full Text
- View/download PDF
41. Letter to the editor of radiotherapy and oncology regarding the paper titled "Impact on xerostomia for nasopharyngeal carcinoma patients treated with superficial parotid lobe-sparing intensity-modulated radiation therapy (SPLS-IMRT): A prospective phase II randomized controlled study." by Huang et al
- Author
-
Sarode, Gargi S., Sarode, Sachin C., and Anand, Rahul
- Subjects
- *
NASOPHARYNX cancer , *RADIOTHERAPY , *XEROSTOMIA , *ONCOLOGY , *DEEP learning - Published
- 2022
- Full Text
- View/download PDF
42. Development of a paper-based analytical device for the colourimetric detection of alanine transaminase and the application of deep learning for image analysis.
- Author
-
Resmi, P.E., Sachin Kumar, S., Alageswari, D., Suneesh, P.V., Ramachandran, T., Nair, Bipin G., and Satheesh Babu, T.G.
- Subjects
- *
DEEP learning , *IMAGE analysis , *IMAGE processing software , *MEMBRANE separation , *MACHINE learning - Abstract
A paper-based colourimetric assay for the detection of alanine transaminase has been developed. In the presence of alanine transaminase, 2,4-dinitrophenyl hydrazine changes to pyruvate hydrazone, leading to a colour change from pale yellow to dark yellow. Reaction conditions were optimized using absorption spectroscopic studies. Hydrophobic patterns on the Whatman chromatographic paper were created by wax printing, and the reagents were drop cast at the reagent zone. On the paper device, the intensity of the yellow colour increases with ALT concentration in the range of 20–140 U/L in human serum. For the quantification of ALT, coloured images were captured using a digital camera and processed with ImageJ software. A machine learning approach was also explored for ALT analysis by training on colour images of the paper device and testing using a cross-validation procedure. The results obtained with real clinical samples on the paper device showed good accuracy, with less than 5% relative error compared to clinical laboratory results. Furthermore, the paper device shows high selectivity to ALT in the presence of various interfering species in blood serum, with a sensitivity of 0.261 a.u/(U/L), a detection limit of 4.12 U/L, and precise results with an RSD of less than 7%. For the testing of whole blood, a plasma separation membrane was integrated with the patterned paper. [Display omitted] • Colourimetric detection of ALT on a paper strip using 2,4-DNPH. • Negligible interference from other biomolecules. • Tested with serum and ALT-spiked serum samples. • Plasma separation membrane-incorporated strip used for testing of blood samples. • Quantification of ALT on the paper strip using ImageJ and a machine learning approach. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
43. Paper-based platforms for microbial electrochemical cell-based biosensors: A review.
- Author
-
Chung, Tae Hyun and Dhar, Bipro Ranjan
- Subjects
- *
BIOSENSORS , *WATER quality monitoring , *BACTERIAL adhesion , *THREE-dimensional printing , *MACHINE learning - Abstract
The development of low-cost analytical devices for on-site water quality monitoring is a critical need, especially for developing countries and remote communities in developed countries with limited resources. Microbial electrochemical cell-based (MXC) biosensors have been quite promising for quantitative and semi-quantitative (often qualitative) measurements of various water quality parameters due to their low cost and simplicity compared to traditional analytical methods. However, conventional MXC biosensors often encounter challenges, such as the slow establishment of biofilms, low sensitivity, and poor recoverability, limiting their practical application. In response, MXC biosensors assembled with paper-based materials have demonstrated tremendous potential to enhance sensitivity and field applicability. Furthermore, paper-based platforms offer many prominent features, including autonomous liquid transport, rapid bacterial adhesion, lowered resistance, low fabrication cost (<$1 in USD), and eco-friendliness. Therefore, this review aims to summarize the current trends and applications of paper-based MXC biosensors, along with critical discussions on their field applicability. Moreover, future advancements of paper-based MXC biosensors, such as developing novel paper-based biobatteries, increasing system performance using a unique biocatalyst such as yeast, and integrating the biosensor system with other advanced tools such as machine learning and 3D printing, are highlighted. [Display omitted] • Studies related to paper-based MXC biosensors are summarized and reviewed. • Benefits of using paper-based platforms over traditional materials are listed. • Current applications and challenges of paper-based MXC biosensors are provided. • Field applicability of paper-based MXC biosensors is highlighted. • Opportunities to integrate 3D printing and machine learning are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
44. Natural killer cell detection, quantification, and subpopulation identification on paper microfluidic cell chromatography using smartphone-based machine learning classification.
- Author
-
Zenhausern, Ryan, Day, Alexander S., Safavinia, Babak, Han, Seungmin, Rudy, Paige E., Won, Young-Wook, and Yoon, Jeong-Yeol
- Subjects
- *
MACHINE learning , *MICROFLUIDIC devices , *SMARTPHONES , *MICROFLUIDICS , *RANDOM forest algorithms , *CELL analysis , *CHROMATOGRAPHIC analysis , *KILLER cells - Abstract
Natural killer (NK) cells are immune cells that defend against viral infections and cancer and are used in cancer immunotherapies. Subpopulations of NK cells include CD56dim and CD56bright, which either produce cytokines or cytotoxically kill cells directly. The absolute number and proportion of these cells in peripheral blood are tied to proper immune function. Current methods of cytokine detection and NK cell subpopulation analysis require fluorescent dyes and highly specialized equipment, e.g., flow cytometry; thus, rapid cell quantification and subpopulation analysis are needed in the clinical setting. Here, a smartphone-based device and a two-component paper microfluidic chip were used towards identifying NK cell subpopulations and inflammatory markers. One unit measured flow velocity via smartphone-captured video, determining cytokine (IL-2) and total NK cell concentrations in undiluted buffy coat blood samples. The other, a single-flow-lane unit, performs spatial separation of CD56dim and CD56bright cells over its length using differential binding of anti-CD56 nanoparticles. A smartphone microscope combined with cloud-based machine learning predictive modeling (utilizing a random forest classification algorithm) analyzed both flow data and NK cell subpopulation differentiation. Limits of detection for cytokine and cell concentrations were 98 IU/mL and 68 cells/mL, respectively, and cell subpopulation analysis showed 89% accuracy. • First smartphone-based paper microfluidic cell chromatography that can identify cell subpopulations. • Machine learning predictive modeling for NK cell subpopulation differentiation. • Integration of both cell chromatography and flow rate analysis on a single platform. • Potential application to many other cytokine and cell subpopulation analyses. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
45. Classification and feature selection methods based on fitting logistic regression to PU data.
- Author
-
Furmańczyk, Konrad, Paczutkowski, Kacper, Dudziński, Marcin, and Dziewa-Dawidczyk, Diana
- Subjects
MACHINE learning ,FEATURE selection ,CONFERENCE papers - Abstract
In our work, we examine classification methods for positive and unlabeled (PU) data, where the conditional distribution of the true class label given the feature vector is governed by a logistic regression model. Our first objective is to compute and compare selected metrics allowing for the quality assessment of these methods. In this context, we investigate four methods of posterior probability estimation in which the risk of the logistic loss function is optimized: the naive approach, the weighted likelihood approach, as well as the recently proposed joint approach and LassoJoint method. The corresponding evaluations are performed for 13 machine learning models on selected low- and high-dimensional datasets. Some of the mentioned machine learning model schemes have been directly borrowed from the literature and some have been obtained through modifications of existing procedures. Our second goal is to establish the most stable and efficient approach for posterior probability estimation. Moreover, we use the AdaSampling scheme for comparison of the considered classification methods. We also compare feature selection procedures – the Mutual Information-Based feature selection method and the LassoJoint approach. The current article is an enhancement of the conference paper Furmańczyk et al. (2022). • Metrics for PU classifications obtained with the use of the logistic model. • The joint method and the LassoJoint for low- and high-dimensional real datasets. • Calibration of the parameters of the LassoJoint method. • The joint method with the Mutual Information-Based Criterion. [ABSTRACT FROM AUTHOR]
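The "naive approach" named in the abstract fits ordinary logistic regression with the unlabeled set treated as negative. A stdlib-only sketch on toy one-dimensional data — cluster positions, learning rate, and epoch count are illustrative assumptions, and this shows only the naive baseline, not the weighted, joint, or LassoJoint variants:

```python
import math
import random


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


def fit_logistic(xs, ys, lr=0.5, epochs=500):
    """Plain logistic regression by batch gradient descent. In the naive
    PU approach the 'negatives' are really the unlabeled examples."""
    w, b = [0.0] * len(xs[0]), 0.0
    n = len(xs)
    for _ in range(epochs):
        gw, gb = [0.0] * len(w), 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            gw = [g + err * xi for g, xi in zip(gw, x)]
            gb += err
        w = [wi - lr * g / n for wi, g in zip(w, gw)]
        b -= lr * gb / n
    return w, b


# Toy PU data: true positives cluster near +2, true negatives near -2.
rng = random.Random(0)
pos = [[rng.gauss(2, 0.5)] for _ in range(30)]        # labeled positives
unlabeled = [[rng.gauss(2, 0.5)] for _ in range(10)]  # hidden positives
unlabeled += [[rng.gauss(-2, 0.5)] for _ in range(30)]

xs = pos + unlabeled
ys = [1] * len(pos) + [0] * len(unlabeled)            # naive: unlabeled -> 0
w, b = fit_logistic(xs, ys)

score = sigmoid(w[0] * 2.0 + b)   # posterior for a clearly positive point
print(f"P(y=1 | x=2) = {score:.2f}")
```

Note the characteristic bias of the naive method: hidden positives labeled 0 pull the estimated posterior below the true one, which is exactly what the weighted and joint approaches aim to correct.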
- Published
- 2023
- Full Text
- View/download PDF
46. Dynamic and quantitative trust modeling and real-time estimation in human-machine co-driving process.
- Author
-
Hu, Chuan, Huang, Siwei, Zhou, Yu, Ge, Sicheng, Yi, Binlin, Zhang, Xi, and Wu, Xiaodong
- Subjects
- *
MACHINE learning , *TRUST , *AT-risk behavior , *KALMAN filtering , *TRAFFIC safety - Abstract
• A real-time trust estimation model is proposed, which is dynamic and quantitative, considering the evolution pattern of the driver's trust and the perceived risk; • Mathematical modeling and machine learning methods are combined; • A trust-based reminder strategy that aims to enhance the safety of human–machine co-driving is designed; • A driver-in-the-loop experiment validates the effectiveness in enhancing safety, maintaining the driver's trust, and reducing trust biases in human–machine co-driving. The development of automated vehicles (AVs) will remain in the stage of human–machine co-driving for a long time. Trust is considered an effective foundation of the interaction between the driver and the automated driving system (ADS). The driver's trust miscalibration, represented by under-trust and over-trust, is considered a potential cause of the disuse and misuse of ADS, or even of serious accidents. The estimation and calibration of trust are crucial to improving the safety of the driving process. This paper mainly consists of the following two aspects. Firstly, a dynamic and quantitative trust estimation model is established. A framework for trust estimation is constructed. The driver's perceived risk and behavior features were monitored, and a Kalman filter was used to dynamically and quantitatively estimate the driver's trust. We conducted a driver-in-the-loop experiment and generated model parameters through a data-driven approach. The results demonstrated that the model exhibited precision in trust estimation, with the highest accuracy reaching 74.1%. Secondly, a reminder strategy to calibrate the over-trust of the driver is proposed based on the model from the first part. A scenario with four risky events was designed, and the ADS would provide voice reminders to the driver when over-trust was detected. The results demonstrated that the reminder strategy proved to be beneficial for safety enhancement and moderate trust maintenance during the driving process.
When the driver is over-trusting, the accident rates of the non-reminder group and the reminder group were 60.6% and 13.0%, respectively. Our contribution in this paper can be summarized in four points: (1) A real-time trust estimation model is proposed, which is dynamic and quantitative, considering the evolution pattern of the driver's trust and the perceived risk; (2) Mathematical modeling and machine learning methods are combined; (3) A trust-based reminder strategy that aims to enhance the safety of human–machine co-driving is designed; (4) A driver-in-the-loop experiment validates the effectiveness in enhancing safety, maintaining the driver's trust, and reducing trust biases in human–machine co-driving. [ABSTRACT FROM AUTHOR]
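The Kalman-filter trust estimator the abstract describes can be illustrated in its scalar form, with trust level as the state and a noisy trust proxy derived from behavior features as the observation. All noise parameters and observation values here are invented for illustration; the paper fits its parameters from driver-in-the-loop data.

```python
def kalman_step(x, p, z, q=0.01, r=0.25):
    """One predict/update cycle of a scalar Kalman filter.
    x: trust estimate, p: its variance, z: noisy trust observation;
    q/r: process/measurement noise variances (illustrative values)."""
    # Predict: trust is assumed to persist between observations,
    # with a little process noise allowing it to drift.
    p = p + q
    # Update: blend prediction and observation by the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p


# Noisy observations drifting from moderate to high trust.
observations = [0.5, 0.55, 0.7, 0.65, 0.8, 0.85, 0.9]
x, p = 0.5, 1.0   # start with an uncertain prior
for z in observations:
    x, p = kalman_step(x, p, z)
print(f"estimated trust {x:.2f} (variance {p:.3f})")
```

A threshold on the filtered estimate (rather than on raw noisy observations) is one natural way a reminder strategy could decide when over-trust is detected.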
- Published
- 2024
- Full Text
- View/download PDF
47. Generalized multilevel B-spline approximation for scattered data interpolation in image processing.
- Author
-
Chen, Juanjuan, Huang, Ting, Cai, Zhanchuan, and Huang, Wentao
- Subjects
- *
MACHINE learning , *BURST noise , *IMAGE processing , *APPROXIMATION error , *INTERPOLATION , *DEEP learning - Abstract
This paper proposes a Generalized Multilevel B-spline Approximation (GMBA) method, which addresses scattered data interpolation problems in image processing. Mathematically, the GMBA provides a better solution for the B-spline control lattice by superimposing identical-level B-splines compared with the traditional Multilevel B-spline Approximation (MBA). Specifically, the GMBA allows the spacing of the next control lattice to be subdivided arbitrarily or to remain unchanged, as determined by a predefined spacing set or the current error level. These improvements bring higher approximation accuracy and more flexibility in algorithm design to avoid over-fitting. In this paper, the basic GMBA algorithm and its refined algorithm are compiled for image processing. Finally, six relevant cases are used to test the GMBA, including surface approximation, image enlargement, image completion, and Salt-and-Pepper (SAP) noise removal. The experimental results show that the GMBA has better performance than the MBA in surface approximation and image processing, runs comparatively fast with the best performance on more than half of the standard test images compared with traditional algorithms, and in some cases even outperforms deep learning algorithms. The GMBA can effectively recover meaningful details in images contaminated with extremely high SAP noise levels (up to 99%). • We give a generalized multilevel B-spline approximation (GMBA) method. • GMBA allows the control spacing to be subdivided arbitrarily or to remain unchanged. • GMBA offers higher approximation accuracy and greater flexibility than the traditional MBA. • GMBA provides superior performance and less run time than many state-of-the-art methods for SAP noise removal. • GMBA can recover meaningful detail at noise levels as high as 99% for SAP noise removal. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Conditional Information Gain Trellis.
- Author
-
Bicici, Ufuk Can, Meral, Tuna Han Salih, and Akarun, Lale
- Abstract
Conditional computing processes an input using only part of the neural network's computational units. Learning to execute parts of a deep convolutional network by routing individual samples has several advantages: This can facilitate the interpretability of the model, reduce the model complexity, and reduce the computational burden during training and inference. Furthermore, if similar classes are routed to the same path, that part of the network learns to discriminate between finer differences and better classification accuracies can be attained with fewer parameters. Recently, several papers have exploited this idea to select a particular child of a node in a tree-shaped network or to skip parts of a network. In this work, we follow a Trellis-based approach for generating specific execution paths in a deep convolutional neural network. We have designed routing mechanisms that use differentiable information gain-based cost functions to determine which subset of features in a convolutional layer will be executed. We call our method Conditional Information Gain Trellis (CIGT). We show that our conditional execution mechanism achieves comparable or better model performance compared to unconditional baselines, using only a fraction of the computational resources. We provide our code and model checkpoints used in the paper at: https://github.com/ufukcbicici/cigt/tree/prl/prl_scripts. • We introduce Conditional Information Gain Trellis (CIGT) for conditional computing. • We derive the CIGT loss function based on classification and information gain losses. • CIGT performs better or comparably using a fraction of the computational resources. • We give tests on MNIST, Fashion MNIST, and CIFAR 10, showing CIGT compares favorably. • Supplementary materials show that semantically similar classes are grouped together. [ABSTRACT FROM AUTHOR]
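The information-gain objective that drives the routing can be made concrete: it is the drop in label entropy once samples are partitioned among routes, so a router that sends semantically similar classes down the same path scores highest. A small stdlib sketch of that quantity — the toy labels and routing assignments are ours, not from the paper:

```python
import math
from collections import Counter


def entropy(labels):
    """Shannon entropy (bits) of a label multiset."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())


def information_gain(labels, routes):
    """Reduction in label entropy when samples are split among routes --
    the kind of quantity a CIGT-style router is trained to maximise."""
    n = len(labels)
    groups = {}
    for label, route in zip(labels, routes):
        groups.setdefault(route, []).append(label)
    cond = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - cond


labels = ["cat", "cat", "dog", "dog", "car", "car", "bus", "bus"]
animals_left = [0, 0, 0, 0, 1, 1, 1, 1]   # semantically grouped routing
random_split = [0, 1, 0, 1, 0, 1, 0, 1]   # uninformative routing

print(information_gain(labels, animals_left))  # 1.0 bit
print(information_gain(labels, random_split))  # 0.0 bits
```

The grouped routing earns a full bit of gain because each path then only has to discriminate within its own pair of classes, matching the paper's observation that grouping similar classes lets sub-networks specialize.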
- Published
- 2024
- Full Text
- View/download PDF
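The abstract above describes routing driven by a differentiable information-gain objective. As a rough illustration of the quantity involved (not the paper's actual loss, whose details are in the linked repository), here is a minimal NumPy sketch that estimates the mutual information I(Y; R) between class labels and soft route assignments from a batch; the function and variable names are my own:

```python
import numpy as np

def information_gain(route_probs, labels, n_classes):
    """Batch estimate of I(Y; R) = H(Y) - H(Y | R).

    route_probs: (N, R) soft routing probabilities per sample.
    labels: (N,) integer class labels.
    """
    n, _ = route_probs.shape
    onehot = np.eye(n_classes)[labels]        # (N, C) one-hot labels
    joint = route_probs.T @ onehot / n        # p(r, y), shape (R, C)
    p_r = joint.sum(axis=1, keepdims=True)    # marginal p(r)
    p_y = joint.sum(axis=0)                   # marginal p(y)
    eps = 1e-12                               # guard against log(0)
    h_y = -np.sum(p_y * np.log(p_y + eps))
    h_y_given_r = -np.sum(joint * np.log(joint / (p_r + eps) + eps))
    return h_y - h_y_given_r
```

With perfectly class-separating routes the estimate reaches H(Y); with uniform routing it collapses toward zero, which is the behavior an information-gain routing objective would reward during training.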
49. A new intelligent charging strategy in a stationary hydrogen energy-based power plant for optimal demand side management of plug-in EVs.
- Author
-
Çakmak, Recep, Meral, Hasan, and Bayrak, Gökay
- Subjects
- *LOAD management (Electric power), *ELECTRIC power plants, *POWER plants, *PEAK load, *HYDROGEN, *MACHINE learning
- Abstract
Stationary hydrogen energy-based power plants that generate electricity to supply high-powered plug-in electric vehicles (PEVs) have recently become popular among renewable energy-based power plants. In addition, a PEV charging station can host charge devices of various power ratings, such as DC fast chargers or 3.7 kW, 7.4 kW, 11 kW, and 22 kW AC chargers. This paper introduces a two-stage, demand-side management-oriented optimal charging strategy for PEVs in a hydrogen energy-based microgrid. The two stages execute optimal charging of PEVs in compliance with users' requests and satisfaction while accounting for the loading of the power system. It is assumed that the PEV charging station contains three types of chargers and corresponding groups of users. In the first stage, randomly created requests are classified by an ensemble learning method, which achieves higher classification performance by combining the results of multiple classifiers. In the second stage, the PEVs are scheduled according to the classification results and the users' requests. To test the proposed system, random requests are first created and sent to the classifier, and the classifier outputs are then scheduled. The proposed two-stage charge scheduling and management strategy was compared with non-managed cases. Case study results reveal that the proposed approach provides a 52.1% peak load reduction and a 72.3% valley-filling improvement with the SOS algorithm, highlighting its advantages in terms of peak reduction and valley filling. [Display omitted]
• A new intelligent decision-maker method is proposed for DSM in a microgrid.
• The proposed method is developed for a stationary hydrogen energy-based power plant.
• The optimal DSM of EVs is investigated with a hybrid ML method.
• A RUSBoost and SOS algorithms-based hybrid intelligent decision maker is developed.
• It provides a 52.1% peak load reduction and a 72.3% valley-filling improvement. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
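The second stage above schedules classified charging requests against the system load to reduce peaks and fill valleys. The paper's RUSBoost classification and SOS-based optimization are more sophisticated; as a much simpler stand-in, a greedy valley-filling scheduler conveys the objective. The names, data shapes, and greedy rule here are illustrative assumptions, not the authors' method:

```python
# Greedy valley-filling sketch: each request (charger power in kW, required
# hours) is placed in the consecutive-slot window that keeps the peak of the
# aggregate load profile lowest.

def schedule(requests, base_load):
    """requests: list of (power_kw, hours); base_load: hourly kW list."""
    load = list(base_load)
    plan = []
    for power, hours in requests:
        best_start, best_peak = 0, float("inf")
        for start in range(len(load) - hours + 1):
            # Peak the profile would reach if this request started here.
            peak = max(load[t] + power for t in range(start, start + hours))
            if peak < best_peak:
                best_peak, best_start = peak, start
        for t in range(best_start, best_start + hours):
            load[t] += power
        plan.append(best_start)
    return plan, load
```

For example, a 3.7 kW, 2-hour request against a base load of [10, 2, 2, 10] kW lands in the overnight valley (start slot 1) rather than adding to either peak.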
50. A modified geographical weighted regression model for better flood risk assessment and management of immovable cultural heritage sites at large spatial scales.
- Author
-
Liang, Long, Chen, Yunhao, Gong, Adu, and Sun, Hanyu
- Subjects
- *MACHINE learning, *HISTORIC sites, *CULTURAL property, *CLIMATE extremes, *RANDOM forest algorithms, *FLOOD risk
- Abstract
• An MGWR method was used to assess flood risk at immovable cultural heritage sites.
• Both spatial and building-age properties were used to construct the weight matrix in this model.
• The proposed method was more accurate than the common GWR model and several machine learning models.
• The accuracy of predicted flood risk improves most under the proposed model for immovable cultural heritage sites with older buildings.
With the global increase in extreme climatic events in recent years, the increased frequency of flood hazards has had a great impact on immovable cultural heritage sites (ICHs) owing to their prolonged exposure to the disaster environment. This poses a risk management challenge, especially at large scales. Most existing flood risk assessment models for ICHs are taken directly from common natural hazard methods and pay little attention to the characteristics of ICHs. In this paper, we propose a modified geographically weighted regression (MGWR) model to assess flood risk at ICHs that considers both the spatial and age properties of the sites; these two properties are used to construct the weight matrix of the MGWR model. Eleven selected indices and loss survey data with 417 sample points covering 5 types of ICHs were used for model training and testing in Shanxi Province, China. The results showed that the MGWR model achieved good accuracy, with an R2 of 0.928. A comparison between the MGWR and the standard GWR model indicated that accuracy improved more for older ICHs under the MGWR. We also found that the proposed model outperformed the standard GWR model even when age was used as an index. Moreover, compared with three machine learning methods (decision tree, logistic regression, and random forest), the MGWR model still performed better and was less limited by the number of training samples. This paper provides evidence that the characteristics of ICHs are crucial in the construction of flood risk assessment models, and the proposed model can benefit the risk management of various types of ICHs at large spatial scales. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
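The key idea above is a weight matrix built from both spatial proximity and building-age similarity. A minimal sketch of such a combined weighting, assuming Gaussian kernels and made-up bandwidth parameters (the paper's actual kernel construction may differ):

```python
import numpy as np

def mgwr_weights(coords, ages, h_space, h_age):
    """Combined GWR-style weight matrix.

    coords: (N, 2) site coordinates; ages: (N,) building ages.
    h_space, h_age: illustrative bandwidths for the two Gaussian kernels.
    Weights decay with both geographic distance and age difference.
    """
    # Pairwise Euclidean distances between sites, shape (N, N).
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Pairwise absolute age differences, shape (N, N).
    a = np.abs(ages[:, None] - ages[None, :])
    return np.exp(-(d / h_space) ** 2) * np.exp(-(a / h_age) ** 2)
```

Each site weighs itself fully (diagonal of 1), and a site that is both far away and of a very different age contributes far less to the local regression than one that is merely far away, which mirrors the abstract's claim that age-aware weighting helps most for older sites.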