298 results for '"reproducible"'
Search Results
2. An Introduction to Quantitative Text Analysis for Linguistics
- Author
-
Francom, Jerid
- Subjects
text analysis ,Jerid Francom ,quantitative text analysis ,R programming ,reproducible ,develop research skills ,programming skills ,applied linguistics ,real-world datasets ,pedagogical ,data literacy ,statistical methods ,statistics for linguists ,data analysis ,Textbook ,thema EDItEUR::D Biography, Literature and Literary studies::DS Literature: history and criticism ,thema EDItEUR::C Language and Linguistics::CF Linguistics ,thema EDItEUR::G Reference, Information and Interdisciplinary subjects::GP Research and information: general::GPS Research methods: general ,thema EDItEUR::C Language and Linguistics::CJ Language teaching and learning::CJA Language teaching theory and methods - Abstract
An Introduction to Quantitative Text Analysis for Linguistics: Reproducible Research Using R is a pragmatic textbook that equips students and researchers with the essential concepts and practical programming skills needed to conduct quantitative text analysis in a reproducible manner. Designed for undergraduate students and those new to the field, this book assumes no prior experience with statistics or programming, making it an accessible resource for anyone embarking on their journey into quantitative text analysis. Through a pedagogical approach which emphasizes intuitive understanding over technical details, readers will gain data literacy by learning to identify, interpret, and evaluate data analysis procedures and results. They will also develop research skills, enabling them to design, implement, and communicate quantitative text analysis projects effectively. The book places a strong emphasis on programming skills, guiding readers through interactive lessons, tutorials, and lab activities using the R programming language and real-world datasets. This practical textbook is enriched with features that facilitate learning, including thought and practical exercises, a companion website that includes programming demonstrations to develop and augment readers’ recognition of how programming strategies are implemented, and a GitHub repository that contains both a set of interactive R programming lessons and lab exercises, which guide readers through practical hands-on programming applications. This textbook is an essential companion to any linguist looking to learn how to incorporate quantitative data analysis into their work. The Open Access version of this book, available at http://www.taylorfrancis.com, has been made available under a Creative Commons Attribution-Non Commercial-No Derivatives (CC-BY-NC-ND) 4.0 license.
- Published
- 2025
- Full Text
- View/download PDF
3. Model-Assisted Spleen Contouring for Assessing Splenomegaly in Myelofibrosis: A Fast and Reproducible Approach to Evaluate Progression and Treatment Response.
- Author
-
Sharbatdaran, Arman, Cohen, Téa, Dev, Hreedi, Sattar, Usama, Bazojoo, Vahid, Wang, Yin, Hu, Zhongxiu, Zhu, Chenglin, He, Xinzi, Romano, Dominick, Scandura, Joseph M., and Prince, Martin R.
- Subjects
- *
BONE marrow , *SPLEEN , *VOLUME measurements , *DEEP learning , *THERAPEUTICS - Abstract
Background/Objectives: Accurate and reproducible spleen volume measurements are essential for assessing treatment response and disease progression in myelofibrosis. This study evaluates techniques for measuring spleen volume on abdominal MRI. Methods: In 20 patients with bone marrow biopsy-proven myelofibrosis, 5 observers independently measured spleen volume on 3 abdominal MRI pulse sequences, 3D-spoiled gradient echo T1, axial single-shot fast spin echo (SSFSE) T2, and coronal SSFSE T2, using ellipsoidal approximation, manual contouring, and 3D nnU-Net model-assisted contouring, comparing coefficients of variation. Changes in spleen volume were compared to all information to assess which measurement technique tracked disease progression with the greatest accuracy. Results: The coefficient of variation in spleen volume measurements averaging over 3 sequences was significantly lower for model-assisted contouring, 1.6%, and manual contouring, 3.5%, compared to ellipsoidal estimation from 3 dimensions measured on axial and coronal T2 images, 15%, p < 0.001. In 4 subjects with divergent treatment response predictions, model-assisted contouring was consistent with all information while ellipsoidal estimation was not. Manual contouring tracked similarly to model-assisted contouring but required more operator time. Conclusions: Model-assisted segmentations provide efficient and more reproducible spleen volume measurements compared to estimates of spleen volume from ellipsoidal approximations and improve objective determinations of clinical trial enrollment eligibility based upon spleen volume as well as assessments of treatment response. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
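The comparison in the record above hinges on the coefficient of variation (CV) of repeated spleen volume measurements across observers. The Python sketch below shows one way per-patient CVs might be computed and averaged per technique; the array shapes and numbers are illustrative assumptions, not data from the study.

```python
import numpy as np

def mean_cv(volumes):
    """volumes: array of shape (patients, observers) holding spleen volume
    measurements (mL) for one technique. Returns the mean per-patient
    coefficient of variation in percent."""
    per_patient_cv = volumes.std(axis=1, ddof=1) / volumes.mean(axis=1)
    return 100 * per_patient_cv.mean()

# Illustrative numbers only: 3 patients, each measured by 5 observers.
model_assisted = np.array([[1510, 1495, 1502, 1508, 1499],
                           [ 820,  812,  825,  818,  815],
                           [2310, 2298, 2305, 2320, 2301]])
ellipsoidal = np.array([[1700, 1350, 1560, 1480, 1620],
                        [ 900,  760,  840,  700,  880],
                        [2600, 2100, 2450, 2250, 2700]])

print(f"model-assisted CV: {mean_cv(model_assisted):.1f}%")
print(f"ellipsoidal CV:    {mean_cv(ellipsoidal):.1f}%")
```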
4. Indications, Surgical Technique and Outcomes of Laparoscopic Splenectomy: A Retrospective Descriptive Study from Srinagar, India
- Author
-
Mushtaq Chalkoo, Mudasir Habib, Ashiq Hussain Raina, Firdous Hamid, Sajad Nazir Malla, and Imtiyaz Ahmad Malik
- Subjects
laparoscopy ,massive ,perioperative outcome ,reproducible ,Medicine - Abstract
Introduction: Laparoscopic splenectomy has become the preferred approach because it offers several advantages over traditional open surgery, including smaller incisions, decreased postoperative pain, reduced blood loss and shorter hospital stays. For mild to moderately enlarged spleens, it has become the procedure of choice. Although it can be more challenging for massively enlarged spleens, with proper planning and surgical expertise, the procedure can be performed safely. Aim: To assess the indications and outcomes of laparoscopic splenectomy across a range of spleen sizes, from normal to massively enlarged spleens, in a tertiary care hospital. Materials and Methods: A retrospective descriptive study was conducted in the Department of Surgery, Government Medical College, Srinagar, Jammu and Kashmir, India, from May 2018 to May 2023. A total of 41 patients were included in the study. Patients aged 18 to 65 years who had undergone elective laparoscopic splenectomies, with no cutoff for spleen size, were included. The endpoints of the study were to measure perioperative parameters such as operative time, blood loss, postoperative complications, hospital stay and analysis of indications. Continuous variables were expressed as mean±SD and categorical variables were summarised as frequencies and percentages. Results: The study group consisted of 20 (48.80%) males and 21 (51.20%) females. The mean age was 43.42±11.28 years. The most common indication for splenectomy was Immune Thrombocytopenic Purpura (ITP) (n=15), followed by hereditary spherocytosis (n=7). The mean spleen size of 26 patients was 12±1.62 cm; in seven patients, it was 17±0.76 cm; and in eight patients, it was 23.5±2.29 cm. The largest spleen operated on measured 28 cm in size. One patient required conversion to open surgery due to bleeding. Conclusion: Laparoscopic splenectomy is a safe method associated with a low risk of perioperative complications. Proper planning and a reproducible operative technique are critical for success in laparoscopic splenectomy, even for massive spleens.
- Published
- 2025
- Full Text
- View/download PDF
5. Model selection to achieve reproducible associations between resting state EEG features and autism
- Author
-
William E. Carson, Samantha Major, Harshitha Akkineni, Hannah Fung, Elias Peters, Kimberly L. H. Carpenter, Geraldine Dawson, and David E. Carlson
- Subjects
Autism ,Electroencephalography ,EEG ,Resting state ,Reproducible ,Reproducibility ,Medicine ,Science - Abstract
Abstract A concern in the field of autism electroencephalography (EEG) biomarker discovery is their lack of reproducibility. In the present study, we considered the problem of learning reproducible associations between multiple features of resting state (RS) neural activity and autism, using EEG data collected during a RS paradigm from 36 to 96 month-old children diagnosed with autism (N = 224) and neurotypical children (N = 69). Specifically, EEG spectral power and functional connectivity features were used as inputs to a regularized generalized linear model trained to predict diagnostic group (autism versus neurotypical). To evaluate our model, we proposed a procedure that quantified both the predictive generalization and reproducibility of learned associations produced by the model. When prioritizing both model predictive performance and reproducibility of associations, a highly reproducible profile of associations emerged. This profile revealed a distinct pattern of increased gamma power and connectivity in occipital and posterior midline regions associated with an autism diagnosis. Conversely, model selection based on predictive performance alone resulted in non-robust associations. Finally, we built a custom machine learning model that further empirically improved robustness of learned associations. Our results highlight the need for model selection criteria that maximize the scientific utility provided by reproducibility instead of predictive performance.
- Published
- 2024
- Full Text
- View/download PDF
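A regularized generalized linear model of the kind described in the record above can be prototyped with scikit-learn, and the reproducibility of its learned associations probed by checking how consistently each feature keeps the same non-zero coefficient sign across resampled fits. This is a minimal sketch on synthetic data, not the authors' pipeline; the sample sizes, feature count, and sign-consistency score are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

rng = np.random.default_rng(0)
n, p = 293, 40                      # e.g. 224 + 69 children, 40 EEG features (assumed)
X = rng.normal(size=(n, p))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=2.0, size=n) > 0).astype(int)

signs = []
for b in range(100):                # refit the sparse GLM on bootstrap resamples
    Xb, yb = resample(X, y, random_state=b)
    clf = LogisticRegression(penalty="l1", C=0.2, solver="liblinear").fit(Xb, yb)
    signs.append(np.sign(clf.coef_[0]))

signs = np.array(signs)
# Fraction of resamples in which each feature keeps a consistent non-zero sign:
consistency = np.abs(signs.mean(axis=0))
print("most reproducible feature indices:", np.argsort(consistency)[::-1][:5])
```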
6. Physiological signal analysis and open science using the Julia language and associated software.
- Author
-
Datseris, George and Zelko, Jacob S.
- Subjects
JULIA (Computer program language) ,COMPUTER software ,TIME series analysis ,DIGITAL technology ,SIGNAL processing - Abstract
In this mini review, we propose the use of the Julia programming language and its software as a strong candidate for reproducible, efficient, and sustainable physiological signal analysis. First, we highlight available software and Julia communities that provide top-of-the-class algorithms for all aspects of physiological signal processing despite the language's relatively young age. Julia can significantly accelerate both research and software development due to its high-level interactive language and high-performance code generation. It is also particularly suited for open and reproducible science. Openness is supported and welcomed because the overwhelming majority of Julia software programs are open source and developed openly on public platforms, primarily through individual contributions. Such an environment increases the likelihood that an individual not (originally) associated with a software program would still be willing to contribute their code, further promoting code sharing and reuse. On the other hand, Julia's exceptionally strong package manager and surrounding ecosystem make it easy to create self-contained, reproducible projects that can be instantly installed and run, irrespective of processor architecture or operating system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Model selection to achieve reproducible associations between resting state EEG features and autism.
- Author
-
Carson IV, William E., Major, Samantha, Akkineni, Harshitha, Fung, Hannah, Peters, Elias, Carpenter, Kimberly L. H., Dawson, Geraldine, and Carlson, David E.
- Subjects
MACHINE learning ,AUTISTIC children ,AUTISM in children ,FUNCTIONAL connectivity ,AUTISM - Abstract
A concern in the field of autism electroencephalography (EEG) biomarker discovery is their lack of reproducibility. In the present study, we considered the problem of learning reproducible associations between multiple features of resting state (RS) neural activity and autism, using EEG data collected during a RS paradigm from 36 to 96 month-old children diagnosed with autism (N = 224) and neurotypical children (N = 69). Specifically, EEG spectral power and functional connectivity features were used as inputs to a regularized generalized linear model trained to predict diagnostic group (autism versus neurotypical). To evaluate our model, we proposed a procedure that quantified both the predictive generalization and reproducibility of learned associations produced by the model. When prioritizing both model predictive performance and reproducibility of associations, a highly reproducible profile of associations emerged. This profile revealed a distinct pattern of increased gamma power and connectivity in occipital and posterior midline regions associated with an autism diagnosis. Conversely, model selection based on predictive performance alone resulted in non-robust associations. Finally, we built a custom machine learning model that further empirically improved robustness of learned associations. Our results highlight the need for model selection criteria that maximize the scientific utility provided by reproducibility instead of predictive performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. QUANTITATIVE ANALYSIS OF MIRABEGRON ER TABLETS: A COMPREHENSIVE RP-HPLC METHOD AND VALIDATION FOR THE IMPURITY DETERMINATION.
- Author
-
Kumar, G. T. Jyothesh, Andrews, B. S. A., Abbaraju, V. D. N. Kumar, V., Sreeram, Reddy, P. Sunil, and Kola, Avinash Rai
- Subjects
- *
GRADIENT elution (Chromatography) , *HIGH performance liquid chromatography , *QUALITY control , *ACETONITRILE , *DETECTION limit - Abstract
This research aimed to design and evaluate a reverse-phase high-performance liquid chromatography (RP-HPLC) method for the precise quantification of potential impurities in Mirabegron (MIRA) ER Tablets. The developed method utilized an Inertsil C8-3, 150 x 4.6 mm, 3 µm column with a gradient elution program using buffer and acetonitrile as mobile phases. The chromatographic separation achieved on the C8 stationary phase demonstrated robustness and reproducibility, with detection at a wavelength of 240 nm and a constant column temperature of 40°C. The study focused on MIRA and four impurities (MIR-1, Deshydroxy, Diamide-1, and Diamide-2). The method exhibited specificity, precision, linearity, accuracy, robustness, ruggedness, and stability-indicating characteristics. Correlation coefficients exceeding 0.997 for MIRA and its impurities demonstrated the accuracy and sensitivity of the method. Low detection and quantitation limits (ranging from 0.06 ppm to 0.20 ppm) highlighted its high sensitivity. Accuracy was confirmed by recovery rates of 96.1% to 100.3% for all impurities. The stability-indicating capability was validated by forced degradation studies, which met peak purity criteria and achieved mass balance. Robustness studies further established the method's reliability, demonstrating resilience to minor chromatographic variations. The proposed RP-HPLC method provides a dependable, reproducible, accurate, and sensitive means of quantifying MIRA-related substances in MIRA ER Tablets, making it a valuable tool for pharmaceutical quality control. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
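The linearity, detection limit, and quantitation limit figures quoted in the record above follow standard ICH-style calculations: a least-squares calibration line, with LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the residual standard deviation and S the slope. The sketch below illustrates those calculations in Python on made-up calibration data; the concentrations and peak areas are assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical calibration data for one impurity: spiked level (ppm) vs. peak area.
conc = np.array([0.05, 0.10, 0.20, 0.40, 0.80, 1.60])
area = np.array([120., 245., 492., 1010., 1985., 4040.])

slope, intercept = np.polyfit(conc, area, 1)       # least-squares calibration line
pred = slope * conc + intercept
residual_sd = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))
r = np.corrcoef(conc, area)[0, 1]

lod = 3.3 * residual_sd / slope   # limit of detection (ppm)
loq = 10.0 * residual_sd / slope  # limit of quantitation (ppm)

print(f"correlation coefficient r = {r:.4f}")
print(f"LOD ≈ {lod:.3f} ppm, LOQ ≈ {loq:.3f} ppm")
```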
9. Radiomics and Deep Features: Robust Classification of Brain Hemorrhages and Reproducibility Analysis Using a 3D Autoencoder Neural Network.
- Author
-
Bijari, Salar, Sayfollahi, Sahar, Mardokh-Rouhani, Shiwa, Bijari, Sahar, Moradian, Sadegh, Zahiri, Ziba, and Rezaeijo, Seyed Masoud
- Subjects
- *
MACHINE learning , *INTRACRANIAL hemorrhage , *WILCOXON signed-rank test , *FEATURE selection , *RADIOMICS - Abstract
This study evaluates the reproducibility of machine learning models that integrate radiomics and deep features (features extracted from a 3D autoencoder neural network) to classify various brain hemorrhages effectively. Using a dataset of 720 patients, we extracted 215 radiomics features (RFs) and 15,680 deep features (DFs) from CT brain images. With rigorous screening based on Intraclass Correlation Coefficient thresholds (>0.75), we identified 135 RFs and 1054 DFs for analysis. Feature selection techniques such as Boruta, Recursive Feature Elimination (RFE), XGBoost, and ExtraTreesClassifier were utilized alongside 11 classifiers, including AdaBoost, CatBoost, Decision Trees, LightGBM, Logistic Regression, Naive Bayes, Neural Networks, Random Forest, Support Vector Machines (SVM), and k-Nearest Neighbors (k-NN). Evaluation metrics included Area Under the Curve (AUC), Accuracy (ACC), Sensitivity (SEN), and F1-score. The model evaluation involved hyperparameter optimization, a 70:30 train–test split, and bootstrapping, further validated with the Wilcoxon signed-rank test and q-values. Notably, DFs showed higher accuracy. In the case of RFs, the Boruta + SVM combination emerged as the optimal model for AUC, ACC, and SEN, while XGBoost + Random Forest excelled in F1-score. Specifically, RFs achieved AUC, ACC, SEN, and F1-scores of 0.89, 0.85, 0.82, and 0.80, respectively. Among DFs, the ExtraTreesClassifier + Naive Bayes combination demonstrated remarkable performance, attaining an AUC of 0.96, ACC of 0.93, SEN of 0.92, and an F1-score of 0.92. Distinguished models in the RF category included SVM with Boruta, Logistic Regression with XGBoost, SVM with ExtraTreesClassifier, CatBoost with XGBoost, and Random Forest with XGBoost, each yielding 42 significant q-values. In the DFs realm, ExtraTreesClassifier + Naive Bayes, ExtraTreesClassifier + Random Forest, and Boruta + k-NN exhibited robustness, with 43, 43, and 41 significant q-values, respectively. This investigation underscores the potential of synergizing DFs with machine learning models to serve as valuable screening tools, thereby enhancing the interpretation of head CT scans for patients with brain hemorrhages. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
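The modelling workflow in the record above (feature screening, feature selection, a classifier, and AUC/accuracy evaluation on a 70:30 split) can be approximated with a compact scikit-learn pipeline. The sketch below uses tree-based feature selection plus an SVM as a simplified stand-in for the Boruta/ExtraTrees combinations in the paper, applied to synthetic data; it is not the authors' code.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for 720 patients x (radiomics + deep) features.
X, y = make_classification(n_samples=720, n_features=300, n_informative=25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, stratify=y, random_state=0)

pipe = make_pipeline(
    StandardScaler(),
    SelectFromModel(ExtraTreesClassifier(n_estimators=300, random_state=0)),
    SVC(kernel="rbf", probability=True, random_state=0),
)
pipe.fit(X_tr, y_tr)

proba = pipe.predict_proba(X_te)[:, 1]
print(f"AUC = {roc_auc_score(y_te, proba):.2f}, "
      f"ACC = {accuracy_score(y_te, pipe.predict(X_te)):.2f}")
```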
10. Optimized rat models better mimic patients with irinotecan-induced severe diarrhea.
- Author
-
Zheng, Zicong, Du, Ting, Gao, Song, Yin, Taijun, Li, Li, Zhu, Lijun, Singh, Rashim, Sun, Rongjin, and Hu, Ming
- Subjects
- *
RATS , *ANIMAL disease models , *DIARRHEA , *IRINOTECAN , *INTESTINAL injuries , *LABORATORY mice - Abstract
Irinotecan-induced severe diarrhea (IISD) not only limits irinotecan's application but also significantly affects patients' quality of life. However, existing animal models often inadequately represent the dynamics of IISD development, progression, and resolution across multiple chemotherapy cycles, yielding non-reproducible and highly variable responses with limited clinical translation. Our studies aim to establish a reproducible and validated IISD model that better mimics the pathophysiological progression observed in patients, enhancing translational potential. We investigated the impact of dosing regimens (including different doses, infusion times, and two cycles of irinotecan administration), sex, age, tumor-bearing conditions, and irinotecan formulation on IISD incidence and severity in mice and rats. Lastly, we investigated the above factors' impact on the pharmacokinetics of irinotecan, intestinal injury, and carboxylesterase activities. In summary, we established a standard procedure for an optimized IISD model with a highly reproducible severe diarrhea incidence rate (100%) and a low mortality rate (11%) in F344 rats. Additionally, the rats tolerated at least two cycles of irinotecan chemotherapy treatment. In contrast, the mouse model exhibited a suboptimal IISD incidence rate (60%) and an extremely high mortality rate (100%). Notably, the dosing regimen, age, and tumor-bearing conditions of the animals emerged as critical factors in IISD model establishment. In conclusion, our rat IISD model proves superior in mimicking the pathophysiological progression and characteristics of IISD in patients and stands as an effective tool for mechanism and efficacy studies in future chemotherapy-induced gut toxicity research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Towards Reliable and Practical SERS
- Author
-
Aljuhani, Wafaa, Zhang, Yingrui, Li, Chunchun, Xu, Yikai, Bell, Steven E. J., Procházka, Marek, editor, Kneipp, Janina, editor, Zhao, Bing, editor, and Ozaki, Yukihiro, editor
- Published
- 2024
- Full Text
- View/download PDF
12. Physiological signal analysis and open science using the Julia language and associated software
- Author
-
George Datseris and Jacob S. Zelko
- Subjects
digital signal processing ,physiological signals ,complexity measures ,Julia ,time series analysis ,reproducible ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
In this mini review, we propose the use of the Julia programming language and its software as a strong candidate for reproducible, efficient, and sustainable physiological signal analysis. First, we highlight available software and Julia communities that provide top-of-the-class algorithms for all aspects of physiological signal processing despite the language’s relatively young age. Julia can significantly accelerate both research and software development due to its high-level interactive language and high-performance code generation. It is also particularly suited for open and reproducible science. Openness is supported and welcomed because the overwhelming majority of Julia software programs are open source and developed openly on public platforms, primarily through individual contributions. Such an environment increases the likelihood that an individual not (originally) associated with a software program would still be willing to contribute their code, further promoting code sharing and reuse. On the other hand, Julia’s exceptionally strong package manager and surrounding ecosystem make it easy to create self-contained, reproducible projects that can be instantly installed and run, irrespective of processor architecture or operating system.
- Published
- 2024
- Full Text
- View/download PDF
13. Utilizing stability criteria in choosing feature selection methods yields reproducible results in microbiome data.
- Author
-
Jiang, Lingjing, Haiminen, Niina, Carrieri, Anna-Paola, Huang, Shi, Vázquez-Baeza, Yoshiki, Parida, Laxmi, Kim, Ho-Cheol, Swafford, Austin D, Knight, Rob, and Natarajan, Loki
- Subjects
Reproducibility of Results ,Algorithms ,Microbiota ,classification ,feature selection ,microbiome ,prediction ,reproducible ,stability ,Bioengineering ,Statistics ,Other Mathematical Sciences ,Statistics & Probability - Abstract
Feature selection is indispensable in microbiome data analysis, but it can be particularly challenging as microbiome data sets are high dimensional, underdetermined, sparse and compositional. Great efforts have recently been made in developing new methods for feature selection that handle the above data characteristics, but almost all methods were evaluated based on the performance of model predictions. However, little attention has been paid to a fundamental question: how appropriate are those evaluation criteria? Most feature selection methods control the model fit, but the ability to identify meaningful subsets of features cannot be evaluated simply based on prediction accuracy. If tiny changes to the data would lead to large changes in the chosen feature subset, then many selected features are likely to be a data artifact rather than real biological signal. This crucial need to identify relevant and reproducible features motivated reproducibility evaluation criteria such as Stability, which quantifies how robust a method is to perturbations in the data. In our paper, we compare the performance of popular model prediction metrics (MSE or AUC) with the proposed reproducibility criterion, Stability, in evaluating four widely used feature selection methods in both simulations and experimental microbiome applications with continuous or binary outcomes. We conclude that Stability is a preferred feature selection criterion over model prediction metrics because it better quantifies the reproducibility of the feature selection method.
- Published
- 2022
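Stability, as used in the record above, quantifies how much the selected feature set changes under small perturbations of the data. A common way to estimate it is to rerun the selector on bootstrap resamples and average the pairwise similarity (e.g., the Jaccard index) of the selected subsets. The Python sketch below, using lasso as the selector on synthetic data, is a minimal illustration under those assumptions rather than the metric implementation from the paper.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import Lasso
from sklearn.utils import resample

rng = np.random.default_rng(1)
n, p = 150, 80
X = rng.normal(size=(n, p))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n)   # 5 truly relevant features

def selected_features(Xb, yb, alpha=0.1):
    """Return the set of feature indices with non-zero lasso coefficients."""
    coef = Lasso(alpha=alpha).fit(Xb, yb).coef_
    return frozenset(np.flatnonzero(coef))

# Feature subsets selected on 30 bootstrap resamples of the data.
subsets = [selected_features(*resample(X, y, random_state=b)) for b in range(30)]

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 1.0

stability = np.mean([jaccard(a, b) for a, b in combinations(subsets, 2)])
print(f"selection stability (mean pairwise Jaccard): {stability:.2f}")
```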
14. rphylopic: An R package for fetching, transforming, and visualising PhyloPic silhouettes
- Author
-
William Gearty and Lewis A. Jones
- Subjects
accessible ,data visualisation ,database ,R programming ,reproducible ,Ecology ,QH540-549.5 ,Evolution ,QH359-425 - Abstract
Abstract Effective data visualisation is vital for data exploration, analysis and communication in research. In ecology and evolutionary biology, data are often associated with various taxonomic entities. Graphics of organisms associated with these taxa are valuable for framing results within a broader biological context. However, acquiring and using such resources can be challenging due to availability and licensing constraints. The PhyloPic database solves many of these issues by making organism silhouettes freely available. Tools that integrate this database with existing research workflows are needed to remove hurdles associated with data visualisation in the biological sciences. Here, we introduce rphylopic, an R package for fetching, transforming and visualising silhouettes of organisms from the PhyloPic database. In addition to making over 8000 organism silhouettes available within the R programming language, rphylopic empowers users to modify the appearance of these silhouettes for ultimate customisability when coding production–quality visualisations in both base R and ggplot2 workflows. In this work, we provide details about how the package can be installed, its implementation and potential use cases. For the latter, we showcase three examples across the ecology and evolutionary biology spectrum. Our hope is that rphylopic will make it easier for biologists to develop more accessible and engaging data visualisations by making external resources readily accessible, customisable and usable within R. In turn, by integrating into existing workflows, rphylopic helps to ensure that science is reproducible and accessible.
- Published
- 2023
- Full Text
- View/download PDF
15. palaeoverse: A community‐driven R package to support palaeobiological analysis
- Author
-
Lewis A. Jones, William Gearty, Bethany J. Allen, Kilian Eichenseer, Christopher D. Dean, Sofía Galván, Miranta Kouvari, Pedro L. Godoy, Cecily S. C. Nicholl, Lucas Buffan, Erin M. Dillon, Joseph T. Flannery‐Sutherland, and Alfio Alessandro Chiarenza
- Subjects
analytical palaeobiology ,computational palaeobiology ,R programming ,readable ,reproducible ,reusable ,Ecology ,QH540-549.5 ,Evolution ,QH359-425 - Abstract
Abstract The open‐source programming language 'R' has become a standard tool in the palaeobiologist's toolkit. Its popularity within the palaeobiological community continues to grow, with published articles increasingly citing the usage of R and R packages. However, there is currently a lack of agreed standards for data preparation and of available frameworks to support the implementation of such standards. Consequently, data preparation workflows are often unclear and not reproducible, even when code is provided. Moreover, due to a lack of code accessibility and documentation, palaeobiologists are often forced to ‘reinvent the wheel’ to find solutions to issues already solved by other members of the community. Here, we introduce palaeoverse, a community‐driven R package to aid data preparation and exploration for quantitative palaeobiological research. The package is freely available and has three core principles: (1) streamline data preparation and analyses; (2) enhance code readability; and (3) improve reproducibility of results. To develop these aims, we assessed the analytical needs of the broader palaeobiological community using an online survey, in addition to incorporating our own experiences. In this work, we first report the findings of the survey, which shaped the development of the package. Subsequently, we describe and demonstrate the functionality available in palaeoverse and provide usage examples. Finally, we discuss the resources we have made available for the community and our future plans for the broader Palaeoverse project. palaeoverse is a community‐driven R package for palaeobiology, developed with the intention of bringing palaeobiologists together to establish agreed standards for high‐quality quantitative research. The package provides a user‐friendly platform for preparing data for analysis with well‐documented open‐source code to enhance transparency. The functionality available in palaeoverse improves code reproducibility and accessibility, which is beneficial for both the review process and future research.
- Published
- 2023
- Full Text
- View/download PDF
16. Accuracy of a Commercial Large Language Model (ChatGPT) to Perform Disaster Triage of Simulated Patients Using the Simple Triage and Rapid Treatment (START) Protocol: Gage Repeatability and Reproducibility Study.
- Author
-
Franc, Jeffrey Micheal, Hertelendy, Attila Julius, Cheng, Lenard, Hata, Ryan, and Verde, Manuela
- Subjects
LANGUAGE models ,NATURAL language processing ,MEDICAL personnel ,ARTIFICIAL intelligence ,DISASTER medicine - Abstract
Background: The release of ChatGPT (OpenAI) in November 2022 drastically reduced the barrier to using artificial intelligence by allowing a simple web-based text interface to a large language model (LLM). One use case where ChatGPT could be useful is in triaging patients at the site of a disaster using the Simple Triage and Rapid Treatment (START) protocol. However, LLMs experience several common errors including hallucinations (also called confabulations) and prompt dependency. Objective: This study addresses the research problem: "Can ChatGPT adequately triage simulated disaster patients using the START protocol?" by measuring three outcomes: repeatability, reproducibility, and accuracy. Methods: Nine prompts were developed by 5 disaster medicine physicians. A Python script queried ChatGPT Version 4 for each prompt combined with 391 validated simulated patient vignettes. Ten repetitions of each combination were performed for a total of 35,190 simulated triages. A reference standard START triage code for each simulated case was assigned by 2 disaster medicine specialists (JMF and MV), with a third specialist (LC) added if the first two did not agree. Results were evaluated using a gage repeatability and reproducibility study (gage R and R). Repeatability was defined as variation due to repeated use of the same prompt. Reproducibility was defined as variation due to the use of different prompts on the same patient vignette. Accuracy was defined as agreement with the reference standard. Results: Although 35,102 (99.7%) queries returned a valid START score, there was considerable variability. Repeatability (use of the same prompt repeatedly) was 14% of the overall variation. Reproducibility (use of different prompts) was 4.1% of the overall variation. The accuracy of ChatGPT for START was 63.9% with a 32.9% overtriage rate and a 3.1% undertriage rate. Accuracy varied by prompt with a maximum of 71.8% and a minimum of 46.7%. Conclusions: This study indicates that ChatGPT version 4 is insufficient to triage simulated disaster patients via the START protocol. It demonstrated suboptimal repeatability and reproducibility. The overall accuracy of triage was only 63.9%. Health care professionals are advised to exercise caution while using commercial LLMs for vital medical determinations, given that these tools may commonly produce inaccurate data, colloquially referred to as hallucinations or confabulations. Artificial intelligence–guided tools should undergo rigorous statistical evaluation—using methods such as gage R and R—before implementation into clinical settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
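The study in the record above queried ChatGPT repeatedly for every prompt and patient vignette and then scored repeatability, reproducibility, and accuracy. The sketch below shows how such a repetition loop and the accuracy/over-triage/under-triage tallies might be organised in Python. The `query_triage` function is a hypothetical placeholder for the actual LLM call, and the START ordering used to define over- and under-triage is an assumption for illustration.

```python
import random
from collections import Counter

START_ORDER = {"green": 0, "yellow": 1, "red": 2, "black": 3}  # assumed severity ordering

def query_triage(prompt: str, vignette: str) -> str:
    """Hypothetical stand-in for an LLM call returning a START colour."""
    return random.choice(list(START_ORDER))

def evaluate(prompts, vignettes, reference, repetitions=10):
    """Tally correct, over-triaged, and under-triaged answers over all
    prompt x vignette x repetition combinations."""
    tallies = Counter()
    for prompt in prompts:
        for vid, vignette in vignettes.items():
            for _ in range(repetitions):
                answer = query_triage(prompt, vignette)
                truth = reference[vid]
                if answer == truth:
                    tallies["correct"] += 1
                elif START_ORDER[answer] > START_ORDER[truth]:
                    tallies["overtriage"] += 1
                else:
                    tallies["undertriage"] += 1
    total = sum(tallies.values())
    return {k: v / total for k, v in tallies.items()}

# Toy example with two prompts and two vignettes.
prompts = ["Triage this patient using START.", "Assign a START colour:"]
vignettes = {1: "walking, mild laceration", 2: "not breathing after airway repositioning"}
reference = {1: "green", 2: "black"}
print(evaluate(prompts, vignettes, reference))
```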
17. Infinite-Precision Inner Product and Sparse Matrix-Vector Multiplication Using Ozaki Scheme with Dot2 on Manycore Processors
- Author
-
Mukunoki, Daichi, Ozaki, Katsuhisa, Ogita, Takeshi, Imamura, Toshiyuki, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Wyrzykowski, Roman, editor, Dongarra, Jack, editor, Deelman, Ewa, editor, and Karczewski, Konrad, editor
- Published
- 2023
- Full Text
- View/download PDF
18. rphylopic: An R package for fetching, transforming, and visualising PhyloPic silhouettes.
- Author
-
Gearty, William and Jones, Lewis A.
- Subjects
SILHOUETTES ,DATABASES ,PROGRAMMING languages ,SELF-efficacy ,COMMUNICATIONS research ,SYNTHETIC biology - Abstract
Effective data visualisation is vital for data exploration, analysis and communication in research. In ecology and evolutionary biology, data are often associated with various taxonomic entities. Graphics of organisms associated with these taxa are valuable for framing results within a broader biological context. However, acquiring and using such resources can be challenging due to availability and licensing constraints. The PhyloPic database solves many of these issues by making organism silhouettes freely available. Tools that integrate this database with existing research workflows are needed to remove hurdles associated with data visualisation in the biological sciences. Here, we introduce rphylopic, an R package for fetching, transforming and visualising silhouettes of organisms from the PhyloPic database. In addition to making over 8000 organism silhouettes available within the R programming language, rphylopic empowers users to modify the appearance of these silhouettes for ultimate customisability when coding production-quality visualisations in both base R and ggplot2 workflows. In this work, we provide details about how the package can be installed, its implementation and potential use cases. For the latter, we showcase three examples across the ecology and evolutionary biology spectrum. Our hope is that rphylopic will make it easier for biologists to develop more accessible and engaging data visualisations by making external resources readily accessible, customisable and usable within R. In turn, by integrating into existing workflows, rphylopic helps to ensure that science is reproducible and accessible. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
19. palaeoverse: A community‐driven R package to support palaeobiological analysis.
- Author
-
Jones, Lewis A., Gearty, William, Allen, Bethany J., Eichenseer, Kilian, Dean, Christopher D., Galván, Sofía, Kouvari, Miranta, Godoy, Pedro L., Nicholl, Cecily S. C., Buffan, Lucas, Dillon, Erin M., Flannery‐Sutherland, Joseph T., and Chiarenza, Alfio Alessandro
- Subjects
PROGRAMMING languages ,PUBLISHED articles ,PALEOBIOLOGY ,VIRTUAL communities ,QUANTITATIVE research - Abstract
Copyright of Methods in Ecology & Evolution is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
20. High-performance thin-layer chromatography fingerprint profile analysis and spectro-densitometric evaluation of antiproliferative antioxidants such as ellagic acid and gallic acid from four widely used Terminalia species.
- Author
-
Bidikar, Chaitrali M., Hurkadale, Pramod J., Nandanwadkar, Shrikrishna M., Hegde, Harsha V., Singh, Sneha, Khale, Abhijeet, and Phanse, Manjusha
- Abstract
The emerging need for quality control to assess the chemical composition and stability of herbal raw materials has grown significantly along with their demand. The proposed research focuses on phytochemical analysis of hydro-alcoholic bark and fruit extracts from Terminalia (arjuna, bellirica, chebula, and catappa) using the high-performance thin-layer chromatography (HPTLC) technique. HPTLC fingerprinting carried out on precoated silica gel F254 TLC plates revealed the presence of alkaloids, flavonoids, tannins, phenols, and antioxidants. The developed RP-HPTLC method was validated to quantify gallic acid (GA) and ellagic acid (EA). The calibration plot from linear regression analysis demonstrated a good polynomial regression relationship, with R values of 99.99% for the peak areas of GA and EA, respectively. The calibration range for GA and EA is 100–700 ng per band. Spectro-densitometric scanning verified GA and EA by four-way confirmation via RF, chromatogram, spectra, and visual comparison of chromatograms. The method was developed and validated in accordance with the International Council for Harmonisation guidelines. From the validation results, it can be concluded that this method would prove an excellent alternative to existing costly techniques such as gas chromatography and high-performance liquid chromatography. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
21. Determination of a positive response in the Ames Salmonella mutagenicity assay.
- Author
-
Zeiger, Errol
- Subjects
MUTAGENS ,MUTAGENICITY testing ,SALMONELLA ,GENETIC testing ,TOXICITY testing - Abstract
Genetic toxicology tests are used to categorize substances as genotoxic and potentially carcinogenic. In general, test results are designated as mutagenic, not mutagenic, or inconclusive and, depending on its potential use and applicable regulations, a mutagenic result can restrict or remove a substance from further development, or assign limits to its use. In these tests, mutation responses form a continuum without a clear delineation between an increase over the background, untreated, mutant frequency and a frequency that would define the test substance as a mutagen and a potential carcinogenic hazard. This situation is illustrated using the Salmonella mutagenicity (Ames) test which is the initial, and often only, test used to characterize substances as mutagenic or nonmutagenic. It has its widest use by industry and regulatory authorities to identify potential carcinogens among chemicals in development. The OECD Test Guideline No. 471 has been adopted by regulatory agencies internationally, and describes the minimum requirements for a negative response, but does not provide a specific approach for evaluating the test data. The most widely used criterion for making yes‐or‐no mutagenicity decisions is a 2‐ or 3‐fold increase over the background (solvent) mutant frequency. Other approaches rely on formal statistics and/or expert judgment. These approaches and recently proposed modifications are evaluated here. Recommendations are made that are in conformity with the OECD guideline and are based on biological relevance and the biology of the mutagenic response rather than on arbitrary decision points (e.g., ≥2‐fold increase or p ≤.05). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. Longitudinal home-cage automated assessment of climbing behavior shows sexual dimorphism and aging-related decrease in C57BL/6J healthy mice and allows early detection of motor impairment in the N171-82Q mouse model of Huntington's disease.
- Author
-
Bains, Rasneer S., Forrest, Hamish, Sillito, Rowland R., Armstrong, J. Douglas, Stewart, Michelle, Nolan, Patrick M., and Wells, Sara E.
- Subjects
HUNTINGTON disease ,LABORATORY mice ,SEXUAL dimorphism ,ANIMAL disease models ,HUMAN sexuality - Abstract
Monitoring the activity of mice within their home cage is proving to be a powerful tool for revealing subtle and early-onset phenotypes in mouse models. Video-tracking, in particular, lends itself to automated machine-learning technologies that have the potential to improve the manual annotations carried out by humans. This type of recording and analysis is particularly powerful in objective phenotyping, monitoring behaviors with no experimenter intervention. Automated home-cage testing allows the recording of non-evoked voluntary behaviors, which do not require any contact with the animal or exposure to specialist equipment. By avoiding stress deriving from handling, this approach, on the one hand, increases the welfare of experimental animals and, on the other hand, increases the reliability of results excluding confounding effects of stress on behavior. In this study, we show that the monitoring of climbing on the wire cage lid of a standard individually ventilated cage (IVC) yields reproducible data reflecting complex phenotypes of individual mouse inbred strains and of a widely used model of neurodegeneration, the N171-82Q mouse model of Huntington's disease (HD). Measurements in the home-cage environment allowed for the collection of comprehensive motor activity data, which revealed sexual dimorphism, daily biphasic changes, and aging-related decrease in healthy C57BL/6J mice. Furthermore, home-cage recording of climbing allowed early detection of motor impairment in the N171-82Q HD mouse model. Integrating cage-floor activity with cage-lid activity (climbing) has the potential to greatly enhance the characterization of mouse strains, detecting early and subtle signs of disease and increasing reproducibility in preclinical studies. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. Longitudinal home-cage automated assessment of climbing behavior shows sexual dimorphism and aging-related decrease in C57BL/6J healthy mice and allows early detection of motor impairment in the N171-82Q mouse model of Huntington’s disease
- Author
-
Rasneer S. Bains, Hamish Forrest, Rowland R. Sillito, J. Douglas Armstrong, Michelle Stewart, Patrick M. Nolan, and Sara E. Wells
- Subjects
automated ,neurodegeneration ,motor function ,reproducible ,welfare ,Huntington’s disease (HD) ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Monitoring the activity of mice within their home cage is proving to be a powerful tool for revealing subtle and early-onset phenotypes in mouse models. Video-tracking, in particular, lends itself to automated machine-learning technologies that have the potential to improve the manual annotations carried out by humans. This type of recording and analysis is particularly powerful in objective phenotyping, monitoring behaviors with no experimenter intervention. Automated home-cage testing allows the recording of non-evoked voluntary behaviors, which do not require any contact with the animal or exposure to specialist equipment. By avoiding stress deriving from handling, this approach, on the one hand, increases the welfare of experimental animals and, on the other hand, increases the reliability of results excluding confounding effects of stress on behavior. In this study, we show that the monitoring of climbing on the wire cage lid of a standard individually ventilated cage (IVC) yields reproducible data reflecting complex phenotypes of individual mouse inbred strains and of a widely used model of neurodegeneration, the N171-82Q mouse model of Huntington’s disease (HD). Measurements in the home-cage environment allowed for the collection of comprehensive motor activity data, which revealed sexual dimorphism, daily biphasic changes, and aging-related decrease in healthy C57BL/6J mice. Furthermore, home-cage recording of climbing allowed early detection of motor impairment in the N171-82Q HD mouse model. Integrating cage-floor activity with cage-lid activity (climbing) has the potential to greatly enhance the characterization of mouse strains, detecting early and subtle signs of disease and increasing reproducibility in preclinical studies.
- Published
- 2023
- Full Text
- View/download PDF
24. Recent progress in nanocomposites of carbon dioxide fixation derived reproducible biomedical polymers
- Author
-
Xin Liu, Zhiwen Jiang, Dejun Xing, Yan Yang, Zhiying Li, and Zhiqiang Sun
- Subjects
nanocomposites ,carbon dioxide fixation ,reproducible ,biomedical polymers ,decarbonization ,Chemistry ,QD1-999 - Abstract
In recent years, the environmental problems accompanying the extensive application of biomedical polymer materials produced from fossil fuels have attracted more and more attention. As many biomedical polymer products are disposable, their life cycle is relatively short. Most used or expired biomedical polymer products need to be incinerated, which increases the emission of carbon dioxide (CO2). Developing biomedical products based on CO2 fixation derived polymers with reproducible sources, and gradually replacing their unsustainable fossil-based counterparts, will promote the recycling of CO2 in this field and help to control the greenhouse effect. Unfortunately, most of the existing polymer materials from renewable raw materials have some property shortcomings, which make them unable to meet the increasingly demanding quality and property requirements of biomedical products. To overcome these shortcomings, much time and effort has been dedicated to applying nanotechnology in this field. The present paper reviews recent advances in nanocomposites of CO2 fixation derived reproducible polymers for biomedical applications, and several promising strategies for further research directions in this field are highlighted.
- Published
- 2022
- Full Text
- View/download PDF
25. Two arms-three instruments robot-assisted laparoscopic hysterectomy: A reproducible technique
- Author
-
Rooma Sinha, Bana Rupa, and Girija Shankar Mohanty
- Subjects
Robotic ,Hysterectomy ,Technique ,Reproducible ,Surgery ,RD1-811 - Abstract
Although laparoscopic hysterectomy has been used for more than three decades, it has not been universally adopted owing to its steep learning curve. The robotic platform can bridge this gap and reduce the need for open hysterectomy, offering enhanced dexterity, accurate depth perception through 3D vision, and intuitive wristed movements. This technical note introduces a two arms-three instruments “Sinha-Apollo technique” for the da Vinci Si system for performing robot-assisted laparoscopic hysterectomy in simplified and reproducible steps.
- Published
- 2021
- Full Text
- View/download PDF
26. Reproducible BLAS Routines with Tunable Accuracy Using Ozaki Scheme for Many-Core Architectures
- Author
-
Mukunoki, Daichi, Ogita, Takeshi, Ozaki, Katsuhisa, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Wyrzykowski, Roman, editor, Deelman, Ewa, editor, Dongarra, Jack, editor, and Karczewski, Konrad, editor
- Published
- 2020
- Full Text
- View/download PDF
27. Tacit Knowledge Revisited
- Author
-
Burke, Derek and Burke, Derek
- Published
- 2020
- Full Text
- View/download PDF
28. Proof-of-Concept Organ-on-Chip Study: Topical Cinnamaldehyde Exposure of Reconstructed Human Skin with Integrated Neopapillae Cultured under Dynamic Flow.
- Author
-
Vahav, Irit, Thon, Maria, van den Broek, Lenie J., Spiekstra, Sander W., Atac, Beren, Lindner, Gerd, Schimek, Katharina, Marx, Uwe, and Gibbs, Susan
- Subjects
- *
ALLERGENS , *SKIN , *EPIDERMIS , *LACTATE dehydrogenase , *PERSONAL care products industry - Abstract
Pharmaceutical and personal care industries require human-representative models for testing to ensure the safety of their products. A major route of penetration into our body after substance exposure is via the skin. Our aim was to generate robust culture conditions for a next generation human skin-on-chip model containing neopapillae and to establish proof-of-concept testing with the sensitizer, cinnamaldehyde. Reconstructed human skin, consisting of a stratified and differentiated epidermis on a fibroblast-populated hydrogel containing neopapillae spheroids (RhS-NP), was cultured air-exposed and under dynamic flow for 10 days. The robustness of three independent experiments, each with up to 21 intra-experiment replicates, was investigated. The epidermis was seen to invaginate into the hydrogel towards the neopapillae spheroids. Daily measurements of lactate dehydrogenase (LDH) and glucose levels within the culture medium demonstrated high viability and stable metabolic activity throughout the culture period in all three independent experiments and in the replicates within an experiment. Topical cinnamaldehyde exposure to RhS-NP resulted in dose-dependent cytotoxicity (increased LDH release) and elevated cytokine secretion of contact sensitizer specific IL-18, pro-inflammatory IL-1β, inflammatory IL-23 and IFN-γ, as well as anti-inflammatory IL-10 and IL-12p70. This study demonstrates the robustness and feasibility of complex next generation skin models for investigating skin immunotoxicity. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
29. VisU-HydRA: A Computational Toolbox for Groundwater Contaminant Transport to Support Risk-Based Decision Making
- Author
-
Maria Morvillo, Jinwoo Im, and Felipe P. J. de Barros
- Subjects
uncertainty quantification (UQ) ,stochastic hydrogeology ,reproducible ,decision making ,probabilistic risk analysis ,Science - Abstract
Obtaining accurate and deterministic predictions of the risks associated with the presence of contaminants in aquifers is an elusive goal given the presence of heterogeneity in hydrological properties and limited site characterization data. For such reasons, a probabilistic framework is needed to quantify the risks in groundwater systems. In this work, we present a computational toolbox, VisU-HydRA, that aims to statistically characterize and visualize metrics that are relevant in risk analysis with the ultimate goal of supporting decision making. The VisU-HydRA computational toolbox is an open-source Python package that can be linked to a series of existing codes such as MODFLOW and PAR2, a GPU-accelerated transport simulator. To illustrate the capabilities of the computational toolbox, we simulate flow and transport in a heterogeneous aquifer within a Monte Carlo framework. The computational toolbox allows users to compute the probability of a contaminant’s concentration exceeding a safe threshold value as well as the uncertainty associated with the loss of resilience of the aquifer. To ensure consistency and a reproducible workflow, a step-by-step tutorial is provided and available on a GitHub repository.
- Published
- 2022
- Full Text
- View/download PDF
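The core risk metric mentioned in the record above, the probability that a contaminant concentration exceeds a safe threshold, drops naturally out of a Monte Carlo ensemble: count the realizations in which the simulated concentration at the compliance point exceeds the threshold. The short Python sketch below illustrates the idea with a synthetic lognormal ensemble; it is not part of the VisU-HydRA code, and the distribution and threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for peak concentrations (mg/L) at a compliance well taken from
# 5,000 Monte Carlo flow-and-transport realizations.
peak_concentration = rng.lognormal(mean=-1.0, sigma=0.8, size=5000)

threshold = 0.5  # assumed regulatory limit (mg/L)
exceedance_probability = np.mean(peak_concentration > threshold)

# A simple bootstrap gives a feel for the sampling uncertainty of that estimate.
boot = [np.mean(rng.choice(peak_concentration, size=peak_concentration.size) > threshold)
        for _ in range(200)]
print(f"P(concentration > {threshold} mg/L) ≈ {exceedance_probability:.3f} "
      f"(95% CI ≈ {np.percentile(boot, 2.5):.3f}–{np.percentile(boot, 97.5):.3f})")
```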
30. Application of Time Series Analysis to Estimate Drawdown From Multiple Well Fields
- Author
-
Davíd A. Brakenhoff, Martin A. Vonk, Raoul A. Collenteur, Marco Van Baar, and Mark Bakker
- Subjects
time series analysis ,groundwater ,decision support ,reproducible ,model selection ,Hantush response function ,Science - Abstract
In 2018–2020, meteorological droughts over Northwestern Europe caused severe declines in groundwater heads with significant damage to groundwater-dependent ecosystems and agriculture. The response of the groundwater system to different hydrological stresses is valuable information for decision-makers. In this paper, a reproducible, data-driven approach using open-source software is proposed to quantify the effects of different hydrological stresses on heads. A scripted workflow was developed using the open-source Pastas software for time series modeling of heads. For each head time series, the best model structure and relevant hydrological stresses (rainfall, evaporation, river stages, and pumping at one or more well fields) were selected iteratively. A new method was applied to model multiple well fields with a single response function, where the response was scaled by the distances between the pumping and observation wells. Selection of the best model structure was performed through reliability checking based on four criteria. The time series model of each observation well represents an independent estimate of the contribution of different hydrological stresses to the head and is based exclusively on observed data. The approach was applied to estimate the drawdown caused by nearby well fields to 250 observed head time series measured at 122 locations in the eastern part of the Netherlands, a country where summer droughts can cause problems, even though the country is better known for problems with too much water. Reliable models were obtained for 126 head time series of which 78 contain one or more well fields as a contributing stress. The spatial variation of the modeled responses to pumping at the well fields show the expected decline with distance from the well field, even though all responses were modeled independently. An example application at one well field showed how the head response to pumping varies per aquifer. Time series analysis was used to determine the feasibility of reducing pumping rates to mitigate large drawdowns during droughts, which depends on the magnitude and response time of the groundwater system to changes in pumping. This is salient information for decision-makers. This article is part of the special issue “Rapid, Reproducible, and Robust Environmental Modeling for Decision Support: Worked Examples and Open-Source Software Tools”.
- Published
- 2022
- Full Text
- View/download PDF
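The key modelling idea in the record above, a single pumping response shared by multiple well fields and scaled by the distance between pumping and observation wells, can be pictured as a convolution of each pumping record with a common impulse response whose gain decays with distance. The numpy sketch below illustrates that superposition with a simple exponential response; it is a conceptual illustration, not the Pastas implementation or the Hantush function used in the paper, and all parameter values are assumptions.

```python
import numpy as np

days = np.arange(3650)                       # 10 years of daily time steps

def exponential_response(t, gain, tau):
    """Impulse response of the head (m per unit pumping) with memory tau (days)."""
    return gain * np.exp(-t / tau) / tau

def head_contribution(pumping, distance, gain0=-5e-4, tau=200.0, r_ref=1000.0):
    """Convolve a pumping record with a shared response whose gain is scaled
    by the distance to the observation well (simple exponential decay with r)."""
    gain = gain0 * np.exp(-distance / r_ref)
    block = exponential_response(days, gain, tau)
    return np.convolve(pumping, block)[: len(pumping)]

# Two well fields with different pumping histories and distances to one observation well.
pumping_a = np.full(days.size, 4000.0)                 # m3/day, constant pumping
pumping_b = np.where(days > 1500, 6000.0, 0.0)         # m3/day, starts later
head_change = head_contribution(pumping_a, 800.0) + head_contribution(pumping_b, 2500.0)
print(f"simulated head change after 10 years: {head_change[-1]:.2f} m")
```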
31. Sustainable computational science: the ReScience initiative
- Author
-
Rougier, Nicolas P, Hinsen, Konrad, Alexandre, Frédéric, Arildsen, Thomas, Barba, Lorena A, Benureau, Fabien CY, Brown, C Titus, de Buyl, Pierre, Caglayan, Ozan, Davison, Andrew P, Delsuc, Marc-André, Detorakis, Georgios, Diem, Alexandra K, Drix, Damien, Enel, Pierre, Girard, Benoît, Guest, Olivia, Hall, Matt G, Henriques, Rafael N, Hinaut, Xavier, Jaron, Kamil S, Khamassi, Mehdi, Klein, Almar, Manninen, Tiina, Marchesi, Pietro, McGlinn, Daniel, Metzner, Christoph, Petchey, Owen, Plesser, Hans Ekkehard, Poisot, Timothée, Ram, Karthik, Ram, Yoav, Roesch, Etienne, Rossant, Cyrille, Rostami, Vahid, Shifman, Aaron, Stachelek, Joseph, Stimberg, Marcel, Stollmeier, Frank, Vaggi, Federico, Viejo, Guillaume, Vitay, Julien, Vostinar, Anya E, Yurchak, Roman, and Zito, Tiziano
- Subjects
Information and Computing Sciences ,Software Engineering ,Networking and Information Technology R&D (NITRD) ,Computational science ,Open science ,Publication ,Reproducible ,Replicable ,Sustainable ,GitHub ,Open peer-review - Abstract
Computer science offers a large set of tools for prototyping, writing, running, testing, validating, sharing and reproducing results; however, computational science lags behind. In the best case, authors may provide their source code as a compressed archive and they may feel confident their research is reproducible. But this is not exactly true. James Buckheit and David Donoho proposed more than two decades ago that an article about computational results is advertising, not scholarship. The actual scholarship is the full software environment, code, and data that produced the result. This implies new workflows, in particular in peer-reviews. Existing journals have been slow to adapt: source codes are rarely requested and are hardly ever actually executed to check that they produce the results advertised in the article. ReScience is a peer-reviewed journal that targets computational research and encourages the explicit replication of already published research, promoting new and open-source implementations in order to ensure that the original research can be replicated from its description. To achieve this goal, the whole publishing chain is radically different from other traditional scientific journals. ReScience resides on GitHub where each new implementation of a computational study is made available together with comments, explanations, and software tests.
- Published
- 2017
32. Parameter Optimisation for FCϵRIγ Pathway to Two Different Datasets Using Least-Squares Optimisation.
- Author
-
Ismail, Nurul Izza
- Subjects
PROTEIN-tyrosine kinases ,CELL receptors ,GREEDY algorithms ,MODULAR design ,PARAMETER estimation ,PHOSPHATIDYLINOSITOL 3-kinases - Abstract
Syk is a tyrosine kinase that is important for bridging receptor ligation and downstream signalling such as Ca2+ and PI3K. Once the cell receptor binds its ligand, FCϵRIγ (the ITAM receptor) is recruited and phosphorylated by Lyn. The phosphorylated ITAM then recruits the protein tyrosine kinase Syk. The previously developed FCϵRIγ (FCϵ) model contained a greater level of complexity. This study aims to build a simple model of FCϵ signalling that still represents the underlying biology. Parameter estimation is addressed using least-squares optimisation, which implements the Levenberg-Marquardt gradient method (a greedy algorithm) to minimise an objective function. More importantly, the model was fitted to two data sets that captured temporal FCϵ, Syk and Grb2 phosphorylation. Model uncertainty is often treated as an analysis carried out after model construction and calibration have been completed; this study instead assessed sensitivity to parameter choices and model uncertainty as part of the analysis. Modular design principles are applied to the construction of the model, and the model is designed to be reproducible. In other words, it can be applied in new simulation conditions or optimised to new datasets for new experimental situations. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
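As a concrete illustration of the estimation strategy summarised above, the following sketch fits the rate constants of a toy two-state phosphorylation model to time-course data with Levenberg-Marquardt least squares. It is only a sketch under stated assumptions: the toy ODE, the parameter names and the synthetic data are illustrative and are not the published FCϵ model.

# Minimal sketch (not the published FCϵ model): fit the rate constants of a toy
# phosphorylation ODE to time-course data with Levenberg-Marquardt least squares.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def model(t, y, k_phos, k_dephos):
    # y[0]: unphosphorylated fraction, y[1]: phosphorylated fraction
    u, p = y
    return [-k_phos * u + k_dephos * p, k_phos * u - k_dephos * p]

def simulate(params, t_obs):
    k_phos, k_dephos = params
    sol = solve_ivp(model, (0.0, t_obs[-1]), [1.0, 0.0],
                    args=(k_phos, k_dephos), t_eval=t_obs)
    return sol.y[1]  # phosphorylated fraction at the observation times

def residuals(params, t_obs, y_obs):
    return simulate(params, t_obs) - y_obs

# Synthetic "experimental" time course (assumed, for illustration only)
t_obs = np.linspace(0.0, 10.0, 20)
rng = np.random.default_rng(0)
y_obs = simulate([0.8, 0.3], t_obs) + rng.normal(0.0, 0.02, t_obs.size)

# method='lm' selects the Levenberg-Marquardt algorithm
fit = least_squares(residuals, x0=[0.1, 0.1], args=(t_obs, y_obs), method='lm')
print("estimated rate constants:", fit.x)

Fitting the same model to two datasets simultaneously, as in the study, amounts to concatenating the residual vectors of both datasets before minimisation.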
33. Intuitive, reproducible high-throughput molecular dynamics in Galaxy: a tutorial
- Author
-
Simon A. Bray, Tharindu Senapathi, Christopher B. Barnett, and Björn A. Grüning
- Subjects
Galaxy ,Molecular Dynamics ,Reproducible ,Information technology ,T58.5-58.64 ,Chemistry ,QD1-999 - Abstract
This paper is a tutorial developed for the data analysis platform Galaxy. The purpose of Galaxy is to make high-throughput computational data analysis, such as molecular dynamics, a structured, reproducible and transparent process. In this tutorial we focus on three questions: How are protein-ligand systems parameterized for molecular dynamics simulation? What kind of analysis can be carried out on molecular trajectories? How can high-throughput MD be used to study multiple ligands? After finishing, you will have learned about force fields and MD parameterization, know how to conduct MD simulation and analysis for a protein-ligand system, and understand how different molecular interactions contribute to the binding affinity of ligands to the Hsp90 protein.
- Published
- 2020
- Full Text
- View/download PDF
34. Repeatable Perovskite Solar Cells through Fully Automated Spin-Coating and Quenching.
- Author
-
Baumann DO, Laufer F, Roger J, Singh R, Gholipoor M, and Paetzold UW
- Abstract
Enhancing reproducibility and repeatability, as well as facilitating transferability between laboratories, will accelerate progress in many materials domains, and perovskite-based optoelectronics are a prime use case. This study presents fully automated perovskite thin-film processing using a commercial spin-coating robot in an inert atmosphere. We successfully apply this novel processing method to antisolvent quenching, a process that is typically difficult to reproduce and transfer, and achieve exceptional repeatability in comparison to manual processing. Champion perovskite solar cells demonstrate power conversion efficiencies as high as 19.9%, proving the transferability of established manual spin-coating processes to automated setups. Comparison with human experts reveals that the performance is already on par, while automated processing yields improved homogeneity across the substrate surface. This work demonstrates that fully automated perovskite thin-film processing improves repeatability. Such systems have the potential to become a foundation for autonomous optimization and to greatly improve transferability between laboratories.
- Published
- 2024
- Full Text
- View/download PDF
35. Developing a Rational, Optimized Product of Centella asiatica for Examination in Clinical Trials: Real World Challenges
- Author
-
Kirsten M. Wright, Janis McFerrin, Armando Alcázar Magaña, Joanne Roberts, Maya Caruso, Doris Kretzschmar, Jan F. Stevens, Claudia S. Maier, Joseph F. Quinn, and Amala Soumyanath
- Subjects
placebo ,translation ,Centella asiatica ,botanical ,dietary supplement ,reproducible ,Nutrition. Foods and food supply ,TX341-641 - Abstract
Botanical products are frequently sold as dietary supplements and their use by the public is increasingly popular. However, scientific evaluation of their medicinal benefits presents unique challenges due to their chemical complexity, inherent variability, and the involvement of multiple active components and biological targets. Translating away from preclinical models and developing an optimized, reproducible botanical product for use in clinical trials present particular challenges for phytotherapeutic agents compared to single chemical entities. Common deficiencies noted in clinical trials of botanical products include limited characterization of the product tested, inadequate placebo control, and lack of rationale for the type of product tested, dose used, outcome measures or even the study population. Our group has focused on the botanical Centella asiatica due to its reputation for enhancing cognition in Eastern traditional medicine systems. Our preclinical studies on a Centella asiatica water extract (CAW) and its bioactive components strongly support its potential as a phytotherapeutic agent for cognitive decline in aging and Alzheimer's disease through influences on antioxidant response, mitochondrial activity, and synaptic density. Here we describe our robust, scientific approach toward developing a rational phytotherapeutic product based on Centella asiatica for human investigation, addressing multiple factors to optimize its valid clinical evaluation. Specific aspects covered include approaches to identifying an optimal dose range for clinical assessment, design and composition of a dosage form and matching placebo, sourcing appropriate botanical raw material for product manufacture (including the evaluation of active compounds and contaminants), and up-scaling of laboratory extraction methods to available current Good Manufacturing Practice (cGMP) certified industrial facilities. We also address the process of obtaining regulatory approvals to proceed with clinical trials. Our study highlights the complexity of translational research on botanicals and the importance of identifying active compounds and developing sound analytical and bioanalytical methods for their determination in botanical materials and biological samples. Recent Phase I pharmacokinetic studies of our Centella asiatica product in humans (NCT03929250, NCT03937908) have highlighted additional challenges associated with designing botanical bioavailability studies, including specific dietary factors that need to be taken into account.
- Published
- 2022
- Full Text
- View/download PDF
36. Heritability in corpus callosum morphology and its association with tool use skill in chimpanzees (Pan troglodytes): Reproducibility in two genetically isolated populations.
- Author
-
Hopkins, William D., Westerhausen, René, Schapiro, Steve, and Sherwood, Chet C.
- Subjects
- *CORPUS callosum , *WHITE matter (Nerve tissue) , *HERITABILITY , *CEREBRAL hemispheres , *MORPHOLOGY , *SURFACE area , *CHIMPANZEES - Abstract
The corpus callosum (CC) is the major white matter tract connecting the left and right cerebral hemispheres. It has been hypothesized that individual variation in CC morphology is negatively associated with forebrain volume (FBV) and this accounts for variation in behavioral and brain asymmetries as well as sex differences. To test this hypothesis, CC surface area and thickness as well as FBV was quantified in 221 chimpanzees with known pedigrees. CC surface area, thickness and FBV were significantly heritable and phenotypically associated with each other; however, no significant genetic association was found between FBV, CC surface area and thickness. The CC surface area and thickness measures were also found to be significantly heritable in both chimpanzee cohorts as were phenotypic associations with variation in asymmetries in tool use skill, suggesting that these findings are reproducible. Finally, significant phenotypic and genetic associations were found between hand use skill and region‐specific variation in CC surface area and thickness. These findings suggest that common genes may underlie individual differences in chimpanzee tool use skill and interhemispheric connectivity as manifest by variation in surface area and thickness within the anterior region of the CC. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
37. Credible Mendelian Randomization Studies in the Presence of Selection Bias Using Control Exposures
- Author
-
Zhao Yang, C. Mary Schooling, and Man Ki Kwok
- Subjects
causal estimates ,control exposures ,Mendelian randomization ,reproducible ,selection bias ,Genetics ,QH426-470 - Abstract
Selection bias is increasingly acknowledged as a limitation of Mendelian randomization (MR). However, few methods exist to assess this issue. We focus on two plausible causal structures relevant to MR studies and illustrate the data-generating process underlying selection bias via simulation studies. We conceptualize the use of control exposures to validate MR estimates derived from selected samples by detecting potential selection bias and reproducing the exposure–outcome association of primary interest based on subject matter knowledge. We discuss the criteria for choosing the control exposures. We apply the proposal in an MR study investigating the potential effect of higher transferrin on stroke (including ischemic and cardioembolic stroke) using transferrin saturation and iron status as control exposures. Theoretically, selection bias affects associations of genetic instruments with the outcome in selected samples, violating the exclusion-restriction assumption and distorting MR estimates. Our applied example showing inconsistent effects of genetically predicted higher transferrin and higher transferrin saturation on stroke suggests potential selection bias. Furthermore, the expected associations of genetically predicted higher iron status with stroke and longevity indicate no systematic selection bias. The routine use of control exposures in MR studies provides a valuable tool to validate estimated causal effects. As in the applied example, an antagonist, decoy, or exposure with similar biological activity to the exposure of primary interest, which shares the same potential sources of selection bias as the exposure–outcome association, is suggested as the control exposure. An additional or a validated control exposure with a well-established association with the outcome is also recommended to explore possible systematic selection bias.
- Published
- 2021
- Full Text
- View/download PDF
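The distortion that selection can introduce into an MR estimate, as described above, can be illustrated with a small simulation. The data-generating process below is generic and hypothetical (it is not the authors' model or data): a valid genetic instrument G, an exposure X, an outcome Y, and selection that depends on both X and Y, which biases the Wald-ratio estimate in the selected sample.

# Minimal sketch of selection bias in Mendelian randomization (hypothetical
# data-generating process; not the study's data or models).
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
true_effect = 0.5

g = rng.binomial(2, 0.3, n)                    # genetic instrument (allele count)
x = 0.4 * g + rng.normal(0.0, 1.0, n)          # exposure partly driven by G
y = true_effect * x + rng.normal(0.0, 1.0, n)  # outcome driven by the exposure

def wald_ratio(g, x, y):
    # MR estimate = cov(G, Y) / cov(G, X)
    return np.cov(g, y)[0, 1] / np.cov(g, x)[0, 1]

print("full sample estimate:    ", round(wald_ratio(g, x, y), 3))

# Selection into the analysed sample depends on both exposure and outcome,
# which opens a collider path and distorts the estimate among the selected.
s = (x + y + rng.normal(0.0, 1.0, n)) > 1.0
print("selected sample estimate:", round(wald_ratio(g[s], x[s], y[s]), 3))

A control exposure subject to the same selection mechanism would show a similarly distorted estimate, which is what the proposed validation with control exposures exploits.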
38. Credible Mendelian Randomization Studies in the Presence of Selection Bias Using Control Exposures.
- Author
-
Yang, Zhao, Schooling, C. Mary, and Kwok, Man Ki
- Subjects
ISCHEMIC stroke ,TRANSFERRIN - Abstract
Selection bias is increasingly acknowledged as a limitation of Mendelian randomization (MR). However, few methods exist to assess this issue. We focus on two plausible causal structures relevant to MR studies and illustrate the data-generating process underlying selection bias via simulation studies. We conceptualize the use of control exposures to validate MR estimates derived from selected samples by detecting potential selection bias and reproducing the exposure–outcome association of primary interest based on subject matter knowledge. We discuss the criteria for choosing the control exposures. We apply the proposal in an MR study investigating the potential effect of higher transferrin on stroke (including ischemic and cardioembolic stroke) using transferrin saturation and iron status as control exposures. Theoretically, selection bias affects associations of genetic instruments with the outcome in selected samples, violating the exclusion-restriction assumption and distorting MR estimates. Our applied example showing inconsistent effects of genetically predicted higher transferrin and higher transferrin saturation on stroke suggests potential selection bias. Furthermore, the expected associations of genetically predicted higher iron status with stroke and longevity indicate no systematic selection bias. The routine use of control exposures in MR studies provides a valuable tool to validate estimated causal effects. As in the applied example, an antagonist, decoy, or exposure with similar biological activity to the exposure of primary interest, which shares the same potential sources of selection bias as the exposure–outcome association, is suggested as the control exposure. An additional or a validated control exposure with a well-established association with the outcome is also recommended to explore possible systematic selection bias. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
39. Cactus: A user-friendly and reproducible ATAC-Seq and mRNA-Seq analysis pipeline for data preprocessing, differential analysis, and enrichment analysis.
- Author
-
Salignon, Jérôme, Millan-Ariño, Lluís, Garcia, Maxime U., and Riedel, Christian G.
- Subjects
- *CACTUS , *DATA analysis , *NUCLEOTIDE sequencing , *DNA data banks , *BINDING sites , *GENE expression , *RESEARCH personnel , *ONTOLOGIES (Information retrieval) - Abstract
The ever-decreasing cost of Next-Generation Sequencing coupled with the emergence of efficient and reproducible analysis pipelines has rendered genomic methods more accessible. However, downstream analyses are basic or missing in most workflows, creating a significant barrier for non-bioinformaticians. To help close this gap, we developed Cactus, an end-to-end pipeline for analyzing ATAC-Seq and mRNA-Seq data, either separately or jointly. Its Nextflow-, container-, and virtual environment-based architecture ensures efficient and reproducible analyses. Cactus preprocesses raw reads, conducts differential analyses between conditions, and performs enrichment analyses in various databases, including DNA-binding motifs, ChIP-Seq binding sites, chromatin states, and ontologies. We demonstrate the utility of Cactus in a multi-modal and multi-species case study and by showcasing its unique capabilities compared to other ATAC-Seq pipelines. In conclusion, Cactus can assist researchers in gaining comprehensive insights from chromatin accessibility and gene expression data in a quick, user-friendly, and reproducible manner. • Cactus is a new pipeline for comprehensive ATAC-Seq and mRNA-Seq data analysis. • The pipeline architecture ensures efficient, consistent, and reproducible analysis. • Preprocesses raw reads, conducts differential analyses, and performs enrichment analyses. • Demonstrates robust multi-modal and multi-species applicability in a case study. • Offers unique capabilities over other pipelines, enhancing biological insights. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. The varying openness of digital open science tools [version 2; peer review: 1 approved, 1 approved with reservations]
- Author
-
Louise Bezuidenhout and Johanna Havemann
- Subjects
Research Article ,Articles ,Open Science ,digital ,reproducible ,low/middle-income countries - Abstract
Background: Digital tools that support open science practices play a key role in the seamless accumulation, archiving and dissemination of scholarly data, outcomes and conclusions. Despite their integration into open science practices, the provenance and design of these digital tools are rarely explicitly scrutinized. This means that influential factors, such as the funding models of the parent organizations, their geographic location, and the dependency on digital infrastructures, are rarely considered. Suggestions from literature and anecdotal evidence already draw attention to the impact of these factors, and raise the question of whether the open science ecosystem can realise the aspiration to become a truly “unlimited digital commons” in its current structure. Methods: In an online research approach, we compiled and analysed the geolocation, terms and conditions as well as funding models of 242 digital tools increasingly being used by researchers in various disciplines. Results: Our findings indicate that design decisions and restrictions are biased towards researchers in North American and European scholarly communities. In order to make the future open science ecosystem inclusive and operable for researchers in all world regions including Africa, Latin America, Asia and Oceania, researchers from these regions should be actively included in design decision processes. Conclusions: Digital open science tools carry the promise of enabling collaboration across disciplines, world regions and language groups through responsive design. We therefore encourage long-term funding mechanisms and ethnically as well as culturally inclusive approaches serving local prerequisites and conditions to tool design and construction, allowing a globally connected digital research infrastructure to evolve in a regionally balanced manner.
- Published
- 2021
- Full Text
- View/download PDF
41. The Impact of Visual Impairment in Stroke (IVIS) Study – Evidence of Reproducibility.
- Author
-
Rowe, Fiona J. and Hepworth, Lauren R.
- Subjects
- *VISION disorders , *ISCHEMIC stroke , *STROKE patients , *VISUAL perception , *EYE movements - Abstract
Reporting generalisable data across stroke populations is important. We aimed to evaluate the Impact of Visual Impairment after Stroke (IVIS) visual assessment protocol in a different UK geographical area. This was a prospective, single-centre acute stroke unit study (the IVIS-extension (IVIS-e) study), compared with a multi-centre acute stroke cohort (the IVIS study). Orthoptists reviewed all stroke survivors with a standardised assessment of visual acuity, visual fields, ocular alignment, ocular motility, visual inattention and visual perception, including a standardised follow-up strategy. In total, 123 stroke survivors underwent visual screening: 42% women, 58% men, mean age 63.6 years and 86% ischaemic strokes. Ethnicity was 68.3% white British, with 28.5% Pakistani, Indian, Caribbean, Bangladeshi, Black or Chinese. Two died and 28 could not be assessed. Of the 93 remaining, 10 stroke survivors (10.8%) had a normal visual assessment and 83 (89.2%) had visual impairments detected. Fifty-seven stroke survivors were assessed at their first orthoptic visit within 3 days of stroke onset; the remainder were assessed at subsequent orthoptic visits to the stroke unit. The visual profile was similar across the IVIS-e and original IVIS cohorts for most types of visual impairment, although, overall, more visual impairment was detected in IVIS-e. Differences between the cohorts were primarily related to the lower age and smaller proportion of white British participants in the IVIS-e cohort. This likely reflects the differing population demographics of the two geographical areas. Further roll-out of the IVIS assessment protocol to other regions and countries would improve detection of post-stroke visual impairment. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
42. Urogenital atrophy: The 'unknown factors' challenging current practice.
- Author
-
Briggs, Paula and Hapangama, Dharani K
- Subjects
ATROPHY ,MEDICAL research ,HOT flashes ,MEDICAL personnel ,BACTERIAL vaginitis ,DIAGNOSIS ,SYMPTOMS ,VULVA ,SYNDROMES ,VAGINA ,MENOPAUSE - Abstract
Urogenital atrophy occurs as a result of the effect of estrogen deficiency on the tissue quality in the vulva, vagina, urethra and bladder. It is a common consequence of the menopause, with possibly up to 80% of women experiencing symptoms. Despite a number of different diagnostic methods, there is no validated objective method by which to confirm the diagnosis in clinical practice and research settings. Education, for women and clinicians, is called for to support diagnosis and treatment. However, before this can be of global benefit, development of an accessible and reproducible diagnostic test is required. Current assessment methods include routine history and clinical examination, with the clinician's opinion based on their subjective observations. A vaginal smear to assess the ratio of superficial to parabasal cells and measurement of the pH of the vaginal secretions is more commonly used in research settings. A number of formulae have been postulated to facilitate the diagnosis including the Vaginal Health Index, the Vulval Health Index, the Genitourinary Syndrome of the Menopause assessment tool, the Genital Health Clinical Evaluation and vaginal biopsy and assessment of the vaginal microbiome. However, none of these potential methods of assessment has been validated. This article focuses on what we do not know about urogenital atrophy including the prevalence, the most appropriate terminology, aetiology, pathogenesis and the most objective and reproducible method of assessment. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
43. The varying openness of digital open science tools [version 1; peer review: 1 approved with reservations, 1 not approved]
- Author
-
Louise Bezuidenhout and Johanna Havemann
- Subjects
Research Article ,Articles ,Open Science ,digital ,reproducible ,low/middle-income countries - Abstract
Background: Digital tools that support open science practices play a key role in the seamless accumulation, archiving and dissemination of scholarly data, outcomes and conclusions. Despite their integration into open science practices, the provenance and design of these digital tools are rarely explicitly scrutinized. This means that influential factors, such as the funding models of the parent organizations, their geographic location, and the dependency on digital infrastructures, are rarely considered. Suggestions from literature and anecdotal evidence already draw attention to the impact of these factors, and raise the question of whether the open science ecosystem can realise the aspiration to become a truly “unlimited digital commons” in its current structure. Methods: In an online research approach, we compiled and analysed the geolocation, terms and conditions as well as funding models of 242 digital tools increasingly being used by researchers in various disciplines. Results: Our findings indicate that design decisions and restrictions are biased towards researchers in North American and European scholarly communities. In order to make the future open science ecosystem inclusive and operable for researchers in all world regions including Africa, Latin America, Asia and Oceania, researchers from these regions should be actively included in design decision processes. Conclusions: Digital open science tools carry the promise of enabling collaboration across disciplines, world regions and language groups through responsive design. We therefore encourage long-term funding mechanisms and ethnically as well as culturally inclusive approaches serving local prerequisites and conditions to tool design and construction, allowing a globally connected digital research infrastructure to evolve in a regionally balanced manner.
- Published
- 2020
- Full Text
- View/download PDF
44. The importance of open science for biological assessment of aquatic environments
- Author
-
Marcus W. Beck, Casey O’Hara, Julia S. Stewart Lowndes, Raphael D. Mazor, Susanna Theroux, David J. Gillett, Belize Lane, and Gregory Gearheart
- Subjects
Applied science ,Bioassessment ,Open data ,Open science ,Reproducible ,Medicine ,Biology (General) ,QH301-705.5 - Abstract
Open science principles that seek to improve science can effectively bridge the gap between researchers and environmental managers. However, widespread adoption has yet to gain traction for the development and application of bioassessment products. At the core of this philosophy is the concept that research should be reproducible and transparent, in addition to having long-term value through effective data preservation and sharing. In this article, we review core open science concepts that have recently been adopted in the ecological sciences and emphasize how adoption can benefit the field of bioassessment for both prescriptive condition assessments and proactive applications that inform environmental management. An example from the state of California demonstrates effective adoption of open science principles through data stewardship, reproducible research, and engagement of stakeholders with multimedia applications. We also discuss technical, sociocultural, and institutional challenges for adopting open science, including practical approaches for overcoming these hurdles in bioassessment applications.
- Published
- 2020
- Full Text
- View/download PDF
45. Preparation of Hollow Polyaniline Micro/Nanospheres and Their Removal Capacity of Cr (VI) from Wastewater
- Author
-
Honge Wu, Qing Wang, Guang Tao Fei, Shao Hui Xu, Xiao Guo, and Li De Zhang
- Subjects
Reproducible ,Toxic Cr (VI) ,Removal ,Wastewater ,Hollow polyaniline micro/nanospheres ,Materials of engineering and construction. Mechanics of materials ,TA401-492 - Abstract
Hollow polyaniline (PANI) micro/nanospheres are obtained through a simple monomer polymerization in alkaline solution with Triton X-100 micelles as soft templates. The hollow PANI micro/nanospheres demonstrate rapid and effective removal of chromium (VI) (Cr (VI)) over a wide pH range, and the maximum removal capacity reaches 127.88 mg/g at pH 3. After treatment with acid, the used hollow PANI micro/nanospheres retain a similar capacity for removing Cr (VI) from wastewater.
- Published
- 2018
- Full Text
- View/download PDF
46. Testing the Fitness Effects of Kung Fu Gymnastics and the Reproducibility of These Effects.
- Author
-
Meng Fan, Hiroto Takizawa, Naoyuki Yamashita, Ryo Ito, ChengZhong Zhang, and Hiroki Matsuoka
- Subjects
GYMNASTICS ,CHINESE martial arts ,MUSCLE strength ,GRIP strength ,BROAD jump ,PHYSICAL fitness ,PHYSICAL fitness testing - Abstract
The purpose of this study was to test the fitness effects of Kung Fu Gymnastics and the reproducibility of these effects. A 6-wk program of Kung Fu Gymnastics that consisted of 12 one-hour sessions held 2 times·wk⁻¹ was conducted in 2015. The fitness program had a total of 16 attendees. The same program was conducted in 2016 with 11 other attendees. Five fitness indicators were measured at pre- and post-intervention: (a) grip strength; (b) standing long jump; (c) modified sit-and-reach; (d) side steps; and (e) sit-ups. Pre-post changes were observed in the scores for standing long jump and side steps, which were significantly higher at post-intervention than at pre-intervention. Further, none of the fitness items exhibited an inter-group difference in the rate of pre-post change. The results indicate that the 6-wk Kung Fu Gymnastics program, conducted under the same conditions for both groups, resulted in similar benefits for each group in terms of lower-limb instantaneous muscular strength and agility. These findings confirm that the fitness effects of the intervention are reproducible. [ABSTRACT FROM AUTHOR]
- Published
- 2021
47. The importance of open science for biological assessment of aquatic environments.
- Author
-
Beck, Marcus W., O'Hara, Casey, Lowndes, Julia S. Stewart, Mazor, Raphael D., Theroux, Susanna, Gillett, David J., Lane, Belize, and Gearheart, Gregory
- Subjects
LIFE sciences ,ECOLOGY ,ENVIRONMENTAL management ,APPLIED sciences ,REPRODUCIBLE research - Abstract
Open science principles that seek to improve science can effectively bridge the gap between researchers and environmental managers. However, widespread adoption has yet to gain traction for the development and application of bioassessment products. At the core of this philosophy is the concept that research should be reproducible and transparent, in addition to having long-term value through effective data preservation and sharing. In this article, we review core open science concepts that have recently been adopted in the ecological sciences and emphasize how adoption can benefit the field of bioassessment for both prescriptive condition assessments and proactive applications that inform environmental management. An example from the state of California demonstrates effective adoption of open science principles through data stewardship, reproducible research, and engagement of stakeholders with multimedia applications. We also discuss technical, sociocultural, and institutional challenges for adopting open science, including practical approaches for overcoming these hurdles in bioassessment applications. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
48. Portable and platform‐independent MR pulse sequence programs.
- Author
-
Cordes, Cristoffer, Konstandin, Simon, Porter, David, and Günther, Matthias
- Subjects
FLOWGRAPHS ,RAPID prototyping ,DATA structures ,HIGH performance computing - Abstract
Purpose: To introduce a new sequence description format for vendor‐independent MR sequences that include all calculation logic portably. To introduce a new MRI sequence development approach which utilizes flexibly reusable modules. Methods: The proposed sequence description contains a sequence module hierarchy for loop and group logic, which is enhanced by a novel strategy for performing efficient parameter and pulse shape calculation. These calculations are powered by a flow graph structure. By using the flow graph, all calculations are performed with no redundancy and without requiring preprocessing. The generation of this interpretable structure is a separate step that combines MRI techniques while actively considering their context. The driver interface is slim and highly flexible through scripting support. The sequences do not require any vendor‐specific compiling or processing step. A vendor‐independent frontend for sequence configuration can be used. Tests that ensure physical feasibility of the sequence are integrated into the calculation logic. Results: The framework was used to define a set of standard sequences. Resulting images were compared to respective images acquired with sequences provided by the device manufacturer. Images were acquired using a standard commercial MRI system. Conclusions: The approach produces configurable, vendor‐independent sequences, whose configurability enables rapid prototyping. The transparent data structure simplifies the process of sharing reproducible sequences, modules, and techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
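The flow-graph idea described above, where every derived parameter is computed exactly once from its dependencies without a separate preprocessing pass, can be sketched as a small memoized dependency graph. The node names and formulas below (readout bandwidth, TE, TR) are hypothetical placeholders, not the framework's actual data structures or timing model.

# Minimal sketch of a memoized parameter flow graph (hypothetical node names and
# formulas; not the described framework's actual data structures).
class FlowGraph:
    def __init__(self):
        self._nodes = {}   # name -> (function, dependency names)
        self._cache = {}   # name -> computed value

    def add(self, name, func, deps=()):
        self._nodes[name] = (func, deps)

    def value(self, name):
        # Each node is evaluated at most once; results are reused downstream.
        if name not in self._cache:
            func, deps = self._nodes[name]
            self._cache[name] = func(*(self.value(d) for d in deps))
        return self._cache[name]

g = FlowGraph()
g.add("readout_bandwidth", lambda: 50_000.0)      # Hz (assumed value)
g.add("matrix_size", lambda: 256)
g.add("readout_duration", lambda bw, n: n / bw,
      deps=("readout_bandwidth", "matrix_size"))
g.add("te", lambda ro: 0.003 + ro / 2, deps=("readout_duration",))
g.add("tr", lambda te: te + 0.010, deps=("te",))

print("TE (s):", g.value("te"))
print("TR (s):", g.value("tr"))   # reuses the cached TE value

Because each node caches its value, downstream parameters such as TR reuse the already computed TE rather than recalculating it, which mirrors the redundancy-free, preprocessing-free evaluation the description refers to.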
49. How to Design Computer Security Experiments
- Author
-
Peisert, Sean and Bishop, Matt
- Subjects
OS and Networks ,computer ,security ,scientific method ,falsifiable ,control ,variable ,experiment ,reproducible ,measurable - Abstract
In this paper, we discuss the scientific method and how it can be applied to computer security experiments. We reiterate a number of general scientific principles, such as falsifiable hypotheses, scientific controls, reproducible results, and data quality.
- Published
- 2007
50. The Development of Reproducible and Selective Uric Acid Biosensor by Using Electrodeposited Polytyramine as Matrix Polymer
- Author
-
Manihar Situmorang and Isnaini Nurwahyuni
- Subjects
uric acid biosensor ,electrodeposited polytyramine ,polymer matrix ,reproducible ,selective ,Chemistry ,QD1-999 - Abstract
A versatile method for the construction of a reproducible and highly selective uric acid biosensor is explained. Electrodeposited polytyramine is used as the biosensor matrix because of its compatibility with immobilizing the enzyme uricase (uric oxidase) in the membrane electrode. Precise control over the charge passed during deposition of polytyramine allows concomitant control over the thickness of the enzyme layers deposited onto the surface of the electrode. The uric acid biosensor showed a sensitive response to uric acid, with a linear calibration curve over the concentration range of 0.1–2.5 mM, a slope of 0.066 µA mM⁻¹, and a detection limit of 0.01 mM uric acid (S/N = 3). The biosensor showed excellent reproducibility: the variation between response curves for uric acid ranged from an RSD of 1% at low concentrations up to an RSD of 6% at saturating concentrations. The biosensor is free from common interferences, showed good stability, and proved applicable to the determination of uric acid in real samples. Analyses of uric acid in reference standard serum samples by the biosensor method all agreed with the values given by the supplier. Standard samples were also analyzed independently by two methods, the present biosensor method and the standard UV-Vis spectrophotometric method, giving a correlation coefficient of 0.994. This result confirms that the biosensor method meets the rigorous demands expected for uric acid determination in real samples.
- Published
- 2017
- Full Text
- View/download PDF
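As a worked example of the reported calibration (slope 0.066 µA mM⁻¹ over a 0.1–2.5 mM linear range, 0.01 mM detection limit), the sketch below converts a baseline-corrected current into a uric acid concentration. It assumes the calibration line passes through the origin after baseline correction, which the abstract does not state, so the intercept handling is illustrative only.

# Minimal sketch: convert a baseline-corrected biosensor current into a uric
# acid concentration using the reported calibration slope (0.066 µA per mM).
# The zero intercept is an assumption made for illustration only.
SLOPE_UA_PER_MM = 0.066       # µA mM⁻¹ (from the abstract)
LINEAR_RANGE_MM = (0.1, 2.5)  # mM
LOD_MM = 0.01                 # mM

def uric_acid_concentration(current_ua: float) -> float:
    """Return the concentration in mM for a baseline-corrected current in µA."""
    conc = current_ua / SLOPE_UA_PER_MM
    if conc < LOD_MM:
        raise ValueError("response below the reported detection limit")
    if not (LINEAR_RANGE_MM[0] <= conc <= LINEAR_RANGE_MM[1]):
        raise ValueError("concentration outside the reported linear range")
    return conc

# e.g. a 0.132 µA baseline-corrected response corresponds to about 2.0 mM uric acid
print(uric_acid_concentration(0.132))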