98 results on '"C. MARKUS"'
Search Results
2. ANTAGONISM IS THE PREDOMINANT EFFECT OF HERBICIDE MIXTURES USED FOR IMIDAZOLINONE-RESISTANT BARNYARDGRASS (Echinochloa crus-galli) CONTROL
- Author
-
F. O. MATZENBACHER, A. KALSING, G. DALAZEN, C. MARKUS, and A. MEROTTO JR
- Subjects
ALS inhibitors, irrigated rice, interaction, synergism, Biology (General), QH301-705.5, Botany, QK1-989 - Abstract
Herbicide mixtures are often used without adequate knowledge of their effects on major target weeds. The objective of this study was to evaluate the effects of different herbicide mixtures used in irrigated rice in order to establish adequate combinations for the prevention and management of herbicide resistance in barnyardgrass (Echinochloa crus-galli). Three experiments were performed under field conditions with all major post-emergent herbicides used in irrigated rice in Brazil. The first experiment was performed with barnyardgrass resistant to imidazolinone herbicides and herbicides applied at label rates. The second and third experiments were performed with barnyardgrass resistant and susceptible to imidazolinone herbicides applied at 50 or 75% of the label rates. Additive, synergistic and antagonistic effects occurred in 18, 18 and 64%, respectively, of the 50 different combinations of herbicides and rates evaluated. In general, mixtures of ACCase inhibitors with ALS inhibitors, quinclorac, clomazone + propanil or thiobencarb resulted in antagonism. Synergistic mixtures were found for clomazone with propanil + thiobencarb, profoxydim with cyhalofop-butyl or clomazone, and quinclorac with imazapyr + imazapic, bispyribac-sodium or cyhalofop-butyl. Mixtures of quinclorac with profoxydim were antagonistic. Rice grain yield varied according to the efficiency of weed control. Several mixtures were effective for imidazolinone-resistant barnyardgrass control.
- Published
- 2015
- Full Text
- View/download PDF
3. Long-range correlations and stride pattern variability in recreational and elite distance runners during a prolonged run
- Author
-
Brahms, C. Markus, Zhao, Yang, Gerhard, David, and Barden, John M.
- Published
- 2022
- Full Text
- View/download PDF
4. Stride length determination during overground running using a single foot-mounted inertial measurement unit
- Author
-
Brahms, C. Markus, Zhao, Yang, Gerhard, David, and Barden, John M.
- Published
- 2018
- Full Text
- View/download PDF
5. [Pfalz Werla - The excavation of Gate II from the 10th century and its visualization in 2012.]
- Author
-
Blaich, Michael, Geschwinde, Christoph, and Lowes, C. Markus
- Published
- 2013
6. [Process management in axillary plexus blocks.]
- Author
-
U. Schwemmer, A. Schleppers, C. Markus, M. Kredel, S. Kirschner, and N. Roewer
- Abstract
Under the premise of a case-based reimbursement system, the analysis of medical services is becoming increasingly important. Both the quality of treatment and the time it requires play an important role. Anaesthetic procedures demand a high degree of quality and safety and are, moreover, very personnel-intensive. In regional anaesthesia, new techniques, such as the use of high-resolution ultrasound for nerve blocks, suggest potential time savings together with improved quality. The aim of the present study was to analyse the timing and outcomes of ultrasound guidance versus nerve stimulation for axillary plexus blocks. Over a period of 9 months, ultrasound-guided plexus anaesthesia (Sono) and the nerve stimulation method (NStim) were examined in hand surgery patients on the basis of the anaesthesia records. Only cases in which axillary plexus blocks had been performed were included. Mepivacaine 1.5% was used as the local anaesthetic. Incomplete records were excluded. In total, 130 hand surgery patients met the study criteria. Success rates and all time points, together with the resulting workflows, were recorded. All data were stored in an Excel spreadsheet and analysed statistically. The results show a significantly higher success rate of plexus blocks in the ultrasound-guided group (98.2% Sono vs. 83.1% NStim). Release for surgery occurred 15 min earlier in the Sono group (5 min vs. 20 min; p<0.013) and surgery started 20 min earlier (25 min vs. 45 min, p<0.001). Furthermore, the duration of anaesthesia was significantly shorter (85 min vs. 120 min, p<0.001) and the need for postoperative monitoring lower (5.4% vs. 32.4%, p<0.001). 
The study data show that axillary brachial plexus block can be markedly improved, in terms of both quality and time, by using ultrasound to identify the nerves. [ABSTRACT FROM AUTHOR]
- Published
- 2006
7. Effect of creatine supplementation and drop-set resistance training in untrained aging adults.
- Author
-
Johannsmeyer, Sarah, Candow, Darren G., Brahms, C. Markus, Michel, Deborah, and Zello, Gordon A.
- Subjects
- *
PHYSIOLOGICAL effects of creatine, *RESISTANCE training, *PHYSIOLOGICAL aspects of aging, *MUSCLE protein metabolism, *PHYSICAL fitness - Abstract
Objective: To investigate the effects of creatine supplementation and drop-set resistance training in untrained aging adults. Participants were randomized to one of two groups: Creatine (CR: n = 14, 7 females, 7 males; 58.0 ± 3.0 yrs, 0.1 g/kg/day of creatine + 0.1 g/kg/day of maltodextrin) or Placebo (PLA: n = 17, 7 females, 10 males; age: 57.6 ± 5.0 yrs, 0.2 g/kg/day of maltodextrin) during 12 weeks of drop-set resistance training (3 days/week; 2 sets of leg press, chest press, hack squat and lat pull-down exercises performed to muscle fatigue at 80% baseline 1-repetition maximum [1-RM] immediately followed by repetitions to muscle fatigue at 30% baseline 1-RM). Methods: Prior to and following training and supplementation, assessments were made for body composition, muscle strength, muscle endurance, tasks of functionality, muscle protein catabolism and diet. Results: Drop-set resistance training improved muscle mass, muscle strength, muscle endurance and tasks of functionality (p < 0.05). The addition of creatine to drop-set resistance training significantly increased body mass (p = 0.002) and muscle mass (p = 0.007) compared to placebo. Males on creatine increased muscle strength (lat pull-down only) to a greater extent than females on creatine (p = 0.005). Creatine enabled males to resistance train at a greater capacity over time compared to males on placebo (p = 0.049) and females on creatine (p = 0.012). Males on creatine (p = 0.019) and females on placebo (p = 0.014) decreased 3-MH compared to females on creatine. Conclusions: The addition of creatine to drop-set resistance training augments the gains in muscle mass from resistance training alone. Creatine is more effective in untrained aging males compared to untrained aging females. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
8. Perioperative Enhanced Recovery Concepts Significantly Improve Postoperative Outcome in Patients with Crohn's Disease.
- Author
-
Kelm M, Wagner L, Widder A, Pistorius R, Wagner JC, Schlegel N, Markus C, Meybohm P, Germer CT, Schwenk W, and Flemming S
- Subjects
- Humans, Female, Male, Prospective Studies, Adult, Middle Aged, Patient Readmission statistics & numerical data, Perioperative Care methods, Crohn Disease surgery, Crohn Disease rehabilitation, Postoperative Complications prevention & control, Postoperative Complications epidemiology, Postoperative Complications etiology, Enhanced Recovery After Surgery, Length of Stay statistics & numerical data
- Abstract
Background and Aims: Despite recent advancements in medical and surgical techniques in patients suffering from Crohn's disease [CD], postoperative morbidity remains relevant due to a long-standing, non-curable disease burden. As demonstrated for oncological patients, perioperative enhanced recovery concepts provide great potential to improve postoperative outcome. However, robust evidence about the effect of perioperative enhanced recovery concepts in the specific cohort of CD patients is lacking., Methods: In a prospective, single-centre study, all patients receiving ileocaecal resection due to CD between 2020 and 2023 were included. A specific perioperative enhanced recovery concept [ERC] was implemented and patients were divided into two groups [before and after implementation]. The primary outcome focused on postoperative complications as measured by the Comprehensive Complication Index [CCI]; secondary endpoints were severe complications, length of hospital stay, and rates of re-admission., Results: Of 83 patients analysed, 33 patients participated in the enhanced recovery programme [post-ERC]. Whereas patient characteristics were comparable between both groups, ERC resulted in significantly decreased rates of overall and severe postoperative complications [CCI: 21.4 versus 8.4, p = 0.0036; Clavien-Dindo > 2: 38% versus 3.1%, p = 0.0002]. Additionally, post-ERC patients were ready for discharge earlier [5 days versus 6.5 days, p = 0.001] and rates of re-admission were significantly lower [3.1% versus 20%, p = 0.03]. In a multivariate analysis, the recovery concept was identified as an independent factor reducing severe postoperative complications [p = 0.019]., Conclusion: A specific perioperative enhanced recovery concept significantly improves the postoperative outcome of patients suffering from Crohn's disease., (© The Author(s) 2024. Published by Oxford University Press on behalf of European Crohn's and Colitis Organisation. All rights reserved.)
- Published
- 2024
- Full Text
- View/download PDF
9. Measuring bile acids: Are we all talking the same language?
- Author
-
Markus C and Hague WB
- Abstract
In this paper, we discuss the Bile Acid Comparison and Harmonisation project, a sub-study of the Trial of URsodeoxycholic acid vs RIFampicin in early-onset severe Intrahepatic Cholestasis of pregnancy, giving an overview of the current state of affairs for total bile acid measurements., Competing Interests: The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article., (© The Author(s) 2024.)
- Published
- 2024
- Full Text
- View/download PDF
10. Linearity assessment: deviation from linearity and residual of linear regression approaches.
- Author
-
Lim CY, Lee X, Tran MTC, Markus C, Loh TP, Ho CS, Theodorsson E, Greaves RF, Cooke BR, and Zakaria R
- Abstract
In this computer simulation study, we examine four different statistical approaches to linearity assessment: two variants of deviation from linearity (individual (IDL) and averaged (ADL)), along with the detection capabilities of residuals of linear regression (individual and averaged). From the results of the simulation, the following broad suggestions are provided to laboratory practitioners when performing linearity assessment. High imprecision can challenge linearity investigations by producing a high false positive rate or a low power of detection. Therefore, the imprecision of the measurement procedure should be considered when interpreting linearity assessment results, and in the presence of high imprecision those results should be interpreted with caution. The different linearity assessment approaches examined in this study performed well under different analytical scenarios; for optimal outcomes, a considered and tailored study design should be implemented. With the exception of specific scenarios, both ADL and IDL methods were suboptimal for the assessment of linearity compared with the residual-based approaches. When imprecision is low (3%), the averaged residual of linear regression with triplicate measurements and a non-linearity acceptance limit of 5% produces <5% false positive rates and a high power (>70%) for detection of non-linearity across different types and degrees of non-linearity. Departures from linearity are difficult to identify in practice, and enhanced methods of detection need development., (© 2024 Walter de Gruyter GmbH, Berlin/Boston.)
- Published
- 2024
- Full Text
- View/download PDF
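The averaged-residual approach described in entry 10 can be sketched in a few lines. This is a simplified illustration, not the authors' implementation: the 5% acceptance limit is taken from the abstract, but the percentage scaling, the use of ordinary least squares, and the function name are assumptions.

```python
import numpy as np

def averaged_residual_check(levels, replicate_means, limit_pct=5.0):
    """Fit an ordinary least-squares line through the mean result at each
    concentration level (e.g. the mean of triplicates) and flag
    non-linearity when any averaged residual exceeds `limit_pct`
    percent of its fitted value."""
    x = np.asarray(levels, dtype=float)
    y = np.asarray(replicate_means, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)       # OLS fit
    fitted = slope * x + intercept
    resid_pct = 100.0 * (y - fitted) / fitted    # residuals as % of fitted value
    return bool(np.max(np.abs(resid_pct)) > limit_pct)

levels = [1, 2, 3, 4, 5]
linear_means = [1.0, 2.0, 3.0, 4.0, 5.0]   # perfectly linear response
curved_means = [1.0, 2.0, 3.0, 4.0, 8.0]   # response curving upward at the top
```

With noise-free data the linear response passes and the curved one is flagged; as the abstract stresses, real-world imprecision makes the decision far less clear-cut.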
11. Reply to "The trade-off between bias and imprecision: debunking the myth of bias as the primary culprit in patient result misclassification".
- Author
-
Loh TP, Lim CY, and Markus C
- Subjects
- Humans, Diagnostic Errors, Bias
- Published
- 2024
- Full Text
- View/download PDF
12. [Personalized treatment of pheochromocytoma].
- Author
-
Schlegel N, Meir M, Reibetanz J, Markus C, Wiegering A, and Fassnacht M
- Subjects
- Humans, Precision Medicine, Adrenal Glands pathology, Metanephrine, Pheochromocytoma diagnosis, Pheochromocytoma genetics, Pheochromocytoma surgery, Adrenal Gland Neoplasms diagnosis, Adrenal Gland Neoplasms genetics, Adrenal Gland Neoplasms surgery
- Abstract
Background: Pheochromocytoma is a rare but severe disease of the adrenal glands. The aim of this study is to present and discuss recent developments in the diagnosis and treatment of pheochromocytoma., Material and Methods: A narrative review based on the most recent literature is presented., Results and Discussion: Pheochromocytomas account for about 5% of incidentally discovered adrenal tumors. The classical symptomatic triad of headaches, sweating, and palpitations occurs in only about 20% of patients, while almost all patients show at least 1 of these symptoms. To diagnose pheochromocytoma, measurement of free plasma metanephrines or, alternatively, of fractionated metanephrines in a 24-h urine collection is required as a first step. In the second step an imaging procedure, computed tomography (CT) or magnetic resonance imaging (MRI), is performed to localize the adrenal tumor. Functional imaging is also recommended to preoperatively detect potential metastases. Genetic testing should always be offered during the course of treatment, as 30-40% of pheochromocytomas are associated with genetic mutations. The dogma of preoperative alpha blockade is increasingly being questioned and has been controversially discussed in recent years. Minimally invasive removal of the adrenal tumor is the standard surgical procedure to cure patients with pheochromocytoma. The transabdominal and retroperitoneal laparoscopic approaches are considered equivalent; the choice of the minimally invasive procedure depends on the expertise and experience of the surgeon and should be tailored accordingly. Individualized and regular follow-up care is important after surgery., (© 2023. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
13. The LEAP Checklist for Laboratory Evaluation and Analytical Performance Characteristics Reporting of Clinical Measurement Procedures.
- Author
-
Loh TP, Cooke BR, Tran TCM, Markus C, Zakaria R, Ho CS, Theodorsson E, and Greaves RF
- Subjects
- Humans, Reference Standards, Laboratories, Checklist, Clinical Laboratory Services
- Abstract
Reporting a measurement procedure and its analytical performance following method evaluation in a peer-reviewed journal is an important means for clinical laboratory practitioners to share their findings. It also represents an important source of evidence base to help others make informed decisions about their practice. At present, there are significant variations in the information reported in laboratory medicine journal publications describing the analytical performance of measurement procedures. These variations also challenge authors, readers, reviewers, and editors in deciding the quality of a submitted manuscript. The International Federation of Clinical Chemistry and Laboratory Medicine Working Group on Method Evaluation Protocols (IFCC WG-MEP) developed a checklist and recommends its adoption to enable a consistent approach to reporting method evaluation and analytical performance characteristics of measurement procedures in laboratory medicine journals. It is envisioned that the Laboratory Evaluation and Analytical Performance Characteristics (LEAP) checklist will improve the standardisation of journal publications describing method evaluation and analytical performance characteristics, improving the quality of the evidence base that is relied upon by practitioners.
- Published
- 2024
- Full Text
- View/download PDF
14. Impact of analytical imprecision and bias on patient classification.
- Author
-
Loh TP, Markus C, and Lim CY
- Subjects
- Humans, Laboratories, Bias, Patients classification
- Abstract
Objectives: An increase in analytical imprecision and/or the introduction of bias can affect the interpretation of quantitative laboratory results. In this study, we explore the impact of varying assay imprecision and bias introduction on the classification of patients based on fixed thresholds., Methods: Simple spreadsheets (Microsoft Excel) were constructed to simulate conditions of assay deterioration, expressed as coefficient of variation and bias (in percentages). The impact on patient classification was explored based on fixed interpretative limits. A combined matrix of imprecision and bias of 0%, 2%, 4%, 6%, 8%, and 10% (tool 1) as well as 0%, 2%, 5%, 10%, 15%, and 20% (tool 2) was simulated, respectively. The percentage of patients who were reclassified following the addition of simulated imprecision and bias was summarized and presented in tables and graphs., Results: The percentage of patients who were reclassified increased with increasing/decreasing magnitude of imprecision and bias. The impact of imprecision lessens with increasing bias such that at high biases, the bias becomes the dominant cause for reclassification., Conclusions: The spreadsheet tools, available as Supplemental Material, allow laboratories to visualize the impact of additional analytical imprecision and bias on the classification of their patients when applied to locally extracted historical results., (© The Author(s) 2023. Published by Oxford University Press on behalf of American Society for Clinical Pathology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.)
- Published
- 2024
- Full Text
- View/download PDF
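The spreadsheet simulation described in entry 14 can be approximated outside Excel as well. The sketch below is a hypothetical illustration rather than the published tool: it perturbs a set of results with added imprecision (as a CV) and bias (as a percentage) and counts how many cross a fixed interpretative threshold; the analyte distribution and cutoff are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def reclassified_fraction(results, cv_pct, bias_pct, cutoff):
    """Fraction of results whose classification against `cutoff` changes
    after adding simulated bias (%) and extra imprecision (CV%)."""
    perturbed = results * (1 + bias_pct / 100.0)
    perturbed = perturbed + rng.normal(0.0, np.abs(results) * cv_pct / 100.0)
    return float(np.mean((results >= cutoff) != (perturbed >= cutoff)))

# Hypothetical historical results and decision threshold.
baseline = rng.normal(loc=5.5, scale=1.0, size=10_000)
threshold = 7.0

frac_no_error = reclassified_fraction(baseline, cv_pct=0, bias_pct=0, cutoff=threshold)
frac_degraded = reclassified_fraction(baseline, cv_pct=4, bias_pct=10, cutoff=threshold)
```

With no added error nothing is reclassified; as imprecision and bias grow, results near the threshold increasingly flip classification, which is the behaviour the abstract summarises.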
15. The LEAP checklist for Laboratory Evaluation and Analytical Performance Characteristics reporting of clinical measurement procedures.
- Author
-
Loh TP, Cooke BR, Tran TCM, Markus C, Zakaria R, Ho CS, Theodorsson E, and Greaves RF
- Subjects
- Humans, Reference Standards, Laboratories, Laboratories, Clinical, Checklist, Clinical Laboratory Services
- Abstract
Reporting a measurement procedure and its analytical performance following method evaluation in a peer-reviewed journal is an important means for clinical laboratory practitioners to share their findings. It also represents an important source of evidence base to help others make informed decisions about their practice. At present, there are significant variations in the information reported in laboratory medicine journal publications describing the analytical performance of measurement procedures. These variations also challenge authors, readers, reviewers, and editors in deciding the quality of a submitted manuscript. The International Federation of Clinical Chemistry and Laboratory Medicine Working Group on Method Evaluation Protocols (IFCC WG-MEP) developed a checklist and recommends its adoption to enable a consistent approach to reporting method evaluation and analytical performance characteristics of measurement procedures in laboratory medicine journals. It is envisioned that the LEAP checklist will improve the standardisation of journal publications describing method evaluation and analytical performance characteristics, improving the quality of the evidence base that is relied upon by practitioners.
- Published
- 2023
- Full Text
- View/download PDF
16. Advances in internal quality control.
- Author
-
Loh TP, Lim CY, Sethi SK, Tan RZ, and Markus C
- Abstract
Quality control practices in the modern laboratory are the result of significant advances over the many years of the profession. Conventional internal quality control has undergone a philosophical shift from a focus solely on the statistical assessment of the probability of error identification to more recent thinking on the capability of the measurement procedure (e.g. sigma metrics) and, most recently, the risk of harm to the patient (the probability of patient results being affected by an error, or the number of patient results with unacceptable analytical quality). Nonetheless, conventional internal quality control strategies still face significant limitations, such as the lack of (proven) commutability of the material with patient samples, the frequency of episodic testing, and the impact of operational and financial costs, that cannot be overcome by statistical advances. In contrast, patient-based quality control has seen significant developments, including algorithms that improve the detection of specific errors, parameter optimization approaches, systematic validation protocols, and advanced algorithms that require very low numbers of patient results while retaining sensitive error detection. Patient-based quality control will continue to improve with the development of new algorithms that reduce biological noise and improve analytical error detection. It provides continuous and commutable information about the measurement procedure that cannot be easily replicated by conventional internal quality control. Most importantly, the use of patient-based quality control helps laboratories to improve their appreciation of the clinical impact of the laboratory results produced, bringing them closer to the patients. Laboratories are encouraged to implement patient-based quality control processes to overcome the limitations of conventional internal quality control practices. 
Regulatory changes to recognize the capability of patient-based quality approaches, as well as laboratory informatics advances, are required for this tool to be adopted more widely.
- Published
- 2023
- Full Text
- View/download PDF
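One of the simplest patient-based quality control algorithms of the kind surveyed in entry 16 is a moving average of patient results. The sketch below is a generic illustration, not an algorithm from the paper; the truncation limits, window size, and control limits are arbitrary choices for the example.

```python
from collections import deque

def moving_average_flags(results, window=50, lower=4.0, upper=7.0,
                         trunc_lo=2.0, trunc_hi=9.0):
    """For each incoming patient result, update a moving average of the
    last `window` truncated results and flag whenever that average
    drifts outside the control limits [lower, upper]."""
    buf = deque(maxlen=window)
    flags = []
    for r in results:
        if trunc_lo <= r <= trunc_hi:   # truncation damps outliers/biological noise
            buf.append(r)
        mean = sum(buf) / len(buf) if buf else None
        flags.append(mean is not None and not (lower <= mean <= upper))
    return flags

# A stable period followed by a simulated positive analytical shift.
flags = moving_average_flags([5.5] * 100 + [8.5] * 100)
```

The in-control period raises no flags, while the shifted period does once enough shifted results fill the window, illustrating the continuous monitoring that conventional episodic QC cannot provide.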
17. Intrahepatic cholestasis of pregnancy - Diagnosis and management: A consensus statement of the Society of Obstetric Medicine of Australia and New Zealand (SOMANZ): Executive summary.
- Author
-
Hague WM, Briley A, Callaway L, Dekker Nitert M, Gehlert J, Graham D, Grzeskowiak L, Makris A, Markus C, Middleton P, Peek MJ, Shand A, Stark M, and Waugh J
- Abstract
Intrahepatic cholestasis of pregnancy (ICP) is a pregnancy liver disease, characterised by pruritus and increased total serum bile acids (TSBA), Australian incidence 0.6-0.7%. ICP is diagnosed by non-fasting TSBA ≥19 μmol/L in a pregnant woman with pruritus without rash without a known pre-existing liver disorder. Peak TSBA ≥40 and ≥100 μmol/L identify severe and very severe disease respectively, associated with spontaneous preterm birth when severe, and with stillbirth, when very severe. Benefit-vs-risk for iatrogenic preterm birth in ICP remains uncertain. Ursodeoxycholic acid remains the best pharmacotherapy preterm, improving perinatal outcome and reducing pruritus, although it has not been shown to reduce stillbirth., (© 2023 The Authors. Australian and New Zealand Journal of Obstetrics and Gynaecology published by John Wiley & Sons Australia, Ltd on behalf of Royal Australian and New Zealand College of Obstetricians and Gynaecologists.)
- Published
- 2023
- Full Text
- View/download PDF
18. Functional Reference Limits: Describing Physiological Relationships and Determination of Physiological Limits for Enhanced Interpretation of Laboratory Results.
- Author
-
Chuah TY, Lim CY, Tan RZ, Pratumvinit B, Loh TP, Vasikaran S, and Markus C
- Subjects
- Humans, Reference Values, Biomarkers, Clinical Laboratory Techniques, Laboratories
- Abstract
Functional reference limits describe key changes in the physiological relationship between a pair of physiologically related components. Statistically, this can be represented by a significant change in the curvature of a mathematical function or curve (e.g., an observed plateau). The point at which the statistical relationship changes significantly is the point of curvature inflection and can be mathematically modeled from the relationship between the interrelated biomarkers. Conceptually, they reside between reference intervals, which describe the statistical boundaries of a single biomarker within the reference population, and clinical decision limits, which are often linked to the risk of morbidity or mortality and set as thresholds. Functional reference limits provide important physiological and pathophysiological insights that can aid laboratory result interpretation. Laboratory professionals are in a unique position to harness data from laboratory information systems to derive clinically relevant values. Increasing research on and reporting of functional reference limits in the literature will enhance their contribution to laboratory medicine and widen the evidence base underpinning clinical decision limits, which is currently derived almost exclusively from clinical trials. Their inclusion in laboratory reports will enhance the intellectual value of laboratory professionals in clinical care beyond the statistical boundaries of a healthy reference population and pave the way to them being considered in shaping clinical decision limits. This review provides an overview of the concepts related to functional reference limits, clinical examples of their use, and the impetus to include them in laboratory reports.
- Published
- 2023
- Full Text
- View/download PDF
19. Solar Energy-driven Land-cover Change Could Alter Landscapes Critical to Animal Movement in the Continental United States.
- Author
-
Levin MO, Kalies EL, Forester E, Jackson ELA, Levin AH, Markus C, McKenzie PF, Meek JB, and Hernandez RR
- Subjects
- Animals, United States, Biodiversity, Climate Change, Electricity, Forecasting, Ecosystem, Conservation of Natural Resources, Solar Energy
- Abstract
The United States may produce as much as 45% of its electricity using solar energy technology by 2050, which could require more than 40,000 km² of land to be converted to large-scale solar energy production facilities. Little is known about how such development may impact animal movement. Here, we use five spatially explicit projections of solar energy development through 2050 to assess the extent to which ground-mounted photovoltaic solar energy expansion in the continental United States may impact land-cover and alter areas important for animal movement. Our results suggest that there could be a substantial overlap between solar energy development and land important for animal movement: across projections, 7-17% of total development is expected to occur on land with high value for movement between large protected areas, while 27-33% of total development is expected to occur on land with high value for climate-change-induced migration. We also found substantial variation in the potential overlap of development and land important for movement at the state level. Solar energy development, and the policies that shape it, may align goals for biodiversity and climate change by incorporating the preservation of animal movement as a consideration in the planning process.
- Published
- 2023
- Full Text
- View/download PDF
20. Between and within calibration variation: implications for internal quality control rules.
- Author
-
Lim CY, Lee JJS, Choy KW, Badrick T, Markus C, and Loh TP
- Subjects
- Male, Humans, Calibration, Quality Control, Bias, Laboratories, Prostate-Specific Antigen
- Abstract
The variability between calibrations can be larger than the within-calibration variation for some measurement procedures, that is, a large CVbetween:CVwithin ratio. In this study, we examined the false rejection rate and probability of bias detection of quality control (QC) rules at varying calibration CVbetween:CVwithin ratios. Historical QC data for six representative routine clinical chemistry serum measurement procedures (calcium, creatinine, aspartate aminotransferase, thyrotrophin, prostate specific antigen and gentamicin) were extracted to derive the CVbetween:CVwithin ratios using analysis of variance. Additionally, the false rejection rate and probability of bias detection of three 'Westgard' QC rules (2:2S, 4:1S, 10X) at varying CVbetween:CVwithin ratios (0.1-10), magnitudes of bias, and QC events per calibration (5-80) were examined through simulation modelling. The CVbetween:CVwithin ratios for the six routine measurement procedures ranged from 1.1 to 34.5. With ratios >3, false rejection rates were generally above 10%. Similarly, for QC rules involving a greater number of consecutive results, false rejection rates increased with increasing ratios, while all rules achieved maximum bias detection. Laboratories should avoid the 2:2S, 4:1S and 10X QC rules when calibration CVbetween:CVwithin ratios are elevated, particularly for those measurement procedures with a higher number of QC events per calibration., (Copyright © 2023 Royal College of Pathologists of Australasia. Published by Elsevier B.V. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
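The effect described in entry 20, rising false rejection rates as the between-calibration component grows, can be reproduced with a toy Monte Carlo simulation. This is a sketch under stated assumptions, not the authors' model: control limits are set at ±2 within-calibration SDs, each calibration contributes one shared Gaussian shift, and only the 2:2S rule is implemented.

```python
import numpy as np

rng = np.random.default_rng(7)

def false_rejection_rate_22s(ratio, n_calibrations=2000, qc_per_cal=20):
    """Estimate the false rejection rate of the 2:2S rule (two consecutive
    QC results beyond the same 2SD limit) when every calibration adds a
    shared shift with SD = ratio x the within-calibration SD (set to 1).
    No true assay bias is simulated, so every rejection is false."""
    rejections = 0
    for _ in range(n_calibrations):
        shift = rng.normal(0.0, ratio)                   # between-calibration part
        qc = shift + rng.normal(0.0, 1.0, qc_per_cal)    # within-calibration noise
        hi = qc > 2.0
        lo = qc < -2.0
        if np.any(hi[:-1] & hi[1:]) or np.any(lo[:-1] & lo[1:]):
            rejections += 1
    return rejections / n_calibrations
```

With a small CVbetween:CVwithin ratio the rule rejects only a few percent of in-control calibrations, while a large ratio pushes the false rejection rate far higher, mirroring the abstract's warning about consecutive-result rules.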
21. Difference- and regression-based approaches for detection of bias.
- Author
-
Lim CY, Markus C, Greaves R, and Loh TP
- Subjects
- Humans, Computer Simulation, Probability, Bias, Research Design
- Abstract
Objective: This simulation study was undertaken to assess the statistical performance of six commonly used rejection criteria for bias detection., Methods: The false rejection rate (i.e. rejection in the absence of simulated bias) and the probability of bias detection were assessed for the following: difference in measurements for individual sample pairs, the mean of the paired differences, t-statistics (paired t-test), slope < 0.9 or > 1.1, intercept > 50% of the lower limit of the measurement range, and coefficient of determination (R²) > 0.95. The linear regressions evaluated were ordinary least squares, weighted least squares and Passing-Bablok regressions. A bias detection rate of < 50% and false rejection rates of > 10% are considered unacceptable for the purpose of this study., Results: Rejection criteria based on regression slope, intercept and paired difference (10%) for individual samples have high false rejection rates and/or a low probability of bias detection. T-statistics (α = 0.05) performed best in low range ratio (lowest-to-highest concentration in the measurement range) and low imprecision scenarios. The mean difference (10%) performed better in all other range ratio and imprecision scenarios. Combining the mean difference and paired t-test improves the power of bias detection but carries higher false rejection rates., Conclusions: This study provides objective evidence on commonly used rejection criteria to guide laboratories on the experimental design and statistical assessment for bias detection during method evaluation or reagent lot verification., Competing Interests: Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2023 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
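The entry above compares rejection criteria for bias detection in paired-measurement experiments. A minimal sketch (not the authors' code; the sample size, CV, bias magnitude and normal-approximation critical value are illustrative assumptions) contrasts two of the criteria, the 10% mean-difference limit and the paired t-statistic:

```python
import numpy as np

rng = np.random.default_rng(0)

def detect_bias(n=40, cv=0.03, bias=0.05, n_iter=2000):
    """Estimate detection rates of two rejection criteria over repeated
    simulated lot comparisons: |mean paired relative difference| > 10%,
    and a paired t-test (two-sided, normal approximation, alpha = 0.05)."""
    hits_mean, hits_t = 0, 0
    crit = 1.96  # normal approximation to the t critical value
    for _ in range(n_iter):
        true = rng.uniform(10, 100, n)                          # sample concentrations
        lot_a = true * (1 + rng.normal(0, cv, n))               # existing reagent lot
        lot_b = true * (1 + bias) * (1 + rng.normal(0, cv, n))  # new lot with bias
        rel_diff = (lot_b - lot_a) / lot_a
        if abs(rel_diff.mean()) > 0.10:
            hits_mean += 1
        t = rel_diff.mean() / (rel_diff.std(ddof=1) / np.sqrt(n))
        if abs(t) > crit:
            hits_t += 1
    return hits_mean / n_iter, hits_t / n_iter
```

In this sketch, as in the abstract's low-imprecision scenarios, the t-statistic detects a modest 5% bias with high probability while a fixed 10% mean-difference limit rarely triggers.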
22. A national programme to scale-up decentralised hepatitis C point-of-care testing and treatment in Australia.
- Author
-
Grebely J, Markus C, Causer LM, Silk D, Comben S, Lloyd AR, Martinez M, Cunningham EB, O'Flynn M, Dore GJ, and Matthews S
- Subjects
- Humans, Hepacivirus genetics, Point-of-Care Testing, Australia epidemiology, Hepatitis C diagnosis, Hepatitis C drug therapy, Hepatitis C epidemiology
- Abstract
Competing Interests: JG a consultant and advisor and has received research grants from AbbVie, Camurus, Cepheid, Gilead, Hologic, Indivior, and Merck. GJD has received research grants from AbbVie, Gilead, and Merck. ARL has received testing equipment and tests from Cepheid.
- Published
- 2023
- Full Text
- View/download PDF
23. [ECMO therapy for COVID-19 ARDS (Acute Respiratory Distress Syndrome) during pregnancy enables preservation of pregnancy and full-term delivery].
- Author
-
Sitter M, Fröhlich C, Kranke P, Markus C, Wöckel A, Rehn M, Bartmann C, Frieauff E, Meybohm P, Pecks U, and Röder D
- Subjects
- Female, Pregnancy, Humans, SARS-CoV-2, Preservation, Biological, COVID-19, Extracorporeal Membrane Oxygenation, Respiratory Distress Syndrome therapy
- Published
- 2023
- Full Text
- View/download PDF
24. Lot-to-lot difference: a new approach to evaluate regression studies.
- Author
-
Lim CY, Markus C, and Loh TP
- Subjects
- Humans, Reagent Kits, Diagnostic
- Published
- 2023
- Full Text
- View/download PDF
25. Lot-to-lot reagent changes and commutability of quality testing materials for total bile acid measurements.
- Author
-
Markus C, Coat S, Marschall HU, Matthews S, Loh TP, Rankin W, and Hague WM
- Subjects
- Humans, Quality Control, Reagent Kits, Diagnostic, Bile Acids and Salts
- Published
- 2023
- Full Text
- View/download PDF
26. Calibration frequency and analytical variability of laboratory measurements.
- Author
-
Lim CY, Ow Yang S, Markus C, and Loh TP
- Subjects
- Humans, Least-Squares Analysis, Computer Simulation, Quality Control, Uncertainty, Calibration
- Abstract
Background: There is uncertainty whether increased frequency of calibrations may affect the overall analytical variability of a measurement procedure as reflected in quality control (QC) performance. In this simulation study, we examined the impact of calibration frequencies on the variability of laboratory measurements., Methods: A 5-point calibration curve was modeled with simulated concentrations ranging from 10 to 10,000 mmol/l, and signal intensities with CVs of 3 % around the mean, under a Gaussian distribution. Three levels of QC (20, 150, 600 mmol/l) interspersed within the analytical measurement range were also simulated., Results: The CV of the 3 QC levels remained stable across the different calibration frequencies simulated (5, 10, 15 and 30 QC measurements per recalibration episode). The imprecision was greatest (18 %) at the lowest concentration of 20 mmol/l when the calibration curve was derived using ordinary least squares regression, reducing to 3.5 % and 3.8 % at 150 and 600 mmol/l, respectively. The CV of all 3 QC concentrations remained constant at 3.4 % and close to the predefined CV (3 %) when weighted least squares regression was used to derive the calibration model. Similar findings were observed with 2-point calibrations using WLS models at narrower concentration ranges (50 and 100 mmol/l as well as 50 and 500 mmol/l)., Discussion: Within the parameters of the simulation study, an increased frequency of calibration events does not adversely impact the overall analytical performance of a measurement procedure under most circumstances., Competing Interests: Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2022 Elsevier B.V. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
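The calibration simulation above can be reproduced in outline. This sketch assumes a purely proportional-noise linear detector (the abstract does not state the exact signal model), so the low-end OLS imprecision differs in magnitude from the reported 18%, but the qualitative contrast between ordinary and 1/x²-weighted least squares calibration is the same:

```python
import numpy as np

rng = np.random.default_rng(1)
cal_conc = np.array([10.0, 100.0, 500.0, 2000.0, 10000.0])  # 5-point calibration
qc = np.array([20.0, 150.0, 600.0])                         # QC levels (mmol/l)

def qc_cv(weighted, n_runs=3000, cv=0.03):
    """CV of back-calculated QC results when each run is recalibrated with
    noisy signals, using OLS or 1/x^2-weighted least squares."""
    results = np.empty((n_runs, qc.size))
    for i in range(n_runs):
        sig = cal_conc * (1 + rng.normal(0, cv, cal_conc.size))  # linear detector
        w = 1 / cal_conc**2 if weighted else np.ones_like(cal_conc)
        # (weighted) linear fit: signal = a * conc + b
        W = np.diag(w)
        X = np.column_stack([cal_conc, np.ones_like(cal_conc)])
        a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ sig)
        qc_sig = qc * (1 + rng.normal(0, cv, qc.size))
        results[i] = (qc_sig - b) / a  # back-calculate concentration
    return results.std(axis=0) / results.mean(axis=0)
```

Running `qc_cv(weighted=False)` inflates the CV at the lowest QC level, while `qc_cv(weighted=True)` keeps all three levels near the simulated 3% signal CV, mirroring the abstract's finding.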
27. Calibration Practices in Clinical Mass Spectrometry: Review and Recommendations.
- Author
-
Cheng WL, Markus C, Lim CY, Tan RZ, Sethi SK, and Loh TP
- Subjects
- Chromatography, Liquid methods, Humans, Mass Spectrometry, Reference Standards, Reproducibility of Results, Calibration
- Abstract
Background: Calibration is a critical component for the reliability, accuracy, and precision of mass spectrometry measurements. Optimal practice in the construction, evaluation, and implementation of a new calibration curve is often underappreciated. This systematic review examined how calibration practices are applied to liquid chromatography-tandem mass spectrometry measurement procedures., Methods: The electronic database PubMed was searched from the date of database inception to April 1, 2022. The search terms used were "calibration," "mass spectrometry," and "regression." Twenty-one articles were identified and included in this review, following evaluation of the titles, abstracts, full text, and reference lists of the search results., Results: The use of matrix-matched calibrators and stable isotope-labeled internal standards helps to mitigate the impact of matrix effects. A higher number of calibration standards or replicate measurements improves the mapping of the detector response and hence the accuracy and precision of the regression model. Constructing a calibration curve with each analytical batch recharacterizes the instrument detector but does not reduce the actual variability. The analytical response and measurand concentrations should be considered when constructing a calibration curve, along with subsequent use of quality controls to confirm assay performance. It is important to assess the linearity of the calibration curve by using actual experimental data and appropriate statistics. The heteroscedasticity of the calibration data should be investigated, and appropriate weighting should be applied during regression modeling., Conclusions: This review provides an outline and guidance for optimal calibration practices in clinical mass spectrometry laboratories.
- Published
- 2023
- Full Text
- View/download PDF
28. Delta checks.
- Author
-
Loh TP, Tan RZ, Sethi SK, Lim CY, and Markus C
- Abstract
Delta check is an electronic error detection tool. It compares the difference in sequential results within a patient against a predefined limit, and when exceeded, the delta check rule is considered triggered. The patient results should be withheld for review and troubleshooting before release to the clinical team for patient management. Delta check was initially developed as a tool to detect wrong-blood-in-tube (sample misidentification) errors. It is now applied to detect errors more broadly within the total testing process. Recent advancements in the theoretical understanding of delta check have allowed for more precise application of this tool to achieve the desired clinical performance and operational setup. In this chapter, we review the different pre-implementation considerations, the foundation concepts of delta check, the process of setting up key delta check parameters, performance verification and troubleshooting of a delta check flag., (Copyright © 2023. Published by Elsevier Inc.)
- Published
- 2023
- Full Text
- View/download PDF
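The delta check logic summarised above is simple to express in code. This is a minimal sketch (the limit values a laboratory would use are measurand-specific and not given here):

```python
def delta_check(current, previous, limit_abs=None, limit_pct=None):
    """Flag a result when the change from the patient's previous result
    exceeds a predefined absolute or percentage delta limit."""
    delta = current - previous
    if limit_abs is not None and abs(delta) > limit_abs:
        return True
    if limit_pct is not None and previous != 0 \
            and abs(delta / previous) * 100 > limit_pct:
        return True
    return False
```

A triggered flag (return value `True`) would hold the result for review rather than auto-release it.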
29. Lot-to-lot variation and verification.
- Author
-
Loh TP, Markus C, Tan CH, Tran MTC, Sethi SK, and Lim CY
- Subjects
- Humans, Quality Control, Laboratories, Reagent Kits, Diagnostic, Chemistry, Clinical
- Abstract
Lot-to-lot verification is an integral component for monitoring the long-term stability of a measurement procedure. The practice is challenged by the resource requirements as well as uncertainty surrounding experimental design and statistical analysis that is optimal for individual laboratories, although guidance is becoming increasingly available. Collaborative verification efforts as well as application of patient-based monitoring are likely to further improve identification of any differences in performance in a relatively timely manner. Appropriate follow-up action for a failed lot-to-lot verification is required and must balance potential disruptions to clinical services provided by the laboratory. Manufacturers need to increase transparency surrounding release criteria and work more closely with laboratory professionals to ensure acceptable reagent lots are released to end users. A tripartite collaboration between regulatory bodies, manufacturers, and laboratory medicine professional bodies is key to developing a balanced system where regulatory, manufacturing, and clinical requirements of laboratory testing are met, to minimize differences between reagent lots and ensure patient safety. Clinical Chemistry and Laboratory Medicine has served as a fertile platform for advancing the discussion and practice of lot-to-lot verification in the past 60 years and will continue to be an advocate of this important topic for many more years to come., (© 2022 Walter de Gruyter GmbH, Berlin/Boston.)
- Published
- 2022
- Full Text
- View/download PDF
30. Method evaluation in the clinical laboratory.
- Author
-
Loh TP, Cooke BR, Markus C, Zakaria R, Tran MTC, Ho CS, and Greaves RF
- Subjects
- Humans, Clinical Laboratory Techniques, Laboratories, Laboratories, Clinical, Clinical Laboratory Services
- Abstract
Method evaluation is one of the critical components of the quality system that ensures the ongoing quality of a clinical laboratory. As part of implementing new methods or reviewing best practices, the peer-reviewed published literature is often searched for guidance. From the outset, Clinical Chemistry and Laboratory Medicine (CCLM) has a rich history of publishing methods relevant to clinical laboratory medicine. An insight into submissions, from editors' and reviewers' experiences, shows that authors still struggle with method evaluation, particularly the appropriate requirements for validation in clinical laboratory medicine. Here, we provide, through a series of discussion points, an overview of the status, challenges, and needs of method evaluation from the perspective of clinical laboratory medicine. We identify six key high-level aspects of clinical laboratory method evaluation that potentially lead to inconsistency. 1. Standardisation of terminology, 2. Selection of analytical performance specifications, 3. Experimental design of method evaluation, 4. Sample requirements of method evaluation, 5. Statistical assessment and interpretation of method evaluation data, and 6. Reporting of method evaluation data. Each of these areas requires considerable work to harmonise the practice of method evaluation in laboratory medicine, including more empirical studies to be incorporated into guidance documents that are relevant to clinical laboratories and are freely and widely available. To further close the loop, educational activities and fostering professional collaborations are essential to promote and improve the practice of method evaluation procedures., (© 2022 the author(s), published by De Gruyter, Berlin/Boston.)
- Published
- 2022
- Full Text
- View/download PDF
31. The Validity of the SEEV Model as a Process Measure of Situation Awareness: The Example of a Simulated Endotracheal Intubation.
- Author
-
Grundgeiger T, Hohm A, Michalek A, Egenolf T, Markus C, and Happel O
- Subjects
- Anesthesiologists, Humans, Intubation, Intratracheal, Process Assessment, Health Care, Anesthesiology, Awareness
- Abstract
Objective: In the context of anesthesiology, we investigated whether the salience, effort, expectancy, value (SEEV) model fit is associated with situation awareness and perception scores., Background: The distribution of visual attention is important for situation awareness (that is, understanding what is going on) in safety-critical domains. Although the SEEV model has been suggested as a process situation awareness measure, the validity of the model as a predictor of situation awareness has not been tested., Method: In a medical simulation, 31 senior and 30 junior anesthesiologists wore a mobile eye tracker and induced general anesthesia in a simulated patient. When inserting a breathing tube into the mannequin's trachea (endotracheal intubation), the scenario included several clinically relevant events for situation awareness and general events in the environment. Both were assessed using direct awareness measures., Results: The overall SEEV model fit was good with no difference between junior and senior anesthesiologists. Overall, the situation awareness scores were low. As expected, the SEEV model fits showed significant positive correlations with situation awareness level 1 scores., Conclusion: The SEEV model seems to be suitable as a process situation awareness measure to predict and investigate the perception of changes in the environment (situation awareness level 1). The situation awareness scores indicated that anesthesiologists seem not to perceive the environment well during endotracheal intubation., Application: The SEEV model fit can be used to capture and assess situation awareness level 1. During endotracheal intubation, anesthesiologists should be supported by technology or staff to notice changes in the environment.
- Published
- 2022
- Full Text
- View/download PDF
32. Setting Bias Specifications Based on Qualitative Assays With a Quantitative Cutoff Using COVID-19 as a Disease Model.
- Author
-
Lim CY, Chang WZ, Markus C, Horvath AR, and Loh TP
- Subjects
- Bias, COVID-19 Testing, Humans, Polymerase Chain Reaction, Predictive Value of Tests, COVID-19 diagnosis
- Abstract
Objectives: Automated qualitative serology assays often measure quantitative signals that are compared against a manufacturer-defined cutoff for qualitative (positive/negative) interpretation. The current general practice of assessing serology assay performance by overall concordance in a qualitative manner may not detect the presence of analytical shift/drift that could affect disease classifications., Methods: We describe an approach to defining bias specifications for qualitative serology assays that considers minimum positive predictive values (PPVs) and negative predictive values (NPVs). Desirable minimum PPVs and NPVs for a given disease prevalence are projected as equi-PPV and equi-NPV lines into the receiver operating characteristic curve space of coronavirus disease 2019 serology assays, and the boundaries define the allowable area of performance (AAP)., Results: More stringent predictive values produce smaller AAPs. When higher NPVs are required, there is lower tolerance for negative biases. Conversely, when higher PPVs are required, there is less tolerance for positive biases. As prevalence increases, so too does the allowable positive bias, although the allowable negative bias decreases. The bias specification may be asymmetric in the positive and negative directions and should be method specific., Conclusions: The described approach allows setting bias specifications in a way that considers clinical requirements for qualitative assays that measure signal intensity (eg, serology and polymerase chain reaction)., (© The Author(s) 2022. Published by Oxford University Press on behalf of American Society for Clinical Pathology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.)
- Published
- 2022
- Full Text
- View/download PDF
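The allowable-area-of-performance idea above follows directly from the standard PPV/NPV formulas. A minimal sketch (the predictive-value thresholds and prevalence below are illustrative, not the paper's specifications):

```python
def within_aap(sens, spec, prevalence, min_ppv, min_npv):
    """Check whether an assay operating point (sensitivity, specificity)
    lies inside the allowable area of performance defined by minimum PPV
    and NPV at a given disease prevalence."""
    p = prevalence
    ppv = sens * p / (sens * p + (1 - spec) * (1 - p))
    npv = spec * (1 - p) / (spec * (1 - p) + (1 - sens) * p)
    return ppv >= min_ppv and npv >= min_npv
```

Sweeping `sens`/`spec` over a grid with this predicate traces out the AAP boundary in ROC space for a chosen prevalence.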
33. Bile acid reference intervals for evidence-based practice.
- Author
-
Ovadia C, Mitchell AL, Markus C, Hague WM, and Williamson C
- Subjects
- Humans, Reference Values, Bile Acids and Salts, Evidence-Based Practice
- Published
- 2022
- Full Text
- View/download PDF
34. Lot-to-lot reagent verification: Effect of sample size and replicate measurement on linear regression approaches.
- Author
-
Koh NWX, Markus C, Loh TP, and Lim CY
- Subjects
- Humans, Indicators and Reagents, Least-Squares Analysis, Linear Models, Sample Size, Laboratories
- Abstract
Background: We investigate the simulated impact of varying sample size and replicate number using ordinary least squares (OLS) and Deming regression (DR) in both weighted and unweighted forms, when applied to paired measurements in lot-to-lot verification., Methods: Simulation parameters investigated in this study were: range ratio, analytical coefficient of variation, sample size, replicates, alpha (level of significance) and constant and proportional biases. For each simulation scenario, 10,000 iterations were performed, and the average probability of bias detection was determined., Results: Generally, the weighted forms of regression significantly outperformed the unweighted forms for bias detection. At the low range ratio (1:10), for both weighted OLS and DR, improved bias detection was observed with greater number of replicates, than increasing the number of comparison samples. At the high range ratio (1:1000), for both weighted OLS and DR, increasing the number of replicates above two is only slightly more advantageous in the scenarios examined. Increasing the number of comparison samples resulted in better detection of smaller biases between reagent lots., Conclusions: The results of this study allow laboratories to determine a tailored approach to lot-to-lot verification studies, balancing the number of replicates and comparison samples with the analytical performance of measurement procedures involved., (Copyright © 2022 Elsevier B.V. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
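For readers unfamiliar with the Deming regression used in the entry above, the standard closed-form estimator is short. This is a generic textbook implementation, not the authors' simulation code; the simulated 5% proportional bias in the usage below is an illustrative assumption:

```python
import numpy as np

def deming(x, y, lam=1.0):
    """Unweighted Deming regression slope and intercept; lam is the
    assumed ratio of the y- to x-measurement error variances."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxx = ((x - mx) ** 2).mean()
    syy = ((y - my) ** 2).mean()
    sxy = ((x - mx) * (y - my)).mean()
    slope = (syy - lam * sxx
             + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx
```

With paired lot measurements carrying a proportional bias, the fitted slope deviates from 1, which is the signal the regression-based verification criteria act on.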
35. An Objective Approach to Deriving the Clinical Performance of Autoverification Limits.
- Author
-
Loh TP, Tan RZ, Lim CY, and Markus C
- Subjects
- Humans, Laboratories
- Abstract
This study describes an objective approach to deriving the clinical performance of autoverification rules to inform laboratory practice when implementing them. Anonymized historical laboratory data for 12 biochemistry measurands were collected and Box-Cox-transformed to approximate a Gaussian distribution. The historical laboratory data were assumed to be error-free. Using probability theory, the clinical specificity of a set of autoverification limits can be derived by calculating the percentile values of the overall distribution of a measurand. The 5th and 95th percentile values of the laboratory data were calculated to achieve a 90% clinical specificity. Next, a predefined tolerable total error adopted from the Royal College of Pathologists of Australasia Quality Assurance Program was applied to the extracted data before subjecting it to Box-Cox transformation. Using a standard normal distribution, the clinical sensitivity can be derived from the probability of the Z-value to the right of the autoverification limit for a one-tailed probability and multiplied by two for a two-tailed probability. The clinical sensitivity showed an inverse relationship with between-subject biological variation. The laboratory can set and assess the clinical performance of its autoverification rules so that they conform to its desired risk profile.
- Published
- 2022
- Full Text
- View/download PDF
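The percentile-based reasoning above can be sketched for the idealised Gaussian case. This is a simplified illustration (a unit-normal transformed distribution and an error expressed in SD units are assumptions for clarity, not the paper's measurand-specific values):

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def autoverification_performance(mu, sd, total_error):
    """For limits at the 5th/95th percentiles of a Gaussian result
    distribution (i.e. 90% clinical specificity), estimate the clinical
    sensitivity for results shifted by a tolerable total error."""
    lo, hi = mu - 1.645 * sd, mu + 1.645 * sd
    flagged_high = 1 - phi((hi - (mu + total_error)) / sd)
    flagged_low = phi((lo - (mu + total_error)) / sd)
    return lo, hi, flagged_high + flagged_low
```

With `total_error=0` the returned probability is simply the 10% flagging rate implied by the 90% specificity; as the error grows relative to the distribution spread, sensitivity rises toward 1.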
36. Comparison of two (data mining) indirect approaches for between-subject biological variation determination.
- Author
-
Tan RZ, Markus C, Vasikaran S, and Loh TP
- Subjects
- Computer Simulation, Data Mining, Humans, Reference Values, Biological Variation, Population, Laboratories
- Abstract
Background: Between-subject biological variation (CVg) is an important parameter in several aspects of laboratory practice, including setting of analytical performance specifications, delta checks and calculation of the index of individuality. Using simulations, we compare the performance of two indirect (data mining) approaches for deriving CVg., Methods: The expected mean squares (EMS) method was compared against that proposed by Harris and Fraser. Using numerical simulations, d the percentage difference in the mean between the non-pathological and pathological populations, CVi the within-subject coefficient of variation of the non-pathological distribution, f the fraction of pathological values, and e the relative increase in CVi of the pathological distribution were varied for a total of 320 conditions to examine the impact on the relative fractional error of the recovered CVg compared to the true value., Results: Comparing the two methods, the EMS and Harris and Fraser's approaches yielded similar performance, with 158 and 157 conditions, respectively, within ±0.20 fractional error of the true underlying CVg for the normal and lognormal distributions. It is observed that both EMS and Harris and Fraser's method performed better using the calculated CVi rather than the actual ('presumptive') CVi. The number of conditions within 0.20 fractional error of the true underlying CVg did not differ significantly between the normal and lognormal distributions. The estimation of CVg improved with decreasing values of f, d and CVi/CVg., Discussions: The two statistical approaches included in this study showed reliable performance under the simulation conditions examined., (Copyright © 2022 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
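The EMS idea referenced above rests on a variance decomposition: with duplicate results per subject, the variance of subject means contains both the between-subject component and half the within-subject component. A two-replicate sketch (the population parameters in the usage are hypothetical, not from the study):

```python
import numpy as np

def cvg_ems(r1, r2):
    """Estimate between-subject biological variation (CVg, in %) from two
    results per subject using expected mean squares:
    Var(subject mean) = sigma_g^2 + sigma_i^2 / 2."""
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    means = (r1 + r2) / 2
    var_i = np.var(r1 - r2, ddof=1) / 2          # within-subject variance
    var_g = np.var(means, ddof=1) - var_i / 2    # between-subject variance
    return np.sqrt(max(var_g, 0.0)) / means.mean() * 100
```

Subtracting the within-subject share keeps analytical/biological noise from inflating the between-subject estimate.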
37. Performance of four regression frameworks with varying precision profiles in simulated reference material commutability assessment.
- Author
-
Markus C, Tan RZ, Lim CY, Rankin W, Matthews SJ, Loh TP, and Hague WM
- Subjects
- Humans, Reference Standards
- Abstract
Objectives: One approach to assessing reference material (RM) commutability and agreement with clinical samples (CS) is to use ordinary least squares or Deming regression with prediction intervals. This approach assumes constant variance that may not be fulfilled by the measurement procedures. Flexible regression frameworks which relax this assumption, such as quantile regression or generalized additive models for location, scale, and shape (GAMLSS), have recently been implemented, which can model the changing variance with measurand concentration., Methods: We simulated four imprecision profiles, ranging from simple constant variance to complex mixtures of constant and proportional variance, and examined the effects on commutability assessment outcomes with the above four regression frameworks, varying the number of CS, data transformations and RM location relative to CS concentration. Regression framework performance was determined by the proportion of false rejections of commutability from prediction intervals or centiles across relative RM concentrations and was compared with the expected nominal probability coverage., Results: In simple variance profiles (constant or proportional variance), Deming regression, without or with logarithmic transformation respectively, is the most efficient approach. In mixed variance profiles, GAMLSS with smoothing techniques are more appropriate, with consideration given to increasing the number of CS and the relative location of RM. In the case where analytical coefficients of variation profiles are U-shaped, even the more flexible regression frameworks may not be entirely suitable., Conclusions: In commutability assessments, variance profiles of measurement procedures and location of RM in respect to clinical sample concentration significantly influence the false rejection rate of commutability., (© 2022 Walter de Gruyter GmbH, Berlin/Boston.)
- Published
- 2022
- Full Text
- View/download PDF
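The conventional baseline this entry builds on, an OLS fit with a prediction interval under the constant-variance assumption, can be sketched as follows (the normal-approximation z of 1.96 and the sample values in the usage are illustrative assumptions):

```python
import numpy as np

def ols_prediction_band(x, y, x_new, z=1.96):
    """OLS fit of y on x with an approximate 95% prediction interval at
    x_new (normal approximation; constant-variance assumption)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope * x)
    s = np.sqrt((resid ** 2).sum() / (n - 2))          # residual SD
    sxx = ((x - x.mean()) ** 2).sum()
    se = s * np.sqrt(1 + 1 / n + (x_new - x.mean()) ** 2 / sxx)
    pred = intercept + slope * x_new
    return pred - z * se, pred + z * se

def commutable(y_rm, band):
    """The RM is assessed commutable when its result on the second
    procedure falls inside the clinical-sample prediction band."""
    lo, hi = band
    return lo <= y_rm <= hi
```

The abstract's point is that when the variance is not constant, bands built this way over- or under-cover, which motivates the quantile-regression and GAMLSS alternatives.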
38. Comparison of six regression-based lot-to-lot verification approaches.
- Author
-
Koh NWX, Markus C, Loh TP, and Lim CY
- Subjects
- Bias, Computer Simulation, Humans, Indicators and Reagents, Laboratories
- Abstract
Objectives: Detection of between-lot reagent bias is clinically important and can be assessed by application of regression-based statistics on several paired measurements obtained from the existing and new candidate lot. Here, the bias detection capability of six regression-based lot-to-lot reagent verification assessments, including an extension of the Bland-Altman with regression approach, are compared., Methods: Least squares and Deming regression (in both weighted and unweighted forms), confidence ellipses and Bland-Altman with regression (BA-R) approaches were investigated. The numerical simulation included permutations of the following parameters: differing result range ratios (upper:lower measurement limits), levels of significance (alpha), constant and proportional biases, analytical coefficients of variation (CV), and numbers of replicates and sample sizes. The sample concentrations simulated were drawn from a uniformly distributed concentration range., Results: At a low range ratio (1:10, CV 3%), the BA-R performed the best, albeit with a higher false rejection rate and closely followed by weighted regression approaches. At larger range ratios (1:1,000, CV 3%), the BA-R performed poorly and weighted regression approaches performed the best. At higher assay imprecision (CV 10%), all six approaches performed poorly with bias detection rates <50%. A lower alpha reduced the false rejection rate, while greater sample numbers and replicates improved bias detection., Conclusions: When performing reagent lot verification, laboratories need to finely balance the false rejection rate (selecting an appropriate alpha) with the power of bias detection (appropriate statistical approach to match assay performance characteristics) and operational considerations (number of clinical samples and replicates, not having an alternate reagent lot)., (© 2022 Walter de Gruyter GmbH, Berlin/Boston.)
- Published
- 2022
- Full Text
- View/download PDF
39. Comparison of 8 methods for univariate statistical exclusion of pathological subpopulations for indirect reference intervals and biological variation studies.
- Author
-
Tan RZ, Markus C, Vasikaran S, and Loh TP
- Subjects
- Humans, Reference Values, Laboratories, Research Design
- Abstract
Background: Indirect reference intervals and biological variation studies heavily rely on statistical methods to separate pathological and non-pathological subpopulations within the same dataset. In recognition of this, we compare the performance of eight univariate statistical methods for identification and exclusion of values originating from pathological subpopulations., Methods: The eight approaches examined were: Tukey's rule with and without Box-Cox transformation; median absolute deviation; double median absolute deviation; Gaussian mixture models; van der Loo (Vdl) methods 1 and 2; and the Kosmic approach. Four scenarios, including lognormal distributions, were simulated, varying the number of pathological subpopulations and their central location, spread and proportion, for a total of 256 simulated mixed populations. A performance criterion of ±0.05 fractional error from the true underlying lower and upper reference interval was chosen., Results: Overall, the Kosmic method was a standout with the highest number of scenarios lying within the acceptable error, followed by Vdl method 1 and Tukey's rule. Kosmic and Vdl method 1 appear to discriminate better the non-pathological reference population in the case of log-normal distributed data. When the proportion and spread of pathological subpopulations are high, the performance of statistical exclusion deteriorated considerably., Discussions: It is important that laboratories use a priori defined clinical criteria to minimise the proportion of pathological subpopulation in a dataset prior to analysis. The curated dataset should then be carefully examined so that the appropriate statistical method can be applied., (Copyright © 2022 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
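Of the eight exclusion methods compared above, Tukey's rule is the simplest to state and serves as the common baseline. A minimal sketch (the fence multiplier k = 1.5 is the conventional default, and the sample data are illustrative):

```python
import numpy as np

def tukey_exclude(values, k=1.5):
    """Exclude values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's rule)."""
    v = np.asarray(values, float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    keep = (v >= q1 - k * iqr) & (v <= q3 + k * iqr)
    return v[keep]
```

Because the fences are quartile-based, a moderate fraction of extreme pathological values does not drag the cutoffs the way mean/SD-based rules would.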
40. Letter to the Editor: On moving average and internal quality control.
- Author
-
Lim CY, Markus C, Tan RZ, and Loh TP
- Subjects
- Humans, Quality Control
- Published
- 2022
- Full Text
- View/download PDF
41. Assessment of analytical bias in ferritin assays and impact on functional reference limits.
- Author
-
Choy KW, Sezgin G, Wijeratne N, Calleja J, Liwayan R, Rathnayake G, McFarlane R, McNeil A, Doery JCG, Lu Z, Markus C, and Loh TP
- Subjects
- Australia, Bias, Humans, Ferritins
- Abstract
Serum ferritin is currently the recommended laboratory test to investigate iron deficiency. There have been efforts to standardise serum ferritin assays with implementation of traceability to the World Health Organization reference standard. We evaluate the analytical bias among five widely used commercial ferritin assays in Australia. The relationship between serum ferritin and erythrocyte parameters was recently explored to derive functional reference limits. Residual patient serum specimens were analysed by five participating laboratories, each utilising a different ferritin assay (Abbott, Beckman Coulter, Roche, Siemens, and Ortho). Using a data mining approach, functional reference limits for the Siemens, Abbott, and Ortho serum ferritin methods were derived and compared. At clinically relevant ferritin decision points, compared to the Beckman method, the Roche assay showed higher results ranging from 6 μg/L (31%) at the lowest decision point to 575 μg/L (57%) at the highest decision point. In contrast, the Ortho method underestimated ferritin results at lower decision points of 20 and 30 μg/L, with estimated ferritin results of 16 μg/L (-19%) and 27 μg/L (-12%), respectively. The Abbott and Siemens assays showed a positive bias which was introduced at differing decision points. The comparison of the Siemens and Ortho methods presents similar inflection points between the two assays in the establishment of functional reference limits for serum ferritin. There remain significant biases among some of the commonly used commercial ferritin assays in Australia. More studies are needed to assess if functional reference limits are a way to overcome method commutability issues., (Copyright © 2021 Royal College of Pathologists of Australasia. Published by Elsevier B.V. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
42. Comparison of four indirect (data mining) approaches to derive within-subject biological variation.
- Author
-
Tan RZ, Markus C, Vasikaran S, and Loh TP
- Subjects
- Computer Simulation, Humans, Laboratories, Reference Values, Data Mining, Research Design
- Abstract
Objectives: Within-subject biological variation (CVi) is a fundamental aspect of laboratory medicine, from interpretation of serial results and partitioning of reference intervals to setting analytical performance specifications. Four indirect (data mining) approaches to the determination of CVi were directly compared., Methods: Paired serial laboratory results for 5,000 patients were simulated using four parameters: d, the percentage difference in the means between the pathological and non-pathological populations; CVi, the within-subject coefficient of variation for non-pathological values; f, the fraction of pathological values; and e, the relative increase in CVi of the pathological distribution. These parameters resulted in a total of 128 permutations. Performance of the Expected Mean Squares method (EMS), the median method, a result ratio method with Tukey's outlier exclusion method and a modified result ratio method with Tukey's outlier exclusion were compared., Results: Within the 128 permutations examined in this study, the EMS method performed the best, with 101/128 permutations falling within ±0.20 fractional error of the 'true' simulated CVi, followed by the result ratio method with Tukey's exclusion method for 78/128 permutations. The median method grossly under-estimated the CVi. The modified result ratio with Tukey's rule performed best overall, with 114/128 permutations within allowable error., Conclusions: This simulation study demonstrates that with careful selection of the statistical approach the influence of outliers from pathological populations can be minimised, and it is possible to recover CVi values close to the 'true' underlying non-pathological population. This finding provides further evidence for use of routine laboratory databases in derivation of biological variation components., (© 2022 Walter de Gruyter GmbH, Berlin/Boston.)
- Published
- 2022
- Full Text
- View/download PDF
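The EMS idea in the abstract above can be sketched for the paired-results case: half the mean squared difference between a patient's two results is an unbiased estimate of the within-subject variance. A minimal simulation, with all parameter values (patient count, mean, CVg, CVi) invented for illustration rather than taken from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters -- not taken from the study
n = 5000       # patients with paired serial results
mean = 100.0   # population mean of the measurand
cv_i = 0.05    # 'true' within-subject biological variation

# Between-subject variation (CVg = 10%, illustrative): each patient has a set point
setpoints = rng.normal(mean, 0.10 * mean, n)
# Two results per patient, varying around the set point with SD = CVi * set point
x1 = rng.normal(setpoints, cv_i * setpoints)
x2 = rng.normal(setpoints, cv_i * setpoints)

# EMS idea: for paired results, half the mean squared within-pair difference
# estimates the within-subject variance, untouched by between-subject spread
within_var = np.mean((x1 - x2) ** 2) / 2.0
cv_i_hat = np.sqrt(within_var) / np.mean(np.concatenate([x1, x2]))
```

Because the estimator uses only within-pair differences, the between-subject spread cancels, which is why such indirect approaches can work on routine databases at all.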
43. Dual-room twin-CT scanner in multiple trauma care: first results after implementation in a level one trauma centre.
- Author
-
Kippnich M, Schorscher N, Kredel M, Markus C, Eden L, Gassenmaier T, Lock J, and Wurmb T
- Subjects
- Humans, Injury Severity Score, Retrospective Studies, Tomography, X-Ray Computed, Multiple Trauma diagnostic imaging, Multiple Trauma surgery, Trauma Centers
- Abstract
Purpose: The trauma centre of the Wuerzburg University Hospital has integrated a pioneering dual-room twin-CT scanner into a multiple trauma pathway. For concurrent treatment of two trauma patients, two carbon CT examination and intervention tables are positioned head to head, with one sliding CT gantry in the middle. The focus of this study is the process of trauma care, with the time to CT (tCT) and the time to operation (tOR) as quality indicators., Methods: All patients with suspected multiple trauma who required emergency surgery and who were initially diagnosed by the CT trauma protocol between 05/2018 and 12/2018 were included. Data relating to time spans (tCT and tOR), severity of injury and outcome were obtained., Results: 110 of the 589 screened trauma patients had surgery immediately after finishing primary assessment in the ER. The ISS was 17 (9-34) (median and interquartile range, IQR). tCT was 15 (11-19) minutes (median and IQR) and tOR was 96.5 (75-119) minutes (median and IQR). In the first 30 days, seven patients died (6.4%), including two within the first 24 h (2%). The median length of ICU stay was two days (1-6, IQR) and the median number of ventilator days was one (0-1, IQR)., Conclusion: The twin-CT technology is a fascinating tool for organising high-quality trauma care for two multiple trauma patients simultaneously., (© 2020. The Author(s).)
- Published
- 2021
- Full Text
- View/download PDF
44. The safener isoxadifen does not increase herbicide resistance evolution in recurrent selection with fenoxaprop.
- Author
-
Gonsiorkiewicz Rigon CA, Cutti L, Angonese PS, Sulzbach E, Markus C, Gaines TA, and Merotto A Junior
- Subjects
- Brazil, Gene Expression Regulation, Plant, Genes, Plant, Genetic Variation, Genotype, Weed Control, Echinochloa genetics, Echinochloa physiology, Herbicide Resistance genetics, Herbicide Resistance physiology, Herbicides metabolism, Oxazoles metabolism, Propionates metabolism
- Abstract
Safeners are chemical compounds used to improve the selectivity and safety of herbicides in crops by activating genes that enhance herbicide metabolic detoxification. The genes activated by safeners in crops are similar to the genes causing herbicide resistance through increased metabolism in weeds. This work investigated the effect of the safener isoxadifen-ethyl (IS) in combination with fenoxaprop-p-ethyl (FE) on the evolution of herbicide resistance in Echinochloa crus-galli under recurrent selection. Reduced susceptibility was observed in the progeny after recurrent selection with both FE alone and FE + IS for two generations (G2) compared to the parental population (G0). The resistance index found in G2 after FE + IS selection was similar to that after selection with FE alone, demonstrating that the safener did not increase the rate or magnitude of herbicide resistance evolution. G2 progeny selected with FE alone and with the combination of FE + IS had increased survival to herbicides with other mechanisms of action relative to the parental G0 population. One biotype of G2 progeny had increased constitutive expression of a glutathione-S-transferase (GST1) after recurrent selection with FE + IS. G2 progeny had increased expression of two P450 genes (CYP71AK2 and CYP72A122) following treatment with FE, and of five P450 genes (CYP71AK2, CYP72A258, CYP81A12, CYP81A14 and CYP81A21) after treatment with FE + IS. Repeated selection with low doses of FE, with or without the safener IS, decreased E. crus-galli control and showed potential for cross-resistance evolution. Addition of the safener did not further decrease herbicide sensitivity in second-generation progeny; however, the recurrent use of the safener in combination with FE resulted in safener-induced increased expression of several CYP genes. This is the first report using a safener as an additional factor to study herbicide resistance evolution in weeds under experimental recurrent selection., (Copyright © 2021 Elsevier B.V. All rights reserved.)
- Published
- 2021
- Full Text
- View/download PDF
45. Internal quality control: Moving average algorithms outperform Westgard rules.
- Author
-
Poh DKH, Lim CY, Tan RZ, Markus C, and Loh TP
- Subjects
- Humans, Algorithms, Laboratories, Models, Theoretical, Programming Languages, Quality Control
- Abstract
Introduction: Internal quality control (IQC) is traditionally interpreted against predefined control limits using multi-rules or 'Westgard rules'. These include the commonly used 1:3s and 2:2s rules. Either individually or in combination, these rules have limited sensitivity for the detection of systematic errors. In this proof-of-concept study, we directly compare the performance of three moving average algorithms with Westgard rules for the detection of systematic error., Methods: In this simulation study, 'error-free' IQC data (control case) were generated. Westgard rules (1:3s and 2:2s) and three moving average algorithms (simple moving average (SMA), weighted moving average (WMA) and exponentially weighted moving average (EWMA), all using ±3SD as control limits) were applied to examine the false positive rates. Following this, systematic errors were introduced into the baseline IQC data to evaluate the probability of error detection and the average number of episodes for error detection (ANEed)., Results: From the power function graphs, all three moving average algorithms showed a better probability of error detection than Westgard rules. They also had lower ANEed than Westgard rules. False positive rates were comparable between the moving average algorithms and Westgard rules (all <0.5%). The performance of the SMA algorithm was comparable to that of the weighted forms (i.e. WMA and EWMA)., Conclusion: Application of an SMA algorithm to IQC data improves systematic error detection compared with Westgard rules, and can simplify laboratories' IQC strategies., (Copyright © 2021 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.)
- Published
- 2021
- Full Text
- View/download PDF
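As a rough illustration of the comparison in record 45, an EWMA applied to simulated IQC results flags a persistent systematic shift that accumulates across points. The QC target, analytical SD, shift size and smoothing constant below are all hypothetical, and the ±3SD steady-state limit is one common textbook choice, not necessarily the study's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical QC material: target 10.0, analytical SD 0.5 (illustrative only)
target, sd, lam = 10.0, 0.5, 0.2
baseline = rng.normal(target, sd, 100)           # 'error-free' IQC results
shifted = rng.normal(target + 2 * sd, sd, 100)   # a 2 SD systematic error
results = np.concatenate([baseline, shifted])

def ewma(x, lam):
    """Exponentially weighted moving average of sequential QC results."""
    out, s = np.empty(len(x)), x[0]
    for i, v in enumerate(x):
        s = lam * v + (1 - lam) * s
        out[i] = s
    return out

# Steady-state EWMA control limits: +/- 3 * sd * sqrt(lam / (2 - lam))
limit = 3 * sd * np.sqrt(lam / (2 - lam))
flags = np.abs(ewma(results, lam) - target) > limit
```

Because the EWMA averages over recent points, its control limits are much narrower than single-value ±3SD limits, which is the intuition behind the higher error-detection probability reported.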
46. Precision Verification: Effect of Experiment Design on False Acceptance and False Rejection Rates.
- Author
-
Lim CY, Markus C, and Loh TP
- Subjects
- Humans, Pathology, Research Design
- Abstract
Objectives: We examined the false acceptance rate (FAR) and false rejection rate (FRR) of varying precision verification experimental designs., Methods: Analysis of variance was applied to derive the subcomponents of imprecision (ie, repeatability, between-run and between-day imprecision) for complex matrix experimental designs (day × run × replicate; day × run). For simple nonmatrix designs (1 day × multiple replicates or multiday × 1 replicate), ordinary standard deviations were calculated. The FAR and FRR in these different scenarios were estimated., Results: The FRR increased as more samples were included in the precision experiment. The application of an upper verification limit, which seeks to cap the FRR at 5% for multiple experiments, significantly increased the FAR. The FRR decreases as the observed imprecision increases relative to the claimed imprecision and when a greater number of days, runs, or replicates are included in the verification design. Increasing the number of days, runs, or replicates also reduces the FAR for between-day imprecision and repeatability., Conclusions: The design of verification experiments should incorporate the local availability of resources and analytical expertise. The largest imprecision component should be targeted with a greater number of measurements. Consideration of both the FAR and FRR should be given when committing a platform into service., (© American Society for Clinical Pathology, 2021. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.)
- Published
- 2021
- Full Text
- View/download PDF
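The variance decomposition behind record 46 can be sketched with a simple one-way (day × replicate) design: the ANOVA expected mean squares separate repeatability from between-day imprecision. The design sizes and 'true' SDs below are invented for the sketch, and a full day × run × replicate analysis would add a between-run layer not shown here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 5-day x 5-replicate verification experiment (illustrative sizes)
days, reps = 5, 5
sd_repeat, sd_day = 1.0, 0.5    # 'true' repeatability and between-day SDs
day_effect = rng.normal(0.0, sd_day, days)
data = 100.0 + day_effect[:, None] + rng.normal(0.0, sd_repeat, (days, reps))

# One-way ANOVA expected mean squares:
#   MS_within  estimates sd_repeat^2
#   MS_between estimates sd_repeat^2 + reps * sd_day^2
day_means = data.mean(axis=1)
ms_within = ((data - day_means[:, None]) ** 2).sum() / (days * (reps - 1))
ms_between = reps * ((day_means - data.mean()) ** 2).sum() / (days - 1)

repeatability = np.sqrt(ms_within)
between_day = np.sqrt(max(ms_between - ms_within, 0.0) / reps)
```

The clipping at zero in the last line reflects a practical wrinkle of this estimator: with few days, MS_between can fall below MS_within by chance, which is part of why small designs have the false acceptance/rejection behaviour the abstract describes.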
47. Negative cross-resistance to clomazone in imazethapyr-resistant Echinochloa crus-galli caused by increased metabolization.
- Author
-
Cutti L, Rigon CAG, Kaspary TE, Turra GM, Markus C, and Merotto A
- Subjects
- Herbicide Resistance genetics, Isoxazoles, Oxazolidinones, Acetolactate Synthase genetics, Echinochloa genetics, Herbicides toxicity, Nicotinic Acids toxicity
- Abstract
Herbicide resistance is frequently reported in E. crus-galli globally, with both target-site and non-target-site resistance mechanisms to acetolactate synthase (ALS)-inhibiting herbicides. However, resistance to certain herbicides can result in increased sensitivity to other herbicides, a phenomenon called negative cross-resistance (NCR). The objective of this study was to identify the occurrence of NCR to the pro-herbicide clomazone in populations of E. crus-galli resistant to ALS inhibitors due to increased metabolization. Clomazone dose-response curves, with and without malathion, were obtained for imazethapyr-resistant and -susceptible E. crus-galli biotypes. CYP gene expression and antioxidant enzyme activity were also evaluated. The effective doses reducing dry shoot weight by 50% (ED50) obtained from the clomazone dose-response curves of the metabolism-based imazethapyr-resistant and -susceptible biotype groups were 22.712 and 58.745 g ha-1, respectively, resulting in a resistance factor (RF) of 0.37 and indicating the occurrence of NCR. The application of malathion prior to clomazone increased the resistance factor from 0.60 to 1.05, which indicates reversion of the NCR. Some of the CYP genes evaluated were expressed at a higher level (2.6-9.1 times, according to the biotype and the gene) in the imazethapyr-resistant than in the -susceptible biotypes following clomazone application. Antioxidant enzyme activity was not associated with NCR. This study is the first report of NCR directly related to the resistance mechanism of increased metabolization in plants. The occurrence of NCR to clomazone in E. crus-galli can help delay the evolution of herbicide resistance., (Copyright © 2021 Elsevier Inc. All rights reserved.)
- Published
- 2021
- Full Text
- View/download PDF
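The ED50 and resistance-factor arithmetic in record 47 can be illustrated with a two-parameter log-logistic dose-response model, a standard choice for herbicide assays. The ED50 values below are hypothetical (chosen to echo the abstract's magnitudes), the data are noiseless, and the grid-search fit is a sketch rather than the authors' fitting method:

```python
import numpy as np

def log_logistic(dose, b, ed50):
    """Two-parameter log-logistic response (1 = untreated control, 0 = full kill)."""
    return 1.0 / (1.0 + (dose / ed50) ** b)

# Hypothetical shoot dry-weight responses as fractions of untreated control
doses = np.geomspace(1.0, 1000.0, 30)
resp_r = log_logistic(doses, 1.5, 22.7)   # 'resistant' biotype, ED50 ~ 22.7
resp_s = log_logistic(doses, 1.5, 58.7)   # 'susceptible' biotype, ED50 ~ 58.7

def fit_ed50(doses, resp, b=1.5):
    # Coarse grid search minimising squared error (sketch only, not robust)
    grid = np.geomspace(1.0, 1000.0, 5000)
    errs = [np.sum((log_logistic(doses, b, e) - resp) ** 2) for e in grid]
    return grid[int(np.argmin(errs))]

ed50_r, ed50_s = fit_ed50(doses, resp_r), fit_ed50(doses, resp_s)
# A resistance factor below 1 is the signature of negative cross-resistance
rf = ed50_r / ed50_s
```

An RF below 1 means the 'resistant' biotype is controlled at a lower clomazone dose than the susceptible one, which is exactly the NCR pattern the abstract reports.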
48. The BACH project protocol: an international multicentre total Bile Acid Comparison and Harmonisation project and sub-study of the TURRIFIC randomised trial.
- Author
-
Markus C, Coat S, Marschall HU, Williamson C, Dixon P, Fuller M, Matthews S, Rankin W, Metz M, and Hague WM
- Subjects
- Bile Acids and Salts, Cholagogues and Choleretics therapeutic use, Female, Humans, Multicenter Studies as Topic, Pregnancy, Randomized Controlled Trials as Topic, Ursodeoxycholic Acid therapeutic use, Cholestasis, Intrahepatic diagnosis, Cholestasis, Intrahepatic drug therapy, Pregnancy Complications diagnosis
- Abstract
Objectives: Multicentre international trials relying on diagnoses derived from biochemical results may overlook the importance of assay standardisation across the participating laboratories. Here we describe a study protocol aimed at harmonising results from total bile acid determinations within the context of an international randomised controlled Trial of two treatments, URsodeoxycholic acid and RIFampicin, for women with severe early onset Intrahepatic Cholestasis of pregnancy (TURRIFIC), referred to as the Bile Acid Comparison and Harmonisation (BACH) study, with the aim of reducing inter-laboratory heterogeneity in total bile acid assays., Methods: We simulated laboratory data to determine the feasibility of total bile acid recalibration using a reference set of patient samples with a consensus value approach, and subsequently used regression-based techniques to transform the data., Results: From these simulations, we demonstrated that mathematical recalibration of total bile acid results is plausible, with a high probability of successfully harmonising results across participating laboratories., Conclusions: Standardisation of bile acid results facilitates the commutability of laboratory results and their collation for statistical analysis. It may provide the momentum for broader application of the described techniques in the setting of large-scale multinational clinical trials dependent on results from non-standardised assays., (© 2021 Walter de Gruyter GmbH, Berlin/Boston.)
- Published
- 2021
- Full Text
- View/download PDF
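The regression-based transformation described in the BACH protocol (record 48) can be sketched as: regress consensus values on one laboratory's results for a shared sample panel, then map that laboratory's results through the fitted line. The panel values, bias model and noise level below are all hypothetical, and ordinary least squares stands in for whatever regression technique the study ultimately specifies:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical reference panel: consensus total bile acid values (umol/L)
consensus = np.linspace(5.0, 100.0, 20)
# One participating lab with proportional and constant bias plus noise
lab = 1.15 * consensus - 2.0 + rng.normal(0.0, 1.0, consensus.size)

# Regress consensus on the lab's results, then transform the lab's results
slope, intercept = np.polyfit(lab, consensus, 1)
recalibrated = slope * lab + intercept

bias_before = np.mean(lab - consensus)       # sizeable systematic bias
bias_after = np.mean(recalibrated - consensus)  # ~0 after recalibration
```

Once each laboratory's line is fitted against the common panel, results from all sites can be expressed on the consensus scale, which is what makes pooled statistical analysis across non-standardised assays defensible.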
49. Pregnancy-specific continuous reference intervals for haematology parameters from an Australian dataset: A step toward dynamic continuous reference intervals.
- Author
-
Markus C, Flores C, Saxon B, and Osborn K
- Subjects
- Australia, Bayes Theorem, Female, Gestational Age, Humans, Pregnancy, Reference Values, Hematology
- Abstract
Background: Pregnancy is a time of dynamic physiological changes occurring as a continuous spectrum. Smoothed centile curves describe the distribution of measurements as a function of age. There has been no application of centile charts in pregnancy for haematological parameters., Aims: To derive gestational age-specific centile curves for six haematological parameters and compare these with published reference intervals., Materials and Methods: An LMS approach was used with haematology results from an obstetric hospital laboratory database. After application of exclusion criteria, smoothed centiles conditional on gestational age were obtained by a two-step process: (i) finding the best model within four response distributions using Bayesian information criteria, and (ii) selecting the best model among the response distributions based on test-dataset global deviance., Results: In total, 11 255 deliveries were extracted from 10 813 patients. There was little difference between distributions, and Box-Cox power exponential was selected overall. Red cell parameters showed similar trends: values fell until the second trimester and increased thereafter. Leukocyte and neutrophil counts rapidly increased and plateaued around 15 weeks. Platelets exhibited a gradual fall with advancing gestation., Conclusions: This is the first study to use an LMS approach to model gestational age-dependent variations in haematological parameters. Proposed haemoglobin reference intervals were lower than those published but reflect our patient population. Serial monitoring of antenatal patients, as is the standard of care, in conjunction with these centile charts, may highlight trends in red cell changes with advancing gestation, allowing early identification of adverse pregnancy outcomes and evolving anaemia., (© 2020 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.)
- Published
- 2021
- Full Text
- View/download PDF
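A crude stand-in for the LMS approach in record 49: bin simulated results by gestational age and take empirical centiles per bin. True LMS additionally fits smooth curves for skewness (L), median (M) and spread (S) across age, so this only shows the mechanics. The haemoglobin model below (a fall to a second-trimester nadir, then a rise) is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical antenatal haemoglobin results (g/L) across gestation (weeks)
ga = rng.uniform(6.0, 40.0, 4000)
# Simulated fall to a nadir near 24 weeks, then a rise (illustrative shape)
hb = 125.0 - 0.8 * (ga - np.clip(ga - 24.0, 0.0, None) * 1.6) \
     + rng.normal(0.0, 6.0, ga.size)

# Empirical 2.5th/50th/97.5th centiles per 2-week gestational-age bin
bins = np.arange(6, 42, 2)
idx = np.digitize(ga, bins)
centiles = {
    b: np.percentile(hb[idx == i], [2.5, 50, 97.5])
    for i, b in enumerate(bins, start=1)
    if np.any(idx == i)
}
```

Plotting the three centile values against bin midpoints would give the familiar centile-chart shape; the LMS step replaces the jagged per-bin estimates with smooth age-conditional curves.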
50. Insights into the Role of Transcriptional Gene Silencing in Response to Herbicide-Treatments in Arabidopsis thaliana .
- Author
-
Markus C, Pecinka A, and Merotto A Jr
- Subjects
- 2,4-Dichlorophenoxyacetic Acid pharmacology, Acyltransferases genetics, Arabidopsis Proteins genetics, Chromatin chemistry, Chromatography, High Pressure Liquid, DNA Demethylation, DNA Methylation, Mutation, Nicotinic Acids pharmacology, Nuclear Proteins genetics, RNA, Plant genetics, RNA-Seq, Transcription, Genetic, Arabidopsis drug effects, Arabidopsis genetics, Gene Expression Regulation, Plant, Gene Silencing, Herbicides pharmacology
- Abstract
Herbicide resistance is broadly recognized as the adaptive evolution of weed populations under the intense selection pressure imposed by herbicide applications. Here, we tested whether transcriptional gene silencing (TGS) and RNA-directed DNA methylation (RdDM) pathways modulate resistance to commonly applied herbicides. Using Arabidopsis thaliana wild-type plants exposed to sublethal doses of glyphosate, imazethapyr and 2,4-D, we found a partial loss of TGS and increased susceptibility to herbicides in six of the 11 tested TGS/RdDM mutants. Mutation of REPRESSOR OF SILENCING 1 (ROS1), which plays an important role in DNA demethylation, led to strongly increased susceptibility to all applied herbicides, and to imazethapyr in particular. Transcriptomic analysis of imazethapyr-treated wild-type and ros1 plants revealed that the herbicide-upregulated genes were related to chemical stimulus, secondary metabolism, stress responses, flavonoid biosynthesis and epigenetic processes. Hypersensitivity to imazethapyr of plants mutant for the flavonoid biosynthesis component TRANSPARENT TESTA 4 (TT4) strongly suggests that ROS1-dependent accumulation of flavonoids is an important mechanism in the herbicide stress response of A. thaliana. In summary, our study shows that herbicide treatment affects transcriptional gene silencing pathways and that misregulation of these pathways makes Arabidopsis plants more sensitive to herbicide treatment.
- Published
- 2021
- Full Text
- View/download PDF