2,537 results for "Quality Control"
Search Results
2. A Comment about Analytical Performance Specifications for the Combined Measurement Uncertainty Budget in the Implementation of Metrological Traceability of Parathyroid Hormone.
- Author
- Panteghini M
- Subjects
- Humans, Uncertainty, Reference Standards, Quality Control, Parathyroid Hormone
- Published
- 2024
- Full Text
- View/download PDF
3. In Reply to A Comment about Analytical Performance Specifications for the Combined Measurement Uncertainty Budget in the Implementation of Metrological Traceability of Parathyroid Hormone.
- Author
- Cavalier E and Farré-Segura J
- Subjects
- Humans, Uncertainty, Reference Standards, Quality Control, Parathyroid Hormone
- Published
- 2024
- Full Text
- View/download PDF
4. Regression-Adjusted Real-Time Quality Control
- Author
- Kwok Leung Yiu, Baishen Pan, Qian Yu, Wenhai Jiang, Yin Zhao, Luo Lei, Xincen Duan, Kim Thiam Chin, Beili Wang, Chunyan Zhang, Wei Guo, J. Zhou, Jing Zhu, and Wenqi Shao
- Subjects
- Quality Control, Computer science, Biochemistry (medical), Clinical Biochemistry, Autocorrelation, Data transformation (statistics), Contrast (statistics), Statistical process control, Regression, Constant false alarm rate, Research Design, Moving average, Statistics, Humans, Laboratories, Error detection and correction, Algorithms
- Abstract
Background Patient-based real-time quality control (PBRTQC) has gained increasing attention in the field of clinical laboratory management in recent years. Despite the many upsides that PBRTQC brings to the laboratory management system, it has been questioned for its performance and practical applicability for some analytes. This study introduces an extended method, regression-adjusted real-time quality control (RARTQC), to improve the performance of real-time quality control protocols. Methods In contrast to the PBRTQC, RARTQC has an additional regression adjustment step before using a common statistical process control algorithm, such as the moving average, to decide whether an analytical error exists. We used all patient test results of 4 analytes in 2019 from Zhongshan Hospital, Fudan University, to compare the performance of the 2 frameworks. Three types of analytical error were added in the study to compare the performance of PBRTQC and RARTQC protocols: constant, random, and proportional errors. The false alarm rate and error detection charts were used to assess the protocols. Results The study showed that RARTQC outperformed PBRTQC. RARTQC, compared with the PBRTQC, improved the trimmed average number of patients affected before detection (tANPed) at total allowable error by about 50% for both constant and proportional errors. Conclusions The regression step in the RARTQC framework removes autocorrelation in the test results, allows researchers to add additional variables, and improves data transformation. RARTQC is a powerful framework for real-time quality control research.
- Published
- 2021
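The two-stage design described in the abstract above (regression adjustment, then a conventional statistical process control step) can be sketched in code. This is a minimal illustration, not the authors' implementation: the covariate, training length, window size, and z-multiplier are all assumed parameters.

```python
import numpy as np

def rartqc_flags(results, covariates, n_train=500, window=50, z=3.0):
    """Regression-adjusted real-time QC sketch: fit a regression on an
    assumed in-control training stretch, then monitor a moving average
    of the residuals of later results against simple z-based limits."""
    results = np.asarray(results, dtype=float)
    X = np.column_stack([np.ones(len(results)), np.asarray(covariates, dtype=float)])
    # Ordinary least squares on the in-control training data only
    beta, *_ = np.linalg.lstsq(X[:n_train], results[:n_train], rcond=None)
    residuals = results - X @ beta            # regression removes covariate effects/autocorrelation
    sigma = residuals[:n_train].std()
    kernel = np.ones(window) / window
    ma = np.convolve(residuals[n_train:], kernel, mode="valid")
    limit = z * sigma / np.sqrt(window)       # limit for a mean of `window` residuals
    return np.abs(ma) > limit                 # True where an analytical error is signaled
```

In the plain PBRTQC analogue the moving average would run over the raw results instead of the residuals; the regression step is what lets additional variables be folded in.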
5. Multiplexing Homocysteine into First-Tier Newborn Screening Mass Spectrometry Assays Using Selective Thiol Derivatization.
- Author
- Pickens CA, Courtney E, Isenberg SL, Cuthbert C, and Petritis K
- Subjects
- Humans, Infant, Newborn, Tandem Mass Spectrometry methods, Quality Control, Flow Injection Analysis, Homocysteine, Neonatal Screening methods, Homocystinuria diagnosis
- Abstract
Background: Classical homocystinuria (HCU) results from deficient cystathionine β-synthase activity, causing elevated levels of Met and homocysteine (Hcy). Newborn screening (NBS) aims to identify HCU in pre-symptomatic newborns by assessing Met concentrations in first-tier screening. However, unlike Hcy, Met testing leads to a high number of false-positive and -negative results. Therefore, screening for Hcy directly in first-tier screening would be a better biomarker for use in NBS., Methods: Dried blood spot (DBS) quality control and residual clinical specimens were used in analyses. Several reducing and maleimide reagents were investigated to aid in quantification of total Hcy (tHcy). The assay that was developed and validated was performed by flow injection analysis-tandem mass spectrometry (FIA-MS/MS)., Results: Interferents of tHcy measurement were identified, so selective derivatization of Hcy was employed. Using N-ethylmaleimide (NEM) to selectively derivatize Hcy allowed interferent-free quantification of tHcy by FIA-MS/MS in first-tier NBS. The combination of tris(2-carboxyethyl)phosphine (TCEP) and NEM yielded significantly fewer matrix effects compared to dithiothreitol (DTT) and NEM. Analysis of clinical specimens demonstrated that the method could distinguish among HCU-positive newborns, presumptive normal newborns, and newborns receiving total parenteral nutrition., Conclusions: Here we present the first known validated method capable of screening tHcy in DBS during FIA-MS/MS first-tier NBS., (Published by Oxford University Press on behalf of American Association for Clinical Chemistry 2023.)
- Published
- 2023
- Full Text
- View/download PDF
6. Molecular and Serological Assays for SARS-CoV-2: Insights from Genome and Clinical Characteristics
- Author
- Runling Zhang, Jiping Shi, Rui Zhang, Dongsheng Han, and Jinming Li
- Subjects
- Quality Control, molecular assays, Disease stages, Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), Pneumonia, Viral, Clinical Biochemistry, Review Article, quality assurance, Genome, Viral, Computational biology, Antibodies, Viral, Genome, Serology, Betacoronavirus, Humans, Medicine, serological assays, Laboratory assay, Antigens, Viral, Pandemics, Phylogeny, Biochemistry, medical, SARS-CoV-2, Reverse Transcriptase Polymerase Chain Reaction, Transmission (medicine), Biochemistry (medical), COVID-19, Outbreak, RNA, Viral, Coronavirus Infections, application
- Abstract
Background The ongoing outbreak of the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has posed a challenge for worldwide public health. A reliable laboratory assay is essential both to confirm suspected patients and to exclude patients infected with other respiratory viruses, thereby facilitating the control of global outbreak scenarios. Content In this review, we focus on the genomic, transmission, and clinical characteristics of SARS-CoV-2, and comprehensively summarize the principles and related details of assays for SARS-CoV-2. We also explore the quality assurance measures for these assays. Summary SARS-CoV-2 has some unique gene sequences and specific transmission and clinical features that can inform the conduct of molecular and serological assays in many aspects, including the design of primers, the selection of specimens, and testing strategies at different disease stages. Appropriate quality assurance measures for molecular and serological assays are needed to maintain testing proficiency. Because serological assays have the potential to identify later stages of the infection and to confirm highly suspected cases with negative molecular assay results, a combination of these two assays is needed to achieve a reliable capacity to detect SARS-CoV-2.
- Published
- 2020
7. Interpreting EQA-Understanding Why Commutability of Materials Matters
- Author
- Tony Badrick, W Greg Miller, Mauro Panteghini, Vincent Delatour, Heidi Berghall, Finlay MacKenzie, and Graham Jones
- Subjects
- Quality Control, Biochemistry (medical), Clinical Biochemistry, Humans, Reference Standards
- Published
- 2021
8. Sex Hormones and Adrenal Steroids: Biological Variation Estimated Using Direct and Indirect Methods.
- Author
- Røys EÅ, Guldhaug NA, Viste K, Jones GD, Alaour B, Sylte MS, Torsvik J, Kellmann R, Strand H, Theodorsson E, Marber M, Omland T, and Aakre KM
- Subjects
- Male, Humans, Hydrocortisone, Bayes Theorem, Gonadal Steroid Hormones, Luteinizing Hormone, Follicle Stimulating Hormone, Estradiol, Steroids, Testosterone, Sex Hormone-Binding Globulin, Androstenedione, Cortisone
- Abstract
Background: Biological variation (BV) data may be used to develop analytical performance specifications (APS), reference change values (RCV), and support the applicability of population reference intervals. This study estimates within-subject BV (CVI) for several endocrine biomarkers using 3 different methodological approaches., Methods: For the direct method, 30 healthy volunteers were sampled weekly for 10 consecutive weeks. Samples were analyzed in duplicate for 17-hydroxyprogesterone (17-OHP), androstenedione, cortisol, cortisone, estradiol, follicle-stimulating hormone (FSH), luteinizing hormone (LH), sex hormone-binding globulin (SHBG), and testosterone. A CV-ANOVA with outlier removal and a Bayesian model were applied to derive the CVI. For estradiol, FSH and LH, only the male subgroup was included. In the indirect method, using the same analytes and groups, pairs of sequential results were extracted from the laboratory information system. The total result variation for individual pairs was determined by identifying a central gaussian distribution in the ratios of the result pairs. The CVI was then estimated by removing the effect of analytical variation., Results: The estimated CVI from the Bayesian model (μCVP(i)) in the total cohort was: 17-OHP, 23%; androstenedione, 20%; cortisol, 18%; cortisone, 11%; SHBG, 7.4%; testosterone, 16%; and for the sex hormones in men: estradiol, 14%; FSH, 8%; and LH, 26%. CVI-heterogeneity was present for most endocrine markers. Similar CVI data were estimated using the CV-ANOVA and the indirect method., Conclusions: Similar CVI data were obtained using 2 different direct and one indirect method. The indirect approach is a low-cost alternative ensuring implementation of CVI data applicable for local conditions., (© American Association for Clinical Chemistry 2022.)
- Published
- 2023
- Full Text
- View/download PDF
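As a worked example of how CVI estimates like those above are applied, the classical reference change value (RCV) combines analytical and within-subject variation. The CVI of 18% for cortisol comes from the abstract; the analytical CV below is an assumed example value.

```python
import math

def rcv_percent(cv_analytical, cv_within_subject, z=1.96):
    """Classical two-sided reference change value (RCV), in percent:
    RCV = sqrt(2) * z * sqrt(CVA^2 + CVI^2)."""
    return math.sqrt(2) * z * math.sqrt(cv_analytical**2 + cv_within_subject**2)

# Cortisol with CVI = 18% (from the study) and an assumed CVA of 5%:
# a serial change must exceed roughly 52% to be significant at p < 0.05.
print(round(rcv_percent(5.0, 18.0), 1))
```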
9. Patient-Based Real-Time Quality Control: Review and Recommendations
- Author
- Mark A. Cervinski, Andreas Bietenbeck, Huub H. van Rossum, Alex Katayev, Tze Ping Loh, and Tony Badrick
- Subjects
- Quality Control, Analyte, Patients, Clinical Biochemistry, Control (management), Population, Clinical Chemistry Tests, Sensitivity and Specificity, Reference Values, Humans, Quality (business), Diagnostic Errors, Biochemistry (medical), Reproducibility of Results, Quality control, Reference Standards, Reliability engineering, Control limits, Chemistry, Clinical, Algorithms, Total Quality Management
- Abstract
For many years the concept of patient-based quality control (QC) has been discussed and implemented in hematology laboratories; however, the techniques have not been widely implemented in clinical chemistry. This is mainly because of the complexity of this form of QC, as it needs to be optimized for each population and often for each analyte. However, the clear advantages of this form of QC, together with the ongoing realization of the shortcomings of “conventional” QC, have driven a need to provide guidance to laboratories to assist in deploying patient-based QC. This overview describes the components of a patient-based QC system (calculation algorithm, block size, truncation limits, control limits) and the relationship of these to the analyte being controlled. We also discuss the need for patient-based QC system optimization using patient data from the individual testing laboratory to reliably detect systematic errors while ensuring that there are few false alarms. The term patient-based real-time quality control covers many activities that use data from patient samples to detect analytical errors. These activities include the monitoring of patient population parameters such as the mean or median analyte value or using single within-patient changes such as the delta check. In this report, we will restrict the discussion to population-based parameters. This overview is intended to serve as a guide for the implementation of a patient-based QC system. The report does not cover the clinical evaluation of the population.
- Published
- 2019
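The components this review names (calculation algorithm, block size, truncation limits, control limits) can be sketched as a minimal monitoring loop. All parameter values below are illustrative stand-ins; as the review stresses, they must be optimized per analyte and per laboratory using local patient data.

```python
import numpy as np

def pbrtqc_moving_mean(values, lower, upper, block=20, loc=0.0, limit=1.0):
    """Minimal PBRTQC loop: truncation limits (lower/upper), a calculation
    algorithm (moving mean of the last `block` accepted results), and
    control limits (`loc` ± `limit`). Returns indices where an alarm fires."""
    accepted, alarms = [], []
    for i, v in enumerate(values):
        if not (lower <= v <= upper):       # truncation: exclude extreme results
            continue
        accepted.append(v)
        if len(accepted) >= block:
            stat = np.mean(accepted[-block:])
            if abs(stat - loc) > limit:     # control limit check
                alarms.append(i)
    return alarms
```

In practice the calculation algorithm could equally be a moving median or exponentially weighted mean; only the `stat` line changes.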
10. Average of Patient Deltas: Patient-Based Quality Control Utilizing the Mean Within-Patient Analyte Variation
- Author
- Mark A. Cervinski, Qian Xu, and George S. Cembrowski
- Subjects
- Systematic error, Quality Control, Analyte, Biochemistry (medical), Clinical Biochemistry, Sodium, Moving average, Statistics, Potassium, Humans, Truncation (statistics), Plasma Albumin, Alanine aminotransferase, Error detection and correction, Laboratories, Algorithms, Total protein, Mathematics
- Abstract
Background Because traditional QC is discontinuous, laboratories use additional strategies to detect systematic error. One strategy, the delta check, is best suited to detect large systematic error. The moving average (MA) monitors the mean patient analyte value but cannot equitably detect systematic error in skewed distributions. Our study combines delta check and MA to develop an average of deltas (AoD) strategy that monitors the mean delta of consecutive, intrapatient results. Methods Arrays of the differences (delta) between paired patient results collected within 20–28 h of each other were generated from historical data. AoD protocols were developed using a simulated annealing algorithm in MatLab (Mathworks) to select the number of patient delta values to average and truncation limits to eliminate large deltas. We simulated systematic error by adding bias to arrays for plasma albumin, alanine aminotransferase, alkaline phosphatase, amylase, aspartate aminotransferase, bicarbonate, bilirubin (total and direct), calcium, chloride, creatinine, lipase, sodium, phosphorus, potassium, total protein, and magnesium. The average number of deltas to detection (ANDED) was then calculated in response to induced systematic error. Results ANDED varied by combination of assay and AoD protocol. Errors in albumin, lipase, and total protein were detected with a mean of 6 delta pairs. The highest ANDED was calcium, with a positive 0.6-mg/dL shift detected with an ANDED of 75. However, a negative 0.6-mg/dL calcium shift was detected with an ANDED of 25. Conclusions AoD detects systematic error with relatively few paired patient samples and is a patient-based QC technique that will enhance error detection.
- Published
- 2020
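A minimal sketch of the average-of-deltas idea described above, with illustrative parameters rather than the study's optimized per-assay values (which were selected by simulated annealing):

```python
import numpy as np

def average_of_deltas(pairs, n_deltas=10, trunc=None):
    """Average of Deltas (AoD) sketch: each element of `pairs` is
    (earlier, later) results from the same patient collected ~20-28 h
    apart. Large deltas are truncated away, then blocks of `n_deltas`
    deltas are averaged; a block mean displaced from zero suggests a
    systematic shift."""
    deltas = np.array([later - earlier for earlier, later in pairs], dtype=float)
    if trunc is not None:
        deltas = deltas[np.abs(deltas) <= trunc]  # drop physiologically large changes
    n_blocks = len(deltas) // n_deltas
    return deltas[: n_blocks * n_deltas].reshape(n_blocks, n_deltas).mean(axis=1)
```

In a stable system the block means scatter around zero regardless of how skewed the analyte distribution is, which is what lets AoD work where a plain moving average struggles.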
11. Preparedness and Rapid Implementation of External Quality Assessment Helped Quickly Increase COVID-19 Testing Capacity in the Republic of Korea
- Author
- Hyukmin Lee, Cheon Kwon Yoo, Won Ki Min, Wee Gyo Lee, Myung Guk Han, Heungsup Sung, Sail Chun, and Sang Won Lee
- Subjects
- Quality Control, 2019-20 coronavirus outbreak, Quality Assurance, Health Care, Coronavirus disease 2019 (COVID-19), Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), Pneumonia, Viral, Clinical Biochemistry, Real-Time Polymerase Chain Reaction, Betacoronavirus, Coronavirus Envelope Proteins, Government Agencies, Viral Envelope Proteins, Republic of Korea, External quality assessment, Humans, Medicine, Letter to the Editor, Pandemics, Environmental planning, Biochemistry, medical, SARS-CoV-2, Biochemistry (medical), COVID-19, RNA-Dependent RNA Polymerase, Preparedness, RNA, Viral, Coronavirus Infections, Quality assurance
- Published
- 2020
12. The Importance of Reliable Quality Control Materials for Noninvasive Prenatal Testing
- Author
- Erik A. Sistermans
- Subjects
- Quality Control, Fetal dna, Noninvasive Prenatal Testing, Clinical Biochemistry, Mothers, Chorionic villus sampling, Cell Line, Miscarriage, Biomimetics, Pregnancy, Prenatal Diagnosis, Humans, Quality (business), Child, Massive parallel sequencing, Obstetrics, Biochemistry (medical), Amniocentesis, Female, Trisomy, Risk assessment
- Abstract
In this issue of Clinical Chemistry, Zhang and colleagues (1) describe the generation of quality control materials for noninvasive prenatal testing (NIPT). This is a very important and much-needed step forward in increasing the quality of NIPT, a step that had been largely bypassed during the very rapid development and clinical introduction of this test. Twenty-two years after the discovery of the presence of cell-free fetal DNA in maternal plasma (2) and 11 years after the first description of trisomy 21 detection using massively parallel sequencing to analyze cell-free fetal DNA (3, 4), NIPT has become the standard method of prenatal screening for trisomies 21, 13, and 18. It is not difficult to understand the reasons behind this extremely fast introduction of a clinical screening test. NIPT has 2 major advantages compared with the main existing screening tests, invasive testing and the first-trimester combined screening test (FCT). Invasive testing (chorionic villus sampling and amniocentesis) is the most reliable but has the main disadvantage that it is associated with a small risk, approximately 2 in 1000, of inducing a miscarriage (5). FCT is safe but provides pregnant women with only a risk calculation, indicating whether additional testing is needed. As a consequence, the sensitivity and specificity of FCT are relatively low, depending on the cutoff used for additional testing (6). Furthermore, FCT can be performed only during a limited time window, whereas NIPT can be performed from 10 weeks' gestation onward. The market for prenatal testing is huge. According to the WHO, the number of pregnancies worldwide exceeds 200 million each year. This, in combination with the better quality and higher safety of NIPT, has led to its very rapid and often commercially driven introduction on the clinical market. Unfortunately, introduction of a clinical …
- Published
- 2019
13. Interpreting EQA-Understanding Why Commutability of Materials Matters.
- Author
- Badrick T, Miller WG, Panteghini M, Delatour V, Berghall H, MacKenzie F, and Jones G
- Subjects
- Humans, Reference Standards, Quality Control
- Published
- 2022
- Full Text
- View/download PDF
14. Understanding Patient-Based Real-Time Quality Control Using Simulation Modeling
- Author
- Alex Katayev, Andreas Bietenbeck, Tony Badrick, Huub H. van Rossum, Mark A Cervinski, and Tze Ping Loh
- Subjects
- Quality Control, Percentile, Internet, Computer science, Clinical Laboratory Techniques, Biochemistry (medical), Clinical Biochemistry, Estimator, Bias, Moving average, Control limits, Statistics, Humans, Computer Simulation, Truncation (statistics), Error detection and correction, Block size, Algorithms, Block (data storage)
- Abstract
Background Patient-based real-time quality control (PBRTQC) avoids limitations of traditional quality control methods based on the measurement of stabilized control samples. However, PBRTQC needs to be adapted to the individual laboratories with parameters such as algorithm, truncation, block size, and control limit. Methods In a computer simulation, biases were added to real patient results of 10 analytes with diverse properties. Different PBRTQC methods were assessed on their ability to detect these biases early. Results The simulation based on 460 000 historical patient measurements for each analyte revealed several recommendations for PBRTQC. Control limit calculation with “percentiles of daily extremes” led to effective limits and allowed specification of the percentage of days with false alarms. However, changes in measurement distribution easily increased false alarms. Box–Cox but not logarithmic transformation improved error detection. Winsorization of outlying values often led to a better performance than simple outlier removal. For medians and Harrell–Davis 50 percentile estimators (HD50s), no truncation was necessary. Block size influenced medians substantially and HD50s to a lesser extent. Conversely, a change of truncation limits affected means and exponentially moving averages more than a change of block sizes. A large spread of patient measurements impeded error detection. PBRTQC methods were not always able to detect an allowable bias within the simulated 1000 erroneous measurements. A web application was developed to estimate PBRTQC performance. Conclusions Computer simulations can optimize PBRTQC but some parameters are generally superior and can be taken as default.
- Published
- 2019
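The "percentiles of daily extremes" control-limit calculation that this simulation study found effective can be sketched as follows. The per-day statistic fed in (e.g., a sequence of moving means) and the alpha level are assumptions for illustration; the point is that limits are chosen so that only a specified fraction of historical days would have raised an alarm.

```python
import numpy as np

def limits_from_daily_extremes(daily_stats, alpha=0.05):
    """'Percentiles of daily extremes' sketch: `daily_stats` holds, for
    each historical day, the sequence of the monitored statistic. The
    limits are percentiles of the daily minima and maxima, so the
    expected proportion of false-alarm days is controlled directly."""
    daily_min = [min(day) for day in daily_stats]
    daily_max = [max(day) for day in daily_stats]
    lower = np.quantile(daily_min, alpha / 2)        # ~alpha/2 of days dip below
    upper = np.quantile(daily_max, 1 - alpha / 2)    # ~alpha/2 of days rise above
    return lower, upper
```

This directly specifies "percentage of days with false alarms" rather than a per-result false-alarm rate, which is why the study favored it, with the caveat noted above that a drift in the measurement distribution inflates alarms.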
15. Microbiome Diagnostics
- Author
- Robert Schlaberg
- Subjects
- Quality Control, Bacteria, Microbiota, Biochemistry (medical), Clinical Biochemistry, Drug Resistance, Fungi, High-Throughput Nucleotide Sequencing, Mitochondria, RNA, Ribosomal, 16S, Humans, Metagenomics
- Abstract
BACKGROUND During the past decade, breakthroughs in sequencing technology and computational biology have provided the basis for studies of the myriad ways in which microbial communities ("microbiota") in and on the human body influence human health and disease. In almost every medical specialty, there is now a growing interest in accurate and replicable profiling of the microbiota for use in diagnostic and therapeutic applications. CONTENT This review provides an overview of approaches, challenges, and considerations for diagnostic applications, borrowing from other areas of molecular diagnostics, including clinical metagenomics. Methodological considerations and evolving approaches for microbiota profiling, from 16S rRNA gene-based amplicon sequencing to metagenomics and metatranscriptomics, are discussed. To improve replicability, at least the most vulnerable steps in testing workflows will need to be standardized, and continuous efforts are needed to define QC standards. Challenges such as purity of reagents and consumables, improvement of reference databases, and availability of diagnostic-grade data analysis solutions will require joint efforts across disciplines and with manufacturers. SUMMARY The body of literature supporting important links between the microbiota at different anatomic sites and human health and disease is expanding rapidly, and therapeutic manipulation of the intestinal microbiota is becoming routine. The next decade will likely see implementation of microbiome diagnostics in diagnostic laboratories to fully capitalize on technological and scientific advances and apply them in routine medical practice.
- Published
- 2019
16. External Quality Assessment Testing Near the Limit of Detection for High-Sensitivity Cardiac Troponin Assays
- Author
- Peter A. Kavsak
- Subjects
- Quality Control, Cardiac troponin, Clinical Biochemistry, Matrix (chemical analysis), Troponin complex, Limit of Detection, External quality assessment, Proficiency testing, Humans, Medicine, Volume concentration, Detection limit, Troponin I, Biochemistry (medical), Reproducibility of Results, Female, Nuclear medicine, Sensitivity (electronics)
- Abstract
To the Editor: The latest laboratory recommendations on high-sensitivity cardiac troponin (hs-cTn) endorse imprecision ≤10% at the 99th percentile with a total analytic error of
- Published
- 2018
17. Adapting an Established Clinical Chemistry Quality Control Measure for Droplet Generation Performance in Digital PCR
- Author
- Alexander Dobrovic, Su Kah Goh, Hongdo Do, Boris Ka Leong Wong, Vijayaragavan Muralidharan, and Christopher Christophi
- Subjects
- Quality Control, Biochemistry (medical), Clinical Biochemistry, Chemistry Measurement, Polymerase Chain Reaction, Workflow, Control measure, Chemistry, Clinical, Molecular targets, Digital polymerase chain reaction, Sensitivity (control systems), Biological system, Volume concentration
- Abstract
To the Editor: Droplet digital PCR (ddPCR) is an emerging platform that is being increasingly adopted by clinical laboratories. The partitioning of a PCR reaction into discrete droplets enables the accurate detection and quantification of molecular targets. Single-molecule analysis in this manner is accurate, cost-effective, and readily performed compared with next-generation sequencing and mass spectrometry. The typical ddPCR workflow comprises 4 key steps: (a) preparation of the PCR reaction, (b) droplet generation (DG), (c) endpoint PCR, and (d) droplet analysis. The process of DG is a critical step of the workflow because this step is considered to be most prone to droplet loss (1). It is thus important to recognize that a loss of droplets during DG will affect the number of droplets recoverable for droplet analysis in an optimized assay. Consequently, lower droplet numbers can lead to a loss of analytical sensitivity and may confound the evaluation of samples for molecular targets that are present in very low concentrations. To date, informative measures to evaluate DG performance within the ddPCR workflow remain unaddressed. The Levey–Jennings chart in conjunction with the "Westgard rules" is the staple of quality management in clinical chemistry measurement systems. Here, we outline an approach that uses the Levey–Jennings chart and a set of adapted Westgard rules to monitor DG performance in …
- Published
- 2018
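A sketch of the adaptation this letter describes: Levey–Jennings-style monitoring of per-well droplet counts with two common Westgard rules. The specific rules and thresholds used here (1-3s and 2-2s) are illustrative choices, not necessarily the authors' adapted set.

```python
def westgard_flags(droplet_counts, mean, sd):
    """Apply two common Westgard rules to per-well droplet counts:
    1-3s (a single count beyond ±3 SD) and 2-2s (two consecutive counts
    beyond ±2 SD on the same side). `mean` and `sd` would come from
    historical in-control droplet-generation runs."""
    z = [(c - mean) / sd for c in droplet_counts]
    flags = []
    for i, zi in enumerate(z):
        if abs(zi) > 3:
            flags.append((i, "1-3s"))       # e.g., gross droplet loss in one well
        if i > 0 and z[i - 1] < -2 and zi < -2:
            flags.append((i, "2-2s"))       # persistent downward shift in counts
        if i > 0 and z[i - 1] > 2 and zi > 2:
            flags.append((i, "2-2s"))
    return flags
```

Plotting the same z-values on a Levey–Jennings chart gives the visual counterpart of these rule evaluations.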
18. Stronger Together: Aggregated Z-values of Traditional Quality Control Measurements and Patient Medians Improve Detection of Biases
- Author
- Markus Thaler, Andreas Bietenbeck, Peter B. Luppa, and Frank Klawonn
- Subjects
- Quality Control, Clinical Biochemistry, Control (management), Sample (statistics), Software, Bias, Statistics, Humans, Quality (business), Median, Biochemistry (medical), Aggregate (data warehouse), Sample size determination, Chemistry, Clinical, Bias detection, Laboratories, Algorithms
- Abstract
BACKGROUND In clinical chemistry, quality control (QC) often relies on measurements of control samples, but limitations, such as a lack of commutability, compromise the ability of such measurements to detect out-of-control situations. Medians of patient results have also been used for QC purposes, but it may be difficult to distinguish changes observed in the patient population from analytical errors. This study aims to combine traditional control measurements and patient medians to facilitate detection of biases. METHODS The software package "rSimLab" was developed to simulate measurements of 5 analytes. Internal QC measurements and patient medians were assessed for detecting impermissible biases. Various control rules combined these parameters. A Westgard-like algorithm was evaluated, and new rules that aggregate Z-values of QC parameters were proposed. RESULTS Mathematical approximations estimated the required sample size for calculating meaningful patient medians. The appropriate number was highly dependent on the ratio of the spread of sample values to their center. Instead of applying a threshold to each QC parameter separately like the Westgard algorithm, the proposed aggregation of Z-values averaged these parameters. This behavior was found beneficial, as a bias could affect QC parameters unequally, resulting in differences between their Z-transformed values. In our simulations, control rules tended to outperform the simple QC parameters they combined. The inclusion of patient medians substantially improved bias detection for some analytes. CONCLUSIONS Patient result medians can supplement traditional QC, and aggregations of Z-values are novel and beneficial tools for QC strategies to detect biases.
- Published
- 2017
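The core aggregation idea, averaging the Z-values of a traditional control measurement and a patient-result median instead of thresholding each separately, can be sketched briefly. The targets and SDs below are assumed in-control parameters for illustration, not values from the study.

```python
def aggregated_z(qc_value, patient_median, qc_target, qc_sd, med_target, med_sd):
    """Combine two QC parameters by averaging their Z-values: each is
    standardized against its own in-control distribution, so a bias that
    shifts both reinforces the combined statistic even when neither Z
    alone crosses a per-parameter threshold."""
    z_qc = (qc_value - qc_target) / qc_sd
    z_median = (patient_median - med_target) / med_sd
    return (z_qc + z_median) / 2.0

# Modest, concordant deviations (Z = 1.0 and Z = 1.5) give a combined
# statistic of 1.25; a Westgard-style per-parameter rule at 2 SD would
# have flagged neither parameter on its own.
print(aggregated_z(102.0, 5.3, qc_target=100.0, qc_sd=2.0, med_target=5.0, med_sd=0.2))
```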
19. Laboratory-Developed Tests in the New European Union 2017/746 Regulation: Opportunities and Risks.
- Author
- Vogeser M, Brüggemann M, Lennerz J, Stenzinger A, and Gassner UM
- Subjects
- Humans, European Union
- Published
- 2021
- Full Text
- View/download PDF
20. Generation of Highly Biomimetic Quality Control Materials for Noninvasive Prenatal Testing Based on Enzymatic Digestion of Matched Mother-Child Cell Lines
- Author
- Peng Gao, Jinming Li, Zhang Rui, Ping Tan, Ziyang Li, and Jiansheng Ding
- Subjects
- Quality Control, Massive parallel sequencing, Enzymatic digestion, Concordance, Biochemistry (medical), Clinical Biochemistry, Mothers, Computational biology, Biology, Internal quality, Human plasma, Biomimetics, Pregnancy, Prenatal Diagnosis, Proficiency testing, Humans, Micrococcal Nuclease, Female, Child, Cell-Free Nucleic Acids
- Abstract
BACKGROUND Noninvasive prenatal testing (NIPT) based on cell-free DNA (cfDNA) is widely used. However, biomimetic quality control materials that have properties identical to clinical samples and that are applicable to a wide range of methodologies are still not available to support assay development, internal quality control, and proficiency testing. METHODS We developed a set of dual enzyme-digested NIPT quality control materials (DENQCMs) that comprise simulated human plasma and mixtures of mother cell line-derived cfDNA based on DNA fragmentation factor digestion (D-cfDNA) and the matched child cell line-derived cfDNA based on micrococcal nuclease digestion (M-cfDNA). Serially diluted samples positive for trisomies 21, 18, and 13 were included in the materials. To evaluate the biomimetics, DENQCMs were analyzed using random massively parallel sequencing (MPS), targeted MPS, and imaging single DNA molecule methods, and the estimated fetal fractions (FFs) were compared with expected FFs. Genome-wide analysis of cfDNA fragmentation patterns was performed to confirm their biological characteristics. RESULTS The genetic status of each DENQCM was correctly detected by 4 routine NIPT assays for the samples with FFs >5%. The chromosome Y-based and single-nucleotide polymorphism-based estimations of FFs were linearly related to those expected FFs. The MPS results exhibited a concordance of quality metrics between DENQCMs and maternal plasma, such as GC contents of cfDNA and unique read ratios. CONCLUSIONS The DENQCMs are universally applicable for different platforms. We propose DENQCMs as an approach to produce matched maternal and fetal cfDNA that will be suitable for the preparation of quality control materials for NIPT.
- Published
- 2018
21. Regression-Adjusted Real-Time Quality Control.
- Author
- Duan X, Wang B, Zhu J, Zhang C, Jiang W, Zhou J, Shao W, Zhao Y, Yu Q, Lei L, Yiu KL, Chin KT, Pan B, and Guo W
- Subjects
- Humans, Quality Control, Research Design, Algorithms, Laboratories
- Abstract
Background: Patient-based real-time quality control (PBRTQC) has gained increasing attention in the field of clinical laboratory management in recent years. Despite the many upsides that PBRTQC brings to the laboratory management system, it has been questioned for its performance and practical applicability for some analytes. This study introduces an extended method, regression-adjusted real-time quality control (RARTQC), to improve the performance of real-time quality control protocols., Methods: In contrast to the PBRTQC, RARTQC has an additional regression adjustment step before using a common statistical process control algorithm, such as the moving average, to decide whether an analytical error exists. We used all patient test results of 4 analytes in 2019 from Zhongshan Hospital, Fudan University, to compare the performance of the 2 frameworks. Three types of analytical error were added in the study to compare the performance of PBRTQC and RARTQC protocols: constant, random, and proportional errors. The false alarm rate and error detection charts were used to assess the protocols., Results: The study showed that RARTQC outperformed PBRTQC. RARTQC, compared with the PBRTQC, improved the trimmed average number of patients affected before detection (tANPed) at total allowable error by about 50% for both constant and proportional errors., Conclusions: The regression step in the RARTQC framework removes autocorrelation in the test results, allows researchers to add additional variables, and improves data transformation. RARTQC is a powerful framework for real-time quality control research., (© American Association for Clinical Chemistry 2021. All rights reserved. For permissions, please email: journals.permissions@oup.com.)
- Published
- 2021
- Full Text
- View/download PDF
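The regression-adjustment idea in the abstract above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the choice of covariates, the block size of 50, and the 3-sigma limits are assumptions, and the published framework covers richer error models and data transformations.

```python
import numpy as np

def rartqc_flags(results, covariates, block=50, z=3.0):
    """Toy regression-adjusted real-time QC (RARTQC) sketch.

    Fit an ordinary least-squares regression of patient results on
    covariates (e.g., a prior result or patient characteristics), then
    run a simple moving average over the residuals.  All parameter
    choices here (block size, 3-sigma limits) are illustrative.
    """
    X = np.column_stack([np.ones(len(results)), covariates])
    beta, *_ = np.linalg.lstsq(X, results, rcond=None)
    resid = results - X @ beta                 # autocorrelation-reduced series
    sd = resid.std(ddof=1)
    # moving average of residuals, compared against z-sigma limits
    ma = np.convolve(resid, np.ones(block) / block, mode="valid")
    limit = z * sd / np.sqrt(block)            # standard error of a block mean
    return np.abs(ma) > limit                  # True where an alarm fires

rng = np.random.default_rng(0)
cov = rng.normal(size=1000)
res = 2.0 * cov + rng.normal(size=1000)        # in-control patient data
print(rartqc_flags(res, cov).mean())           # alarm rate should be near 0
```

Because the regression explains the covariate-driven spread, the residual series is tighter than the raw results, which is where the error-detection gain comes from.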
22. Pushing Patient-Based Quality Control Forward through Regression.
- Author
-
Cervinski MA
- Subjects
- Humans, Quality Control, Muscle, Skeletal
- Published
- 2021
- Full Text
- View/download PDF
23. Average of Patient Deltas: Patient-Based Quality Control Utilizing the Mean Within-Patient Analyte Variation.
- Author
-
Cembrowski GS, Xu Q, and Cervinski MA
- Subjects
- Algorithms, Humans, Potassium, Quality Control, Laboratories, Sodium
- Abstract
Background: Because traditional QC is discontinuous, laboratories use additional strategies to detect systematic error. One strategy, the delta check, is best suited to detect large systematic error. The moving average (MA) monitors the mean patient analyte value but cannot equitably detect systematic error in skewed distributions. Our study combines delta check and MA to develop an average of deltas (AoD) strategy that monitors the mean delta of consecutive, intrapatient results., Methods: Arrays of the differences (delta) between paired patient results collected within 20-28 h of each other were generated from historical data. AoD protocols were developed using a simulated annealing algorithm in MATLAB (MathWorks) to select the number of patient delta values to average and truncation limits to eliminate large deltas. We simulated systematic error by adding bias to arrays for plasma albumin, alanine aminotransferase, alkaline phosphatase, amylase, aspartate aminotransferase, bicarbonate, bilirubin (total and direct), calcium, chloride, creatinine, lipase, sodium, phosphorus, potassium, total protein, and magnesium. The average number of deltas to detection (ANDED) was then calculated in response to induced systematic error., Results: ANDED varied by combination of assay and AoD protocol. Errors in albumin, lipase, and total protein were detected with a mean of 6 delta pairs. The highest ANDED was calcium, with a positive 0.6-mg/dL shift detected with an ANDED of 75. However, a negative 0.6-mg/dL calcium shift was detected with an ANDED of 25., Conclusions: AoD detects systematic error with relatively few paired patient samples and is a patient-based QC technique that will enhance error detection., (© American Association for Clinical Chemistry 2021. All rights reserved. For permissions, please email: journals.permissions@oup.com.)
- Published
- 2021
- Full Text
- View/download PDF
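The average-of-deltas (AoD) procedure described above can be sketched as follows. The numeric settings (20 deltas per block, 3-SD truncation, 3-sigma limits) are invented for illustration; the paper optimizes them per analyte with simulated annealing.

```python
import numpy as np

def average_of_deltas(paired_now, paired_prev, n_deltas=20, trunc=3.0):
    """Illustrative average-of-deltas (AoD) check.

    Deltas are differences between a patient's current and previous
    result.  Deltas beyond `trunc` robust SDs of the delta median are
    truncated (discarded), and the mean of each consecutive group of
    `n_deltas` deltas is compared with limits derived from the
    in-control delta spread.  All numeric settings are assumptions.
    """
    deltas = np.asarray(paired_now) - np.asarray(paired_prev)
    med = np.median(deltas)
    sd = np.median(np.abs(deltas - med)) * 1.4826       # robust SD via MAD
    kept = deltas[np.abs(deltas - med) <= trunc * sd]
    n_blocks = len(kept) // n_deltas
    blocks = kept[: n_blocks * n_deltas].reshape(n_blocks, n_deltas)
    aod = blocks.mean(axis=1)
    limit = 3.0 * sd / np.sqrt(n_deltas)                # 3-sigma limit on a block mean
    return aod, np.abs(aod) > limit

rng = np.random.default_rng(1)
prev = rng.normal(140, 3, size=400)                     # e.g., sodium, mmol/L
now = prev + rng.normal(0, 1, size=400)                 # in-control within-patient deltas
aod, alarms = average_of_deltas(now, prev)
print(alarms.sum())                                     # few or no alarms when in control
```

Averaging deltas rather than raw results sidesteps the skewed-distribution problem the abstract raises, because within-patient differences center near zero regardless of the population distribution.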
24. Moving Average for Continuous Quality Control: Time to Move to Implementation in Daily Practice?
- Author
-
Hans Kemperman and Huub H. van Rossum
- Subjects
- Quality Control, Biochemistry (medical), Clinical Biochemistry, Control (management), Reference Standards, Industrial engineering, Chemistry Techniques, Analytical, Visual inspection, Hematology analyzer, Moving average, Daily practice, Humans, Bias detection, Medicine, Quality (business), Laboratories, Quality assurance
To the Editor: Recently, Ng et al. described a new method for optimization of moving average (MA) QC procedures (1). The accompanying editorial mentioned that, during the last 50 years, slow but continuous improvements have been made in the understanding and methodology of MA in the move toward continuous analytical quality assurance (2). Although these improvements are being made, general implementation of MA for continuous QC in clinical laboratories has failed, and many laboratories are struggling with the implementation and application of MA QC. In this letter, we address several steps that we consider to be important to support a more general implementation of MA as a continuous QC instrument in medical laboratories. Most improvements that really affected the use of MA in clinical laboratories originate from the 1970s and 1980s. For example, the algorithms described by Bull et al. in 1974 are still the basis of the application of MA today in most, if not all, hematology analyzers (3). Interestingly, Bull et al. stated that, because their findings were based on visual inspection of MA patterns, future MA research should focus on developing objective measures of MA performance (3). Recently, 2 methods have been described that allow more objective and realistic insight into MA performance. Ng et al. (1) described an MA optimization method that used the average number of patient samples affected until error detection (ANPed), and we reported the use of bias detection curves and …
- Published
- 2017
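The ANPed measure mentioned in the letter above (the average number of patient samples affected until error detection) is straightforward to estimate by simulation. The scaffold below is an assumption for illustration; Ng et al. optimize MA settings against ANPed, but their algorithms and settings are not reproduced here.

```python
import numpy as np

def anped(ma_alarm, error_size, n_sim=200, n_patients=2000, seed=0):
    """Estimate ANPed for a moving-average QC rule by simulation.

    `ma_alarm(series)` returns the index of the sample at which the
    first alarm fires, or None.  An error of `error_size` (in SD units
    of standardized patient results) is present from sample 0 onward.
    """
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(n_sim):
        series = rng.normal(0.0, 1.0, n_patients) + error_size
        idx = ma_alarm(series)
        counts.append(n_patients if idx is None else idx + 1)
    return float(np.mean(counts))

def simple_ma_alarm(series, window=20, limit=0.8):
    """Alarm when the moving average of `window` results exceeds ±limit;
    window and limit are illustrative, not optimized values."""
    ma = np.convolve(series, np.ones(window) / window, mode="valid")
    hits = np.flatnonzero(np.abs(ma) > limit)
    return None if hits.size == 0 else int(hits[0]) + window - 1

print(anped(simple_ma_alarm, error_size=1.0))   # larger errors give smaller ANPed
```

Sweeping `window` and `limit` while tracking ANPed (at a fixed false-alarm rate) is the kind of objective MA performance measure the letter argues Bull et al. called for.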
25. Preanalytical Variables Affecting the Integrity of Human Biospecimens in Biobanking
- Author
-
Jim Vaught and Christina Ellervik
- Subjects
- Quality Control, Research literature, Pathology, Biomedical Research, Future studies, Biochemistry (medical), Clinical Biochemistry, Urine, Biobank, Specimen Handling, Blood, Chemistry, Clinical, Humans, Personalized medicine, Saliva, Intensive care medicine, Biological Specimen Banks
BACKGROUND Most errors in a clinical chemistry laboratory are due to preanalytical errors. Preanalytical variability of biospecimens can have significant effects on downstream analyses, and controlling such variables is therefore fundamental for the future use of biospecimens in personalized medicine for diagnostic or prognostic purposes. CONTENT The focus of this review is to examine the preanalytical variables that affect human biospecimen integrity in biobanking, with a special focus on blood, saliva, and urine. Cost efficiency is discussed in relation to these issues. SUMMARY The quality of a study will depend on the integrity of the biospecimens. Preanalytical preparations should be planned with consideration of the effect on downstream analyses. Currently such preanalytical variables are not routinely documented in the biospecimen research literature. Future studies using biobanked biospecimens should describe in detail the preanalytical handling of biospecimens and analyze and interpret the results with regard to the effects of these variables.
- Published
- 2015
26. Evaluation of Preanalytical Conditions and Implementation of Quality Control Steps for Reliable Gene Expression and DNA Methylation Analyses in Liquid Biopsies
- Author
-
Areti Strati, Maria Chimonidou, Eleni Tzanikou, Martha Zavridou, Evi Lianidou, and Sofia Mastoraki
- Subjects
- Quality Control, Clinical Biochemistry, Sensitivity and Specificity, Circulating tumor cell, Complementary DNA, Gene expression, Biomarkers, Tumor, SOXF Transcription Factors, Humans, Liquid Biopsy, Regulation of gene expression, Whole Genome Amplification, Reverse Transcriptase Polymerase Chain Reaction, Biochemistry (medical), DNA Methylation, Neoplastic Cells, Circulating, Molecular biology, Gene Expression Regulation, Neoplastic, Repressor Proteins, MCF-7 Cells, Cell-Free Nucleic Acids, DNA
BACKGROUND Liquid biopsy provides important information for the prognosis and treatment of cancer patients. In this study, we evaluated the effects of preanalytical conditions on gene expression and DNA methylation analyses in liquid biopsies. METHODS We tested the stability of circulating tumor cell (CTC) messenger RNA by spiking MCF-7 cells in healthy donor peripheral blood (PB) drawn into 6 collection-tube types with various storage conditions. CTCs were enriched based on epithelial cell adhesion molecule positivity, and RNA was isolated followed by cDNA synthesis. Gene expression was quantified using RT-quantitative PCR for CK19 and B2M. We evaluated the stability of DNA methylation in plasma under different storage conditions by spiking DNA isolated from MCF-7 cells in healthy donor plasma. Two commercially available sodium bisulfite (SB)-conversion kits were compared, in combination with whole genome amplification (WGA), to evaluate the stability of SB-converted DNA. SB-converted DNA samples were analyzed by real-time methylation-specific PCR (MSP) for ACTB, SOX17, and BRMS1. Quality control was assessed using Levey–Jennings graphs. RESULTS RNA-based analysis in CTCs is severely impeded by the preservatives used in many PB collection tubes (except for EDTA), as well as by time to analysis. Plasma and SB-converted DNA samples are stable and can be used safely for MSP when kept at −80 °C. Downstream WGA of SB-converted DNA compensated for the limited amount of available sample in liquid biopsies. CONCLUSIONS Standardization of preanalytical conditions and implementation of quality control steps is extremely important for reliable liquid biopsy analysis, and a prerequisite for routine applications in the clinic.
- Published
- 2018
27. Quality Control of Serum and Plasma by Quantification of (4E,14Z)-Sphingadienine-C18-1-Phosphate Uncovers Common Preanalytical Errors During Handling of Whole Blood
- Author
-
Peiyuan Yin, Miriam Hoene, Jakob S. Hansen, Peter Plomgaard, Christos T. Nakas, Andreas Fritsche, Xinyu Liu, Andreas Peter, Cora Weigert, Louise Fritsche, Jens Hudemann, Guowang Xu, Hans-Ulrich Häring, Michael Haap, Maimuna Mendy, Xiaolin Wang, Andreas M. Niess, and Rainer Lehmann
- Subjects
- Quality Control, Operating procedures, Lactic acid blood, Sample (material), Clinical Biochemistry, Physiology, Phosphates, Specimen Handling, Sphingosine, Medicine, Humans, Lactic Acid, Whole blood, Plasma samples, Quality assessment, Biochemistry (medical), Reproducibility of Results, Research findings, Ethanolamines, Lysophospholipids, Biomarkers
BACKGROUND Nonadherence to standard operating procedures (SOPs) during handling and processing of whole blood is one of the most frequent causes affecting the quality of serum and plasma. Yet, the quality of blood samples is of the utmost importance for reliable, conclusive research findings, valid diagnostics, and appropriate therapeutic decisions. METHODS UHPLC-MS-driven nontargeted metabolomics was applied to identify biomarkers that reflected time to processing of blood samples, and a targeted UHPLC-MS analysis was used to quantify and validate these biomarkers. RESULTS We found that (4E,14Z)-sphingadienine-C18-1-phosphate (S1P-d18:2) was suitable for the reliable assessment of the pronounced changes in the quality of serum and plasma caused by errors in the phase between collection and centrifugation of whole blood samples. We rigorously validated S1P-d18:2, which included the use of practicality tests on >1400 randomly selected serum and plasma samples that were originally collected during single- and multicenter trials and then stored in 11 biobanks in 3 countries. Neither life-threatening disease states nor strenuous metabolic challenges (i.e., high-intensity exercise) affected the concentration of S1P-d18:2. Cutoff values for sample assessment were defined (plasma, ≤0.085 μg/mL; serum, ≤0.154 μg/mL). CONCLUSIONS Unbiased valid monitoring to check for adherence to SOP-dictated time for processing to plasma or serum and/or time to storage of whole blood at 4 °C is now feasible. This novel quality assessment step could enable scientists to uncover common preanalytical errors, allowing for identification of serum and plasma samples that should be excluded from certain investigations. It should also allow control of samples before long-term storage in biobanks.
- Published
- 2017
28. Planning Risk-Based SQC Schedules for Bracketed Operation of Continuous Production Analyzers
- Author
-
Sten A. Westgard, James O. Westgard, and Hassan Bayat
- Subjects
- Quality Control, Risk, Schedule, Computer science, Clinical Biochemistry, Continuous production, Production (economics), Humans, Quality (business), Bracketing, Automation, Laboratory, Event (computing), Biochemistry (medical), Planning Techniques, Models, Theoretical, Statistical process control, Automation, Reliability engineering, Laboratories
BACKGROUND To minimize patient risk, “bracketed” statistical quality control (SQC) is recommended in the new CLSI guidelines for SQC (C24-Ed4). Bracketed SQC requires that a QC event both precedes and follows (brackets) a group of patient samples. In optimizing a QC schedule, the frequency of QC or run size becomes an important planning consideration to maintain quality and also facilitate responsive reporting of results from continuous operation of high production analytic systems. METHODS Different plans for optimizing a bracketed SQC schedule were investigated on the basis of Parvin's model for patient risk and CLSI C24-Ed4's recommendations for establishing QC schedules. A Sigma-metric run size nomogram was used to evaluate different QC schedules for processes of different sigma performance. RESULTS For high Sigma performance, an effective SQC approach is to employ a multistage QC procedure utilizing a “startup” design at the beginning of production and a “monitor” design periodically throughout production. Example QC schedules are illustrated for applications with measurement procedures having 6-σ, 5-σ, and 4-σ performance. CONCLUSIONS Continuous production analyzers that demonstrate high σ performance can be effectively controlled with multistage SQC designs that employ a startup QC event followed by periodic monitoring or bracketing QC events. Such designs can be optimized to minimize the risk of harm to patients.
- Published
- 2017
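The Sigma-metric that drives the run-size nomogram above is conventionally computed from the allowable total error, the bias, and the imprecision of the measurement procedure; the nomogram itself (mapping sigma to run size) is not reproduced here, and the example numbers are illustrative.

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma-metric used in risk-based SQC planning:
    sigma = (allowable total error - |bias|) / imprecision, all in %."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# e.g., TEa 10%, bias 1%, CV 1.5%  ->  6-sigma performance,
# the regime where a startup + periodic-monitor design suffices
print(round(sigma_metric(10.0, 1.0, 1.5), 1))   # 6.0
```

Higher sigma means more analytical headroom, so fewer and simpler bracketing QC events are needed to keep patient risk low; at 4-sigma the same nomogram demands much tighter schedules.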
29. Limitations of Accelerated Stability Model Based on the Arrhenius Equation for Shelf Life Estimation of In Vitro Diagnostic Products.
- Author
-
Ebrahim A, DeVore K, and Fischer T
- Subjects
- Humans, Quality Control, Temperature, Drug Stability, Models, Chemical
- Abstract
Background: An accelerated stability model based on the Arrhenius Equation can be used to estimate stability of diagnostic reagents. Here we review 3 examples in which the model does not accurately predict the stability of diagnostic reagents., Methods: We prepared several pilot lots of quality control materials containing fructosamine, BNP, and HbA1c in human whole blood and serum matrices and performed accelerated stability studies at increased temperatures (5 °C to 35 °C) and real-time stability studies at the recommended storage temperature (-10 °C to -20 °C) for several analytes in quality control materials., Results: We observed that the stability predictions obtained from the accelerated stability studies were longer in 2 instances and much shorter in another than those observed from the real-time stability studies., Conclusions: Due to discrepancies between the stability results from accelerated stability studies and those from the real-time stability studies, we stress the need for caution when reagent manufacturers use the Arrhenius model and recommend that the technical groups and committees assigned to revise CLSI and ISO stability documents highlight the limitations of the accelerated stability model and include more guidance and direction on how and when to use the accelerated stability model., (© American Association for Clinical Chemistry 2020. All rights reserved. For permissions, please email: journals.permissions@oup.com.)
- Published
- 2021
- Full Text
- View/download PDF
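For reference, this is the extrapolation the abstract above criticizes: the standard Arrhenius scaling of a degradation rate from an elevated study temperature to the storage temperature. The activation energy, rates, and zero-order loss assumption here are all invented for illustration; the paper's point is precisely that such extrapolations can be far off, especially across frozen storage.

```python
import math

def arrhenius_rate(k_ref, t_ref_c, t_c, ea_kj_per_mol):
    """Scale a degradation rate from a reference temperature to another
    using the Arrhenius equation k = A * exp(-Ea / (R*T))."""
    R = 8.314  # J/(mol*K)
    t_ref, t = t_ref_c + 273.15, t_c + 273.15
    return k_ref * math.exp(-ea_kj_per_mol * 1000.0 / R * (1.0 / t - 1.0 / t_ref))

# Accelerated study at 35 C loses 1 %/day; predict the rate at -20 C storage.
# Ea = 80 kJ/mol is a made-up value, not from the paper.
k35 = 0.01
k_minus20 = arrhenius_rate(k35, 35.0, -20.0, ea_kj_per_mol=80.0)
shelf_life_days = 0.10 / k_minus20   # days to 10% loss, zero-order assumption
print(shelf_life_days)
```

The model presumes a single temperature-independent degradation mechanism; phase changes at freezing temperatures or matrix effects break that assumption, which is how the real-time and accelerated estimates diverge.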
30. Biomarkers in the Pharmaceutical Industry
- Author
-
Jeffrey E Ming, Susan Richards, John A. Wagner, Scott D. Patterson, Michael E. Burczynski, Andrew S. Plump, and Omar F Laterza
- Subjects
- Quality Control, Biochemistry (medical), Clinical Biochemistry, Medical laboratory, Pharmacology, Precision medicine, Food and drug administration, Pharmaceutical Preparations, Drug development, Humans, Biomarker (medicine), Intensive care medicine, Biomarkers, Pharmaceutical industry
Although biomarkers have been the essence of laboratory medicine since its inception, they are relatively new tools in the development of pharmaceutical compounds. The main utility of biomarkers for pharmaceutical companies has been in making drug development a more efficient and cost-effective process. This is primarily the case because of the drastic and perhaps alarming increase seen in the development cost of new drugs in the last few decades, which has been accompanied by a decrease in the number of drugs obtaining regulatory approval. Furthermore, the Critical Path Initiative of the US Food and Drug Administration has identified biomarkers as important tools that may help to correct this imbalance. As will become apparent in the discussion below from 4 biomarker groups in leading companies, a large number of these biomarkers have been successfully used for internal decision-making, yet they may have never left the pharmaceutical space. At present, a growing number of biomarkers are being codeveloped along with the therapeutic compound as companion diagnostics; this is part of the Precision Medicine paradigm of delivering the right medicine to the right patient. This development will undoubtedly move biomarkers beyond the pharmaceutical space and into the clinical laboratory, thus influencing the practice of laboratory medicine. It will also become apparent that biomarkers developed by the pharmaceutical industry must be evaluated and validated using rigorous procedures. Clinical chemists can certainly assist in this endeavor. This may, in turn, lead to a larger number of useful biomarkers, not only in drug development but also in the clinical laboratory, as companion diagnostics. Biomarkers are extensively used throughout the process of drug development. In your experience, what are the most prominent or illuminating examples of how biomarkers directly impacted the development of a drug?
Sanofi team: Biomarkers have played critical roles in the development and …
- Published
- 2015
31. Planning Statistical Quality Control to Minimize Patient Risk: It's About Time
- Author
-
Curtis A. Parvin
- Subjects
- Quality Control, Measure (data warehouse), Clinical Laboratory Techniques, Computer science, Patient risk, Biochemistry (medical), Clinical Biochemistry, Sample (statistics), Guideline, Statistical process control, Reliability engineering, Probability of error, Humans, Quality (business), False rejection
The purpose of statistical quality control (SQC) in the clinical laboratory is to assure that reported patient results are fit for their intended use, not only when a measurement procedure is operating in its stable in-control state, but also when out-of-control conditions occur. The value of quality-control principles and practices in the laboratory has been well recognized and appreciated for many decades. The Clinical and Laboratory Standards Institute (CLSI; then known as the NCCLS) published its first approved guideline on statistical quality-control principles and definitions for quantitative measurement procedures in 1991 (1). The fourth edition of the guideline appeared last year (2). For many years SQC design primarily involved choosing how many quality control (QC) samples to measure and what QC rules to apply to the QC results. This approach originated in an era when batch testing was common. QC samples were placed in the batch along with patient specimens. The QC sample results were used to decide if the patient results in the batch were acceptable. The goal was for the QC rule to have a low probability of rejection when the batch was in control (probability of false rejection, Pfr) and a high probability of rejection when the batch was out-of-control (probability of error detection, Ped) (3). When continuous-production analyzers became prevalent in the laboratory, a new QC planning question arose: When should QC samples be measured? In batch testing, the answer was to measure QC samples with each batch. However, with continuous-production analyzers a link between QC results and patient results within a batch no longer exists. Instead, QC results simply reflect the current state of the measurement procedure at the time they are measured. Unfortunately, the traditional QC performance measures, …
- Published
- 2018
32. Measurement of Hematocrit in Dried Blood Spots Using Near-Infrared Spectroscopy: Robust, Fast, and Nondestructive
- Author
-
Erik M. van Maarseveen, Dennis Hekman, Mohsin El Amrani, Eric C Diemel, and Marlies Oostendorp
- Subjects
- Adult, Male, Quality Control, Analyte, Adolescent, Clinical Biochemistry, Blood viscosity, Analytical chemistry, Blood volume, Hematocrit, Sensitivity and Specificity, Young Adult, Humans, Child, Whole blood, Dried Blood Spot Testing, Aged, Aged, 80 and over, Spectroscopy, Near-Infrared, Chemistry, Biochemistry (medical), Infant, Newborn, Infant, Middle Aged, Dried blood spot, Therapeutic drug monitoring, Child, Preschool, Female, Biomedical engineering
To the Editor: Dried blood spot (DBS) collection is an established sampling method for newborn screening and is increasingly used in other domains, including therapeutic drug monitoring, toxicology, microbiology, and genetics. Advantages of DBS sampling are the low blood volume requirements, minimally invasive collection, favorable stability of many analytes, and the potential of patient self-sampling at home. Importantly, the introduction of more sensitive techniques in the clinical laboratory has paved the way for analysis of small volume DBS samples in clinical chemistry (1). A well-recognized concern for accurate analyte quantification using DBS is the hematocrit (Ht) effect (2). First, Ht influences blood viscosity and thereby fluidity of blood on the paper. This can lead to different volumes of blood in equally sized punches and to varying analyte concentrations within the spot, resulting in a potential bias. Second, Ht may influence analyte extraction from the paper. Third, many analytes in clinical chemistry are currently measured in plasma and/or serum, whereas DBS samples are whole blood lysates. Therefore, for many compounds, Ht is necessary to accurately convert DBS measurements to plasma/serum values. Previously proposed techniques to …
- Published
- 2016
33. Selecting Statistical Quality Control Procedures for Limiting the Impact of Increases in Analytical Random Error on Patient Safety
- Author
-
Martín Yago
- Subjects
- Quality Control, Models, Statistical, Biochemistry (medical), Clinical Biochemistry, Statistical model, Statistical process control, Patient Safety, Chemistry, Clinical, Random error, Statistics, Humans, Limit (mathematics), Selection (genetic algorithm), Risk management, Event (probability theory)
BACKGROUND QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can be easily made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. METHODS A statistical model was used to construct charts for the 1ks and X/χ2 rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error with the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. RESULTS 1ks rules are simple, all-around rules. Their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X/χ2 rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. CONCLUSIONS Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision.
- Published
- 2016
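The two rule families compared in the abstract above can be sketched directly. The critical value shown is a textbook chi-square table value for 4 controls per QC event at alpha = 0.01, not a setting from the paper; the paper's charts map these rules to patient-risk figures that are not reproduced here.

```python
def rule_1ks(z_scores, s=3.0):
    """1ks rule: reject the run if any of the k control z-scores
    exceeds +/- s SD (here the 1:3s case, s = 3)."""
    return any(abs(z) > s for z in z_scores)

def rule_chi2(z_scores, critical=13.277):
    """X/chi2-type rule: reject if the sum of squared control z-scores
    exceeds a chi-square critical value.  13.277 is the standard table
    value for 4 degrees of freedom at alpha = 0.01, i.e., 4 controls
    per QC event; an illustrative choice."""
    return sum(z * z for z in z_scores) > critical

# A doubling of imprecision inflates every z-score; the chi2 statistic
# accumulates this across controls, which is why X/chi2 rules gain
# power as the number of controls per QC event grows.
in_control = [1.2, -0.8, 2.1, 0.5]
doubled_sd = [2.4, -1.6, 4.2, 1.0]
print(rule_1ks(in_control), rule_chi2(in_control))     # False False
print(rule_1ks(doubled_sd), rule_chi2(doubled_sd))     # True True
```

This illustrates the abstract's contrast: the 1ks rule reacts only to the single largest deviation, while the chi-square statistic pools evidence of inflated random error across all controls in the event.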
34. Continuous Improvement in Continuous Quality Control
- Author
-
Sterling T. Bennett
- Subjects
- Quality Control, Biochemistry (medical), Clinical Biochemistry, Control (management), Sample (statistics), Decision rule, Statistical process control, Quality Improvement, Test (assessment), Reliability engineering, Chart, Acceptance sampling, Medicine, Quality (business)
“Quality is never an accident. It is always the result of intelligent effort.” So observed John Ruskin, Victorian writer and critic of art, architecture, and society (1), unknowingly foreshadowing the emergence of quality science in the 20th century. Statistical QC came from Bell Telephone Laboratories, where Shewhart developed the statistical control chart in 1924, and Dodge and Romig pioneered statistical methods for acceptance sampling rather than 100% inspection of products. These beneficial techniques were not widespread until the manufacturing demands of World War II necessitated better control of product quality. With Deming's involvement, quality science was further developed in postwar Japanese manufacturing (2). Levey and Jennings introduced clinical laboratory statistical QC in 1950 (3). The Levey–Jennings chart remains a staple of laboratory QC. Development of stabilized control materials enhanced QC practices (4), and the testing of controls at fixed intervals became and remains the QC mainstay. Control-based QC has many strengths, including sensitivity to small changes in bias and ready availability of controls. Test intervals may be adjusted to account for assay stability, regulations or a laboratory's experience. Decision rules may be selected to maximize both sensitivity and specificity of QC procedures (5, 6). Weaknesses of control-based QC include the cost of materials and labor, scope limited to the analytical component, and matrix-effect masking of clinically relevant drift. Furthermore, a risk inherent in the intermittency of control-based QC is that analytical instability goes undetected until the next QC episode, while the laboratory releases clinically significant erroneous results. Ideally, QC would be continuous. 
In a limited sense, continuous QC exists for sample QC through delta checks, anion gap calculations, indices of hemolysis, icterus, lipemia, etc., and for analyzer performance through function checks, temperature sensors, automated reaction pattern analysis, etc. In this issue of Clinical Chemistry, Ng et …
- Published
- 2016
35. Quality Materials for Quality Assurance in the Analysis of Liquid Biopsy Samples
- Author
-
Jason C. H. Tsang and K.C. Allen Chan
- Subjects
- Quality Control, Clinical Biochemistry, Cell, Disease, Bioinformatics, Neoplasms, Humans, Epidermal growth factor receptor, Liquid Biopsy, Gene, Lung, Biochemistry (medical), Cancer, Mutation, Cancer research, Cell-Free Nucleic Acids, DNA
The analysis of circulating cell-free DNA for the detection of cancer-associated genetic and genomic changes, frequently called liquid biopsy, has become an important tool for the management of cancer patients. For example, the analysis of epidermal growth factor receptor (EGFR) mutations in the plasma of cancer patients suffering from non-small cell lung cancers has been rapidly adopted as a routine clinical service for guiding the use of EGFR tyrosine kinase inhibitors in many countries (1). The noninvasive nature of liquid biopsy, as well as its rapid turn-around time, makes it a favorable choice for patients over tissue biopsy. In some patients, the EGFR mutation status can be determined by circulating tumoral DNA analysis before imaging-guided tumor biopsies are performed. Recently, whole-exome and targeted sequencing of cancer-associated genes from plasma DNA have also been performed in patients with advanced cancer to search for cancer-associated alterations that are potential therapeutic targets and to detect disease recurrence (2, 3). With the increasing utilization of liquid biopsies, there is a growing demand for quality control materials for circulating tumoral DNA analysis for platform comparison, assay development, internal quality control, and proficiency testing. However, the production of good-quality control materials that biologically resemble authentic circulating DNA in cancer patients is challenging and was the subject of a study by Zhang and colleagues reported in this issue of Clinical Chemistry (4). Circulating cell-free DNA exhibits a characteristic nucleosomal size profile predominantly comprising DNA fragments …
- Published
- 2017
36. Failure of Current Laboratory Protocols to Detect Lot-to-Lot Reagent Differences: Findings and Possible Solutions
- Author
-
Stefan K.G. Grebe, Sandra C. Bryant, Kristin La Fortune, James C. Boyd, David E. Bruns, and Alicia Algeciras-Schimnich
- Subjects
- Quality Control, Patient demographics, Biochemistry (medical), Clinical Biochemistry, Medical laboratory, Reagent Lot, Rapid identification, Young Adult, Reference Values, Luminescent Measurements, Humans, Medicine, Medical physics, Statistical analysis, Reagent Kits, Diagnostic, Insulin-Like Growth Factor I, Retrospective Studies
BACKGROUND Maintaining consistency of results over time is a challenge in laboratory medicine. Lot-to-lot reagent changes are a major threat to consistency of results. METHODS For the period October 2007 through July 2012, we reviewed lot validation data for each new lot of insulin-like growth factor 1 (IGF-1) reagents (Siemens Healthcare Diagnostics) at Mayo Clinic, Rochester, MN, and the University of Virginia, Charlottesville, VA. Analyses of discarded patient samples were used for comparison of lots. For the same period, we determined the distributions of reported patient results for each lot of reagents at the 2 institutions. RESULTS Lot-to-lot validation studies identified no reagent lot as significantly different from the preceding lot. By contrast, significant lot-to-lot changes were seen in the means and medians of 105 668 reported patient IGF-I results during the period. The frequency of increased results increased nearly 2-fold to a high of 17%, without detectable changes in the underlying patient demographics. Retrospective statistical analysis indicated that lot-to-lot comparison protocols were underpowered and that validation studies for this assay required testing >100 samples to achieve 90% power to detect reagent lots that would significantly alter the distributions of patient results. CONCLUSIONS The number of test samples required for adequate lot-to-lot validation protocols is high and may be prohibitively large, especially for low-volume or complex assays. Monitoring of the distributions of patient results has the potential to detect lot-to-lot inconsistencies relatively quickly. We recommend that manufacturers implement remote monitoring of patient results from analyzers in multiple institutions to allow rapid identification of between-lot result inconsistency.
- Published
- 2013
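The retrospective power analysis in the record above found that >100 samples were needed for 90% power to detect problematic reagent lots. A minimal stdlib-only sketch of that kind of calculation uses the normal-approximation sample-size formula for comparing two means (a simplification of the full t-test power computation; the 0.45-SD effect size below is purely illustrative, not a value from the study):

```python
import math

def z_quantile(p):
    """Inverse standard-normal CDF via bisection on erf (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def samples_per_group(effect_size, alpha=0.05, power=0.90):
    """Normal-approximation sample size per group for a two-sample
    comparison of means; effect_size is the lot-to-lot shift in SD units."""
    z_a = z_quantile(1 - alpha / 2)
    z_b = z_quantile(power)
    return math.ceil(2 * ((z_a + z_b) / effect_size) ** 2)

# An illustrative lot-to-lot shift of ~0.45 SD already needs >100 samples:
print(samples_per_group(0.45))  # → 104
```

This makes the paper's point concrete: for modest between-lot shifts, adequately powered validation protocols quickly exceed the sample counts most laboratories test.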
37. Patient-Based Real Time QC.
- Author
-
Badrick T, Bietenbeck A, Katayev A, van Rossum HH, Cervinski MA, and Ping Loh T
- Subjects
- Humans, Software, Clinical Laboratory Techniques statistics & numerical data, Patients statistics & numerical data, Quality Control
- Published
- 2020
- Full Text
- View/download PDF
38. Understanding Patient-Based Real-Time Quality Control Using Simulation Modeling.
- Author
-
Bietenbeck A, Cervinski MA, Katayev A, Loh TP, van Rossum HH, and Badrick T
- Subjects
- Bias, Computer Simulation, Humans, Internet, Algorithms, Clinical Laboratory Techniques statistics & numerical data, Quality Control
- Abstract
Background: Patient-based real-time quality control (PBRTQC) avoids limitations of traditional quality control methods based on the measurement of stabilized control samples. However, PBRTQC needs to be adapted to the individual laboratories with parameters such as algorithm, truncation, block size, and control limit., Methods: In a computer simulation, biases were added to real patient results of 10 analytes with diverse properties. Different PBRTQC methods were assessed on their ability to detect these biases early., Results: The simulation based on 460 000 historical patient measurements for each analyte revealed several recommendations for PBRTQC. Control limit calculation with "percentiles of daily extremes" led to effective limits and allowed specification of the percentage of days with false alarms. However, changes in measurement distribution easily increased false alarms. Box-Cox but not logarithmic transformation improved error detection. Winsorization of outlying values often led to a better performance than simple outlier removal. For medians and Harrell-Davis 50 percentile estimators (HD50s), no truncation was necessary. Block size influenced medians substantially and HD50s to a lesser extent. Conversely, a change of truncation limits affected means and exponentially moving averages more than a change of block sizes. A large spread of patient measurements impeded error detection. PBRTQC methods were not always able to detect an allowable bias within the simulated 1000 erroneous measurements. A web application was developed to estimate PBRTQC performance., Conclusions: Computer simulations can optimize PBRTQC but some parameters are generally superior and can be taken as default., (© American Association for Clinical Chemistry 2020. All rights reserved. For permissions, please email: journals.permissions@oup.com.)
- Published
- 2020
- Full Text
- View/download PDF
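The simulation study above compares PBRTQC parameters such as Winsorization, block size, and control limits. A minimal sketch of one such configuration, a Winsorized moving average, is shown below; the truncation limits, block size, and control limits are illustrative placeholders, not the tuned values from the paper:

```python
def winsorize(x, lo, hi):
    """Clamp outlying values to the truncation limits instead of discarding them."""
    return min(max(x, lo), hi)

def pbrtqc_moving_average(results, block_size, lo, hi, lcl, ucl):
    """Return the index at which a Winsorized moving average of patient
    results first leaves the control limits, or None if in control.
    All parameters are illustrative and would be tuned per analyte."""
    block = []
    for i, x in enumerate(results):
        block.append(winsorize(x, lo, hi))
        if len(block) > block_size:
            block.pop(0)
        if len(block) == block_size:
            mean = sum(block) / block_size
            if not (lcl <= mean <= ucl):
                return i
    return None

# An in-control series stays silent; a systematic +1.0 bias is flagged:
baseline = [5.0, 5.2, 4.9, 5.1, 5.0, 4.8, 5.1, 5.0]
shifted = [x + 1.0 for x in baseline]
print(pbrtqc_moving_average(baseline, 4, 3.0, 7.0, 4.5, 5.5))  # → None
print(pbrtqc_moving_average(shifted, 4, 3.0, 7.0, 4.5, 5.5))   # → 3
```

The paper's finding that Winsorization often beats simple outlier removal follows from the clamp: extreme values still contribute a bounded amount to the mean instead of being silently dropped.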
39. Molecular and Serological Assays for SARS-CoV-2: Insights from Genome and Clinical Characteristics.
- Author
-
Shi J, Han D, Zhang R, Li J, and Zhang R
- Subjects
- Antibodies, Viral blood, Antigens, Viral genetics, Antigens, Viral immunology, Betacoronavirus classification, Betacoronavirus isolation & purification, COVID-19, Coronavirus Infections transmission, Coronavirus Infections virology, Humans, Pandemics, Phylogeny, Pneumonia, Viral transmission, Pneumonia, Viral virology, Quality Control, RNA, Viral metabolism, RNA, Viral standards, Reverse Transcriptase Polymerase Chain Reaction methods, Reverse Transcriptase Polymerase Chain Reaction standards, SARS-CoV-2, Betacoronavirus genetics, Coronavirus Infections diagnosis, Genome, Viral, Pneumonia, Viral diagnosis
- Abstract
Background: The ongoing outbreak of the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has posed a challenge for worldwide public health. A reliable laboratory assay is essential both to confirm suspected patients and to exclude patients infected with other respiratory viruses, thereby facilitating the control of global outbreak scenarios., Content: In this review, we focus on the genomic, transmission, and clinical characteristics of SARS-CoV-2, and comprehensively summarize the principles and related details of assays for SARS-CoV-2. We also explore the quality assurance measures for these assays., Summary: SARS-CoV-2 has some unique gene sequences and specific transmission and clinical features that can inform the conduct of molecular and serological assays in many aspects, including the design of primers, the selection of specimens, and testing strategies at different disease stages. Appropriate quality assurance measures for molecular and serological assays are needed to maintain testing proficiency. Because serological assays have the potential to identify later stages of the infection and to confirm highly suspected cases with negative molecular assay results, a combination of these two assays is needed to achieve a reliable capacity to detect SARS-CoV-2., (© American Association for Clinical Chemistry 2020. All rights reserved. For permissions, please email: journals.permissions@oup.com.)
- Published
- 2020
- Full Text
- View/download PDF
40. Preparedness and Rapid Implementation of External Quality Assessment Helped Quickly Increase COVID-19 Testing Capacity in the Republic of Korea.
- Author
-
Sung H, Yoo CK, Han MG, Lee SW, Lee H, Chun S, Lee WG, and Min WK
- Subjects
- Betacoronavirus isolation & purification, COVID-19, Coronavirus Envelope Proteins, Coronavirus Infections virology, Government Agencies, Humans, Pandemics, Pneumonia, Viral virology, Quality Assurance, Health Care, Quality Control, RNA, Viral metabolism, RNA-Dependent RNA Polymerase genetics, Republic of Korea, SARS-CoV-2, Viral Envelope Proteins genetics, Betacoronavirus genetics, Coronavirus Infections diagnosis, Pneumonia, Viral diagnosis, RNA, Viral standards, Real-Time Polymerase Chain Reaction standards
- Published
- 2020
- Full Text
- View/download PDF
41. Preanalytical Aspects and Sample Quality Assessment in Metabolomics Studies of Human Blood
- Author
-
Hans-Ulrich Häring, Marianna Lucio, Rainer Lehmann, Xinjie Zhao, Guowang Xu, Holger Franken, Lars Rosenbaum, Andreas Zell, Andreas Peter, Sabine S. Neukamm, and Peiyuan Yin
- Subjects
Quality Control ,Principal Component Analysis ,Chromatography ,Human blood ,Chemistry ,Biochemistry (medical) ,Clinical Biochemistry ,medicine.disease ,Hemolysis ,Specimen Handling ,Sample quality ,Metabolomics ,Tandem Mass Spectrometry ,Metabolome ,medicine ,Humans ,Blood Collection Tube ,Biomarker discovery ,Blood Chemical Analysis ,Chromatography, Liquid ,Whole blood - Abstract
BACKGROUND Metabolomics is a powerful tool that is increasingly used in clinical research. Although excellent sample quality is essential, it can easily be compromised by undetected preanalytical errors. We set out to identify critical preanalytical steps and biomarkers that reflect preanalytical inaccuracies. METHODS We systematically investigated the effects of preanalytical variables (blood collection tubes, hemolysis, temperature and time before further processing, and number of freeze–thaw cycles) on metabolomics studies of clinical blood and plasma samples using a nontargeted LC-MS approach. RESULTS Serum and heparinate blood collection tubes led to chemical noise in the mass spectra. Distinct, significant changes of 64 features in the EDTA-plasma metabolome were detected when blood was exposed to room temperature for 2, 4, 8, and 24 h. The resulting pattern was characterized by increases in hypoxanthine and sphingosine 1-phosphate (800% and 380%, respectively, at 2 h). In contrast, the plasma metabolome was stable for up to 4 h when EDTA blood samples were immediately placed in iced water. Hemolysis also caused numerous changes in the metabolic profile. Unexpectedly, up to 4 freeze–thaw cycles only slightly changed the EDTA-plasma metabolome, but increased the individual variability. CONCLUSIONS Nontargeted metabolomics investigations led to the following recommendations for the preanalytical phase: test the blood collection tubes, avoid hemolysis, place whole blood immediately in ice water, use EDTA plasma, and preferably use nonrefrozen biobank samples. To exclude outliers due to preanalytical errors, inspect the biomarker signal intensities reflecting systematic as well as accidental and preanalytical inaccuracies before processing the bioinformatics data.
- Published
- 2013
42. Demystifying Reference Sample Quality Control
- Author
-
George S. Cembrowski and Mark A. Cervinski
- Subjects
Quality Control ,Quality Assurance, Health Care ,business.industry ,media_common.quotation_subject ,Biochemistry (medical) ,Clinical Biochemistry ,Control (management) ,Quality control ,Workmanship ,030204 cardiovascular system & hematology ,Laboratory quality control ,Chemist ,03 medical and health sciences ,0302 clinical medicine ,Risk analysis (engineering) ,030220 oncology & carcinogenesis ,Medicine ,Quality (business) ,business ,Quality assurance ,Reliability (statistics) ,media_common - Abstract
One of the most frequent questions asked by the new clinical chemist is, “How do I set up the quality control for a particular analyzer?” Two new documents described below will readily help both the neophyte and practicing chemist. The practice of quality control in the clinical laboratory has been evolving in fits and starts. Unfortunately, today's laboratory quality control practices are obscured by undeserved complexity, which robs the quality control practitioner (primarily the bench medical technologist) of tangible quality control intuition and, in Deming's words, also robs her of pride of workmanship (1). In US clinical laboratories, quality control practices are hugely divergent in addition to being costly (2). In a recent review, Kazmierczak stated, “the costs associated with performing quality control testing and the costs associated with evaluating, reviewing and maintaining quality control records are not trivial” (3). He cited a developing-nation hospital study that found the costs of quality to be 22% of total direct laboratory expenses, with 89% of the costs for maintaining quality associated with calibration and analysis of quality control material necessary to confirm the accuracy and reliability of test results (4). Simplicity is required in today's quality control practices, and in this month's issue of Clinical Chemistry, Yago and Alcover, Spanish clinical chemists, provide highly instructive nomograms that permit the selection of simple quality control rules to yield appropriately low frequencies of defective results in analytic runs of 100 (5). These surprisingly orderly, ready-to-use nomograms provide a welcome contrast to the complex rule sets employed by many practicing clinical chemists and elaborated with or without the aid of quality control selection software. The nomograms of Yago and Alcover yield the number of defective analyses per analytical run of 100 samples.
These nomograms visually demonstrate the importance of selecting analyzers with …
- Published
- 2016
43. Analytical Bias Exceeding Desirable Quality Goal in 4 out of 5 Common Immunoassays: Results of a Native Single Serum Sample External Quality Assessment Program for Cobalamin, Folate, Ferritin, Thyroid-Stimulating Hormone, and Free T4 Analyses
- Author
-
Pål Rustad, Jens P. Berg, Kristin M. Aakre, and Gunn B.B. Kristensen
- Subjects
Quality Control ,030213 general clinical medicine ,Quality Assurance, Health Care ,Sample (material) ,Clinical Biochemistry ,Analytical chemistry ,Thyrotropin ,030204 cardiovascular system & hematology ,Thyroid Function Tests ,Cobalamin ,03 medical and health sciences ,chemistry.chemical_compound ,0302 clinical medicine ,Folic Acid ,Thyroid-stimulating hormone ,External quality assessment ,medicine ,Humans ,Immunoassay ,Chromatography ,medicine.diagnostic_test ,biology ,business.industry ,Biochemistry (medical) ,Serum samples ,Ferritin ,Vitamin B 12 ,chemistry ,Ferritins ,biology.protein ,business ,Quality assurance - Abstract
BACKGROUND We undertook this study to evaluate method differences for 5 components analyzed by immunoassays, to explore whether the use of method-dependent reference intervals may compensate for method differences, and to investigate commutability of external quality assessment (EQA) materials. METHODS Twenty fresh native single serum samples, a fresh native serum pool, Nordic Federation of Clinical Chemistry Reference Serum X (serum X) (serum pool), and 2 EQA materials were sent to 38 laboratories for measurement of cobalamin, folate, ferritin, free T4, and thyroid-stimulating hormone (TSH) by 5 different measurement procedures [Roche Cobas (n = 15), Roche Modular (n = 4), Abbott Architect (n = 8), Beckman Coulter Unicel (n = 2), and Siemens ADVIA Centaur (n = 9)]. The target value for each component was calculated based on the mean of method means or measured by a reference measurement procedure (free T4). Quality specifications were based on biological variation. Local reference intervals were reported from all laboratories. RESULTS Method differences that exceeded acceptable bias were found for all components except folate. Free T4 differences from the uncommonly used reference measurement procedure were large. Reference intervals differed between measurement procedures but also within 1 measurement procedure. The serum X material was commutable for all components and measurement procedures, whereas the EQA materials were noncommutable in 13 of 50 occasions (5 components, 5 methods, 2 EQA materials). CONCLUSIONS The bias between the measurement procedures was unacceptably large in 4/5 tested components. Traceability to reference materials as claimed by the manufacturers did not lead to acceptable harmonization. Adjustment of reference intervals in accordance with method differences and use of commutable EQA samples are not implemented commonly.
- Published
- 2016
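The study above judges method bias against quality specifications derived from biological variation. A minimal sketch of that check uses the widely cited desirable-bias formula (0.25 × √(CVi² + CVg²)); the TSH within- and between-subject CVs below are illustrative values, not taken from the paper:

```python
import math

def desirable_bias_pct(cv_within, cv_between):
    """Desirable analytical bias goal from biological variation:
    0.25 * sqrt(CVi^2 + CVg^2), in percent."""
    return 0.25 * math.sqrt(cv_within ** 2 + cv_between ** 2)

def method_bias_pct(method_mean, target):
    """Observed method bias relative to the target value, in percent."""
    return 100 * (method_mean - target) / target

# Illustrative TSH example: CVi ~19.3%, CVg ~24.6% (assumed values)
limit = desirable_bias_pct(19.3, 24.6)
print(round(limit, 1))                        # → 7.8
print(round(method_bias_pct(2.30, 2.10), 1))  # → 9.5, exceeding the goal
```

A method mean is flagged when its percent deviation from the target (here the mean of method means) exceeds this biologically derived limit, which is the comparison underlying the paper's "4 out of 5" conclusion.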
44. Selecting Statistical Procedures for Quality Control Planning Based on Risk Management
- Author
-
Silvia Alcover and Martín Yago
- Subjects
Quality Control ,030213 general clinical medicine ,Measure (data warehouse) ,Risk Management ,Models, Statistical ,business.industry ,Computer science ,media_common.quotation_subject ,Biochemistry (medical) ,Clinical Biochemistry ,Control (management) ,Value (computer science) ,Statistical model ,030204 cardiovascular system & hematology ,Clinical Laboratory Services ,03 medical and health sciences ,0302 clinical medicine ,Close relationship ,Statistics ,Patient harm ,Humans ,Quality (business) ,business ,Risk management ,media_common - Abstract
BACKGROUND According to the traditional approach to statistical QC planning, the performance of QC procedures is assessed in terms of its probability of rejecting an analytical run that contains critical size errors (PEDC). Recently, the maximum expected increase in the number of unacceptable patient results reported during the presence of an undetected out-of-control error condition [Max E(NUF)], has been proposed as an alternative QC performance measure because it is more related to the current introduction of risk management concepts for QC planning in the clinical laboratory. METHODS We used a statistical model to investigate the relationship between PEDC and Max E(NUF) for simple QC procedures widely used in clinical laboratories and to construct charts relating Max E(NUF) with the capability of the analytical process that allow for QC planning based on the risk of harm to a patient due to the report of erroneous results. RESULTS A QC procedure shows nearly the same Max E(NUF) value when used for controlling analytical processes with the same capability, and there is a close relationship between PEDC and Max E(NUF) for simple QC procedures; therefore, the value of PEDC can be estimated from the value of Max E(NUF) and vice versa. QC procedures selected by their high PEDC value are also characterized by a low value for Max E(NUF). CONCLUSIONS The PEDC value can be used for estimating the probability of patient harm, allowing for the selection of appropriate QC procedures in QC planning based on risk management.
- Published
- 2015
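The performance measure PEDC discussed above is the probability that a QC procedure rejects a run containing a critical-size error. A minimal sketch for a simple single-rule procedure (a 1:3s-style rule with n control measurements, under a normal model) is shown below; this is an illustrative building block, not the paper's full Max E(NUF) framework:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def p_run_rejection(shift_sd, n_controls=2, limit=3.0):
    """Probability that at least one of n control results exceeds the
    +/- limit (in SDs) when a systematic shift of shift_sd SDs is present.
    Illustrative model for a simple 1:3s-style QC rule."""
    p_in = phi(limit - shift_sd) - phi(-limit - shift_sd)
    return 1 - p_in ** n_controls

print(round(p_run_rejection(0.0), 4))  # false-rejection rate, ≈ 0.0054
print(round(p_run_rejection(3.0), 4))  # detection of a 3-SD shift, ≈ 0.75
```

Curves of this probability against error size are what QC-planning tools trade off against expected patient harm when selecting rules.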
45. Concordance, Variance, and Outliers in 4 Contemporary Cardiac Troponin Assays: Implications for Harmonization
- Author
-
Urs Wilgen, Carel J. Pretorius, Jacobus P.J. Ungerer, Louise Marquart, and Peter O'Rourke
- Subjects
Quality Control ,medicine.medical_specialty ,Percentile ,Pathology ,Analyte ,Cardiac troponin ,business.industry ,Concordance ,Troponin I ,Biochemistry (medical) ,Clinical Biochemistry ,Clinical Chemistry Tests ,medicine.disease ,Troponin complex ,Siemens ADVIA Centaur ,Internal medicine ,medicine ,Cardiology ,Humans ,Myocardial infarction ,business ,Biomarkers - Abstract
BACKGROUND Data to standardize and harmonize the differences between cardiac troponin assays are needed to support their universal status in diagnosis of myocardial infarction. We characterized the variation between methods, the comparability of the 99th-percentile cutoff thresholds, and the occurrence of outliers in 4 cardiac troponin assays. METHODS Cardiac troponin was measured in duplicate in 2358 patient samples on 4 platforms: Abbott Architect i2000SR, Beckman Coulter Access2, Roche Cobas e601, and Siemens ADVIA Centaur XP. RESULTS The observed total variances between the 3 cardiac troponin I (cTnI) methods and between the cTnI and cardiac troponin T (cTnT) methods were larger than expected from the analytical imprecision (3.0%–3.7%). The between-method variations of 26% between cTnI assays and 127% between cTnI and cTnT assays were the dominant contributors to total variances. The misclassification of results according to the 99th percentile was 3%–4% between cTnI assays and 15%–17% between cTnI and cTnT. The Roche cTnT assay identified 49% more samples as positive than the Abbott cTnI. Outliers between methods were detected in 1 patient (0.06%) with Abbott, 8 (0.45%) with Beckman Coulter, 10 (0.56%) with Roche, and 3 (0.17%) with Siemens. CONCLUSIONS The universal definition of myocardial infarction should not depend on the choice of analyte or analyzer, and the between- and within-method differences described here need to be considered in the application of cardiac troponin in this respect. The variation between methods that cannot be explained by analytical imprecision and the discordant classification of results according to the respective 99th percentiles should be addressed.
- Published
- 2012
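The misclassification figures above come from comparing each sample against two assay-specific 99th-percentile cutoffs. A minimal sketch of that comparison, using hypothetical paired results and cutoffs (none of these numbers are from the study):

```python
def misclassification_rate(pairs, cutoff_a, cutoff_b):
    """Fraction of paired results classified positive by one assay's
    99th-percentile cutoff but negative by the other's."""
    discordant = sum(
        1 for a, b in pairs if (a > cutoff_a) != (b > cutoff_b)
    )
    return discordant / len(pairs)

# Hypothetical paired cTn results (ng/L) with assumed assay-specific cutoffs:
pairs = [(10, 12), (30, 45), (5, 4), (28, 20), (14, 35)]
print(misclassification_rate(pairs, cutoff_a=26, cutoff_b=34))  # → 0.4
```

Because between-method variation dominates analytical imprecision, samples near either cutoff can flip classification between platforms, which is the harmonization problem the authors quantify.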
46. Commutability Assessment of External Quality Assessment Materials with the Difference in Bias Approach: Are Acceptance Criteria Based on Medical Requirements too Strict?
- Author
-
Hubert W. Vesper, Qinde Liu, and Vincent Delatour
- Subjects
Quality Control ,medicine.medical_specialty ,Standardization ,Clinical Biochemistry ,Siemens ,Clinical Chemistry Tests ,030204 cardiovascular system & hematology ,Article ,03 medical and health sciences ,0302 clinical medicine ,Bias ,Acceptance testing ,External quality assessment ,medicine ,Humans ,Medical physics ,Reference standards ,Protocol (science) ,Ldl cholesterol ,business.industry ,Cholesterol, HDL ,Biochemistry (medical) ,Cholesterol, LDL ,Reference Standards ,030220 oncology & carcinogenesis ,business - Abstract
To the Editor: Commutability of External Quality Assessment (EQA)1 materials is a key requirement for their use in accuracy-based EQA surveys (1–3). In a recent paper, Korzun et al. (4) evaluated commutability of 4 frozen pools for measurements of direct HDL cholesterol (HDLC) and LDL cholesterol (LDLC). These pools were used in the CDC's Lipid Standardization Program to assess accuracy of direct HDLC measurements only (4). Among the results presented using the medical requirement acceptance criteria for bias (4% for LDLC and 5% for HDLC), the authors found that 1 of the 4 frozen pools was commutable for most of the HDLC methods, whereas none were commutable for LDLC methods. The authors concluded that frozen pools prepared according to the CLSI C37 protocol may not always be commutable and especially for direct LDLC assays. In 2013, Laboratoire national de metrologie et d'essais (LNE) organized a similar study to assess commutability of 5 freshly prepared frozen serum pools prepared according to the CLSI C37-A protocol for HDLC and LDLC. The pools were shipped frozen and analyzed along with 20–25 fresh clinical specimens by 31 medical laboratories operating HDLC and LDLC routine methods on the most popular clinical chemistry analyzers: Roche Cobas, Siemens Vista, Abbott Architect, Ortho CD …
- Published
- 2016
47. Response Factor–Based Quantification for Mycophenolic Acid
- Author
-
Mathieu Gerits, Pieter Vermeersch, Nele Peersman, Steven Pauwels, and Koen Desmet
- Subjects
Quality Control ,Response factor ,Analyte ,Chromatography ,medicine.diagnostic_test ,Chemistry ,Biochemistry (medical) ,Clinical Biochemistry ,Analytical chemistry ,Mycophenolic Acid ,Mass spectrometry ,Tandem mass spectrometry ,High-performance liquid chromatography ,Tandem Mass Spectrometry ,Immunoassay ,Calibration ,Linear regression ,medicine ,Humans ,Protein precipitation ,Chromatography, Liquid - Abstract
To the Editor: In contrast with chemistry or immunoassay analyzers, quantification for liquid chromatography-tandem mass spectrometry (LC-MS/MS)1 assays is mostly done by time-consuming multipoint calibration in each run (CR) (1). Few reports on alternative quantification methods have been published, with most of them being impractical for clinical purposes (1–3) and none of them covering a large period of time (1–4). We evaluated the performance of quantification based on an experimentally derived response factor (RF) used for 1 year without change (4) for mycophenolic acid (MPA) compared to CR. We also examined the variables influencing RF quantification performance. MPA was extracted by protein precipitation using a reagent containing deuterated internal standard (IS) and analyzed on an Alliance HPLC 2795 separations module coupled to a Quattro Micro tandem MS (both Waters Corp.). From June 2012 to May 2013, the first 10 patient samples, QCs of each run, and external QCs were quantified by CR and RF. CR was performed using a weighted (1/ x ) linear regression of 7 calibrators. RF quantification was performed by multiplying the observed response ratio (RR) (area analyte/area IS) by …
- Published
- 2014
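Response-factor quantification as described above replaces per-run multipoint calibration with a stored factor derived once from calibrators. A minimal sketch under the usual definitions (RF as response ratio per unit concentration; all calibrator values hypothetical):

```python
def response_factor(calibrators):
    """Mean response ratio per unit concentration from calibrator pairs
    (response_ratio, concentration); a stored single-factor alternative
    to running a multipoint calibration curve each batch."""
    return sum(rr / conc for rr, conc in calibrators) / len(calibrators)

def quantify(rr_sample, rf):
    """Concentration of an unknown from its response ratio and the stored RF."""
    return rr_sample / rf

# Hypothetical MPA calibrators: (area analyte / area IS, mg/L)
cals = [(0.10, 0.5), (0.40, 2.0), (1.00, 5.0), (2.00, 10.0)]
rf = response_factor(cals)
print(round(rf, 3))                  # → 0.2
print(round(quantify(0.70, rf), 2))  # → 3.5
```

The approach assumes the response is linear through the origin and that the internal standard compensates for run-to-run drift, which is why the authors verified it against calibration-in-each-run over a full year.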
48. Proficiency Testing/External Quality Assessment: Current Challenges and Future Directions
- Author
-
W. Greg Miller, Graham R D Jones, Gary L. Horowitz, and Cas Weykamp
- Subjects
Quality Control ,Laboratory Proficiency Testing ,medicine.medical_specialty ,Quality Assurance, Health Care ,Standardization ,Clinical Laboratory Techniques ,business.industry ,Computer science ,media_common.quotation_subject ,Biochemistry (medical) ,Clinical Biochemistry ,Medical laboratory ,Sample (statistics) ,Specimen Handling ,Research Design ,External quality assessment ,Proficiency testing ,medicine ,Humans ,Quality (business) ,Medical physics ,business ,Quality assurance ,media_common - Abstract
BACKGROUND Proficiency testing (PT), or external quality assessment (EQA), is intended to verify on a recurring basis that laboratory results conform to expectations for the quality required for patient care. CONTENT Key factors for interpreting PT/EQA results are knowledge of the commutability of the samples used and the process used for target value assignment. A commutable PT/EQA sample demonstrates the same numeric relationship between different measurement procedures as that expected for patients' samples. Noncommutable PT/EQA samples frequently have a matrix-related bias of unknown magnitude that limits interpretation of results. PT/EQA results for commutable samples can be used to assess accuracy against a reference measurement procedure or a designated comparison method. In addition, the agreement of the results between different measurement procedures for commutable samples reflects that which would be seen for patients' samples. PT/EQA results for noncommutable samples must be compared to a peer group mean/median of results from participants who use measurement procedures that are expected to have the same or very similar matrix-related bias. Peer group evaluation is used to assess whether a laboratory is using a measurement procedure in conformance to the manufacturer's specifications and/or in conformance to other laboratories using the same technology. A noncommutable PT/EQA sample does not give meaningful information about the relationship of results for patients' samples between different measurement procedures. SUMMARY PT/EQA provides substantial value to the practice of laboratory medicine by assessing the performance of individual laboratories and, when commutable samples are used, the status of standardization or harmonization among different measurement procedures.
- Published
- 2011
49. Status of Hemoglobin A1c Measurement and Goals for Improvement: From Chaos to Order for Improving Diabetes Care
- Author
-
Curt L. Rohlfing, David B. Sacks, and Randie R. Little
- Subjects
Quality Control ,medicine.medical_specialty ,endocrine system diseases ,International Cooperation ,United Kingdom Prospective Diabetes Study ,Clinical Biochemistry ,Metrological traceability ,Diabetes mellitus ,medicine ,Humans ,In patient ,Intensive care medicine ,Glycated Hemoglobin ,American diabetes association ,business.industry ,Biochemistry (medical) ,nutritional and metabolic diseases ,Reference Standards ,Hemoglobin A1c measurement ,medicine.disease ,Diabetes Control and Complications Trial ,Surgery ,Diabetes Mellitus, Type 1 ,Hemoglobin A ,Diabetes Mellitus, Type 2 ,Hemoglobinometry ,business ,Biomarkers - Abstract
BACKGROUND The Diabetes Control and Complications Trial (DCCT) and United Kingdom Prospective Diabetes Study (UKPDS) established the importance of hemoglobin A1c (Hb A1c) as a predictor of outcome in patients with diabetes mellitus. In 1994, the American Diabetes Association began recommending specific Hb A1c targets, but lack of comparability among assays limited the ability of clinicians to use these targets. The National Glycohemoglobin Standardization Program (NGSP) was implemented in 1996 to standardize Hb A1c results to those of the DCCT/UKPDS. CONTENT The NGSP certifies manufacturers of Hb A1c methods as traceable to the DCCT. The certification criteria have been tightened over time and the NGSP has worked with the College of American Pathologists in tightening proficiency-testing requirements. As a result, variability of Hb A1c results among clinical laboratories has been considerably reduced. The IFCC has developed a reference system for Hb A1c that facilitates metrological traceability to a higher order. The NGSP maintains traceability to the IFCC network via ongoing sample comparisons. There has been controversy over whether to report Hb A1c results in IFCC or NGSP units, or as estimated average glucose. Individual countries are making this decision. SUMMARY Variability among Hb A1c results has been greatly reduced. Not all countries will report Hb A1c in the same units, but there are established equations that enable conversion between different units. Hb A1c is now recommended for diagnosing diabetes, further accentuating the need for optimal assay performance. The NGSP will continue efforts to improve Hb A1c testing to ensure that clinical needs are met.
- Published
- 2011
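The unit conversion mentioned in the summary above is governed by the published NGSP–IFCC master equation, commonly cited as NGSP(%) = 0.09148 × IFCC(mmol/mol) + 2.152. A minimal sketch (coefficients as commonly published; verify against the current NGSP documentation before clinical use):

```python
def ngsp_to_ifcc(ngsp_percent):
    """Convert Hb A1c from NGSP (%) to IFCC (mmol/mol) via the master equation."""
    return (ngsp_percent - 2.152) / 0.09148

def ifcc_to_ngsp(ifcc_mmol_mol):
    """Convert Hb A1c from IFCC (mmol/mol) to NGSP (%)."""
    return 0.09148 * ifcc_mmol_mol + 2.152

print(round(ngsp_to_ifcc(7.0), 1))   # → 53.0
print(round(ifcc_to_ngsp(53.0), 1))  # → 7.0
```

The familiar 7.0% treatment target thus corresponds to roughly 53 mmol/mol in countries reporting IFCC units.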
50. Commutability Limitations Influence Quality Control Results with Different Reagent Lots
- Author
-
Mitchell G. Scott, Aybala Erek, Tina D. Cunningham, W. Greg Miller, Robert E. Johnson, and Olajumoke Oladipo
- Subjects
Quality Control ,business.industry ,Sample (material) ,Biochemistry (medical) ,Clinical Biochemistry ,Significant difference ,Reproducibility of Results ,Data interpretation ,Clinical Chemistry Tests ,Reference Standards ,Reagent Lot ,Data Interpretation, Statistical ,Reagent ,Statistics ,Humans ,Medicine ,Indicators and Reagents ,Reagent Kits, Diagnostic ,business ,Reference standards - Abstract
BACKGROUND Good laboratory practice includes verifying that each new lot of reagents is suitable for use before it is put into service. Noncommutability of quality control (QC) samples with clinical patient samples may preclude their use to verify consistency of results for patient samples between different reagent lots. METHODS Patient sample results and QC data were obtained from reagent lot change verification records for 18 QC materials, 661 reagent lot changes, 1483 reagent lot change–QC events, 82 analytes, and 7 instrument platforms. The significance of between-lot differences in the results for QC samples compared with those for patient samples was assessed by a modified 2-sample t test adjusted for heterogeneity of QC and patient sample measurement variances. RESULTS Overall, 40.9% of reagent lot change–QC events had a significant difference (P < 0.05) between results for QC samples compared with results for patient samples between 2 reagent lots. For QC results with differences … CONCLUSIONS Occurrence of noncommutable results for QC materials was frequent enough that the QC results could not be used to verify consistency of results for patient samples when changing lots of reagents.
- Published
- 2011