177 results for "Jen-Pei, Liu"
Search Results
2. Relationship Between Age at Menarche and Skeletal Maturation Stages in Taiwanese Female Orthodontic Patients
- Author
-
Eddie Hsiang-Hua Lai, Jenny Zwei-Chieng Chang, Chung-Chen Jane Yao, Shih-Jaw Tsai, Jen-Pei Liu, Yi-Jane Chen, and Chun-Pin Lin
- Subjects
cervical vertebrae, hand bones, menarche, musculoskeletal development, wrist, Medicine (General), R5-920 - Abstract
Background/Purpose: The age at menarche reflects a pubertal girl's physiologic maturity. The aim of this study was to evaluate the relationship between the age at menarche and skeletal maturation in female orthodontic patients. Methods: Hand-wrist radiographs and lateral cephalometric radiographs from 304 adolescent female subjects (age, 8–18.9 years) were selected from the files of the Department of Orthodontics, National Taiwan University Hospital (NTUH). Hand-wrist bone maturation stages were assessed using the NTUH Skeletal Maturation Index (NTUH-SMI). Cervical vertebral maturation stages (CVMS) were determined using the latest CVMS Index. Menarcheal ages were self-reported by the patients and verified by the patients' mothers. The relationships between the NTUH-SMI or CVM stages and menarcheal status were investigated. Results: More than 90% of the 148 subjects who had already experienced menarche had skeletal maturation beyond NTUH-SMI stage 4 or CVMS III, whereas the subjects who had not yet experienced menarche mostly had skeletal maturation before NTUH-SMI stage 5 or CVMS IV. During the period of orthodontic treatment, 19 females experienced menarche. The mean age at menarche for the 167 female patients in total was 11.97 years. On average, menarche occurred between NTUH-SMI stages 4 and 5, or between CVM stages III and IV. The percentage of girls with menses increased from 1.2% at age 9 to 6.6% at age 10, 39.5% at age 11, 81.4% at age 12, 97% at age 13, and 100% at age 14. Compared with results obtained 20 years previously, we found a downward shift of 0.47 years per decade in the mean age at menarche among female orthodontic patients. Conclusion: The majority of female orthodontic patients have passed the pubertal growth spurt when they experience menarche. Menarche usually follows the pubertal growth spurt by about 1 year and occurs after NTUH-SMI stage 4 or CVMS III.
- Published
- 2008
- Full Text
- View/download PDF
3. Radiographic Assessment of Skeletal Maturation Stages for Orthodontic Patients: Hand-wrist Bones or Cervical Vertebrae?
- Author
-
Eddie Hsiang-Hua Lai, Jen-Pei Liu, Jenny Zwei-Chieng Chang, Shih-Jaw Tsai, Chung-Chen Jane Yao, Mu-Hsiung Chen, Yi-Jane Chen, and Chun-Pin Lin
- Subjects
cervical vertebrae, hand-wrist radiography, lateral cephalometric radiography, skeletal maturation, Medicine (General), R5-920 - Abstract
The skeletal maturation status of a growing patient can influence the selection of orthodontic treatment procedures. Either lateral cephalometric or hand-wrist radiography can be used to assess skeletal development. In this study, we examined the correlation between the maturation stages of cervical vertebrae and hand-wrist bones in Taiwanese individuals. Methods: The study group consisted of 330 male and 379 female subjects ranging in age from 8 to 18 years. A total of 709 hand-wrist and 709 lateral cephalometric radiographs were analyzed. Hand-wrist maturation stages were assessed using the National Taiwan University Hospital Skeletal Maturation Index (NTUH-SMI). Cervical vertebral maturation stages were determined by the latest Cervical Vertebral Maturation Stage (CVMS) Index. Spearman's rank correlation was used to correlate the maturation stages assessed from the hand-wrist bones and the cervical vertebrae. Results: Spearman's rank correlation was 0.910 for males and 0.937 for females. These data confirmed a strong and significant correlation between the CVMS and NTUH-SMI systems (p < 0.001). After comparing the mean ages of subjects in the different stages of the CVMS and NTUH-SMI systems, we found that CVMS I corresponded to NTUH-SMI stages 1 and 2, CVMS II to NTUH-SMI stage 3, CVMS III to NTUH-SMI stage 4, CVMS IV to NTUH-SMI stage 5, CVMS V to NTUH-SMI stages 6, 7 and 8, and CVMS VI to NTUH-SMI stage 9. Conclusion: Our results indicate that cervical vertebral maturation stages can be used in place of hand-wrist bone maturation stages to evaluate skeletal maturity in Taiwanese individuals.
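Spearman's rank correlation used in the abstract above can be sketched in a few lines: ties receive average ranks, and the Pearson correlation of the ranks is then taken. The paired stage readings below are hypothetical and only illustrate the computation, not the study's data.

```python
from statistics import mean

def ranks(values):
    """Average ranks, 1-based; tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical paired stage readings: CVMS vs NTUH-SMI for 10 subjects
cvms = [1, 1, 2, 3, 3, 4, 5, 5, 6, 6]
smi = [1, 2, 3, 4, 4, 5, 6, 7, 8, 9]
print(round(spearman_rho(cvms, smi), 3))  # → 0.991
```

With ordinal stage data such as these, the tie-handling step matters: several subjects share a stage, so the average-rank convention is what makes the coefficient well defined.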
- Published
- 2008
- Full Text
- View/download PDF
4. Addressing Loss of Efficiency Due to Misclassification Error in Enriched Clinical Trials for the Evaluation of Targeted Therapies Based on the Cox Proportional Hazards Model.
- Author
-
Chen-An Tsai, Kuan-Ting Lee, and Jen-Pei Liu
- Subjects
Medicine, Science - Abstract
A key feature of precision medicine is that it takes individual variability at the genetic or molecular level into account in determining the best treatment for patients diagnosed with diseases detected by recently developed novel biotechnologies. The enrichment design is an efficient design that enrolls only the patients testing positive for specific molecular targets and randomly assigns them to the targeted treatment or the concurrent control. However, no diagnostic device detects molecular targets with perfect accuracy and precision. In particular, the positive predictive value (PPV) can be quite low for rare diseases with low prevalence. Under the enrichment design, some patients testing positive for specific molecular targets may not actually have them, and as a result the efficacy of the targeted therapy may be underestimated in the patients who do have the molecular targets. To address the loss of efficiency due to misclassification error, we apply the discrete mixture modeling for time-to-event data proposed by Eng and Hanlon [8] to develop an inferential procedure, based on the Cox proportional hazards model, for the treatment effect of the targeted therapy in the true-positive patients with the molecular targets. Our proposed procedure incorporates both the inaccuracy of diagnostic devices and the uncertainty of estimated accuracy measures. We employed the expectation-maximization algorithm in conjunction with the bootstrap technique to estimate the hazard ratio and its variance. We report the results of simulation studies that empirically investigated the performance of the proposed method, and illustrate the method with a numerical example.
- Published
- 2016
- Full Text
- View/download PDF
5. Sample size determination for individual bioequivalence inference.
- Author
-
Chieh Chiang, Chin-Fu Hsiao, and Jen-Pei Liu
- Subjects
Medicine, Science - Abstract
The statistical criterion for evaluation of individual bioequivalence (IBE) between generic and innovative products often involves a function of the second moments of normal distributions. Under replicated crossover designs, the aggregate criterion for IBE proposed in the guidance of the U.S. Food and Drug Administration (FDA) contains the squared mean difference, the variance of the subject-by-formulation interaction, and the difference in within-subject variances between the generic and innovative products. The upper confidence bound for the linearized form of the criterion, derived by the modified large sample (MLS) method, is proposed in the 2001 U.S. FDA guidance as a testing procedure for evaluation of IBE. Owing to the complexity of the power function for the criterion based on the second moments, literature on sample size determination for the inference of IBE is scarce. Under the two-sequence, four-period crossover design, we derive the asymptotic distribution of the upper confidence bound of the linearized criterion. Hence, the asymptotic power can be derived and used to determine the sample size for evaluation of IBE. Results of numerical studies are reported, and sample size determination for evaluation of IBE based on the aggregate criterion of the second moments in practical applications is discussed.
- Published
- 2014
- Full Text
- View/download PDF
6. Statistical Evaluation of Quality Performance on Genomic Composite Biomarker Classifiers
- Author
-
Jen-Pei Liu and Li-Tien Lu
- Subjects
agreement, differentially expressed genes, genomic composite biomarker classifier, reproducibility of results, Medicine (General), R5-920 - Abstract
After completion of the Human Genome Project, genomic composite biomarker classifiers (GCBCs) became available. However, quality performance of GCBCs varies. We propose statistical methods for evaluation of the quality performance of GCBCs on selection of differentially expressed genes, agreement and reproducibility. Methods: For detection of differentially expressed genes, an interval hypothesis was employed to take into account both biological and statistical significance. The concordance correlation coefficient (CCC) was used to evaluate the agreement of expression levels of technical replicates. The intraclass correlation coefficient (ICC) was suggested to assess the reproducibility between laboratories. Results: A two one-sided test procedure was proposed to test the interval hypothesis. Statistical methods based on the generalized pivotal quantities for CCC and ICC were suggested to test the hypotheses for agreement and reproducibility. Simulation results demonstrated that all three methods could adequately control the type I error rate at the nominal level for assessment of differentially expressed genes, agreement and reproducibility. Conclusion: Three appropriate statistical methods were developed for evaluation of quality performance on differentially expressed genes, agreement and reproducibility of GCBCs.
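The concordance correlation coefficient (CCC) used above to assess agreement between technical replicates can be sketched as Lin's CCC; the replicate expression levels below are hypothetical and serve only to illustrate the computation.

```python
from statistics import mean, pvariance

def ccc(x, y):
    """Lin's concordance correlation coefficient for paired measurements."""
    mx, my = mean(x), mean(y)
    sx2, sy2 = pvariance(x), pvariance(y)
    # population covariance of the pairs
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Hypothetical expression levels of one gene in two technical replicates
rep1 = [1.0, 2.0, 3.0, 4.0, 5.0]
rep2 = [1.1, 2.0, 3.1, 3.9, 5.2]
print(round(ccc(rep1, rep2), 3))  # → 0.997
```

Unlike the Pearson correlation, the CCC penalizes both location and scale shifts between replicates (via the `(mx - my)**2` term), which is why it is the natural agreement measure here.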
- Published
- 2008
- Full Text
- View/download PDF
7. Statistical Methods for Targeted Clinical Trials under Enrichment Design
- Author
-
Jen-Pei Liu and Jr-Rung Lin
- Subjects
diagnostic accuracy, enrichment design, targeted treatment, Medicine (General), R5-920 - Abstract
After completion of the Human Genome Project, disease targets at the molecular level can be identified, and treatments for these specific targets can be developed, making individualized treatment of patients a reality. However, the accuracy of diagnostic devices for molecular targets is not perfect, and statistical inference for the treatment effects of the targeted therapy is therefore biased. We developed statistical methods for unbiased inference for the targeted therapy in patients who truly have the molecular targets. Methods: Under the enrichment design, for binary data, we propose using the expectation-maximization (EM) algorithm with the bootstrap method to incorporate the inaccuracy of the diagnostic device for detection of the molecular targets into inference on the treatment effects. A simulation study was conducted to empirically investigate the performance of the proposed estimation and testing procedures, and a numerical example illustrates the application of the proposed method. Results: Simulation results demonstrated that the proposed estimation method was unbiased, with adequate precision, and that the confidence interval provided satisfactory coverage probability. The proposed testing procedure adequately controlled the size with sufficient power. The numerical example showed that a statistically significant treatment effect could be obtained when the inaccuracy of the diagnostic device was taken into account. Conclusion: Our proposed estimation and testing procedures are adequate statistical methods for inference on the treatment effect in patients who truly have the molecular targets.
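The core idea above, correcting the observed response rate for test-positive patients who do not truly carry the target, can be illustrated with a simplified stand-in: a moment-based deconvolution using a known PPV plus a bootstrap confidence interval. This is not the authors' EM algorithm, and all function names, the PPV, and the assumed non-target response rate are hypothetical.

```python
import random

def target_response_rate(responses, ppv, p_nontarget):
    """Deconvolve the response rate among true target-positive patients.

    The observed rate is a mixture: p_obs = ppv * p_target + (1 - ppv) * p_nontarget,
    so p_target = (p_obs - (1 - ppv) * p_nontarget) / ppv.
    """
    p_obs = sum(responses) / len(responses)
    return (p_obs - (1 - ppv) * p_nontarget) / ppv

def bootstrap_ci(responses, ppv, p_nontarget, n_boot=2000, seed=0):
    """Percentile bootstrap 95% CI for the deconvolved target response rate."""
    rng = random.Random(seed)
    n = len(responses)
    est = sorted(
        target_response_rate([rng.choice(responses) for _ in range(n)], ppv, p_nontarget)
        for _ in range(n_boot)
    )
    return est[int(0.025 * n_boot)], est[int(0.975 * n_boot)]

# Hypothetical data: 200 test-positive patients, 120 responders; PPV = 0.8,
# assumed response rate 0.2 among misclassified (target-negative) patients.
responses = [1] * 120 + [0] * 80
print(round(target_response_rate(responses, 0.8, 0.2), 3))  # → 0.7
```

The naive estimate (0.6) understates the target-positive response rate exactly as the abstract describes; the correction pulls it up to 0.7 under the stated mixture assumption.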
- Published
- 2008
- Full Text
- View/download PDF
8. Prognostic factors of health care–associated bloodstream infection in adult patients ≥40 years of age
- Author
-
Yee-Chun Chen, Ying-Ying Chang, I-Chen Hung, Ya-Huei Huang, Hsuan-Yin Ma, Jann-Tay Wang, Wang-Huei Sheng, Jen-pei Liu, and Wei-Chu Chie
- Subjects
Adult, Male, Female, Aged, Middle Aged, Epidemiology, Bacteremia, Cross Infection, Retrospective Studies, Odds Ratio, Prognosis, C-reactive protein, Vancomycin, Body mass index, Blood chemistry, Infectious Diseases, Public Health, Environmental and Occupational Health - Abstract
We investigated 401 geriatric patients and 453 middle-aged patients with health care-associated bloodstream infection (HABSI) at a medical center during January-December 2014. Compared with middle-aged patients, the geriatric group had higher 30-day mortality (31.2% vs 23.4%, P = .01). Body mass index, serum albumin concentration, Charlson comorbidity index score, vancomycin-resistant Enterococcus bacteremia, and high C-reactive protein levels predict poor outcomes for HABSI among adult patients.
- Published
- 2018
- Full Text
- View/download PDF
9. Large-scale search method for locating and identifying fugitive emission sources in petrochemical processing areas
- Author
-
Pao-Erh Chang, Jen-pei Liu, Chang-Fu Wu, Jen-Chih Yang, and Wei-Chu Chie
- Subjects
Environmental Engineering, General Chemical Engineering, Environmental Chemistry, Correspondence analysis, Principal component analysis, Fourier transform, Petrochemical, Fugitive emissions, Safety, Risk, Reliability and Quality, Remote sensing - Abstract
Background Fugitive emission sources generated from leaking components are often difficult to identify and locate, especially in petrochemical processing areas, which have concentrated facilities and equipment. Methods A large-scale search method for locating and identifying fugitive emission sources in a petrochemical processing area by using multiple intersecting open-path Fourier transform infrared (OP-FTIR) beam paths and multivariate statistical methods is proposed in this study. Multivariate statistical methods, namely principal component analysis (PCA) and correspondence analysis (CA), were applied to the measured data set and the characteristics of emission sources were summarized. Results Styrene, 1,3-butadiene, cyclohexane, ammonia, ethylene, propylene and methanol were identified in most beam paths with mean concentrations up to 75.3 ± 11.4 ppbv, 355.2 ± 34.6 ppbv, 188.1 ± 17.82 ppbv, 255.57 ± 19.28 ppbv, 194.3 ± 18.2 ppbv, and 94.0 ± 16.7 ppbv, respectively. PCA extracted at least three source categories in the plant. CA provided additional information about the approximate locations of each source, and thus, future emission reduction plans can be developed accordingly. Conclusion This study established a dynamic approach for locating fugitive emission sources in a complex petrochemical plant by applying an OP-FTIR matrix-path approach combined with PCA and CA. Future emission reduction plans can be developed according to the findings of PCA and CA.
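The PCA step applied to the beam-path concentration matrix can be sketched as an eigendecomposition of the covariance matrix; the data below are randomly generated and purely illustrative, and `pca` is a minimal sketch rather than the study's processing pipeline.

```python
import numpy as np

def pca(X, k):
    """Principal component analysis via eigendecomposition of the covariance matrix."""
    Xc = X - X.mean(axis=0)              # centre each column (compound)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]       # reorder to descending variance
    vals, vecs = vals[order], vecs[:, order]
    scores = Xc @ vecs[:, :k]            # beam-path scores on the top-k components
    explained = vals[:k] / vals.sum()    # fraction of variance per component
    return scores, explained

# Hypothetical concentrations: rows = OP-FTIR beam paths, cols = compounds.
# Two strongly correlated columns mimic compounds emitted by a shared source.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 6))
X[:, 1] = 2 * X[:, 0] + 0.1 * rng.normal(size=30)
scores, explained = pca(X, 3)
print(scores.shape)
```

Compounds that load on the same component point to a common emission source, which is the interpretation step the abstract describes before locating sources with correspondence analysis.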
- Published
- 2016
- Full Text
- View/download PDF
12. Statistical Design and Analysis in Pharmaceutical Science
- Author
-
Shein-Chung Chow and Jen-pei Liu
- Published
- 2018
- Full Text
- View/download PDF
13. Association of quality of life with laboratory measurements and lifestyle factors in community dwelling older people in Taiwan
- Author
-
David Blane, Chen Kun Liaw, Wei-Chu Chie, Tai Yin Wu, Jen-pei Liu, and Gopalakrishnan Netuveli
- Subjects
Male, Female, Aged, Aged, 80 and over, Middle Aged, Humans, Aging, Gerontology, Geriatrics and Gerontology, Taiwan, Quality of Life, Life Style, Body Mass Index, Surveys and Questionnaires, Health Status Indicators, Geriatric Assessment, Cross-Sectional Studies, Socioeconomic Factors, Multivariate Analysis, Independent Living, Psychiatry and Mental health, Factor Analysis, Statistical - Abstract
Little is known about the influence of routine laboratory measurements and lifestyle factors on generic quality of life (QOL) at older ages. We aimed to study the relationship between generic QOL and laboratory measurements and lifestyle factors in community-dwelling older Chinese people. We conducted a cross-sectional analysis. Six hundred and ninety-nine elders were randomly selected from the examinees of the annual health examination in Taipei City, Taiwan. Blood, urine and stool of the participants were examined, and lifestyle data were collected. Participants completed the CASP-19 (control, autonomy, self-realization, pleasure) questionnaire, a 19-item QOL scale. The relationship between QOL and laboratory results and lifestyle factors was explored using multiple linear regression and profile analysis. The mean age of the participants was 75.5 years (SD = 6.5), and 49.5% were female. Male gender (standardized β = 0.122) and exercise habit (β = 0.170) were associated with a better QOL, whereas advanced age (β = -0.242), blurred vision (β = -0.143), depression (β = -0.125), central obesity (β = -0.093), anemia (β = -0.095), rheumatoid arthritis (β = -0.073), Parkinsonism (β = -0.079), malignancy (β = -0.086) and motorcycle riding (β = -0.086) were associated with a lower QOL. Profile analysis revealed that young-old males, social drinkers, regular exercisers and car drivers had the best QOL (all p < 0.001). Of the many laboratory measurements, only anemia was associated with lower QOL. By contrast, several lifestyle factors, such as social drinking, exercise habit and car driving, were associated with better QOL, whereas abdominal obesity and motorcycle riding were associated with lower QOL.
- Published
- 2014
- Full Text
- View/download PDF
14. Factors Associated with Falls Among Community-Dwelling Older People in Taiwan
- Author
-
Tai-Yin Wu, Wei-Chu Chie, Rong-Sen Yang, Jen-Pei Liu, Kuan-Liang Kuo, Wai-Kuen Wong, and Chen-Kun Liaw
- Subjects
Aged, 80 and over, Male, Taiwan, General Medicine, Middle Aged, Risk Assessment, Cross-Sectional Studies, Sex Factors, Socioeconomic Factors, Risk Factors, Hyperglycemia, Odds Ratio, Polypharmacy, Body Constitution, Humans, Accidental Falls, Female, Independent Living, Geriatric Assessment, Aged, Demography - Abstract
Introduction: Falls are common among older people. Previous studies have shown that falls are multifactorial. However, data regarding the community-dwelling Chinese population are minimal. We aimed to study factors associated with falls among community-dwelling older Chinese people. Materials and Methods: We conducted a cross-sectional study in a community hospital in Taiwan in 2010. Our sample included 671 elders from the 3680 examinees of the free annual Senior Citizens Health Examination. Participants were interviewed with a detailed questionnaire, and 317 elders were further invited for serum vitamin D tests. The main outcome was falls in the previous 12 months. Predictor variables included sociodemographic characteristics, lifestyle risk factors, body stature, frailty, serum 25(OH)D levels, and medications. Results: The mean age of the 671 participants was 75.7 ± 6.4 years, and 48.7% of them were female. Fallers comprised 21.0% of the study population. In multivariate models, female gender (adjusted odds ratio (aOR): 2.32), loss of height in adulthood (aOR: 1.52), low body weight (aOR: 2.69), central obesity (aOR: 1.67), frailty (aOR: 1.56), polypharmacy (aOR: 2.18) and hyperglycaemia (aOR: 1.56) were factors associated with falls. Vitamin D insufficiency (serum 25(OH)D levels
- Published
- 2013
- Full Text
- View/download PDF
15. An Approximate Approach to Sampling Size Determination for the Equivalence Hypothesis
- Author
-
Ching-Ying Hsieh and Jen-pei Liu
- Subjects
Pharmacology, Pharmacology (medical), Statistics and Probability, Statistical power, Therapeutic Equivalency, Sample size determination, Sample Size, Data Interpretation, Statistical, Drugs, Generic, Humans, Algorithms, Type I and type II errors - Abstract
The equivalence hypothesis is the correct hypothesis for confirming whether a new test product conforms to the standard reference product. It has many applications in the evaluation of generic drug products and other new clinical modalities. The two one-sided tests (TOST) procedure was proposed to test the equivalence hypothesis for two treatments. When the difference in population means between two treatments is not 0, the proportion of the type II error rate allocated to each of the two tails of the central t-distribution cannot be analytically determined. Hence, no closed form of the exact sample size for the equivalence hypothesis is available, and currently only approximate formulas have been proposed. The resulting sample sizes may provide either insufficient power or unnecessarily excessive power. We suggest an approximate approach that accounts for the type II error rates of both one-sided hypotheses when determining the sample size for the equivalence hypothesis. The results of a numerical study are reported, and remarks on the usage of different methods for sample size determination for the equivalence hypothesis in practical applications are provided.
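The sample size problem discussed above can be sketched with a normal approximation to the TOST power in place of the exact central t-distribution computation: for equivalence margin ±theta, true mean difference delta, and common SD sigma, increase n until the approximate power reaches the target. All numbers are hypothetical and the normal approximation slightly understates the exact-t sample size.

```python
from math import sqrt, erf

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

Z_95 = 1.6448536269514722  # z_{0.95}, for alpha = 0.05 per one-sided test

def tost_power(n, delta, theta, sigma, z_alpha=Z_95):
    """Approximate power of the TOST procedure (normal approximation),
    n subjects per group, equivalence margin ±theta, true difference delta."""
    se = sigma * sqrt(2.0 / n)
    power = phi((theta - delta) / se - z_alpha) + phi((theta + delta) / se - z_alpha) - 1
    return max(0.0, power)

def sample_size(delta, theta, sigma, target=0.8):
    """Smallest per-group n whose approximate TOST power reaches the target."""
    n = 2
    while tost_power(n, delta, theta, sigma) < target:
        n += 1
    return n

n = sample_size(delta=0.05, theta=0.2, sigma=0.3, target=0.8)
print(n, round(tost_power(n, 0.05, 0.2, 0.3), 3))
```

Note how the power splits across the two one-sided tests: with delta ≠ 0 the two tail terms contribute unequally, which is exactly why no closed-form sample size exists.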
- Published
- 2013
- Full Text
- View/download PDF
16. Statistical inference on censored data for targeted clinical trials under enrichment design
- Author
-
Jen-pei Liu, Jr-Rung Lin, and Chen-Fang Chen
- Subjects
Statistics and Probability, Pharmacology (medical), Exponential distribution, Endpoint Determination, Coverage probability, Statistical inference, Predictive Value of Tests, Humans, Computer Simulation, Molecular Targeted Therapy, Proportional Hazards Models, Clinical Trials as Topic, Models, Statistical, Confidence interval, Research Design, Data Interpretation, Statistical, Algorithms, Type I and type II errors - Abstract
In traditional clinical trials, inclusion and exclusion criteria are usually based on clinical endpoints; the genetic or genomic variability of the trial participants is not fully utilized in the criteria. After completion of the Human Genome Project, disease targets at the molecular level can be identified and utilized for the treatment of diseases. However, the accuracy of diagnostic devices for identification of such molecular targets is usually not perfect. Some of the patients enrolled in targeted clinical trials with a positive result for the molecular target might not have the specific molecular targets. As a result, the treatment effect may be underestimated in the patient population truly with the molecular target. To resolve this issue, under the exponential distribution, we develop inferential procedures for the treatment effects of the targeted drug based on censored endpoints in the patients who truly have the molecular targets. Under an enrichment design, we propose using the expectation-maximization algorithm in conjunction with the bootstrap technique to incorporate the inaccuracy of the diagnostic device for detection of the molecular targets into the inference on the treatment effects. A simulation study was conducted to empirically investigate the performance of the proposed methods. Simulation results demonstrate that, under the exponential distribution, the proposed estimator is nearly unbiased with adequate precision, and the confidence interval provides adequate coverage probability. In addition, the proposed testing procedure adequately controls the size with sufficient power. On the other hand, when the proportional hazards assumption is violated, additional simulation studies show that the type I error rate is not controlled at the nominal level and is an increasing function of the positive predictive value. A numerical example illustrates the proposed procedures.
- Published
- 2013
- Full Text
- View/download PDF
17. Immune gene expression profiles in swine inguinal lymph nodes with different viral loads of porcine circovirus type 2
- Author
-
Chih-Cheng Chang, Jen-pei Liu, Chian-Ren Jeng, En-Chung Lin, Mi-Yuan Chia, Victor Fei Pang, Chun-Ming Lin, Yu-Liang Huang, Cho-Hua Wan, and Yi-Chieh Tsai
- Subjects
Circovirus, Porcine circovirus, Circoviridae Infections, Swine, Sus scrofa, Swine Diseases, Wasting Syndrome, Male, Female, Animals, Immune system, Immunity, Immunology, Gene expression, Gene Expression Profiling, Down-Regulation, Real-Time Polymerase Chain Reaction, Reverse transcriptase, Viral Load, Lymph Nodes, Transcriptome, General Veterinary, General Medicine - Abstract
Porcine circovirus type 2 (PCV2) infection has been suggested to be an acquired immunodeficiency disorder. However, the immunopathogenesis of PCV2 infection is still not fully clarified. In the present study, 35 inguinal lymph nodes (LNs) with different levels of PCV2 load obtained from postweaning multisystemic wasting syndrome (PMWS)-affected pigs and 7 from healthy, subclinically PCV2-infected pigs were selected. The LNs were subsequently ranked by their PCV2 loads to mimic the progression of PCV2 infection-associated lesion development. The expression of 96 selected immune genes in these LNs was assessed by integrating several reverse transcription quantitative real-time polymerase chain reaction experiments. Hierarchical cluster analysis of the gene expression profiles resulted in 5 major clusters (A, B, C, D, and E). The different clusters of immune gene expression profiles were compatible with the divergent functions of various immune cell subpopulations. Sixty-one of the 96 selected genes belonged to cluster C and were mainly involved in the activation of dendritic cells and B and T lymphocytes. The expression levels of these genes were generally up-regulated in the LNs obtained from PMWS-affected pigs with relatively lower PCV2 loads; however, the up-regulation tended to diminish or turn into down-regulation as the PCV2 load increased. Genes belonging to cluster B, involved in T cell receptor signaling, became silenced as the PCV2 load increased. The expression profiles of macrophage-associated genes were either independent of the PCV2 load (clusters A and E) or positively correlated with it (cluster D).
In addition, principal component analysis of the expression of the 96 selected genes in the 42 inguinal LNs revealed that 53.10% and 72.29% of the total data variance could be explained by the top 3 and top 7 principal components, respectively, suggesting that the disease development of PCV2 infection may be associated with a few major and some minor factors. In conclusion, assessment of immune gene expression profiles in LNs supports a close interaction between immune activation and suppression during the progression of PMWS development.
- Published
- 2013
- Full Text
- View/download PDF
18. Statistical Methods for Bridging Studies
- Author
-
Jen-pei Liu, Chin-Fu Hsiao, Chieh Chiang, and Shein-Chung Chow
- Subjects
Statistics and Probability, Pharmacology (medical), Guidelines as Topic, Harmonization, Drug Therapy, Ethnicity, Humans, Multicenter Studies as Topic, Reproducibility of Results, Bayes Theorem, Treatment Outcome, Pharmaceutical Preparations, Research Design, Data Interpretation, Statistical - Abstract
In 1998, the International Conference on Harmonization (ICH) published a guidance to facilitate the registration of medicines among the ICH regions, including the European Union, the United States, and Japan, by recommending a framework for evaluating the impact of ethnic factors on a medicine's effect, such as its efficacy and safety at a particular dosage and dose regimen (ICH E5, 1998). The purpose of ICH E5 is not only to evaluate the influence of ethnic factors on safety, efficacy, dosage, and dose regimen, but also, more importantly, to minimize duplication of clinical data and allow extrapolation of foreign clinical data to a new region. In this article, statistical methods for evaluation of bridging studies are studied based on the concepts of consistency (Shih, 2001), reproducibility/generalizability (Shao and Chow, 2002), the weighted Z-tests for the design of bridging studies (Lan et al., 2005), and similarity between the new and original regions in terms of positive treatment effect (Hsiao et al., 2007). The relative merits and disadvantages of these methods are compared through several examples.
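Of the methods compared above, the weighted Z-test (Lan et al., 2005) is the simplest to sketch: the original-region and bridging-study Z statistics are combined with a prespecified weight so that the combined statistic remains standard normal under the null. The Z values below are hypothetical.

```python
from math import sqrt, erf

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def weighted_z(z_original, z_bridging, w):
    """Weighted Z-test: combine the original-region and bridging-study
    Z statistics with prespecified weight w in (0, 1). The weights
    w and sqrt(1 - w**2) keep the combined statistic N(0, 1) under H0."""
    return w * z_original + sqrt(1 - w * w) * z_bridging

# Hypothetical statistics: strong original-region evidence, weaker bridging study
z = weighted_z(2.8, 1.2, w=0.7)
print(round(z, 3), round(1 - phi(z), 4))  # combined Z and one-sided p-value
```

Because w is fixed before the bridging study is run, borrowing strength from the original region does not inflate the type I error, which is the method's main appeal for extrapolating foreign clinical data.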
- Published
- 2012
- Full Text
- View/download PDF
19. Application of the parallel line assay to assessment of biosimilar products based on binary endpoints
- Author
-
Ya-Ching Lin, Jen-pei Liu, Chih-Hsi Chang, Jr-Rung Lin, and Shein-Chung Chow
- Subjects
Statistics and Probability, Epidemiology, Biological Products, Biosimilar Pharmaceuticals, Endpoint Determination, United States Food and Drug Administration, United States, Chemistry Techniques, Analytical, Drug Evaluation, Drug Approval, Algorithms - Abstract
Biological drug products are therapeutic moieties manufactured by a living system or organism. They are important life-saving products for patients with unmet medical needs, but because of their high cost, only a few patients have access to them. Most of the early biological products will lose patent protection in the next few years. This provides the opportunity for generic versions of biological products, referred to as biosimilar drug products. The US Biologics Price Competition and Innovation Act, passed in 2009, and the draft guidance issued in 2012 provide an approval pathway for biological products shown to be biosimilar to, or interchangeable with, a Food and Drug Administration-licensed reference biological product. Hence, cost reduction and affordability of biosimilar products for the average patient may become possible. However, the complexity and heterogeneity of the molecular structures, complicated manufacturing processes, different analytical methods, and the possibility of severe immunogenicity reactions make evaluation of equivalence between biosimilar products and their corresponding reference product a great challenge for statisticians and regulatory agencies. To accommodate the stepwise approach and the totality of evidence, we propose applying a parallel line assay to extrapolate the similarity in product characteristics, such as doses or pharmacokinetic responses, to similarity in binary efficacy endpoints. We also report the results of simulation studies evaluating the performance, in terms of size and power, of our proposed methods, and present numerical examples to illustrate the suggested procedures.
- Published
- 2012
- Full Text
- View/download PDF
20. An inferential procedure for the probability of passing the USP dissolution test
- Author
-
Jen-pei Liu, Meng-Yang Huang, Chieh Chiang, and Chen-Fang Chen
- Subjects
Pharmacopoeias as Topic ,Quality Control ,Pharmacology ,Statistics and Probability ,Models, Statistical ,Monte Carlo method ,Estimator ,Stability (probability) ,United States ,Standard deviation ,Pharmaceutical Preparations ,Solubility ,Sampling distribution ,Statistics ,Pharmacology (medical) ,Dissolution testing ,Monte Carlo Method ,Dissolution ,Probability ,Parametric statistics ,Mathematics - Abstract
Dissolution is one of the tests required and specified by the United States Pharmacopeia and National Formulary (USP/NF) to ensure that drug products meet the standards of identity, strength, quality, purity, and stability. Sponsors also establish in-house specifications for the mean and standard deviation of the dissolution rates to guarantee a high probability of passing the USP/NF dissolution test. However, the USP/NF dissolution test is a complicated three-stage sampling plan that involves both the sample mean dissolution rate of all units and the dissolution rates of individual units. As a result, the true probability of passing the USP/NF dissolution test is formidable to compute analytically, even when the population mean and variance of the dissolution rates are known. It is not clear whether previously proposed methods actually estimate the true probability of passing the USP dissolution test. Therefore, we propose to employ a parametric bootstrap method in conjunction with Monte Carlo simulation to obtain the sampling distribution of the estimated probabilities of passing the USP/NF dissolution test, and hence a confidence interval for the passing probability. In addition, a procedure is proposed to test whether the true probability of passing the USP/NF dissolution test is greater than some specified value. A numerical example illustrates the proposed method. Copyright © 2011 John Wiley & Sons, Ltd.
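The parametric bootstrap described in the abstract rests on being able to simulate the three-stage plan. The sketch below is a minimal Monte Carlo estimate of the passing probability, assuming normally distributed dissolution rates and the standard USP <711> staged acceptance criteria (`q` is the specification Q); the exact rules and the bootstrap inference in the paper may differ.

```python
import numpy as np

def passes_usp(units, q):
    """USP <711>-style three-stage acceptance rule (a sketch of the
    standard criteria; units are % dissolved for up to 24 tablets)."""
    s1 = units[:6]                       # stage 1: 6 units
    if np.all(s1 >= q + 5):
        return True
    s2 = units[:12]                      # stage 2: 12 units cumulative
    if s2.mean() >= q and np.all(s2 >= q - 15):
        return True
    s3 = units[:24]                      # stage 3: 24 units cumulative
    return (s3.mean() >= q
            and np.sum(s3 < q - 15) <= 2
            and np.all(s3 >= q - 25))

def passing_probability(mu, sigma, q, n_sim=100_000, seed=1):
    """Monte Carlo estimate of the probability of passing the test
    when dissolution rates are N(mu, sigma^2)."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(mu, sigma, size=(n_sim, 24))
    return np.mean([passes_usp(d, q) for d in draws])
```

With an in-house mean well above Q (e.g. mean 90%, SD 2%, Q = 75%) the estimated probability is close to 1; moving the mean toward Q drives it down, which is the trade-off the in-house specifications are meant to control.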
- Published
- 2011
- Full Text
- View/download PDF
21. Immunopathological characterization of porcine circovirus type 2 infection-associated follicular changes in inguinal lymph nodes using high-throughput tissue microarray
- Author
-
Mi Yuan Chia, Chih-Cheng Chang, Victor Fei Pang, Chun-Ming Lin, Shih Hsuan Hsiao, Jen-pei Liu, Yi Chieh Tsai, Chian-Ren Jeng, and Ming Tang Chiou
- Subjects
Circovirus ,Male ,Porcine parvovirus ,Pathology ,medicine.medical_specialty ,Swine ,animal diseases ,In situ hybridization ,Microbiology ,Antigen ,medicine ,Animals ,Porcine respiratory and reproductive syndrome virus ,Circoviridae Infections ,Lymph node ,Retrospective Studies ,Swine Diseases ,Paraffin Embedding ,Tissue microarray ,General Veterinary ,biology ,virus diseases ,Germinal center ,General Medicine ,Parvovirus, Porcine ,Viral Load ,biology.organism_classification ,Porcine circovirus ,medicine.anatomical_structure ,Tissue Array Analysis ,Regression Analysis ,Female ,Lymph Nodes ,CD79 Antigens - Abstract
The immunopathogenesis of porcine circovirus type 2 (PCV2) infection in conventional pigs is complicated by various environmental factors and individual variation and is difficult to reproduce completely in experiments. In the present field-based study, a tissue microarray (TMA) consisting of a series of lymphoid follicles with different PCV2 loads was constructed using formalin-fixed, paraffin-embedded superficial inguinal lymph nodes (LNs) from 102 pigs. Using the TMA, a wide range of parameters, including co-infecting viral pathogens, immune cell subsets, and cell apoptosis/proliferation activity, was measured by immunohistochemical (IHC) staining or in situ hybridization (ISH), characterized, and compared. The signal location and areal extent of each parameter were interpreted by pathologists, semi-quantified by automated image analysis software, and analyzed statistically. The results demonstrated a significant negative correlation between PCV2 and CD79a (p < 0.001) and significant positive correlations between PCV2 and lysozyme (p < 0.001) or TUNEL (p < 0.001) by Pearson correlation analysis. The amounts of porcine respiratory and reproductive syndrome virus (PRRSV) and porcine parvovirus antigens did not correlate with the tissue loads of PCV2 nucleic acid. Multiple regression analysis further indicated that PCV2 exerted the major effects on CD79a, lysozyme, and TUNEL, whereas PRRSV had relatively minor effects on these parameters. In addition, the total signal intensity of Ki67 (an index of cell proliferation activity) did not change significantly among cases with different PCV2 loads; however, as the load of PCV2 nucleic acid increased, the main contribution to the Ki67 signal gradually shifted from B cells in the germinal center to T cells and macrophages in the interfollicular regions.
In the present study, the use of a TMA to establish a mathematical model amenable to a wider range of statistical analyses brings us a step closer to understanding the immunopathogenesis of PCV2 infection-associated follicular changes in LNs.
- Published
- 2011
- Full Text
- View/download PDF
22. Statistical evaluation of non-profile analyses for the in vitro bioequivalence
- Author
-
Shih-Ting Chiu, Jen-pei Liu, and Pei-Ying Tsai
- Subjects
education.field_of_study ,Applied Mathematics ,medicine.medical_treatment ,Population ,Pharmacology ,Bioequivalence ,Random effects model ,Confidence interval ,Analytical Chemistry ,Nasal spray ,Innovator ,Statistics ,medicine ,Empirical power ,education ,Mathematics ,Statistical hypothesis testing - Abstract
For locally acting drug products such as nasal aerosols and nasal sprays, the 2003 US Food and Drug Administration (FDA) draft guidance suggests that bioequivalence between generic and brand-name products be established through in vitro tests. In addition, for non-profile analyses based on spray content uniformity, droplet size distribution, spray pattern, priming, and re-priming, the draft FDA guidance recommends that population bioequivalence (PBE) between the generic and innovator products be demonstrated. However, the linearized criterion recommended in the draft FDA guidance does not take into account the variation due to batches, samples, and life stages. Hence, under a two-stage nested random effects model, we apply the modified large-sample (MLS) method and generalized pivotal quantities (GPQs) to construct the upper 95% confidence limit for the in vitro PBE criterion, with variance components taken into account, as statistical testing procedures for establishing in vitro bioequivalence. A simulation study was conducted to compare the empirical size and empirical power of the three methods. A numerical example illustrates the proposed methods. Copyright © 2010 John Wiley & Sons, Ltd.
- Published
- 2010
- Full Text
- View/download PDF
23. Sample Size Determination for a Specific Region in a Multiregional Trial
- Author
-
Chin-Fu Hsiao, Jen-pei Liu, Feng-Shou Ko, and Hsiao-Hui Tsou
- Subjects
Statistics and Probability ,Internationality ,Operations research ,Hypercholesterolemia ,MEDLINE ,Consistency (database systems) ,Ethnicity ,Humans ,Multicenter Studies as Topic ,Medicine ,Pharmacology (medical) ,Product (category theory) ,Probability ,Randomized Controlled Trials as Topic ,Pharmacology ,Models, Statistical ,Actuarial science ,business.industry ,Anticholesteremic Agents ,Guideline ,Atherosclerosis ,Test (assessment) ,Clinical trial ,Drug development ,Sample size determination ,Sample Size ,business ,Algorithms - Abstract
Recently, geotherapeutics have attracted much attention from sponsors as well as regulatory authorities. A bridging study, as defined by the International Conference on Harmonisation (ICH) E5 guideline, is usually conducted in a new region after the test product has been approved for commercial marketing in the original region on the basis of its proven efficacy and safety. However, extensive duplication of clinical evaluation in the new region not only requires valuable development resources but also delays availability of the test product to patients in need in the new region. To shorten the drug lag, or the time lag for approval, simultaneous worldwide drug development, submission, and approval may be desirable. On September 28, 2007, the Ministry of Health, Labour and Welfare (MHLW) in Japan published the "Basic Principles on Global Clinical Trials" guidance on the planning and implementation of global clinical studies. The eleventh question and answer for the ICH E5 guideline also discusses the concept of a multiregional trial. Both guidelines have established a framework for demonstrating the efficacy of a drug in all participating regions while evaluating the applicability of the overall trial results to each region within a multiregional trial. In this paper, we focus on a specific region and establish statistical criteria for consistency between the results for the region of interest and the overall results. More specifically, four criteria are considered. Two criteria assess whether the treatment effect in the region of interest is as large as that of the other regions or of the regions overall, while the other two assess the consistency of the treatment effect in the specific region with that of the other regions or the regions overall. The sample size required for the region of interest can also be evaluated based on these four criteria.
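Consistency criteria of the kind described here, for example requiring the observed regional effect to retain at least a fraction π of the observed overall effect, can be evaluated by simulation. The sketch below assumes normally distributed effect estimates and a common true effect across regions; the function name, the π = 0.5 default, and the parameterization are illustrative, not the paper's exact criteria.

```python
import numpy as np

def consistency_probability(delta, sigma, n_total, f_region, pi=0.5,
                            n_sim=100_000, seed=7):
    """Probability that the observed regional effect is at least
    pi times the observed overall effect, assuming a common true
    effect delta and a per-arm standard deviation sigma."""
    rng = np.random.default_rng(seed)
    n_r = int(n_total * f_region)        # per-arm subjects in the region
    n_o = n_total - n_r                  # per-arm subjects elsewhere
    se_r = sigma * np.sqrt(2 / n_r)      # SE of the regional effect estimate
    se_o = sigma * np.sqrt(2 / n_o)      # SE of the rest-of-world estimate
    d_r = rng.normal(delta, se_r, n_sim)
    d_o = rng.normal(delta, se_o, n_sim)
    d_all = (n_r * d_r + n_o * d_o) / n_total   # pooled overall estimate
    return np.mean(d_r >= pi * d_all)
```

Increasing the regional sample fraction raises this assurance probability, which is how a required regional sample size can be backed out from a target level such as 80%.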
- Published
- 2010
- Full Text
- View/download PDF
24. Statistical Assessment of Biosimilar Products
- Author
-
Jen-pei Liu and Shein-Chung Chow
- Subjects
Pharmacology ,Statistics and Probability ,Biological Products ,Models, Statistical ,United States Food and Drug Administration ,Manufacturing process ,Biosimilar ,United States ,Food and drug administration ,Biopharmaceutical ,Therapeutic Equivalency ,Risk analysis (engineering) ,Animals ,Humans ,media_common.cataloged_instance ,Pharmacology (medical) ,Business ,European union ,Drug Approval ,media_common - Abstract
Biological products or medicines are therapeutic agents produced using a living system or organism. Access to these life-saving biological products is limited because of their expensive costs. Patents on the early biological products will expire in the next few years. This allows other biopharmaceutical/biotech companies to manufacture generic versions of the biological products, which are referred to as follow-on biological products by the U.S. Food and Drug Administration (FDA) or as biosimilar medicinal products by the European Medicines Agency (EMEA) of the European Union (EU). Competition from cost-effective follow-on biological products with equivalent efficacy and safety can cut costs and hence increase patients' access to the much-needed biological pharmaceuticals. Unlike conventional small-molecule pharmaceuticals, the complexity and heterogeneity of the molecular structure, complicated manufacturing process, different analytical methods, and possibility of severe immunogenicity reactions make evaluation of equivalence (similarity) between biosimilar products and their corresponding innovator product a great challenge for both the scientific community and regulatory agencies. In this paper, we provide an overview of the current regulatory requirements for approval of biosimilar products. A review of current criteria for evaluation of bioequivalence for traditional chemical generic products is provided. A detailed description of the differences between biosimilar and chemical generic products is given with respect to size and structure, immunogenicity, product quality attributes, and manufacturing processes. In addition, statistical considerations, including design criteria, fundamental biosimilar assumptions, and statistical methods, are proposed. The possibility of using genomic data in evaluation of biosimilar products is also explored.
- Published
- 2009
- Full Text
- View/download PDF
25. Statistical Test for Evaluation of Biosimilarity in Variability of Follow-On Biologics
- Author
-
Eric Chi, Jen-pei Liu, Tsung-Cheng Hsieh, Shein-Chung Chow, and Chin-Fu Hsiao
- Subjects
Pharmacology ,Statistics and Probability ,Biological Products ,Models, Statistical ,Computer science ,Biosimilar ,Biologic Products ,Therapeutic Equivalency ,Econometrics ,Animals ,Humans ,Pharmacology (medical) ,Patent system ,Biotechnology industry ,Statistical hypothesis testing - Abstract
As more biologic products go off patent protection, the development of follow-on biologic products has received much attention from both the biotechnology industry and regulatory agencies. Unlike that of small-molecule drug products, the development of biologic products is quite different and is variable through the manufacturing process and environment. Thus, Chow et al. (2010) suggested that the assessment of biosimilarity between biologic products focus on variability rather than average biosimilarity. In addition, it is also suggested that a probability-based criterion, which is more sensitive to variability, be employed. In this article, we propose a probability-based asymptotic statistical testing procedure to evaluate biosimilarity in the variability of two biologic products. A numerical study is conducted to investigate the relationship between the probability-based criterion in variability and various study parameters. Simulation studies were also conducted to empirically investigate the performance of the proposed probability-based asymptotic testing procedure in terms of empirical size and power. A numerical example is provided to illustrate the proposed methods.
- Published
- 2009
- Full Text
- View/download PDF
26. Deviations from linearity in statistical evaluation of linearity in assay validation
- Author
-
Jen-pei Liu, Tsung-Cheng Hsieh, and Shein-Chung Chow
- Subjects
Absolute deviation ,Measure (data warehouse) ,Linear range ,Applied Mathematics ,Coefficient of variation ,Statistics ,Explained sum of squares ,Linearity ,Algorithm ,Analytical Chemistry ,Mathematics - Abstract
Linearity and the linear range are key elements in evaluating the accuracy of an assay in validation. The average deviation from linearity (ADL) and the sum of squares of deviations from linearity (SSDL) have been proposed for assessment of linearity. However, neither ADL nor SSDL considers the variability of the assay in the evaluation of linearity. Therefore, we propose the coefficient of variation of deviations from linearity (CVDL) as an alternative measure for linearity assessment. For inference, we propose testing procedures based on generalized pivotal quantities (GPQs) of ADL and CVDL. Simulation studies were conducted to empirically compare the size and power of the three methods. The simulation results show that all three methods adequately control the size; however, the ADL method is uniformly more powerful than the other two. A numerical example illustrates the proposed methods. Copyright © 2009 John Wiley & Sons, Ltd.
- Published
- 2009
- Full Text
- View/download PDF
27. Botulinum toxin (Dysport) treatment of the spastic gastrocnemius muscle in children with cerebral palsy: a randomized trial comparing two injection volumes
- Author
-
Ying-Fang Chen, Jen-pei Liu, Yi-Min Chen, Gwo-Chi Hu, Yao-Chia Chuang, and Kuo-Liong Chien
- Subjects
Male ,medicine.medical_specialty ,Modified Ashworth scale ,Action Potentials ,Physical Therapy, Sports Therapy and Rehabilitation ,Injections, Intramuscular ,law.invention ,Cerebral palsy ,Gastrocnemius muscle ,Randomized controlled trial ,law ,Spastic ,medicine ,Humans ,Single-Blind Method ,Botulinum Toxins, Type A ,Range of Motion, Articular ,Child ,Muscle, Skeletal ,Leg ,Dose-Response Relationship, Drug ,business.industry ,Cerebral Palsy ,Rehabilitation ,Recovery of Function ,medicine.disease ,Botulinum toxin ,medicine.anatomical_structure ,Neuromuscular Agents ,Muscle Spasticity ,Child, Preschool ,Physical therapy ,Female ,Ankle ,business ,Range of motion ,medicine.drug - Abstract
Objective: To compare the effect of equivalent doses in two different volumes of botulinum toxin type A (Dysport) on gastrocnemius spasticity. Design: Single-blind, randomized, controlled trial. Setting: Hospital rehabilitation department. Subjects: Twenty-two children with spastic diplegic or quadriplegic cerebral palsy. Intervention: High-volume (500 U/5 mL) and low-volume (500 U/1 mL) preparations of Dysport were injected into the gastrocnemius muscles, each child randomly receiving one preparation in the right leg and the other in the left leg. Main measures: Dynamic ankle joint range of motion (ROM), passive ROM of the ankle joint, modified Ashworth Scale scores, and the areas of the compound muscle action potential, assessed before treatment and at four and eight weeks post treatment. Results: Both legs improved significantly. The mean (SD) improvements between baseline and the end of follow-up were 19.7 (10.83) degrees for dynamic ROM, 8.4 (9.19) degrees for passive ROM, -1.3 (0.6) for modified Ashworth Scale scores, and -9.4 (11.41) mV-ms for compound muscle action potential areas in the high-volume group; and 13.5 (10.45) degrees for dynamic ROM, 7.4 (7.88) degrees for passive ROM, -0.9 (0.5) for modified Ashworth Scale scores, and -5.9 (7.50) mV-ms for compound muscle action potential areas in the low-volume group. The high-volume preparation yielded significantly greater improvement in dynamic ROM (P Conclusions: A high-volume preparation of Dysport is more effective than a low-volume preparation in reducing spasticity in the gastrocnemius muscle.
- Published
- 2009
- Full Text
- View/download PDF
28. Statistical methods for evaluating the linearity in assay validation
- Author
-
Jen-pei Liu, Chin-Fu Hsiao, and Eric Hsieh
- Subjects
Absolute deviation ,Applied Mathematics ,Alternative hypothesis ,Statistics ,Metric (mathematics) ,Explained sum of squares ,Linearity ,Applied mathematics ,Inference ,Analytical Chemistry ,Mathematics ,Type I and type II errors ,Power (physics) - Abstract
One of the most important characteristics for evaluation of accuracy in assay validation is linearity. Kroll et al. [1] proposed a method based on the average deviation from linearity (ADL) to evaluate linearity. Hsieh and Liu [2] suggested that the hypothesis of proving linearity be formulated as the alternative hypothesis and proposed a corrected Kroll's method. However, the issue of the variability in estimating the non-centrality parameter remains unresolved; consequently, the type I error rate may still be inflated for the corrected Kroll's method. To overcome this issue, we propose the sum of squares of deviations from linearity (SSDL) as an alternative metric for the evaluation of linearity. Based on SSDL, we apply the method of generalized pivotal quantities (GPQs) for inference on linearity. Simulation studies were conducted to empirically compare the size and power of the current and proposed methods. The simulation results show that the proposed GPQ method not only adequately controls the size but also provides greater power than the other methods. A numerical example illustrates the proposed methods. Copyright © 2008 John Wiley & Sons, Ltd.
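The GPQ idea can be illustrated on the simplest possible case, an upper confidence bound for a variance, where the generalized pivotal quantity R = (n-1)s²/Q with Q ~ χ²(n-1) reproduces the classical bound. The paper applies the same machinery to the SSDL metric, which this sketch does not attempt; the function below is only a textbook illustration of the technique.

```python
import numpy as np

def gpq_upper_bound_sigma2(s2, n, alpha=0.05, n_gpq=200_000, seed=3):
    """GPQ-based upper (1 - alpha) confidence bound for a normal variance.
    The GPQ is R = (n-1)*s2 / Q with Q ~ chi-square(n-1); the bound is the
    (1 - alpha) quantile of R's Monte Carlo distribution."""
    rng = np.random.default_rng(seed)
    q = rng.chisquare(n - 1, n_gpq)      # draws of the pivotal quantity Q
    r = (n - 1) * s2 / q                 # corresponding GPQ realizations
    return np.quantile(r, 1 - alpha)
```

Because this GPQ is an ordinary pivot, the bound matches the classical chi-square interval and its empirical coverage is close to 95%, which is the sanity check one would run before trusting the same recipe on a more complicated quantity like SSDL.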
- Published
- 2009
- Full Text
- View/download PDF
29. Statistical Validation of Traditional Chinese Diagnostic Procedures
- Author
-
Jen-pei Liu, Annpey Pong, Chien-Hsiung Lin, Yeu-Jhy Chang, Chin-Fu Hsiao, Hsiao-Hui Tsou, and Shein-Chung Chow
- Subjects
medicine.medical_specialty ,Disease status ,business.industry ,Statistical validation ,Public Health, Environmental and Occupational Health ,Pharmacology (nursing) ,Signs and symptoms ,Traditional Chinese medicine ,Quality of life (healthcare) ,Drug Guides ,Medicine ,Pharmacology (medical) ,Medical physics ,business ,Reliability (statistics) - Abstract
In recent years, the modernization of traditional Chinese medicine (TCM) for the treatment of patients with critical and life-threatening diseases has attracted much attention in the pharmaceutical industry. The modernization of TCM is based on a scientific evaluation of the safety and effectiveness of TCM in terms of well-established quantitative instruments. As a result, statistical validation of such an instrument is essential for an accurate and reliable clinical assessment of the performance of TCM. Similar to the validation of a typical quality-of-life instrument, validation performance characteristics such as validity, reliability, and ruggedness are considered. In this article, a design is proposed for validating a standard quantitative instrument that is commonly employed, based on Chinese diagnostic practice, to assess a patient's functions and activities, performance, signs and symptoms of disease, and disease status and severity. Methods for statistical validation of the standard instrument are derived. A numerical example is given to illustrate the proposed methods for validation of Chinese diagnostic procedures.
- Published
- 2009
- Full Text
- View/download PDF
30. A Noninferiority Test for Treatment-by-Factor Interaction with Application to Bridging Studies and Global Trials
- Author
-
Jen-pei Liu, Jr-Rung Lin, and Eric Hsieh
- Subjects
Bridging (networking) ,business.industry ,Public Health, Environmental and Occupational Health ,Adult population ,Pharmacology (nursing) ,Test (assessment) ,Margin (machine learning) ,Drug Guides ,Similarity (psychology) ,Statistics ,Numeric data ,Geographic regions ,Econometrics ,Medicine ,Drug product ,Pharmacology (medical) ,business - Abstract
Similarity of the treatment effects of a drug product across different intrinsic and extrinsic ethnic, geographic, or demographic factors is important not only to regulatory agencies in the approval of the drug but also to public health. Examples include bridging studies in different regions, extrapolation from the adult population to the pediatric subpopulation, and comparisons between different geographic regions within a global trial. This is an issue of evaluating the treatment-by-factor interaction. However, the assessment of similarity is not about detecting the existence of a treatment-by-factor interaction but rather about evaluating whether the magnitude of the interaction is within a clinically allowable margin. As a result, we propose two testing procedures for the noninferiority hypothesis of the treatment-by-factor interaction to assess the similarity of the treatment effects across ethnic or demographic factors. Two numerical data sets illustrate applications of the proposed methods to different scenarios under different circumstances.
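A minimal version of such a noninferiority assessment tests whether the two-sided (1 - 2α) confidence interval for the interaction (the difference between the two subgroup treatment effects) lies entirely within the clinically allowable margin. The function below is an illustrative z-based sketch under assumed normal effect estimates, not the paper's exact procedures.

```python
from statistics import NormalDist

def interaction_noninferiority(d1, se1, d2, se2, margin, alpha=0.05):
    """Conclude 'similar' when the (1 - 2*alpha) confidence interval for
    the treatment-by-factor interaction d1 - d2 lies within +/- margin.
    d1, d2: subgroup treatment-effect estimates; se1, se2: their SEs."""
    diff = d1 - d2
    se = (se1**2 + se2**2) ** 0.5
    z = NormalDist().inv_cdf(1 - alpha)          # one-sided critical value
    lower, upper = diff - z * se, diff + z * se
    return (-margin < lower) and (upper < margin)
```

This is the interval-inclusion form of the two one-sided tests: each one-sided test at level α corresponds to one end of the (1 - 2α) interval falling inside the margin.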
- Published
- 2009
- Full Text
- View/download PDF
31. On Statistical Evaluation of the Linearity in Assay Validation
- Author
-
Jen-pei Liu and Eric Hsieh
- Subjects
Pharmacology ,Statistics and Probability ,Models, Statistical ,Alternative hypothesis ,Linear model ,Reproducibility of Results ,Linearity ,Sampling distribution ,Statistics ,Linear Models ,Computer Simulation ,Pharmacology (medical) ,Limit (mathematics) ,Point estimation ,Algorithm ,Statistical hypothesis testing ,Type I and type II errors ,Mathematics - Abstract
Linearity is one of the most important characteristics for evaluation of accuracy in assay validation. The statistical method for evaluating linearity currently recommended by the Clinical and Laboratory Standards Institute (CLSI) guideline EP6-A is reviewed. That method directly compares the point estimates with the pre-specified allowable limit and completely ignores the sampling error of the point estimates. An alternative method, proposed by Kroll et al. (2000), is a statistical test procedure based on the average deviation from linearity (ADL); however, this procedure rests on an inappropriate formulation of the hypotheses for the evaluation of linearity. Consequently, the type I error rates of both current methods may be inflated for inference about linearity. To claim linearity of an analytical method, we propose that the hypothesis of proving linearity be formulated as the alternative hypothesis. Furthermore, any procedure for assessing linearity should be based on the sampling distribution of the proposed test statistic. We therefore propose a two one-sided tests (TOST) procedure and a corrected Kroll's procedure. Simulation studies were conducted to empirically compare the size and power of the current and proposed methods. The simulation results show that the proposed methods not only adequately control the size but also provide sufficient power. A numerical example illustrates the proposed methods.
- Published
- 2008
- Full Text
- View/download PDF
32. Relationship Between Age at Menarche and Skeletal Maturation Stages in Taiwanese Female Orthodontic Patients
- Author
-
Jenny Zwei-Chieng Chang, Chung-Chen Jane Yao, Shih-Jaw Tsai, Yi-Jane Chen, Jen-pei Liu, Eddie Hsiang-Hua Lai, and Chun-Pin Lin
- Subjects
medicine.medical_specialty ,Pediatrics ,Adolescent ,media_common.quotation_subject ,Taiwan ,Orthodontics ,Menstruation ,Age Determination by Skeleton ,Female patient ,wrist ,Humans ,Medicine ,Girl ,Child ,media_common ,Medicine(all) ,lcsh:R5-920 ,Bone Development ,hand bones ,business.industry ,menarche ,Mean age ,General Medicine ,cervical vertebrae ,University hospital ,musculoskeletal development ,Surgery ,Skeletal maturation ,Bone maturation ,Menarche ,Female ,business ,lcsh:Medicine (General) - Abstract
Background/Purpose: The age at menarche reflects a pubertal girl's physiologic maturity. The aims of this study were to evaluate the relationship between the age at menarche and skeletal maturation in female orthodontic patients. Methods: Hand-wrist radiographs and lateral cephalometric radiographs from 304 adolescent female subjects (age, 8–18.9 years) were selected from the files of the Department of Orthodontics, National Taiwan University Hospital (NTUH). Hand-wrist bone maturation stages were assessed using the NTUH Skeletal Maturation Index (NTUH-SMI). Cervical vertebral maturation stages (CVMS) were determined using the latest CVMS Index. Menarcheal ages were self-reported by the patients and verified by the patients' mothers. The relationships between the NTUH-SMI or CVM stages and menarcheal status were investigated. Results: More than 90% of the 148 subjects who had already attained menstruation had skeletal maturation beyond NTUH-SMI stage four or CVMS III. In contrast, the subjects who had never experienced menarche mostly had skeletal maturation before NTUH-SMI stage five or CVMS IV. During the period of orthodontic treatment, 19 females experienced their menarche. The mean age at menarche for the 167 female patients in total was 11.97 years. On average, menarche occurred between NTUH-SMI stages four and five or between CVM stages III and IV. The percentage of girls with menses increased from 1.2% at age 9 to 6.6% at age 10, 39.5% at age 11, 81.4% at age 12, 97% at age 13, and 100% at age 14. Compared with the results obtained 20 years previously, we found a downward shift of 0.47 years per decade in the mean age at menarche in female orthodontic patients. Conclusion: The majority of female orthodontic patients have passed the pubertal growth spurt when they experience their menarche. Menarche usually follows the pubertal growth spurt by about 1 year and occurs after NTUH-SMI stage four or CVMS III.
- Published
- 2008
- Full Text
- View/download PDF
33. A Two-Stage Design for Drug Screening Trials Based on Continuous Endpoints
- Author
-
Jen-pei Liu, Hsiao-Hui Tsou, Chin-Fu Hsiao, and Shein-Chung Chow
- Subjects
Computer science ,Process (engineering) ,Public Health, Environmental and Occupational Health ,Pharmacology (nursing) ,Minimax ,Reliability engineering ,Clinical trial ,Sample size determination ,Drug Guides ,Screening design ,Pharmacology (medical) ,Operations management ,Duration (project management) ,Type I and type II errors - Abstract
Pharmaceutical development is a risky, complex, costly, and time-consuming endeavor. More than half of the development duration is spent in clinical trials. Despite the large number of potential candidates available and the lengthy process of clinical development, the success rate is disappointing. Hence, there is an urgent need for new strategies and methodology for efficient and cost-effective designs to screen potential candidates, based on the idea of proof of concept for efficacy, in a rapid and reliable manner, so as to minimize the total sample size and hence shorten the duration of the trials. In this article, a two-stage screening design based on continuous efficacy endpoints is proposed. The proposed two-stage screening designs minimize the expected sample size when the new candidate has low efficacy activity, subject to constraints on the type I and type II error rates. In addition, two-stage screening designs that minimize the maximum sample size (minimax designs) are presented. Tables of two-stage and minimax designs for various combinations of design parameters are also provided. Applications to the phase 1 and 2 stages of clinical development are illustrated.
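The operating characteristics of such a two-stage rule (stop for futility after stage 1, otherwise test the cumulative statistic) can be checked by simulation. The sketch below uses illustrative z-statistic cutoffs `c1` and `c2` on a normal endpoint; the paper's designs optimize these constants and the stage sizes, which this sketch does not attempt.

```python
import numpy as np

def two_stage_oc(n1, n2, c1, c2, delta, sigma=1.0, n_sim=100_000, seed=11):
    """Simulated rejection probability and expected sample size of a
    two-stage screening rule: stop for futility after n1 subjects if the
    stage-1 z-statistic is below c1; otherwise continue to n1 + n2 and
    declare activity if the cumulative z-statistic is at least c2."""
    rng = np.random.default_rng(seed)
    m1 = rng.normal(delta, sigma / np.sqrt(n1), n_sim)   # stage-1 sample mean
    z1 = m1 * np.sqrt(n1) / sigma
    go = z1 >= c1                                        # trials that continue
    n = n1 + n2
    m2 = rng.normal(delta, sigma / np.sqrt(n2), n_sim)   # stage-2-only mean
    m = (n1 * m1 + n2 * m2) / n                          # cumulative mean
    z = m * np.sqrt(n) / sigma
    reject = go & (z >= c2)
    expected_n = n1 + n2 * go.mean()
    return reject.mean(), expected_n
```

Evaluating this at delta = 0 gives the type I error rate and the expected sample size for an inactive candidate, the quantity the expected-sample-size-optimal designs minimize; evaluating it at the clinically relevant delta gives the power.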
- Published
- 2008
- Full Text
- View/download PDF
34. A non-inferiority test for diagnostic accuracy based on the paired partial areas under ROC curves
- Author
-
Jen-pei Liu, Chi-Rong Li, and Chen-Tuo Liao
- Subjects
Male ,Statistics and Probability ,Likelihood Functions ,Receiver operating characteristic ,Diagnostic Tests, Routine ,Epidemiology ,Maximum likelihood ,Diagnostic accuracy ,Generalized p-value ,Statistics, Nonparametric ,Nominal level ,Test (assessment) ,Pancreatic Neoplasms ,Non inferiority ,ROC Curve ,Predictive Value of Tests ,Statistics ,Confidence Intervals ,Humans ,Computer Simulation ,Sensitivity (control systems) ,Mathematics - Abstract
Non-inferiority is a reasonable approach to assessing the diagnostic accuracy of a new diagnostic test if the test offers easier administration or reduced cost. The area under the receiver operating characteristic (ROC) curve is one of the common measures of overall diagnostic accuracy. However, it may not differentiate among ROC curves of various shapes with different diagnostic significance. The partial area under the ROC curve (PAUROC) presents an alternative that can provide additional, complementary information for diagnostic tests that require a false-positive rate not exceeding a certain level. Non-parametric and maximum likelihood methods can be used for non-inferiority tests based on the difference in paired PAUROCs, but their performance in finite samples has not been investigated. We propose to use the concept of the generalized p-value to construct a non-inferiority test for diagnostic accuracy based on the difference in paired PAUROCs. Simulation results show that the proposed non-inferiority test not only adequately controls the size at the nominal level but is also uniformly more powerful than the non-parametric methods. The proposed method is illustrated with a numerical example using published data.
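The nonparametric PAUROC itself is the trapezoidal area under the empirical ROC curve restricted to false-positive rates below a cap. A self-contained sketch (helper names are illustrative; the paper's generalized p-value test is built on top of estimates like this, not shown here):

```python
import numpy as np

def roc_curve(pos, neg):
    """Empirical ROC points (FPR, TPR) from diseased (pos) and
    non-diseased (neg) test scores; higher score = more suspicious."""
    scores = np.concatenate([pos, neg])
    labels = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    order = np.argsort(-scores, kind="stable")   # descending score
    labels = labels[order]
    tpr = np.concatenate([[0.0], np.cumsum(labels) / len(pos)])
    fpr = np.concatenate([[0.0], np.cumsum(1 - labels) / len(neg)])
    return fpr, tpr

def partial_auc(pos, neg, fpr_max):
    """Nonparametric (trapezoidal) partial area under the ROC curve
    over false-positive rates in [0, fpr_max]."""
    fpr, tpr = roc_curve(pos, neg)
    tpr_at = np.interp(fpr_max, fpr, tpr)   # interpolate the right boundary
    keep = fpr <= fpr_max
    f = np.append(fpr[keep], fpr_max)
    t = np.append(tpr[keep], tpr_at)
    return float(np.sum(np.diff(f) * (t[:-1] + t[1:]) / 2))
```

For a perfectly separating marker the TPR is 1 over the whole interval, so the PAUROC equals `fpr_max`; normalizing by `fpr_max` rescales the index to [0, 1].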
- Published
- 2008
- Full Text
- View/download PDF
35. Addressing Loss of Efficiency Due to Misclassification Error in Enriched Clinical Trials for the Evaluation of Targeted Therapies Based on the Cox Proportional Hazards Model
- Author
-
Jen-pei Liu, Kuan-Ting Lee, and Chen-An Tsai
- Subjects
medicine.medical_treatment ,Cancer Treatment ,lcsh:Medicine ,computer.software_genre ,Targeted therapy ,Medicine and Health Sciences ,Molecular Targeted Therapy ,Precision Medicine ,lcsh:Science ,Clinical Trials as Topic ,Multidisciplinary ,Pharmaceutics ,Simulation and Modeling ,Applied Mathematics ,Hazard ratio ,Variance (accounting) ,Oncology ,Feature (computer vision) ,Physical Sciences ,Statistics (Mathematics) ,Algorithms ,Research Article ,Clinical Oncology ,Accuracy and precision ,Drug Research and Development ,Machine learning ,Research and Analysis Methods ,Drug Therapy ,Diagnostic Medicine ,medicine ,Confidence Intervals ,Chemotherapy ,Humans ,Clinical Trials ,Computer Simulation ,Proportional Hazards Models ,Pharmacology ,Models, Statistical ,Proportional hazards model ,business.industry ,lcsh:R ,Precision medicine ,Probability Theory ,Probability Distribution ,Confidence interval ,lcsh:Q ,Artificial intelligence ,Clinical Medicine ,business ,computer ,Mathematics - Abstract
A key feature of precision medicine is that it takes individual variability at the genetic or molecular level into account in determining the best treatment for patients diagnosed with diseases detected by recently developed novel biotechnologies. The enrichment design is an efficient design that enrolls only the patients testing positive for specific molecular targets and randomly assigns them to the targeted treatment or the concurrent control. However, no diagnostic device detects molecular targets with perfect accuracy and precision. In particular, the positive predictive value (PPV) can be quite low for rare diseases with low prevalence. Under the enrichment design, some patients testing positive for specific molecular targets may not actually have the targets, so the efficacy of the targeted therapy may be underestimated in the patients who truly do have them. To address the loss of efficiency due to misclassification error, we apply the discrete mixture modeling for time-to-event data proposed by Eng and Hanlon [8] to develop an inferential procedure, based on the Cox proportional hazards model, for the effect of the targeted treatment in the true-positive patients with the molecular targets. Our proposed procedure incorporates both the inaccuracy of diagnostic devices and the uncertainty of the estimated accuracy measures. We employ the expectation-maximization algorithm in conjunction with the bootstrap technique to estimate the hazard ratio and its variance. We report the results of simulation studies that empirically investigated the performance of the proposed method, and illustrate it with a numerical example.
- Published
- 2016
36. Rethinking Statistical Approaches to Evaluating Drug Safety
- Author
-
Jen-pei Liu
- Subjects
Drug ,safety ,non-inferiority approach ,Biometry ,Injury control ,Drug-Related Side Effects and Adverse Reactions ,Accident prevention ,media_common.quotation_subject ,Poison control ,Safety margin ,Effectiveness ,Review Article ,Occupational safety and health ,Injury prevention ,Confidence Intervals ,Medicine ,no excessive risk ,Humans ,media_common ,Models, Statistical ,business.industry ,General Medicine ,Hazard ,Risk analysis (engineering) ,Research Design ,Data Interpretation, Statistical ,Drug Evaluation ,Controlled Clinical Trials as Topic ,business - Abstract
Purpose The current methods used to evaluate the safety of drug products are inadequate. We propose a non-inferiority approach to prove the safety of drugs. Materials and Methods Traditional hypotheses for the evaluation of the safety of drugs are based on proof of hazard, which has proven to be inadequate. Therefore, based on the concept of proof of safety, the non-inferiority hypothesis is employed to prove that the risk of a new drug does not exceed a pre-specified allowable safety margin, hence proving that the drug poses no excessive risk. The results from papers published on Vioxx® and Avandia® are used to illustrate the difference between the traditional approach for proof of hazard and the non-inferiority approach for proof of safety. Results The p-values from the traditional hypotheses were greater than 0.05 and failed to demonstrate that Vioxx® and Avandia® pose a cardiovascular hazard. However, these results cannot prove that Vioxx® and Avandia® carry no cardiovascular risk. The non-inferiority approach, on the other hand, can directly address whether they carry excessive cardiovascular risk. Conclusion The non-inferiority approach is appropriate to prove the safety of drugs.
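The proof-of-safety logic described above can be sketched as a one-sided test on the risk-ratio scale (a minimal Wald-type version with made-up event counts; this is not the analysis of the cited Vioxx®/Avandia® papers): "no excessive risk" is claimed only when the upper confidence limit of the risk ratio falls below the allowable safety margin.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def safety_noninferiority(x_new, n_new, x_ctrl, n_ctrl, margin=1.3, alpha=0.05):
    """Proof-of-safety (non-inferiority) test: claim 'no excessive risk' only
    when the one-sided upper (1 - alpha) confidence limit of
    RR = (x_new/n_new) / (x_ctrl/n_ctrl) falls below the safety margin."""
    rr = (x_new / n_new) / (x_ctrl / n_ctrl)
    se = sqrt(1 / x_new - 1 / n_new + 1 / x_ctrl - 1 / n_ctrl)  # delta-method SE of log RR
    z = NormalDist().inv_cdf(1 - alpha)
    upper = exp(log(rr) + z * se)
    return rr, upper, upper < margin
```

Note the asymmetry with proof of hazard: a non-significant two-sided test of RR = 1 says nothing about the upper confidence limit, which is what the safety claim rests on.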
- Published
- 2007
37. Two-phase survey of eating disorders in gifted dance and non-dance high-school students in Taiwan
- Author
-
Meg Mei Chih Tseng, Wei-Chu Chie, Ming-Been Lee, Jen-pei Liu, David Fang, and Wei J. Chen
- Subjects
medicine.medical_specialty ,Anorexia Nervosa ,Adolescent ,Personality Inventory ,Dance ,Cross-sectional study ,Taiwan ,Grammar school ,Personality Assessment ,Risk Factors ,Body Image ,medicine ,Humans ,Mass Screening ,Dancing ,Bulimia Nervosa ,Psychiatry ,Applied Psychology ,Mass screening ,Bulimia nervosa ,Child, Gifted ,Incidence ,Not Otherwise Specified ,medicine.disease ,Health Surveys ,Psychiatry and Mental health ,Eating disorders ,Cross-Sectional Studies ,Socioeconomic Factors ,Eating Attitudes Test ,Female ,Psychology ,Clinical psychology - Abstract
Background: Despite a growing body of literature reporting eating disorders (EDs) in non-Western countries in recent years, most of these studies are limited to questionnaire-based surveys or case-series studies. This study aimed to investigate the prevalence and correlates of EDs in Taiwanese high-school students. Methods: The study subjects consisted of all the female high-school students enrolled in the gifted dance class in 2003 in Taiwan (n=655) and non-dance female students randomly chosen from the same schools (n=1251). All the participants were asked to complete self-report questionnaires, including the 26-item Eating Attitudes Test (EAT-26) and the Bulimic Investigatory Test Edinburgh (BITE). All the screen positives and an approximately 10% random sample of the screen negatives were then interviewed using the Structured Clinical Interview for DSM-IV-TR Axis I Disorders Patient Version (SCID-I/P). Results: The prevalence of individual EDs was much higher in the dance students [0.7% for anorexia nervosa (AN), 2.5% for bulimia nervosa (BN) and 4.8% for EDs not otherwise specified (EDNOS)] than in the non-dance students (0.1%, 1.0% and 0.7%, respectively). Multivariate logistic regression analyses revealed that being in the dance class, higher concern about body shape and lower family support were correlates of EDs for all students, whereas lower parental education level was associated with EDs only for non-dance students. Conclusion: EDs were more prevalent in this weight-concerned subpopulation. Although AN is still rare, BN has emerged as a comparably prevalent disorder in Taiwan, as in Western countries.
- Published
- 2007
- Full Text
- View/download PDF
38. Noninferiority Tests Based on Concordance Correlation Coefficient for Assessment of the Agreement for Gene Expression Data from Microarray Experiments
- Author
-
Jen-pei Liu, Chen-Tuo Liao, and Chia-Ying Lin
- Subjects
Pharmacology ,Statistics and Probability ,Accuracy and precision ,Reproducibility ,Models, Statistical ,Microarray ,Gene Expression Profiling ,Reproducibility of Results ,Pearson product-moment correlation coefficient ,symbols.namesake ,Concordance correlation coefficient ,Data Interpretation, Statistical ,Gene expression ,Statistics ,Confidence Intervals ,Gene chip analysis ,symbols ,Pharmacology (medical) ,Oligonucleotide Array Sequence Analysis ,Mathematics - Abstract
The microarray is one of the breakthrough technologies of the twenty-first century. Despite its great potential, the translation of microarray technology into clinically useful commercial products has not been as rapid as the technology could promise. One of the primary reasons is the lack of agreement and the poor reproducibility of the intensity measurements of gene expression obtained from microarray experiments. Current practice often tests the hypothesis of a zero Pearson correlation coefficient to assess the agreement of gene expression levels between technical replicates from microarray experiments. However, the Pearson correlation coefficient evaluates the linear association between two variables and fails to take into account changes in accuracy and precision. Hence, it is not appropriate for evaluating the agreement of gene expression levels between technical replicates. We therefore propose to use the concordance correlation coefficient to assess this agreement. We also apply generalized pivotal quantities to obtain an exact confidence interval for the concordance correlation coefficient. In addition, based on the concept of the noninferiority test, a one-sided (1 - alpha) lower confidence limit for the concordance correlation coefficient is employed to test the hypothesis that the agreement of expression levels of the same genes between two technical replicates exceeds some minimal requirement of agreement. We conducted a simulation study, under various combinations of mean differences, variability, and sample size, to empirically compare the performance of the different methods for assessment of agreement in terms of coverage probability, expected length, size, and power. Numerical data from published papers illustrate the application of the proposed methods.
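The concordance correlation coefficient (CCC) referred to above is straightforward to compute, and a small example makes the contrast with the Pearson coefficient concrete: a replicate that is perfectly linearly related but shifted in location has Pearson r = 1 yet CCC < 1. A sketch with illustrative data:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements:
    unlike the Pearson coefficient, it penalizes shifts in location and scale,
    so it measures agreement rather than mere linear association."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()                 # 1/n variances, as in Lin (1989)
    sxy = ((x - mx) * (y - my)).mean()
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

A noninferiority claim would then compare a one-sided lower confidence limit for this quantity (e.g. via the generalized pivotal quantities the abstract describes) against the minimal agreement requirement.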
- Published
- 2007
- Full Text
- View/download PDF
39. On the exact interval estimation for the difference in paired areas under the ROC curves
- Author
-
Jen-pei Liu, Chi-Rong Li, and Chen-Tuo Liao
- Subjects
Statistics and Probability ,Clinical Trials as Topic ,Models, Statistical ,Receiver operating characteristic ,Diagnostic Tests, Routine ,Epidemiology ,Maximum likelihood ,Interval estimation ,Taiwan ,Data interpretation ,Measure (mathematics) ,Confidence interval ,ROC Curve ,Sample size determination ,Data Interpretation, Statistical ,Statistics ,Confidence Intervals ,Humans ,Area under the roc curve ,Mathematics - Abstract
An important measure for comparing the accuracy of two diagnostic procedures is the difference in paired areas under the receiver operating characteristic (ROC) curves. Non-parametric and maximum likelihood methods have been proposed for interval estimation of the difference in paired areas under ROC curves. However, these two methods are asymptotic procedures, and their performance in finite sample sizes has not been thoroughly investigated. We propose to use the concept of generalized pivotal quantities (GPQs) to construct an exact confidence interval for the difference in paired areas under ROC curves. A simulation study is conducted to empirically investigate the coverage probability and expected length of the three methods for various combinations of sample sizes, areas under the ROC curve, and correlations. Simulation results demonstrate that the exact confidence interval based on the concept of GPQs provides not only sufficient coverage probability but also reasonable expected length. Numerical examples using published data sets illustrate the proposed method.
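The estimand here, the difference in paired AUCs, can be illustrated with the nonparametric (Mann–Whitney) AUC and a percentile-bootstrap interval (a stand-in for intuition with hypothetical data; the paper's exact GPQ interval is a different, more involved construction). Resampling subjects jointly preserves the within-subject pairing of the two diagnostic tests:

```python
import numpy as np

def auc_mw(neg, pos):
    """Nonparametric AUC as the Mann-Whitney probability that a diseased
    subject's score exceeds a non-diseased subject's score (ties count 1/2)."""
    neg, pos = np.asarray(neg, float), np.asarray(pos, float)
    d = pos[:, None] - neg[None, :]
    return float(((d > 0) + 0.5 * (d == 0)).mean())

def paired_auc_diff_ci(neg1, pos1, neg2, pos2, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap interval for the difference in paired AUCs;
    subjects are resampled jointly so the pairing of the two tests is kept."""
    rng = np.random.default_rng(seed)
    neg1, pos1 = np.asarray(neg1, float), np.asarray(pos1, float)
    neg2, pos2 = np.asarray(neg2, float), np.asarray(pos2, float)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, len(pos1), len(pos1))   # resample diseased subjects
        j = rng.integers(0, len(neg1), len(neg1))   # resample non-diseased subjects
        diffs[b] = auc_mw(neg1[j], pos1[i]) - auc_mw(neg2[j], pos2[i])
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
```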
- Published
- 2007
- Full Text
- View/download PDF
40. Use of Prior Information for Bayesian Evaluation of Bridging Studies
- Author
-
Yu-Yi Hsu, Chin-Fu Hsiao, Hsiao-Hui Tsou, and Jen-pei Liu
- Subjects
Statistics and Probability ,Research design ,Internationality ,Bayesian probability ,Population ,Extrapolation ,computer.software_genre ,Bridging (programming) ,Bayes' theorem ,Humans ,Medicine ,Pharmacology (medical) ,education ,Prior information ,Pharmacology ,Clinical Trials as Topic ,education.field_of_study ,Models, Statistical ,business.industry ,Bayes Theorem ,Research Design ,Sample size determination ,Sample Size ,Data mining ,business ,computer ,Algorithms - Abstract
The ICH E5 guideline defines a bridging study as a supplementary study conducted in the new region to provide pharmacodynamic or clinical data on efficacy, safety, dosage, and dose regimen to allow extrapolation of the foreign clinical data to the population of the new region. A bridging study is therefore usually conducted in the new region only after the test product has been approved for commercial marketing in the original region based on its proven efficacy and safety. In this paper we address the analysis of clinical data generated by a bridging study conducted in the new region to evaluate similarity for extrapolation of the foreign clinical data to the population of the new region. Information on efficacy, safety, dosage, and dose regimen in the original region cannot be obtained concurrently from the local bridging studies, but is available from the trials conducted in the original region. Liu et al. (2002) proposed a Bayesian approach to synthesize the data generated by the bridging study with the foreign clinical data generated in the original region for assessment of similarity based on superior efficacy of the test product over a placebo control. However, the results of bridging studies using their approach will be overwhelmingly dominated by the results of the original region because of the imbalance in sample sizes between the regions. We therefore propose a Bayesian approach using a mixture prior for assessment of similarity between the new and original regions based on the concept of a positive treatment effect. Methods for sample size determination for the bridging study are also proposed. Numerical examples illustrate applications of the proposed procedures in different scenarios.
- Published
- 2007
- Full Text
- View/download PDF
41. Tests of equivalence and non-inferiority for diagnostic accuracy based on the paired areas under ROC curves
- Author
-
Mi Chia Ma, Jia Yen Tai, Chin Yu Wu, and Jen-pei Liu
- Subjects
Statistics and Probability ,Models, Statistical ,Receiver operating characteristic ,Epidemiology ,Nonparametric statistics ,Diagnostic accuracy ,Diagnostic Services ,Sensitivity and Specificity ,Data type ,Statistics, Nonparametric ,Nominal level ,ROC Curve ,Therapeutic Equivalency ,Research Design ,Sample size determination ,Data Interpretation, Statistical ,Sample Size ,Statistics ,Humans ,Computer Simulation ,Equivalence (measure theory) ,Randomized Controlled Trials as Topic ,Mathematics ,Type I and type II errors - Abstract
Assessment of equivalence or non-inferiority in accuracy between two diagnostic procedures often involves comparing paired areas under the receiver operating characteristic (ROC) curves. With pre-specified clinically meaningful limits, the current approach to evaluating equivalence is to perform two one-sided tests (TOST) based on the difference in paired areas under ROC curves estimated by the non-parametric method. We propose to use the standardized difference for assessing equivalence or non-inferiority in diagnostic accuracy based on paired areas under ROC curves between two diagnostic procedures. The bootstrap technique is also suggested for both the non-parametric method and the standardized difference approach. A simulation study was conducted to empirically investigate the size and power of the four methods for various combinations of distributions, data types, sample sizes, and correlations. Simulation results demonstrate that the bootstrap procedure of the standardized difference approach not only adequately controls the type I error rate at the nominal level but also provides equivalent power under both symmetrical and skewed distributions. A numerical example using published data illustrates the proposed methods.
- Published
- 2006
- Full Text
- View/download PDF
42. An Alternative Approach to Evaluation of Poolability for Stability Studies
- Author
-
Jen-pei Liu, Yun-Ming Pong, and Sheng-Che Tung
- Subjects
Pharmacology ,Statistics and Probability ,Analysis of covariance ,Models, Statistical ,Pooling ,Regression analysis ,Stability (probability) ,Drug Stability ,Meta-Analysis as Topic ,Research Design ,Statistics ,Econometrics ,Computer Simulation ,Pharmacology (medical) ,Time point ,Bracketing ,Null hypothesis ,Equivalence (measure theory) ,Mathematics - Abstract
The current method for pooling data from different batches or factors, suggested by the ICH Q1E guidance, is to use analysis of covariance (ANCOVA) to test for interaction between the factors and the slopes and intercepts. Failure to reject the null hypothesis of equality of slopes and equality of intercepts, however, does not prove that the slopes and intercepts from different levels of the factors are the same and that the data can be pooled for estimation of shelf life. In addition, the ANCOVA approach uses the indirect parameters of intercepts and slopes in the regression model to assess poolability. We therefore formulate the hypothesis of poolability on the basis of the concept of equivalence of the means among the distributions of the quantitative attributes at a particular time point. Methods based on the intersection-union procedure are proposed to test this hypothesis of equivalence. A large simulation study was conducted to empirically investigate the size and power of the proposed method for the bracketing and matrixing designs given in the ICH Q1D guidance. Simulation results show that the proposed method adequately controls the size and provides sufficient power when the number of factors considered is fewer than three. A numerical example using published data illustrates the proposed method.
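The intersection-union logic can be sketched as follows (Python, with an assumed known standard error and an illustrative equivalence limit; this is a schematic, not the paper's exact procedure): each pairwise comparison must pass its own two one-sided tests, and poolability is claimed only if every one of them does.

```python
from statistics import NormalDist

def tost_equivalent(mean_diff, se, limit, alpha=0.05):
    """Two one-sided tests (TOST): conclude |true difference| < limit only if
    both one-sided hypotheses are rejected, i.e. the (1 - 2*alpha) CI for the
    difference lies entirely inside (-limit, limit)."""
    z = NormalDist().inv_cdf(1 - alpha)
    return (mean_diff - z * se > -limit) and (mean_diff + z * se < limit)

def poolable_iut(batch_means, se, limit, alpha=0.05):
    """Intersection-union test for poolability: the overall claim holds only
    if EVERY pairwise batch difference passes its own TOST, which keeps the
    overall size at alpha without a multiplicity adjustment."""
    m = list(batch_means)
    return all(tost_equivalent(m[i] - m[j], se, limit, alpha)
               for i in range(len(m)) for j in range(i + 1, len(m)))
```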
- Published
- 2006
- Full Text
- View/download PDF
43. Tests for Equivalence Based on Odds Ratio for Matched-Pair Design
- Author
-
Jen-pei Liu, Hsin Yi Fan, and Mi Chia Ma
- Subjects
Pharmacology ,Statistics and Probability ,Score test ,Likelihood Functions ,Models, Statistical ,Restricted maximum likelihood ,Matched-Pair Analysis ,Odds ratio ,Confidence interval ,Therapeutic Equivalency ,Research Design ,Sample size determination ,Sample Size ,Statistics ,Odds Ratio ,Diagnostic odds ratio ,Econometrics ,Computer Simulation ,Pharmacology (medical) ,Equivalence (measure theory) ,Mathematics ,Type I and type II errors - Abstract
Currently, methods for evaluating equivalence under a matched-pair design use either the difference in proportions or the relative risk as the measure of risk association. However, these measures of association apply only to cross-sectional studies or prospective investigations such as clinical trials, and cannot be applied to retrospective research such as case-control studies. As a result, under a matched-pair design, we propose the use of the conditional odds ratio for assessment of equivalence in both prospective and retrospective research. We suggest the use of the asymptotic confidence interval of the conditional odds ratio for evaluation of equivalence. In addition, a score test based on the restricted maximum likelihood estimator (RMLE) is derived to test the hypothesis of equivalence under a matched-pair design, and a sample size formula is also provided. A simulation study was conducted to empirically investigate the size and power of the proposed procedures. Simulation results show that the score test not only adequately controls the Type I error but also provides sufficient power. A numerical example illustrates the proposed methods.
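For intuition, the conditional odds ratio in a matched-pair design depends only on the two discordant-pair counts, and an equivalence claim checks that its confidence interval sits inside symmetric limits. The sketch below uses a simple Wald interval on the log scale (illustrative counts; the paper's RMLE-based score test is a different, more refined statistic):

```python
from math import exp, log, sqrt
from statistics import NormalDist

def matched_pair_or_equivalence(b, c, limit=2.0, alpha=0.05):
    """Equivalence via the conditional odds ratio OR = b/c, where b and c are
    the counts of the two kinds of discordant matched pairs. Equivalence is
    claimed when the (1 - 2*alpha) Wald CI for OR lies inside (1/limit, limit)."""
    or_hat = b / c
    se = sqrt(1 / b + 1 / c)                    # SE of the log conditional OR
    z = NormalDist().inv_cdf(1 - alpha)
    lo = exp(log(or_hat) - z * se)
    hi = exp(log(or_hat) + z * se)
    return or_hat, (lo, hi), (lo > 1 / limit and hi < limit)
```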
- Published
- 2005
- Full Text
- View/download PDF
44. Better prediction of prognosis for patients with nasopharyngeal carcinoma using primary tumor volume
- Author
-
Jen-pei Liu, Mu-Kuan Chen M.D., Tony Hsiu Hsi Chen, Wei-Chu Chie, and Cheng-Chuan Chang
- Subjects
Male ,Oncology ,Cancer Research ,medicine.medical_specialty ,Sensitivity and Specificity ,Cohort Studies ,Predictive Value of Tests ,Internal medicine ,medicine ,Carcinoma ,Humans ,Stage (cooking) ,Neoplasm Staging ,Retrospective Studies ,AJCC staging system ,Receiver operating characteristic ,business.industry ,Hazard ratio ,Cancer ,Nasopharyngeal Neoplasms ,Middle Aged ,Prognosis ,medicine.disease ,Primary tumor ,Survival Rate ,Treatment Outcome ,Nasopharyngeal carcinoma ,Female ,Tomography, X-Ray Computed ,Nuclear medicine ,business - Abstract
BACKGROUND. Heterogeneity of primary tumor volume within tumors of the same classification indicates a need to elucidate the effects of primary tumor volume on treatment outcomes in patients with nasopharyngeal carcinoma (NPC). METHODS. From 1994 through 1996, 129 patients with newly diagnosed NPC who were treated with high-dose radiotherapy were enrolled in the study. Computed tomography-derived primary tumor volume was measured using the summation-of-area technique. Correlations between American Joint Committee on Cancer (AJCC) disease stage, primary tumor volume, and disease-specific survival were assessed using a Cox regression model. Cross-validation based on the receiver operating characteristic (ROC) curve was also examined. RESULTS. Compared with the AJCC staging system and the TNM classification system, primary tumor volume was better at determining cumulative survival for patients with NPC. Hazard ratios increased with tumor volume: 6.68 (95% confidence interval [95% CI], 1.89–23.67) for tumor volumes between 20–40 mL, 18.03 (95% CI, 4.80–67.75) for tumor volumes between 40–60 mL, and 26.06 (95% CI, 7.70–88.20) for tumor volumes >60 mL. With both tumor volume and T classification in the same Cox regression model, only tumor volume remained statistically significant in the prognosis of NPC. The validation results with ROC curves also revealed that, in predicting patient outcome, primary tumor volume (area under the ROC curve, 83.33%) was superior to disease stage (area under the ROC curve, 66.53%) and TNM classification (area under the ROC curve, 58.61%). CONCLUSIONS. The incorporation of primary tumor volume may lead to a further refinement of the current AJCC staging system, particularly for patients with large primary tumor volumes (>60 mL), who require more aggressive treatment. Cancer 2004;100:2160–6. © 2004 American Cancer Society.
- Published
- 2004
- Full Text
- View/download PDF
45. Design and Analysis of Bridging Studies
- Author
-
Jen-pei Liu, Shein-Chung Chow, and Chin-Fu Hsiao
- Subjects
- Clinical Trials as Topic--standards, Drug Evaluation, Preclinical--standards, Biostatistics--methods, Guidelines as Topic, Internationality, Research Design
- Abstract
As the development of medicines has become more globalized, the geographic variations in the efficacy and safety of pharmaceutical products need to be addressed. To accelerate the product development process and shorten approval time, researchers are beginning to design multiregional trials that incorporate subjects from many countries around the world.
- Published
- 2013
46. Simultaneous Non-inferiority Test of Sensitivity and Specificity for Two Diagnostic Procedures in the Presence of a Gold Standard
- Author
-
Huey-Miin Hsueh, James J. Chen, and Jen-pei Liu
- Subjects
Statistics and Probability ,Sample size determination ,Statistics ,Test statistic ,General Medicine ,Gold standard (test) ,Sensitivity (control systems) ,Statistics, Probability and Uncertainty ,Likelihood ratios in diagnostic testing ,Statistic ,Mathematics ,Statistical hypothesis testing ,Type I and type II errors - Abstract
Sensitivity and specificity have traditionally been used to assess the performance of a diagnostic procedure. Diagnostic procedures with both high sensitivity and high specificity are desirable, but such procedures are frequently too expensive, hazardous, and/or difficult to operate. A less sophisticated procedure may be preferred if the loss of sensitivity or specificity is determined to be clinically acceptable. This paper addresses the problem of simultaneously testing the sensitivity and specificity of an alternative test procedure against a reference test procedure when a gold standard is present. The hypothesis is formulated as a compound hypothesis of two non-inferiority (one-sided equivalence) tests. We present an asymptotic test statistic based on the restricted maximum likelihood estimate in the framework of comparing two correlated proportions under prospective and retrospective sampling designs. The sample size and power of the asymptotic test statistic are derived. The actual type I error and power are calculated by enumerating the exact probabilities in the rejection region. For applications that require high sensitivity as well as high specificity, a large number of positive subjects and a large number of negative subjects are needed. We also propose a weighted-sum statistic as an alternative test, comparing a combined measure of the sensitivity and specificity of the two procedures. The sample size determination is independent of the sampling plan for the two tests.
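The compound (intersection-union style) structure of the hypothesis can be illustrated with a simplified version: here the new test is compared against fixed reference sensitivity and specificity values using one-sided Wald limits (illustrative counts and margins; the paper's statistic handles two correlated proportions via restricted maximum likelihood, which this sketch does not attempt). Both components must clear their margins for the overall claim.

```python
from math import sqrt
from statistics import NormalDist

def se_sp_noninferior(tp, fn, tn, fp, se_ref, sp_ref,
                      margin_se=0.05, margin_sp=0.05, alpha=0.05):
    """Compound non-inferiority claim for a diagnostic test against fixed
    reference performance: BOTH one-sided lower Wald limits of sensitivity
    and specificity must clear their respective margins (intersection-union
    style, so no multiplicity adjustment is needed)."""
    z = NormalDist().inv_cdf(1 - alpha)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lo_se = sens - z * sqrt(sens * (1 - sens) / (tp + fn))
    lo_sp = spec - z * sqrt(spec * (1 - spec) / (tn + fp))
    return lo_se > se_ref - margin_se and lo_sp > sp_ref - margin_sp
```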
- Published
- 2003
- Full Text
- View/download PDF
47. Authors' reply to the letter to the editor by L. Chen and Y. X. Liu
- Author
-
Jen-pei Liu, Shein-Chung Chow, and Jr-Rung Lin
- Subjects
Pharmacology ,Statistics and Probability ,Letter to the editor ,Philosophy ,Pharmacology (medical) ,Theology - Published
- 2012
- Full Text
- View/download PDF
48. Sample Size Requirements for Evaluation of Bridging Evidence
- Author
-
Huey-Miin Hsueh, Jen-pei Liu, and James J. Chen
- Subjects
Statistics and Probability ,Bridging (networking) ,Similarity (network science) ,Computer science ,Sample size determination ,Statistics ,Extrapolation ,Randomized response ,Treatment effect ,General Medicine ,Statistics, Probability and Uncertainty ,Equivalence (measure theory) ,Hierarchical database model - Abstract
This paper addresses methodological issues concerning the sample size required for statistical evaluation of bridging evidence for registration of pharmaceutical products in a new region. The bridging data can come either from the Complete Clinical Data Package (CCDP) generated during clinical drug development for submission to the original region or from a bridging study conducted in the new region after the pharmaceutical product was approved in the original region. When the data are in the CCDP, a randomized parallel dose-response design stratified by ethnic factors and region will generate internally valid data for concurrently evaluating similarity between the regions, and hence the ability to extrapolate to the new region. A formula for the sample size under this design is derived. The sample size required for evaluation of similarity between the regions can be at least four times as large as that needed for evaluation of treatment effects alone. For a bridging study conducted in the new region, in which the data of the foreign and new regions are not generated concurrently, a hierarchical-model approach to incorporating the foreign bridging information into the data generated by the bridging study is suggested, and the required sample size is evaluated. In general, the required sample size for the bridging trials in the new region is inversely proportional to the equivalence limits, the variability of the primary endpoints, and the number of patients in the trials conducted in the original region.
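As a rough illustration of how equivalence limits and variability drive the required sample size (a textbook normal-approximation formula for a two-arm non-inferiority comparison of means with the true difference assumed zero; this is not the stratified-design formula derived in the paper):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(sd, margin, alpha=0.05, power=0.80):
    """Per-group sample size for a two-arm non-inferiority comparison of means
    (normal approximation, true difference assumed zero): n grows with the
    variance and shrinks with the square of the equivalence margin."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * (z_a + z_b) ** 2 * sd ** 2 / margin ** 2)
```

Halving the margin quadruples the requirement, which is the inverse-proportionality the abstract describes.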
- Published
- 2002
- Full Text
- View/download PDF
49. BRIDGING STUDIES IN CLINICAL DEVELOPMENT
- Author
-
Jen-pei Liu and Shein-Chung Chow
- Subjects
Statistics and Probability ,Bridging (networking) ,Drug Industry ,Drug-Related Side Effects and Adverse Reactions ,Population ,MEDLINE ,Ethnic group ,Guidelines as Topic ,Harmonization ,computer.software_genre ,Drug Therapy ,Ethnicity ,Medicine ,Pharmacology (medical) ,Product (category theory) ,education ,Pharmacology ,Clinical Trials as Topic ,education.field_of_study ,Models, Statistical ,business.industry ,Reproducibility of Results ,Bayes Theorem ,Guideline ,Research Design ,Engineering ethics ,Data mining ,International development ,business ,computer ,Algorithms - Abstract
Global development of pharmaceutical products has become key to the success of pharmaceutical sponsors. It is therefore crucial to address variations in the efficacy and safety of a new test pharmaceutical product among different geographic regions due to ethnic factors. Recently, geotherapeutics has attracted much attention from sponsors as well as regulatory authorities in different geographic regions. To address this issue, the International Conference on Harmonization (ICH) has published a guideline entitled "Ethnic Factors in the Acceptability of Foreign Clinical Data," known as the ICH E5 guideline. The ICH E5 guideline provides a general framework for evaluating the impact of ethnic factors on efficacy, safety, dosage, and dose regimen. We provide an overview of the ICH E5 guideline, including ethnic sensitivity, the necessity of bridging studies, types of bridging studies, and assessment of similarity between regions based on bridging evidence. In addition, challenges in establishing regulatory requirements, assessing bridging evidence, and designing and analyzing bridging studies are addressed.
- Published
- 2002
- Full Text
- View/download PDF
50. BAYESIAN APPROACH TO EVALUATION OF BRIDGING STUDIES
- Author
-
Huey-Miin Hsueh, Chin-Fu Hsiao, and Jen-pei Liu
- Subjects
Pharmacology ,Statistics and Probability ,Clinical Trials as Topic ,Bridging (networking) ,Dose-Response Relationship, Drug ,Computer science ,Bayesian probability ,Extrapolation ,Bayes Theorem ,computer.software_genre ,Similarity (network science) ,Sample size determination ,Sample Size ,Statistics ,Pharmacology (medical) ,Data mining ,computer ,Algorithms - Abstract
We address the analysis of clinical data generated by a bridging study conducted in the new region to evaluate similarity for extrapolation of the foreign clinical data. A bridging study is usually conducted in the new region only after the test product has been approved for commercial marketing in the original region based on its proven efficacy and safety, so sufficient information on efficacy, safety, dosage, and dose regimen has already been generated in the original region. An empirical Bayesian approach is proposed to synthesize the data generated by the bridging study with the foreign clinical data generated in the original region for assessment of similarity between the new and original regions. A method for sample size determination for the bridging study is also suggested. It can be shown that the total sample size is inversely proportional to the strength of the evidence for efficacy presented in the original region and to the proportion of patients assigned to receive the test product in the bridging study.
- Published
- 2002
- Full Text
- View/download PDF