6 results for "Sample size determination"
Search Results
2. Studies with surrogate endpoints: benefits and limits in clinical decision-making
- Author
- H.C. Bucher
- Subjects
Surrogate endpoint, MEDLINE, Disease, Quality of life (healthcare), Sample size determination, Internal Medicine, Intensive care medicine, Survival rate, Severe toxicity, Pharmaceutical industry - Abstract
Ideally, clinicians should base treatment decisions on results from randomised controlled trials that include patient-important outcomes, such as quality of life, prevented disease events, or death. Conducting such trials often requires large sample sizes and extended follow-up periods. Researchers have therefore substituted surrogate endpoints for patient-important outcomes in order to reduce sample size and observation time. Surrogate endpoints are outcomes that stand in for direct measures of how a patient feels, functions, or survives. In many countries, drugs are approved based on data from surrogate endpoint trials. Recently, a controversy has arisen over the reliability of results from these trials, driven by unanticipated side effects or severe toxicity that led to the withdrawal of drugs approved solely on evidence from surrogate endpoint trials. We present recent examples and criteria that clinicians can use to critically evaluate the validity of claims by experts or the pharmaceutical industry regarding the benefit patients can expect from drugs approved on the basis of surrogate endpoint trials.
- Published
- 2018
3. Smart sampling and incremental function learning for very large high dimensional data
- Author
- Mattia Pedergnana, Sebastian Gimeno Garcia, Diego G. Loyola R., Kenji Doya, and DeLiang Wang
- Subjects
Clustering high-dimensional data, Computer science, Cognitive Neuroscience, Probably approximately correct learning, Datasets as Topic, Computational intelligence, Sample (statistics), Machine learning, Artificial Intelligence, High dimensional function approximation, Artificial neural network, Sampling (statistics), Sampling discrepancy, Function (mathematics), Atmospheric processors, Probably approximately correct computation, Function learning, Sample size determination, Data mining, Neural Networks, Computer, Algorithms, Design of experiments, Neural networks - Abstract
Very large high dimensional data are common nowadays, and they impose new challenges on data-driven and data-intensive algorithms. Computational intelligence techniques have the potential to provide powerful tools for addressing these challenges, but the current literature focuses mainly on scalability issues related to data volume, in terms of sample size, for classification tasks. This work presents a systematic and comprehensive approach for optimally handling regression tasks with very large high dimensional data. The proposed approach is based on smart sampling techniques that minimise the number of samples to be generated, using an iterative procedure that creates new sample sets until the input and output spaces of the function to be approximated are optimally covered. Incremental function learning takes place in each sampling iteration; the new samples are used to fine-tune the regression results of the function learning algorithm. The accuracy and confidence levels of the resulting approximation function are assessed using the probably approximately correct computation framework. The smart sampling and incremental function learning techniques are easy to use in practical applications and scale well in the case of extremely large data. The feasibility and good results of the proposed techniques are demonstrated using benchmark functions as well as functions from real-world problems. (An illustrative sketch of the sample-fit-check loop follows this record.)
- Published
- 2016
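The iterative sample-generate-learn-check loop described in the abstract above can be illustrated with a short sketch. This is a minimal stand-in, not the authors' algorithm: the target function, batch size, tolerance, and the use of scikit-learn's SGDRegressor as the incremental learner (in place of the paper's neural network) are all assumptions for illustration.

```python
# Minimal sketch of a smart-sampling / incremental-learning loop, assuming a
# generic incremental regressor. Not the paper's exact method.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)

def target(X):
    # Hypothetical expensive function to approximate (stand-in for a real model).
    return np.sin(X).sum(axis=1)

dim, batch, eps = 6, 256, 0.05   # input dimension, samples per iteration, tolerance
model = SGDRegressor()           # linear placeholder for the paper's neural network

X_val = rng.uniform(-1, 1, size=(1024, dim))   # fixed validation set for the accuracy check
y_val = target(X_val)

for it in range(50):
    X_new = rng.uniform(-1, 1, size=(batch, dim))  # new samples covering the input space
    model.partial_fit(X_new, target(X_new))        # incremental learning on fresh samples only
    err = np.sqrt(np.mean((model.predict(X_val) - y_val) ** 2))
    if err < eps:                                  # crude stand-in for the PAC-style check
        break
print(f"stopped after {it + 1} iterations, validation RMSE = {err:.3f}")
```

The point of the design is that each iteration trains only on freshly generated samples, so the sample budget grows only until the accuracy check passes.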
4. The importance of sample size and power in clinical research
- Author
- Ulrike Held (University of Zurich)
- Subjects
Clinical trial, Actuarial science, Computer science, Order (business), Sample size determination, Treatment effect, Medicine & health, General Medicine, Clinic and Policlinic for Internal Medicine, Confidence interval, Outcome (probability) - Abstract
To assess the efficacy of a new therapy in clinical research, it is important to plan the sample size before the study begins. Based on the study design, the expected size of the treatment effect, its variability, the desired power, and the significance level, one can calculate how many patients must be enrolled in the study. Before a study starts, it is often difficult to specify these quantities sensibly, but the published literature usually provides helpful reference points. Sample size planning is necessary for both scientific and ethical reasons, so that the study question can be answered at all. (A minimal worked example follows this record.)
- Published
- 2014
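The calculation described in the abstract above follows a standard pattern: fix the expected treatment effect, its variability, the desired power, and the significance level, then solve for the number of patients. A minimal sketch using the normal-approximation formula for comparing two means; the effect size, standard deviation, power, and alpha below are illustrative assumptions, not values from the paper.

```python
# Approximate per-group sample size for a two-sample comparison of means,
# two-sided test, using the normal-approximation formula.
import math
from scipy.stats import norm

def n_per_group(delta, sd, power=0.8, alpha=0.05):
    """n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)^2 per group."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the significance level
    z_beta = norm.ppf(power)            # quantile corresponding to the target power
    return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

# Example: expected treatment effect of 5 units with SD 12, 80% power, alpha = 0.05
print(math.ceil(n_per_group(delta=5, sd=12)))   # about 91 patients per group
```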
5. Marginal inferences about variance components in a mixed linear model using Gibbs sampling
- Author
- J. J. Rutledge, Daniel Gianola, and C. S. Wang
- Subjects
Bayesian probability, Posterior probability, Life Sciences/Genetics/Animal genetics, Biology, Genetics, Applied mathematics, Genetics (clinical), Ecology, Evolution, Behavior and Systematics, Research, Linear model, General Medicine, Variance (accounting), Conditional probability distribution, Sample size determination, Animal Science and Zoology, Marginal distribution, Gibbs sampling - Abstract
Summary - Arguing from a Bayesian viewpoint, Gianola and Foulley (1990) derived a new method for estimating variance components in a mixed linear model: variance estimation from integrated likelihoods (VEIL). Inference is based on the marginal posterior distribution of each variance component. Exact analysis requires numerical integration. In this paper, the Gibbs sampler, a numerical procedure for generating marginal distributions from conditional distributions, is employed to obtain marginal inferences about variance components in a general univariate mixed linear model. All needed conditional posterior distributions are derived. Examples based on simulated data sets containing varying amounts of information are presented for a one-way sire model. Estimates of the marginal densities of the variance components and of functions thereof are obtained, and the corresponding distributions are plotted. Numerical results with a balanced sire model suggest that convergence to the marginal posterior distributions is achieved with a Gibbs sequence length of 20, and that Gibbs sample sizes ranging from 300 to 3 000 may be needed to characterise the marginal distributions appropriately. (A minimal sampler sketch follows this record.) Keywords: variance components / linear models / Bayesian methods / marginalization / Gibbs sampler
- Published
- 1993
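A minimal sketch of a Gibbs sampler for the balanced one-way sire model mentioned in the abstract above, y_ij = mu + s_i + e_ij with s_i ~ N(0, sigma_s^2) and e_ij ~ N(0, sigma_e^2). The flat prior on mu, the weak inverse-gamma priors on the variances, and all numeric settings are assumptions for illustration; the paper derives its own conditional posterior distributions.

```python
# Gibbs sampler sketch for a balanced one-way sire model (illustrative priors).
import numpy as np

rng = np.random.default_rng(1)

# Simulate a balanced design: q sires, n progeny each (illustrative values)
q, n = 50, 20
s_true = rng.normal(0, np.sqrt(0.25), q)
y = 10.0 + s_true[:, None] + rng.normal(0, 1.0, (q, n))

def inv_gamma(shape, scale):
    # If G ~ Gamma(shape, 1), then scale / G ~ InvGamma(shape, scale).
    return scale / rng.gamma(shape)

mu, s = y.mean(), np.zeros(q)
ss, se = 1.0, 1.0             # current sigma_s^2 and sigma_e^2
a, b = 0.01, 0.01             # weak inverse-gamma hyperparameters (assumed)
draws = []

for it in range(3000):
    # mu | rest: normal around the mean of the sire-adjusted data (flat prior)
    mu = rng.normal((y - s[:, None]).mean(), np.sqrt(se / y.size))
    # s_i | rest: precision-weighted shrinkage of sire means toward zero
    prec = n / se + 1 / ss
    mean_i = (n / se) * (y - mu).mean(axis=1) / prec
    s = rng.normal(mean_i, np.sqrt(1 / prec))
    # variance components | rest: conjugate inverse-gamma updates
    ss = inv_gamma(a + q / 2, b + (s ** 2).sum() / 2)
    se = inv_gamma(a + y.size / 2, b + ((y - mu - s[:, None]) ** 2).sum() / 2)
    if it >= 500:             # discard burn-in, keep the rest for marginal summaries
        draws.append((ss, se))

ss_draws, se_draws = np.array(draws).T
print(f"posterior means: sigma_s^2 ~ {ss_draws.mean():.3f}, sigma_e^2 ~ {se_draws.mean():.3f}")
```

The retained draws approximate the marginal posterior distributions of the two variance components, which is exactly the kind of output the paper summarises with density estimates and plots.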
6. On the problem of response in epidemiological studies in Germany (Part II)
- Author
- R. Holle, P. Kamtsiuris, U. Latza, Wolfgang Hoffmann, S. Sauer, Andreas Stang, Anja Kroke, M. Bergmann, and C. Terschüren
- Subjects
Estimation, Selection bias, Actuarial science, Population, Public Health, Environmental and Occupational Health, Medicine, Context (language use), Epidemiologic Measurements, German, Incentive, Sample size determination, Psychology, Education - Abstract
The first part of this paper introduced various definitions of response and discussed their significance in the context of different study types. This second part addresses incentives as a method to increase response and evaluates the impact of non-response or delayed response on the validity of study results. Recruitment aims at minimising the proportion of refusals. To achieve this, incentives can be used, and potential participants can be contacted in a sequence of increasing intensity. The effectiveness of different incentives was investigated within the pretest of the German survey on children and adolescents by the Robert Koch Institute. A low response is often interpreted as evidence of non-response bias. This assumption, however, is as incorrect as the opposite conclusion, that a high response guarantees valid results. Any study of the influence of non-response requires information on non-responders. The comparison between early and late responders, an indirect method of evaluating systematic differences between participants and non-participants by wave analysis, is demonstrated within the Northern Germany Leukaemia and Lymphoma Study (NLL). The German guidelines for Good Epidemiologic Practice recommend soliciting a minimum of information on the principal hypotheses of a study from non-participants. The example of a population-based health survey (Cooperative Health Research in the Region of Augsburg, KORA) illustrates how information on non-responders can be obtained within a quantitative non-responder analysis and used for the estimation of prevalences. Recommendations on how to deal with response in epidemiological studies in Germany are given. (An illustrative wave-analysis sketch follows this record.)
- Published
- 2004
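The wave analysis mentioned in the abstract above compares participants recruited at different contact intensities. A hypothetical sketch, assuming a simple table with a recruitment-wave column and a binary outcome; the column names and data are invented for illustration and are not from the NLL study.

```python
# Illustrative wave analysis: compare an outcome's prevalence across
# recruitment waves (early vs. late responders) as an indirect check
# for non-response bias. Data and column names are hypothetical.
import pandas as pd

# Hypothetical participant table: wave 1 = first contact, wave 3 = last reminder.
df = pd.DataFrame({
    "wave":    [1, 1, 1, 2, 2, 2, 3, 3, 3, 3],
    "outcome": [0, 1, 0, 1, 0, 1, 1, 1, 0, 1],
})

# Prevalence by wave: a trend from early to late waves suggests that
# non-responders (who would come "after" the last wave) may differ systematically.
print(df.groupby("wave")["outcome"].agg(n="size", prevalence="mean"))
```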