sample size


Statistical Methods and Software for Validation Studies on New In Vitro Toxicity Assays

Frank Schaarschmidt and Ludwig A. Hothorn

When a new in vitro assay method is introduced, it should be validated against the best available knowledge or a reference standard assay. For assays resulting in a simple binary outcome, the data can be displayed as a 2 × 2 table. Based on the estimated sensitivity and specificity, and the assumed prevalence of true positives in the population of interest, the positive and negative predictive values of the new assay can be calculated. We briefly discuss the experimental design of validation experiments and previously published methods for computing confidence intervals for predictive values. The application of the methods is illustrated for two toxicological examples, using tools available in the free software R: confidence intervals for predictive values are computed for a validation study of an in vitro test battery, and the sample size needed for a planned validation study is calculated.
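
As a rough, hypothetical illustration of the calculation described above (this is not code from the article, and all numerical values are invented), the predictive values can be obtained in R from an assumed sensitivity, specificity and prevalence:

## Hypothetical illustration: predictive values from assumed sensitivity,
## specificity and prevalence of true positives (all values invented).
sens <- 0.85   # assumed sensitivity of the new assay
spec <- 0.90   # assumed specificity of the new assay
prev <- 0.30   # assumed prevalence of true positives

## Positive predictive value: P(truly positive | assay positive)
ppv <- (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
## Negative predictive value: P(truly negative | assay negative)
npv <- (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)
round(c(PPV = ppv, NPV = npv), 3)

## Clopper-Pearson confidence intervals for sensitivity and specificity,
## estimated from a hypothetical 2 x 2 validation table
## (34 of 40 true positives and 36 of 40 true negatives classified correctly):
binom.test(34, 40)$conf.int   # sensitivity
binom.test(36, 40)$conf.int   # specificity

The article itself goes further, computing confidence intervals for the predictive values, which also reflect the uncertainty in the estimated sensitivity and specificity; the sketch above covers only point estimates and simple binomial intervals.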


Guidelines for the Design and Statistical Analysis of Experiments in Papers Submitted to ATLA

Michael F.W. Festing

In vitro experiments need to be well designed and correctly analysed if they are to achieve their full potential to replace the use of animals in research. An "experiment" is a procedure for collecting scientific data in order to test a hypothesis, or to provide material for generating new hypotheses, and differs from a survey because the scientist has control over the treatments that can be applied. Most experiments can be classified into one of a few formal designs, the most common being completely randomised and randomised block designs. Both are common in in vitro work, where experiments are often replicated in time and each replication can form a block. Some experiments involve a single independent (treatment) variable, whereas other "factorial" designs simultaneously vary two or more independent variables, such as drug treatment and cell line. Factorial designs often provide additional information at little extra cost. Experiments need to be carefully planned to avoid bias, be powerful yet simple, provide for a valid statistical analysis and, in some cases, have a wide range of applicability. Virtually all experiments need some sort of statistical analysis in order to take account of biological variation among the experimental subjects. Parametric methods using the t-test or analysis of variance are usually more powerful than non-parametric methods, provided the underlying assumptions of normality of the residuals and equality of variances are approximately valid. The statistical analyses of data from a completely randomised design and from a randomised block design are demonstrated in Appendices 1 and 2, and methods of determining sample size are discussed in Appendix 3. Appendix 4 gives a checklist for authors submitting papers to ATLA.
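
As a rough illustration of the kind of analysis described above (not reproduced from the Appendices; the data, treatment labels and effect sizes are simulated and hypothetical), a randomised block design can be analysed in R with a two-way analysis of variance that removes block-to-block variation before testing the treatment effect:

## Hypothetical, simulated example: randomised block design with
## independent replications in time treated as blocks.
set.seed(1)
dat <- expand.grid(treatment = factor(c("control", "low", "high")),
                   block     = factor(paste("run", 1:4)))

treat_eff <- c(control = 0, low = 1, high = 2)   # invented treatment effects
block_eff <- rnorm(4, sd = 0.5)                  # run-to-run (block) variation
dat$response <- 10 +
  treat_eff[as.character(dat$treatment)] +
  block_eff[as.integer(dat$block)] +
  rnorm(nrow(dat), sd = 0.5)                     # residual error

## Two-way ANOVA: the treatment effect is tested against the residual
## variation remaining after block differences are accounted for.
fit <- aov(response ~ block + treatment, data = dat)
summary(fit)

A completely randomised design follows the same pattern with the block term omitted, and a comparison of only two treatments can likewise be made with t.test in R.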