risk assessment

An Overall Strategy for the Testing of Chemicals for Human Hazard and Risk Assessment under the EU REACH System

Robert Combes, Martin Barratt and Michael Balls

In its White Paper, Strategy for a Future Chemicals Policy, published in 2001, the European Commission (EC) proposed the REACH (Registration, Evaluation and Authorisation of CHemicals) system to deal with both existing and new chemical substances. This system is based on a top-down approach to toxicity testing, in which the degree of toxicity information required is dictated primarily by production volume (tonnage). If testing is to be based on traditional methods, very large numbers of laboratory animals could be needed in response to the REACH system, causing ethical, scientific and logistical problems that would be incompatible with the time-schedule envisaged for testing. The EC has emphasised the need to minimise animal use, but has failed to produce a comprehensive strategy for doing so. The present document provides an overall scheme for predictive toxicity testing, whereby the non-animal methods identified and discussed in a recent and comprehensive ECVAM document could be used in a tiered approach to provide a rapid and scientifically justified basis for the risk assessment of chemicals for their toxic effects in humans. The scheme starts with a preliminary risk assessment process (involving available information on hazard and exposure), followed by testing based on physicochemical properties and (Q)SAR approaches. (Q)SAR analyses are used in conjunction with expert system and biokinetic modelling, and information on metabolism and identification of the principal metabolites in humans. The resulting information is then combined with production levels and patterns of use to assess potential human exposure. The nature and extent of any further testing should be based strictly on the need to fill essential information gaps in order to generate adequate risk assessments, and should rely on non-animal methods, as far as possible.
The scheme also includes a feedback loop, so that new information is used to improve the predictivity of computational expert systems. Several recommendations are made, the most important of which is that the European Union (EU) should actively promote the improvement and validation of (Q)SAR models and expert systems, and computer-based methods for biokinetic modelling, since these offer the most realistic and most economical solution to the need to test large numbers of chemicals.

Biokinetic and Toxicodynamic Modelling and its Role in Toxicological Research and Risk Assessment

Bas J. Blaauboer

Toxicological risk assessment for chemicals is still mainly based on highly standardised protocols for animal experimentation and exposure assessment. However, developments in our knowledge of general physiology, in chemicobiological interactions and in (computer-supported) modelling, have resulted in a tremendous change in our understanding of the molecular mechanisms underlying the toxicity of chemicals. This permits the development of biologically based models, in which the biokinetics as well as the toxicodynamics of compounds can be described. In this paper, the possibilities are discussed of developing systems in which the systemic (acute and chronic) toxicities of chemicals can be quantified without the heavy reliance on animal experiments. By integrating data derived from different sources, predictions of toxicity can be made. Key elements in this integrated approach are the evaluation of chemical functionalities representing structural alerts for toxic actions, the construction of biokinetic models on the basis of non-animal data (for example, tissue–blood partition coefficients, in vitro biotransformation parameters), tests or batteries of tests for determining basal cytotoxicity, and more-specific tests for evaluating tissue or organ toxicity. It is concluded that this approach is a useful tool for various steps in toxicological hazard and risk assessment, especially for those forms of toxicity for which validated in vitro and other non-animal tests have already been developed.
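The biokinetic models described above combine non-animal inputs such as tissue–blood partition coefficients and in vitro biotransformation parameters. As a minimal sketch of the idea (not the author's actual model), a one-compartment simulation can be built from two such inputs; all parameter names and values below are hypothetical illustrations:

```python
def simulate_plasma_conc(dose_mg, v_d_l, cl_l_per_h, t_end_h, dt_h=0.1):
    """Sketch: plasma concentration after an intravenous bolus dose.

    v_d_l      -- volume of distribution (L), derivable in principle from
                  tissue-blood partition coefficients
    cl_l_per_h -- clearance (L/h), derivable in principle from in vitro
                  biotransformation data (e.g. hepatocyte assays)
    """
    conc = dose_mg / v_d_l        # initial concentration (mg/L)
    k_el = cl_l_per_h / v_d_l     # first-order elimination rate (1/h)
    series = []
    t = 0.0
    while t <= t_end_h:
        series.append((round(t, 3), conc))
        conc -= k_el * conc * dt_h  # simple Euler integration step
        t += dt_h
    return series

# Hypothetical compound: 10 mg dose, V_d 42 L, clearance 21 L/h
profile = simulate_plasma_conc(dose_mg=10.0, v_d_l=42.0,
                               cl_l_per_h=21.0, t_end_h=4.0)
```

A physiologically based model would replace the single compartment with organ-level compartments linked by blood flows, but the inputs remain the same kind of non-animal data.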

Status and Prospects of In Vitro Tests in Risk Assessment

Kimmo Louekari

According to the new chemicals policy of the European Union (EU), most chemicals, i.e. the 20,000 chemicals manufactured or imported at 1–10 tons annually, should be tested primarily by using in vitro methods. Also, for other chemicals, the use of in vitro methods is encouraged in the testing strategies given in the draft EU legislation. However, the validation and international acceptance of in vitro tests has been slow. Only recently has the OECD approved four new in vitro test methods, validated by the European Centre for the Validation of Alternative Methods. An analysis of ten randomly selected risk assessment reports of the EU Existing Chemicals Risk Assessment Programme showed that in vitro studies, for example, on cytotoxicity to different cell cultures, cell transformation, metabolism and skin penetration (a total of 115 studies) were used for the assessments. Key metabolic pathways and mechanisms of toxicity have been elucidated, for some chemicals, by using in vitro methods. On the other hand, the results of in vitro studies were regarded as secondary or unreliable in some cases. For several toxic endpoints, in vitro methods will probably serve as screening tools and for mechanistic studies, while target organ toxicity or physiologically regulated adverse effects caused by long-term exposure are difficult to observe without the use of animal models.

The Feasibility of Replacing Animal Testing for Assessing Consumer Safety: A Suggested Future Direction

Julia Fentem, Mark Chamberlain and Bart Sangster

At present, we are unable to use much of the data derived from alternative (non-animal) tests for human health risk assessment. This brief Comment outlines why it is plausible that new paradigms could be developed to enable risk assessment to support consumer safety decisions, without the need to generate data in animal tests. The availability of technologies that did not exist 10 years ago makes this new approach possible. The approach is based on the concept that data and information derived from applying existing and new technologies to non-animal models can be interpreted in terms of harm and disease in man. A prerequisite is that similar data and information generated in a clinical setting are available to permit this “translation”. The incorporation of this additional translation step should make it possible to use data and information generated in non-animal models as inputs to risk assessment. The new technologies include genomics, transcriptomics, proteomics and metabonomics. Their application to in vitro and human “models” enables large amounts of data to be generated very quickly. The processing, interpretation and translation of these data need to be supported by powerful informatics capabilities and statistical tools. The use of integrated “systems biology” approaches will further support the interpretation by providing better understanding of the underlying biological complexity and mechanisms of toxicity. Clinical medicine is using the opportunities offered by the new ’omics’ technologies to advance the understanding of disease. The application of these technologies in clinical medicine will generate massive amounts of data that will need processing and interpretation to allow clinicians to better diagnose disease and understand the patients’ responses to therapeutic interventions. Support from clinical epidemiology will be essential. 
If these data and information can be made generally accessible in an ethical and legal way, they should also permit the “translation” of experimental non-animal data, so that they can then be used in risk assessment.

Animal Carcinogenicity Studies: 1. Poor Human Predictivity

Andrew Knight, Jarrod Bailey and Jonathan Balcombe

The regulation of human exposure to potentially carcinogenic chemicals constitutes society’s most important use of animal carcinogenicity data. Environmental contaminants of greatest concern within the USA are listed in the Environmental Protection Agency’s (EPA’s) Integrated Risk Information System (IRIS) chemicals database. However, of the 160 IRIS chemicals lacking even limited human exposure data but possessing animal data that had received a human carcinogenicity assessment by 1 January 2004, we found that in most cases (58.1%; 93/160), the EPA considered animal carcinogenicity data inadequate to support a classification of probable human carcinogen or non-carcinogen. For the 128 chemicals with human or animal data also assessed by the World Health Organisation’s International Agency for Research on Cancer (IARC), human carcinogenicity classifications were compatible with EPA classifications only for those 17 having at least limited human data (p = 0.5896). For those 111 primarily reliant on animal data, the EPA was much more likely than the IARC to assign carcinogenicity classifications indicative of greater human risk (p < 0.0001). The IARC is a leading international authority on carcinogenicity assessments, and its significantly different human carcinogenicity classifications of identical chemicals indicate that: 1) in the absence of significant human data, the EPA is over-reliant on animal carcinogenicity data; 2) as a result, the EPA tends to over-predict carcinogenic risk; and 3) the true predictivity for human carcinogenicity of animal data is even poorer than is indicated by EPA figures alone. The EPA policy of erroneously assuming that tumours in animals are indicative of human carcinogenicity is implicated as a primary cause of these errors.
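The headline proportion quoted above can be checked directly from the counts given in the abstract:

```python
# Counts taken from the abstract: of 160 IRIS chemicals with animal data
# but no meaningful human exposure data, 93 had animal carcinogenicity
# data judged inadequate by the EPA.
inadequate, total = 93, 160
pct = 100 * inadequate / total
assert round(pct, 1) == 58.1  # matches the 58.1% reported
```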

Animal Carcinogenicity Studies: 3. Alternatives to the Bioassay

Andrew Knight, Jarrod Bailey and Jonathan Balcombe

Conventional animal carcinogenicity tests take around three years to design, conduct and interpret. Consequently, only a tiny fraction of the thousands of industrial chemicals currently in use have been tested for carcinogenicity. Despite the costs of hundreds of millions of dollars and millions of skilled personnel hours, as well as millions of animal lives, several investigations have revealed that animal carcinogenicity data lack human specificity (i.e. the ability to identify human non-carcinogens), which severely limits the human predictivity of the bioassay. This is due to the scientific inadequacies of many carcinogenicity bioassays, and numerous serious biological obstacles, which render profoundly difficult any attempts to accurately extrapolate animal data in order to predict carcinogenic hazards to humans. Proposed modifications to the conventional bioassays have included the elimination of mice as a second species, and the use of genetically-altered or neonatal mice, decreased study durations, initiation–promotion models, the greater incorporation of toxicokinetic and toxicodynamic assessments, structure–activity relationship (computerised) systems, in vitro assays, cDNA microarrays for detecting changes in gene expression, limited human clinical trials, and epidemiological research. The potential advantages of non-animal assays when compared to bioassays include the superior human specificity of the results, substantially reduced time-frames, and greatly reduced demands on financial, personnel and animal resources. Inexplicably, however, the regulatory agencies have been frustratingly slow to adopt alternative protocols. In order to decrease the enormous cost of cancer to society, a substantial redirection of resources away from excessively slow and resource-intensive rodent bioassays, into the further development and implementation of non-animal assays, is both strongly justified and urgently required.

Integrated Testing Strategies for Use in the EU REACH System

Christina Grindon, Robert Combes, Mark T.D. Cronin, David W. Roberts and John F. Garrod

Integrated testing strategies have been proposed to facilitate the process of chemicals risk assessment to fulfil the requirements of the proposed EU REACH system. Here, we present individual, decision-tree style, strategies for the eleven major toxicity endpoints of the REACH system, including human health effects and ecotoxicity. These strategies make maximum use of non-animal approaches to hazard identification, before resorting to traditional animal test methods. Each scheme: a) comprises a mixture of validated and non-validated assays (distinguished in the schemes); and b) includes decision points at key stages to allow the cessation of further testing, should it be possible to use the available information to classify and label and/or undertake risk assessment. The rationale and scientific justification for each of the schemes, with respect to the validation status of the tests involved and their individual advantages and limitations, will be discussed in detail in a series of future publications.
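The tier-with-early-exit structure of such strategies can be sketched schematically. The tier names, their order, and the predicates below are illustrative assumptions, not the published schemes:

```python
def run_strategy(chemical, tiers):
    """Run assay tiers in order; stop at the first decision point where a
    classification sufficient for labelling/risk assessment is reached."""
    for name, assay in tiers:
        result = assay(chemical)
        if result is not None:   # decision point: enough to classify
            return name, result
    return "animal test required", None

# Hypothetical two-tier scheme: an in silico screen, then an in vitro assay.
tiers = [
    ("QSAR / expert system", lambda c: "corrosive" if c["logP"] > 5 else None),
    ("in vitro assay",       lambda c: "irritant" if c["cytotox"] else None),
]

outcome = run_strategy({"logP": 2.1, "cytotox": True}, tiers)
# -> ("in vitro assay", "irritant"): classified without an animal test
```

The key property, shared with the published decision trees, is that later (animal-based) tiers are reached only when earlier non-animal tiers cannot classify the chemical.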

Integrated Decision-tree Testing Strategies for Environmental Toxicity With Respect to the Requirements of the EU REACH Legislation

Christina Grindon, Robert Combes, Mark T.D. Cronin, David W. Roberts and John Garrod

Liverpool John Moores University and FRAME recently conducted a research project sponsored by Defra on the status of alternatives to animal testing with regard to the European Union REACH (Registration, Evaluation and Authorisation of Chemicals) system for safety testing and risk assessment of chemicals. The project covered all the main toxicity endpoints associated with the REACH system. This paper focuses on the prospects for using alternative methods (both in vitro and in silico) for environmental (aquatic) toxicity testing. The manuscript reviews tests based on fish cells and cell lines, fish embryos, lower organisms, and the many expert systems and QSARs for aquatic toxicity testing. Ways in which reduction and refinement measures can be used are also discussed, including the Upper Threshold Concentration — Step Down (UTC) approach, which has recently been retrospectively validated by ECVAM and subsequently endorsed by the ECVAM Scientific Advisory Committee (ESAC). It is hoped that the application of this approach could reduce the number of fish used in acute toxicity studies by around 65–70%. Decision-tree style integrated testing strategies are also proposed for acute aquatic toxicity and chronic toxicity (including bioaccumulation), followed by a number of recommendations for the future facilitation of aquatic toxicity testing with respect to environmental risk assessment.

Introduction to the EU REACH Legislation

Christina Grindon and Robert Combes

FRAME initiatives on the European Union REACH (Registration, Evaluation and Authorisation of Chemicals) system for the safety testing and risk assessment of chemicals, first proposed as a White Paper in 2001, are summarised. These initiatives considered the scientific and animal welfare issues raised by the REACH proposals, and resulted in a number of suggestions for improvement, many of which seem to have been adopted during the current progress of the legislation through the European Council and European Parliament.
