1.3.1 Single-Target Toxicity Concepts
The science and practice of toxicology over the past several decades have consistently relied on classic toxicological approaches, such as in vivo and in vitro toxicology studies, combined with predictive toxicological methodologies. The desired endpoints of in vivo animal research have been the determination of a toxic dose at which a chemical can be shown to induce pathologic effects after a specified duration of treatment or exposure. Where appropriate, these studies have included estimates of the lowest observed adverse effect level (LOAEL), the no observed adverse effect level (NOAEL), and the maximum tolerated dose (MTD).5,14 These adverse effect level estimates are traditionally used in drug research and development to predict the first dose in humans and to derive margin-of-safety estimates based on delivered dose and/or internal exposure from pharmacokinetic/pharmacodynamic (PK/PD) modeling, with extrapolation to clinical trial subjects. By regulatory requirement, all potential drugs undergoing research and development must undergo both in vitro and in vivo studies and, if the compound successfully reaches the clinical trial stage, will generate human exposure data with which to judge the adequacy of nonclinical data in predicting clinical outcomes. Uncertainties in these estimates include the definition of "adverse," which is specific to each organ system in each study and typically determined by the study pathologist; the accuracy of cross-species extrapolations (particularly rodent-to-human); and the true definition of risk–benefit for each individual drug. However, the generation of classical toxicology data does not assure accurate prediction of potential human toxicity. Sundqvist and colleagues15 have reported a human dose prediction process, supplemented by case studies, that integrates uncertainties into simplified plots for quantification.
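First-in-human dose prediction from an animal NOAEL is commonly performed by body-surface-area scaling followed by a default safety factor, as in the FDA's maximum recommended starting dose (MRSD) approach. The sketch below uses the standard Km conversion factors from that guidance; the 50 mg/kg rat NOAEL is a hypothetical illustration, not a value from this text.

```python
# Illustrative sketch of first-in-human dose estimation from an animal
# NOAEL using body-surface-area (Km) scaling, per the FDA MRSD approach.
# Km factors and the default 10-fold safety factor follow the FDA
# guidance on estimating the maximum safe starting dose; the NOAEL
# value used below is hypothetical.

KM = {"mouse": 3, "rat": 6, "dog": 20, "human": 37}  # kg/m^2 conversion factors

def human_equivalent_dose(noael_mg_per_kg: float, species: str) -> float:
    """Convert an animal NOAEL (mg/kg) to a human equivalent dose (mg/kg)."""
    return noael_mg_per_kg * KM[species] / KM["human"]

def max_recommended_starting_dose(noael_mg_per_kg: float, species: str,
                                  safety_factor: float = 10.0) -> float:
    """Apply the default 10-fold safety factor to the HED."""
    return human_equivalent_dose(noael_mg_per_kg, species) / safety_factor

hed = human_equivalent_dose(50.0, "rat")        # ~8.1 mg/kg
mrsd = max_recommended_starting_dose(50.0, "rat")  # ~0.81 mg/kg
print(f"HED = {hed:.2f} mg/kg, MRSD = {mrsd:.2f} mg/kg")
```

The safety factor may be raised or lowered case by case (e.g., for steep dose–response curves or irreversible toxicity), which is one way the uncertainties discussed above enter the calculation.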
Drug safety is recognized as one of the primary causes of attrition during the clinical phases of development; however, in numerous instances the actual determination of serious adverse effects occurs only after the drug reaches the market. In the United States, ~2 million patients are affected by drug-mediated adverse effects per year, of which ~5% are fatal.16 This places drug toxicity among the top five causes of death in the United States, and the costs to the health care system worldwide are estimated at US$40–50 billion per year.16 In drug development there are always risk–benefit considerations, which weigh any potential toxicity against the benefit a patient is expected to gain from taking the drug. An example of the uncertainty of these estimates can be seen in the methods used for carcinogenicity testing and evaluation for drug approval. The design of these studies relies on high-dose exposure in animals and default linear extrapolation procedures, while little consideration is given to many of the new advances in the toxicological sciences.17 Carcinogenicity studies are typically 2-year studies in rodents conducted with three dosage groups (low, mid, and high dose) and one or two concurrent control groups. Dose levels are established from previous studies, such as 13-week toxicity studies, in which an MTD has been estimated. Each group in the carcinogenicity study has 60–70 animals of each sex, and the assessment of potential carcinogenicity concern is based on an analysis of each tumor in each tissue or organ system individually by sex; certain tumors are combined via standardized procedures for statistical analysis. The analysis uses the historical database of the laboratory where the studies are conducted to determine whether each tumor is considered common or rare, using a background incidence of 1% as the standard.
Common tumors are those with a background incidence of 1% or greater, and rare tumors are those with a background incidence below 1%. In the statistical analysis, p-values for pairwise comparisons are evaluated for significance at 0.05 for rare tumors and 0.01 for common tumors. The rare vs. common classification rests on an arbitrary threshold, and adjustments to the classification of individual tumors, which can occur from laboratory to laboratory and across analyses of different control groups, can have consequences for the overall tumor evaluation outcome.8 Applying a "weight of evidence" approach in the evaluation procedures, particularly during regulatory review, attempts to alleviate some of these uncertainties; however, after more than 50 years of ongoing experience, these studies still fail to bring a 21st-century mindset to carcinogenicity testing.

The classic toxicological process for drug development assumes that a chemical interacts with highest affinity with a single macromolecule (the toxicological target), and that a single biological pathway is therefore perturbed by the initial target modulation. This is followed by downstream activation of secondary and possibly tertiary pathways that result in the tissue or organ effect indicated by key biomarkers.2 In this concept, the magnitude of toxicological effect is related to the concentration of altered molecular targets at the site of interest, which in turn is related to the concentration of the active form of the chemical (parent compound or metabolite) at the site where the molecular targets are located. Also included in this concept is the unique susceptibility of the organism exposed to the compound.
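The rare/common classification and the tiered significance thresholds described in this section amount to a simple decision rule, sketched below. The background incidences and p-values are hypothetical; in practice the p-values come from pairwise comparisons of treated groups against concurrent controls.

```python
# Minimal sketch of the rare-vs-common tumor classification and the
# pairwise significance thresholds described in the text: a 1% background
# incidence cutoff, with rare tumors tested at p < 0.05 and common
# tumors at p < 0.01. All numeric inputs below are hypothetical.

def classify_tumor(background_incidence: float) -> str:
    """Classify a tumor from its historical-control background incidence."""
    return "common" if background_incidence >= 0.01 else "rare"

def is_significant(background_incidence: float, pairwise_p: float) -> bool:
    """Apply the tiered decision rule: 0.05 for rare tumors, 0.01 for common."""
    threshold = 0.05 if classify_tumor(background_incidence) == "rare" else 0.01
    return pairwise_p < threshold

# The same p-value can be flagged or not depending on classification:
# a tumor with 0.5% background incidence is "rare", so p = 0.03 is
# significant; for a common tumor (3% background) it is not.
print(is_significant(0.005, 0.03))  # True
print(is_significant(0.03, 0.03))   # False
```

The worked example makes concrete why the arbitrary 1% threshold matters: reclassifying a tumor from common to rare, as can happen across laboratories or control groups, changes whether the same pairwise p-value is flagged.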
1.3.2 Toxicological Profiling for Potential Adverse Reactions
Predictive toxicology efforts in drug research and development involve the use of multiple sources of legacy data, including data generated by chemical and pharmaceutical companies and data submitted to regulatory agencies. These efforts have led to the "data warehouse" model, which includes data generated through high throughput and targeted screening, and in vitro and in vivo toxicology studies on thousands of compounds and structural analogues. In a majority of cases these data also include findings from clinical trials where an experimental drug was tested on humans.
The information is applied in a "backward" fashion to predict potential findings where data do not yet exist or where decisions are being made on new potential drug candidates. Bowes and colleagues18 have described a pharmacological profiling effort by four large pharmaceutical companies: AstraZeneca, GlaxoSmithKline, Novartis, and Pfizer. The companies suggest that ~75% of adverse drug reactions can be predicted by studying pharmacological profiles of candidate drugs. Pharmacological screening identifies primary effects related to the intended action of the candidate drug, whereas secondary effects arising from interactions with targets other than the primary (intended) target may be related to off-target adverse events. The groups have identified 44 screening targets, including 24 G-protein coupled receptors, eight ion channels, six intracellular enzymes, three neurotransmitter transporters, two nuclear receptors, and one kinase. These types of screening data are used in the data warehouse model, typically configured in a proprietary fashion within each company. Other collaborative efforts have been developed, and data from these sources would also be incorporated.
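Screening a candidate against such a panel lends itself to simple automated triage: potency at each panel target is compared with potency at the intended target, and small selectivity margins are flagged for follow-up. The sketch below illustrates the idea only; the 30-fold margin, the target names, and all IC50 values are hypothetical and are not taken from the profiling effort described above.

```python
# Illustrative off-target triage sketch. A candidate's potency (IC50, nM)
# at each safety panel target is compared with its potency at the intended
# primary target; targets within the selectivity margin are flagged as
# potential off-target liabilities. The 30-fold margin and all names and
# values here are hypothetical.

def off_target_flags(primary_ic50_nm: float,
                     panel_ic50_nm: dict,
                     min_fold_selectivity: float = 30.0) -> list:
    """Return panel targets whose IC50 falls within the selectivity margin."""
    return [target for target, ic50 in panel_ic50_nm.items()
            if ic50 / primary_ic50_nm < min_fold_selectivity]

# Candidate with 10 nM potency at its intended target: the 200 nM and
# 250 nM panel activities (20- and 25-fold margins) are flagged; the
# 15,000 nM activity (1500-fold) is not.
panel = {"hERG": 200.0, "5-HT2B": 15000.0, "COX-2": 250.0}
print(off_target_flags(10.0, panel))  # ['hERG', 'COX-2']
```

In practice each company applies its own margins and follow-up assays per target class; the point of the sketch is only that panel data feed a rule-based first pass before expert review.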
Blomme and Will19 have reviewed the current and past efforts by the pharmaceutical industry to optimize safety into molecules at the earliest stage of drug research. They conclude that new a...