Computational Systems Pharmacology and Toxicology

Rudy J. Richardson, Dale E. Johnson
About This Book

The network approaches of systems pharmacology and toxicology serve as early predictors of the most relevant screening approach to pursue in both drug discovery and development and ecotoxicological assessment. Computational approaches have the potential to improve toxicological experimental design, enable more rapid drug efficacy and safety testing, and reduce the number of animals used in experimentation. Rapid advances in the availability of computing technology hold tremendous promise for advancing applied and basic science and increasing the efficiency of risk assessment.
This book provides an understanding of the basic principles of computational toxicology and the current methods of predictive toxicology using chemical structures, toxicity-related databases, in silico chemical-protein docking, and biological pathway tools. The book begins with an introduction to systems pharmacology and toxicology and computational tools, followed by a section exploring the modelling of adverse outcomes and events. The second part of the book covers the discovery of protein targets and the characterisation of toxicant-protein interactions. Final chapters include case studies and additionally discuss interactions between phytochemicals and Western therapeutics.
This book will be useful for scientists involved in environmental research and risk assessment. It will be a valuable resource for postgraduate students and researchers wishing to learn about key methods used in studying biological targets from both a toxicity and a pharmacological activity standpoint.

Information

Year: 2017
ISBN: 9781788011204
Pages: 332
CHAPTER 1
Systems Biology Approaches in Pharmacology and Toxicology
DALE E. JOHNSON*a,b
a University of Michigan, School of Public Health, Department of Environmental Health Sciences, Ann Arbor, MI 48109-2029, USA;
b University of California, Berkeley, Department of Nutritional Sciences and Toxicology, Morgan Hall, Berkeley, CA 94720-3104, USA
*E-mail: [email protected]

1.1 Introduction

The science and practice of toxicology have been changing dramatically over the last 15–20 years, transitioning into a more systems biology and network-based approach.1–4 Several factors have driven this change, including the genomics era, in which the understanding of genetic changes has enhanced our ability to understand diseases and chemically-induced toxicities at the molecular level. The genomics era has also ushered in "omics" technologies and approaches such as transcriptomics, metabolomics, proteomics, and epigenomics, which have changed the way we view mechanisms of toxicity and the perturbations of biological systems that lead to adverse outcomes.5 These advances have been coupled with the public availability of large datasets and new modeling approaches that have enhanced the ability to understand toxicological events and effects at multiple biological levels.6

Because our scientific approaches, inquiries, and visions aimed at understanding toxicological events and outcomes have broadened tremendously, the need for new and better ways to assess toxicity and risk has grown accordingly. The large number of uncharacterized chemicals already present in the environment, and the new chemicals that continue to enter it, have required hazard and risk assessments to be made with very few data. These factors have accelerated the move away from an overdependence on in vivo animal testing toward better use of computational, molecular, and in vitro tools.6,7 The identification of the majority of toxic events in in vivo animal toxicology studies relies on high-dose exposure of the animals and default linear extrapolation procedures,8 with newer technologies absent from the vast majority of animal studies. This has been considered a shortcoming in risk assessment; weaknesses in the process include the shape of the dose–response relationship at human-relevant exposure levels, whether biological and/or toxicological thresholds actually exist and for which toxicological endpoints, and potential population variability in response.5
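As a concrete illustration of the default procedure criticized above, the sketch below applies linear low-dose extrapolation from a point of departure to estimate risk at an environmentally relevant dose. It is a minimal Python sketch; the benchmark response, point of departure, and dose are hypothetical values, not data from this chapter.

```python
# Minimal sketch of default linear low-dose extrapolation from a point of
# departure (POD), as commonly applied in cancer risk assessment.
# All numeric values here are illustrative, not from the text.

def linear_slope_factor(bmr: float, pod_mg_per_kg_day: float) -> float:
    """Slope factor = benchmark response / point of departure (e.g., a BMDL10)."""
    return bmr / pod_mg_per_kg_day

def extrapolated_risk(slope_factor: float, dose_mg_per_kg_day: float) -> float:
    """Default linear extrapolation: risk is assumed proportional to dose."""
    return slope_factor * dose_mg_per_kg_day

if __name__ == "__main__":
    sf = linear_slope_factor(bmr=0.10, pod_mg_per_kg_day=5.0)  # hypothetical BMDL10
    print(f"Slope factor: {sf:.3f} (mg/kg/day)^-1")
    print(f"Risk at 0.001 mg/kg/day: {extrapolated_risk(sf, 0.001):.2e}")
```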

1.2 Systems Toxicology

Accordingly, research in toxicology has moved into a new systems-oriented phase called systems toxicology, which involves the study of complex molecular response networks initiated by exposure (both intentional and unintentional) to chemical substances. At the heart of systems toxicology is the development and use of quantitative mechanistic models that give the field a predictive capability relevant to all areas of toxicology, including drug research and development and environmental research. The overall approach integrates classical toxicology with the quantitative analysis of large networks of chemically-induced molecular and functional changes that occur across multiple levels of biological organization.5

Examples of key influential events in this transition since the year 2000 include the release of human genome sequencing data, including specific signal transduction domains; the report Toxicity Testing in the 21st Century by the National Research Council (NRC),9 which has influenced all sectors of the toxicology field; and the development and publication of the adverse outcome pathway (AOP) approach,6,10,11 which has highlighted the realities that exist as the science moves away from an overdependence on in vivo testing and makes greater use of computational, molecular, and focused in vitro tools. Additional drivers of change include the European Union (EU) report from the Scientific Committee on Health and Environmental Risks, the EU's Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) program, and the International Programme on Chemical Safety (IPCS).7,12

The paradigm shift can also be seen in the drug research and development sector, but rather than focusing on drugs in late-stage development or on the market, the systems-related efforts are positioned at the front end of research, on both safer chemical design and extensive target research. While regulatory agencies and international guidelines require the drug industry to conduct animal toxicology studies, the major effort underway is to identify chemical liabilities early in the drug discovery pipeline, both to reduce the time and cost of failures later in the process and to avoid costly failures once a drug reaches the market.5 Within the International Consortium for Innovation and Quality in Pharmaceutical Development (IQ), several pharmaceutical and biotechnology companies have created a Nonclinical to Clinical Translational Database (WG1) to allow analysis of the reliability and potential limitations of nonclinical data in predicting clinical outcomes, including the evaluation of conventional biomarkers of toxicity.13 Current screening approaches applied to the front end of drug research are described below.
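To make the AOP idea concrete, the toy sketch below represents a pathway as a directed graph running from a molecular initiating event through key events to an adverse outcome. It is only an illustrative Python sketch using networkx; the event names are invented for illustration, not taken from any published AOP.

```python
# Toy sketch of an adverse outcome pathway (AOP) as a directed graph, from a
# molecular initiating event (MIE) through key events (KE) to an adverse
# outcome (AO). Node names are hypothetical, not from the text.
import networkx as nx

aop = nx.DiGraph()
aop.add_edges_from([
    ("MIE: receptor binding",        "KE1: pathway activation"),
    ("KE1: pathway activation",      "KE2: altered gene expression"),
    ("KE2: altered gene expression", "KE3: cell injury"),
    ("KE3: cell injury",             "AO: organ toxicity"),
])

# Walk the pathway from initiating event to adverse outcome.
path = nx.shortest_path(aop, "MIE: receptor binding", "AO: organ toxicity")
print(" -> ".join(path))
```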

1.3 Chemical Toxicities

1.3.1 Single-Target Toxicity Concepts

The science and practice of toxicology over the past several decades have consistently used classic toxicological approaches, such as in vivo and in vitro toxicology studies, combined with predictive toxicological methodologies. The desired endpoint of the in vivo animal studies has been the determination of a toxic dose at which a chemical can be shown to induce pathologic effects after a specified duration of treatment or exposure. Where appropriate, these studies have included estimates of the lowest observed adverse effect level (LOAEL), the no observed adverse effect level (NOAEL), and the maximally tolerated dose (MTD).5,14 These adverse effect level estimates are traditionally used in drug research and development to predict the first dose in humans and to estimate margins of safety based on delivered dose and/or internal exposure from pharmacokinetic/pharmacodynamic (PK/PD) modeling, with extrapolations to clinical trial subjects (a minimal dose-scaling sketch appears below). By regulatory requirement, all potential drugs in research and development undergo both in vitro and in vivo studies and, if the compound successfully reaches the clinical trial stage, generate human exposure data with which to judge the adequacy of nonclinical data in predicting clinical outcomes. Uncertainties in these estimates include the definition of "adverse", which is specific to each organ system in each study and typically determined by the study pathologist; the accuracy of cross-species extrapolations (particularly rodent-to-human); and the true definition of risk–benefit for each individual drug. The generation of classical toxicology data, however, does not assure accurate prediction of potential human toxicity. Sundqvist and colleagues15 have reported on a human dose prediction process, supplemented by case studies, that integrates these uncertainties into simplified plots for quantification.

Drug safety is recognized as one of the primary causes of attrition during the clinical phases of development; in numerous instances, however, serious adverse effects are identified only after the drug reaches the market. In the United States, ∼2 million patients are affected by drug-mediated adverse effects per year, of which ∼5% are fatal.16 This places drug toxicity among the top five causes of death in the United States, and the costs to the health care system worldwide are estimated at US$40–50 billion per year.16 In drug development there are always risk–benefit considerations, which weigh any potential toxicity against the benefit expected to be gained by a patient taking the drug.

An example of the uncertainty of these estimates can be seen in the methods used for carcinogenicity testing and evaluation for drug approval. The design of these studies relies on high-dose exposure of animals and default linear extrapolation procedures, while little consideration is given to many of the new advances in the toxicological sciences.17 Carcinogenicity studies are typically 2-year studies in rodents conducted with three dosage groups (low, mid, and high dose) and one or two concurrent control groups. Dose levels are established from previous studies, such as 13-week toxicity studies, in which an MTD has been estimated.
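As a concrete illustration of how a NOAEL is carried into first-in-human dose prediction, the sketch below converts an animal NOAEL to a human equivalent dose by body-surface-area scaling and applies a default tenfold safety factor, in the spirit of the FDA's 2005 starting-dose guidance. It is a minimal Python sketch; the species, NOAEL, and safety factor are hypothetical example values, not data from this chapter.

```python
# Minimal sketch: scale an animal NOAEL to a human equivalent dose (HED) and a
# maximum recommended starting dose (MRSD) by body-surface-area scaling.
# The NOAEL below is a hypothetical example value.

# Km factors (body weight / body surface area) from the FDA guidance tables.
KM = {"mouse": 3, "rat": 6, "dog": 20, "human": 37}

def human_equivalent_dose(noael_mg_per_kg: float, species: str) -> float:
    """Scale an animal NOAEL (mg/kg) to a human dose on a mg/m^2 basis."""
    return noael_mg_per_kg * KM[species] / KM["human"]

def mrsd(noael_mg_per_kg: float, species: str, safety_factor: float = 10.0) -> float:
    """Maximum recommended starting dose: HED divided by a safety factor."""
    return human_equivalent_dose(noael_mg_per_kg, species) / safety_factor

if __name__ == "__main__":
    noael = 50.0  # hypothetical rat NOAEL, mg/kg/day
    print(f"HED:  {human_equivalent_dose(noael, 'rat'):.2f} mg/kg")
    print(f"MRSD: {mrsd(noael, 'rat'):.2f} mg/kg")
```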
Each group in a carcinogenicity study has 60–70 animals of each sex, and the assessment of potential carcinogenicity concern is based on an analysis of each tumor type in each tissue or organ system, individually and by sex; certain tumors are combined via standardized procedures for statistical analysis. The analysis uses the historical database of the laboratory where the studies are conducted to determine whether each tumor is considered common or rare, with a background incidence of 1% as the standard: common tumors are those with a background incidence of 1% or more, and rare tumors are those with a background incidence below 1%. In the statistical analysis, p-values are evaluated for pair-wise significance at 0.05 for rare tumors and 0.01 for common tumors. The rare vs. common classification is an arbitrary threshold, and adjustments to the classification of individual tumors, which can occur from laboratory to laboratory and through analyses of different control groups, can have consequences for the overall tumor evaluation outcome.8 Applying a "weight of evidence" approach in the evaluation procedures, particularly during regulatory review, attempts to alleviate some of these uncertainties; however, after more than 50 years of ongoing experience, these studies still fail to bring the 21st-century mindset to carcinogenicity testing.

The classic toxicological process for drug development assumes that a chemical interacts with highest affinity with a single macromolecule (the toxicological target), so that a single biological pathway is perturbed by the initial target modulation. This would be followed by downstream activation of secondary and possibly tertiary pathways that produce the tissue or organ effect indicated by key biomarkers.2 In this concept, the magnitude of a toxicological effect is related to the concentration of altered molecular targets at the site of interest, which in turn is related to the concentration of the active form of the chemical (parent compound or metabolite) at the site where the molecular targets are located. Also included in this concept is the unique susceptibility of the organism exposed to the compound.
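The concentration-to-effect chain in this single-target concept can be expressed with a simple occupancy model. The sketch below uses the Hill equation to relate the free concentration of the active species at the target site to the fraction of target modulated; it is a minimal Python sketch, and the dissociation constant and concentrations are hypothetical values.

```python
# Minimal sketch of the single-target concept: effect magnitude tracks the
# fractional occupancy of one molecular target, which in turn tracks the local
# concentration of the active chemical species. All values are illustrative.

def fractional_occupancy(conc_nm: float, kd_nm: float, hill: float = 1.0) -> float:
    """Hill equation: fraction of target bound at a given free concentration."""
    return conc_nm**hill / (conc_nm**hill + kd_nm**hill)

if __name__ == "__main__":
    kd = 100.0  # hypothetical dissociation constant, nM
    for conc in (10.0, 100.0, 1000.0):
        print(f"{conc:7.1f} nM -> occupancy {fractional_occupancy(conc, kd):.2f}")
```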

1.3.2 Toxicological Profiling for Potential Adverse Reactions

Predictive toxicology efforts in drug research and development draw on multiple sources of legacy data, including data generated by chemical and pharmaceutical companies and data submitted to regulatory agencies. These efforts have led to the "data warehouse" model, which includes data generated through high-throughput and targeted screening as well as in vitro and in vivo toxicology studies on thousands of compounds and structural analogues. In the majority of cases these data also include findings from clinical trials in which an experimental drug was tested in humans.
The information is applied in a "backward" fashion to predict potential findings where data do not yet exist or where decisions are being made on new potential drug candidates. Bowes and colleagues18 have described a pharmacological profiling effort by four large pharmaceutical companies: AstraZeneca, GlaxoSmithKline, Novartis, and Pfizer. The companies suggest that ∼75% of adverse drug reactions can be predicted by studying the pharmacological profiles of candidate drugs. Pharmacological screening identifies primary effects related to the intended action of the candidate drug, whereas the identification of secondary effects due to interactions with targets other than the primary (intended) target can point to off-target adverse events. The group identified 44 screening targets, comprising 24 G-protein coupled receptors, eight ion channels, six intracellular enzymes, three neurotransmitter transporters, two nuclear receptors, and one kinase. These types of screening data are used in the data warehouse model, typically configured in a proprietary fashion within each company. Other collaborative efforts have been developed, and data from these sources would also be incorporated.
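The sketch below illustrates the kind of triage such a screening panel supports: flagging any panel target whose measured potency falls within a chosen fold-margin of the primary target's potency. It is a minimal Python sketch; the target names, IC50 values, and 30-fold margin are hypothetical illustrations, not data from Bowes and colleagues.

```python
# Sketch of off-target triage over a secondary pharmacology panel: flag any
# panel target engaged within a chosen fold-margin of the primary potency.
# All names, potencies, and the 30-fold margin are hypothetical examples.

def offtarget_hits(primary_ic50_nm: float,
                   panel_ic50_nm: dict[str, float],
                   margin: float = 30.0) -> list[str]:
    """Return panel targets with IC50 within `margin`-fold of the primary IC50."""
    threshold = primary_ic50_nm * margin
    return [target for target, ic50 in sorted(panel_ic50_nm.items())
            if ic50 < threshold]

if __name__ == "__main__":
    panel = {"hERG": 600.0, "5-HT2B": 12000.0, "PDE3": 450.0}  # IC50 in nM
    hits = offtarget_hits(primary_ic50_nm=25.0, panel_ic50_nm=panel)
    print(hits)  # ['PDE3', 'hERG']: within 30-fold of the 25 nM primary potency
```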
Blomme and Will19 have reviewed the current and past efforts by the pharmaceutical industry to optimize safety into molecules at the earliest stage of drug research. They conclude that new a...
