Big Data in Predictive Toxicology

Daniel Neagu, Andrea-Nicole Richarz

394 pages
English

About This Book

Toxicological data are being generated at an ever-increasing rate, and the volume of data produced is growing dramatically. This is due in part to advances in software solutions and cheminformatics approaches, which increase the availability of open data from chemical, biological, toxicological and high throughput screening resources. However, the amplified pace and capacity of data generation achieved by these novel techniques present challenges for organising and analysing the data output.

Big Data in Predictive Toxicology discusses these challenges as well as the opportunities of new techniques encountered in data science. It addresses the nature of toxicological big data, their storage, analysis and interpretation. It also details how these data can be applied in toxicity prediction, modelling and risk assessment.

This title is of particular relevance to researchers and postgraduates working and studying in the fields of computational methods, applied and physical chemistry, cheminformatics, biological sciences, predictive toxicology and safety and hazard assessment.


Information

Year: 2019
ISBN: 9781839160820
Edition: 1
Subtopic: Toxicology
CHAPTER 1
Big Data in Predictive Toxicology: Challenges, Opportunities and Perspectives
Andrea-Nicole Richarz*
European Commission, Joint Research Centre (JRC), Ispra, Italy,
*E-mail: [email protected]

Predictive toxicology and model development rely heavily on data to draw upon and have historically suffered from the paucity of available and good quality datasets. The situation has now dramatically changed from a lack of data hampering model development to "data overload". With high throughput/content screening methodologies being systematically used aiming to understand the mechanistic basis of adverse effects, and increasing use of omics technologies and consideration of (bio)monitoring data, the volume of data is continuously increasing. Big data in predictive toxicology may not have reached the dimension of other areas yet, such as real-time generated data in the health sector, but encompass similar characteristics and related challenges. Pertinent questions in this area are whether the new plethora of data are adequate for use in predictive toxicology and whether they address this area's most urgent problems. This overview chapter looks at the definition and characteristics of big data in the context of predictive toxicology as well as the challenges and opportunities big data present in this field.

1.1 Introduction

Predictive toxicology and model development rely heavily on data to draw upon and have historically suffered from the paucity of available and good quality datasets. As recently as 20 years ago, obtaining even relatively limited datasets of chemical structures and experimental bioactivities involved the laborious and manual collection, curation and compilation of data. Historically, the foremost problem in predictive toxicity modelling has been the scarcity of data, in addition to that of data quality. The data were produced manually, and thus slowly, by measuring physico-chemical properties and activities experimentally in the laboratory. As a result, models have been built based on the same training sets, and data were only available for a limited number of properties and endpoints.
With advances in laboratory automation and the paradigm change towards 21st Century Toxicology, focussing on adverse effect pathways and elucidating mechanisms of action, the availability of data has changed and, overall, improved. For example, initiatives such as ToxCast and Tox21 have produced publicly accessible bioactivity data for a large number of chemicals and many endpoints using high throughput/high content assays. Furthermore, omics technologies for screening changes in genomes, proteomes, metabolomes etc. have produced large amounts of data. Biomonitoring and epidemiological data are also increasingly available.
Thus, in a short time period, the situation for predictive toxicology has changed dramatically from a lack of data hampering model development to that of "data overload". It can be argued that big data in predictive toxicology have not yet reached the dimension of other sectors, such as large-scale real-world data generated in the health sector. 1,2 However, these data encompass similar characteristics and related challenges and represent big data for this specific field.
This introductory chapter looks at the definition of big data in the context of predictive toxicology and the challenges and opportunities big data present in this field.

1.2 Big Data in the Area of Predictive Toxicology

Big data is a "buzzword" in many areas of science and technology and has without doubt changed everyday life in many areas of society, for example with the "internet of things", by offering new opportunities to integrate and use information.
Given the ubiquity of the term, what are "big data" as relates to predictive toxicology? Compared to other fields, the data taken into account for predictive toxicology are not yet generated in real time for instant analysis. In the closely related health care sector, for example, 24-hour data produced by monitoring functions in patients are already a reality. There are many initiatives in the areas of drug discovery, medicine and public health that aim to leverage big data. For example, the European Union Innovative Medicines Initiative (IMI) programme Big Data for Better Outcomes (BD4BO 3 ) supports the use of big data to improve health care, in part by developing platforms for integrating and analysing diverse big data sets. In the area of drug discovery, possibilities to accelerate medicinal chemistry by using big data and artificial intelligence are being investigated. 4 The European Lead Factory, 5 a collaborative public-private partnership, has established the first European Compound Library and the first European Screening Centre, comprising up to 500 000 novel compounds. The EU Integrated Training Network BIGCHEM 6 project provides education in large chemical data analysis for drug discovery, to enable processing and use of the growing amount of biomedical data in chemistry and the life sciences. 7 The EU ExCAPE project 8 applies the power of supercomputers and machine learning to accelerate drug discovery.
It must be said that compared to the use of big data in other fields (e.g., finance, economics, etc.), the data in toxicology are still relatively small scale, but with regard to what went before, they have broken the mould and can be considered big data. Typical examples of the types of big data in the context of toxicology are high throughput/high content screening data and data generated with omics technologies, gene arrays etc. Compared to "absolute" or "static" toxicity data, in addition to dose–response curves, resolution of responses in time is becoming a major area of investigation, including toxicokinetic measurements (or models) to take into account, for example, the rate of absorption or metabolism. As a consequence, more data are being measured. This is particularly true for the area of high throughput kinetics. Furthermore, as discussed in the 2017 US National Academies of Sciences, Engineering, and Medicine (NAS) report on "Using 21st Century Science to Improve Risk-Related Evaluations", 9 the use of epidemiological data will play an important role in chemical risk assessment in the future. Monitoring data, either environmental monitoring or human biomonitoring, will also contribute to risk assessment and predictive toxicology, and form an important part of the increasing big data in the area.
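Concentration–response read-outs of the kind produced by high throughput screening are commonly summarised by fitting the Hill equation to estimate parameters such as the half-maximal activity concentration (AC50). As a minimal sketch of this step (the concentrations, responses and parameter values below are synthetic, and the function names are illustrative rather than taken from any specific toxicology library), a fit with SciPy might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ac50, n):
    """Hill concentration-response model: response rises from 0 towards
    `top`, reaching half-maximum at concentration `ac50` with slope `n`."""
    return top / (1.0 + (ac50 / conc) ** n)

# Synthetic, noise-free read-outs over a log-spaced concentration range
conc = np.logspace(-3, 2, 12)          # e.g., micromolar
resp = hill(conc, 100.0, 1.0, 1.2)     # "true" parameters for illustration

# Recover the parameters from the data; the bounds keep the optimiser
# in a physically meaningful region
params, _ = curve_fit(hill, conc, resp, p0=[90.0, 0.5, 1.0],
                      bounds=([0.0, 1e-6, 0.1], [200.0, 100.0, 5.0]))
top_fit, ac50_fit, n_fit = params
```

On real screening data the read-outs are noisy and often incomplete, so fits of this kind are typically accompanied by quality flags and uncertainty estimates rather than reported as point values alone.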
Similar challenges in terms of compiling, storing, processing, analysing and integrating these big data apply as for other areas and types of big data. Their quality, adequacy and appropriateness for specific purposes need to be investigated. They do, however, offer opportunities to tackle problems which were previously out of reach, and make completely new applications possible, to support toxicology evaluation and chemical safety assessment.
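One practical consequence of the storage and processing challenge is that analyses often have to stream over the data rather than load them wholesale. A minimal sketch with pandas (the flat-file layout and the column names chemical_id, assay_id and ac50 are hypothetical, not a real ToxCast or ChEMBL schema):

```python
import io
import pandas as pd

def mean_ac50_by_assay(source, chunksize=100_000):
    """Stream a large table of screening results and compute the per-assay
    mean AC50 without holding the whole file in memory at once."""
    sums, counts = {}, {}
    for chunk in pd.read_csv(source, chunksize=chunksize):
        grouped = chunk.groupby("assay_id")["ac50"]
        for assay, s in grouped.sum().items():
            sums[assay] = sums.get(assay, 0.0) + s
        for assay, n in grouped.count().items():
            counts[assay] = counts.get(assay, 0) + n
    return {assay: sums[assay] / counts[assay] for assay in counts}

# Tiny in-memory stand-in for what would be a multi-gigabyte results file
sample = io.StringIO(
    "chemical_id,assay_id,ac50\n"
    "c1,assayA,1.0\n"
    "c2,assayA,3.0\n"
    "c3,assayB,5.0\n"
)
means = mean_ac50_by_assay(sample, chunksize=2)
```

The same chunked pattern extends to any per-group aggregate; for workloads beyond a single machine, distributed frameworks serve the same purpose.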

1.3 The Big Vs of Predictive Toxicology Data

The characteristic attributes of big data, the "Big Vs", have been discussed frequently in the literature, 10 starting from the three major V attributes that define the characteristics of big data, i.e., Volume, Velocity and Variety. Currently, between seven and ten or more Vs are described for big data, 11,14 which point at further attributes of big data and issues that need to be taken into account when dealing with them; these vary slightly according to situation and use. The additional Vs include: Veracity, Variability, Validity, Visibility, Visualisation, Volatility, Vulnerability and Value (see Table 1.1). This section addresses how these characteristics of big data apply to the field of (predictive) toxicology; the eleventh V, Vulnerability, is not considered in this context.
Table 1.1 Ten Vs of big data and their applicability in predictive toxicology; related opportunities and challenges
Volume
  Applicability: High number of data generated, e.g., high content screening (HCS), omics test read-outs, epidemiological and (bio)monitoring data
  Opportunities: Broader data basis for modelling and elucidation of modes of action and pathways
  Challenges: Storing and processing large amounts of data; finding the relevant data in the flood of information; limits of capacities for data curation

Velocity
  Applicability: Speed of data generation increased, e.g., high throughput screening (HTS)
  Opportunities: More rapid generation of data to fill gaps; ability to generate time-dependent data
  Challenges: Speed of data generation overtaking speed of storage, analysis and processing capacities

Variety
  Applicability: Many different types of data, e.g., chemical structures, results from a variety of assays, omics, time-dependent kinetics etc.
  Opportunities: Different types of information that can be combined to get the full picture
  Challenges: Integration of different types of data; representation and informatics processing of chemical structures, especially if 3D transformations of structures have to be computed for many chemicals; comparability of data from varied sources might not always be given

Veracity
  Applicability: Data quality, accuracy and reliability; uncertainties requiring data curation and evaluation of data quality
  Opportunities: Large amounts of data might statistically compensate for inaccuracy in individual data when integrated
  Challenges: Data curation and evaluation of data quality for large amounts of data

Variability
  Applicability: Intrinsic variability of biological data, e.g., inter- and intra-individual variations, genetic and population variations
  Opportunities: Availability of a large amount of data might enable taking variations of parameters into account in the models built, enabling better prediction of population variations
  Challenges: Processing large amounts of variable data

Validity
  Applicability: Validity of the data for a specific application, e.g., for prediction of toxicity (for a specific endpoint)
  Opportunities: Generation of many (types of) data possible in a targeted way, tailored for the specific prediction and toxicity assessment goal
  Challenges: Finding/choosing the relevant data for the specific application

Visibility
  Applicability: Data sharing, access to data sources leading to centralised databases and repositories
  Opportunities: Large data sets more visible than small disparate data sets; preferred storage in centralised repositories or linked via hubs/portals
  Challenges: Making the data available and visible in an appropriate way

Visualisation
  Applicability: Representation of the data content
  Opportunities: Supports and facilitates making sense of the varied and complex data sets, also supporting the organisation of the data
  Challenges: Visualisation of complex data is difficult; methods to improve representations of the data content in a clear way are needed

Volatility
  Applicability: Data access might cease, or repositories disappear, which affects the sustainability of data resources
  Opportunities: N/A
  Challenges: Appropriate storage and sustainability concept necessary

Value
  Applicability: Adequacy and usefulness for predictive toxicology and hazard/risk assessment, depending on the specific risk assessment goal
  Opportunities: Availability of many (types of) data as a broad data basis to understand mechanisms and pathways, in order to build informed predictive models
  Challenges: Extracting and distilling the knowledge from the large amount of data
Volume: one of the three major defining characteristics of big data is the amount of data generated. In the field of toxicology, the large datasets currently produced are high throughput/high content screening (HTS/HCS) assay data, high throughput kinetics, and omics technology read-outs; it is highly probable that the use of large epidemiological and (bio)monitoring data sets will increase in the future. Examples of large data sets and the databases making them available are the ToxCast/Tox21 15,18 high throughput screening data (screening thousands of chemicals in about 70 assays covering over 125 processes in the organism; over 120 million data points have been produced so far 19, see Chapter 8) and ChEMBL 20 from the European Bioinformatics Institute (EMB...
