Artificial Intelligence for Drug Development, Precision Medicine, and Healthcare

eBook - ePub

Mark Chang

  1. 352 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS and Android

Book Information

Artificial Intelligence for Drug Development, Precision Medicine, and Healthcare covers exciting developments at the intersection of computer science and statistics. While much of machine learning is statistics-based, achievements in deep learning for image and language processing rely on computer science's use of big data. Aimed at those with a statistical background who want to use their strengths in pursuing AI research, the book:

· Covers broad AI topics in drug development, precision medicine, and healthcare.

· Elaborates on supervised, unsupervised, reinforcement, and evolutionary learning methods.

· Introduces the similarity principle and related AI methods for both big and small data problems.

· Offers a balance of statistical and algorithm-based approaches to AI.

· Provides examples and real-world applications with hands-on R code.

· Suggests the path forward for AI in medicine and artificial general intelligence.

As well as covering the history of AI and the field's innovative ideas, methodologies, and software implementations, the book offers a comprehensive review of AI applications in the medical sciences. In addition, readers will benefit from hands-on exercises with the included R code.


Information

Year: 2020
ISBN: 9781000767308

1 Overview of Modern Artificial Intelligence

1.1 Brief History of Artificial Intelligence

The term artificial intelligence (AI) was coined by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon in 1955. AI is tied to what we used to think of as a robot's brain, or to a function of such a brain. In a general sense, AI includes robotics: the term AI often emphasizes the software aspects, while the term robot includes a physical body as an important part. The notions of AI and robotics go back a long way.
As early as 1854, George Boole argued that logical reasoning could be performed systematically, in the same manner as solving a system of equations. A logical approach thus played an essential role in early AI studies. For example, the Spanish engineer Leonardo Torres Quevedo (1914) demonstrates the first chess-playing machine, capable of playing king-and-rook-versus-king endgames without any human intervention. Claude Shannon's (1950) "Programming a Computer for Playing Chess" is the first published article on developing a chess-playing computer program, and Arthur Samuel (1952) develops the first computer checkers-playing program and the first computer program to learn on its own. In 1997, Deep Blue becomes the first computer chess-playing program to beat a reigning world chess champion. Herbert Simon and Allen Newell (1955) develop the Logic Theorist, the first artificial intelligence program, which would eventually prove 38 of the first 52 theorems in Whitehead and Russell's Principia Mathematica. In 1961, James Slagle develops SAINT (Symbolic Automatic INTegrator), a heuristic program that solves symbolic integration problems in freshman calculus; it is perhaps the predecessor of the powerful AI software often used in present-day mathematics (Gil Press, 2016).
In robotics, Nikola Tesla (1898) demonstrates the world's first radio-controlled vessel ("a borrowed mind," as Tesla described it), an embryonic form of robot. The Czech writer Karel Čapek (1921) introduces the word robot, from a Czech word meaning forced labor, in his play Rossum's Universal Robots. Four years later, a radio-controlled driverless car is released, travelling the streets of New York City. In 1929, Makoto Nishimura designs the first robot built in Japan, which can change its facial expression and move its head and hands using an air-pressure mechanism. The first industrial robot, Unimate, starts working on an assembly line in a General Motors plant in New Jersey in 1961. In 1986, Bundeswehr University builds the first driverless car, which drives at up to 55 mph on empty streets. In 2000, Honda's ASIMO, an artificially intelligent humanoid robot, is able to walk as fast as a human, delivering trays to customers in a restaurant setting. In 2009, Google starts developing, in secret, a driverless car; in 2014, it becomes the first to pass, in Nevada, a U.S. state self-driving test.
In artificial neural network (ANN) development, Warren S. McCulloch and Walter Pitts publish (1943) "A Logical Calculus of the Ideas Immanent in Nervous Activity," an attempt to mimic the brain. The authors discuss networks of simplified artificial "neurons" and how they might perform simple logical functions. Eight years later, Marvin Minsky and Dean Edmunds build SNARC (Stochastic Neural Analog Reinforcement Calculator), the first artificial neural network, using 3,000 vacuum tubes to simulate a network of 40 neurons. In 1957, Frank Rosenblatt develops the Perceptron, an early artificial neural network enabling pattern recognition with a two-layer computer learning network. Arthur Bryson and Yu-Chi Ho (1969) describe a backpropagation learning algorithm for multi-layer artificial neural networks, an important precursor to the success of deep learning in the 2010s, once big data became available and computing power was sufficiently advanced to accommodate the training of large networks. In 1989, AT&T Bell Labs successfully applies backpropagation in an ANN to recognizing handwritten ZIP codes, though, given the hardware limitations of the time, it takes three days to train the network. In 2006, Geoffrey Hinton publishes "Learning Multiple Layers of Representation," summarizing the ideas that led to "multilayer neural networks that contain top-down connections and training them to generate sensory data rather than to classify it," i.e., a new approach to deep learning. In March 2016, Google DeepMind's AlphaGo defeats Go champion Lee Sedol.
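For readers who want to see this early learning idea in action, below is a minimal R sketch of Rosenblatt's perceptron learning rule on simulated, linearly separable data. The data, learning rate, and epoch count are invented for illustration; this is not code from the book.

# Minimal sketch of the perceptron learning rule (illustrative assumptions only).
set.seed(1)
n <- 50
x <- cbind(1, matrix(rnorm(2 * n), ncol = 2))    # first column of 1s acts as the bias input
y <- ifelse(x[, 2] + x[, 3] > 0, 1, -1)          # labels from a known separating line

w <- rep(0, 3)    # initial weights
eta <- 0.1        # learning rate (arbitrary choice)
for (epoch in 1:20) {
  for (i in 1:n) {
    yhat <- ifelse(sum(w * x[i, ]) >= 0, 1, -1)      # threshold (step) activation
    if (yhat != y[i]) w <- w + eta * y[i] * x[i, ]   # update weights only on mistakes
  }
}
mean(ifelse(x %*% w >= 0, 1, -1) == y)           # training accuracy; reaches 1.0 once converged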
Computational linguistics originated with efforts in the United States in the 1950s to use computers to automatically translate texts from foreign languages, particularly Russian scientific journals, into English (John Hutchins, 1999). To translate one language into another, one has to understand the grammar of both languages, including morphology (the grammar of word forms), syntax (the grammar of sentence structure), semantics, and the lexicon (or "vocabulary"), and even something of the pragmatics of language use. Thus, what started as an effort to translate between languages evolved into an entire discipline devoted to understanding how to represent and process natural languages using computers. Long before modern computational linguistics, Joseph Weizenbaum develops ELIZA (1965), an interactive program that carries on a dialogue in English on any topic. ELIZA surprised many people, who attributed human-like feelings to the computer program. In 1988, Rollo Carpenter develops the chatbot Jabberwacky to "simulate natural human chat in an interesting, entertaining and humorous manner," an early attempt at creating artificial intelligence through human interaction. In the same year, IBM's Watson Research Center publishes "A Statistical Approach to Language Translation," heralding the shift from rule-based to probabilistic methods of machine translation and marking a broader shift from deterministic to statistical approaches in machine learning. In 1995, inspired by Weizenbaum's ELIZA, Richard Wallace develops the chatbot A.L.I.C.E. (Artificial Linguistic Internet Computer Entity), with natural-language sample data collected at an unprecedented scale, enabled by the advent of the Web. In 2009, computer scientists at the Intelligent Information Laboratory at Northwestern University develop Stats Monkey, a program that writes sports news stories without human intervention. In 2011, a convolutional neural network wins the German Traffic Sign Recognition competition with 99.46% accuracy (vs. 99.22% for humans), and Watson, a natural-language question-answering computer, competes on Jeopardy! and defeats two former champions.
In the areas referred to today as machine learning, data mining, pattern recognition, and expert systems, progress may be said to have started around 1960. Arthur Samuel (1959) coins the term machine learning, reporting on programming a computer "so that it will learn to play a better game of checkers than can be played by the person who wrote the program." Edward Feigenbaum et al. (1965) start working on DENDRAL at Stanford University, the first expert system, automating the decision-making process and constructing models of empirical induction in science. In 1978, Carnegie Mellon University develops the XCON program, a rule-based expert system that automatically selects computer-system components based on the customer's requirements. In 1980, researchers at Waseda University in Japan build Wabot, a musician humanoid robot able to communicate with a person, read a musical score, and play tunes on an electronic organ. In 1988, Judea Pearl publishes Probabilistic Reasoning in Intelligent Systems; Pearl would receive the 2011 Turing Award for creating the representational and computational foundation for the processing of information under uncertainty, and is credited with the invention of Bayesian networks. Taking a different approach, Rodney Brooks (1990) publishes "Elephants Don't Play Chess," proposing a new approach to AI, built from the ground up on physical interaction with the environment: "The world is its own best model… the trick is to sense it appropriately and often enough."
Bioinformatics involves AI and machine learning (ML) studies in biology and drug discovery. As an interdisciplinary field of science, bioinformatics combines biology, computer science, and statistics to analyze biological data, for example identifying candidate genes and single-nucleotide polymorphisms (SNPs) to better understand the genetic basis of disease, unique adaptations, desirable properties, or differences between populations. In genetics and genomics, bioinformatics aids in sequencing and annotating genomes and their observed mutations. Common activities in bioinformatics include mapping and analyzing DNA and protein sequences, aligning DNA and protein sequences to compare them, and creating and viewing 3-D models of protein structures. Since AI methods were introduced to biotech companies in the late 1990s, supervised and unsupervised learning have contributed significantly to drug discovery.
Another unique approach in the study of AI is genetic programming (GP), an evolutionary form of AI in which programs improve themselves. The idea of GP was inspired by genetics. From the 1980s onward, genetic programming has produced many human-competitive inventions and reinventions in disparate fields, including electronics and materials engineering.

1.2 Waves of Artificial Intelligence

DARPA, a well-known US research agency active in AI, has recently characterized AI development in terms of three waves (Figure 1.1). The agency is dedicated to funding "crazy" projects: ideas completely outside the accepted norms and paradigms. Its investments contributed to the establishment of the early internet and the Global Positioning System (GPS), as well as a flurry of other seemingly bizarre concepts, such as legged robots, prediction markets, and self-assembling work tools (Roey Tzezana, 2017).

1.2.1 First Wave: Logic-Based Handcrafted Knowledge

In the first wave of AI, domain experts devised algorithms and software according to the knowledge available. This approach led to the creation of chess-playing computers and delivery-optimization software. Weizenbaum's 1965 ELIZA (see Section 1.1), an AI agent that can carry on grammatically correct conversations with a human, was a logical, rule-based agent. Much of the software in use today is still based on AI of this kind; think of robots on assembly lines and Google Maps. First-wave AI systems are usually based on clear and logical rules or decision trees. The system examines the most important parameters in every situation it encounters and reaches a conclusion about the most appropriate action to take in each case, without any involvement of probability theory. As a result, when a task involves too many parameters, or when many uncertainties, hidden variables, or confounders affect the outcomes of a complex system, it is very difficult for first-wave systems to deal with the complexity appropriately. Determining drug effects in humans and performing disease diagnosis and prognosis are examples of complex biological problems that first-wave AI cannot handle well.
In summary, first-wave AI systems are capable of implementing logical rules for well-defined problems, but they are incapable of learning and unable to deal with problems carrying large underlying uncertainty.
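To make the flavor of first-wave AI concrete, here is a minimal R sketch of a handcrafted rule system. The triage scenario, conditions, and thresholds are invented for illustration only; they are not clinical rules and not from the book.

# A first-wave, handcrafted rule system: explicit if-then logic,
# no learning and no probabilities. Thresholds are illustrative only.
triage <- function(temp_c, systolic_bp, has_chest_pain) {
  if (has_chest_pain && systolic_bp < 90) {
    "emergency"               # rule 1: shock-like presentation
  } else if (temp_c >= 39.0) {
    "urgent"                  # rule 2: high fever
  } else if (temp_c >= 37.5 || systolic_bp < 100) {
    "see doctor"              # rule 3: mild abnormality
  } else {
    "routine"                 # default: no rule fires
  }
}

triage(38.2, 120, FALSE)      # returns "see doctor"
triage(36.8, 85, TRUE)        # returns "emergency"

Every output is traceable to a single rule, which is both the strength of such systems (transparency) and their weakness: any uncertainty the rule author did not anticipate is simply ignored.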

1.2.2 Second Wave: Statistical Machine Learning

Over the past two decades, the emphasis has shifted from logic to probabilities, or more accurately to mixing logic and probabilities, thanks to the availability of "big data," viable computing power, and the involvement of statisticians. Much of the great ongoing AI effort, in industrial applications as well as in academic research, falls into this category: the second wave of AI. It is so statistics-focused that Thomas J. Sargent, winner of the 2011 Nobel Prize in Economics, recently told the World Science and Technology Innovation Forum that artificial intelligence is actually statistics in a very gorgeous phrase: "it is statistics." Many formulas are very old, but all of AI uses statistics to solve problems. I see his point but do not completely agree with him, as you will see in later chapters.
To deal with complex systems with great uncertainties, probability and statistics are naturally effective tools. However, these cannot be exactly the same statistical methods we have used in classical settings. As Leo Breiman (2001) pointed out in his Statistical Modeling: The Two Cultures: "There are two cultures in the use of statistical modeling to reach conclusions from data. One assumes that the data are generated by a given stochastic data model. The other uses algorithmic models and treats the data mechanism as unknown. The statistical community has been committed to the almost exclusive use of data models. This commitment has led to irrelevant theory, questionable conclusions, and has kept statisticians from working on a large range of interesting current problems. Algorithmic modeling, both in theory and practice, has developed rapidly in fields outside statistics. It can be used both on large complex data sets and as a more accurate and informative alternative to data modeling on smaller data sets. If our goal as a field is to use data to solve problems, then we need to move away from exclusive dependence on data models and adopt a more diverse set of tools."
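To ground Breiman's distinction, here is a brief R sketch that fits the same simulated data both ways: a stochastic data model (logistic regression with linear terms) and an algorithmic model (a random forest). The simulated data and the use of the randomForest package are illustrative assumptions, not an example from the book.

# Breiman's two cultures on one simulated data set (illustrative only).
# Assumes install.packages("randomForest") has been run.
library(randomForest)

set.seed(42)
n <- 500
x1 <- rnorm(n); x2 <- rnorm(n)
p <- plogis(1.5 * x1 * x2)     # nonlinear truth: a linear data model is misspecified here
y <- factor(rbinom(n, 1, p))
dat <- data.frame(y, x1, x2)

# Culture 1: assume a stochastic data model (parametric logistic regression).
fit_glm <- glm(y ~ x1 + x2, family = binomial, data = dat)

# Culture 2: algorithmic model; treat the data mechanism as unknown.
fit_rf <- randomForest(y ~ x1 + x2, data = dat)

# Compare accuracy (random forest predictions are out-of-bag by default).
acc_glm <- mean((predict(fit_glm, type = "response") > 0.5) == (dat$y == 1))
acc_rf  <- mean(predict(fit_rf) == dat$y)
c(glm = acc_glm, rf = acc_rf)

On data generated this way, the algorithmic model typically wins because the parametric model's assumptions are wrong, which is exactly Breiman's point about exclusive reliance on data models.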
