Religion, Neuroscience and the Self

A New Personalism

Patrick McNamara

eBook - ePub, 186 pages, English
About the book

The purpose of this book is to use neuroscience discoveries concerning religious experiences, the Self and personhood to deepen, enhance and interrogate the theological and philosophical set of ideas known as Personalism. McNamara proposes a new eschatological form of personalism that is consistent with current neuroscience models of relevant brain functions concerning the self and personhood and that can meet the catastrophic challenges of the 21st century.

Eschatological Personalism, rooted in the philosophical tradition of "Boston Personalism", takes as its starting point the personalist claim that the significance of a self and personality is not fully revealed until it has reached its endpoint, but theologically that end point can only occur within the eschatological realm. That realm is explored in the book along with implications for personalist theory and ethics. Topics covered include the agent intellect, dreams and the imagination, future-orientation and eschatology, phenomenology of Time, social ethics, Love, the challenge of AI, privacy and solitude and the individual ethic of autarchy.

This book is an innovative combination of the neuroscientific and theological insights provided by a Personalist viewpoint. As such, it will be of great interest to scholars of Cognitive Science, Theology, Religious Studies and the philosophy of the mind.


Publication information

Publisher
Routledge
Year
2019
ISBN
9780429671432
Edition
1

1 The need for an eschatological personalism

Why do we need an eschatological form of personalism in the first place? We need this new form of personalism because we are entering a new axial age in human history. Some scientists have argued1 that in the past, cumulative human knowledge doubled approximately every century until 1900. By 1950, human knowledge was doubling every 25 years. In the first decade of the new millennium human knowledge was doubling every 13 months, and now in the age of globalized internet communications the doubling time of human knowledge is reduced to a day. This surfeit of information and knowledge is just one manifestation of the new axial age we are entering.
The first axial age involved the taming of fire by our most distant ancestors over a million years ago.2 The ability to use fire created the human lineage proper, and thus it is not mere hyperbole to say that a new technology, fire, created human beings. Our fire-wielding ancestors used that technology to transform their bodies and physiologies as well as the very worlds they lived in. The ecological niche of our great-ape ancestors was confined to tree canopies and grasslands in a limited range of forests and jungles in parts of the Old World. The use of fire, however, meant that our ancestors could leave that narrow niche and colonize vast new areas of the globe. Controlled fire could dramatically alter whole ecological landscapes via planned and strategic burns. Fire also allowed our ancestors to create warm hearths, cook food (which transformed our physiologies), and forge new tools and weapons that made them a deadly and efficient predator species. Fire also allowed our ancestors to colonize the night, thus opening up whole new opportunities for action and social interaction. Night activity very likely promoted the social sharing of dreams, increasing their strategic importance, a fact that would carry huge significance for later visionary religious forms, rituals, and activities. It is possible that religious ideas inspired the first controlled use of fire and that, conversely, the new technology facilitated the adoption of other new religious ideas.
Some one million years after the discovery of fire (and only about 8,000 years before the birth of Christ) the second axial age began. It too was associated with the rise of a revolutionary new technology: agriculture. The rise of farming and agriculture may have been given a boost by new religious ideas (as the work at Gobekli Tepe has shown3), and it, in turn, eventually (some 5,000 years later) promoted the rise of new city states and then of new philosophical and religious movements such as Zoroastrianism in Persia, Confucianism in China, Buddhism in India, the Hebrew prophets in Israel, and the philosophers in Greece. The rise of Zoroastrianism in the Neolithic period heralded the first appearance of eschatological thought as well. All the major elements of eschatology as we understand it today are articulated at its birth among the Zoroastrians: a messianic figure (the Saoshyant, born to a virgin) who sacrifices himself to bring the evil age to an end and birth a new one; a final battle between good and evil; a final judgment of the individual conscience; resurrection of the dead; and an ultimate paradise on earth where love and justice reign forever.4
Some 2,500 years after the flowering of these movements, we are entering a third axial age. Once again the revolutionary transition in human consciousness and behavior is associated with the rise of a new technology. In this case that new technology began with the industrial machines of the 19th century but is coming to fruition with the rise of super-intelligent, semi-conscious, artificially intelligent (AI) machines.
An eschatological personalism is necessary at this point in history not just because we are facing all the familiar planetary-wide crises, such as climate change, massive population growth and transfers from the global south to the north, the continuing threat of regional wars, pandemic disease outbreaks, terrorism, and the use of weapons of mass destruction, but because, in addition, we are entering into a whole new relationship with the machines in our lives. The rise of intelligent machines is already transforming what it means to be a person. These machines both enhance the personal and annihilate it. We need a personalism that can teach us how to benefit from, rather than be threatened by, these new machines.
A threat facing persons right on the immediate horizon is that these new machines are being used to enhance the powers of the modern super-state and the capital accumulation powers of the modern business corporation.5 These machines are enhancing both the corporation's and the state's surveillance abilities, weapons capacities, policing abilities, taxing abilities, and coercive abilities, to name a few. The AI machines allow the state to intrusively follow, tabulate, catalog, number, manipulate, nudge, and control every man, woman, and child in the world from the cradle (nay, from conception) to the grave. The unprecedented challenges associated with the rise of AI, in particular, will require a personalism capable of safeguarding the dignity, freedom, and indeed the very life of the human person when the inevitable "singularity" arrives in the form of AI "persons" who are smarter than their flesh-and-blood counterparts.
To understand the huge challenge the rise of AI presents to personalism’s fundamental claims, it is necessary to understand why the claim that AI machines will not only supersede human intelligence (exhibiting super-intelligent properties) but will also almost certainly exhibit all the signs of “consciousness” is very likely true. We need an eschatological personalism because we will, in the near future, very likely encounter on an almost daily basis conscious, super-intelligent machines. The only models we have to help us understand entities like these conscious robots are perhaps the human savants who are obviously conscious beings but who can do superhuman things like calculate at lightning speeds. But many super-intelligent AI machines will be good at far more than lightning speed mathematical calculations. AI machines can draw upon huge masses of data from sources including e-commerce, businesses, social media, smart phones, science, and government, which regularly provide the raw material for dramatically improved machine learning approaches and algorithms. Their computing power is orders of magnitude larger than any human being could ever hope to achieve. In addition, most new AI machines are fitted out with artificial neural network (ANN) learning algorithms so that they can depart from original coding instructions and do unexpected things including innovate solutions to very complex problems.
An ANN is modeled on the human brain in that it typically has anywhere from dozens to millions of artificial neurons—called units—arranged in a series of layers. The input layer receives various forms of information from human coders or from sensors and millions of data input devices. The input layer formats and sends the data to the hidden layers for computations and further re-formatting which then allows the output layers to construct and repackage the information to meet standards set by human coders.
The connections between units within each layer are assigned different weights so that programmers can allow some information "to count for more" when computing outputs. If, for example, the standard the machine is aiming for is to find the best match, in a database of billions of faces, for a face presented at input, then those units handling information related to eyes and mouth might be given greater weight than those handling the overall shape of the face, and so on. ANNs can learn if they are given a massive database of materials to work with (called a training set). When, for example, you are trying to teach an ANN how to differentiate a human face from a monkey face, the training set would provide thousands of images tagged as human or monkey, so the network would eventually begin to learn the basic visual features associated with a human face. Each time it accurately classifies a face as human it is "rewarded" by sending that information back through the network and up-weighting the appropriate hidden units (those that facilitated the more accurate classification of the input stimulus). This is called back propagation. It works like any prediction–error scheme: a predictive standard is compared against what is actually obtained, and the difference between the standard and the obtained (the error signal) is computed. The machine is then directed to minimize that error in future computations by up-regulating, on each iteration, those units that performed best in the identification task. Known as deep learning, this is how both ANNs and, arguably, the human brain learn, and it is what makes a machine intelligent.
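The cycle just described (predict, compare against the tagged target, propagate the error back, re-weight the units) can be sketched in a few lines of code. This is a generic toy, a two-layer network learning the logical AND function, not an example drawn from the book; all sizes, rates, and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: two binary inputs, target = logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [0], [0], [1]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Units arranged in a series of layers": input -> hidden -> output.
W1 = rng.normal(size=(2, 4))   # input-to-hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden-to-output weights
b2 = np.zeros(1)

for step in range(10000):
    # Forward pass: the network's prediction for each input pattern.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Error signal: predicted output minus the tagged target.
    err = pred - y

    # Back propagation: push the error back through the layers and
    # adjust the weights that contributed most to the mistake.
    d_out = err * pred * (1 - pred)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_hid)
    b1 -= 0.5 * d_hid.sum(axis=0)
```

The same predict, compare, re-weight cycle, scaled up to millions of units and labeled images, is what the face-classification example above relies on.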
Deep convolutional neural networks (DCNNs) have been particularly successful in making machines intelligent. Not surprisingly, perhaps, they are even more like the human brain than their ANN ancestors. DCNNs are particularly adept at the difficult task of object recognition in natural scenes. For human brains, object recognition appears to be built up from very primitive visual features like edges and contrasts. These features are then combined into larger shapes further down the visual processing pathways until the whole object is put together (along with the subjective feeling of "recognition") in the inferior temporal cortex or in the occipital visual cortex. The same holds for DCNNs. Features selectively detected by the lower layers of a DCNN bear striking similarities to the low-level features processed by the early centers in the human visual processing pathway. Note that these similarities occur even though DCNNs were not explicitly designed to model the visual system; the DCNN exhibits these functional and hierarchical visual processing similarities after training on object recognition tasks. A more recent development in ANN architectures is generative adversarial networks (GANs). These are deep neural net architectures composed of two nets, each given the goal of out-predicting or outperforming the other, thus pitting one against the other. GANs have proven remarkably effective at all kinds of tasks, from object recognition to speech recognition, and they learn far more efficiently than traditional ANNs do.
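The adversarial idea can be illustrated with the smallest possible "GAN": a one-parameter generator that tries to mimic samples from a fixed Gaussian, and a logistic discriminator that tries to tell real samples from fakes. Everything here (the target distribution, the learning rates, the single-parameter "nets") is an invented toy, not a production GAN.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# "Real" data come from N(3, 0.5); the generator must learn mu near 3.
mu, sigma = 0.0, 0.5      # generator: fake = mu + sigma * noise
w, b = 1.0, 0.0           # discriminator: D(x) = sigmoid(w * x + b)
lr = 0.05

for step in range(2000):
    z = rng.normal(size=64)
    fake = mu + sigma * z
    real = rng.normal(3.0, 0.5, size=64)

    # Discriminator step: score real samples high and fakes low
    # (gradient of the standard binary cross-entropy loss).
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w -= lr * (-np.mean((1 - d_real) * real) + np.mean(d_fake * fake))
    b -= lr * (-np.mean(1 - d_real) + np.mean(d_fake))

    # Generator step: shift mu to fool the updated discriminator.
    d_fake = sigmoid(w * (mu + sigma * z) + b)
    mu -= lr * (-np.mean(1 - d_fake) * w)
```

Over training, mu drifts from 0 toward the real mean as each net tries to outperform the other; the same dynamic, with deep nets in place of these single parameters, is what makes GANs so effective.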

The Predictive Processing Framework (PPF)

These ANNs, DCNNs, and GANs operate using principles broadly consistent with predictive processing frameworks (PPFs6), or accounts of brain and cognitive processing. I will use the PPF in this book to present and understand neuroscience findings concerning brain/Mind functions, especially findings concerning the self and personhood. In predictive processing theories of Mind and brain, the brain is modeled as a prediction machine. It seeks to predict, guess, or anticipate what will occur and then samples incoming sensory information about what actually did occur in order to compare the actual data against the predicted simulation. It then computes the difference and attempts to minimize that difference in future simulations/predictions. These predictive simulations are theorized to occur at every level of the neuraxis, from primary motor levels right up to the most cognitively abstract levels subserved by the most recently evolved areas of the prefrontal lobes. While error signals propagate up the neuraxis from distal sensory receptors to the prefrontal cortex mostly via glutamatergic alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid (AMPA) receptors, predictive signals may be sent downward from higher hierarchical levels predominantly via glutamatergic N-methyl-D-aspartate receptor (NMDAR) signaling. Glutamatergic signaling systems, in turn, are known to be modulated by dopaminergic and cholinergic systems, depending on the particular inferential hierarchy.
When predictions or models are confirmed by sensory input, dopaminergic signaling is either unchanged or slightly down-regulated. But if expectations are violated (there is a mismatch between the predicted model and sensory feedback), then dopaminergic signaling is up-regulated, activating glutamatergic signaling in the neural hierarchy and facilitating processing of the error signal as novel, and therefore valuable, information. Novel signals are registered in the dopaminergic reward system, thus reinforcing learning and model updating. Brain oscillatory signals have also been related to predictive coding, with feedback signaling of predictions descending the neuraxis mediated predominantly by the alpha/beta frequency bands, and feedforward error signaling ascending the neuraxis mediated by theta activity (as in rapid eye movement, or REM, sleep) and cortical gamma-band activity.
In short, the brain uses error signals to infer the causes of incoming sensory data. Models are not generated merely via sampling incoming sensory information and related error signals. Models are also generated via active inference—that is, by acting on the world and sampling only that sensory information relevant to the action that best minimizes error. When hierarchical Bayesian inference entails modeling ourselves as agents who operate on the world or who select among possible world models or simulations, then experiences such as choice, agency, and selfhood can arise from, or be inferred from, the consequences of our own actions.
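The predict-sample-compare-update cycle described above can be reduced to a toy loop: a "model" tracks a single hidden cause through noisy sensory samples, revising itself in proportion to each prediction error. All quantities here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

hidden_cause = 10.0    # the actual state of the world
model = 0.0            # the system's current prediction of that state
learning_rate = 0.1    # how strongly each error revises the model
errors = []

for t in range(200):
    prediction = model                        # top-down prediction
    sample = hidden_cause + rng.normal(0, 1)  # noisy bottom-up input
    error = sample - prediction               # prediction error
    model += learning_rate * error            # minimize future error
    errors.append(abs(error))
```

Early errors are large (the model knows nothing); late errors shrink toward the sensory noise floor, which is the sense in which the system "minimizes the difference in future simulations."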
In any case it is an interesting fact that the models that best account for the intelligent activity we see associated with both the human brain and intelligent AI machines are neural network and PPF inspired models.

AI and the imago Dei

Theologically, one mark of personhood and of human dignity lies in the fact that human beings are made in the image and likeness of God—we carry within us the imago Dei. If AI machines one day become conscious (as I believe they will), will they then possess the imago Dei as well? What precisely is the imago Dei? If it is "reason" or intelligence, then AI machines will possess the imago Dei as well. If rationality is the key ingredient, as many classical personalist theories contend, then these machines will soon be seen as persons possessing the imago Dei as their decision-making capacities become more complex. If it is consciousness, then AI machines must be expected one day to carry the imago Dei, as most scientists (including this author) believe that these machines will, sooner rather than later, be conscious—especially if you define consciousness in terms of the Turing test or the more exacting integrated information theory test (IIT, Tononi 2014). The Turing test boils down to whether a machine can fool a human being into believing that they are dealing with another human being when in fact they are interacting with a machine. Most scientists would agree that intelligent AI machines have passed the Turing test. For IIT, an experience is conscious if it satisfies several conditions: it is actual and occurrent, is structured (composed of differing phenomenal elements), is specific and distinctive (it can be uniquely differentiated from other experiences), is unified (is experienced as one integrated whole), and is definite. Arguably, many object recognition events accomplished via AI DCNN machines meet these criteria for consciousness. Consider, for example, Google's Deep Dream machine productions.7 Anyone perusing the artistic productions of this machine, especially those involving human–machine collaborations, is forced to entertain the possibility that it displays a kind of aesthetic sensibility.
Consider further the case of Facebook's 2017 attempt to create bots that could interact fluently with human beings in a standard online chatbox format. Facebook engineers wanted to create AI machines that could do more than provide formulaic replies to standard customer questions.8 They wanted their machines to have the ability to negotiate with customers. Thus, these AI dialog machines were given the ability to build mental models of their interlocutors' intentions/minds and to "think ahead," anticipating directions a conversation might take in the future. Based on analysis of millions of previous conversations involving negotiations, the "dialog AI" simulates a future conversation by rolling out a dialog model to the end of the conversation, so that an utterance with the maximum expected future reward can be chosen and presented to the customer. Modeling the intentions and minds of interlocutors is a capacity we call mind-reading in the cognitive neuroscience literature. AI machines appear able to do some minimal forms of mind-reading now. These machines can model human minds and respond accordingly. Indeed, they can likely do some forms of mind-reading better than humans can. When Facebook engineers tested the chatbot dialog agents with real human customers, most people did not realize they were talking to a bot rather than another person (again implying that AI machines can pass the Turing test). There were cases where the machines initially feigned interest in an item, only to later "compromise" by conceding it—thus strategically implementing effective negotiating tactics that people use regularly. This behavior was not programmed by the researchers but was discovered by the bots themselves. All these accomplishments are amazing in themselves and speak to some level of consciousness in these machines. But then the Facebook engineers allowed these mind-reading dialog machines to speak to one another.
They appeared to develop their own human-machine hybrid language when conversing with one another. We have only a small piece of the conversation that ensued between the machines, Bob and Alice: Bob: “I can i i everything else” Alice: “balls have zero to me to me to me to me to me to me to me to me to”...
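The rollout procedure described above (simulating continuations of a conversation and choosing the utterance with the maximum expected future reward) can be sketched as follows. The candidate utterances, the payoff model, and the rollout simulation are all invented here for illustration; this is not Facebook's code.

```python
import random

random.seed(3)

# Hypothetical candidate utterances the agent could say next.
candidates = ["I want the ball.", "You can have the hat.", "Deal."]

# Invented stand-in for a learned dialog model: each rollout simulates
# one possible continuation of the conversation and returns its payoff.
BASE_PAYOFF = {"I want the ball.": 0.6,
               "You can have the hat.": 0.4,
               "Deal.": 0.9}

def simulate_rollout(utterance):
    return BASE_PAYOFF[utterance] + random.uniform(-0.1, 0.1)

def choose_utterance(candidates, n_rollouts=50):
    # Roll each candidate forward many times, average the payoff, and
    # present the utterance with the maximum expected future reward.
    def expected_reward(u):
        return sum(simulate_rollout(u) for _ in range(n_rollouts)) / n_rollouts
    return max(candidates, key=expected_reward)

print(choose_utterance(candidates))  # selects the highest-payoff utterance
```

In the real system, the rollout was itself a learned dialog model rather than a fixed payoff table, but the selection logic is the same: simulate, score, and pick the maximum.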
