Computational Models Of Cognitive Processes - Proceedings Of The 13th Neural Computation And Psychology Workshop

  1. 288 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About this book

Computational Models of Cognitive Processes collects refereed versions of papers presented at the 13th Neural Computation and Psychology Workshop (NCPW13), which took place in July 2012 in San Sebastian (Spain). This workshop series is a well-established and unique forum that brings together researchers from such diverse disciplines as artificial intelligence, cognitive science, computer science, neurobiology, philosophy and psychology to discuss their latest work on models of cognitive processes.

Contents:

  • Language:
    • Modelling Language — Vision Interactions in the Hub and Spoke Framework (A C Smith, P Monaghan and F Huettig)
    • Modelling Letter Perception: The Effect of Supervision and Top-Down Information on Simulated Reaction Times (M Klein, S Frank, S Madec and J Grainger)
    • Encoding Words into a Potts Attractor Network (S Pirmoradian and A Treves)
    • Unexpected Predictability in the Hawaiian Passive (Ō Parker Jones and J Mayor)
    • Difference Between Spoken and Written Language Based on Zipf's Law Analysis (J S Kim, C Y Lee and B T Zhang)
    • Reading Aloud is Quicker than Reading Silently: A Study in the Japanese Language Demonstrating the Enhancement of Cognitive Processing by Action (H-F Yanai, T Konno and A Enjyoji)
  • Development:
    • Testing a Dynamic Neural Field Model of Children's Category Labelling (K E Twomey and J S Horst)
    • Theoretical and Computational Limitations in Simulating 3- to 4-Month-Old Infants' Categorization Processes (M Mermillod, N Vermeulen, G Kaminsky, E Gentaz and P Bonin)
    • Reinforcement-Modulated Self-Organization in Infant Motor Speech Learning (A S Warlaumont)
    • A Computational Model of the Headturn Preference Procedure: Design, Challenges, and Insights (C Bergmann, L Ten Bosch and L Boves)
    • Right Otitis Media in Early Childhood and Language Development: An ERP Study (M F Alonso, P Uclés and P Saz)
  • High-Level Cognition:
    • The Influence of Implementation on “Hub” Models of Semantic Cognition (O Guest, R P Cooper and E J Davelaar)
    • Hierarchical Structure in Prefrontal Cortex Improves Performance at Abstract Tasks (R Tukker, A C Van Rossum, S Frank and W F G Haselager)
    • Interactive Activation Networks for Modelling Problem Solving (P Monaghan, T Ormerod and U N Sio)
    • On Observational Learning of Hierarchies in Sequential Tasks: A Dynamic Neural Field Model (E Sousa, W Erlhagen and E Bicho)
    • Knowing When to Quit on Unlearnable Problems: Another Step Towards Autonomous Learning (T R Shultz and E Doty)
    • A Conflict/Control-Loop Hypothesis of Hemispheric Brain Reserve Capacity (N Rendell and E J Davelaar)
  • Action and Emotion:
    • Modeling the Actor-Critic Architecture by Combining Recent Work in Reservoir Computing and Temporal Difference Learning in Complex Environments (J J Rodny and D C Noelle)
    • The Conceptualisation of Emotion Qualia: Semantic Clustering of Emotional Tweets (E Y Bann and J J Bryson)
    • A Neuro-Computational Study of Laughter (M F Alonso, P Loste, J Navarro, R Del Moral, R Lahoz-Beltra and P C Marijuán)


Readership: Students and researchers in biocybernetics, neuroscience, cognitive science, psychology and artificial intelligence and those interested in neural models of psychological phenomena.

Computational Models Of Cognitive Processes - Proceedings Of The 13th Neural Computation And Psychology Workshop, edited by Julien Mayor and Pablo Gomez, is available in PDF and ePUB formats, and is catalogued under Biological Sciences & Science General.

Language

MODELLING LANGUAGE – VISION INTERACTIONS IN THE
HUB AND SPOKE FRAMEWORK

A. C. SMITH
Max Planck Institute for Psycholinguistics
Nijmegen, The Netherlands
P. MONAGHAN
Department of Psychology, Lancaster University
Lancaster LA1 4YF, UK
F. HUETTIG
Max Planck Institute for Psycholinguistics
Nijmegen, The Netherlands
Multimodal integration is a central characteristic of human cognition. However, our understanding of the interaction between modalities and its influence on behaviour is still in its infancy. This paper examines the value of the Hub & Spoke framework1,4 as a tool for exploring multimodal interaction in cognition. We present a Hub and Spoke model of language–vision interaction and report the model's ability to replicate a range of phonological, visual and semantic similarity word-level effects reported in the Visual World Paradigm.5,6 The model provides an explicit connection between the percepts of language and the distribution of eye gaze, and demonstrates the scope of the Hub-and-Spoke architectural framework by modelling new aspects of multimodal cognition.

1. Introduction

Hub and Spoke (H&S) models1-4 are characterised by a central resource that integrates modality-specific information. The approach reflects the increased interest and awareness within cognitive science of multimodal cognitive interactions. To date, computational implementations of the H&S framework have been used in conjunction with neuropsychological data to offer both an explanation for a range of semantically related neuropsychological disorders and insight into how semantic processing may be implemented within the brain. This paper aims to highlight the potential for broader application of this framework. Research within cognitive neuroscience has demonstrated the difficulty of assigning modality-specific functions to distinct neural processing regions (see Anderson7; Poldrack8). An increased understanding of how modality-specific information may be integrated with information from other modalities, and of how such a system may behave, could therefore prove valuable to neuropsychology. The H&S computational modelling framework offers a tool for investigating such complex interactive aspects of multimodal cognition.

2. Virtues of the Hub & Spoke Framework

When hearing a spoken word such as “apple” it is possible to bring to mind its visual form. When seeing an object such as an apple, it is likewise possible to bring to mind the spoken word used to describe it, “apple”. How are modality-specific representations connected across modalities? What is the nature of representation in each modality, and how are the connections between representations acquired? Previous H&S models have offered answers to each of these questions.
The H&S framework has proved successful by providing a parsimonious architecture in which single-modality models can be drawn together to examine the consequences of multimodal interaction and representation. Due to the complexity inherent in multimodal processing, predicting the connectivity between modalities without explicit implementation can be challenging. For instance, an apparent dissociation between lexical and semantic performance in semantic dementia patients suggested the need for separate systems supporting lexical and semantic processing. However, the H&S framework offered a means of testing the compatibility of a fully integrated model with the behavioural data, and Dilkina et al.3,4 demonstrated that, counter to previous assumptions, the observed pattern of behaviour was consistent with a single-system H&S model.
The H&S framework offers a single-system architecture with minimal architectural assumptions, and this makes it possible to isolate the influence of two further major determinants of emergent behaviour in such complex multimodal systems: (1) the structure of representations and (2) the tasks or mappings demanded by the learning environment.
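The architecture described above can be sketched in a few lines of code: several modality-specific "spoke" layers, each connected to a single shared "hub" layer through which all cross-modal mappings pass. The sketch below is purely illustrative; the layer names, sizes, random weights and single feedforward pass are assumptions for exposition, not the published model's architecture or training regime.

```python
# A minimal, illustrative Hub-and-Spoke forward pass: every mapping between
# modalities is routed through one shared integrative hub layer.
# Layer sizes and weights are arbitrary toy values, not the actual model.
import numpy as np

rng = np.random.default_rng(0)

SPOKES = {"phonology": 20, "vision": 30, "semantics": 25}
HUB = 40  # the single integrative resource shared by all modalities

# Each spoke has its own weights into and out of the shared hub.
w_in = {m: rng.normal(0, 0.1, (HUB, n)) for m, n in SPOKES.items()}
w_out = {m: rng.normal(0, 0.1, (n, HUB)) for m, n in SPOKES.items()}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def map_modalities(x, src, dst):
    """Map a pattern in one modality to another through the single hub."""
    hub = sigmoid(w_in[src] @ x)      # spoke -> hub (integration)
    return sigmoid(w_out[dst] @ hub)  # hub -> spoke (re-expression)

# Hearing "apple" (a phonological pattern) evokes a visual pattern.
spoken = rng.random(SPOKES["phonology"])
visual = map_modalities(spoken, "phonology", "vision")
print(visual.shape)
```

Because every cross-modal route shares the same hub weights, structure learned for one mapping can shape behaviour on all the others, which is what makes the two determinants named above (representational structure and task environment) so influential.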
Plaut1 and Rogers et al.2 present two alternative means of exploring the role of representational structure through use of the H&S framework. Plaut1 focused on a single aspect of representational structure, specifically systematic or arbitrary relationships between modalities. By abstracting out additional complexity within representations, Plaut was able to investigate the emergent properties of this single factor. In contrast, Rogers et al.2 and Dilkina et al.3,4 provided a richer representation of the structure available within the learning environment by deriving semantic representations from attribute norming studies. This enabled the authors to examine the emergent properties that a single-system multimodal model can develop from such richer input. It is through simulating such complexity within the learning environment that their model was able to replicate the broad variability displayed by semantic dementia patients, which had previously been viewed as challenging for single-system accounts. Such approaches demonstrate the framework's potential for providing a more detailed understanding of how representational structure shapes multimodal cognition.
As the H&S framework allows a model to perform multiple mappings, decisions are required as to which mappings are performed, how frequently they are performed, and how these variables might change over the course of development. Dilkina et al.4 introduced stages of development within the model training process. They attempted to provide a more accurate depiction of the constraints placed on systems during development by manipulating the frequency and period in which given tasks are performed by the model (e.g., mapping orthography to phonology was only performed during the second stage of development). This is an example of how the framework can be used to explore the relationship between environmental constraints, such as the type and frequency of mappings performed during development, and the emergent behaviour displayed by the system.
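A staged training schedule of the kind described above amounts to sampling the next training task from a stage-dependent frequency table. The sketch below illustrates that idea only; the task names, stage boundaries and frequencies are invented for illustration and are not taken from Dilkina et al.'s simulations.

```python
# Illustrative staged task schedule: each developmental stage lists the
# mappings the model may practise and their relative frequencies.
# All names and numbers here are hypothetical, chosen only to show the
# mechanism (e.g. orthography->phonology appearing only in stage 2).
import random

random.seed(1)

STAGES = [
    # Stage 1: pre-literate; only spoken-word and visual mappings.
    {"phonology->semantics": 3, "vision->semantics": 3},
    # Stage 2: reading is introduced and trained most frequently.
    {"phonology->semantics": 2, "vision->semantics": 2,
     "orthography->phonology": 4},
]

def sample_task(stage):
    """Draw the next training task in proportion to its stage frequency."""
    tasks, weights = zip(*STAGES[stage].items())
    return random.choices(tasks, weights=weights)[0]

# Simulate 1000 training trials in stage 2 and count task occurrences.
counts = {t: 0 for t in STAGES[1]}
for _ in range(1000):
    counts[sample_task(1)] += 1
print(counts)
```

Varying either the stage boundaries or the frequency tables is then a direct way to manipulate the environmental constraints whose behavioural consequences the framework is used to explore.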
To date, the H&S framework has been used primarily in conjunction with neuropsychological data. This approach provides clear advantages when aiming to map network architecture onto neural populations, and has brought significant progress in this direction, with evidence emerging for a mapping of the semantic hub (integrative layer) onto neural populations in the anterior temporal lobe.9 The framework, however, also offers scope for examining the factors underlying individual differences within non-patient populations, a feature yet to be exploited. For example, as we have described, the framework makes it possible to examine how contrasts in the learning environment, be it in the input to the system (e.g., richness or diversity of input) or the mappings demanded (e.g., learning to read: orthography to phonology), can result in variation in behaviour both across development and in mature systems.
Further, as multimodal integration is central to many aspects of human cognition, the H&S framework has the potential to provide insight into many new areas of cognitive processin...

Table of contents

  1. Cover
  2. Halftitle page
  3. Frontmatter
  4. Title Page
  5. Copyright
  6. Preface
  7. Language
  8. Development
  9. High-Level Cognition
  10. Action and Emotion