10th Annual Conference Cognitive Science Society Pod
About this book

First published in 1988. A collection of papers, presentations, and poster summaries from the tenth annual conference of the Cognitive Science Society, held in Montreal, Canada, in August 1988.


Recursive Auto-Associative Memory: Devising Compositional Distributed Representations

Jordan Pollack

Computing Research Laboratory
New Mexico State University

INTRODUCTION

A major outstanding problem for connectionist models is the representation of variable-sized recursive and sequential data structures, such as trees and stacks, in fixed-resource systems. Such representational schemes are crucial to efforts in modeling high-level cognitive faculties, such as natural language processing. Pure connectionism has thus far generated somewhat unsatisfying systems in this domain: for example, parsers restricted to fixed-length sentences (Cottrell, 1985; Fanty, 1985; Selman, 1985; Hanson & Kegl, 1987) or to flat ones (McClelland & Kawamoto, 1986).1
Thus, one of the main attacks on connectionism has been on the inadequacy of its representations, especially on their lack of compositionality (Fodor & Pylyshyn, 1988).
However, some design work has been done on general-purpose distributed representations with limited capacity for sequential or recursive structures. For example, Touretzky has developed a coarse-coded memory system and used it both in a production system (Touretzky & Hinton, 1985) and in two other symbolic processes (Touretzky, 1986a, 1986b). In the past-tense model, Rumelhart and McClelland (1986) developed an implicitly sequential representation, in which a pattern of well-formed overlapping triples could be interpreted as a sequence.
Although both representations were successful for their prescribed tasks, there remain some problems.
• First, a large amount of human effort was involved in the design, compression and tuning of these representations.
• Second, both require expensive and complex access mechanisms, such as pullout networks (Mozer, 1984) or clause-spaces (Touretzky & Hinton, 1985).
• Third, they can only encode structures composed of a fixed, tiny set of representational elements (i.e., triples of 25 tokens), and can represent only a small number of these element-structures before spurious elements are introduced.2 These representational spaces are, figuratively speaking, like a "prairie" covered in short bushes of only a few species.
• Finally, they utilize only binary codes over a large set of units.
The compositional distributed representations devised by the technique to be described below demonstrate somewhat opposing, and, I believe, better properties:
• Less human work in design by letting a machine do the work,
• Simple and deterministic access mechanisms,
• A more flexible notion of capacity in a "tropical" representational space: a potentially very large number of primitive species, which combine into tall but sparse structures.
• Finally, the utilization of analog encodings.
The rest of this paper is organized as follows. First, I describe the strategy for learning to represent stacks and trees, which involves the co-evolution of the training environment along with the access mechanisms and distributed representations. Second, I allude to several experiments using this strategy, and provide the details of an experiment in developing representations for binary syntactic trees. And, finally, some crucial issues are discussed.

RECURSIVE AUTO-ASSOCIATIVE MEMORY

Learning To Be A Stack

Consider a variable-depth stack of L-bit items. For a particular application, both the set of items (i.e., a subset of the 2^L possible patterns) and the order in which they are pushed and popped are much more constrained than, say, all possible sequences of N such patterns, of which there are 2^(LN). Given this fact, it should be possible to build a stack with fewer than LN units (as in a shift-register approach) but more than L units with less than N bits of analog resolution (as in an approach using fractional encodings, such as the one I used in the construction of a "neuring machine" (Pollack, 1987a)).
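To make the counting concrete (the numbers here are mine, purely for illustration): with L = 8 and N = 10, there are 2^80 possible sequences, and a shift-register stack would spend LN = 80 binary units to be able to hold any of them exactly. A real application that only ever pushes a few dozen distinct items in a few constrained orders needs far less information than that, which is what makes an M-unit analog representation with L < M < LN plausible.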
The problem is finding such a stack for a particular application.
[Figure 1: Proposed inverse stack mechanisms in single-layered feedforward networks.]
Consider representing a stack in an activity vector of M bounded analog values, where M > L. Pushing an L-bit vector onto the stack is essentially a function with L+M inputs, for the new item to be pushed plus the current value of the stack, and M outputs, for the new value of the stack. Popping the stack is a function with M input units, for the current value of the stack, and L+M output units, for the top item plus the representation of the remaining stack. Potential mechanisms in the form of single-layered networks are shown in Figure 1. The operation performed by a single layer is a vector-by-matrix multiplication followed by a non-linear scaling of each output to between 0 and 1 by the logistic function.
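As a concrete sketch of these two mechanisms (a minimal illustration, not the paper's implementation: the dimensions, names, and random weights below are my own, and in practice the weights would be trained so that pop inverts push), the single-layer push and pop networks can be written in a few lines of NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
L, M = 8, 16  # item width and stack-vector width, with M > L

# Hypothetical weight matrices; in the paper these would be learned
# (e.g., by back-propagation), not drawn at random.
W_push = rng.normal(scale=0.1, size=(M, L + M))   # maps (item, stack) -> new stack
W_pop  = rng.normal(scale=0.1, size=(L + M, M))   # maps stack -> (top item, rest)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # logistic scaling into (0, 1)

def push(item, stack):
    # One layer: vector-by-matrix multiply on the concatenated inputs, then logistic.
    return sigmoid(W_push @ np.concatenate([item, stack]))

def pop(stack):
    # One layer: decode the stack vector into L+M outputs and split them.
    out = sigmoid(W_pop @ stack)
    return out[:L], out[L:]
```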
All we need for a stack mechanism, then, are these two functions plus a distinguished M-vector of numbers, ε, the empty vector. To push an element onto the stack, simply encode the element together with the current stack; to pop the stack, decode it into the top element and the former stack. Note that this is a recursive definition, in which previously encoded stacks are used in further encodings. The problem is that it is not at all clear how to design these functions, which involve some magical way to recursively encode L+M numbers into M numbers while preserving enough information to consistently decode the L+M numbers back.
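Continuing the sketch above (same caveats apply: with untrained weights the decode will not actually match), the recursive usage looks like this, with ε chosen here, arbitrarily, as a constant vector:

```python
empty = np.full(M, 0.5)                       # epsilon: a distinguished "empty stack" vector
a = rng.integers(0, 2, size=L).astype(float)  # two L-bit items
b = rng.integers(0, 2, size=L).astype(float)

s1 = push(a, empty)   # stack holding a
s2 = push(b, s1)      # stack holding b, a -- note s1, itself an encoding, is re-encoded
top, rest = pop(s2)   # once trained, top should approximate b and rest should approximate s1
```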
One clue for how to do this comes from the Encoder Problem (Ackley, Hinton, & Sejnowski, 1985), where a sparse set of fixed-width patterns is encoded i...

Table of contents

  1. Cover
  2. Title Page
  3. Copyright Page
  4. Table of Contents
  5. The Place of Cognitive Architectures in a Rational Analysis
  6. Transitions in Strategy Choices
  7. VITAL, A Connectionist Parser
  8. Applying Contextual Constraints in Sentence Comprehension
  9. Recursive Auto-Associative Memory: Devising Compositional Distributed Representations
  10. Experiments with Sequential Associative Memories
  11. Representing Part-Whole Hierarchies in Connectionist Networks
  12. Using Rules and Task Division to Augment Connectionist Learning
  13. Analyzing a Connectionist Model as a System of Soft Rules
  14. How to Summarize Text (and Represent it too)
  15. On-Line Processing of a Procedural Text
  16. Understanding Stories in their Social Context
  17. Context Effects in the Comprehension of Idioms
  18. Action Planning: Routine Computing Tasks
  19. Using Conversation MOPs to Integrate Intention and Convention in Natural Language Processing
  20. A Theory of Simplicity
  21. Basic Levels in Hierarchically Structured Categories
  22. Flexible Natural Language Processing and Roschian Category Theory
  23. The Induction of Mental Structures While Learning to Use Symbolic Systems
  24. Transitory Stages in the Development of Medical Expertise: The "Intermediate Effect" in Clinical Case Representation Studies
  25. Integrating Marker Passing and Connectionism for Handling Conceptual and Structural Ambiguities
  26. Learning Subgoals and Methods for Solving Problems
  27. Opportunistic Use of Schemata for Medical Diagnosis
  28. Integrating Case-Based and Causal Reasoning
  29. Modeling Software Design Within a Problem-Space Architecture
  30. Modeling Human Syllogistic Reasoning in Soar
  31. Integrated Commonsense and Theoretical Mental Models in Physics Problem Solving
  32. A Connectionist Model of Selective Attention in Visual Perception
  33. How Near is too Far? Talking About Visual Images
  34. An Adaptive Model for Viewpoint-Invariant Object Recognition
  35. Spatial Reasoning Using Sinusoidal Oscillations
  36. An Unsupervised PDP Learning Model for Action Planning
  37. Representation and Recognition of Biological Motion
  38. When half right is not half bad: Hypothesis testing under conditions of uncertainty and complexity
  39. A Theory of Scientific Problem Solving
  40. Empirical Analyses and Connectionist Modeling of Real-Time Human Image Understanding
  41. Learning to Represent and Understand Locative Prepositional Phrases
  42. A Computational Model of Syntactic Ambiguity as a Lexical Process
  43. A Parallel Model for Adult Sentence Processing
  44. Parsing Metacommunication in Natural Language Dialogue to Understand Indirect Requests
  45. Interpretation of Quantifier Scope Ambiguities
  46. Multiple Simultaneous Interpretations of Ambiguous Sentences
  47. The Role of Analogy in a Theory of Problem Solving
  48. The Architecture of Children’s Physics Knowledge: A Problem-Solving Perspective
  49. Hierarchical Problem Solving as a Means of Promoting Expertise
  50. Collaborative Cognition
  51. Explorations in Understanding How Physical Systems Work
  52. Instructional Strategies for a Coached Practice Environment
  53. Creatures of Habit: A Computational System to Enhance and Illuminate the Development of Scientific Thinking
  54. Propositional Attitudes, Commonsense Reasoning and Metaphor
  55. A Process-Oriented, Intensional Model of Knowledge and Belief
  56. Subcognitive Probing: Hard Questions for the Turing Test
  57. The Pragmatics of Expertise in Medicine
  58. Cognitive Flexibility Theory: Advanced Knowledge Acquisition in Ill-Structured Domains
  59. A Hybrid Connectionist/Production System Interpretation of Age Differences in Perceptual Learning
  60. Language Experience and Prose Processing in Adulthood
  61. Effects of Age and Skill on Domain-Specific Visual Search
  62. Patching Up Old Plans
  63. Systematicity as a Selection Constraint in Analogical Mapping
  64. Abstraction Processes During Concept Learning: A Structural View
  65. Explanatory Coherence and Belief Revision in Naive Physics
  66. Access and Use of Previous Solutions in a Problem Solving Situation
  67. The Use of Explanations for Completing and Correcting Causal Models
  68. Varieties of Learning from Problem Solving Experience
  69. The Process of Learning LISP
  70. Interactive Medical Problem Solving in the Context of the Clinical Interview: The Nature of Expertise
  71. A Dynamical Theory of the Power-Law of Learning in Problem-Solving
  72. Assessing the Structure of Knowledge in a Procedural Domain
  73. Improvement in Medical Knowledge Unrelated to Stable Knowledge
  74. Sequential Connectionist Networks for Answering Simple Questions about a Microworld
  75. The Usefulness of the Script Concept for Characterizing Dream Reports
  76. The Right of Free Association: Relative-Position Encoding for Connectionist Data Structures
  77. Unsupervised Learning of Correlational Structure
  78. On the Application of Medical Basic Science Knowledge in Clinical Reasoning: Implications for Structural Knowledge Differences Between Experts and Novices
  79. Similarity-Based and Explanation-Based Learning of Explanatory and Nonexplanatory Information
  80. Causal Reasoning about Complex Physiological Mechanisms by Novices
  81. A Comparison of Context Effects for Typicality and Category Membership Ratings
  82. Naive Materialistic Belief: An Underlying Epistemological Commitment
  83. Writing Expertise and Second Language Proficiency: Algorithms and Implementations?
  84. The Minimal Chain Principle: A Cross Linguistic Study of Syntactic Parsing Strategies
  85. Multiple Character Recognition - A Simulation Model
  86. NETZSPRECH - Another Case for Distributed 'Rule' Systems
  87. Intuitive Notions of Light and Learning About Light
  88. Signalling Importance in Spoken Narratives: The Cataphoric Use of the Indefinite THIS
  89. Planning and Implementation Errors in Algorithm Design
  90. Conceptual Slippage and Analogy-Making: A Report on the Copycat Project
  91. Direct Inferences in a Connectionist Knowledge Structure
  92. Problem Solving is what you do when you don’t know what to do
  93. Cirrus: Inducing Subject Models from Protocol Data
  94. Three Kinds of Concepts?
  95. Constructing Coherent Text Using Rhetorical Relations
  96. Defeasibility in Concept Combination: A Critical Approach
  97. The Comprehension of Architectural Plans By Expert and Sub-Expert Architects
  98. Processing Aspectual Semantics
  99. Gain Variation in Recurrent Error Propagation Networks
  100. Conjoint Syntactic and Semantic Context Effects: Tasks and Representations
  101. Generalization by Humans and Multi-Layer Adaptive Networks
  102. Pattern-Based Parsing Word-Sense Disambiguation
  103. Multiple Theories in Scientific Discovery
  104. Text Comprehension: Macrostructure and Frame
  105. The Structure of Social Mind: Empirical Patterns of Large-scale Knowledge Organization
  106. A Model of Meter Perception in Music
  107. Acquiring Computer Skills by Exploration versus Demonstration
  108. A Hybrid Model for Controlling Retrieval of Episodes
  109. The Role of Mapping in Analogical Transfer
  110. Spatial Attention and Subitizing: An Investigation of the FINST Hypothesis
  111. A Computational Model of Reactive Depression
  112. Reasoning by Rule or Model?