Neural Networks for Knowledge Representation and Inference
528 pages · English · ePUB

About this book

The second published collection based on a conference sponsored by the Metroplex Institute for Neural Dynamics -- the first was Motivation, Emotion, and Goal Direction in Neural Networks (LEA, 1992) -- this book addresses the controversy between symbolicist artificial intelligence and neural network theory. A central issue is how well neural networks -- well established for statistical pattern matching -- can perform the higher cognitive functions more often associated with symbolic approaches. This controversy has a long history, but it recently erupted anew with arguments against the abilities of renewed neural network developments. More broadly than other attempts, the diverse contributions presented here not only address the theory and implementation of artificial neural networks for higher cognitive functions, but also critique the epistemologies assumed in the history of both neural networks and AI, and include several neurobiological studies of human cognition as a real system to guide the further development of artificial ones.

Organized into four major sections, this volume:
* outlines the history of the AI/neural network controversy and the strengths and weaknesses of both approaches, and shows that capabilities such as generalization and discreteness lie along a broad but common continuum;
* introduces several explicit, theoretical structures demonstrating the functional equivalences of neurocomputing with the staple objects of computer science and AI, such as sets and graphs;
* shows variants on these types of networks that are applied in a variety of spheres, including reasoning from a geographic database, legal decision making, story comprehension, and performing arithmetic operations;
* discusses knowledge representation processes in living organisms, including evidence from experimental psychology, behavioral neurobiology, and electroencephalographic responses to sensory stimuli.


II

ARCHITECTURES FOR KNOWLEDGE REPRESENTATION

5

Representing Discrete Structures in a Hopfield-Style Network

Arun Jagota
State University of New York at Buffalo
We have developed a variant (essentially a special case) of the discrete Hopfield network, which we call the Hopfield-Style Network (HSN). The stable states of HSN are the maximal cliques of an underlying graph. We exploit this graph-theoretic characterization to represent — as associative memories — several discrete structures in HSN. All representable structures are stored in HSN via its associative memory storage rule. We describe representations of sets (with PDP schemata as example), relations (with PDP “Jets and Sharks” as example), multi-relations (with word-dictionaries as example), graphs (with PDP schemata and binary relations as examples), Boolean formulae, and *-free regular expressions (with the restaurant script as example). We also discuss the robustness of these representations. Our main result is that several different kinds of discrete structures are representable — in distributed fashion — in HSN, a simple Hopfield-type energy-minimizing (constraint-satisfaction) parallel-distributed network. For knowledge representation and retrieval, we have extended the scope of representations possible in Hopfield networks, while retaining (and improving upon) the good features of such networks: (1) spontaneous constraint-satisfaction and (2) retrieval of stored schemata from noisy and incomplete information.

1. INTRODUCTION

Knowledge representation (KR) is a key issue in artificial intelligence (AI). Compared with traditional KR systems, neural networks (connectionist models) provide novel (e.g., distributed) means of representing knowledge, in “natural” analogies with the brain. For them to have serious applications in AI, however, novelty and brain-analogies alone are insufficient: efficient representability of the variety of knowledge that AI systems deal with is also required.
The use of neural networks for knowledge representation has been much studied. For a good overview see Zeidenberg (1990, Chapter 4). One model that has received considerable attention is the PDP schemata model of Rumelhart, Smolensky, McClelland, and Hinton (1986). Their approach is distributed — schemata are stored implicitly as collections of micro-features, one network unit per micro-feature. Units (micro-features) can appear in multiple schemata. This model is essentially a Hopfield constraint-satisfaction network (Hopfield, 1982). The stored knowledge is retrieved by Hopfield energy-minimizing (relaxation) computations. Implicit schemata and error-correcting associative retrieval emerge spontaneously as collective properties of the units and the weights. This is the main significance of their work, and also that of the Hopfield network.
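The Hopfield-style storage and energy-minimizing retrieval described above can be sketched in a few lines. This is a generic discrete Hopfield network with Hebbian storage, not the chapter's HSN; the patterns are invented for illustration.

```python
import numpy as np

def store(patterns):
    """Hebbian storage rule: sum of outer products, zero diagonal."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p, dtype=float)
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W

def retrieve(W, state, sweeps=10):
    """Asynchronous +1/-1 updates; each flip never increases the energy."""
    s = np.asarray(state, dtype=float).copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

def energy(W, s):
    """Hopfield energy; stable states are its local minima."""
    return -0.5 * s @ W @ s

patterns = [[1, 1, 1, -1, -1, -1], [-1, -1, -1, 1, 1, 1]]
W = store(patterns)
noisy = [1, -1, 1, -1, -1, -1]   # first pattern with one bit flipped
recalled = retrieve(W, noisy)    # error-correcting associative retrieval
```

Running `retrieve` on the noisy probe recovers the first stored pattern, illustrating the retrieval-from-incomplete-information property the text emphasizes.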
While the above properties are attractive, the PDP/Hopfield approach to AI has in practice1 been limited to representing non-recursive finite structures. In AI, recursive structures (e.g., trees) are frequently needed. Jordan Pollack observed that this representational inadequacy is a common feature of most connectionist models to date, and suggested a connectionist architecture (Pollack, 1989) that forms recursive distributed representations as a solution to this problem.
In this chapter we take the (opposite) bottom-up approach. Rather than attempting to find a connectionist model to represent structures (perhaps difficult to represent in connectionist models) that might be required for certain AI tasks, we start with a simple connectionist model and explore what it can represent.2 Our simple structure is a Hopfield-Style Network (HSN), a variant (essentially a special case) of the Hopfield network, which we proposed recently (Jagota, 1990a). As with the Hopfield network, structures are “representable” in HSN exactly as stable states. In contrast to the Hopfield network, the stable states of HSN are characterised simply and exactly: they are precisely the maximal cliques of an underlying graph. This has allowed us to ascertain theoretically what HSN can represent. In general, any discrete structure is representable if it can be transformed to a graph whose maximal cliques represent the desired information. Here we specifically describe how HSN can represent sets, relations, “multi-relations”, graphs, Boolean formulae, and *-free regular expressions. For all the above structures except sets and graphs3, HSN provides “perfect” stable storage: any collection of memories (up to on the order of 2^N of them) can be stored stably in a network of 2N units. Spurious memories do develop, however, but they have a graph-theoretic interpretation. The stable storage properties of HSN are near-optimal in space and time. Due to spurious memories, however, what scales poorly is not the complexity but the functional performance. Nevertheless, we think this near-optimal space and time complexity is of importance to knowledge representation in AI.
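The maximal-clique characterization can be checked directly: a binary state is a candidate stable state exactly when its "on" vertices form a maximal clique, i.e., every pair is connected and no further vertex can be added. A brute-force sketch (the graph and the stored sets are invented for illustration; HSN's actual storage rule follows Jagota, 1990a, and is not reproduced here):

```python
from itertools import combinations

def is_clique(graph, nodes):
    """Every pair of 'on' vertices must be connected by an edge."""
    return all(v in graph[u] for u, v in combinations(nodes, 2))

def is_maximal_clique(graph, nodes):
    """A clique to which no further vertex of the graph can be added."""
    if not is_clique(graph, nodes):
        return False
    others = set(graph) - set(nodes)
    return not any(all(v in graph[u] for u in nodes) for v in others)

# Store the sets {a, b, c} and {c, d} by making each one a clique
# (adjacency given both ways, since edges are undirected).
graph = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}
print(is_maximal_clique(graph, {"a", "b", "c"}))  # True: stored set is stable
print(is_maximal_clique(graph, {"a", "b"}))       # False: "c" can still be added
```

Note that both stored sets emerge as maximal cliques, while partial patterns such as {a, b} are not stable, matching the completion behavior described in the text.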
We illustrate applications of the above representable structures to knowledge representation in AI. For example, PDP-style schemata are represented in HSN as graphs. Constraints are represented by edges and schemata emerge as maximal cliques. Schemata can also be represented explicitly, in which case the set representation is used. Since HSN is a Hopfield-type Network, the PDP approach and ours are very similar at the macro-level. As with theirs, our representations are distributed, and associative retrieval is a natural emergent property. The theory of HSN provides additional support, however. The schemata (stable states) are characterised exactly. It is easier to see if a schema will be stored stably and if spurious schemata will emerge. Thus the application of HSN to PDP-style schemata is likely to scale better because storage properties are easier to ascertain, both analytically, and by inspection. In Section 3.4.1, we compare the PDP approach and ours in more detail.
We should mention that this idea of representing knowledge in maximal cliques dates back to at least the early 1970s — in the guise of graph theoretical cluster techniques. Augustson and Minker (1970) discuss how thesauri can be represented as maximal cliques. The terms used to index documents are represented by the vertices. An edge between two vertices denotes a positive association between terms. A maximal clique denotes a maximal set of terms which are pairwise positively associated. The emergence of maximal cliques from pairwise positive associations is termed “clustering.”
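The clustering idea of Augustson and Minker can be reproduced by enumerating maximal cliques of the association graph. A brute-force enumeration over all vertex subsets suffices for small term sets (exponential in general; the terms and associations below are invented for illustration):

```python
from itertools import combinations

def maximal_cliques(graph):
    """Enumerate maximal cliques by brute force (small graphs only)."""
    nodes = sorted(graph)
    cliques = []
    for r in range(1, len(nodes) + 1):
        for cand in combinations(nodes, r):
            if all(v in graph[u] for u, v in combinations(cand, 2)):
                cliques.append(set(cand))
    # Keep only cliques not properly contained in a larger clique.
    return [c for c in cliques if not any(c < d for d in cliques)]

# Index terms as vertices; an edge denotes a positive association.
assoc = {
    "neural":   {"network", "learning"},
    "network":  {"neural", "learning", "graph"},
    "learning": {"neural", "network"},
    "graph":    {"network", "clique"},
    "clique":   {"graph"},
}
clusters = maximal_cliques(assoc)
```

Each resulting cluster is a maximal set of pairwise positively associated terms, i.e., one "thesaurus class" in the sense described above.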
Our work goes further in the sense that it relates stable states of a Hopfield-type network to maximal cliques. Another way to say this is that the Hopfield-Style Network serves as a natural implementation vehicle/computa...

Table of contents

  1. Front Cover
  2. Half Title
  3. Title Page
  4. Copyright
  5. Contents
  6. Preface
  7. List of contributors
  8. SECTION I. NEURONS AND SYMBOLS: TOWARD A RECONCILIATION
  9. SECTION II. ARCHITECTURES FOR KNOWLEDGE REPRESENTATION
  10. SECTION III. APPLICATIONS OF CONNECTIONIST REPRESENTATION
  11. SECTION IV. BIOLOGICAL FOUNDATIONS OF KNOWLEDGE
  12. Author Index
  13. Subject Index

Neural Networks for Knowledge Representation and Inference, edited by Daniel S. Levine and Manuel Aparicio IV, is available in PDF and ePUB formats, catalogued under Psychology & Cognitive Psychology & Cognition.