Connectionism and the Mind
Parallel Processing, Dynamics, and Evolution in Networks
Paperback, English, 2001
By William Bechtel and Adele Abrahamsen
609 kr
Product information
- Publication date: 2001-11-08
- Dimensions: 174 x 248 x 32 mm
- Weight: 753 g
- Format: Paperback
- Language: English
- Number of pages: 432
- Edition: 2
- Publisher: John Wiley and Sons Ltd
- ISBN: 9780631207139
About the authors
William Bechtel is Professor of Philosophy at the University of California, San Diego, and Editor of the journal Philosophical Psychology. His publications include Philosophy of Mind (1988), Philosophy of Science (1988), Discovering Complexity (1993, with Robert Richardson), A Companion to Cognitive Science (with George Graham, Blackwell 1999), and Philosophy and the Neurosciences (with Pete Mandik, Jennifer Mundale, and Robert S. Stufflebeam, Blackwell 2001). Adele Abrahamsen is Associate Professor of Psychology and Undergraduate Director of the Philosophy-Neuroscience-Psychology and Linguistics Programs at Washington University in St. Louis. She is the author of Child Language (1977).
Contents
Preface
1 Networks versus Symbol Systems: Two Approaches to Modeling Cognition
1.1 A Revolution in the Making?
1.2 Forerunners of Connectionism: Pandemonium and Perceptrons
1.3 The Allure of Symbol Manipulation
1.3.1 From logic to artificial intelligence
1.3.2 From linguistics to information processing
1.3.3 Using artificial intelligence to simulate human information processing
1.4 The Decline and Re-emergence of Network Models
1.4.1 Problems with perceptrons
1.4.2 Re-emergence: The new connectionism
1.5 New Alliances and Unfinished Business
Notes
Sources and Suggested Readings
2 Connectionist Architectures
2.1 The Flavor of Connectionist Processing: A Simulation of Memory Retrieval
2.1.1 Components of the model
2.1.2 Dynamics of the model
2.1.2.1 Memory retrieval in the Jets and Sharks network
2.1.2.2 The equations
2.1.3 Illustrations of the dynamics of the model
2.1.3.1 Retrieving properties from a name
2.1.3.2 Retrieving a name from other properties
2.1.3.3 Categorization and prototype formation
2.1.3.4 Utilizing regularities
2.2 The Design Features of a Connectionist Architecture
2.2.1 Patterns of connectivity
2.2.1.1 Feedforward networks
2.2.1.2 Interactive networks
2.2.2 Activation rules for units
2.2.2.1 Feedforward networks
2.2.2.2 Interactive networks: Hopfield networks and Boltzmann machines
2.2.2.3 Spreading activation vs. interactive connectionist models
2.2.3 Learning principles
2.2.4 Semantic interpretation of connectionist systems
2.2.4.1 Localist networks
2.2.4.2 Distributed networks
2.3 The Allure of the Connectionist Approach
2.3.1 Neural plausibility
2.3.2 Satisfaction of soft constraints
2.3.3 Graceful degradation
2.3.4 Content-addressable memory
2.3.5 Capacity to learn from experience and generalize
2.4 Challenges Facing Connectionist Networks
2.5 Summary
Notes
Sources and Recommended Readings
3 Learning
3.1 Traditional and Contemporary Approaches to Learning
3.1.1 Empiricism
3.1.2 Rationalism
3.1.3 Contemporary cognitive science
3.2 Connectionist Models of Learning
3.2.1 Learning procedures for two-layer feedforward networks
3.2.1.1 Training and testing a network
3.2.1.2 The Hebbian rule
3.2.1.3 The delta rule
3.2.1.4 Comparing the Hebbian and delta rules
3.2.1.5 Limitations of the delta rule: The XOR problem
3.2.2 The backpropagation learning procedure for multi-layered networks
3.2.2.1 Introducing hidden units and backpropagation learning
3.2.2.2 Using backpropagation to solve the XOR problem
3.2.2.3 Using backpropagation to train a network to pronounce words
3.2.2.4 Some drawbacks of using backpropagation
3.2.3 Boltzmann learning procedures for non-layered networks
3.2.4 Competitive learning
3.2.5 Reinforcement learning
3.3 Some Issues Regarding Learning
3.3.1 Are connectionist systems associationist?
3.3.2 Possible roles for innate knowledge
3.3.2.1 Networks and the rationalist–empiricist continuum
3.3.2.2 Rethinking innateness: Connectionism and emergence
Notes
Sources and Suggested Readings
4 Pattern Recognition and Cognition
4.1 Networks as Pattern Recognition Devices
4.1.1 Pattern recognition in two-layer networks
4.1.2 Pattern recognition in multi-layered networks
4.1.2.1 McClelland and Rumelhart's interactive activation model of word recognition
4.1.2.2 Evaluating the interactive activation model of word recognition
4.1.3 Generalization and similarity
4.2 Extending Pattern Recognition to Higher Cognition
4.2.1 Smolensky's proposal: Reasoning in harmony networks
4.2.2 Margolis's proposal: Cognition as sequential pattern recognition
4.3 Logical Inference as Pattern Recognition
4.3.1 What is it to learn logic?
4.3.2 A network for evaluating validity of arguments
4.3.3 Analyzing how a network evaluates arguments
4.3.4 A network for constructing derivations
4.4 Beyond Pattern Recognition
Notes
Sources and Suggested Readings
5 Are Rules Required to Process Representations?
5.1 Is Language Use Governed by Rules?
5.2 Rumelhart and McClelland's Model of Past-tense Acquisition
5.2.1 A pattern associator with Wickelfeature encodings
5.2.2 Activation function and learning procedure
5.2.3 Overregularization in a simpler network: The rule of 78
5.2.4 Modeling U-shaped learning
5.2.5 Modeling differences between different verb classes
5.3 Pinker and Prince's Arguments for Rules
5.3.1 Overview of the critique of Rumelhart and McClelland's model
5.3.2 Putative linguistic inadequacies
5.3.3 Putative behavioral inadequacies
5.3.4 Do the inadequacies reflect inherent limitations of PDP networks?
5.4 Accounting for the U-shaped Learning Function
5.4.1 The role of input for children
5.4.2 The role of input for networks: The rule of 78 revisited
5.4.3 Plunkett and Marchman's simulations of past-tense acquisition
5.5 Conclusion
Notes
Sources and Suggested Readings
6 Are Syntactically Structured Representations Needed?
6.1 Fodor and Pylyshyn's Critique: The Need for Symbolic Representations with Constituent Structure
6.1.1 The need for compositional syntax and semantics
6.1.2 Connectionist representations lack compositionality
6.1.3 Connectionism as providing mere implementation
6.2 First Connectionist Response: Explicitly Implementing Rules and Representations
6.2.1 Implementing a production system in a network
6.2.2 The variable binding problem
6.2.3 Shastri and Ajjanagadde's connectionist model of variable binding
6.3 Second Connectionist Response: Implementing Functionally Compositional Representations
6.3.1 Functional vs. concatenative compositionality
6.3.2 Developing compressed representations using Pollack's RAAM networks
6.3.3 Functional compositionality of compressed representations
6.3.4 Performing operations on compressed representations
6.4 Third Connectionist Response: Employing Procedural Knowledge with External Symbols
6.4.1 Temporal dependencies in processing language
6.4.2 Achieving short-term memory with simple recurrent networks
6.4.3 Elman's first study: Learning grammatical categories
6.4.4 Elman's second study: Respecting dependency relations
6.4.5 Christiansen's extension: Pushing the limits of SRNs
6.5 Using External Symbols to Provide Exact Symbol Processing
6.6 Clarifying the Standard: Systematicity and Degree of Generalizability
6.7 Conclusion
Notes
Sources and Suggested Readings
7 Simulating Higher Cognition: A Modular Architecture for Processing Scripts
7.1 Overview of Scripts
7.2 Overview of Miikkulainen's DISCERN System
7.3 Modular Connectionist Architectures
7.4 FGREP: An Architecture that Allows the System to Devise Its Own Representations
7.4.1 Why FGREP?
7.4.2 Exploring FGREP in a simple sentence parser
7.4.3 Exploring representations for words in categories
7.4.4 Moving to multiple modules: The DISCERN system
7.5 A Self-organizing Lexicon Using Kohonen Feature Maps
7.5.1 Innovations in lexical design
7.5.2 Using Kohonen feature maps in DISCERN's lexicon
7.5.2.1 Orthography: From high-dimensional vector representations to map units
7.5.2.2 Associative connections: From the orthographic map to the semantic map
7.5.2.3 Semantics: From map unit to high-dimensional vector representations
7.5.2.4 Reversing direction: From semantic to orthographic representations
7.5.3 Advantages of Kohonen feature maps
7.6 Encoding and Decoding Stories as Scripts
7.6.1 Using recurrent FGREP modules in DISCERN
7.6.2 Using the Sentence Parser and Story Parser to encode stories
7.6.3 Using the Story Generator and Sentence Generator to paraphrase stories
7.6.4 Using the Cue Former and Answer Producer to answer questions
7.7 A Connectionist Episodic Memory
7.7.1 Making Kohonen feature maps hierarchical
7.7.2 How role-binding maps become self-organized
7.7.3 How role-binding maps become trace feature maps
7.8 Performance: Paraphrasing Stories and Answering Questions
7.8.1 Training and testing DISCERN
7.8.2 Watching DISCERN paraphrase a story
7.8.3 Watching DISCERN answer questions
7.9 Evaluating DISCERN
7.10 Paths Beyond the First Decade of Connectionism
Notes
Sources and Suggested Readings
8 Connectionism and the Dynamical Approach to Cognition
8.1 Are We on the Road to a Dynamical Revolution?
8.2 Basic Concepts of DST: The Geometry of Change
8.2.1 Trajectories in state space: Predators and prey
8.2.2 Bifurcation diagrams and chaos
8.2.3 Embodied networks as coupled dynamical systems
8.3 Using Dynamical Systems Tools to Analyze Networks
8.3.1 Discovering limit cycles in network controllers for robotic insects
8.3.2 Discovering multiple attractors in network models of reading
8.3.2.1 Modeling the semantic pathway
8.3.2.2 Modeling the phonological pathway
8.3.3 Discovering trajectories in SRNs for sentence processing
8.3.4 Dynamical analyses of learning in networks
8.4 Putting Chaos to Work in Networks
8.4.1 Skarda and Freeman's model of the olfactory bulb
8.4.2 Shifting interpretations of ambiguous displays
8.5 Is Dynamicism a Competitor to Connectionism?
8.5.1 Van Gelder and Port's critique of classic connectionism
8.5.2 Two styles of modeling
8.5.3 Mechanistic versus covering-law explanations
8.5.4 Representations: Who needs them?
8.6 Is Dynamicism Complementary to Connectionism?
8.7 Conclusion
Notes
Sources and Suggested Readings
9 Networks, Robots, and Artificial Life
9.1 Robots and the Genetic Algorithm
9.1.1 The robot as an artificial lifeform
9.1.2 The genetic algorithm for simulated evolution
9.2 Cellular Automata and the Synthetic Strategy
9.2.1 Langton's vision: The synthetic strategy
9.2.2 Emergent structures from simple beings: Cellular automata
9.2.3 Wolfram's four classes of cellular automata
9.2.4 Langton and λ at the edge of chaos
9.3 Evolution and Learning in Food-seekers
9.3.1 Overview and study 1: Evolution without learning
9.3.2 The Baldwin effect and study 2: Evolution with learning
9.4 Evolution and Development in Khepera
9.4.1 Introducing Khepera
9.4.2 The development of phenotypes from genotypes
9.4.3 The evolution of genotypes
9.4.4 Embodied networks: Controlling real robots
9.5 The Computational Neuroethology of Robots
9.6 When Philosophers Encounter Robots
9.6.1 No Cartesian split in embodied agents?
9.6.2 No representations in subsumption architectures?
9.6.3 No intentionality in robots and Chinese rooms?
9.6.4 No armchair when Dennett does philosophy?
9.7 Conclusion
Sources and Suggested Readings
10 Connectionism and the Brain
10.1 Connectionism Meets Cognitive Neuroscience
10.2 Four Connectionist Models of Brain Processes
10.2.1 What/Where streams in visual processing
10.2.2 The role of the hippocampus in memory
10.2.2.1 The basic design and functions of the hippocampal system
10.2.2.2 Spatial navigation in rats
10.2.2.3 Spatial versus declarative memory accounts
10.2.2.4 Declarative memory in humans and monkeys
10.2.3 Simulating dyslexia in network models of reading
10.2.3.1 Double dissociations in dyslexia
10.2.3.2 Modeling deep dyslexia
10.2.3.3 Modeling surface dyslexia
10.2.3.4 Two pathways versus dual routes
10.2.4 The computational power of modular structure in neocortex
10.3 The Neural Implausibility of Many Connectionist Models
10.3.1 Biologically implausible aspects of connectionist networks
10.3.2 How important is neurophysiological plausibility?
10.4 Whither Connectionism?
Notes
Sources and Suggested Readings
Appendix A: Notation
Appendix B: Glossary
Bibliography
Name Index
Subject Index
"Much more than just an update, this is a thorough and exciting re-build of the classic text. Excellent new treatments of modularity, dynamics, artificial life, and cognitive neuroscience locate connectionism at the very heart of contemporary debates. A superb combination of detail, clarity, scope, and enthusiasm." Andy Clark, University of Sussex"Connectionism and the Mind is an extraordinarily comprehensive and thoughtful review of connectionism, with particular emphasis on recent developments. This new edition will be a valuable primer to those new to the field. But there is more: Bechtel and Abrahamsen's trenchant and even-handed analysis of the conceptual issues that are addressed by connectionist models constitute an important original theoretical contribution to cognitive science." Jeff Elman, University of California at San Diego