Dynamics and Computations in Recurrent Neural Networks
Randomly connected recurrent neural networks (RNNs) serve as a parsimonious model of cortical dynamics and can be used to model memory, decision making, and cognition. Machine-learning-based variants of RNNs have recently gained popularity due to their utility in a wide array of applications, including speech recognition, medical outcome prediction, handwriting recognition, and robot control. However, the methods used in these applications are either too far removed from biological RNNs or not biologically plausible. A biologically plausible method could both provide insight into how biological RNNs function and improve performance in artificial systems.
Here I describe my contributions toward biologically plausible algorithms for RNNs. I begin with an extension of balanced network theory, which provides a parsimonious model of neural dynamics. Existing RNN algorithms use simplified neuron models because of the difficulty of training more complex, realistic neuron models. Accounting for the spatially dependent connectivity observed in real cortical networks increases the reliability of the reservoir network, allowing it to work in realistic spiking networks. Finally, I develop a biologically inspired RNN algorithm that solves the problem of unrealistic supervision found in most existing algorithms.
History
Date Modified
- 2019-08-08
Defense Date
- 2019-04-16
CIP Code
- 27.9999
Research Director(s)
- Robert J. Rosenbaum
Committee Members
- Alan Lindsay
- Lizhen Lin
Degree
- Doctor of Philosophy
Degree Level
- Doctoral Dissertation
Alternate Identifier
- 1111684901
Library Record
- 5172328
OCLC Number
- 1111684901
Additional Groups
- Applied and Computational Mathematics and Statistics
Program Name
- Applied and Computational Mathematics and Statistics