Dynamics and Computations in Recurrent Neural Networks

Doctoral Dissertation


Randomly connected recurrent neural networks (RNNs) serve as a parsimonious model of cortical dynamics and can be used to model memory, decision making, and cognition. Machine-learning-based variants of RNNs have recently gained popularity due to their utility in a wide array of applications, including speech recognition, medical outcome prediction, handwriting recognition, and robot control. However, the methods used in these applications are either too far removed from biological RNNs or not biologically plausible. A biologically plausible method could both provide insight into how biological RNNs function and improve performance in artificial systems.

Here I describe my contributions toward biologically plausible algorithms for RNNs. I begin with an extension of balanced network theory, which provides a parsimonious model of neural dynamics. Existing RNN algorithms rely on simplified neuron models because more complex, realistic neuron models are difficult to train. Accounting for the spatially dependent connectivity structure observed in real cortical networks increases the reliability of the reservoir network, allowing it to work in realistic spiking networks. Finally, I develop a biologically inspired RNN algorithm, which resolves the unrealistic supervision requirement found in most existing algorithms.
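As a point of reference for the model class discussed above, the following is a minimal sketch of a randomly connected rate-based RNN, a standard formulation in this literature. All parameter names and values here are illustrative assumptions, not taken from the dissertation itself.

```python
import numpy as np

def simulate_rnn(N=200, g=1.5, T=200, dt=0.1, seed=0):
    """Euler-integrate the standard rate equation dx/dt = -x + g*J@tanh(x).

    J is a random coupling matrix with variance 1/N, so the gain g sets
    the dynamical regime: roughly, g < 1 yields decaying activity, while
    g > 1 yields self-sustained, irregular activity.
    """
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
    x = rng.normal(0.0, 1.0, size=N)  # random initial state
    traj = np.empty((T, N))
    for t in range(T):
        x = x + dt * (-x + g * J @ np.tanh(x))
        traj[t] = x
    return traj

traj = simulate_rnn()
print(traj.shape)  # (200, 200): T time steps by N units
```

The saturating `tanh` nonlinearity keeps the dynamics bounded, which is why this simple sketch remains numerically stable even in the self-sustained regime.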


Attribute Name	Value
Author Ryan Pyle
Contributor Robert J. Rosenbaum, Research Director
Contributor Alan Lindsay, Committee Member
Contributor Lizhen Lin, Committee Member
Degree Level Doctoral Dissertation
Degree Discipline Applied and Computational Mathematics and Statistics
Degree Name Doctor of Philosophy
Banner Code

Defense Date 2019-04-16

Submission Date 2019-05-14
Record Visibility Public
Content License
Departments and Units
Catalog Record