Feed-forward convolutional neural networks (CNNs) are currently state-of-the-art for object classification tasks such as ImageNet. Further, they are quantitatively accurate models of temporally-averaged responses of neurons in the primate brain’s visual system. However, biological visual systems have two ubiquitous architectural features not shared with typical CNNs: local recurrence within cortical areas, and long-range feedback from downstream areas to upstream areas. Here we explored the role of recurrence in improving classification performance. We found that standard forms of recurrence (vanilla RNNs and LSTMs) do not perform well within deep CNNs on the ImageNet task. In contrast, novel cells that incorporated two structural features, bypassing and gating, were able to boost task accuracy substantially. We extended these design principles in an automated search over thousands of model architectures, which identified novel local recurrent cells and long-range feedback connections useful for object recognition. Moreover, these task-optimized ConvRNNs matched the dynamics of neural activity in the primate visual system better than feedforward networks, suggesting a role for the brain’s recurrent connections in performing difficult visual behaviors.

Poster

We used TensorFlow 1.13.1. To load the model, you will need the following three files:

1. Meta File
2. Index File
3. Data File
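These three files form a standard TF 1.x checkpoint. As a minimal sketch of how such a checkpoint can be restored (the prefix `model.ckpt` below is a placeholder; use the actual prefix shared by the three downloaded files):

```python
import tensorflow as tf  # written against TF 1.13.1

# The three checkpoint files share a common prefix, e.g.:
#   model.ckpt.meta, model.ckpt.index, model.ckpt.data-00000-of-00001
ckpt_prefix = 'model.ckpt'  # placeholder; substitute the downloaded files' prefix

with tf.Session() as sess:
    # Rebuild the graph from the meta file, then restore the weights
    # from the index/data files.
    saver = tf.train.import_meta_graph(ckpt_prefix + '.meta')
    saver.restore(sess, ckpt_prefix)
```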

All of the model code can be found in the Code Directory. In particular:

1. Model Function: the model function itself.
2. Model Parameter Loader: loads the model architecture parameters identified by the Hyperopt TPE search.
3. RNN Cell: the RNN cell graph for loading the above checkpoint.
4. Final RNN Cell: a cleaner version of the cell. It will not work with the above checkpoint, but can be useful for other projects when training from scratch.
5. ImageNet Preproc: the image preprocessing used on ImageNet, in case it is useful.
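The ImageNet Preproc file above is the authoritative reference for preprocessing. Purely as an illustration of what TF 1.x ImageNet-style evaluation preprocessing typically looks like (the 0.875 central-crop fraction and 224×224 output size here are assumptions, not necessarily what this repository uses):

```python
import tensorflow as tf

def preprocess_image(image_bytes, image_size=224):
    """Generic ImageNet-style eval preprocessing: decode, center crop, resize.

    This is an illustrative sketch, NOT the repository's ImageNet Preproc code;
    the crop fraction and image_size are assumed values.
    """
    image = tf.image.decode_jpeg(image_bytes, channels=3)
    image = tf.image.convert_image_dtype(image, tf.float32)  # scale to [0, 1]
    image = tf.image.central_crop(image, central_fraction=0.875)
    image = tf.image.resize_images(image, [image_size, image_size])
    return image
```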