
Benchmarking the Performance of Neuromorphic and Spiking Neural Simulators

Shruti R. Kulkarni, Maryam Parsa, J. Parker Mitchell and Catherine D. Schuman

March, 2021

Neurocomputing, Volume 447, March, 2021.

https://doi.org/10.1016/j.neucom.2021.03.028

PDF not available yet, or is only available from the conference/journal publisher.

Abstract

Software simulators play a critical role in the development of new algorithms and system architectures in any field of engineering. Neuromorphic computing, which has shown potential in building brain-inspired energy-efficient hardware, suffers a slow-down in the development cycle due to a lack of flexible and easy-to-use simulators of either neuromorphic hardware itself or of spiking neural networks (SNNs), the type of neural network computation executed on most neuromorphic systems. While there are several openly available neuromorphic or SNN simulation packages developed by a variety of research groups, they have mostly targeted computational neuroscience simulations, and only a few have targeted small-scale machine learning tasks with SNNs. Evaluations or comparisons of these simulators have often targeted computational neuroscience-style workloads. In this work, we seek to evaluate the performance of several publicly available SNN simulators with respect to non-computational neuroscience workloads, in terms of speed, flexibility, and scalability. We evaluate the performance of the NEST, Brian2, Brian2GeNN, BindsNET and Nengo packages under a common front-end neuromorphic framework. Our evaluation tasks include a variety of different network architectures and workload types to mimic the computation common in different algorithms, including feed-forward network inference, genetic algorithms, and reservoir computing. We also study the scalability of each of these simulators when running on different computing hardware, from single core CPU workstations to multi-node supercomputers. Our results show that the BindsNET simulator has the best speed and scalability for most of the SNN workloads (sparse, dense, and layered SNN architectures) on a single core CPU. However, when comparing the simulators leveraging the GPU capabilities, Brian2GeNN outperforms the others for these workloads in terms of scalability.
NEST performs the best for small sparse networks and is also the most flexible simulator in terms of reconfiguration capability. NEST shows a speedup of at least 2x compared to the other packages when running evolutionary algorithms for SNNs. The multi-node and multi-thread capabilities of NEST show at least 2x speedup compared to the rest of the simulators (single core CPU or GPU based simulators) for large and sparse networks. We conclude our work by providing a set of recommendations on the suitability of employing these simulators for different tasks and scales of operations. We also present the characteristics for a future generic ideal SNN simulator for different neuromorphic computing workloads.

Citation Information

Text


author      S. Kulkarni and M. Parsa and J. P. Mitchell and C. D. Schuman
title       Benchmarking the Performance of Neuromorphic and Spiking Neural Simulators
journal     Neurocomputing
publisher   Elsevier
volume      447
pages       145-160
year        2021
where       https://doi.org/10.1016/j.neucom.2021.03.028

Bibtex


@ARTICLE{kpm:21:bpn,
    author = "S. Kulkarni and M. Parsa and J. P. Mitchell and C. D. Schuman",
    title = "Benchmarking the Performance of Neuromorphic and Spiking Neural Simulators",
    journal = "Neurocomputing",
    publisher = "Elsevier",
    volume = "447",
    pages = "145-160",
    year = "2021",
    url = "https://doi.org/10.1016/j.neucom.2021.03.028"
}