Low Power Hardware-In-The-Loop Neuromorphic Training Accelerator

J. P. Mitchell and C. D. Schuman

July 2021

ICONS: International Conference on Neuromorphic Systems

https://dl.acm.org/doi/10.1145/3477145.3477150


Abstract

The training process for spiking neural networks can be very computationally intensive. Approaches such as evolutionary algorithms may require evaluating thousands or millions of candidate solutions. In this work, we propose using neuromorphic cores implemented on a Xilinx Zynq system on chip to accelerate and improve the energy efficiency of the evaluation step of an evolutionary training approach. We demonstrate that this approach can significantly reduce the energy required to evolve a network, with some cases showing greater than 10 times improvement over a CPU-only system.
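The structure described above can be sketched as a standard evolutionary loop in which only the fitness-evaluation step is delegated to hardware. This is a minimal illustrative sketch, not the paper's implementation: the genome encoding, fitness function, and `evaluate_on_hardware` interface are all assumptions, with a toy fitness function standing in for dispatching a candidate spiking network to a neuromorphic core.

```python
import random

def evaluate_on_hardware(genome):
    """Stand-in for scoring a candidate network on a neuromorphic core.

    In the paper's setup, this evaluation step runs on neuromorphic cores
    implemented on a Xilinx Zynq SoC; here a toy fitness function
    (maximize the sum of the genome) substitutes for it.
    """
    return sum(genome)

def mutate(genome, rng, rate=0.1):
    # Perturb each gene with probability `rate` using Gaussian noise.
    return [g + rng.gauss(0, 1) if rng.random() < rate else g
            for g in genome]

def evolve(pop_size=20, genome_len=8, generations=50, seed=0):
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluation: the step the accelerator targets, since thousands
        # of candidates must be scored per run.
        scored = sorted(population, key=evaluate_on_hardware, reverse=True)
        # Selection and variation: keep the best half, refill with
        # mutated copies of elite members.
        elite = scored[: pop_size // 2]
        population = elite + [mutate(list(rng.choice(elite)), rng)
                              for _ in range(pop_size - len(elite))]
    return max(population, key=evaluate_on_hardware)

best = evolve()
```

Because each candidate evaluation is independent, this is the step that benefits most from offloading to parallel low-power hardware while the CPU handles selection and mutation.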

Citation Information

Text


author       J. P. Mitchell and C. D. Schuman
title        Low Power Hardware-In-The-Loop Neuromorphic Training Accelerator
booktitle    International Conference on Neuromorphic Systems (ICONS)
publisher    ACM
pages        1-8
year         2021
url          https://doi.org/10.1145/3477145.3477150
doi          10.1145/3477145.3477150

Bibtex


@INPROCEEDINGS{ms:21:lph,
    author = "J. P. Mitchell and C. D. Schuman",
    title = "Low Power Hardware-In-The-Loop Neuromorphic Training Accelerator",
    booktitle = "International Conference on Neuromorphic Systems (ICONS)",
    publisher = "ACM",
    pages = "1-8",
    year = "2021",
    url = "https://doi.org/10.1145/3477145.3477150",
    doi = "10.1145/3477145.3477150"
}