Parallel Evolutionary Optimization for Neuromorphic Network Training
Catherine D. Schuman, Adam Disney, Susheela P. Singh, Grant Bruer, J. Parker Mitchell, Aleksander Klibisz and James S. Plank
November, 2016
Machine Learning in HPC Environments, Supercomputing 2016
http://ornlcda.github.io/MLHPC2016/program.html
Abstract
One of the key impediments to the success of current neuromorphic computing architectures is the issue of how best to program them. Evolutionary optimization (EO) is one promising programming technique; in particular, its wide applicability makes it especially attractive for neuromorphic architectures, which can have many different characteristics. In this paper, we explore different facets of EO on a spiking neuromorphic computing model called DANNA. We focus on the performance of EO in the design of our DANNA simulator, and on how to structure EO on both multicore and massively parallel computing systems. We evaluate how our parallel methods impact the performance of EO on Titan, the U.S.'s largest open science supercomputer, and BOB, a Beowulf-style cluster of Raspberry Pis. We also focus on how to improve the EO by evaluating commonality in higher-performing neural networks, and present the results of a study that evaluates the EO performed by Titan.
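The parallel evolutionary optimization described in the abstract follows a generate-evaluate-select loop in which the expensive step, fitness evaluation, is farmed out to parallel workers. The sketch below is a minimal, generic illustration of that structure in Python, not the paper's implementation: all names (random_network, evaluate_fitness, mutate, crossover) are hypothetical stand-ins, and the placeholder fitness function takes the place of running a network on the DANNA simulator and scoring its behavior on a task.

import random
from multiprocessing import Pool

POP_SIZE = 64
GENERATIONS = 100

def random_network():
    # Hypothetical stand-in genome: a flat list of synaptic weights.
    return [random.uniform(-1.0, 1.0) for _ in range(32)]

def evaluate_fitness(network):
    # Placeholder fitness. In the paper's setting this step would run
    # the DANNA simulator on an application and score the spiking
    # network's behavior; here it just rewards small weights.
    return -sum(w * w for w in network)

def mutate(network, rate=0.1):
    # Perturb a random subset of weights with Gaussian noise.
    return [w + random.gauss(0.0, 0.1) if random.random() < rate else w
            for w in network]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve():
    population = [random_network() for _ in range(POP_SIZE)]
    with Pool() as pool:  # fitness evaluations run in parallel workers
        for _ in range(GENERATIONS):
            scores = pool.map(evaluate_fitness, population)
            ranked = [n for _, n in sorted(zip(scores, population),
                                           key=lambda p: p[0],
                                           reverse=True)]
            parents = ranked[:POP_SIZE // 4]  # truncation selection
            population = parents + [
                mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))
            ]
    return ranked[0]  # best network from the last evaluated generation

if __name__ == "__main__":
    best = evolve()

The same master/worker pattern scales from a multicore pool to a cluster by replacing the local Pool with distributed workers, which is the structural question the paper studies on Titan and on the Raspberry Pi cluster BOB.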
Citation Information
Text
C. D. Schuman, A. Disney, S. P. Singh, G. Bruer, J. P. Mitchell, A. Klibisz and J. S. Plank, "Parallel Evolutionary Optimization for Neuromorphic Network Training," Machine Learning in HPC Environments, Supercomputing 2016, Salt Lake City, November 2016.
Bibtex
@INPROCEEDINGS{sds:16:peo,
  author    = "C. D. Schuman and A. Disney and S. P. Singh and G. Bruer and J. P. Mitchell and A. Klibisz and J. S. Plank",
  title     = "Parallel Evolutionary Optimization for Neuromorphic Network Training",
  booktitle = "Machine Learning in HPC Environments, Supercomputing 2016",
  year      = "2016",
  month     = "November",
  address   = "Salt Lake City"
}