Research Projects

Please see the Overview for a high-level description of our research in neuromorphic computing. On this page, we briefly describe the major projects on which our team is actively working.

Memristive DANNA (mrDANNA)

This research project is led by Garrett Rose.

Memristive devices are a relatively new analog technology that shows promise as the backbone of Dynamic Learning architectures. The "mrDANNA" project aims to exploit this technology to build dynamic arrays of neuromorphic computing elements. Features of the research project include:

  • Circuit design for neuromorphic elements and their connections using memristive technology.
  • Architectural design for embedding neuromorphic elements within computational systems for programming and application support.
  • Software simulations of memristive neuromorphic arrays (a minimal device model is sketched after this list).
  • Evolutionary optimization of memristive neuromorphic arrays.
  • Application development.
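
To make the simulation work concrete, below is a minimal sketch of a memristive synapse of the sort such a simulator must model. The linear pulse-update rule and all parameter values here are illustrative assumptions, not the project's actual device model.

    # A minimal memristive synapse model for array-level simulation.
    # The linear pulse-update rule and the parameter values below are
    # illustrative assumptions, not mrDANNA's actual device model.

    class MemristiveSynapse:
        def __init__(self, g_min=1e-6, g_max=1e-4, g_init=5e-5, delta_g=1e-6):
            self.g_min = g_min      # minimum conductance ("off" state)
            self.g_max = g_max      # maximum conductance ("on" state)
            self.g = g_init         # current conductance encodes the weight
            self.delta_g = delta_g  # conductance change per programming pulse

        def potentiate(self):
            # One potentiation pulse: raise conductance, clipped at g_max.
            self.g = min(self.g + self.delta_g, self.g_max)

        def depress(self):
            # One depression pulse: lower conductance, clipped at g_min.
            self.g = max(self.g - self.delta_g, self.g_min)

        def current(self, voltage):
            # Ohmic read: current contributed to the post-synaptic neuron.
            return self.g * voltage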

DANNA in VLSI and Continued FPGA Research

This research project is led by Mark Dean and has three major thrusts:

  • Continued development of the current FPGA implementations of DANNA. This includes the hardware and software required to support these implementations in real installations. It also includes research into the composition of multiple FPGA boards to build larger arrays.
  • Further model development of DANNA. DANNA is an evolving research model, and many facets of its array elements require exploration, such as central pattern generators, different kinds of long-term potentiation and depression (a simple spike-timing rule is sketched after this list), monitoring support, debugging support, and evolutionary optimization support.
  • VLSI design. Although the first VLSI design of DANNA has been completed (please see Chris Daffron's 2015 Master's thesis, /publications/2015-08-01-danna-a-neuromorphic-computing-vlsi-chip), there is much work to be done to bring the VLSI project to fruition.
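
As one concrete example of the model-development thrust, here is a hedged sketch of a pair-based spike-timing rule for long-term potentiation and depression. The exponential form and its constants are illustrative assumptions, not DANNA's actual mechanism.

    import math

    # Pair-based spike-timing rule: an illustrative sketch, not DANNA's
    # actual LTP/LTD mechanism.
    def stdp_weight_change(t_pre, t_post, a_plus=0.10, a_minus=0.12, tau=20.0):
        dt = t_post - t_pre
        if dt >= 0:
            # Pre-synaptic spike precedes post-synaptic spike: potentiate (LTP).
            return a_plus * math.exp(-dt / tau)
        else:
            # Post-synaptic spike precedes pre-synaptic spike: depress (LTD).
            return -a_minus * math.exp(dt / tau)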

Software Support (System Software, Simulation, Software Libraries, Visualization)

This research project is led by James Plank and Catherine D. Schuman.

Hardware implementations require detailed software support, and they benefit greatly from cycle-level simulation. At a higher level, both the hardware and the various simulators require common software libraries, so that applications do not need to be hand-tooled for each. Finally, systems as complex as NIDA, DANNA, and eventually mrDANNA require the ability to "see" what they are doing.
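
As a hedged illustration of what such a common library might provide, here is a sketch of a uniform interface in Python. The class and method names are hypothetical stand-ins, not the actual API of our software.

    from abc import ABC, abstractmethod

    class NeuromorphicBackend(ABC):
        # Uniform facade over a hardware device or a cycle-level simulator.
        # All names here are hypothetical placeholders.

        @abstractmethod
        def load_network(self, network):
            """Program the array (or simulator) with a network description."""

        @abstractmethod
        def apply_spikes(self, spikes):
            """Queue input spikes, e.g. as (neuron_id, time, value) tuples."""

        @abstractmethod
        def run(self, ticks):
            """Advance the array by the given number of clock ticks."""

        @abstractmethod
        def output_spikes(self):
            """Return the spikes observed on the output neurons."""

An application written against such an interface could then target a DANNA FPGA board, the DANNA simulator, or the NIDA simulator by swapping in the appropriate concrete implementation.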

All of these projects compose the Software Support part of the research program, and all are under active development. Early results on visualization for NIDA were published in a 2014 FOCI paper (/publications/2014-12-01-visual-analytics-for-neuroscience-inspired-dynamic-architectures). The software support for the first FPGA version of DANNA is described in a 2016 IJCNN paper (/publications/2016-07-01-an-application-development-platform-for-neuromorphic-computing).

Application Development

Please see the Overview for a description of current applications. This is an extremely important project, and we are always looking for help and collaboration in this regard.

Research in Evolutionary Optimization, including Parallel and Distributed Optimizations

This research project is led by Catherine D. Schuman and James Plank.

This research project explores evolutionary optimization for Dynamic Learning architectures in general. All three models (NIDA, DANNA, mrDANNA) share common features and parameters that allow us to implement a general optimization framework, which may then be tailored to each model. Other facets of optimization, such as the simultaneous evaluation of multiple fitness functions, can further improve its effectiveness. Please see the IJCNN paper on evolutionary optimization for an example.
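
To make the framework idea concrete, here is a minimal, model-agnostic sketch of such an evolutionary loop in Python. The crossover and mutate operators are hypothetical stand-ins for the model-specific (NIDA, DANNA, or mrDANNA) pieces, and combining multiple fitness functions by summation is an illustrative simplification.

    import random

    def evolve(population, fitness_fns, generations=100, elite=2, mutation_rate=0.1):
        # Combine multiple fitness functions by simple summation; real
        # multi-objective schemes (e.g., Pareto ranking) are more involved.
        def score(net):
            return sum(f(net) for f in fitness_fns)

        for _ in range(generations):
            ranked = sorted(population, key=score, reverse=True)
            next_gen = ranked[:elite]          # elitism: keep the best as-is
            while len(next_gen) < len(population):
                # Select parents from the top half, then apply the
                # model-specific reproduction operators.
                p1, p2 = random.sample(ranked[:len(ranked) // 2], 2)
                child = p1.crossover(p2)       # hypothetical operator
                if random.random() < mutation_rate:
                    child = child.mutate()     # hypothetical operator
                next_gen.append(child)
            population = next_gen
        # Return the final population, best-first.
        return sorted(population, key=score, reverse=True)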

There are many ways to scale evolutionary optimization to large-scale computing systems composed of multiple heterogeneous computing cores. These range from straightforward task-based parallelism to partitioning the processors so that they evolve populations in isolation and then coalesce those populations hierarchically. We are exploring these approaches using computing resources at Oak Ridge National Laboratory.
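
The isolated-population strategy is commonly called the island model. The sketch below reuses the evolve() loop above to show the idea; the multiprocessing pool stands in for whatever scheduler a large heterogeneous machine actually provides, and the migration policy is an illustrative assumption.

    from multiprocessing import Pool

    def run_island(args):
        # Each island evolves in isolation for a burst of generations,
        # using the evolve() loop sketched earlier (returns best-first).
        population, fitness_fns = args
        return evolve(population, fitness_fns, generations=50)

    def island_model(islands, fitness_fns, rounds=4, workers=8):
        # Isolated evolutionary bursts punctuated by migration.
        with Pool(workers) as pool:
            for _ in range(rounds):
                islands = pool.map(run_island,
                                   [(pop, fitness_fns) for pop in islands])
                # Migration: each island's champion replaces its neighbor's
                # worst member, coalescing the isolated populations.
                for i, pop in enumerate(islands):
                    pop[-1] = islands[(i - 1) % len(islands)][0]
        return islands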

Finally, how the initial populations are chosen has a profound impact on the success of evolutionary optimization. We are extracting common sub-networks from our various applications so that they may be used as building blocks for initial populations, thereby honing the evolutionary optimization of larger networks, whose search spaces are too large for an unhoned search.
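
A hedged sketch of this seeding strategy follows. The compose() and random_network() helpers are hypothetical: one splices building-block sub-networks into a fresh individual, and the other generates an unstructured random network.

    import random

    def seeded_population(size, building_blocks, seed_fraction=0.5):
        # Build an initial population in which some individuals start from
        # sub-networks mined from earlier applications.
        population = []
        for i in range(size):
            if i < size * seed_fraction:
                # Splice one or more proven sub-networks into a new individual.
                blocks = random.sample(building_blocks,
                                       k=random.randint(1, len(building_blocks)))
                population.append(compose(blocks))    # hypothetical helper
            else:
                # Keep part of the population unstructured to preserve diversity.
                population.append(random_network())   # hypothetical helper
        return population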