Demonstrating hybrid learning in a flexible neuromorphic hardware system

Bibliographic Details
Main Authors: Friedmann, Simon (Author), Schemmel, Johannes (Author), Grübl, Andreas (Author), Hartel, Andreas (Author), Hock, Matthias (Author), Meier, Karlheinz (Author)
Format: Article (Journal)
Language: English
Published: January 26, 2017
In: IEEE Transactions on Biomedical Circuits and Systems
Year: 2017, Volume: 11, Issue: 1, Pages: 128-142
ISSN: 1940-9990
DOI: 10.1109/TBCAS.2016.2579164
Online Access: Publisher, free of charge, full text: https://dx.doi.org/10.1109/TBCAS.2016.2579164
Publisher, free of charge, full text: https://ieeexplore.ieee.org/document/7563782/authors
Author Notes: Simon Friedmann, Johannes Schemmel, Member, IEEE, Andreas Grübl, Andreas Hartel, Matthias Hock, and Karlheinz Meier
Description
Summary: We present results from a new approach to learning and plasticity in neuromorphic hardware systems: to enable flexibility in the implementable learning mechanisms while keeping the high efficiency associated with neuromorphic implementations, we combine a general-purpose processor with full-custom analog elements. This processor operates in parallel with a fully parallel neuromorphic system consisting of an array of synapses connected to analog, continuous-time neuron circuits. Novel analog correlation sensor circuits process spike events for each synapse in parallel and in real time. The processor uses this pre-processing to compute new weights, possibly incorporating additional information, according to its program. Learning rules can therefore, to a certain extent, be defined in software, giving a large degree of flexibility. Synapses realize correlation detection geared towards Spike-Timing Dependent Plasticity (STDP) as the central computational primitive in the analog domain. Operating at a speed-up factor of 1000 compared to the biological time-scale, we measure time constants from tens to hundreds of microseconds. We analyze variability across multiple chips and demonstrate learning using a multiplicative STDP rule. We conclude that the presented approach will enable flexible and efficient learning as a platform for neuroscientific research and technological applications.
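To make the division of labor described in the summary concrete, the following Python sketch shows one plausible way an embedded processor could turn per-synapse correlation readings (accumulated by analog sensors) into new weights with a multiplicative STDP rule. This is a minimal illustration only; the function, variable names, constants, and the exact form of the update are assumptions and are not taken from the article or its hardware.

# Illustrative sketch of a multiplicative STDP update driven by
# accumulated correlation measurements. All names and constants are
# assumed for illustration, not taken from the paper.

import numpy as np

def multiplicative_stdp_update(weights, acc_causal, acc_anticausal,
                               eta=0.05, w_max=1.0):
    """Compute new synaptic weights from accumulated correlation readings.

    weights        -- current weights, shape (n_synapses,), in [0, w_max]
    acc_causal     -- accumulated pre-before-post correlation per synapse
    acc_anticausal -- accumulated post-before-pre correlation per synapse
    eta            -- learning rate (assumed value)
    w_max          -- upper weight bound (assumed value)
    """
    # Multiplicative rule: potentiation scales with the remaining headroom
    # (w_max - w), depression scales with the current weight w.
    dw = eta * ((w_max - weights) * acc_causal - weights * acc_anticausal)
    return np.clip(weights + dw, 0.0, w_max)

# Minimal usage example with made-up correlation readings.
w = np.array([0.2, 0.5, 0.8])
causal = np.array([0.6, 0.1, 0.0])
anticausal = np.array([0.0, 0.3, 0.7])
print(multiplicative_stdp_update(w, causal, anticausal))

In such a scheme, bounding potentiation by the remaining headroom and depression by the current weight keeps weights within hardware limits without an explicit clipping-only rule; the actual rule used in the article may differ.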
Item Description: Viewed on 08.05.2018
Physical Description: Online Resource