Region Hovedstaden - en del af Københavns Universitetshospital
Published

Computing Generalized Matrix Inverse on Spiking Neural Substrate

Publication: Contribution to journal › Journal article › Research › peer review


Author

Shukla, Rohit ; Khoram, Soroosh ; Jorgensen, Erik ; Li, Jing ; Lipasti, Mikko ; Wright, Stephen. / Computing Generalized Matrix Inverse on Spiking Neural Substrate. In: Frontiers in Neuroscience. 2018 ; Vol. 12. p. 115.

Bibtex

@article{a9ff7dca359a473ca35aae6aae673a7a,
title = "Computing Generalized Matrix Inverse on Spiking Neural Substrate",
abstract = "Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines.",
author = "Rohit Shukla and Soroosh Khoram and Erik Jorgensen and Jing Li and Mikko Lipasti and Stephen Wright",
year = "2018",
doi = "10.3389/fnins.2018.00115",
language = "English",
volume = "12",
pages = "115",
journal = "Frontiers in Neuroscience",
issn = "1662-4548",
publisher = "Frontiers Research Foundation",
}
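The abstract above describes computing the Moore-Penrose generalized inverse with a recurrent Hopfield network on spiking hardware. The paper's spiking formulation is not reproduced here, but the quantity it computes can be sketched in ordinary floating point with a classical iterative scheme (Ben-Israel–Cohen iteration); the function name and iteration count below are illustrative, not from the paper.

```python
import numpy as np

def pinv_iterative(A, iters=50):
    """Approximate the Moore-Penrose pseudoinverse A^+ of A (m x n).

    Ben-Israel-Cohen iteration: X_{k+1} = X_k (2I - A X_k).
    With X_0 = alpha * A^T and 0 < alpha < 2 / sigma_max(A)^2,
    the iterates converge quadratically to A^+.
    """
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step: 1/sigma_max^2
    X = alpha * A.T                          # n x m initial guess
    I = np.eye(A.shape[0])                   # m x m identity
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X
```

For a well-conditioned rectangular matrix this agrees closely with `np.linalg.pinv`; the paper's contribution is showing how such a computation survives the severe range and precision limits of a substrate like TrueNorth, which this floating-point sketch does not model.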

RIS

TY - JOUR

T1 - Computing Generalized Matrix Inverse on Spiking Neural Substrate

AU - Shukla, Rohit

AU - Khoram, Soroosh

AU - Jorgensen, Erik

AU - Li, Jing

AU - Lipasti, Mikko

AU - Wright, Stephen

PY - 2018

Y1 - 2018

N2 - Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines.

AB - Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines.

U2 - 10.3389/fnins.2018.00115

DO - 10.3389/fnins.2018.00115

M3 - Journal article

VL - 12

SP - 115

JO - Frontiers in Neuroscience

JF - Frontiers in Neuroscience

SN - 1662-4548

ER -

ID: 56431535