A deep learning framework for neuroscience

Blake A Richards, Timothy P Lillicrap, Philippe Beaudoin, Yoshua Bengio, Rafal Bogacz, Amelia Christensen, Claudia Clopath, Rui Ponte Costa, Archy de Berker, Surya Ganguli, Colleen J Gillon, Danijar Hafner, Adam Kepecs, Nikolaus Kriegeskorte, Peter Latham, Grace W Lindsay, Kenneth D Miller, Richard Naud, Christopher C Pack, Panayiota Poirazi, Pieter Roelfsema, João Sacramento, Andrew Saxe, Benjamin Scellier, Anna C Schapiro, Walter Senn, Greg Wayne, Daniel Yamins, Friedemann Zenke, Joel Zylberberg, Denis Therien, Konrad P Kording

Research output: Contribution to journal/periodical › Article › Academic › peer review


Abstract

Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.
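The three designed components named in the abstract can be made concrete with a minimal sketch (not from the paper, and deliberately toy-scale): the architecture is a single-hidden-layer tanh network, the objective function is mean squared error on a toy regression task, and the learning rule is plain gradient descent with manually derived gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Architecture: one hidden layer of 8 tanh units.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def forward(x):
    h = np.tanh(x @ W1)   # hidden activations
    return h @ W2, h      # network output and hidden state

# 2) Objective function: mean squared error on a toy regression task
#    (predict the product of the two input coordinates).
X = rng.normal(size=(64, 2))
y = X[:, :1] * X[:, 1:]

def loss(pred):
    return np.mean((pred - y) ** 2)

# 3) Learning rule: gradient descent via backpropagated gradients.
lr = 0.1
losses = []
for _ in range(200):
    pred, h = forward(X)
    err = 2 * (pred - y) / len(X)             # dL/dpred
    gW2 = h.T @ err                           # gradient for output weights
    gW1 = X.T @ ((err @ W2.T) * (1 - h**2))   # backprop through tanh
    W2 -= lr * gW2
    W1 -= lr * gW1
    losses.append(loss(pred))
```

Swapping out any one component (e.g. replacing backpropagation with a biologically motivated learning rule, or MSE with a different objective) changes the resulting system, which is the sense in which the paper treats the three components as the designer's levers.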

Original language: English
Pages (from-to): 1761-1770
Number of pages: 10
Journal: Nature Neuroscience
Volume: 22
Issue number: 11
DOIs
Status: Published - 2019
