
NICE Workshop

Agenda

Monday
February 24
8:00-8:10 Welcome/Logistics
8:10-8:30 Welcome remarks/Julie Phillips
8:30-9:00 Jacob Vogelstein
9:00-9:20 Karlheinz Meier (Strategy paper overview)
9:20-9:30 Hardware and Systems (Okandan)
9:30-9:40 Operation and Tools
9:40-9:50 Architectures/ Cogn. Systems (Krichmar)
9:50-10:00 Benchmarks and Applications
10:00-10:15 break
10:15-11:00 Jeff Hawkins
11:00-11:30 Thomas Sterling
11:30-12:00 Todd Hylton
12:00-1:30 lunch
1:30-2:00 Ajay Divakaran
2:00-2:30 Jeff Krichmar
2:30-3:00 Eric Ryu
3:00-3:15 break
3:15-3:45 Stephen Larson
3:45-4:15 Itamar Arel/Jeremy Holleman
4:15-6:00 Round-table/Open discussion
Tuesday
February 25
8:20-8:30 Logistics/Day 1 review
8:30-9:00 George Bourianoff
9:00-9:30 David Arathorn
9:30-10:00 Stan Williams
10:00-10:15 break
10:15-10:45 Rick Granger
10:45-11:15 George Dyson
11:15-11:45 Anthony Lewis
11:45-12:00 Lloyd Watts
12:00-1:00 lunch
1:00-1:30 Jose Principe
1:30-2:00 Dhireesha Kudithipudi
2:00-2:30 Nima Mesgarani
2:30-3:00 Kevin Gomez
3:00-3:15 break
3:15-3:45 John George
3:45-4:15 Dan Hammerstrom
4:15-5:30 Round-table/Open discussion
Wednesday
February 26
8:20-8:30 Logistics/Day 2 review
8:30-9:00 Karlheinz Meier
9:00-9:30 Michael Pfeiffer
9:30-10:00 Murat Okandan
10:00-10:15 break
10:15-10:45 Mitchell Nahmias/Alex Tait
10:45-11:15 Matt Marinella
11:15-11:45 Travis DeWolf
11:45-12:00 Wrap-up/Next steps/Workshop adjourns
12:00-1:00 lunch
1:00-5:00 Neuromorphic Computing Strategy Paper - working session

Confirmed speakers

 


Anthony Lewis

 

Qualcomm, Inc.

Zeroth Processor and Embedded Cognition

 

Abstract: 


What if a computer could perceive the world more like a human being? What if a mobile device could become our cognitive companion, sharing our experience of the world and hence able to help us in a more sensible way? What if these computers could learn to anticipate our needs and manage them automatically? Perhaps the resulting machines could make our lives simpler and more rewarding.

To enable this vision, we are developing Embedded Cognition (EC). EC has four components: (1) Embedded hardware: for commercial applications this will leverage digital design methodology. (2) Always-on sensing: allows the phone-companion to continuously model the joint world of the user and the phone. Always-on sensing implies very low-power sensing and processing, and may highlight the need for efficient neuromorphic technology. (3) Online learning: ideally the companion will continuously adapt and be capable of one-shot learning and of intelligent handling of forgetting. (4) Cognitive algorithms: algorithms that go beyond the classification and regression used in machine learning.
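
As a toy illustration of component (3), here is a minimal Python sketch of one-shot learning with a bounded memory that forgets its stalest entries. All names and parameters are invented for illustration and say nothing about Qualcomm's actual Zeroth implementation.

    # Hypothetical sketch of one-shot learning with graceful forgetting;
    # not Qualcomm's Zeroth API -- just an illustration of component (3).
    import numpy as np

    class OneShotMemory:
        def __init__(self, max_items=100):
            self.labels, self.protos, self.ages = [], [], []
            self.max_items = max_items

        def learn(self, x, label):
            # One-shot: a single example becomes a stored prototype.
            if len(self.protos) >= self.max_items:
                oldest = int(np.argmax(self.ages))   # forget the stalest entry
                for lst in (self.labels, self.protos, self.ages):
                    lst.pop(oldest)
            self.labels.append(label)
            self.protos.append(np.asarray(x, float))
            self.ages.append(0)

        def recall(self, x):
            # Nearest-prototype classification; a recalled entry is refreshed.
            d = [np.linalg.norm(x - p) for p in self.protos]
            i = int(np.argmin(d))
            self.ages = [a + 1 for a in self.ages]
            self.ages[i] = 0
            return self.labels[i]

    mem = OneShotMemory()
    mem.learn([0.9, 0.1], "coffee shop")
    mem.learn([0.1, 0.8], "office")
    print(mem.recall([0.85, 0.2]))   # -> "coffee shop"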

Travis DeWolf

 

University of Waterloo

Title: Methods for scaling neural computation

 

Abstract: Our lab has recently constructed the world's largest functional brain model, Spaun. In this talk, I introduce that model and discuss two ways in which we are extending the functionality of this kind of large-scale neural circuit. First, we are exploring circuits for nonlinear adaptive perception and control. I will describe how these circuits map to canonical microcircuits, and demonstrate their utility on simple perception and motor tasks. Second, we have been exploring methods for improving decision making while performing novel tasks. We have recently developed and implemented the first hierarchical reinforcement learning model in spiking neurons, and I will show some recent results. Integrating these kinds of algorithms into models like Spaun will allow us to develop cortical models that are adaptive, robust, and flexible in a manner similar to natural brains.
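
For a flavor of how such spiking circuits are specified, here is a minimal sketch using the open-source Nengo simulator associated with this line of work. The toy network below (spiking neurons computing x squared) is my own illustration of the Neural Engineering Framework style, not a piece of Spaun.

    # Minimal NEF-style example in Nengo: spiking neurons computing x**2.
    # A toy network, not part of Spaun; assumes `pip install nengo`.
    import numpy as np
    import nengo

    with nengo.Network() as model:
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # input signal
        x = nengo.Ensemble(n_neurons=100, dimensions=1)      # spiking LIF pool
        y = nengo.Ensemble(n_neurons=100, dimensions=1)
        nengo.Connection(stim, x)
        nengo.Connection(x, y, function=lambda v: v ** 2)    # decode x**2
        probe = nengo.Probe(y, synapse=0.01)                 # filtered output

    with nengo.Simulator(model) as sim:
        sim.run(1.0)
    print(sim.data[probe][-5:])   # decoded estimate of sin(2*pi*t)**2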

Eric Ryu

 

Samsung, Inc.

 

Title: Small-animal behavior study and its implications for neuromorphic computing

 

Abstract: C. elegans is the smallest animal whose neural network has been studied intensively. We were able to simulate some of its typical behaviors by modeling its sensory, inter-, and motor neurons at the physiological level. I will show how a few hundred neurons can beautifully mimic the complex locomotive patterns of this small animal. This study shows that understanding the detailed functions of neurons and their networks can be important for building basic blocks for future complex, multipurpose neuromorphic computing systems. I will also discuss some open questions about current neuromorphic approaches.
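
As a self-contained illustration of the kind of small-circuit modeling described here, the sketch below simulates two mutually inhibiting rate neurons with slow adaptation, a classic half-center oscillator loosely reminiscent of the dorsal/ventral alternation in worm locomotion. All parameters are invented; this is not the Samsung model.

    # Toy two-neuron oscillator, loosely in the spirit of the C. elegans
    # motor-neuron circuits discussed above (all parameters invented).
    import numpy as np

    dt, T = 1e-3, 5.0
    steps = int(T / dt)
    v = np.array([0.1, 0.0])      # "dorsal"/"ventral" neuron activities
    a = np.zeros(2)               # slow adaptation variables
    tau_v, tau_a = 0.05, 0.5
    w_inh, w_adapt, drive = 2.0, 1.5, 1.0

    trace = np.empty((steps, 2))
    for t in range(steps):
        rate = np.maximum(v, 0.0)                  # rectified firing rate
        inhibition = w_inh * rate[::-1]            # mutual inhibition
        dv = (-v + drive - inhibition - w_adapt * a) / tau_v
        da = (-a + rate) / tau_a
        v += dt * dv
        a += dt * da
        trace[t] = rate

    # The two rates alternate, mimicking a dorsal/ventral bending rhythm.
    print(trace[::1000].round(2))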

Todd Hylton

 

Brain Corp.

 

Why Neuromorphic Computing is Difficult and Different

 

In this talk I discuss the basic proposition of neuromorphic computing, the challenges and misunderstandings associated with this proposition, and the lessons that I have learned since beginning work on the SyNAPSE program 7 years ago.  I offer a “Top 10” list of what is hard about neuromorphic computing and thoughts on the path forward. 

Ajay Divakaran

 

SRI International

Deep Visual and Auditory Fusion: Neural Evidence and Computational Models

 

Abstract:

 

We propose a new approach to developing a computational model for multimodal (audio-visual) perception and recognition using temporal deep learning networks. In our current neuroscience research, we identify brain regions that accumulate information over multiple timescales. In a controlled study, we collected electrocorticographic (ECoG) signals from subjects watching intact (and scrambled) movies. Our experiments suggest a connection between slow neuronal population dynamics and temporally extended information processing. We have also investigated the similarity of neural processing for the same linguistic content conveyed in speech versus in writing: we collected fMRI data from human subjects while they listened to a 7-minute spoken narrative or, alternately, read a time-locked presentation of its transcript. The results show that our ability to extract the same information from spoken and written forms arises from a mixture of modality-selective processes in early (perceptual) and high-order (control) areas, and modality-invariant processes in linguistic and extra-linguistic areas.

In a parallel research track in computational vision and machine learning, we have implemented a neuro-inspired, deep-learning-theoretic, hybrid computational system for the fusion of temporal multimodal data (e.g., audio and video) at different time scales. We have successfully trained the system for multimodal modeling of affect and social interactions and applied it to a variety of real-world problems. State-of-the-art computational models of multimodal data fusion follow (at a very coarse level) a data processing pipeline similar to that of the human cortex, while computational concepts such as sparse representation have positively impacted neuroscience. This suggests that synergistic development of the two research efforts can lead to significant gains in both areas. In particular, we will follow an interactive approach: we will use our understanding of cortical processing of multimodal stimuli to refine our neuro-inspired computational models for multimodal fusion, apply these models to real-world problems, and then use them (through a systematic exploration of the immediate space of models) to generate new, testable hypotheses to verify through neuroscientific experimentation, thereby enhancing our understanding of the cortical processes involved.
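
To make the fusion pattern concrete, here is a hedged sketch (not SRI's actual system) of late fusion for temporal multimodal data: one recurrent encoder per modality, each running at its own time scale, with a joint readout. All dimensions and names are invented.

    # Hedged sketch of audio-visual late fusion with temporal encoders;
    # an illustration of the general pattern, not the system in the talk.
    import torch
    import torch.nn as nn

    class AVFusion(nn.Module):
        def __init__(self, audio_dim=40, video_dim=512, hidden=64, n_classes=5):
            super().__init__()
            self.audio_enc = nn.GRU(audio_dim, hidden, batch_first=True)
            self.video_enc = nn.GRU(video_dim, hidden, batch_first=True)
            self.head = nn.Linear(2 * hidden, n_classes)   # late fusion

        def forward(self, audio, video):
            # audio: (batch, T_a, audio_dim); video: (batch, T_v, video_dim).
            # Different time scales are fine: each encoder summarizes its stream.
            _, ha = self.audio_enc(audio)
            _, hv = self.video_enc(video)
            fused = torch.cat([ha[-1], hv[-1]], dim=-1)
            return self.head(fused)

    model = AVFusion()
    logits = model(torch.randn(2, 100, 40), torch.randn(2, 30, 512))
    print(logits.shape)   # torch.Size([2, 5])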

Stephen Larson

 

OpenWorm: Aggregating biological simulations in an open science project

 

The OpenWorm project is an open science initiative to aggregate the simulation approaches and biological information of a specific model organism, C. elegans. The existing computational literature for C. elegans includes models at different levels of inquiry, including models of behavior, biomechanics, and nervous system function. Despite the existence of these models, a comprehensive model of how C. elegans behavior is derived from the activity of its underlying cells (principally muscles and neurons) still does not exist. While OpenWorm is principally engaged in building this model first in software, a related effort in neuromorphic engineering has emerged, known as Silicon elegans or Si elegans. This talk will provide an overview of OpenWorm and explain the broader importance of focusing on a specific organism to rally biological simulation and data integration technologies.

Jeffrey Krichmar

 

GPGPU Accelerated Simulation and Parameter Tuning for Neuromorphic Applications

 

Jeffrey L. Krichmar, Michael Beyeler, Kris D. Carlson, Nikil Dutt

University of California, Irvine

 

Neuromorphic engineering takes inspiration from biology to design brain-like systems that are extremely low-power, fault-tolerant, and capable of adaptation to complex environments. The design of these artificial nervous systems involves both the development of neuromorphic hardware devices and the development of neuromorphic simulation tools. In this presentation, I describe a simulation environment our group has developed that can be used to design, construct, and run spiking neural networks (SNNs) quickly and efficiently using graphics processing units (GPUs). The simulation environment utilizes the parallel processing power of GPUs to simulate large-scale SNNs. Recent modeling experiments performed with the simulator will be described. Finally, I will introduce an automated parameter tuning framework that uses the simulation environment and evolutionary algorithms to tune SNNs. We believe the simulation environment and associated parameter tuning framework presented here can accelerate the development of neuromorphic software and hardware applications by making the design, construction, and tuning of SNNs an easier task.
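
For a flavor of the computation such GPU simulators parallelize across neurons, here is a small numpy sketch of the standard Izhikevich (2003) spiking-neuron update on a random toy network. This illustrates the neuron model only; it is not the group's simulator code, and the network parameters are invented.

    # Izhikevich spiking-neuron update on a toy random network.
    import numpy as np

    N, dt, steps = 1000, 0.5, 2000          # neurons, ms per step, timesteps
    a, b, c, d = 0.02, 0.2, -65.0, 8.0      # regular-spiking parameters
    v = np.full(N, -65.0)                   # membrane potential (mV)
    u = b * v                               # recovery variable
    W = np.random.randn(N, N) * 0.05        # random synaptic weights

    spike_counts = np.zeros(N)
    fired = np.zeros(N, bool)
    for _ in range(steps):
        I = 5.0 * np.random.rand(N) + W @ fired.astype(float)  # noise + input
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        fired = v >= 30.0                   # spike threshold
        v[fired], u[fired] = c, u[fired] + d
        spike_counts += fired

    print("mean rate (Hz):", spike_counts.mean() / (steps * dt / 1000.0))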

Kevin Gomez

Seagate Technology

Neuro-Inspired Computing in Flash

 

Abstract:

 

NAND Flash dominates the non-volatile memory share of the mobile consumer device market and is currently providing compelling performance and energy-efficiency gains in large, compute-intensive datacenters. With the recent successful transition of the technology to 3D extending the scalability of NAND for another decade or more, this economy of scale, coupled with the significant level of signal processing already needed to sustain Flash density growth, suggests a path to painlessly commoditize and scale up cortical computing. Using results from architecture models, this talk will lay out a case for why now, why neuro-inspired, and why Flash.

Michael Pfeiffer

 

"Building blocks for learning and inference in neuromorphic systems"

 

Abstract:

 

Biological nervous systems are extremely efficient in allowing organisms to interact intelligently and in real time with their environment, and easily outperform state-of-the-art artificial intelligence on such real-world tasks. However, while conventional computer systems can easily be pre-programmed, or can learn from big volumes of data, it is still unclear how to achieve similar capabilities in neuromorphic systems that emulate in silicon the massively parallel, event-based, asynchronous, and adaptive computing paradigm of the brain. In my talk I will discuss recent results from our group on brain-inspired architectures and algorithms for synthesizing and learning cognitive functions in spiking neural networks, which provide building blocks for sensory processing, learning, and probabilistic inference. Such theoretical models not only shed light on the mechanisms of computation in real nervous systems, but also have direct use in practical technological applications. First, spike-based variants of successful machine learning principles can be directly applied to streams of events coming from neuromorphic sensors, which provide dynamic features and greatly reduce the amount of data to be processed. Second, implementations of spiking algorithms on asynchronous, event-based neuromorphic hardware platforms promise an efficient solution to the scalability issues that plague large and powerful neural network algorithms on conventional computers. Specifically, this will be demonstrated in applications of spiking Deep Belief Networks to vision and sensor-fusion tasks with silicon retina and cochlea inputs. By providing middle-layer architectures that are well understood in theory and map directly onto hardware platforms, we suggest that neuromorphic computing can provide significant advantages over conventional systems for real-time behaving systems, and a great opportunity for applications in the increasingly important field of big data processing.
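
To make the rate-to-spike idea concrete, the minimal sketch below Poisson-encodes analog activations as spike trains and decodes them back from spike counts. This is the basic intuition behind mapping pretrained networks such as Deep Belief Networks onto spiking substrates, not the specific method used in the talk.

    # Minimal rate-to-spike sketch: an analog activation is approximated
    # by the rate of a Poisson spike train (all parameters invented).
    import numpy as np

    rng = np.random.default_rng(0)
    T, dt = 1.0, 1e-3                       # 1 s of simulated spikes
    x = np.array([0.2, 0.5, 0.9])           # analog activations in [0, 1]
    max_rate = 100.0                        # spikes/s at activation 1.0

    # Poisson encoding: each timestep emits a spike with prob rate * dt.
    steps = int(T / dt)
    spikes = rng.random((steps, x.size)) < (x * max_rate * dt)

    est = spikes.sum(axis=0) / (T * max_rate)   # decoded activation estimate
    print("true:", x, "decoded:", est.round(2))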

George Dyson

 

From Analog to Digital and Back: The view from 1945

 

When John von Neumann, who never solved just one problem at a time, supervised the First Draft of a Report on the EDVAC in early 1945, he specified abstract neuro-inspired computational elements, partly because he was already fascinated with neurobiology, and partly because specifying electronic circuits might have restricted the circulation of the report under the wartime classification still in effect at the time. Von Neumann thus illuminated not only the road ahead from analog to digital, but also the road back, which still lies ahead of us.

Jacob Vogelstein

 

Machine Intelligence from Cortical Networks

 

Abstract

 

Many contemporary theories of neural information processing suggest that within a given cortical region or cognitive/sensory domain, the brain employs algorithms composed of repeated instances of a limited set of computing “primitives.”  These primitives are thought to operate in parallel and communicate with their neighbors above, below, and laterally, within and across brain areas.  Until recently, there were few tools available for interrogating the detailed structure and function of the cortical microcircuits believed to embody these primitives.  So while today’s state-of-the-art algorithms for machine learning and machine intelligence have drawn inspiration from theories about the nature of computing in the brain, the detailed operation of these algorithms deviates significantly from the operation of the brain.  Presumably, a significant part of the performance gap separating artificial and biological computing today is due to these deviations.

 

It may be possible to achieve more human-like performance in learning and pattern recognition tasks using digital and/or analog computing systems if the algorithms that we implement in software and hardware more closely approximate the algorithms employed by the brain.  In this talk, I will describe an idea for how we might leverage the revolution in neuroscience tools and techniques for high-resolution brain mapping to reveal the nature of cortical computing primitives and inspire a new generation of machine learning algorithms that employ facsimiles of these elements as their basis of operation.

 

R. Jacob Vogelstein, Ph.D.

Program Manager

ODNI/IARPA

Jeff Hawkins

 

From Cortical Microcircuits to Machine Intelligence

 

Abstract

 

The path to machine intelligence starts with a detailed understanding of how the neocortex works.   In this talk I will describe recent progress in cortical theory and demonstrate new applications enabled by this progress.   I will speculate on the future of machine intelligence and the technical challenges we face getting there.

 

Jeff Hawkins

Grok Solutions/Numenta

Thomas Sterling

 

Indiana University

Brain-Related Computing Beyond Moore’s Law

 

Abstract

 

Since the end of Dennard scaling almost a decade ago, multicore structures and GPU accelerators have been adopted to concentrate more computing capability within the practical constraints of size, cost, and power. With the end of Moore’s Law looming at the beginning of the next decade, these challenges will become all the more critical, as will the effectiveness with which the available resources are used. Both efficiency and scalability demand dramatic improvements if generality of application and user productivity are to be improved sufficiently for exascale capability. The human brain has adapted over half a billion years to meet the needs of survival within the limits of biology, achieving highly complex operations in real time. There is increasing interest in the lessons that might be learned from brain structures and in their use to advance computation beyond conventional practices. This presentation will describe two areas of consideration: one related to low-level structure, and the other associated with the high-level nature of the computation itself.

The brain achieves ultra-dense packing of neurons that perform local operations relatively slowly and distribute events to destinations across orders of magnitude of distance. Specifically, about 10^11 neurons are contained within a volume of approximately 1450 cubic centimeters, each communicating with roughly ten thousand other neurons, on average about once a millisecond. A computer that duplicated the action of the brain would have to operate at 2 to 4 exaflops; yet the brain consumes only about 20 Watts. One brain-inspired method being explored is the Continuum Computer Architecture (CCA), based on extended cellular automata techniques. Like the neuron, a CCA functional unit operates locally and independently. But unlike the brain, it is not explicitly connected physically; rather, it uses a packet-switching technique through the cellular medium to achieve the same effective connectivity.

The operational properties of the brain are very different from those of a semiconductor computer, giving rise to such distinctly human mental attributes as the perception of beauty, consciousness, pattern recognition, and emotion, among others. It is difficult to associate some of these with potential future computing systems, but one, intelligence, can be argued to be realizable as an algorithm. This presentation will describe an attempt to estimate a lower bound on the cost (in resources) of a machine exhibiting the property of intelligence, through the use of an abstract architecture. Together, these high-level and low-level perspectives are both brain-inspired and extend the means and usage of computing beyond Moore’s Law.
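
The figures quoted above combine into a simple back-of-envelope calculation; the sketch below just reproduces that arithmetic, with the flops-per-event factor chosen (my assumption) to recover the stated 2 to 4 exaflops range.

    # Back-of-envelope arithmetic behind the 2-4 exaflops figure above.
    neurons = 1e11          # ~10^11 neurons
    fan_out = 1e4           # each connects to ~10,000 others
    event_rate = 1e3        # ~one communication per millisecond (per abstract)

    synaptic_events_per_s = neurons * fan_out * event_rate   # 1e18 events/s
    for ops_per_event in (2, 4):
        print(f"{ops_per_event} flop/event -> "
              f"{synaptic_events_per_s * ops_per_event / 1e18:.0f} exaflops")
    # -> 2 and 4 exaflops, against the brain's ~20 W power budget.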

David W. Arathorn

 

Map-Seeking Circuit (MSC): A Computational Mechanism for Object Recognition Under Transformation With Digital and Analog Implementations.

 

The Map-Seeking Circuit (MSC) is a mathematically rigorous, performance-predictable mechanism for determining the transformations between two patterns, or more generally, between a pattern somewhere in the input field and one of a number of patterns in memory. MSC yields the parameters of transformation and the matching template identity in time proportional to the sum, rather than the product, of the number of transformations/templates in each of several stages. This efficiency is achieved by applying transformations to superpositions of transforms over multiple stages in a cortex-like bi-directional dataflow, relying on local competition to converge to a solution. For practical ATR from optical imagery, convergence is achieved in about 25 msec on current high-end GPU hardware. Two entirely analog implementations of MSC have been demonstrated in simulation (one in SPICE). The analog circuit dataflow is identical to the algorithmic version, and each mathematical operation in the algorithm has a direct analog counterpart. While future performance per watt on GPU or other digital hardware may be nearing a limit, analog or hybrid analog-digital implementations of MSC offer the possibility of several orders of magnitude reduction in power requirements at the same level of performance.
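
Below is a deliberately simplified, single-stage toy version of the map-seeking idea: recovering a 1-D shift by letting multiplicative competition prune transform hypotheses. The full MSC is multi-stage and bi-directional, so treat this only as an illustration of the competition dynamics, not as Arathorn's algorithm.

    # Toy single-stage map-seeking sketch: find the shift relating an
    # input to a stored template via competing transform hypotheses.
    import numpy as np

    rng = np.random.default_rng(1)
    template = rng.random(64)              # stored memory pattern
    true_shift = 7
    x = np.roll(template, true_shift)      # input = transformed template

    # One transform hypothesis per candidate shift, with coefficient g_i.
    shifts = np.arange(64)
    match = np.array([np.dot(np.roll(x, -s), template) for s in shifts])
    g = np.ones(len(shifts))
    for _ in range(10):
        g *= np.maximum(match, 0.0) / match.max()   # multiplicative competition
        g[g < 1e-6] = 0.0                           # prune dead hypotheses

    print("recovered shift:", shifts[np.argmax(g)])   # -> 7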

 

The presenter is the inventor/discoverer of MSC, and currently a part-time research professor in the Montana State University Department of Cell Biology and Neuroscience. In former incarnations he has been a CPU architect and hardware designer, a big-systems technical project manager, and a successful AI entrepreneur.

 

David W. Arathorn

General Intelligence Corp

and

Montana State University, Dept. of Cell Biology and Neuroscience

dwa@giclab.com


Please address comments or questions to Linda Wood