
NEUROMORPHIC COMPUTING SYSTEMS

Hybrid-Structured Deep Neural Network with Computing-in-Memory Strategy

Deep learning has achieved unprecedented success in many real-world applications. However, deep learning frameworks are difficult to implement efficiently in hardware because they rely on complex gradient-based learning algorithms and demand high memory bandwidth for synaptic weight storage, especially in today's data-intensive environment.


To bridge neuromorphic architectures and deep learning frameworks, my goal in this project is to build a new class of hybrid neural network (HNN) that combines spatial and temporal information processing by exploiting the unique dynamics of the delay-feedback reservoir (DFR) network. My techniques involve integrating a multilayer perceptron (MLP) with the DFR network to improve the network's learning capability, utilizing a computing-in-memory (CIM) strategy to accelerate the neural computations, and adopting a ridge regression learning rule to sidestep the vanishing gradient problem. The resulting spatial-temporal HNN (STHNN) has been successfully fabricated in a 180nm CMOS process with on-chip image classification capability.
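
As a minimal software sketch of the closed-form learning rule (the function name, dimensions, regularization value, and the random stand-ins for the trained MLP/DFR stages are all hypothetical; the chip realizes those stages in analog CIM hardware), ridge regression fits only the output weights, so no gradients are backpropagated through the hidden layers:

import numpy as np

def train_readout(states, labels, lam=1e-2):
    # Closed-form ridge regression: W = Y S^T (S S^T + lam*I)^(-1).
    # states: (n_features, n_samples) hidden states from the MLP/DFR stages
    # labels: (n_classes, n_samples) one-hot targets
    reg = lam * np.eye(states.shape[0])
    return labels @ states.T @ np.linalg.inv(states @ states.T + reg)

# Hypothetical usage: classify by the strongest readout response.
S = np.random.randn(64, 1000)                      # stand-in hidden states
Y = np.eye(10)[np.random.randint(0, 10, 1000)].T   # stand-in one-hot labels
W_out = train_readout(S, Y)
pred = np.argmax(W_out @ S, axis=0)

Because the hidden layers stay fixed, training reduces to a single linear solve, which is what makes this rule attractive for on-chip learning.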

[Figure: HNN.png]

High-Performance Delay-Feedback Reservoir Network

Continued progress in neuromorphic computing systems has pushed today's artificial intelligence (AI) markedly forward. Recurrent neural networks (RNNs) excel at processing temporal information, mirroring the way humans learn. Nevertheless, the highly nonlinear nature of RNNs limits the computational efficiency of training. Moreover, the required computational resources impose a significant hardware overhead, preventing such powerful computing modules from being deployed on resource-constrained or power-limited portable devices.


My goal in this project is to design a new class of nonlinear neural activation and spiking neurons for the reservoir computing network (RCN). The key insight of my approach is to simplify the conventional RCN down to a single nonlinear neural activation and a delay-feedback topology with a chain of spiking neurons. The resulting delay-feedback reservoir (DFR) network has been successfully fabricated in a 130nm BiCMOS process.
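
The simplified discrete-time model below illustrates the delay-feedback idea (all parameter names and values are illustrative, and the tanh node stands in for the fabricated analog spiking nonlinearity): one nonlinear node, time-multiplexed over a delay line of "virtual" nodes, replaces the large random reservoir of a conventional RCN:

import numpy as np

def dfr_states(u, n_virtual=50, gamma=0.5, eta=0.8, seed=0):
    # Each input sample is broadcast through a fixed random mask; every
    # virtual node mixes its masked input with feedback from the delay
    # loop (x[-1] wraps to the last node of the previous traversal).
    rng = np.random.default_rng(seed)
    mask = rng.choice([-1.0, 1.0], size=n_virtual)
    x = np.zeros(n_virtual)
    history = []
    for u_t in u:
        for i in range(n_virtual):
            x[i] = np.tanh(gamma * mask[i] * u_t + eta * x[i - 1])
        history.append(x.copy())
    return np.array(history)   # (time, n_virtual) reservoir states

The resulting state matrix feeds a simple linear readout; only the mask and two scalars (gamma, eta) define the reservoir, which is why this topology maps so compactly onto hardware.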

[Figure: DFR.png]

Energy-efficient Neuromorphic Computing Accelerator

Aggressive technology scaling and the explosion of "big data" applications impose severe reliability challenges on present-day very large scale integration (VLSI) designs and on data processing in conventional computing systems. By imitating the brain's massively parallel architecture and its unique analog-domain operations, neuromorphic computing systems are expected to deliver superior data-processing performance.


My goal in this project is to build a new class of neuromorphic computing accelerator that realizes high-speed parallel operations with the potential to classify non-separable functions. My techniques involve integrating rate-code processors with memristive crossbar arrays, sidestepping the energy and resources otherwise allocated to power-hungry peripherals. The resulting two-layer feedforward neural network (FNN) has been successfully fabricated in a 130nm BiCMOS process.
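
An idealized model of the crossbar's analog matrix-vector multiply is sketched below (the conductance range, differential-pair weight mapping, and layer sizes are assumptions for illustration; device nonidealities are ignored): input voltages drive the rows, and each column current accumulates the products in a single step:

import numpy as np

def crossbar_mvm(weights, v_in, g_min=1e-6, g_max=1e-4):
    # Map signed weights onto differential pairs of memristor conductances,
    # then let Kirchhoff's current law perform the multiply-accumulate:
    # each column current is the dot product of its conductances with v_in.
    w = weights / np.max(np.abs(weights))
    g_pos = g_min + (g_max - g_min) * np.clip(w, 0.0, 1.0)
    g_neg = g_min + (g_max - g_min) * np.clip(-w, 0.0, 1.0)
    return (g_pos - g_neg).T @ v_in   # differential column currents

# Hypothetical two-layer feedforward pass built from two crossbars.
W1, W2 = np.random.randn(8, 16), np.random.randn(16, 4)
v = np.random.rand(8)
h = np.tanh(crossbar_mvm(W1, v))   # neuron stage after the first array
y = crossbar_mvm(W2, h)            # second array produces class outputs

In the fabricated accelerator the rows are driven by rate-coded pulse streams rather than static voltages; this static model captures only the time-averaged behavior.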

[Figure: FNN.png]