I. Resistive Random Access Memory (RRAM)
The resistive switching phenomenon in oxides refers to the observation that an oxide thin film can reversibly switch between an insulating state and a conducting state. Oxide-based RRAM is regarded as one of the most promising candidates for future nonvolatile memory applications due to its simple structure, low switching voltage, and fast switching speed. We are interested in the following topics:
1. RRAM device optimization: low-power/energy programming, built-in nonlinearity, reliability improvement;
2. RRAM device modeling: physical numerical modeling, compact modeling for SPICE simulation;
3. RRAM array and architecture design: programming scheme, sensing scheme, peripheral circuitry design;
4. 3-D integration of RRAM: selector for cross-point array, 3-D device prototyping, 3-D array architecture exploration;
5. Radiation effects in RRAM devices and RRAM based systems.
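The switching behavior described above can be illustrated with a minimal behavioral sketch. The class, thresholds, and resistance values below are hypothetical placeholders chosen for illustration, not a fitted compact model of any real device:

```python
class RRAMCell:
    """Toy two-state RRAM cell: switches between a high-resistance state
    (HRS, insulating) and a low-resistance state (LRS, conducting)."""

    def __init__(self, r_hrs=1e6, r_lrs=1e4, v_set=1.5, v_reset=-1.2):
        self.r_hrs = r_hrs      # HRS resistance in ohms (assumed value)
        self.r_lrs = r_lrs      # LRS resistance in ohms (assumed value)
        self.v_set = v_set      # SET threshold voltage in volts (assumed)
        self.v_reset = v_reset  # RESET threshold voltage in volts (assumed)
        self.state = "HRS"      # start in the insulating state

    def apply_voltage(self, v):
        """Switch state if |v| crosses a threshold, then return I = V / R."""
        if v >= self.v_set:
            self.state = "LRS"      # SET: switch to the conducting state
        elif v <= self.v_reset:
            self.state = "HRS"      # RESET: switch back to insulating
        r = self.r_lrs if self.state == "LRS" else self.r_hrs
        return v / r

cell = RRAMCell()
i_read_before = cell.apply_voltage(0.2)  # read in HRS: small current
cell.apply_voltage(2.0)                  # SET pulse switches the cell to LRS
i_read_after = cell.apply_voltage(0.2)   # read in LRS: much larger current
```

A small read voltage (here 0.2 V) senses the stored state without disturbing it, since it stays below both switching thresholds; real compact models for SPICE additionally capture gradual filament dynamics, nonlinearity, and variability.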
II. Machine/Deep Learning and Neuromorphic Computing with Synaptic Devices
Machine/deep learning and neuromorphic computing algorithms typically require enormous amounts of computational and memory resources to train the model parameters and/or perform inference. The back-and-forth data transfer between the processing core and memory via the narrow I/O interface imposes a “memory wall” problem on the entire system. A radical shift of the computing paradigm toward “compute-in-memory” is therefore an attractive solution: logic and memory arrays are integrated in a fine-grained fashion, and the data-intensive computation is offloaded to the memory periphery. Many on-chip memory arrays (including SRAM, RRAM, and others) could be customized as synaptic arrays that parallelize the matrix-vector multiplication, or weighted-sum, operations in neural networks. On-chip implementation of machine/deep learning and neuromorphic computing requires co-design of devices, circuits, and algorithms, and may yield orders-of-magnitude improvements in speed and energy efficiency for intelligent tasks such as image or speech recognition. We are interested in the following topics:
1. Synaptic device optimization for multilevel state modulation and low-power/energy/parallel programming;
2. Chip-scale design and integration of synaptic arrays with CMOS neuron circuits or neuron-like devices;
3. Electronic design automation (EDA) tool development for evaluating various synaptic devices and neuro-inspired array architectures;
4. Algorithm customization for hardware implementation including sparse coding, multilayer perceptron, convolutional neural network, recurrent network, spiking neural network, etc.
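The weighted-sum operation a synaptic crossbar performs in the analog domain can be sketched numerically. Weights are stored as cell conductances G[i][j]; applying input voltages V[j] to the columns produces row currents I[i] = Σ_j G[i][j]·V[j] in a single step (Ohm's law per cell, Kirchhoff's current law per row). All values below are hypothetical, chosen only to make the arithmetic concrete:

```python
import numpy as np

def crossbar_weighted_sum(g, v):
    """Ideal analog matrix-vector multiply: row currents (A) from a
    conductance matrix (S) and an input voltage vector (V)."""
    g = np.asarray(g, dtype=float)
    v = np.asarray(v, dtype=float)
    return g @ v  # per-row current summation mirrors Kirchhoff's current law

# A 2x3 array of microsiemens-scale conductances driven by three voltages
g = [[1e-6, 2e-6, 0.5e-6],
     [3e-6, 0.0,  1e-6]]
v = [0.1, 0.2, 0.1]
currents = crossbar_weighted_sum(g, v)  # one "cycle" computes the whole product
```

This ideal model ignores the nonidealities (wire resistance, device nonlinearity, limited conductance levels, read noise) that the device-circuit-algorithm co-design mentioned above must account for.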
III. Nanoelectronics based Hardware Security Primitives
Cyber systems and electronic devices are vulnerable targets for adversaries, leading to serious security and privacy issues for defense applications. Classical cryptography relies on a secret key, which may leak to an adversary by various means, e.g. software attacks (viruses), invasive physical attacks, or side-channel attacks. These concerns motivate the development of the Physical Unclonable Function (PUF). A PUF is a security primitive that leverages the inherent randomness in a physical system (e.g. the semiconductor manufacturing process) to produce unique responses (outputs) to queries called challenges (inputs). We plan to leverage the variability of nanoelectronic devices (e.g. RRAM) as the physical mechanism for PUF applications. We are interested in the following topics:
1. Designing RRAM based PUF circuits and arrays emphasizing the reliability;
2. Embedding RRAM PUFs into cryptographic modules such as hash functions and encryption functions.
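The challenge-response idea can be illustrated with a toy construction (a hypothetical sketch, not a published RRAM PUF design): process variability gives every cell a slightly different resistance, a challenge selects a pair of cells, and the response bit records which cell's resistance is higher. Here a seeded random generator stands in for manufacturing randomness, so each seed plays the role of a distinct chip:

```python
import random

class ToyRRAMPUF:
    """Toy RRAM PUF: response bits come from pairwise resistance comparisons."""

    def __init__(self, n_cells=64, seed=None):
        rng = random.Random(seed)
        # Random per-cell resistances model manufacturing variability
        # (nominal 100 kOhm with 10% spread; values are illustrative).
        self.resistances = [rng.gauss(1e5, 1e4) for _ in range(n_cells)]

    def response(self, challenge):
        """Challenge = a pair of cell indices; response = one comparison bit."""
        i, j = challenge
        return 1 if self.resistances[i] > self.resistances[j] else 0

puf_a = ToyRRAMPUF(seed=1)  # two "chips" with different random variability
puf_b = ToyRRAMPUF(seed=2)
bits_a = [puf_a.response((k, k + 32)) for k in range(32)]
bits_b = [puf_b.response((k, k + 32)) for k in range(32)]
# The same chip reproduces its bits; a different chip typically disagrees
# on many of them, which is what makes the function unclonable.
```

In a real RRAM PUF the comparisons would be done by sense amplifiers on actual cell resistances, and the reliability challenge (topic 1 above) is keeping the response bits stable against read noise, drift, and temperature.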
Sponsors of Research
We acknowledge the support from our sponsors: