
Jae-sun Seo

Associate Professor

Jae-sun Seo received the B.S. degree from Seoul National University in 2001, and the M.S. and Ph.D. degrees from the University of Michigan in 2006 and 2010, respectively, all in electrical engineering. He held graduate research internships at the Intel Circuit Research Lab in 2006 and the Sun Microsystems VLSI Research Group in 2008. From January 2010 to December 2013, he was with the IBM T. J. Watson Research Center, where he worked on cognitive computing chips under the DARPA SyNAPSE project and on energy-efficient integrated circuits for high-performance processors. In January 2014, he joined ASU as an Assistant Professor in the School of ECEE. During the summer of 2015, he was a visiting faculty member at the Intel Circuits Research Lab. In August 2020, he was promoted to Associate Professor with tenure.

His research interests include efficient hardware design of machine learning and neuromorphic algorithms, and integrated power management. Dr. Seo was a recipient of the Samsung Scholarship (2004-2009), the IBM Outstanding Technical Achievement Award (2012), the NSF CAREER Award (2017), the Facebook Distinguished Faculty Award (2020), and the Intel Outstanding Researcher Award (2021). He is an IEEE Senior Member, and has served on the technical program committees for ISSCC (2022), DAC (2018-2021), DATE (2021), ICCAD (2018-2020), and ISLPED (2013-2019), as a review committee member for ISCAS (2017-2019), and on the organizing committees for ICCD (2015-2017) and AICAS (2021).

Latest News

May 2021: Two of our papers have been accepted for publication at 2021 IEEE International Conference on Field-Programmable Logic and Applications (FPL), which will be held virtually.

  • End-to-End FPGA-based Object Detection Using Pipelined CNN and Non-Maximum Suppression (collaboration with University of Toronto and Intel)
  • FixyFPGA: Efficient FPGA Accelerator for Deep Neural Networks with High Element-Wise Sparsity and without External Memory Access (collaboration with Arm)

April 2021: Our paper titled “Impact of On-Chip Interconnect on In-Memory Acceleration of Deep Neural Networks” is accepted for publication at ACM Journal on Emerging Technologies in Computing Systems (JETC) (collaboration with University of Wisconsin, Madison).

April 2021: Our paper titled “PIMCA: A 3.4-Mb Programmable In-Memory Computing Accelerator in 28nm for On-Chip DNN Inference” is accepted for publication at 2021 IEEE Symposium on VLSI Circuits, which will be held virtually (collaboration with Columbia University and Samsung Advanced Institute of Technology).

March 2021: Prof. Seo will join the International Technical Program Committee for IEEE International Solid-State Circuits Conference (ISSCC) as a member of the Machine Learning and AI subcommittee.

March 2021: Prof. Seo will serve as an Associate Editor for IEEE Open Journal of the Solid-State Circuits Society (OJ-SSCS).

March 2021: Our paper titled “Total Ionizing Dose Effect on Multi-state HfOx-based RRAM Synaptic Array” is accepted for publication at IEEE Transactions on Nuclear Science (TNS) (collaboration with Georgia Tech).

March 2021: Our paper titled “Structured Pruning of RRAM Crossbars for Efficient In-Memory Computing Acceleration of Deep Neural Networks” is accepted for publication at IEEE Transactions on Circuits and Systems II (TCAS-II) for the special issue on 2021 IEEE International Symposium on Circuits and Systems (ISCAS) (collaboration with Georgia Tech).

February 2021: Our paper titled “Leveraging Noise and Aggressive Quantization of In-Memory Computing for Robustness Improvement of DNN Hardware Against Adversarial Input and Weight Attacks” is accepted for publication at 2021 ACM/IEEE Design Automation Conference (DAC), which will be held in San Francisco (collaboration with Columbia University).

February 2021: Prof. Seo received Intel’s 2020 Outstanding Research Award, together with Prof. Yu Cao and Prof. Sarma Vrudhula, for our project on FPGA-based accelerator designs for training deep learning algorithms.

February 2021: Two of our papers have been accepted for publication at 2021 IEEE International Symposium on Circuits and Systems (ISCAS), which will be held virtually and in Daegu, South Korea.

  • Hybrid In-memory Computing Architecture for the Training of Deep Neural Networks (collaboration with King’s College London)
  • Structured Pruning of RRAM Crossbars for Efficient In-Memory Computing Acceleration of Deep Neural Networks (collaboration with Georgia Tech)

December 2020: Prof. Seo received a gift funding from Facebook Reality Labs (FRL) Silicon Team as a Distinguished Faculty Award.

December 2020: Two of our papers have been accepted for publication at 2021 IEEE International Reliability Physics Symposium (IRPS), which will be held virtually (collaboration with Georgia Tech).

  • Characterization and Mitigation of Relaxation Effects on Multi-level RRAM based In-Memory Computing [Best Student Paper Candidate]
  • Impact of Multilevel Retention Characteristics on RRAM based DNN Inference Engine

December 2020: Prof. Seo’s short opinion on a recent MCUNet work by MIT has been featured in a WIRED article.

November 2020: Our paper titled “Modeling and Optimization of SRAM-based In-Memory Computing Hardware Design” is accepted for publication at 2021 IEEE Design, Automation & Test in Europe (DATE), which will be held virtually.

November 2020: Prof. Seo will serve as a technical program committee member of the inaugural TinyML Research Symposium, which will be held in March 2021.

October 2020: Our paper titled “Energy-Efficient Deep Convolutional Neural Network Accelerator Featuring Conditional Computing and Low External Memory Access” is accepted for publication at IEEE Journal of Solid-State Circuits (JSSC) for the special issue on 2020 IEEE Custom Integrated Circuits Conference (CICC).

September 2020: Our paper titled “Two-Step Write-Verify Scheme and Impact of the Read Noise in Multilevel RRAM based Inference Engine” is accepted for publication at Semiconductor Science and Technology (collaboration with Georgia Tech).

September 2020: Prof. Seo will make an invited presentation on “Introduction Into In-Memory Computing and Efficient AI Processing at the Edge” at the educational workshop “Edge AI and In-Memory-Computing for energy efficient AIoT solutions” at 2020 European Solid-State Circuits Conference, which will be held virtually.

August 2020: Prof. Seo will serve as a review committee member for the Special Section on Parallel and Distributed Computing Techniques for Non-Von Neumann Technologies for IEEE Transactions on Parallel and Distributed Systems.

August 2020: Prof. Seo will make an invited presentation on “SRAM and RRAM based In-Memory Computing” at the “Neuromorphic AI Semiconductor Online Short Course” organized by The Institute of Semiconductor Engineers (ISE) in South Korea, which will be held virtually.

August 2020: Our invited paper titled “Deep Neural Network Training Accelerator Designs in ASIC and FPGA” will be presented at IEEE International SoC Conference (ISOCC), which will be held both virtually and in Yeosu, South Korea.

August 2020: Our paper titled “Array Level Programming of 3-bit Per Cell Resistive Memory and Its Application for Deep Neural Network Inference” is accepted for publication at IEEE Transactions on Electron Devices (T-ED) (collaboration with Georgia Tech) for the special section on 2020 European Solid-State Device Research Conference (ESSDERC).

August 2020: Our paper titled “High-Throughput In-Memory Computing for Binary Deep Neural Networks with Monolithically Integrated RRAM and 90nm CMOS” is accepted for publication at IEEE Transactions on Electron Devices (T-ED) (collaboration with Georgia Tech).

August 2020: Our paper titled “A Latency-Optimized Reconfigurable NoC for In-Memory Acceleration of DNNs” is accepted for publication at IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS).

July 2020: Our paper titled “Compressing LSTM Networks with Hierarchical Coarse-Grain Sparsity” is accepted for publication at 2020 INTERSPEECH, which will be held both virtually and in Shanghai, China.

July 2020: Our paper titled “2-Bit-per-Cell RRAM based In-Memory Computing for Area-/Energy-Efficient Deep Learning” is accepted for publication at IEEE Solid-State Circuits Letters (SSC-L) (collaboration with Georgia Tech and POSTECH) for the special section on 2020 European Solid-State Circuits Conference (ESSCIRC).

July 2020: Our paper titled “A Smart Hardware Security Engine Combining Entropy Sources of ECG, HRV and SRAM PUF for Authentication and Secret Key Generation” is accepted for publication at IEEE Journal of Solid-State Circuits (JSSC) (collaboration with Samsung Advanced Institute of Technology) for the special issue on 2019 Asian Solid-State Circuits Conference (A-SSCC).

July 2020: Our paper titled “FPGA-based Low-Batch Training Accelerator for Modern CNNs Featuring High Bandwidth Memory” is accepted for publication at IEEE International Conference On Computer Aided Design (ICCAD), which will be held virtually (collaboration with Intel).

July 2020: Our paper titled “Regulation Control Design Techniques for Integrated Switched Capacitor Voltage Regulators” is accepted for publication at IEEE International Midwest Symposium on Circuits and Systems (MWSCAS), which will be held virtually.

June 2020: Our paper titled “Interconnect-Aware Area and Energy Optimization for In-Memory Acceleration of DNNs” is accepted for publication at IEEE Design & Test.

May 2020: Our paper titled “C3SRAM: An In-Memory-Computing SRAM Macro Based on Robust Capacitive Coupling Computing Mechanism” is accepted for publication at IEEE Journal of Solid-State Circuits (JSSC) (collaboration with Columbia University) for the special issue on 2019 European Solid-State Circuits Conference (ESSCIRC).

May 2020: Our paper titled “An 8.93 TOPS/W LSTM Recurrent Neural Network Accelerator Featuring Hierarchical Coarse-Grain Sparsity for On-Device Speech Recognition” is accepted for publication at IEEE Journal of Solid-State Circuits (JSSC) for the special issue on 2019 European Solid-State Circuits Conference (ESSCIRC).

April 2020: Prof. Seo will make an invited presentation on “Monolithically Integrated RRAM-based Analog/ Mixed-Signal In-Memory Computing for Energy-Efficient Deep Learning” at the workshop “Analog Computing Technologies and Circuits for Efficient Machine Learning Hardware” at 2020 Symposia on VLSI Technology and Circuits, which will be held virtually.

April 2020: Our paper titled “Total Ionizing Dose Effect on Multi-state HfOx-based RRAM Synaptic Array” is accepted for publication at 2020 IEEE Nuclear & Space Radiation Effects Conference, which will be held in Santa Fe, NM (collaboration with Georgia Tech).

April 2020: Our paper titled “Investigation of Read Disturb and Bipolar Read Scheme on Multilevel RRAM based Deep Learning Inference Engine” is accepted for publication at IEEE Transactions on Electron Devices (T-ED) (collaboration with Georgia Tech).

March 2020: Our paper titled “Online Knowledge Acquisition with Selective Inherited Model” is accepted for publication at 2020 IEEE International Joint Conference on Neural Networks (IJCNN), which will be held in Glasgow, United Kingdom (collaboration with Oak Ridge National Laboratory).

January 2020: Our paper titled “ECG Authentication Hardware Design with Low-Power Signal Processing and Neural Network Optimization with Low Precision and Structured Compression” is accepted for publication at IEEE Transactions on Biomedical Circuits and Systems (TBioCAS) (collaboration with Samsung Advanced Institute of Technology) for the special section on “AI-Based Biomedical Circuits and Systems”.

January 2020: Our paper titled “An On-Chip Learning Accelerator for Spiking Neural Networks using STT-RAM Crossbar Arrays” is accepted for publication at 2020 IEEE Design, Automation & Test in Europe (DATE), which will be held virtually.

January 2020: Our paper titled “Deep Convolutional Neural Network Accelerator Featuring Conditional Computing and Low External Memory Access” is accepted for publication at 2020 IEEE Custom Integrated Circuits Conference (CICC) in Boston, MA.

January 2020: Prof. Seo will make a presentation on “Structured Sparsity and Low-Precision Quantization for Energy-/Area-Efficient DNNs” at the forum “Machine Learning at the Extreme Edge” at 2020 International Solid-State Circuits Conference (ISSCC) in San Francisco, CA.

January 2020: Prof. Seo will serve as a panelist on the panel “The Role of NVM, Emerging Memories and In-Memory Compute for Edge AI” at the 2020 TinyML Summit, which will be in San Jose, CA.

January 2020: Prof. Seo will serve as the track chair for DES4 on AI/ML System Design for 2020 IEEE/ACM Design Automation Conference (DAC), which will be held in San Francisco, CA. 

January 2020: Our paper titled “A Variation Robust Inference Engine Based on STT-MRAM with Parallel Read-Out” is accepted for publication at 2020 IEEE International Symposium on Circuits and Systems (ISCAS), which will be held in Seville, Spain (collaboration with Georgia Tech and Samsung Semiconductor).

January 2020: Our paper titled “Impact of Read Disturb on Multi-level RRAM based Inference Engine: Experiments and Model Prediction” is accepted for publication at 2020 IEEE International Reliability Physics Symposium (IRPS), which will be held in Dallas, TX (collaboration with Georgia Tech).

January 2020: Our paper titled “XNOR-SRAM: In-Memory Computing SRAM Macro for Binary/Ternary Deep Neural Networks” is accepted for publication at IEEE Journal of Solid-State Circuits (JSSC) (collaboration with Columbia University).

November 2019: Our paper titled “A 2.6 TOPS/W 16-bit Fixed-Point Convolutional Neural Network Learning Processor in 65nm CMOS” is accepted for publication at IEEE Solid-State Circuits Letters (SSC-L).

October 2019: Our paper titled “Monolithically Integrated RRAM- and CMOS-Based In-Memory Computing Optimizations for Efficient Deep Learning” is accepted for publication at IEEE Micro for the special issue on “Monolithic 3D Integration” (collaboration with Georgia Tech and POSTECH).

October 2019: Our preprint manuscript titled “High-Throughput In-Memory Computing for Binary Deep Neural Networks with Monolithically Integrated RRAM and 90nm CMOS” is posted on arXiv (collaboration with Georgia Tech).

September 2019: Our paper titled “Vesti: Ultra-Energy-Efficient In-Memory Computing Accelerator for Deep Neural Networks” is accepted for publication at IEEE Transactions on Very Large Scale Integration (VLSI) Systems (collaboration with Columbia University).