Jae-sun Seo

Assistant Professor

Jae-sun Seo received the B.S. degree from Seoul National University in 2001, and the M.S. and Ph.D. degrees from the University of Michigan in 2006 and 2010, respectively, all in electrical engineering. He held graduate research internships at the Intel circuit research lab in 2006 and the Sun Microsystems VLSI research group in 2008. From January 2010 to December 2013, he was with the IBM T. J. Watson Research Center, where he worked on cognitive computing chips under the DARPA SyNAPSE project and on energy-efficient integrated circuits for high-performance processors. In January 2014, he joined Arizona State University (ASU) as an assistant professor in the School of Electrical, Computer and Energy Engineering (ECEE). During the summer of 2015, he was a visiting faculty member at the Intel Circuits Research Lab.

His research interests include efficient hardware design of machine learning and neuromorphic algorithms, and integrated power management. Dr. Seo was a recipient of the Samsung Scholarship (2004-2009), the IBM Outstanding Technical Achievement Award (2012), and the NSF CAREER Award (2017). He is an IEEE Senior Member and has served on the technical program committees for ISLPED (2013-2019), DAC (2018-2020), and ICCAD (2018-2019), on the review committee for ISCAS (2017-2019), and on the organizing committee for ICCD (2015-2017).

Latest News

March 2020: Our paper titled “Online Knowledge Acquisition with Selective Inherited Model” is accepted for publication at the 2020 IEEE International Joint Conference on Neural Networks (IJCNN), held in Glasgow, United Kingdom (collaboration with Oak Ridge National Laboratory).

January 2020: Our paper titled “ECG Authentication Hardware Design with Low-Power Signal Processing and Neural Network Optimization with Low Precision and Structured Compression” is accepted for publication in the IEEE Transactions on Biomedical Circuits and Systems (TBioCAS) special section on “AI-Based Biomedical Circuits and Systems” (collaboration with Samsung Advanced Institute of Technology).

January 2020: Our paper titled “Deep Convolutional Neural Network Accelerator Featuring Conditional Computing and Low External Memory Access” is accepted for publication at the 2020 IEEE Custom Integrated Circuits Conference (CICC) in Boston, MA.

January 2020: Prof. Seo will give a presentation on “Structured Sparsity and Low-Precision Quantization for Energy-/Area-Efficient DNNs” at the forum “Machine Learning at the Extreme Edge” at the 2020 International Solid-State Circuits Conference (ISSCC) in San Francisco, CA.

January 2020: Prof. Seo will serve as a panelist on “The Role of NVM, Emerging Memories and In-Memory Compute for Edge AI” at the 2020 TinyML Summit in San Jose, CA.

January 2020: Prof. Seo will serve as the track chair for the DES4 track on AI/ML System Design at the 2020 IEEE/ACM Design Automation Conference (DAC), held in San Francisco, CA.

January 2020: Our paper titled “A Variation Robust Inference Engine Based on STT-MRAM with Parallel Read-Out” is accepted for publication at the 2020 IEEE International Symposium on Circuits and Systems (ISCAS), held in Seville, Spain (collaboration with Georgia Tech and Samsung Semiconductor).

January 2020: Our paper titled “Impact of Read Disturb on Multi-level RRAM based Inference Engine: Experiments and Model Prediction” is accepted for publication at the 2020 IEEE International Reliability Physics Symposium (IRPS), held in Dallas, TX (collaboration with Georgia Tech).

January 2020: Our paper titled “XNOR-SRAM: In-Memory Computing SRAM Macro for Binary/Ternary Deep Neural Networks” is accepted for publication in the IEEE Journal of Solid-State Circuits (collaboration with Columbia University).

November 2019: Our paper titled “A 2.6 TOPS/W 16-bit Fixed-Point Convolutional Neural Network Learning Processor in 65nm CMOS” is accepted for publication in IEEE Solid-State Circuits Letters (SSC-L).

October 2019: Our paper titled “Monolithically Integrated RRAM- and CMOS-Based In-Memory Computing Optimizations for Efficient Deep Learning” is accepted for publication in IEEE Micro for the special issue on “Monolithic 3D Integration” (collaboration with Georgia Tech and POSTECH).

October 2019: Our preprint manuscript titled “High-Throughput In-Memory Computing for Binary Deep Neural Networks with Monolithically Integrated RRAM and 90nm CMOS” is posted on arXiv (collaboration with Georgia Tech).

September 2019: Our paper titled “Vesti: Ultra-Energy-Efficient In-Memory Computing Accelerator for Deep Neural Networks” is accepted for publication in the IEEE Transactions on Very Large Scale Integration (VLSI) Systems (collaboration with Columbia University).