Neuromorphic/Brain-Modeling Computing

Members: Hunjun Lee, Chanmyeong Kim, Eunjin Baek, Yujin Chung, Minseop Kim, Yeongwoo Jang

Motivation

Brain-inspired computing (e.g., deep neural networks) has made great advances by exploiting a few basic mechanisms of the biological brain. However, the brain is a complex system whose detailed working mechanisms remain unknown to researchers. To realize the full potential of brain-inspired computing, we are building FlexBrain, a special-purpose computer system that faithfully simulates and analyzes a complex, human-scale brain. We expect our system to enable next-generation brain-inspired computing as well as contribute to neuroscience.

Research

Brain Simulation System [ISCA’18, MICRO’19, ASPLOS’21, HPCA’22]. Brain simulation is a key process for understanding the detailed working mechanisms of the complex brain. A human-scale brain consists of billions of heterogeneous neurons interconnected via synapses, and it communicates through complex spike deliveries. Conventional computers are therefore too slow or too expensive to simulate a brain faithfully, whereas custom-designed brain simulation systems suffer from limited model coverage. To resolve this issue, we are building a fast and scalable brain-simulation system. We propose Flexon and FlexLearn, two cost-effective ASIC-based architectures that dynamically construct heterogeneous biological neuron models and learning rules, respectively. We are also developing FlexBrain, a fast, accurate, and scalable full-stack brain simulation system that efficiently incorporates the Flexon and FlexLearn ASICs, and we plan to release the RTL code for the Flexon and FlexLearn datapaths as well. Moreover, we optimize the single-core architecture by adopting an event-driven simulation method; the resulting design, NeuroEngine, significantly improves simulation speed and energy efficiency.
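To illustrate the event-driven simulation style that NeuroEngine builds on, below is a minimal Python sketch of an event-driven network of leaky integrate-and-fire (LIF) neurons, in which membrane state is updated only when a spike event arrives. All names, parameters, and the example network are illustrative assumptions; this is not the Flexon, FlexLearn, or NeuroEngine design.

    import heapq
    import math

    # Minimal event-driven simulation of leaky integrate-and-fire (LIF) neurons.
    # Illustrative sketch only; parameters and structure are hypothetical and do
    # not reflect the Flexon/FlexLearn/NeuroEngine designs.

    TAU_M = 20.0        # membrane time constant (ms)
    V_THRESH = 1.0      # spike threshold
    V_RESET = 0.0       # reset potential after a spike
    DELAY = 1.0         # synaptic delay (ms)

    class Neuron:
        def __init__(self):
            self.v = 0.0            # membrane potential
            self.last_update = 0.0  # time of last state update (ms)

        def integrate(self, t, weight):
            # Decay the membrane potential analytically since the last event,
            # then add the incoming synaptic weight.
            self.v *= math.exp(-(t - self.last_update) / TAU_M)
            self.last_update = t
            self.v += weight
            if self.v >= V_THRESH:
                self.v = V_RESET
                return True         # neuron fired
            return False

    def simulate(neurons, synapses, initial_spikes, t_end):
        """synapses: dict mapping source id -> list of (target id, weight)."""
        events = list(initial_spikes)   # priority queue of (time, firing neuron)
        heapq.heapify(events)
        fired = []
        while events:
            t, src = heapq.heappop(events)
            if t > t_end:
                break
            fired.append((t, src))
            # Deliver the spike to every postsynaptic neuron after the delay.
            for dst, w in synapses.get(src, []):
                if neurons[dst].integrate(t + DELAY, w):
                    heapq.heappush(events, (t + DELAY, dst))
        return fired

    # Example: a 3-neuron chain driven by one external spike at t = 0 ms.
    net = [Neuron() for _ in range(3)]
    syn = {0: [(1, 1.2)], 1: [(2, 1.2)]}
    print(simulate(net, syn, initial_spikes=[(0.0, 0)], t_end=50.0))

Because state updates are triggered only by spike arrivals, sparse activity translates directly into less work, which is the property an event-driven single-core design can exploit.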

Applying to AI/Deep Learning [Neurocomputing’21]. The human brain is extremely power-efficient compared to modern AI accelerators. Therefore, the brain’s spiking neural network (SNN) mechanism is considered highly promising for improving the power efficiency of modern AI accelerators. However, accurate and power-efficient SNN-based AI accelerators do not exist yet, because it is unclear how to maintain the SNN's efficiency advantages while accurately processing AI/DL applications. To resolve this issue, we are developing cost-effective SNN-based AI/DL execution mechanisms and acceleration systems.
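As a rough illustration of how an SNN can process a DL workload, the following Python sketch approximates a ReLU fully connected layer with rate-coded integrate-and-fire neurons and compares the two outputs. The layer size, threshold, and coding scheme are toy assumptions, not the mechanisms or the evaluation methodology from our work.

    import numpy as np

    # Toy rate-coded SNN layer: approximates a ReLU fully connected layer with
    # integrate-and-fire neurons over T timesteps. Illustrative assumptions only.

    rng = np.random.default_rng(0)
    T = 256                         # number of simulation timesteps
    threshold = 1.0                 # firing threshold

    x = rng.random(8)               # input activations in [0, 1]
    w = rng.normal(0, 0.5, (4, 8))  # weights of a small fully connected layer

    # Reference ANN output: ReLU(Wx)
    ann_out = np.maximum(w @ x, 0.0)

    # SNN: present the input as Bernoulli spike trains with rate x, accumulate
    # membrane potential, and emit a spike whenever the threshold is crossed.
    v = np.zeros(4)
    spike_counts = np.zeros(4)
    for _ in range(T):
        in_spikes = (rng.random(8) < x).astype(float)  # rate-coded input spikes
        v += w @ in_spikes
        fired = v >= threshold
        spike_counts += fired
        v[fired] -= threshold       # "soft reset" keeps the residual potential

    snn_out = spike_counts / T      # firing rates approximate the ReLU outputs
    print("ANN:", np.round(ann_out, 3))
    print("SNN:", np.round(snn_out, 3))

Longer spike trains (larger T) make the rate-coded approximation more accurate but cost more time and energy, which is the kind of accuracy/efficiency trade-off a cost-effective SNN-based execution mechanism must navigate.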

Brain-Inspired Workloads. Another major challenge in developing a brain simulation system is validating the system and applying it to real-world problems; for example, researchers lack standardized brain-modeling workloads and practical usage scenarios for them. To resolve this issue, we are collaborating with researchers in brain-computer interfaces and neuroscience to develop representative workloads for evaluating brain-modeling systems, along with successful use cases that can contribute to both the neuroscience and AI/DL areas.

Software release

Publications

  • NeuroSync: A Scalable and Accurate Brain Simulation System using Safe and Efficient Speculation
    Hunjun Lee*, Chanmyeong Kim*, Minseop Kim, Yujin Chung, and Jangwoo Kim
    IEEE International Symposium on High-Performance Computer Architecture (HPCA), Apr. 2022
  • An Accurate and Fair Evaluation Methodology for SNN-Based Inferencing with Full-Stack Hardware Design Space Explorations
    Hunjun Lee, Chanmyeong Kim, Eunjin Baek, and Jangwoo Kim
    Neurocomputing, Sep. 2021
  • NeuroEngine: A Hardware-based Event-driven Simulation System for Advanced Brain-inspired Computing
    Hunjun Lee*, Chanmyeong Kim*, Yujin Chung, and Jangwoo Kim
    ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), Apr. 2021
  • FlexLearn: Fast and Highly Efficient Brain Simulations Using Flexible On-Chip Learning
    Eunjin Baek*, Hunjun Lee*, Youngsok Kim, and Jangwoo Kim
    IEEE/ACM International Symposium on Microarchitecture (MICRO), Oct. 2019
  • Flexon: a flexible digital neuron for efficient spiking neural network simulations
    Dayeol Lee*, Gwangmu Lee*, Dongup Kwon, Sunghwa Lee, Youngsok Kim, and Jangwoo Kim
    ACM/IEEE International Symposium on Computer Architecture (ISCA), June 2018

* Contributed equally