# Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts

Rishov Sarkar<sup>1</sup>, Hanxue Liang<sup>2</sup>, Zhiwen Fan<sup>2</sup>, Zhangyang Wang<sup>2</sup>, Cong Hao<sup>1</sup>

<sup>1</sup>School of Electrical and Computer Engineering, Georgia Institute of Technology
<sup>2</sup>School of Electrical and Computer Engineering, University of Texas at Austin

ICCAD 2023 paper

## Overview

*Edge-MoE overall architecture*

This is Edge-MoE, the first end-to-end FPGA accelerator for multi-task ViT with a rich collection of architectural innovations.