Special Topic: AI Chips and Systems for Large Language Models
REVIEW

Review of chiplet-based design: system architecture and interconnection
Liu, Yafei; Li, Xiangyu; Yin, Shouyi
Sci China Inf Sci, 2024, 67(10): 200401
Keywords: chiplet-based design; package; architecture; interconnection; silicon interposer
Cite as: Liu Y F, Li X Y, Yin S Y. Review of chiplet-based design: system architecture and interconnection. Sci China Inf Sci, 2024, 67(10): 200401, doi: 10.1007/s11432-023-3926-8

POSITION PAPER

Large circuit models: opportunities and challenges
Chen, Lei; Chen, Yiqi; Chu, Zhufei; Fang, Wenji; Ho, Tsung-Yi; Huang, Ru; Huang, Yu; Khan, Sadaf; Li, Min; Li, Xingquan; Li, Yu; Liang, Yun; Liu, Jinwei; Liu, Yi; Lin, Yibo; Luo, Guojie; Pan, Hongyang; Shi, Zhengyuan; Sun, Guangyu; Tsaras, Dimitrios; Wang, Runsheng; Wang, Ziyi; Wei, Xinming; Xie, Zhiyao; Xu, Qiang; Xue, Chenhao; Yan, Junchi; Yang, Jun; Yu, Bei; Yuan, Mingxuan; Young, Evangeline F. Y.; Zeng, Xuan; Zhang, Haoyi; Zhang, Zuodong; Zhao, Yuxiang; Zhen, Hui-Ling; Zheng, Ziyang; Zhu, Binwu; Zhu, Keren; Zou, Sunan
Sci China Inf Sci, 2024, 67(10): 200402
Keywords: AI-rooted EDA; large circuit models; LCMs; multimodal circuit representation learning; circuit optimization
Cite as: Chen L, Chen Y Q, Chu Z F, et al. Large circuit models: opportunities and challenges. Sci China Inf Sci, 2024, 67(10): 200402, doi: 10.1007/s11432-024-4155-7

RESEARCH PAPER

TSCompiler: efficient compilation framework for dynamic-shape models
Luo, Xiang; Zhang, Chen; Geng, Chenbo; Yi, Yanzhi; Hu, Jiahui; Zhang, Renwei; Zhang, Zhen; Consolaro, Gianpietro; Yang, Fan; Lu, Tun; Gu, Ning; Shang, Li
Sci China Inf Sci, 2024, 67(10): 200403
Keywords: machine learning; tensor compilers; dynamic shape; operator fusion; code generation; auto-tuning
Cite as: Luo X, Zhang C, Geng C B, et al. TSCompiler: efficient compilation framework for dynamic-shape models. Sci China Inf Sci, 2024, 67(10): 200403, doi: 10.1007/s11432-024-4071-6

RESEARCH PAPER

Hardware-oriented algorithms for softmax and layer normalization of large language models
Li W J, Lyu D X, Wang G, et al.
Sci China Inf Sci, 2024, 67(10): 200404
Keywords: large language model; softmax; layer normalization; hardware architecture; Transformer
Cite as: Li W J, Lyu D X, Wang G, et al. Hardware-oriented algorithms for softmax and layer normalization of large language models. Sci China Inf Sci, 2024, 67(10): 200404, doi: 10.1007/s11432-024-4137-4

RESEARCH PAPER

CMN: a co-designed neural architecture search for efficient computing-in-memory-based mixture-of-experts
Han, Shihao; Liu, Sishuo; Du, Shucheng; Li, Mingzi; Ye, Zijian; Xu, Xiaoxin; Li, Yi; Wang, Zhongrui; Shang, Dashan
Sci China Inf Sci, 2024, 67(10): 200405
Keywords: mixture-of-experts; computing-in-memory; neural architecture search; resistive random-access memory; static random-access memory
Cite as: Han S H, Liu S S, Du S C, et al. CMN: a co-designed neural architecture search for efficient computing-in-memory-based mixture-of-experts. Sci China Inf Sci, 2024, 67(10): 200405, doi: 10.1007/s11432-024-4144-y

RESEARCH PAPER

SpikingMiniLM: energy-efficient spiking transformer for natural language understanding
Zhang, Jiayu; Shen, Jiangrong; Wang, Zeke; Guo, Qinghai; Yan, Rui; Pan, Gang; Tang, Huajin
Sci China Inf Sci, 2024, 67(10): 200406
Keywords: spiking neural networks; natural language understanding; spiking Transformer; spike-based attention; multi-step encoding; ANN-to-SNN distillation
Cite as: Zhang J Y, Shen J R, Wang Z K, et al. SpikingMiniLM: energy-efficient spiking transformer for natural language understanding. Sci China Inf Sci, 2024, 67(10): 200406, doi: 10.1007/s11432-024-4101-6