Special Focus on Near-memory and In-memory Computing
REVIEW (Cited in SCI: 14)

Graph processing and machine learning architectures with emerging memory technologies: a survey
Qian, Xuehai
Sci China Inf Sci, 2021, 64(6): 160401
Keywords: graph processing; machine learning acceleration; ReRAM; HMC; HBM
Cite as: Qian X H. Graph processing and machine learning architectures with emerging memory technologies: a survey. Sci China Inf Sci, 2021, 64(6): 160401, doi: 10.1007/s11432-020-3219-6

REVIEW (Cited in SCI: 23)

A survey of in-spin transfer torque MRAM computing
Cai, Hao; Liu, Bo; Chen, Juntong; Naviner, Lirida; Zhou, Yongliang; Wang, Zhen; Yang, Jun
Sci China Inf Sci, 2021, 64(6): 160402
Keywords: spin-transfer torque magnetoresistive random access memory; in-memory computing; magnetic tunnel junction; analog computing; nonvolatile memory; Boolean logic; neural network
Cite as: Cai H, Liu B, Chen J T, et al. A survey of in-spin transfer torque MRAM computing. Sci China Inf Sci, 2021, 64(6): 160402, doi: 10.1007/s11432-021-3220-0

REVIEW (Cited in SCI: 11)

Energy-efficient computing-in-memory architecture for AI processor: device, circuit, architecture perspective
Chang, Liang; Li, Chenglong; Zhang, Zhaomin; Xiao, Jianbiao; Liu, Qingsong; Zhu, Zhen; Li, Weihang; Zhu, Zixuan; Yang, Siqi; Zhou, Jun
Sci China Inf Sci, 2021, 64(6): 160403
Keywords: energy efficiency; computing-in-memory; non-volatile memory; test demonstrators; AI processor
Cite as: Chang L, Li C L, Zhang Z M, et al. Energy-efficient computing-in-memory architecture for AI processor: device, circuit, architecture perspective. Sci China Inf Sci, 2021, 64(6): 160403, doi: 10.1007/s11432-021-3234-0

PROGRESS (Cited in SCI: 82)

Breaking the von Neumann bottleneck: architecture-level processing-in-memory technology
Zou, Xingqi; Xu, Sheng; Chen, Xiaoming; Yan, Liang; Han, Yinhe
Sci China Inf Sci, 2021, 64(6): 160404
Keywords: processing-in-memory (PIM); von Neumann bottleneck; memory wall; PIM simulator; architecture-level PIM
Cite as: Zou X Q, Xu S, Chen X M, et al. Breaking the von Neumann bottleneck: architecture-level processing-in-memory technology. Sci China Inf Sci, 2021, 64(6): 160404, doi: 10.1007/s11432-020-3227-1

RESEARCH PAPER (Cited in SCI: 5)

Neural connectivity inference with spike-timing dependent plasticity network
Moon, John; Wu, Yuting; Zhu, Xiaojian; Lu, Wei D.
Sci China Inf Sci, 2021, 64(6): 160405
Keywords: spike-timing dependent plasticity; neural connectivity; memristor; online learning; second-order memristor
Cite as: Moon J, Wu Y T, Zhu X J, et al. Neural connectivity inference with spike-timing dependent plasticity network. Sci China Inf Sci, 2021, 64(6): 160405, doi: 10.1007/s11432-021-3217-0

RESEARCH PAPER (Cited in SCI: 14)

Array-level boosting method with spatial extended allocation to improve the accuracy of memristor based computing-in-memory chips
Zhang, Wenqiang; Gao, Bin; Yao, Peng; Tang, Jianshi; Qian, He; Wu, Huaqiang
Sci China Inf Sci, 2021, 64(6): 160406
Keywords: memristor; computing-in-memory; array-level boosting; neuromorphic computing; RRAM
Cite as: Zhang W Q, Gao B, Yao P, et al. Array-level boosting method with spatial extended allocation to improve the accuracy of memristor based computing-in-memory chips. Sci China Inf Sci, 2021, 64(6): 160406, doi: 10.1007/s11432-020-3198-9

RESEARCH PAPER (Cited in SCI: 15)

NAS4RRAM: neural network architecture search for inference on RRAM-based accelerators
Yuan, Zhihang; Liu, Jingze; Li, Xingchen; Yan, Longhao; Chen, Haoxiang; Wu, Bingzhe; Yang, Yuchao; Sun, Guangyu
Sci China Inf Sci, 2021, 64(6): 160407
Keywords: network architecture search (NAS); neural networks; RRAM-based accelerator; hardware noise; quantization
Cite as: Yuan Z H, Liu J Z, Li X C, et al. NAS4RRAM: neural network architecture search for inference on RRAM-based accelerators. Sci China Inf Sci, 2021, 64(6): 160407, doi: 10.1007/s11432-020-3245-7

RESEARCH PAPER (Cited in SCI: 8)

Bayesian neural network enhancing reliability against conductance drift for memristor neural networks
Zhou, Yue; Hu, Xiaofang; Wang, Lidan; Duan, Shukai
Sci China Inf Sci, 2021, 64(6): 160408
Keywords: conductance drift; neuromorphic computing; Bayesian neural network; memristor crossbar array; network reliability
Cite as: Zhou Y, Hu X F, Wang L D, et al. Bayesian neural network enhancing reliability against conductance drift for memristor neural networks. Sci China Inf Sci, 2021, 64(6): 160408, doi: 10.1007/s11432-020-3204-y

RESEARCH PAPER (Cited in SCI: 11)

Towards efficient allocation of graph convolutional networks on hybrid computation-in-memory architecture
Chen, Jiaxian; Lin, Guanquan; Chen, Jiexin; Wang, Yi
Sci China Inf Sci, 2021, 64(6): 160409
Keywords: computation-in-memory; graph convolutional networks; hybrid architecture; scheduling; inference; accelerator
Cite as: Chen J X, Lin G Q, Chen J X, et al. Towards efficient allocation of graph convolutional networks on hybrid computation-in-memory architecture. Sci China Inf Sci, 2021, 64(6): 160409, doi: 10.1007/s11432-020-3248-y