Author

Yiming Cui

Research Scientist, Harbin Institute of Technology & iFLYTEK Research · Cited by 3,104 · Research interests: Machine Reading Comprehension, Question Answering, Pre-trained Language Models, Explainable AI, Natural Language Processing

Biography

Yiming Cui is a Research Scientist at Harbin Institute of Technology and iFLYTEK Research. His research interests include machine reading comprehension, question answering, pre-trained language models, explainable AI, and natural language processing.
Publications
Pre-Training with Whole Word Masking for Chinese BERT
Y Cui, W Che, T Liu, B Qin, Z Yang. IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP) 29, …, 2021.
Cited by 865 · 2021
Attention-over-Attention Neural Networks for Reading Comprehension
Y Cui, Z Chen, S Wei, S Wang, T Liu, G Hu. ACL 2017, 593–602, 2017.
Cited by 496 · 2017
Revisiting Pre-Trained Models for Chinese Natural Language Processing
Y Cui, W Che, T Liu, B Qin, S Wang, G Hu. Findings of EMNLP 2020, 657–668, 2020.
Cited by 439 · 2020
CLUE: A Chinese Language Understanding Evaluation Benchmark
L Xu, X Zhang, L Li, H Hu, C Cao, W Liu, J Li, Y Li, K Sun, Y Xu, Y Cui, et al. COLING 2020, 4762–4772, 2020.
Cited by 208 · 2020
A Span-Extraction Dataset for Chinese Machine Reading Comprehension
Y Cui, T Liu, L Xiao, Z Chen, W Ma, W Che, S Wang, G Hu. EMNLP-IJCNLP 2019, 593–602, 2018.
Cited by 155 · 2018
Consensus Attention-based Neural Networks for Chinese Reading Comprehension
Y Cui, T Liu, Z Chen, S Wang, G Hu. COLING 2016, 1777–1786, 2016.
Cited by 106 · 2016
Exploiting Persona Information for Diverse Generation of Conversational Responses
H Song, WN Zhang, Y Cui, D Wang, T Liu. IJCAI-19, 5190–5196, 2019.
Cited by 101 · 2019
Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting
S Chen, Y Hou, Y Cui, W Che, T Liu, X Yu. EMNLP 2020, 7870–7881, 2020.
Cited by 88 · 2020
CharBERT: Character-aware Pre-trained Language Model
W Ma, Y Cui, C Si, T Liu, S Wang, G Hu. COLING 2020, 39–50, 2020.
Cited by 71 · 2020
Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution
T Liu, Y Cui, Q Yin, S Wang, W Zhang, G Hu. ACL 2017, 102–111, 2017.
Cited by 57 · 2017
CJRC: A Reliable Human-Annotated Benchmark DataSet for Chinese Judicial Reading Comprehension
X Duan, B Wang, Z Wang, W Ma, Y Cui, D Wu, S Wang, T Liu, T Huo, Z Hu, et al. CCL 2019, 439–451, 2019.
Cited by 52 · 2019
Cross-Lingual Machine Reading Comprehension
Y Cui, W Che, T Liu, B Qin, S Wang, G Hu. EMNLP-IJCNLP 2019, 1586–1595, 2019.
Cited by 50 · 2019
Is Graph Structure Necessary for Multi-hop Question Answering?
N Shao, Y Cui, T Liu, S Wang, G Hu. EMNLP 2020, 7187–7192, 2020.
Cited by 49 · 2020
Context-Sensitive Generation of Open-Domain Conversational Responses
W Zhang, Y Cui, Y Wang, Q Zhu, L Li, L Zhou, T Liu. COLING 2018, 2437–24, 2018.
Cited by 47 · 2018
LSTM Neural Reordering Feature for Statistical Machine Translation
Y Cui, S Wang, J Li. NAACL 2016, 977–982, 2016.
Cited by 40 · 2016
TextBrewer: An Open-Source Knowledge Distillation Toolkit for Natural Language Processing
Z Yang, Y Cui, Z Chen, W Che, T Liu, S Wang, G Hu. ACL 2020: System Demonstrations, 9–16, 2020.
Cited by 34 · 2020
Convolutional Spatial Attention Model for Reading Comprehension with Multiple-Choice Questions
Z Chen, Y Cui, W Ma, S Wang, G Hu. AAAI-19, 6276–6283, 2018.
Cited by 29 · 2018
Dataset for the First Evaluation on Chinese Machine Reading Comprehension
Y Cui, T Liu, Z Chen, W Ma, S Wang, G Hu. LREC 2018, 2721–27, 2017.
Cited by 25 · 2017
Benchmarking Robustness of Machine Reading Comprehension Models
C Si, Z Yang, Y Cui, W Ma, T Liu, S Wang. Findings of ACL-IJCNLP 2021, 634–644, 2020.
Cited by 24 · 2020
PERT: Pre-training BERT with Permuted Language Model
Y Cui, Z Yang, T Liu. arXiv preprint arXiv:2203.06906, 2022.
Cited by 22 · 2022