Sijie Cheng

Postgraduate

Fudan University

Biography

Sijie Cheng is a third-year postgraduate in the School of Computer Science at Fudan University, Shanghai, China, where she is advised by Prof. Yanghua Xiao in the Knowledge Works Lab. She has been working on analyzing and exploiting tacit knowledge inside foundation models, and currently devotes herself to research on world models.

What’s past is prologue. Download my résumé.

Interests
  • World Models
  • Foundation Models
  • Reasoning
Education
  • M.Sc. in Software Engineering, 2020–2023

    Fudan University

  • B.Sc. in Software Engineering, 2016–2020

    Chongqing University of Posts and Telecommunications

Research Experience

Natural Language Processing Group, Pujiang Lab
Research Intern
Mar 2022 – Present · Shanghai, China
Free-text explanation for reasoning, working with Prof. Lingpeng Kong.
Institute for AI Industry Research, Tsinghua University
Joint-Supervision
Jul 2021 – Feb 2022 · Beijing, China
Large models as continual knowledge bases, advised by Prof. Yang Liu.
Natural Language Understanding Group, Meituan NLP Center
Research Intern
Nov 2020 – Jun 2021 · Beijing, China
Taxonomy and relational knowledge extraction, working with Rui Xie; published at ICDE 2022 and ACL 2022.
Text Intelligence Lab, Westlake University
Visiting Fellow
Sep 2019 – Sep 2020 · Hangzhou, China
Analysis of commonsense knowledge in pre-trained language models, advised by Prof. Yue Zhang; published at ACL 2021.
DeeCamp, Peking University
Summer School
Jul 2018 – Aug 2018 · Beijing, China
Movie recommendation with knowledge graph, advised by Dr. Fuzheng Zhang.

Recent Publications

(2022). Can Pre-trained Language Models Interpret Similes as Smart as Human? In ACL 2022.

(2022). Learning What You Need from What You Did: Product Taxonomy Expansion with User Behaviors Supervision. In ICDE 2022.

(2022). Unsupervised Editing for Counterfactual Stories. In AAAI 2022.

(2021). On Commonsense Cues in BERT for Solving Commonsense Tasks. In ACL-IJCNLP 2021.

(2018). NEARM: Natural Language Enhanced Association Rules Mining. In ICDM Workshops 2018.