About me

Welcome to my homepage! I am Duanyu Feng (冯端宇), a Ph.D. student at the School of Computer Science, Sichuan University, under the supervision of Prof. Wenqiang Lei.

My research delves into the theoretical foundations of deep learning and machine learning. I also explore how tools from numerical optimization and natural language processing can be used to build more reliable and robust scientific algorithms for addressing social issues.

Specifically, my research spans the following topics:

  • Data Synthesis and Data Mining
  • Alignment of Large Language Models
  • Applications of Large Language Models in Legal, Medical, and Financial Industries

Bio

Sep. 2023 - Present: Ph.D. at the School of Computer Science, Sichuan University, supervised by Prof. Wenqiang Lei.

Sep. 2020 - Jun. 2023: M.S. at the School of Mathematics, Sichuan University, supervised by Prof. Bing Hu (with Prof. Hao Wang and Prof. Shiquan Zhang).

Sep. 2016 - Jun. 2020: B.S. at the School of Mathematics, Sichuan University, supervised by Prof. Hao Wang.

Selected Publications

Year 2025

Method/Theory for LLM

  1. (First author) Duanyu Feng, Bowen Qin, Chen Huang, Youcheng Huang, Zheng Zhang, Wenqiang Lei. Legend: Leveraging Representation Engineering to Annotate Safety Margin for Preference Datasets. AAAI 2025 (CCF-A)

  2. Youcheng Huang, Chen Huang, Duanyu Feng, Wenqiang Lei, Jiancheng Lv. Cross-model Transferability among Large Language Models on the Platonic Representations of Concepts. ACL 2025 (Long Paper, CCF-A) [stay tuned]

LLM for Application

  1. (Under my guidance) Yongfu Dai, Duanyu Feng, Jimin Huang, Haochen Jia, Qianqian Xie, Yifang Zhang, Weiguang Han, Wei Tian, Hao Wang. LAiW: A Chinese Legal Large Language Models Benchmark. COLING 2025 (Long Paper, CCF-B)

  2. (Co-first authors) Ruoli Gan*, Duanyu Feng*, Chen Zhang, Zhihang Lin, Haochen Jia, Hao Wang, Zhenyang Cai, Lei Cui, Qianqian Xie, Jimin Huang, Benyou Wang. UCL-Bench: A Chinese User-Centric Legal Benchmark for Large Language Models. NAACL 2025 (Findings, Long Paper, CCF-B)

Year 2024

Method/Theory for LLM

  1. (First author) Duanyu Feng, Bowen Qin, Chen Huang, Zheng Zhang, Wenqiang Lei. Towards Analyzing and Understanding the Limitations of DPO: A Theoretical Perspective. arXiv preprint
    (Although this paper has not been officially published, I think it is the most interesting work I have done this year.)

  2. (Under my guidance) Yuxin Wang, Duanyu Feng, Yongfu Dai, Zhengyu Chen, Jimin Huang, Sophia Ananiadou, Qianqian Xie, Hao Wang. HARMONIC: Harnessing LLMs for Tabular Data Synthesis and Privacy Protection. NeurIPS 2024 (CCF-A)

LLM for Application

  1. Xiao Zhang, Ruoyu Xiang, Chenhan Yuan, Duanyu Feng, Weiguang Han, Alejandro Lopez-Lira, Xiao-Yang Liu, Meikang Qiu, Sophia Ananiadou, Min Peng, Jimin Huang, Qianqian Xie. Dólares or Dollars? Unraveling the Bilingual Prowess of Financial LLMs Between Spanish and English. KDD 2024 (CCF-A)

  2. Qianqian Xie, Weiguang Han, Zhengyu Chen, Ruoyu Xiang, Xiao Zhang, Yueru He, Mengxi Xiao, Dong Li, Yongfu Dai, Duanyu Feng, Yijing Xu, Haoqiang Kang, Ziyan Kuang, Chenhan Yuan, Kailai Yang, Zheheng Luo, Tianlin Zhang, Zhiwei Liu, Guojun Xiong, Zhiyang Deng, Yuechen Jiang, Zhiyuan Yao, Haohang Li, Yangyang Yu, Gang Hu, Jiajia Huang, Xiao-Yang Liu, Alejandro Lopez-Lira, Benyou Wang, Yanzhao Lai, Hao Wang, Min Peng, Sophia Ananiadou, Jimin Huang. FinBen: An Holistic Financial Benchmark for Large Language Models. NeurIPS 2024 (CCF-A)

Professional Experience

Year 2025

  1. Exchange Student: Cancer Science Institute of Singapore, National University of Singapore.
    Worked on module construction for RNA large language models, under the supervision of Prof. Yang Li.

Year 2024

  1. IJCAI 2024, FinNLP-AgentScen Workshop, organizer of the Financial Challenges in Large Language Models shared task.
  2. COLING 2025, The joint workshop of FinNLP, FNP, and LLMFinLegal, program committee member.

Year 2023

  1. Internship: Data Research Group, Beijing Academy of Artificial Intelligence.
    Worked on data analysis for the alignment of large language models, under the supervision of Dr. Zheng Zhang.
