I-X Research Talk: Can LLMs Achieve Causal Reasoning and Cooperation? with Dr Zhijing Jin

Key Details:

Time: 11.00 – 12.00
Date: Friday, 11 July
Location: In-person & On-line (via MS Teams)
I-X LRT608A | Level 6 |  Translation and Innovation Hub (I-HUB)

Imperial White City Campus
84 Wood Lane
W12 0BZ

Speaker

Zhijing Jin

Zhijing Jin (she/her) is an Assistant Professor at the University of Toronto. Her research focuses on large language models, causal reasoning, and AI safety in multi-agent LLMs. She has received three Rising Star awards, two Best Paper awards at NeurIPS 2024 Workshops, two PhD fellowships, and a postdoctoral fellowship. She has authored over 80 papers, many of which appear at top AI conferences (e.g., ACL, EMNLP, NAACL, NeurIPS, ICLR, AAAI), and her work has been featured in CHIP Magazine, WIRED, and MIT News. She co-organizes many workshops (e.g., several NLP for Positive Impact Workshops at ACL and EMNLP, and the Causal Representation Learning Workshop at NeurIPS 2024), and led the Tutorial on Causality for LLMs at NeurIPS 2024 and the Tutorial on CausalNLP at EMNLP 2022. To support diversity, she organizes the ACL Year-Round Mentorship. More information can be found on her personal website: zhijing-jin.com

Talk Title

Can LLMs Achieve Causal Reasoning and Cooperation?

Talk Summary

Causal reasoning is a cornerstone of human intelligence and a critical capability for artificial systems aiming to achieve advanced understanding and decision-making. While large language models (LLMs) excel on many tasks, a key question remains: How can these models reason better about causality? Causal questions that humans can pose span a wide range of fields, from Newton’s fundamental question, “Why do apples fall?” which LLMs can now retrieve from standard textbook knowledge, to complex inquiries such as, “What are the causal effects of minimum wage introduction?”—a topic recognized with the 2021 Nobel Prize in Economics. My research focuses on automating causal reasoning across all types of questions. To achieve this, I explore the causal reasoning capabilities that have emerged in state-of-the-art LLMs, and enhance their ability to perform causal inference by guiding them through structured, formal steps. Further, I also introduce how causality of individual behavior can link to group outcomes, and cover findings in our multi-agent simulacra work about whether LLMs learn to cooperate. Finally, I will outline a future research agenda for building the next generation of LLMs capable of scientific-level causal reasoning.