On Formulating and Evaluating Language Agents
Language agents are emerging AI systems that use large language models (LLMs) to interact with the world. While many methods and demos have been developed, it is often hard to understand or evaluate them systematically. In this talk, we present Cognitive Architectures for Language Agents (CoALA), a theoretical framework grounded in classical research on cognitive architectures. We show how CoALA simplifies the understanding of existing agents and provides actionable insights for future agent development.
We also present three benchmarks (WebShop, InterCode, SWE-bench) for developing and evaluating language agents in web, programming, and GitHub repository environments. Notably, all three are scalable, practical, and challenging for current LLMs and language agents, with simple and faithful evaluation metrics that do not rely on human or LLM scoring.
Shunyu Yao is a final-year PhD student working with Karthik Narasimhan at the Princeton NLP Group. His research focuses on language agents and is supported by the Harold W. Dodds Fellowship from Princeton. Homepage: https://ysymyth.github.io/
Time: 14.00 – 15.30
Date: Tuesday 7 November