Alumni Insights: Shaping a Responsible Future with AI for Good
12/06/2025
18:30 - 20:45
How can we ensure that artificial intelligence doesn’t just disrupt the world, but improves it?
In this presentation, Leandro will share several accomplishments of the BigCode project, an open-scientific collaboration working on the responsible development and use of LLMs for code generation.
Language agents are emerging AI systems that use large language models (LLMs) to interact with the world. While various methods and demos have been developed, it is often hard to systematically understand or evaluate them.
For both humans and machines, the essence of learning is to pinpoint which components in their information-processing pipeline are responsible for an error in the output – a challenge known as credit assignment. It has long been assumed that credit assignment is best solved by backpropagation, which is also the foundation of modern machine learning.
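To make the idea concrete, here is a minimal, hypothetical sketch of credit assignment via backpropagation: a two-parameter pipeline y = w2 * (w1 * x) with squared-error loss, where the chain rule attributes the output error back to each weight. The function names and values are illustrative, not from the talk.

```python
# Two-parameter "pipeline": y = w2 * (w1 * x), loss L = (y - t)^2.
# Backpropagation applies the chain rule to assign each weight its
# share of the credit (blame) for the output error.

def forward(x, w1, w2):
    h = w1 * x        # intermediate activation
    y = w2 * h        # output
    return h, y

def backward(x, t, w1, w2):
    h, y = forward(x, w1, w2)
    dL_dy = 2 * (y - t)    # error signal at the output
    dL_dw2 = dL_dy * h     # credit assigned to w2
    dL_dh = dL_dy * w2     # error propagated one step backwards
    dL_dw1 = dL_dh * x     # credit assigned to w1
    return dL_dw1, dL_dw2

g1, g2 = backward(x=1.0, t=0.0, w1=2.0, w2=3.0)
print(g1, g2)  # gradients: 36.0 for w1, 24.0 for w2
```

The same mechanism scales to deep networks: each layer passes the error signal backwards and uses it to compute its own parameter gradients.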
Symmetries play a paramount role in the mathematical sciences. For example, they describe the crystal structure of atoms in materials, and understanding symmetries often goes a long way towards the solution of a problem.
Attention-based neural network sequence models such as transformers have the capacity to act as supervised learning algorithms: They can take as input a sequence of labeled examples and output predictions for unlabeled test examples.
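As an illustration of this input format, the sketch below shows a sequence of labeled (x, y) pairs followed by an unlabeled query. A simple least-squares fit over the context stands in for what a trained sequence model would compute; the function and data here are hypothetical, not the talk's actual model.

```python
# In-context supervised learning, schematically: the "prompt" is a list
# of labeled examples plus a query input. A trained transformer would
# map this sequence to a prediction; here an ordinary least-squares fit
# over the context plays that role.

def in_context_predict(context, query_x):
    """context: list of (x, y) example pairs; returns a prediction for query_x."""
    n = len(context)
    mean_x = sum(x for x, _ in context) / n
    mean_y = sum(y for _, y in context) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in context)
    var = sum((x - mean_x) ** 2 for x, _ in context)
    slope = cov / var
    return mean_y + slope * (query_x - mean_x)

# Labeled examples in the sequence, then an unlabeled test input.
context = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # generated by y = 2x + 1
print(in_context_predict(context, 3.0))  # linear fit predicts 7.0
```

The point of the analogy is that the mapping from (examples, query) to prediction is itself a learning algorithm, which a sequence model can in principle implement internally.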
Battery companies want to know the relationship between their manufacturing parameters and the performance of the resulting cells, so that they can optimise their products for particular applications, reduce costs, and improve yield. The literature contains many examples of physics-based models of the various manufacturing processes (including mixing, coating, drying and calendering), but these systems are hugely complex, and as a result they are expensive to simulate and hard to validate.