POSTPONED: The talk by Joe Stacey (University of Sheffield) will take place in February 2026.

The second talk in the Vienna Circuits series has been postponed to February 2026 due to illness. The exact date will be announced as soon as possible.

Topic: Improving the robustness of fine-tuned LLMs

Abstract
In this talk, Joe will discuss how to improve the robustness of fine-tuned models, focusing on the task of Natural Language Inference (NLI). He will present several strategies for improving robustness and explain why debiasing methods are often not the most effective approach. He will then examine the trade-off between in-distribution and out-of-distribution performance when fine-tuning LLMs, showing how strategically selecting the training data can improve out-of-distribution performance. Finally, he will discuss open questions and possible future directions for understanding and improving model robustness.

Speaker
Joe Stacey is a postdoctoral researcher at the University of Sheffield, working on uncertainty quantification under the supervision of Nafise Moosavi, Benjamin Heinzerling, and Kentaro Inui. He is a former Apple AI/ML Scholar and completed his PhD at Imperial College London under the supervision of Marek Rei, focusing on improving the robustness and interpretability of NLI models. Before his PhD, Joe worked as a strategy consultant and as a mathematics teacher in a challenging school in Birmingham.

If you would like to be updated about future talks in the Vienna Circuits series, you can subscribe to the series' mailing list.