Zach Studdiford

researching language/cognition at UW-Madison

studdiford at wisc dot edu

I’m an undergraduate researcher at the University of Wisconsin-Madison working with Dr. Gary Lupyan (Lupyan Lab) and Dr. Tim Rogers (Knowledge and Concepts Lab). Broadly, I’m interested in what language models can tell us about human cognition, and conversely how we can build human-like representations and inductive biases into language models. What is it about learning language that supports the kinds of emergent abilities we see in LLMs? What does this tell us about the extent to which human cognitive abilities (causal reasoning, cognitive control) rely on language? How can the representations and circuits learned by LLMs inform our theories of human cognition? To make progress on these questions, I use recent advances in mechanistic interpretability and behavioral evaluations rooted in cognitive science.

Currently, I’m in my final year at Wisconsin completing my BS in Computer Science and Psychology. During my undergrad I’ve had the privilege of leading projects with Gary Lupyan and Tim Rogers that have been featured at ICML, EMNLP, and CogSci. I’ve also been fortunate to receive an NSF REU grant and a Hilldale Research Fellowship.

Outside of research, I teach jazz and classical piano, and play jazz piano at various restaurants and venues in Madison.

news

Oct 02, 2025 My latest work (currently under review), Uncovering the Computational Ingredients of Human-Like Representations in LLMs, is out on arXiv!
Jul 10, 2025 Contextual Effects in LLM and Human Causal Reasoning was just accepted to the ICML 2025 Workshop on Assessing World Models!
May 10, 2025 Evaluating Steering Techniques using Human Similarity Judgments is under review and out on arXiv!
Mar 10, 2025 My collaboration with Qiawen Liu, Evolution on the Lexical Workbench: Disentangling Frequency, Centrality, and Polysemy in Language Evolution, was accepted at CogSci 2025!

selected publications

  1. Contextual Effects in LLM and Human Causal Reasoning
    Zach Studdiford and Gary Lupyan
    In ICML 2025 Workshop on Assessing World Models, 2025
  2. Uncovering the Computational Ingredients of Human-Like Representations in LLMs
    Zach Studdiford, Timothy T. Rogers, Kushin Mukherjee, and 1 more author
    arXiv preprint arXiv:2510.01030, 2025
  3. Evaluating Steering Techniques using Human Similarity Judgments
    Zach Studdiford, Timothy T. Rogers, Siddharth Suresh, and 1 more author
    arXiv preprint arXiv:2505.19333, 2025