About me

I am a cognitive scientist and AI safety researcher. Through my work, I want to help ensure that the AI systems we build are robustly aligned with human interests.

Currently, I am a visiting fellow at Constellation, where I investigate introspection in Large Language Models with Owain Evans.

My PhD work investigates agent-environment interactions during planning. Some of the things that we do in the world (such as rearranging things, feeling how heavy something is, or looking at a problem from different angles) make it easier for us to find solutions to difficult planning problems. How can we understand this in computational terms?
My approach is best described as computational cognitive science: trying to discover the high-level algorithms of cognition using agent-based simulations, computational models, and behavioral experiments.

In one project, I am exploring how the visual structure of the environment can guide planning. I also think about the models underlying physical understanding in humans and machines (and where they differ).

I'm a fifth (and final) year PhD student in the Department of Cognitive Science at UC San Diego and a visiting scholar at Stanford University. I work with Judith Fan (Stanford), David Kirsh (UCSD), and Marcelo Mattar (NYU).

I also work as a VJ and visual artist—find my artistic work at vj.felixbinder.net.

Find my resume and CV here.


Looking Inward: Language Models Can Learn About Themselves by Introspection

Are LLMs capable of introspection, i.e., privileged access to their own inner states? Can they use this access to report facts about themselves that are not in their training data? We find that they can, at least on simple tasks. We also discuss potential implications of introspection for interpretability and for the moral status of AIs. More …

Towards a Steganography Evaluation Protocol

Large Language Models, by default, think out in the open: they have no inner memory, so all information has to be output as text. Can they hide information in that text such that a human observer cannot detect it? Here, I propose a way of detecting whether models hide the results of intermediate reasoning steps in order to answer questions more accurately. More …