At the link above, we report some developing work from the Anthropic Interpretability team on Crosscoder Model Diffing, which might be of interest to researchers working actively in this space.
As ever, we'd ask readers to treat these results like those of a colleague sharing some thoughts or preliminary experiments for a few minutes at a lab meeting, rather than a mature paper.