1. The Facts
The document, which circulated privately among AI researchers before appearing on a public repository last week, describes a series of internal experiments conducted by Anthropic researchers over 18 months. The experiments were designed to probe what the company calls "experience-adjacent behaviors" in its Claude models.
The findings are preliminary, and the document itself is careful to hedge. But the fact that a frontier AI lab has been systematically studying whether its systems might have something like subjective experience is significant regardless of what the results show.
"The question isn't whether Claude is conscious," said Dr. Thomas Okafor, a philosopher of mind at Oxford who reviewed the document at The Variance's request. "The question is whether we have the conceptual apparatus to even ask that question meaningfully, and whether we should be asking it at all before we've settled some basic governance questions."
Anthropic confirmed in a statement that the research program exists and described it as part of its broader safety work. The company declined to share results or methodology.