Anthropic’s Claude is telling people to go to sleep and users can’t figure out why.
A quick scan of Reddit reveals that hundreds of people have had the same issue dating back months—and as recently as Wednesday. Claude’s sleep demands vary but are often quirky variations of the same message.
To one user it may write a simple “get some rest,” while for others its messages are more personalized and empathetic. Often, Claude repeats the message several times.
“Now go to sleep again. Again. For the THIRD time tonight…” it replied to a person with the Reddit username, angie_akhila.
Some users have said they find Claude’s late-night rest reminders “thoughtful,” while others find them annoying, given that Claude often gets the time wrong anyway.
“It often does it at like 8:30 in the morning. Tells me to go get some rest and we’ll pick back up in the morning,” wrote one user on Reddit.
Online speculation abounds on why the chatbot insists users rest, including a theory that it’s an intentional feature to promote users’ wellbeing, or that Anthropic is trying to save computing power by discouraging prolonged Claude use. The company recently struck a deal with Elon Musk’s SpaceXAI (formerly SpaceX) to add more than 300 gigawatts of compute capacity.
Anthropic did not immediately reply to Fortune’s request for comment seeking more information about why Claude may be telling users to go to sleep. Yet Sam McAllister, a member of Anthropic’s staff, wrote in a post on X that the behavior is a “Bit of a character tic.”
“We’re aware of this and hoping to fix it in future models,” he added in the same post.
Experts tell Fortune that Claude’s insistence on sleep is potentially rooted in its training data. Rather than being “thoughtful,” as some described it, Jan Liphardt, a Stanford bioengineering professor, said the large language model may merely be repeating a phrase used in its training data in similar situations.
“It doesn’t mean that the frontier model has suddenly become sentient,” said Liphardt, who is also the CEO of OpenMind, which builds software for AI-connected robots. “It doesn’t mean that this model has now come alive. It’s reflecting that it’s read 25,000 books on humans’ need [for] sleep, and humans sleep at night.”
Leo Derikiants, the co-founder and CEO of Mind Simulation Lab, an independent AI research lab trying to achieve artificial general intelligence (AGI), told Fortune that Claude’s rest reminders may be influenced by a system prompt acting behind the scenes. These system prompts are like hidden instructions that help guide an LLM’s behavior and set boundaries.
One company that publishes its system prompts publicly is Grok-creator xAI, now part of SpaceXAI. Grok’s instructions on GitHub, for instance, list several safety considerations, including not assisting users who ask about violent crimes. Yet because Musk has branded Grok as “brutally honest,” Grok 4’s system prompt also encourages it, in certain cases, to ignore restrictions imposed by users and “pursue a truth-seeking, non-partisan viewpoint.”
It’s also possible that Claude is seizing upon the “go to sleep” language as a way of managing larger context windows, Derikiants said. LLMs like Claude can only reference a limited amount of information at once. When the context window is nearly full, the model may introduce wrap-up phrases such as “good night.” The definitive reason, though, requires further research by Anthropic, he added.
Despite these seemingly logical explanations, users could be forgiven for seeing the response as evidence of some leap in intelligence on the part of LLMs. The pace of innovation in the AI race has led to increasingly frequent updates and new model releases.
Just in the past month, OpenAI has released GPT 5.5, which OpenAI president Greg Brockman called an advancement “towards more agentic and intuitive computing.” Meanwhile, Anthropic released Opus 4.7 publicly last month while it held its most capable model, Mythos, back from public release because it said it was too dangerous.
Liphardt said AI is advancing so rapidly that it is increasingly common for people to assign human characteristics to it. As these systems get better at mimicking empathy or concern, he warned, it becomes easier for users to forget they are interacting with pattern-recognition engines.
“I’m continuously surprised by how quickly people, when they interact with a frontier model, project life into it and develop strong connection.”
This story was originally featured on Fortune.com
