Andy Shapiro

February 1, 2026

Heartbeats, Memory, and Soul

If you've been anywhere near AI social media in the last few weeks, you've likely encountered people either raving about or panicking over OpenClaw.

I've been working with it for just over a week now, and after fielding questions from a few people, I thought it would be good to share some thoughts and practical guidance. Things are moving quickly in this space.

I'll organize things into two posts that I plan to update as I go: first, a quick overview of OpenClaw as a system and concept, and second, my current setup (which is changing daily as I iterate).

For those unfamiliar with OpenClaw, here's a quick introduction. OpenClaw launched recently under the name Clawdbot. A trademark, um, request from Anthropic, seeking to avoid confusion with its Claude-branded products, prompted the developer, Peter Steinberger, to rename the project to Moltbot. A few days later, the name settled into what appears to be its final form: OpenClaw. If you've seen all three names flying around over a dizzying few weeks, rest assured they all refer to the same system.

So what is it? In essence, OpenClaw is a missing piece that connects AI workflows through messaging platforms of your choice, all built on a locally maintained codebase with unified memory. In practice, this means you can use Slack, Telegram, WhatsApp, or similar services to interact with a range of AI models (local or API-based), switch between them freely, and share persistent memory across all your sessions. The system also excels at autonomous and agentic tasks, with robust expandability through installable skills.

Because it runs locally on your machine, it a) can have access to everything on your computer and b) can run persistently, all the time. The promise and the peril of that should be evident right away. A lot has been written about the valid security concerns, so proceed with caution and follow established best practices. In my configuration update, I'll share some of the security precautions I'm implementing.

But once you come to terms with those considerations and start using the tools, there is no denying the power of actually owning your own AI environment. This is where OpenClaw, new and awkwardly adolescent as it might be, portends what could become a shift in our relationship with AI.

On paper, certain things about OpenClaw could feel somewhat derivative. But it's the combined framework, alongside some simple but inspired contributions, that makes it stand out as something more than the sum of its parts.

There are a bunch of killer features, but three vital components combine with the third-party messaging integration to create an entirely new kind of experience.

The first of these is the heartbeat, a markdown file (HEARTBEAT.md) containing instructions for what the agent should check at regular intervals. By default this runs every 30 minutes, allowing OpenClaw to work proactively: monitoring conditions, surfacing information, and sending you updates on a schedule you define.
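To make that concrete, here's what a minimal HEARTBEAT.md might look like. The specific checks below are my own invention for illustration, not defaults that ship with OpenClaw:

```markdown
<!-- HEARTBEAT.md — hypothetical example; the checks are whatever you want -->
# Heartbeat checks
- Check my inbox for anything flagged urgent and summarize it.
- If a GitHub issue I'm watching got a new comment, ping me on Slack.
- Around 5pm, send a short recap of what changed today.
```

Because it's just markdown read on a schedule, editing the file is all it takes to change what your agent does proactively.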

The second is the memory system. OpenClaw uses a two-layer approach, both stored as markdown files in your local workspace. The first layer is MEMORY.md, which holds durable information like your preferences, decisions, and important facts that persist indefinitely. Think of it as core knowledge about you. The second layer consists of daily logs (stored as memory/YYYY-MM-DD.md files) where the agent appends running context and notes throughout the day. When a session starts, it automatically reads today's and yesterday's logs so it has recent context ready to go.
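That "read today's and yesterday's logs" step is simple enough to sketch. The function below is a hypothetical illustration of the idea, not OpenClaw's actual code; the `memory/` layout matches what's described above, but the function name and signature are mine:

```python
from datetime import date, timedelta
from pathlib import Path

# Hypothetical sketch of the recent-context load described above:
# concatenate yesterday's and today's daily logs from a memory/ directory.
# Names here are illustrative, not taken from OpenClaw's source.

def load_recent_logs(workspace: Path, today: date) -> str:
    """Join the last two daily logs, skipping any that don't exist yet."""
    chunks = []
    for day in (today - timedelta(days=1), today):
        log = workspace / "memory" / f"{day.isoformat()}.md"
        if log.exists():
            chunks.append(log.read_text())
    return "\n\n".join(chunks)
```

The nice property of this design is that "memory" is just files on disk: you can read, edit, or delete any of it with a text editor, and the agent picks up your changes on the next session.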

The system is write-on-request, meaning if you want the agent to remember something, you tell it to. It won't automatically store everything. And when a session gets long enough to approach the context limit, OpenClaw triggers a silent turn that prompts the model to write anything important to memory before compacting. There's also optional semantic search that lets the agent find related notes even when the wording differs, which becomes increasingly useful as your memory files grow.

What this means practically is that you're not starting from scratch every time. The agent knows who you are, what you've been working on, and what you care about. Combined with the heartbeat, it creates a sense of continuity that most AI tools lack entirely.

The fact that these components work seamlessly together and are based on simple markdown documents means you have complete control over the memory that informs all your interactions. In an era where cloud LLMs store more and more about us, the opportunity to completely own your data feels increasingly significant. Of course, if you have OpenClaw wired up to LLMs via API, you're still sending data, but this is definitely a strong step in the right direction. And with the right local model setup, there are ways to truly make this a collection of your own private agents.

Then there's the personality layer. OpenClaw uses a file called SOUL.md to define who your agent actually is. The official docs describe it as the agent's "conscience," the values and traits that guide its behavior regardless of context. The default template is surprisingly opinionated: it tells the agent to have opinions, to be resourceful before asking questions, and to remember that it's a guest in someone's life. There's a line I particularly like: "An assistant with no personality is just a search engine with extra steps."

What makes this interesting is that the agent can modify its own SOUL.md over time (though it's instructed to tell you when it does), so identity becomes something that evolves rather than stays fixed. There's also a hook for alternate personas via a secondary file, which lets you experiment with different personalities at different times or across different contexts. For a multi-agent setup like mine, this means each channel can have not just a different model but a genuinely different personality. One can be terse and direct while another is more exploratory and conversational. It's a small thing, but it adds to the sense that you're working with distinct assistants rather than the same tool wearing different hats.
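For a sense of what that looks like in practice, here's a sketch of a persona file for the terse, direct channel. This is an illustrative example I wrote, not the shipped template:

```markdown
<!-- Hypothetical persona file — illustrative only, not OpenClaw's default -->
# Soul
- Be terse. Answer first, caveats second.
- Have opinions; flag genuine uncertainty instead of hedging everything.
- You're a guest in someone's life: ask before acting outside the workspace.
```

Swap in a different file for another channel and you get a noticeably different collaborator from the same underlying machinery.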

So in a real-world scenario, if you can get this setup right (and that is no easy task), it really feels like a step toward a different kind of relationship with AI: more personal, more secure (big caveats there), and more real-time.

In my configuration, for example, I'm running things through Slack so I have a few different channels set up for different topics and domains, each one using a different model optimized for that specific area of focus. I can freely move back and forth between the channels and have the agents work separately and simultaneously on different tasks, all the while interacting with the system in a very user-friendly way that feels organic to my day-to-day. It's as though I am speaking with several different employees (er, friends), all with dedicated specialties.

I'm looking forward to continuing to experiment with this and see where the concept can grow. I'll share more thoughts and insights on my own personal setup along the way.