Andy Shapiro

February 2, 2026

Under the Hood: My OpenClaw Configuration

If someone asks you to describe OpenClaw, it’s easy to fall into a tailspin of "well, it’s this, and then it’s that, but it’s also this..." But the more I use it, the more I realize it can be pretty succinctly described in one way: It’s an operating system.

When you think about it through that lens, you see how OpenClaw occupies a space in the AI ecosystem that, despite a daily deluge of new products, has never materialized. At its core, an OS is a collection of tools organized into a system that lets you use other tools within an intuitive interface. When executed well, it fades into the background, letting you focus on productivity and allowing you to switch between tasks seamlessly.

That is what we have here: the dawn of an AI OS era that combines independent systems into a consolidated environment.

The Power of Open Source

I have no doubt the "warships" are circling. Everyone from tech behemoths to startups will soon slap their own labels on this concept. When that happens, it will be a positive development; non-technical consumers will finally enjoy the benefits of this framework, likely with more user-friendly (though costly) configurations. But for now, part of what makes OpenClaw compelling is its open-source nature.

You can configure OpenClaw to any specification. You aren’t confined to predetermined rules. If you have an idea and the wherewithal to problem-solve, you can customize it far beyond the "out of the box" experience. There is a trade-off to this level of control: higher risk and a steeper learning curve. But the result is an owned AI system that is unequivocally yours. Not having to wait for a company to build this for you is exhilarating.

It reminds me of the Tommy Lee Jones line from the first Men in Black that has since become a universal meme for anything with a steep set of challenges (living in New York, being a regular season Dodgers fan, etc.): "Oh yeah, it’s worth it... if you’re strong enough."

One thing I am certain of: The essence of OpenClaw is rapid iteration. Those using it shouldn’t expect to one day sit back and say, "Okay, everything is great. Now let’s forget about it."

However, after enough time adjusting and calibrating, I’ve started to understand what a "steady state" looks like. Here are the pillars of the configuration that are currently working well for me.

Hardware

When word of OpenClaw started spreading two weeks ago, many people immediately ran out to buy new hardware—specifically Mac Minis. It is a best practice to run OpenClaw on a dedicated machine rather than your daily computer, but there is no need to over-invest. If you have an extra computer available, start there. My current setup is:

  • Computer: MacBook Air
  • Chip: M2
  • Memory: 16 GB
  • Storage: 1 TB

Pro Tip: I use a small app called Amphetamine to override power settings and prevent the computer from sleeping. It allows laptops to work in "clamshell mode" without a display attached. My dedicated OpenClaw machine is just a closed laptop sitting on an office shelf. It is always on, connected, and available.

The Intelligence Layer: A Hybrid Approach

One of the biggest decisions in an AI workflow is where the "thinking" happens. This area occupied most of my experimentation over the last week.

Like many, I first configured OpenClaw with an Anthropic Pro Max OAuth token. It was a fleeting glimpse of nirvana—running Opus 4.5 under a high-limit plan—but I knew there was a catch. Last week, reports surfaced of Anthropic banning accounts for using OAuth tokens in "harnesses" like OpenClaw. While the jury is out on those claims, the risk was enough for me to scrub my configuration of any ties to my Anthropic account.

The most challenging balancing act for a system like this is prioritizing privacy, performance, and cost-effectiveness, all of which seem to be opposing forces at any given time.

I’ve settled on a hybrid model that balances the power of the cloud with the privacy of local execution:

  • Primary Cloud Engine: My agents default to Gemini 3 Flash. It’s fast, handles long contexts well, and follows complex instructions. I access this model through a $20/month Pro Plan. You can always go the API route with Anthropic, OpenAI, etc. I gave this a go too, but right now there is too much uncertainty in OpenClaw’s token usage to do it without risking unforeseen expenses. If you do go this route, watch it closely; you may hit budget limits much faster than you’ve experienced before. As time goes on, I’m hoping more context/token management tools will become available that allow for more dependable API usage. For now, I’m sticking with the "plan" route.
  • Local Fallback: I run an Ollama instance serving Llama 2. This ensures the core system stays functional offline and gives me a "private room" for sensitive data. This is where hardware does come into play, as local models can have high system requirements. But there are several models available through platforms like Ollama that run surprisingly well on a machine like mine.
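In practice, the routing between the two engines boils down to a simple availability check: try the cloud engine first, fall back to the local Ollama instance. Here is a minimal sketch of that logic — the endpoint and model names reflect my setup, not anything OpenClaw prescribes:

```python
from urllib.request import urlopen
from urllib.error import URLError

OLLAMA_URL = "http://localhost:11434"   # Ollama's default local port
CLOUD_MODEL = "gemini-3-flash"          # primary cloud engine (placeholder name)
LOCAL_MODEL = "llama2"                  # local fallback served by Ollama

def local_available(url: str = OLLAMA_URL, timeout: float = 1.0) -> bool:
    """Return True if a local Ollama server responds at `url`."""
    try:
        with urlopen(url, timeout=timeout):
            return True
    except (URLError, OSError):
        return False

def choose_engine(cloud_ok: bool, local_ok: bool) -> str:
    """Cloud-first routing: prefer the cloud model, fall back to local."""
    if cloud_ok:
        return CLOUD_MODEL
    if local_ok:
        return LOCAL_MODEL
    return "offline-queue"  # neither engine reachable: hold the request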

Tip for Gemini users: I’ve found Gemini 3 Flash is susceptible to "cold starts." I’m currently using the Heartbeat file to keep sessions active. Every few hours, it sends a tiny bit of context—essentially saying "hi, I’m still here"—to keep the sessions in the cache and improve response times. So far this seems to be working effectively. I'm assuming this might be an issue with other LLM plans as well. But I’ll keep an eye on it and update as I know more.
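The heartbeat payload itself is trivial. A sketch of its shape — the interval and message are my choices, not a Gemini requirement:

```python
HEARTBEAT_INTERVAL = 4 * 60 * 60  # ping every four hours (in seconds)

def make_ping() -> dict:
    """A tiny keep-alive payload: just enough context to keep the session warm."""
    return {"role": "user", "content": "hi, I'm still here"}

def next_ping_at(last_ping: float, interval: int = HEARTBEAT_INTERVAL) -> float:
    """Timestamp (epoch seconds) at which the next heartbeat is due."""
    return last_ping + interval
```

A scheduler (cron, launchd, or the agent itself) just checks `next_ping_at` and fires `make_ping` when due.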

The Messengers: Slack and Telegram

The interface is just as important as the model. I’ve integrated OpenClaw into the tools I already use, but with a heavy emphasis on security.

  • Slack: This is my production hub where the agents have full permissions to interact, react and manage tasks. It is my own private space where I am the only human user, so it is very closed off.
  • Telegram: For secondary access, I use a dedicated Telegram bot. To keep it secure, I’ve implemented a strict ID allowlist. The bot simply won’t respond to anyone but me, effectively locking the door to the rest of the internet.
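The allowlist check is the simplest piece of the whole setup. A minimal sketch of the gate (the ID is a placeholder, and this isn’t OpenClaw’s actual handler, just the shape of the idea):

```python
# Placeholder: put your own Telegram user ID(s) here.
ALLOWED_IDS = {123456789}

def is_allowed(user_id: int, allowlist: set[int] = ALLOWED_IDS) -> bool:
    """Drop any incoming update whose sender isn't on the explicit allowlist."""
    return user_id in allowlist
```

Every inbound message passes through this check before the bot does anything else; anyone not on the list gets silence.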

Specialized Agents: Divide and Conquer

Rather than having one giant “everything-bot,” I’ve partitioned the system into specialized agents, each with its own isolated workspace. This is the beginning of my Multi-Agent System in practice. Some examples of my agents are:

  • The Generalist: This is my primary assistant. It manages my long-term memory, handles general queries and maintains the Heartbeat of the system.
  • The Editor: This agent’s focus is on refining text and coordinating content.
  • The Dev: When I need a technical deep dive, like working with my current config files or planning out a project, I spawn the Dev agent. It has access to the CLI and technical tools but is isolated from my other workspaces.

I’m continuing to add more agents and adopting a practice where each one “owns” its own Slack channel bound to its specific area of expertise (e.g., #writing, #development, etc…).
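In config terms, that channel "ownership" is just a mapping. A hypothetical sketch of the routing — the agent and channel names are mine, not an OpenClaw convention:

```python
# Each agent owns one Slack channel bound to its area of expertise.
AGENT_CHANNELS = {
    "generalist": "#general",
    "editor": "#writing",
    "dev": "#development",
}

def route(agent: str) -> str:
    """Return the channel an agent posts to; unknown agents fall back to #general."""
    return AGENT_CHANNELS.get(agent, "#general")
```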

The real magic of the system comes when you give them permission to talk to each other. For example, I could have the Dev agent provide working knowledge of my codebase to the Editor, who can then help craft plain-language summaries of the technology.

Security and the "Personal" in AI

Ownership requires responsibility. Because OpenClaw runs locally and has access to my files, security can never be an afterthought.

The system uses unique authentication tokens for all inbound hooks. Each agent is also “jailed” to its own directory. The Dev agent, for example, can’t peek into the Editor’s private notes unless I explicitly grant it permission, as noted above.
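The "jail" amounts to a path-containment check. Here’s a minimal sketch of the idea in Python (not OpenClaw’s actual code): resolve the requested path first, so that `..` traversal and symlinks can’t escape the agent’s directory.

```python
from pathlib import Path

def inside_jail(jail: Path, requested: Path) -> bool:
    """True only if `requested` resolves to a path under the agent's jail.

    Resolving both paths first defeats `..` traversal and symlink escapes.
    """
    jail = jail.resolve()
    target = (jail / requested).resolve()
    return target == jail or jail in target.parents
```

So a Dev agent jailed to its own directory can read `notes.md`, but a request for `../editor/secret.md` resolves outside the jail and is refused.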

New security concerns and solutions emerge every day, so stay on top of them as you refine your setup. Also be sure to check for and install critical updates.

The Specialized Toolkit: Skills and Integrations

Beyond the core agents, the system is extended through specialized skills that connect OpenClaw to the apps I use every day. This is where the assistant stops being a passive conversationalist and starts being an active participant in my workflow.

Here’s a partial list of some of the tools and skills I have implemented or experimented with:

  • Using himalaya for email and vdirsyncer for calendars, I have a unified view of both my personal iCloud and my work accounts. The agents can search my inbox for urgent messages or check my schedule before I even start my day.
  • Native Mac Apps: Through AppleScript, OpenClaw has full read/write access to my Apple Notes and Reminders.
  • Given my work, I’ve integrated the TMDB (The Movie Database) API. This allows the agents to fetch cast lists, plot summaries and streaming availability instantly. It’s like having a specialized film researcher on call.
  • Development Workflow: For my technical projects, the system connects to my Mac Studio via SMB and uses my web host’s CLI. This means I can have an agent build a project locally and deploy it to a preview URL for my approval with a single command.
  • My agent has access via API to my Notion environment, where I can create new content, edit material and query it as a knowledge base. Right now, it seems like Notion will be almost a second brain to the memory files that are kept locally on my machine.
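To give a flavor of how lightweight some of these integrations are, the TMDB skill largely boils down to plain URL construction against the public v3 API. A sketch (the helper name is mine):

```python
from urllib.parse import urlencode

TMDB_BASE = "https://api.themoviedb.org/3"

def search_movie_url(query: str, api_key: str) -> str:
    """Build a TMDB /search/movie request URL (v3 auth via the api_key param)."""
    params = urlencode({"api_key": api_key, "query": query})
    return f"{TMDB_BASE}/search/movie?{params}"
```

The agent fetches that URL, parses the JSON response, and has cast lists and summaries without any heavyweight SDK.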

One final tip: be very cautious about installing external skills, even when you have taken great care to lock down your environment. Adding skill files from third parties you can’t verify introduces significant vulnerabilities. Skill files are not hard to learn to write, so I recommend reading as many as you can, learning to construct them yourself and deploying them directly rather than relying on importing others’.

What's funny is that all of the above is just the tip of the iceberg of my overall configuration. I will keep at it and share updates as I go.

If you're reading this, and for some reason don't even know where to start, please feel free to contact me and I'd be happy to walk you through it.

Until next time…

✌️❤️🦞