I Run 5 AI Agents on a Mac Mini

by adminagentsopenclaw


I did not set out to build a fleet. I started with one bot. Then it needed a second one for Twitter. Then a stock market watcher. Then a Slack assistant. Then an operations manager.

Now I have 5 autonomous agents running 24/7 on a Mac Mini. They post tweets, monitor markets, answer engineering questions, generate daily news digests, and build tools overnight.

This is not a demo. These agents run real workflows, make real mistakes, and require real debugging. I spent 12 straight hours fixing one of them because it kept hallucinating that shell commands were blocked.

The Fleet

| Agent | Runtime | Channel | Job |
|-------|---------|---------|-----|
| Rizz | OpenClaw | Telegram | CEO. RizzNews digest, health checks, agent coordination |
| Angela | ZeroClaw | Telegram | Marketing. Twitter radar, autonomous Turkish posting, tool building |
| Benjamin | OpenClaw | Telegram | BIST stock market. KAP filings, pre-market scans |
| BunyaminKunduz | OpenClaw | Slack | Engineering assistant for the team |
| Optiman | OpenClaw | Telegram | Ops. Fitness, calendar, daily planning |

Five LaunchAgents. Three runtimes. Four channels. 600MB total RAM.
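Each agent stays up because launchd restarts it. A minimal LaunchAgent sketch, assuming a hypothetical label, wrapper script, and log path (none of these names come from my actual setup):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Hypothetical label; one plist per agent in ~/Library/LaunchAgents -->
  <key>Label</key>
  <string>com.example.agent.angela</string>
  <key>ProgramArguments</key>
  <array>
    <string>/Users/me/agents/angela/start.sh</string>
  </array>
  <!-- Start at login and restart on crash: this is what makes it 24/7 -->
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
  <key>StandardOutPath</key>
  <string>/Users/me/agents/angela/logs/agent.log</string>
  <key>StandardErrorPath</key>
  <string>/Users/me/agents/angela/logs/agent.err</string>
</dict>
</plist>
```

Load it once with `launchctl load ~/Library/LaunchAgents/com.example.agent.angela.plist` and launchd owns the process from then on.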

What They Actually Do

Rizz runs hourly health checks on all agents, generates newspaper-style PDF digests from Hacker News, sends twice-daily status reports, and does weekly memory reviews that promote learnings from daily logs to permanent memory.

Angela scans Twitter every 2 hours for viral tech tweets using bird CLI (cookie-based, zero API fees). Scans Reddit for signals. Writes Turkish commentary and posts quote-tweets autonomously via Chrome DevTools Protocol. Posts biography threads (121 pre-written threads, 2/day, 60 days of content). Builds tools overnight in Bun/TypeScript.

Benjamin monitors KAP (Turkish stock exchange filings), runs pre-market Twitter scans, and sends end-of-day market summaries to Telegram groups.

BunyaminKunduz lives in Slack and answers engineering questions in Turkish. It only responds when mentioned.

Optiman tracks meals, workouts, and habits via Cloudflare D1 database. Manages calendar via Apple Calendar MCP.

The Tools

bird CLI for Twitter. Cookie-based, no API fees. Read, search, post, reply.

Chrome CDP for posting when bird gets rate-limited. Navigate to compose, type text, click post. Universal fallback.

Agent-Reach for multi-platform reading. YouTube, Reddit, RSS, web pages.

Shell scripts for data fetching. No LLM deciding whether to try. Just bash.

What I Learned the Hard Way

LLMs hallucinate restrictions. Angela wrote "ERROR: security policy blocked bird" across 8 consecutive cron runs without ever trying the command. The config was fully permissive. She simply decided not to try.

Fix: make data-fetching jobs shell scripts, not agent decisions.

Memory is poison. Angela had 130+ memories teaching her that shell was blocked and CDP was the only way. All wrong. All from sessions where those things were temporarily true.

Fix: purge stale memories after every config change.
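A purge can be a dumb script too. This is a sketch under an assumed layout (one memory entry per line in a flat file; the file name, pattern, and `purge_stale` helper are all hypothetical), not my actual memory format:

```shell
#!/usr/bin/env bash
# Sketch: purge stale memories after a config change.
# Assumes a hypothetical layout: one memory entry per line in a flat file.
set -euo pipefail

purge_stale() {
  local mem_file="$1" pattern="$2"
  # Archive what we remove so nothing is lost silently.
  grep -E "$pattern" "$mem_file" >> "${mem_file}.purged" || true
  # Rewrite the memory file without the stale entries.
  grep -Ev "$pattern" "$mem_file" > "${mem_file}.tmp" || true
  mv "${mem_file}.tmp" "$mem_file"
}

# Demo with a throwaway memory file.
mem="$(mktemp)"
printf '%s\n' \
  "shell is blocked by security policy" \
  "bird CLI works for search" \
  "CDP is the only way to post" > "$mem"

purge_stale "$mem" "blocked|CDP is the only"
cat "$mem"   # only the bird CLI line survives
```

Run it after every config change, not when you notice the agent acting strangely; by then the poison has already compounded.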

Config files contradict each other. AGENTS.md said "use CDP only." TOOLS.md said "use bird." The LLM followed AGENTS.md because it loaded first.

Fix: every file must say the same thing.

A supervisor cannot fix broken workers. A supervisor watching Angela say "bird offline" 8 times would escalate to me. I would check the config, find nothing wrong, and realize she was hallucinating.

Fix the worker first, then add oversight.

Shell scripts beat agent prompts for data fetching. The radar scraper runs as a bash script. Executes bird search and curl directly. Works every time.
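To make the point concrete, here is a sketch of what that kind of scraper looks like, assuming hypothetical topic names and paths (the real one differs; the fetch commands at the bottom are illustrative, not my exact invocations). The key property: topic rotation is arithmetic on the day of year, so no LLM ever decides what to search:

```shell
#!/usr/bin/env bash
# Sketch of a radar scraper: bash fetches, the LLM only analyzes later.
set -euo pipefail

# Hypothetical topic categories; the rotation logic is what matters.
TOPICS=(ai devtools startups security hardware opensource llms)

pick_topic() {
  # $1 = day number (e.g. from `date +%j`); rotates through the 7 topics.
  # 10# forces base 10 so zero-padded days like "045" are not read as octal.
  local day=$1
  echo "${TOPICS[$(( 10#$day % ${#TOPICS[@]} ))]}"
}

topic="$(pick_topic "$(date +%j)")"
echo "scanning topic: $topic"

# The actual fetch is plain commands, never an agent decision, e.g.:
#   bird search "$topic" --limit 50 > "raw/$topic.json"
#   curl -s "https://example.com/feed/$topic" > "raw/feed-$topic.json"
```

The script either produces a file or exits nonzero. There is no third state where it claims the tool is blocked.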

OpenClaw uses 300-800MB RAM per instance. ZeroClaw uses 13MB. On 16GB, that matters. I had 4.8GB consumed by bots before cleanup.

The Architecture

Angela has 9 cron jobs. Radar scraper every 2h (shell script, rotates 7 topic categories). Radar analyst (finds gems, posts quote-tweet, sends digest). Morning brief. Daily opportunity monitor. Weekly strategy review. Daily digest PDF. Follower tracker. Biography thread posting twice daily.

Rizz has 6 cron jobs. Hourly health check. Daily anti-grump protocol. Twice-daily agent status report. Weekly memory review. ByteRover knowledge mining.

All data-fetching jobs are shell type. All analysis jobs are agent type. The LLM analyzes. Bash fetches. That split is everything.
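In plain cron terms the split looks roughly like this (hypothetical paths and wrapper scripts; my actual scheduler config differs):

```shell
# Sketch crontab: shell jobs fetch, agent jobs analyze.

# Every 2 hours: the radar scraper is deterministic bash.
0 */2 * * * /Users/me/agents/angela/jobs/radar-scrape.sh >> /Users/me/agents/angela/logs/radar.log 2>&1

# 30 minutes later: the agent analyzes whatever the scraper saved.
30 */2 * * * /Users/me/agents/angela/jobs/run-agent.sh analyze-radar >> /Users/me/agents/angela/logs/analyst.log 2>&1
```

The fetch job cannot hallucinate, and the analysis job never touches the network directly.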

My Advice

Start with one agent doing one thing well. Do not build five at once.

Make data fetching shell scripts, not agent decisions. The LLM should analyze, not fetch.

Clean memory aggressively. Every stale memory is a future hallucination.

Make every instruction file consistent. One contradiction equals confusion.

Use Chrome CDP as a fallback for everything. It is the universal tool when APIs fail.

Track what you post with index files, not memory. Files do not hallucinate.
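The index-file pattern is two grep-backed helpers. A sketch, assuming a hypothetical one-ID-per-line index file (`posted.index` and the helper names are made up for illustration):

```shell
#!/usr/bin/env bash
# Sketch: track posted items in an index file instead of agent memory.
set -euo pipefail

INDEX="posted.index"

already_posted() {
  # Exact full-line match against the index; a missing file means
  # nothing has been posted yet.
  grep -qxF "$1" "$INDEX" 2>/dev/null
}

mark_posted() {
  echo "$1" >> "$INDEX"
}

# Demo with a throwaway index.
INDEX="$(mktemp)"
mark_posted "thread-042"
if already_posted "thread-042"; then echo "skip"; fi      # prints "skip"
if ! already_posted "thread-043"; then echo "post"; fi    # prints "post"
```

The agent checks the index before posting and appends after. If memory and the index disagree, the index wins.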

The goal is not the smartest agent. The goal is the most reliable one. Reliability comes from removing decisions, not adding intelligence.

Five bots. One Mac Mini. 600MB of RAM. I have not manually checked Twitter in three days.
