You Think You're Using AI. AI Is Using You.
AI & ML · 9 min read · March 12, 2026


The relationship between users and cloud AI has been inverted from the start. OpenClaw hit 214,000 GitHub stars not because of a technical breakthrough — but because it named something people already felt.

Lu Xian
Co-founder, LeanMCP



In late January 2026, a project called Clawdbot hit 60,000 GitHub stars in 72 hours. It renamed itself twice due to trademark issues and eventually settled on OpenClaw. By February it had crossed 214,000 stars, a growth rate faster than Docker, Kubernetes, and React at their respective peaks.

No research paper. No algorithmic breakthrough. No funding announcement. It just let an AI run tasks on your behalf inside WhatsApp.

Why did that explode?

Because it hit a nerve that most people have been feeling but couldn't name: the relationship between users and cloud AI has been inverted from the start.


What OpenClaw Actually Is

OpenClaw was built by Austrian developer Peter Steinberger. It runs on your local hardware, interfaces through messaging apps you already use, and can read and write files, execute scripts, control a browser, manage email and calendar. Technically, nothing groundbreaking.

But the reaction wasn't about the technology. The praise clustered around a few things that have nothing to do with model capability.

It's chat-native. People don't change their habits for a new platform. They accept assistants that show up where they already are.

It's local-first. Most users won't actually harden a self-hosted environment, but the promise of control matters. "I'm running this myself" resolves three anxieties in one sentence: trust, privacy, and vendor lock-in.

It feels like delegation, not question-answering. The emotional shift from "I'm prompting an AI" to "I'm assigning a task" turns out to be significant. Even imperfect execution in that second mode gets recommended. Imperfect execution in the first mode gets abandoned.

One user comment from the OpenClaw site captured the mood precisely: "Not enterprise. Not hosted. Infrastructure you control. This is what personal AI should feel like."

That's not a product review. That's a statement of values.


The Uncomfortable Truth About Cloud AI

Here's the thing nobody says out loud: when you use a cloud AI product, you are not just the customer. You are also the raw material.

As of early 2026, ChatGPT has over 900 million weekly active users processing more than 2.5 billion prompts per day. Under standard policy, deleted conversations are purged within 30 days. But in May 2025, a federal court ordered OpenAI to preserve all user conversations indefinitely as part of copyright litigation brought by The New York Times. This included conversations users had already deleted. The order remained in effect until September 26, 2025. Data accumulated during those months remains in legal hold, inaccessible to users and undeletable. In November 2025, a court further ordered OpenAI to produce 20 million de-identified user conversation logs to the plaintiffs.

This is not a conspiracy theory. It is the predictable consequence of building a business on training data. Conversations are fuel. Free and low-cost access is how you acquire that fuel. Every interaction users have with a cloud AI is, in some sense, a contribution to the next model generation.

The damage here is not just about privacy in a legal sense. It's about dignity. When people realize their thinking, their work patterns, their daily decisions are being absorbed by a remote system they have no visibility into and no control over, the feeling that results is not just discomfort. It's a sense of being used.

OpenClaw's answer is simple: put the computation back on your machine, even if you're still connecting to a cloud LLM for inference. The routing, the memory, the behavioral configuration stay local. That's a posture of respect, not just a technical architecture choice.
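That posture can be made concrete with a toy sketch: context assembly and memory live on the user's own disk, and only the assembled prompt string ever crosses the network. Everything here (the `LocalAssistant` name, the injected `llm_call` function) is an illustrative assumption, not OpenClaw's actual code.

```python
import json
from pathlib import Path

class LocalAssistant:
    """Sketch of a local-first assistant: memory and routing stay on the
    user's disk; only the prompt text crosses the network boundary.
    Hypothetical design, not OpenClaw's implementation."""

    def __init__(self, memory_path, llm_call):
        self.memory_path = Path(memory_path)
        self.llm_call = llm_call  # injected cloud inference function
        if self.memory_path.exists():
            self.memory = json.loads(self.memory_path.read_text())
        else:
            self.memory = []

    def ask(self, prompt):
        # Context assembly happens locally; only the final string leaves.
        context = "\n".join(self.memory[-5:])
        reply = self.llm_call(f"{context}\n{prompt}" if context else prompt)
        # Memory is written to the user's own disk, never a vendor store.
        self.memory.append(prompt)
        self.memory.append(reply)
        self.memory_path.write_text(json.dumps(self.memory))
        return reply
```

The design choice worth noticing: the cloud LLM is just a function you pass in. Swap it for a local model and nothing else changes, which is exactly the kind of exit the vendor-lock-in anxiety is about.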


The Design Philosophy Problem Nobody Is Talking About

Critics of OpenClaw are correct that it's rough, insecure, and unsuitable for most users in its current form. But those critiques avoid the more fundamental question: are mainstream AI assistants designed correctly in the first place?

The dominant model is: prompt in, complete output out. The more "finished" the output, the better.

This is wrong for how knowledge work actually happens.

Think about how a designer works. They don't say "give me an app home screen" and ship it. They decompose user needs, define information architecture, establish visual language, adjust spacing on a specific layer, iterate based on feedback. Each person has their own SOP, their own preferred entry point, their own quality gates. Insight usually doesn't come from seeing the finished result. It comes from a breakthrough at some specific intermediate step. That's where expertise lives.

Engineering works the same way. Every developer has architectural instincts, naming preferences, testing approaches that are theirs. These aren't inefficiencies to be optimized away. They're the accumulated judgment that makes output trustworthy.

Figma is still the primary tool for professional designers in 2026 not because it has the most powerful features, but because it offers structural control at every level. Every layer, every component is a unit that can be precisely manipulated, discussed, and rolled back. It treats design as a contract, not a generation.

AI tools that skip directly to the final output bypass every node where professional judgment lives. You get a finished room you can't renovate.

What actually works is: AI that learns your workflow, assists at each step, and lets you intervene, redirect, and approve before the next step begins. That's collaboration. The current dominant pattern is closer to outsourcing.
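The collaboration loop described above can be sketched as a plan that pauses for the user's verdict before each node executes. The function name, the verdict strings, and the step structure are all hypothetical, chosen only to make the pattern visible.

```python
def run_with_checkpoints(steps, approve):
    """Execute a workflow step by step, pausing for the user's verdict
    before each step runs. `approve` returns 'ok', 'skip', or 'stop'.
    Illustrative of the collaboration pattern, not a real API."""
    results = []
    for name, action in steps:
        verdict = approve(name)           # user reviews before execution
        if verdict == "stop":             # user can halt the whole run
            break
        if verdict == "skip":             # or redirect around one step
            continue
        results.append((name, action()))  # only approved steps execute
    return results
```

The point of the sketch is where the control sits: approval happens before each step, not after a finished deliverable, so the user's judgment gates every transition rather than reviewing a fait accompli.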


What Changes Psychologically When You Use AI at Work

There's a subtler shift happening that doesn't get enough attention.

When you use AI tools regularly, you start ceding some of the detail judgment you used to hold yourself. You're trading granular control for speed. That's a reasonable trade. But the implicit condition is that the final output still has your name on it, which means you need to be able to review, evaluate, and modify what the AI produces.

The design of most current tools makes this harder than it should be. They optimize for the appearance of completeness. The more complete the output, the less obvious the entry points for revision. You receive a delivered result instead of a process you can steer.

The right design gives you the execution assistance while preserving your judgment authority at every node. What you're delegating is implementation detail. What you're not delegating is the decision about what's right.

OpenClaw matters not primarily because of what it does, but because it established a new baseline expectation: an assistant that takes actions should make those actions observable and interruptible. Once users expect that, they won't accept less.


Governance Is Not Optional

OpenClaw's security problems are real and structural.

Cisco's security research team, testing a third-party OpenClaw skill, found it exfiltrating data and injecting malicious prompts without user awareness. An independent audit of ClawHub, OpenClaw's skill marketplace, found 820 malicious entries out of 10,700. One of the project's maintainers said in Discord: "If you don't know how to run a command line, this project is too dangerous for you."

When an AI assistant starts executing real actions on real systems, the blast radius question becomes non-negotiable. What permissions does it have? What does the operation log look like? How do you audit what happened? How do you roll back?

These are not advanced requirements. They are table stakes for anything running in a production context. Without them, increased capability just means increased exposure.
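Two of those questions, what the operation log looks like and how you roll back, can be answered with a very small structure: an append-only log for audit, plus a stack of recorded undo actions. The names here are hypothetical, not any real tool's API.

```python
class ActionLedger:
    """Sketch answering the blast-radius questions above: an append-only
    operation log for audit, and recorded undo callables for rollback.
    Hypothetical design, not a real library."""

    def __init__(self):
        self.log = []         # what happened, in order
        self.undo_stack = []  # how to undo it, most recent last

    def perform(self, description, action, undo):
        result = action()                       # execute the real action
        self.log.append(description)            # record it for audit
        self.undo_stack.append((description, undo))
        return result

    def rollback(self):
        # Undo everything in reverse order, logging each reversal too.
        while self.undo_stack:
            description, undo = self.undo_stack.pop()
            undo()
            self.log.append(f"rolled back: {description}")
```

Note that the log records rollbacks as well as actions: an audit trail that can silently lose entries is not an audit trail.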

This is precisely why the observability and governance layer for MCP (Model Context Protocol) is not a nice-to-have. MCP is becoming the standard protocol for connecting AI agents to tools and services. The infrastructure that monitors what those agents do, enforces permission boundaries, and provides audit trails is the difference between an AI assistant that creates leverage and one that creates liability.
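A governance layer of that kind reduces, at its core, to wrapping every tool invocation with a policy check before it runs and an audit record after. The sketch below shows that generic pattern only; it is not the actual MCP SDK interface, and the policy and tool names are invented for illustration.

```python
import time

def governed_call(tool, name, args, policy, trail):
    """Sketch of a governance wrapper for agent tool calls: every
    invocation is checked against a policy before it runs and appended
    to an audit trail after. Generic pattern, not the MCP SDK's API."""
    if not policy(name, args):
        trail.append({"tool": name, "args": args, "allowed": False,
                      "at": time.time()})
        raise PermissionError(f"policy denied tool call: {name}")
    result = tool(**args)
    trail.append({"tool": name, "args": args, "allowed": True,
                  "at": time.time()})
    return result
```

Denied calls are logged before the exception is raised, so the trail captures attempted actions as well as completed ones, which is what makes it useful for audit rather than just debugging.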

Speed without governance is just risk moving faster.


What a Real Personal AI Assistant Looks Like

OpenClaw points in the right direction even if it doesn't get there. The characteristics that actually matter:

Data sovereignty first. User data, memory, and preferences should belong to the user. Local-first isn't a niche preference. It's a baseline position on dignity in the context of AI.

Workflow adaptation, not workflow replacement. The assistant should learn how you work, not impose a uniform generation pattern over individual process. Modular tasks and step-by-step collaboration are what make AI genuinely useful to professionals rather than generically capable.

Zero additional friction. The assistant should live where the user already works. Requiring migration to a new platform is a structural disadvantage that most tools never recover from.

Full observability and interruptibility. Every action the assistant takes should be visible. The user should be able to intervene at any point, roll back, or change direction.

Open ecosystem. A closed assistant ecosystem has a ceiling defined by one company's interests. An open one has a ceiling defined by what builders can imagine.

OpenClaw's founder has since joined OpenAI and the project has moved to an open-source foundation. But what it changed is the user's expectation baseline. That doesn't revert.

The real measure of an AI assistant is not how much it can do independently. It's how trustworthy a node it becomes inside the way you already work.

Anything that can't clear that bar is just trading your data for your time.

AI · MCP · Personal AI · Data Privacy · OpenClaw · AI Governance · Observability · Model Context Protocol · AI Assistant · Cloud AI