In November 2024, Tomoko asked her AI agent to turn off the living room lights. The agent apologized and explained that it couldn’t interact with physical devices. It suggested she use the manufacturer’s app instead.
In February 2026, Tomoko makes the same request of her OpenClaw agent. The lights turn off. Then she asks it to check her ClickUp tasks, adjust her Sonos volume, pull design tokens from a Figma file, and schedule a meeting for Thursday. Each request takes a few seconds. Each one works.
The difference isn’t that AI got smarter. The underlying language model was already capable in 2024. The difference is skills.
Somewhere between 2024 and now, the OpenClaw ecosystem crossed a threshold. Enough skills got built, tested, and maintained that AI agents stopped being conversational assistants and started being operational tools. You don’t ask your agent to explain how to control your lights. You ask it to control your lights. It does.
This shift matters more than most people realize. And it happened faster than anyone — including the people building OpenClaw — predicted.
The Problem Skills Solve
A language model, on its own, is a brain in a jar.
It can reason. It can write. It can analyze, summarize, translate, explain, and generate ideas. What it can’t do, fundamentally, is anything. It can’t send an email. It can’t play music. It can’t check your calendar, lock your door, deploy your code, or create a task in your project management tool.
The model knows how to do all of these things. It knows the APIs. It knows the protocols. It knows the exact sequence of steps required to, say, group three Sonos speakers and play a jazz playlist at 30% volume. It just doesn’t have hands.
Skills are the hands.
Each OpenClaw skill is a module that connects the agent to an external system. A Sonos skill talks to Sonos speakers. A ClickUp skill talks to the ClickUp API. A Figma skill reads design files. A Home Assistant skill controls smart home devices. The skill handles authentication, API communication, error handling, and response formatting. The agent handles understanding what you want and deciding which skill to call.
This separation is what makes the system work. The model is good at understanding intent. Skills are good at executing actions. Together, they produce something that neither can do alone: an AI agent that actually does things.
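That division of labor can be sketched in a few lines. This is a conceptual illustration only: every name below (the skills, the registry, the `agent` function) is hypothetical, not OpenClaw's actual skill API.

```python
# Conceptual sketch of the separation described above: the model resolves
# intent, skills execute actions. All names here are hypothetical, not
# OpenClaw's real interface.

def sonos_skill(params: dict) -> str:
    # A real skill would handle auth and talk to the speakers over the network.
    return f"Playing {params['playlist']} at {params['volume']}% volume."

def clickup_skill(params: dict) -> str:
    # A real skill would call the ClickUp API here.
    return f"Created task: {params['title']}."

SKILLS = {"play music": sonos_skill, "create task": clickup_skill}

def agent(intent: str, params: dict) -> str:
    # Stand-in for the model's job: map the user's intent to a skill,
    # then let the skill do the work.
    return SKILLS[intent](params)

print(agent("play music", {"playlist": "jazz", "volume": 30}))
```

The point of the sketch is the boundary: the model never touches the network, and the skill never interprets language.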
Without skills, you get a chatbot. With skills, you get an assistant.
The Scale of the Ecosystem
The numbers tell a story.
OpenClaw has over 145,000 GitHub stars. That makes it one of the most-starred open source projects in the AI space. Stars are a rough measure, but at 145K they reflect genuine adoption, not just curiosity.
More importantly: 3,500+ skills are available through ClawHub, the official skill registry. Not all of them are great — some are experiments, some are half-finished, some are niche tools for very specific use cases. But the sheer volume means that for most things you’d want an AI agent to do, someone has already built the connection.
Here’s how the ecosystem breaks down by category:
| Category | What It Covers | Example Skills |
|---|---|---|
| Productivity | Task management, calendars, email, notes | ClickUp, Todoist, Cal.com, Jira |
| Development | Databases, deployment, code tools, APIs | GitHub, PostgreSQL, Docker |
| Automation | Workflows, browser control, web scraping | Flowmind, mcporter, browser-cash |
| Media | Images, audio, video, design | Figma, fal-ai, ElevenLabs |
| Smart Home | Speakers, lights, locks, thermostats | Sonos, Home Assistant, SmartThings |
| Communication | Messaging, social media, notifications | Slack, Telegram tools, email |
| Security | Encryption, auth, vulnerability scanning | Various security audit tools |
Each category has dozens to hundreds of skills. The best skills page on Oh My OpenClaw curates the top picks across every category — the ones we’ve tested and can recommend.
The growth trajectory is worth noting. In early 2025, there were about 800 skills. By mid-2025, around 2,000. As of February 2026, over 3,500. The ecosystem roughly doubled in the past year, driven by a feedback loop: more users install OpenClaw, more developers build skills, more skills attract more users.
What Skills Enable That Raw AI Cannot
The gap between “AI that talks” and “AI that does” is exactly the gap that skills fill. Here are the concrete categories of capability that only exist because of skills.
Device Control
Without skills, an AI agent cannot interact with any physical device. No lights, no speakers, no locks, no thermostats. The model can write you a Python script to control your Philips Hue bulbs, but it can’t run that script.
With skills like sonoscli, home-assistant, and Samsung SmartThings, your agent controls real devices in real time. “Turn off the porch light” isn’t a suggestion. It’s an action.
This is the most visceral demonstration of what skills add. You ask, and something happens in the physical world. The gap between chatbot and agent becomes tangible.
API Integrations
Modern work happens across dozens of SaaS tools. ClickUp for projects. Slack for communication. Google Calendar for scheduling. GitHub for code. Each tool has an API. Each API has authentication, endpoints, rate limits, and response formats.
Skills wrap that complexity. You say “create a task in ClickUp: redesign the homepage, due Friday.” The skill handles the API call, authentication, and parameter formatting. You don’t think about POST requests or JSON payloads. You think about the task.
Multiply that across ten tools and the value compounds. Your agent becomes a universal interface to your entire software stack. Instead of learning ten different UIs, you learn one: conversation.
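What the skill does behind that one sentence is roughly this: turn "redesign the homepage, due Friday" into a well-formed API request. The endpoint and field names below are placeholders for illustration, not ClickUp's actual API.

```python
import json
from datetime import date, timedelta

# Hypothetical sketch of what a task-management skill does under the hood.
# The endpoint and field names are illustrative, not ClickUp's real API.

def create_task_request(title: str, due_in_days: int, api_token: str) -> dict:
    due = date.today() + timedelta(days=due_in_days)
    return {
        "method": "POST",
        "url": "https://api.example-tasks.com/v1/tasks",  # placeholder endpoint
        "headers": {"Authorization": api_token, "Content-Type": "application/json"},
        "body": json.dumps({"name": title, "due_date": due.isoformat()}),
    }

req = create_task_request("Redesign the homepage", due_in_days=3, api_token="token")
print(req["method"], req["url"])
```

You never see any of this. The POST request, the auth header, the date formatting: all of it happens inside the skill while you stay in the conversation.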
Real-World Actions
Some skills bridge the digital and physical world in ways that feel almost magical when you first see them.
The ElevenLabs skill generates spoken audio from text. Your agent can read you a summary of your day’s meetings out loud. The meeting-notes skill transcribes recorded audio into structured notes. Information moves between formats effortlessly.
Browser automation skills navigate websites, fill forms, and extract data. Your agent can check a web-only dashboard that has no API, because the skill literally opens a browser, navigates to the page, and reads the screen.
These aren’t theoretical capabilities. They’re daily workflows for thousands of OpenClaw users. The common thread is action: the agent doesn’t just process information, it acts on it.
Persistent State
By default, AI agents are stateless. Each conversation starts fresh. The agent doesn’t remember what you told it yesterday, what preferences you’ve established, or what context matters for your work.
Skills like triple-memory-skill add persistent storage. Your agent remembers facts, preferences, project context, and recurring information across sessions. “Remember that the staging deploy happens on Tuesdays” sticks. Next week, when you ask about your Tuesday schedule, the agent knows to mention the deploy.
This transforms the agent from a tool you explain things to into an assistant that knows your context. The more you use it, the more useful it becomes — because the accumulated knowledge compounds.
The Community Behind the Ecosystem
The skill count is impressive, but numbers don’t build software. People do.
OpenClaw’s skill ecosystem is open source. Anyone can build and publish a skill. The barrier to entry is intentionally low — a skill is a module that follows a standard interface, and the documentation provides templates and examples. A developer who’s familiar with an API can often build a working skill in a weekend.
This matters because it means the ecosystem grows organically. When someone needs their agent to talk to a new service, they build the connection and publish it. If the skill is useful, other people install it. If it’s not, it stays a one-person project. The market of actual use determines what survives.
Some of the best skills in the ecosystem weren’t built by full-time developers. The sonoscli skill was built by someone who was tired of opening the Sonos app. The home-assistant skill was built by a smart home enthusiast who wanted to check sensors from Telegram. The ClickUp skill was built by a project manager who wanted to create tasks without switching windows.
These are tools built by the people who use them. That shows in the design. The skills solve real problems because they were created to solve the builder’s own real problem.
The community also self-curates. Oh My OpenClaw exists because the raw ecosystem is too large and uneven for most people to navigate. Out of 3,500+ skills, we maintain a curated directory of skills that actually work, are actively maintained, and solve common problems. It’s the layer between “everything that exists” and “everything worth installing.”
What Different Skill Categories Unlock
The utility of skills varies dramatically by category. Here’s what becomes possible when you install skills from each major area.
Productivity: Your Tools, One Interface
The average knowledge worker uses 9 different apps per day. Each one has its own notification system, its own navigation, and its own mental model. Switching between them is the tax you pay for having specialized tools.
Productivity skills collapse that into one conversation. Task management, calendar, email, notes, time tracking — all accessible from the same chat window. The best productivity skills article covers the top ten in detail.
The unlock here isn’t new functionality. ClickUp already has a web app. Google Calendar already has one. The unlock is removing the context switches. You stay in one window, one mental mode, one conversation. The agent handles the switching between APIs behind the scenes.
Smart Home: Your House, Conversational
Smart home devices are powerful in theory and annoying in practice. Every manufacturer has their own app. Routines are fragile. Cross-device coordination requires Home Assistant, SmartThings, or an equivalent platform. And even with those platforms, the interfaces are complicated.
Smart home skills make your home conversational. “Is the garage door closed?” “Turn off all the lights.” “Morning routine.” Your agent talks to the devices, and you talk to the agent.
For a deep dive into this, read the Home Assistant skill guide.
Development: Your Infrastructure, Accessible
Developers deal with databases, deployment pipelines, monitoring systems, and API management. Each one has dashboards, CLIs, or web interfaces.
Development skills let your agent query databases, trigger deployments, check service health, and manage infrastructure through conversation. A tool like mcporter gives you direct access to MCP servers for debugging and management. Instead of SSH-ing into a server, checking logs, and running diagnostic commands, you ask your agent.
Media: Your Creative Tools, Scriptable
Design systems, image processing, audio generation, video editing — media skills bridge the gap between creative tools and your workflow. Extract design tokens from Figma. Generate images with fal-ai. Process audio with ElevenLabs. These aren’t replacements for creative tools. They’re bridges that make creative assets flow into your workflow without manual export-and-import cycles.
Where the Ecosystem Is Heading
Predicting the future is risky. Describing the present trajectory is safer.
Three trends are visible in the OpenClaw ecosystem right now.
Skills are getting more composable. Early skills were standalone — each one connected to one service and did one thing. Newer skills are designed to work together. Flowmind lets you chain skills into repeatable workflows. Mission-control aggregates data from multiple skills into summaries. The ecosystem is moving from individual tools to integrated workflows.
Quality is rising. The early days of any open source ecosystem produce a lot of experiments. As the ecosystem matures, higher-quality skills displace lower-quality ones. Users gravitate toward maintained, documented, reliable skills. Curation layers like Oh My OpenClaw accelerate this by surfacing the good and filtering the abandoned.
Categories are expanding. A year ago, most skills focused on developer tools and productivity. Now the fastest-growing categories are smart home, media, and communication. As more non-technical users adopt OpenClaw agents, the demand for skills that handle everyday life — not just work — is pushing the ecosystem in new directions.
The broader trajectory is this: AI agents are becoming operational. Not just conversational, not just analytical, but operational. They do things. Skills are what makes that possible. And the more things they can do, the more people want them to do.
The Skill as a Unit of Capability
There’s a conceptual shift happening in how people think about AI agents, and skills are at the center of it.
The old mental model: an AI agent is a chatbot that answers questions. You give it text, it gives you text back. Maybe it’s good at writing, or summarizing, or brainstorming. But the interaction is fundamentally text-in, text-out.
The new mental model: an AI agent is an operating system for tasks. You tell it what you want done, and it figures out which tools (skills) to use, in what order, with what parameters. The interaction is intent-in, outcome-out.
Each skill you install expands the operating system’s capability. Install sonoscli and your agent gains audio control. Install ClickUp and it gains project management. Install Home Assistant and it gains smart home access. The agent’s capability is the sum of its installed skills.
This is why the 3,500 figure matters. Not because any one person installs all of them, but because the breadth means your agent can probably do whatever you need it to do. The question shifts from “Can my AI do this?” to “Which skill do I install?”
Getting Started
If you’re new to OpenClaw, the path is straightforward.
Start with one skill. Pick the thing you do most often that’s annoying — controlling Sonos, managing tasks, checking your calendar — and install the corresponding skill. Use it for a week. Get comfortable with the workflow.
Then add a second skill. Then a third. Each one expands what your agent can do, and the combinations between skills create possibilities that none of them offer individually.
Here’s where to begin:
- How to Find and Install Free OpenClaw Skills — the complete beginner walkthrough
- Best Skills 2026 — curated picks across every category
- Oh My OpenClaw homepage — browse all curated skills by category
The ecosystem is large enough now that whatever you need probably exists. The community is active enough that if it doesn’t exist today, it might tomorrow.
145,000 GitHub stars. 3,500 skills. Thousands of daily users controlling their homes, managing their work, and building their workflows through conversation.
Skills turned AI agents from things that talk into things that do. That’s why they matter.
FAQ
How do I know which skills are worth installing versus which are abandoned experiments?
Oh My OpenClaw curates skills specifically to solve this problem. Out of the 3,500+ skills on ClawHub, we maintain a directory of tested and maintained options. Every skill on the site has been verified to work, has active maintenance, and includes clear documentation. Start with our curated list rather than browsing the raw ClawHub registry.
Can skills conflict with each other or cause problems if I install too many?
Skills are independent modules. Installing many skills doesn’t create conflicts or performance issues. The agent loads skills on demand based on what you ask it to do. The only potential confusion is if two skills cover the same service (for example, two different Todoist integrations). In that case, uninstall the one you like less with `clawhub uninstall skill-name`.
Are OpenClaw skills free?
The skills themselves are free and open source. However, some skills connect to paid external services. A Spotify skill is free, but Spotify itself may require a subscription. A ClickUp skill is free, but ClickUp has pricing tiers. The skill is always free; the service it connects to might not be. Skills that work entirely locally, like sonoscli (which communicates with Sonos over your network), have no external service cost at all.
How do I contribute a skill to the ecosystem?
OpenClaw provides templates and documentation for skill development. If you’re a developer familiar with an API or service, you can build a skill that wraps it and publish it to ClawHub. The community reviews and tests new skills, and high-quality ones get curated on platforms like Oh My OpenClaw. Visit the OpenClaw developer docs for the skill development guide.
Do skills work across all messaging platforms, or are some platform-specific?
Skills are platform-agnostic. A skill that works on Telegram also works on WhatsApp, Discord, Signal, or any other supported messaging platform. The messaging app is just the interface layer — it doesn’t affect skill functionality. Install a skill once, and use it from whatever platform your agent is connected to.
What to Try Next
Pick one category that matches your biggest daily friction point. Browse the skills in that category on Oh My OpenClaw. Install one. Use it for a week.
That one skill won’t change your life. But it will show you what’s possible. And once you see an AI agent actually do something — control a speaker, create a task, export a design asset — the mental model shifts. The agent stops being a chatbot and becomes a tool.
From there, the ecosystem is waiting. Three thousand five hundred skills and counting.