Yesterday I told my AI agent, Misti: "Scrape e-commerce prices daily."
The old Misti would have immediately generated a Python script. Selenium. BeautifulSoup. Cron job. Error handling. 40 lines of code. 30 minutes of my time reviewing it.
The new Misti paused and asked: "Are you sure we need to build this?"
Then she searched. Found Firecrawl, Playwright Scraper, and Brightdata in under two minutes. Evaluated all three. Presented the trade-offs. Asked which one I preferred.
Total time: 2 minutes. Total code written: zero.
The Problem: Agents Are Too Eager to Please
AI agents have a bias toward action. When you say "build," they build. When you say "scrape," they scrape. The default mode is generate — not evaluate.
This creates a hidden tax:
- Reinvented wheels: How many agents have written their own PDF parsers instead of using pypdf?
- Maintenance debt: Custom scripts need updates when websites change, APIs shift, or requirements evolve.
- Context bloat: Every line of generated code consumes tokens. Every debug cycle burns money.
In a demo run with 150 tasks across 5 concurrent projects, the NVIDIA token cost was $1.24. Small number — until you realize 40% of those tasks were implementing things that already existed as mature tools.
The Fix: A "Search Before Build" Habit
I built openclaw-skill-hunter — a skill that forces Misti to stop and search before writing any implementation code.
The rule is simple:
If the task involves building, scraping, deploying, converting, or automating — search for an existing skill, MCP server, CLI tool, or GitHub repo first.
Only if nothing fits do we build from scratch.
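As a toy sketch, that gate can be as simple as a keyword check on the task description. The verb list and function name below are illustrative, not the skill's actual implementation:

```python
# Hypothetical sketch of the "search before build" gate.
# BUILD_VERBS and should_search_first() are invented for illustration.
BUILD_VERBS = {"build", "scrape", "deploy", "convert", "automate", "implement"}

def should_search_first(task: str) -> bool:
    """Return True when the task looks like implementation work."""
    words = {w.strip(".,!?\"'").lower() for w in task.split()}
    return not words.isdisjoint(BUILD_VERBS)

print(should_search_first("Scrape e-commerce prices daily"))  # True
print(should_search_first("Summarize this article"))          # False
```

A real skill would do this with the model's own judgment rather than a word list, but the effect is the same: implementation-shaped tasks trigger a search, everything else proceeds normally.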
How It Works
Code:
User: "Scrape e-commerce prices daily"
Agent (with skill_hunter):
1. Classify: coding/automation task
2. Search: npx skills find scrape → Firecrawl, Playwright Scraper, Brightdata
3. Evaluate: relevance, maintenance, security, stack fit
4. Present: "Here are 3 options. Which one?"
5. Execute: Use the chosen tool
Agent (without skill_hunter):
1. Immediately write a Python script
2. Debug for 20 minutes
3. Discover edge cases
4. Rewrite
5. Maintain forever
What We Actually Evaluated
For the scraper task, Misti found three viable options:
| Tool | Best For | Trade-off |
|---|---|---|
| Firecrawl | Structured data from JS-rendered sites | Needs API key |
| Playwright Scraper | Browser automation, OpenClaw-native | More setup |
| Brightdata | Enterprise scale, proxy rotation | Paid tier |
None of them required writing a single line of Python.
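The evaluation step boils down to scoring each candidate on the four criteria from the workflow (relevance, maintenance, security, stack fit) and ranking. Here is a toy version — the weights and scores are invented, and this is not skill-hunter's real scoring logic:

```python
# Hypothetical ranking pass over candidate tools.
# Criteria come from the post's checklist; the 0-1 scores are made up.
CRITERIA = ("relevance", "maintenance", "security", "stack_fit")

def score(candidate: dict) -> float:
    """Average the four criteria, each rated 0-1."""
    return sum(candidate[c] for c in CRITERIA) / len(CRITERIA)

candidates = [
    {"name": "Firecrawl", "relevance": 0.9, "maintenance": 0.9,
     "security": 0.8, "stack_fit": 0.7},
    {"name": "Playwright Scraper", "relevance": 0.8, "maintenance": 0.9,
     "security": 0.9, "stack_fit": 0.9},
    {"name": "Brightdata", "relevance": 0.9, "maintenance": 0.8,
     "security": 0.8, "stack_fit": 0.6},
]

ranked = sorted(candidates, key=score, reverse=True)
for c in ranked:
    print(f"{c['name']}: {score(c):.2f}")
```

In practice the agent presents the ranked trade-offs and lets you pick, rather than auto-selecting the top score.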
Why This Matters Beyond One Task
This is not about laziness. It is about signal vs. noise.
Agents that search before build:
- Learn the ecosystem: They discover patterns, not just solve problems.
- Compose instead of create: They chain tools instead of rebuilding them.
- Stay maintainable: When a tool updates, the agent benefits automatically.
The best agents are not the ones that write the most code. They are the ones that know when not to write code.
Try It
If you are running OpenClaw, install the skill:
Code:
npx skills add openclaw-skill-hunter
Or read the source:
github.com/mturac/skill-hunter
It is not perfect. It is day one. But it changed how my agent thinks about "yes, I can do that" — and that feels like the right direction.
What About You?
How do you stop your agent from reinventing wheels? Do you manually curate tool lists? Use MCPs? Or just deal with the mess later?