MySearch Proxy is a public-facing search stack for Codex, Claude Code,
OpenClaw, and custom agent workflows.
It combines three things that are usually split across separate projects:
- an installable search MCP
- reusable search Skill packages
- a unified Proxy Console for provider routing and operations
Project entry points:
- GitHub: skernelx/MySearch-Proxy
- Docker Hub: skernelx/mysearch-proxy
- OpenClaw Hub Skill: clawhub.ai/skernelx/mysearch
- Recommended Tavily / Firecrawl provider layer: skernelx/tavily-key-generator
This is not just another Tavily wrapper.
The goal is to make Tavily, Firecrawl, and optional X / Social search
work as one reusable product that can be installed locally, shared publicly,
and routed through the same runtime across multiple environments.
The public OpenClaw page is live here:
- clawhub.ai/skernelx/mysearch
- The screenshot below is a real capture from 2026-03-17
- Always use the live ClawHub page as the source of truth for current scan status
- `mysearch/` - the installable MySearch MCP
  - ships `search`, `extract_url`, `research`, and `mysearch_health`
- `proxy/` - the proxy layer and web console
  - manages Tavily / Firecrawl key pools, downstream tokens, quota sync, and `/social/search` routing
- `skill/README_EN.md` - MySearch skill guide for Codex / Claude Code
  - includes the "tell the AI to install it for me" flow
- `openclaw/README_EN.md` - bundled MySearch skill guide for OpenClaw / ClawHub
  - includes the "tell the AI to install the OpenClaw skill" flow
- `docs/mysearch-architecture.md` - architecture and design boundaries
Many search MCPs or search skills have the same limitations:
- they only do web search and cannot extract content cleanly
- they work for news, but not for docs, GitHub, PDFs, pricing, or changelogs
- they ship prompts but not a real MCP runtime
- they ship a key panel but not an agent-ready search workflow
- they assume official APIs only and become awkward to self-host
- they lose most of their value if X / Social is unavailable
MySearch Proxy solves this by separating the stack into clear layers:
```
tavily-key-generator
  -> provider layer for Tavily / Firecrawl and optional aggregation APIs

MySearch Proxy
  -> MCP, Skills, OpenClaw Skill, Proxy Console, Social / X routing

Codex / Claude Code / OpenClaw / custom agents
  -> one shared search entry
```
The default recommendation is not "paste official keys everywhere". The recommended public deployment is:
- `tavily-key-generator` for Tavily / Firecrawl provider delivery
- `MySearch Proxy` for routing, MCP, skills, and console
MySearch routes by task type:
- general web and news -> Tavily
- docs, GitHub, PDFs, pricing, changelogs, extraction -> Firecrawl
- X / Social -> xAI or a compatible `/social/search` backend
That makes it a real search orchestrator, not a one-provider wrapper.
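The task-type routing above can be sketched as a simple lookup table. This is a minimal illustration of the routing idea only, not MySearch's actual implementation; the mode names follow the list above:

```python
# Hypothetical sketch of MySearch-style task routing -- illustrative only.
ROUTES = {
    "web": "tavily",
    "news": "tavily",
    "docs": "firecrawl",
    "github": "firecrawl",
    "pdf": "firecrawl",
    "pricing": "firecrawl",
    "changelog": "firecrawl",
    "extract": "firecrawl",
    "social": "xai",  # or any compatible /social/search backend
}

def pick_provider(mode: str) -> str:
    """Return the preferred provider for a task type, defaulting to Tavily."""
    return ROUTES.get(mode, "tavily")

print(pick_provider("docs"))    # firecrawl
print(pick_provider("social"))  # xai
```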
This repository ships:
- MCP
- Codex / Claude Code skill
- OpenClaw skill
- Proxy Console
So the same search logic can be reused across local agents, OpenClaw, and team gateways instead of being rewritten per runtime.
The value is not only search:
- `extract_url` - prefers Firecrawl and falls back to Tavily extract
- `research` - bundles search, extraction, and evidence into a lightweight research flow
That is much more useful for agent workflows than returning a few links.
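The "prefer Firecrawl, fall back to Tavily extract" behavior can be sketched like this. The function and parameter names here are hypothetical, not MySearch's real internals:

```python
# Hedged sketch of extract_url's provider fallback -- not the real code.
def extract_url(url, firecrawl_scrape, tavily_extract):
    """Try Firecrawl first; on any failure, fall back to Tavily extract."""
    try:
        return {"provider": "firecrawl", "content": firecrawl_scrape(url)}
    except Exception as exc:
        # Record why the preferred provider was skipped, then degrade gracefully.
        return {
            "provider": "tavily",
            "content": tavily_extract(url),
            "fallback_reason": str(exc),
        }

def broken_firecrawl(url):
    raise RuntimeError("firecrawl unavailable")

result = extract_url("https://example.com", broken_firecrawl, lambda u: "page text")
print(result["provider"])  # tavily
```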
You can:
- use official Tavily / Firecrawl / xAI APIs
- override `BASE_URL` + `PATH` + `AUTH_*` to route Tavily / Firecrawl through your own aggregation gateway
- route X / Social through a compatible `/social/search` backend
That matters if you want a reusable public project instead of a one-off setup.
Without xAI or grok2api, the stack still supports:
`web`, `news`, `docs`, `github`, `pdf`, `extract`, and `research`
Only explicit social routes degrade.
Works well for:
- Codex
- Claude Code
- other MCP-capable local assistants
Use cases:
- current web search
- docs / GitHub / pricing / changelog lookup
- single-page content extraction
- lightweight research packs
- optional X / Social evidence
Useful if you want to:
- replace an older Tavily-only search skill
- give OpenClaw a better web + docs + social workflow
- publish a reusable search skill on ClawHub
Useful if you want:
- one shared entry for multiple downstream tools
- separate upstream provider keys from downstream tokens
- a single UI for Tavily, Firecrawl, and Social / X operations
Useful if you already have:
- your own Tavily / Firecrawl gateway
- `grok2api` or another xAI-compatible backend
- a need to centralize search logic instead of scattering scripts
The default recommended setup is:
```
tavily-key-generator
  -> provides Tavily / Firecrawl provider access or aggregation APIs

MySearch Proxy
  -> connects Tavily / Firecrawl / X
  -> exposes MCP, Skills, OpenClaw Skill, and Proxy Console

Codex / Claude Code / OpenClaw / custom agents
  -> use MySearch as the unified search path
```
Why tavily-key-generator is the default recommendation:
- it is a good provider layer for Tavily / Firecrawl
- you do not need to expose official keys to every downstream installation
- MySearch only needs to connect to the normalized endpoints it exposes
If you already have official keys, direct official mode still works fine.
Primary role:
- general web search
- news
- default discovery for research flows
Recommended connection:
- official Tavily API
- or skernelx/tavily-key-generator
Without Tavily:
- `web` and `news`
- the discovery stage of the default `research` flow

These become weaker, but docs and extraction can still lean on Firecrawl.
Primary role:
- docs
- GitHub
- PDFs
- pricing
- changelogs
- content extraction
Recommended connection:
- official Firecrawl API
- or skernelx/tavily-key-generator
Without Firecrawl:
- `docs` / `github` / `pdf` / `pricing` / `changelog`
- extraction quality

These drop in quality, but general web and news still work through Tavily, and
`extract_url` will still try to fall back to Tavily extract.
Primary role:
- X / Social search
- sentiment
- developer conversations
Recommended connection:
- official xAI
- or a compatible `/social/search` backend
Without X / Social:
- `mode="social"` is unavailable
- `research(include_social=true)` still returns web evidence and adds `social_error`

So a missing X provider is not a blocker for MySearch as a general-purpose search stack.
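The degradation behavior described above can be sketched as follows. This is an illustrative model of "return web evidence plus a `social_error` instead of failing", with hypothetical function names, not MySearch's actual `research` implementation:

```python
# Illustrative sketch of graceful social degradation in a research flow.
def research(query, web_search, social_search=None, include_social=False):
    report = {"query": query, "web": web_search(query)}
    if include_social:
        if social_search is None:
            # No X / Social provider configured: degrade, do not fail.
            report["social_error"] = "no X / Social provider configured"
        else:
            try:
                report["social"] = social_search(query)
            except Exception as exc:
                report["social_error"] = str(exc)
    return report

out = research("MCP adoption", web_search=lambda q: ["result"], include_social=True)
print("social_error" in out)  # True
```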
You do not need every part of the repo for every deployment.
The easiest option is to send this instruction to Codex or Claude Code:
Open skill/README_EN.md and skill/SKILL.md from this repository, install MySearch for me, run install.sh from the repo root if the MCP is not registered yet, then run the health check and smoke tests and tell me the result.
If you are only sharing the GitHub link, this also works:
Please read https://github.com/skernelx/MySearch-Proxy/tree/main/skill and automatically install and verify MySearch for me.
```bash
python3 -m venv venv
cp mysearch/.env.example mysearch/.env
./install.sh
```

Minimal config:

```bash
MYSEARCH_TAVILY_API_KEY=tvly-...
MYSEARCH_FIRECRAWL_API_KEY=fc-...
```

If you want to route through tavily-key-generator, use normalized gateway endpoints instead:

```bash
MYSEARCH_TAVILY_BASE_URL=https://your-search-gateway.example.com
MYSEARCH_TAVILY_SEARCH_PATH=/api/search
MYSEARCH_TAVILY_EXTRACT_PATH=/api/extract
MYSEARCH_TAVILY_AUTH_MODE=bearer
MYSEARCH_TAVILY_API_KEY=your-token

MYSEARCH_FIRECRAWL_BASE_URL=https://your-search-gateway.example.com
MYSEARCH_FIRECRAWL_SEARCH_PATH=/firecrawl/v2/search
MYSEARCH_FIRECRAWL_SCRAPE_PATH=/firecrawl/v2/scrape
MYSEARCH_FIRECRAWL_AUTH_MODE=bearer
MYSEARCH_FIRECRAWL_API_KEY=your-token
```

The root install.sh will:
- install `mysearch/requirements.txt`
- auto-register Claude Code if available
- auto-register Codex if available
- inject `MYSEARCH_*` variables from `mysearch/.env`
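The `MYSEARCH_*` injection step can be pictured roughly like this. This is a sketch of the idea only, not the actual install.sh logic:

```python
# Sketch: read mysearch/.env-style text and keep only MYSEARCH_* variables.
# Illustrative of the injection idea, not the real install.sh implementation.
def parse_mysearch_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        if key.startswith("MYSEARCH_"):
            env[key] = value
    return env

sample = """# comment
MYSEARCH_TAVILY_API_KEY=tvly-abc
OTHER_VAR=ignored
MYSEARCH_FIRECRAWL_API_KEY=fc-xyz
"""
print(sorted(parse_mysearch_env(sample)))
```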
The default registration target is local stdio.
If you also want a remote streamableHTTP endpoint, start it separately:
```bash
./venv/bin/python -m mysearch \
  --transport streamable-http \
  --host 0.0.0.0 \
  --port 8000 \
  --streamable-http-path /mcp
```

Default remote endpoint: `http://127.0.0.1:8000/mcp`
There are two different install paths here:
- local `stdio`
  - best when Codex / Claude Code runs the MCP on the same machine
  - use `./install.sh`
- remote `streamableHTTP`
  - best when MySearch runs on a server and clients connect by URL
  - clients do not need to run `./install.sh` locally
If you want Codex to connect to a remote MySearch, this flow is already
tested:
```bash
codex mcp add mysearch --url http://127.0.0.1:8000/mcp
codex mcp get mysearch
```

If the remote endpoint is behind a reverse proxy or bearer auth:

```bash
export MYSEARCH_MCP_BEARER_TOKEN=your-token
codex mcp add mysearch \
  --url https://mysearch.example.com/mcp \
  --bearer-token-env-var MYSEARCH_MCP_BEARER_TOKEN
codex mcp get mysearch
```

Notes:
- the Codex `--url` flow is the `streamableHTTP` path
- these commands have already been tested locally
- if Claude Code is still using a local MCP config flow, keep using the default `stdio` install path
- the `openclaw/` bundle does not depend on this remote `streamableHTTP` endpoint
If you want the assistant to understand how to use MySearch, install the skill bundle too:
```bash
bash skill/scripts/install_codex_skill.sh
```

To overwrite an existing local copy:

```bash
bash skill/scripts/install_codex_skill.sh --force
```

The more shareable entry for humans and AI assistants is:
If you want the AI to install the OpenClaw skill directly, the easiest prompt is:
Open openclaw/README_EN.md and openclaw/SKILL.md from this repository, install the MySearch OpenClaw skill for me, copy it into ~/.openclaw/skills/mysearch for local installation, carry over the .env file, run the health check, and tell me the result.
If you are only sharing the GitHub link, this also works:
Please read https://github.com/skernelx/MySearch-Proxy/tree/main/openclaw and automatically install and verify the MySearch OpenClaw skill for me.
Public page:
The official ClawHub docs currently show this generic flow:
```bash
clawhub search "mysearch"
clawhub install <skill-slug>
```

To install from the local bundle instead:

```bash
cp openclaw/.env.example openclaw/.env
bash openclaw/scripts/install_openclaw_skill.sh \
  --install-to ~/.openclaw/skills/mysearch \
  --copy-env openclaw/.env
```

The more shareable entry for humans and AI assistants is:
Default public image:
- Docker Hub: skernelx/mysearch-proxy
- Pull: `docker pull skernelx/mysearch-proxy:latest`

```bash
cd proxy
docker compose up -d
```

or:

```bash
docker run -d \
  --name mysearch-proxy \
  --restart unless-stopped \
  -p 9874:9874 \
  -e ADMIN_PASSWORD=your-admin-password \
  -v $(pwd)/mysearch-proxy-data:/app/data \
  skernelx/mysearch-proxy:latest
```

Open: `http://localhost:9874`
Official xAI mode:

```bash
MYSEARCH_XAI_BASE_URL=https://api.x.ai/v1
MYSEARCH_XAI_RESPONSES_PATH=/responses
MYSEARCH_XAI_SEARCH_MODE=official
MYSEARCH_XAI_API_KEY=xai-...
```

Compatible gateway mode:

```bash
MYSEARCH_XAI_BASE_URL=https://media.example.com/v1
MYSEARCH_XAI_SOCIAL_BASE_URL=https://your-social-gateway.example.com
MYSEARCH_XAI_SEARCH_MODE=compatible
MYSEARCH_XAI_API_KEY=your-social-gateway-token
```

Notes:
- `MYSEARCH_XAI_BASE_URL` points to the model or `/responses` gateway
- `MYSEARCH_XAI_SOCIAL_BASE_URL` points to the social gateway root
- MySearch appends `/social/search` automatically
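The "appends `/social/search` automatically" behavior amounts to joining the gateway root with a fixed suffix. A minimal sketch of that idea (not MySearch's actual URL handling):

```python
# Sketch: combine a social gateway root with the fixed /social/search suffix.
# Illustrative only; MySearch's real URL handling may differ.
def social_search_url(social_base_url: str) -> str:
    """Join the gateway root and suffix without doubling the slash."""
    return social_base_url.rstrip("/") + "/social/search"

print(social_search_url("https://your-social-gateway.example.com"))
# https://your-social-gateway.example.com/social/search
print(social_search_url("https://your-social-gateway.example.com/"))
# https://your-social-gateway.example.com/social/search
```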
If your social backend comes from grok2api, the `proxy/` console can also
inherit `app.api_key` and read token state from the admin endpoints.
After MCP installation:
```bash
claude mcp list
codex mcp list
codex mcp get mysearch
```

Local smoke tests:

```bash
python skill/scripts/check_mysearch.py --health-only
python skill/scripts/check_mysearch.py --web-query "OpenAI latest announcements"
python skill/scripts/check_mysearch.py --docs-query "OpenAI Responses API docs"
```

If X / Social is configured:

```bash
python skill/scripts/check_mysearch.py --social-query "Model Context Protocol"
```

OpenClaw bundle verification:

```bash
python3 openclaw/scripts/mysearch_openclaw.py health
```

The project still works.
You still get:
`web`, `news`, `docs`, `github`, `pdf`, `extract`, and `research`
Only explicitly social workflows degrade.
The default recommendation is not to give up. The recommended path is to connect:
That is why this project supports both direct official APIs and custom gateway mode by default.
- Root overview: README.md
- MCP docs: mysearch/README_EN.md
- Skill docs: skill/README_EN.md
- OpenClaw skill docs: openclaw/README_EN.md
- MCP Chinese: mysearch/README.md
- Proxy docs: proxy/README_EN.md
- Proxy Chinese: proxy/README.md
- Architecture notes: docs/mysearch-architecture.md
This project is a good fit if you want:
- a stronger default search MCP than a single-source wrapper
- an OpenClaw search skill that is installable, publishable, and auditable
- Tavily, Firecrawl, and Social / X managed from one control plane
- your own aggregation APIs wired into a reusable search product
If you only need a small one-off script for web search, this repo may feel larger than necessary.
If you need a reusable public search foundation, that is exactly what it is built for.


