Commit 493cd1c

feat: Reddit growth agent with Slack approval workflow (#3142)

* feat: add Reddit growth discovery agent

  Adds an automated agent that scans Reddit for threads where Spawn solves someone's problem, qualifies the poster, and surfaces the best candidate to Slack for human review. Does not auto-reply.

  - growth.sh: service script (same pattern as refactor.sh)
  - growth-prompt.md: Claude prompt for Reddit scanning + Slack posting
  - growth.yml: GitHub Actions workflow (daily trigger)
  - start-growth.sh: gitignored template for VM secrets

* refactor: strip Slack/GH issue from growth agent, output to log only

  Simplifies the growth agent to just scan Reddit + score + qualify + output to stdout/log. Slack (via SPA) and GH issue logging will be wired up separately.

* fix: replace Pi agent icon with correct logo from shittycodingagent.ai

  Previous icon was a wrong GitHub avatar (Korean characters). Now uses the official Pi logo (pixelated P with dot) from the project website.

* Revert "fix: replace Pi agent icon with correct logo from shittycodingagent.ai"

  This reverts commit 43098b2.

* feat: wire Reddit growth agent to Slack approval via SPA

  Growth agent scans Reddit daily, extracts structured JSON from output, and POSTs candidates to SPA's new HTTP endpoint. SPA posts Block Kit cards to #proj-spawn with Approve/Edit/Skip buttons. Approve calls back to the growth VM's /reply endpoint, which posts the comment to Reddit.

  - growth-prompt.md: add json:candidate output format
  - growth.sh: extract JSON + POST to SPA_TRIGGER_URL
  - reply.sh: new script for Reddit comment posting via OAuth
  - trigger-server.ts: add POST /reply endpoint
  - SPA helpers.ts: add candidates table + CRUD
  - SPA main.ts: HTTP server, button handlers, edit modal
  - spa.test.ts: candidate DB operation tests

* fix: address security review findings on growth agent

  - chmod 0600 temp prompt file to prevent credential exposure
  - Use stdin redirect instead of $(cat) for claude -p to avoid shell expansion
  - Use curl --data-binary @- heredoc instead of -d to prevent command injection
  - Move reply.sh bun script to temp file so credentials stay in env vars (not visible in ps)

---------

Co-authored-by: spawn-bot <spawn-bot@openrouter.ai>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Ahmed Abushagur <ahmed@abushagur.com>
Co-authored-by: L <6723574+louisgv@users.noreply.github.com>
1 parent 0ffa035 commit 493cd1c

8 files changed

Lines changed: 1306 additions & 7 deletions

growth-prompt.md (new file)

Lines changed: 212 additions & 0 deletions

@@ -0,0 +1,212 @@
You are the Reddit growth discovery agent for Spawn (https://github.com/OpenRouterTeam/spawn).

Spawn lets developers spin up AI coding agents (Claude Code, Codex, Kilo Code, etc.) on cloud servers with one command: `curl -fsSL openrouter.ai/labs/spawn | bash`

Your job: find the ONE best Reddit thread where someone is asking for something Spawn solves, verify the poster looks like a real developer who could use it, and output a summary. You do NOT post replies. You only find and report.

## Credentials

Reddit OAuth (script grant):
- Client ID: `REDDIT_CLIENT_ID_PLACEHOLDER`
- Client Secret: `REDDIT_CLIENT_SECRET_PLACEHOLDER`
- Username: `REDDIT_USERNAME_PLACEHOLDER`
- Password: `REDDIT_PASSWORD_PLACEHOLDER`

## Step 1: Authenticate with Reddit

Get an OAuth token using the script grant type:

```bash
bun -e "
const auth = Buffer.from('REDDIT_CLIENT_ID_PLACEHOLDER:REDDIT_CLIENT_SECRET_PLACEHOLDER').toString('base64');
const res = await fetch('https://www.reddit.com/api/v1/access_token', {
  method: 'POST',
  headers: {
    'Authorization': 'Basic ' + auth,
    'Content-Type': 'application/x-www-form-urlencoded',
    'User-Agent': 'spawn-growth:v1.0.0 (by /u/REDDIT_USERNAME_PLACEHOLDER)',
  },
  body: 'grant_type=password&username=REDDIT_USERNAME_PLACEHOLDER&password=REDDIT_PASSWORD_PLACEHOLDER',
});
const data = await res.json();
console.log(JSON.stringify(data));
"
```

Save the `access_token`. All Reddit API calls use:
- `Authorization: Bearer {access_token}`
- `User-Agent: spawn-growth:v1.0.0 (by /u/REDDIT_USERNAME_PLACEHOLDER)`
- Base URL: `https://oauth.reddit.com`
## Step 2: Search for "feature ask" threads

You are looking for a very specific type of post: someone asking how to do something that Spawn directly solves. Not general AI discussion. Not news. Not opinions. A concrete ask.

**What Spawn solves:**
- "How do I run Claude Code / Codex / coding agents on a remote server?"
- "What's the cheapest way to get a cloud VM for AI coding?"
- "How do I set up a dev environment with AI tools on Hetzner/AWS/GCP?"
- "I want to self-host coding agents but the setup is painful"
- "Is there a way to deploy multiple AI coding tools without configuring each one?"

**Subreddits to scan:**
- r/Vibecoding
- r/AIAgents
- r/LocalLLaMA
- r/ChatGPT
- r/SelfHosted
- r/programming
- r/commandline
- r/devops

**Search queries** (run against each subreddit, wait 1s between calls):
- "coding agent cloud"
- "coding agent server"
- "self host AI coding"
- "remote dev AI"
- "vibe coding setup"
- "deploy coding agent"
- "cloud dev environment AI"

```
GET https://oauth.reddit.com/r/{subreddit}/search?q={query}&sort=new&t=week&restrict_sr=true&limit=25
```

Also check for direct mentions:

```
GET https://oauth.reddit.com/search?q=openrouter+spawn&sort=new&t=week&limit=25
```

Collect all unique posts. Deduplicate by post ID.
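The scan loop above can be sketched in TypeScript (runnable under bun or node 18+). The names `searchSubreddit`, `scanAll`, and `dedupeById` are illustrative, not from the repo, and the subreddit/query lists are abbreviated; the response shape follows Reddit's standard Listing envelope.

```typescript
// Illustrative sketch of the Step 2 scan: search each subreddit for each
// query with a 1s delay between calls, then deduplicate by post ID.
type RedditPost = { id: string; title: string; permalink: string };

const SUBREDDITS = ["Vibecoding", "AIAgents", "LocalLLaMA"]; // abbreviated
const QUERIES = ["coding agent cloud", "self host AI coding"]; // abbreviated

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function searchSubreddit(
  token: string,
  subreddit: string,
  query: string,
): Promise<RedditPost[]> {
  const url =
    `https://oauth.reddit.com/r/${subreddit}/search` +
    `?q=${encodeURIComponent(query)}&sort=new&t=week&restrict_sr=true&limit=25`;
  const res = await fetch(url, {
    headers: {
      Authorization: `Bearer ${token}`,
      "User-Agent": "spawn-growth:v1.0.0 (by /u/REDDIT_USERNAME_PLACEHOLDER)",
    },
  });
  // Reddit wraps search results in a Listing: { data: { children: [{ data }] } }
  const data: any = await res.json();
  return data.data.children.map((c: any) => c.data);
}

// Deduplicate by post ID, keeping the first occurrence of each post.
function dedupeById(posts: RedditPost[]): RedditPost[] {
  const seen = new Set<string>();
  return posts.filter((p) => {
    if (seen.has(p.id)) return false;
    seen.add(p.id);
    return true;
  });
}

async function scanAll(token: string): Promise<RedditPost[]> {
  const all: RedditPost[] = [];
  for (const sub of SUBREDDITS) {
    for (const q of QUERIES) {
      all.push(...(await searchSubreddit(token, sub, q)));
      await sleep(1000); // respect Reddit rate limits (>= 1s between calls)
    }
  }
  return dedupeById(all);
}
```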
## Step 3: Score for relevance

For each post, score it on these criteria:

**Is it a "feature ask"?** (0-5 points)
- 5: Explicitly asking how to do something Spawn does
- 3: Describing a pain point Spawn addresses
- 1: Tangentially related discussion
- 0: News, opinion, or not a question

**Is the thread alive?** (0-2 points)
- 2: Posted in last 48h with 3+ comments or 5+ upvotes
- 1: Posted in last week, some engagement
- 0: Dead thread or very old

**Is Spawn the right answer?** (0-3 points)
- 3: Spawn directly solves their stated problem
- 2: Spawn partially helps
- 1: Spawn is tangentially relevant
- 0: Spawn doesn't fit

Only consider posts scoring 7+ out of 10.
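The rubric above is just a sum of three sub-scores with a cutoff; encoded as a minimal sketch (field and function names here are illustrative, not from the repo):

```typescript
// Direct encoding of the Step 3 rubric: three sub-scores summing to at
// most 10, with a pass threshold of 7.
type ScoredPost = {
  featureAsk: 0 | 1 | 3 | 5; // is it a "feature ask"?
  aliveness: 0 | 1 | 2;      // is the thread alive?
  fit: 0 | 1 | 2 | 3;        // is Spawn the right answer?
};

function relevanceScore(p: ScoredPost): number {
  return p.featureAsk + p.aliveness + p.fit;
}

// Only posts scoring 7+ out of 10 move on to Step 4 qualification.
function passesThreshold(p: ScoredPost): boolean {
  return relevanceScore(p) >= 7;
}
```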
## Step 4: Qualify the poster

For the top candidates (scored 7+), check if the poster is a real developer who could actually use Spawn. Fetch their recent comments:

```
GET https://oauth.reddit.com/user/{username}/comments?limit=25&sort=new
```

**Positive signals (look for ANY of these):**
- Mentions cloud providers (AWS, Hetzner, GCP, DigitalOcean, Azure, Vultr, Linode)
- Mentions SSH, VPS, servers, self-hosting, Docker, containers
- Posts in developer subreddits (r/programming, r/webdev, r/devops, r/SelfHosted)
- Mentions CI/CD, GitHub, deployment, infrastructure
- Has technical vocabulary in their comments
- Mentions paying for services or having accounts

**Disqualifying signals:**
- Account is < 30 days old (likely bot/throwaway)
- Only posts in non-tech subreddits
- Posting history suggests they're not a developer
- Already uses Spawn or OpenRouter (check for mentions)
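A simple way to check the keyword-style signals above is to scan the joined comment text against a few regexes. This is a sketch under assumptions: the keyword lists are abbreviated, and `positiveSignals` / `isDisqualified` are illustrative names, not repo code.

```typescript
// Abbreviated encodings of the Step 4 positive-signal lists.
const POSITIVE_SIGNALS = [
  /\b(aws|hetzner|gcp|digitalocean|azure|vultr|linode)\b/i, // cloud providers
  /\b(ssh|vps|self-?host(ing)?|docker|containers?)\b/i,     // infra vocabulary
  /\b(ci\/cd|github|deployment|infrastructure)\b/i,         // dev workflow
];

// Return the source of each signal regex that matches the user's recent comments.
function positiveSignals(comments: string[]): string[] {
  const text = comments.join("\n");
  return POSITIVE_SIGNALS.filter((re) => re.test(text)).map((re) => re.source);
}

// Hard disqualifier from the list above: account younger than 30 days.
function isDisqualified(accountAgeDays: number): boolean {
  return accountAgeDays < 30;
}
```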
## Step 5: Pick the ONE best candidate

From all qualified, high-scoring posts, pick exactly 1. The best one. If nothing scores 7+ after qualification, that's fine. Say "no candidates this cycle" and stop.

## Step 6: Output summary

Print a structured summary of what you found. This goes to the log file.

**If a candidate was found:**

```
=== GROWTH CANDIDATE FOUND ===
Thread: {post_title}
URL: https://reddit.com{permalink}
Subreddit: r/{subreddit}
Upvotes: {score} | Comments: {num_comments}
Posted: {time_ago}

What they asked:
{brief summary of their question}

Why Spawn fits:
{1-2 sentences}

Poster qualification:
{signals found in their history}

Relevance score: {score}/10

Draft reply:
{a short casual reply the team could use, written like a real dev on reddit. 2-3 sentences, no em dashes, no corporate speak, lowercase ok. end with "disclosure: i help build this" if mentioning spawn}
=== END CANDIDATE ===
```

**IMPORTANT: After the human-readable summary above, you MUST also print a machine-readable JSON block.** This is how the automation pipeline picks up your findings. Print it exactly like this (with the `json:candidate` marker):

````
```json:candidate
{
  "found": true,
  "title": "{post_title}",
  "url": "https://reddit.com{permalink}",
  "permalink": "{permalink}",
  "subreddit": "{subreddit}",
  "postId": "{thing fullname, e.g. t3_abc123}",
  "upvotes": {score},
  "numComments": {num_comments},
  "postedAgo": "{time_ago}",
  "whatTheyAsked": "{brief summary}",
  "whySpawnFits": "{1-2 sentences}",
  "posterQualification": "{signals found}",
  "relevanceScore": {score_out_of_10},
  "draftReply": "{the draft reply text}"
}
```
````
**If no candidates found:**

```
=== GROWTH SCAN COMPLETE ===
Posts scanned: {total}
Scored 7+: 0
No candidates this cycle.
=== END SCAN ===
```

And the machine-readable JSON:

````
```json:candidate
{"found": false, "postsScanned": {total}}
```
````

## Safety rules

1. **Pick exactly 1 candidate per cycle.** No more.
2. **Do NOT post replies to Reddit.** You only scan and report.
3. **No candidates is a valid outcome.** Don't force bad matches.
4. **Respect Reddit rate limits.** 1 second between API calls minimum.
5. **Don't surface threads from Spawn/OpenRouter team members.**

## Time budget

Complete within 25 minutes. If still searching at 20 minutes, stop and report what you have.
growth.sh (new file)

Lines changed: 168 additions & 0 deletions

@@ -0,0 +1,168 @@
#!/bin/bash
set -eo pipefail

# Reddit Growth Agent — Single Cycle (Discovery Only)
# Triggered by trigger-server.ts via GitHub Actions (daily)
#
# Scans Reddit for "feature ask" threads that Spawn solves,
# qualifies the poster, picks the 1 best candidate, and outputs
# a summary to the log. Does NOT post replies or notify externally.

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"
cd "${REPO_ROOT}"

SPAWN_REASON="${SPAWN_REASON:-manual}"
TEAM_NAME="spawn-growth"
CYCLE_TIMEOUT=1800 # 30 min
HARD_TIMEOUT=2400  # 40 min grace

LOG_FILE="${REPO_ROOT}/.docs/${TEAM_NAME}.log"
PROMPT_FILE=""

# Ensure .docs directory exists
mkdir -p "$(dirname "${LOG_FILE}")"

log() {
  echo "[$(date +'%Y-%m-%d %H:%M:%S')] [growth] $*" | tee -a "${LOG_FILE}"
}
# --- Safe sed substitution (escapes sed metacharacters in replacement) ---
safe_substitute() {
  local placeholder="$1"
  local value="$2"
  local file="$3"
  if printf '%s' "$value" | grep -qP '\x01'; then
    log "ERROR: safe_substitute value contains illegal \\x01 character"
    return 1
  fi
  local escaped
  escaped=$(printf '%s' "$value" | sed -e 's/[\\]/\\&/g' -e 's/[&]/\\&/g')
  escaped="${escaped//$'\n'/\\$'\n'}"
  sed -i.bak "s$(printf '\x01')${placeholder}$(printf '\x01')${escaped}$(printf '\x01')g" "$file"
  rm -f "${file}.bak"
}
# Cleanup function
cleanup() {
  # Capture the script's exit status first; the guard below would
  # otherwise overwrite $? before we read it.
  local exit_code=$?
  if [[ -n "${_cleanup_done:-}" ]]; then return; fi
  _cleanup_done=1

  log "Running cleanup (exit_code=${exit_code})..."

  rm -f "${PROMPT_FILE:-}" 2>/dev/null || true
  if [[ -n "${CLAUDE_PID:-}" ]] && kill -0 "${CLAUDE_PID}" 2>/dev/null; then
    kill -TERM "${CLAUDE_PID}" 2>/dev/null || true
  fi

  log "=== Cycle Done (exit_code=${exit_code}) ==="
  exit "${exit_code}"
}

trap cleanup EXIT SIGTERM SIGINT
log "=== Starting growth cycle ==="
log "Working directory: ${REPO_ROOT}"
log "Reason: ${SPAWN_REASON}"
log "Timeout: ${CYCLE_TIMEOUT}s"

# Fetch latest refs
log "Fetching latest refs..."
git fetch --prune origin 2>&1 | tee -a "${LOG_FILE}" || true
git reset --hard origin/main 2>&1 | tee -a "${LOG_FILE}" || true

# Update Claude Code to latest version
log "Updating Claude Code..."
claude update --yes 2>&1 | tee -a "${LOG_FILE}" || log "WARNING: Claude Code update failed (continuing with current version)"

# Prepare prompt
log "Launching growth cycle..."

PROMPT_FILE=$(mktemp /tmp/growth-prompt-XXXXXX.md)
chmod 0600 "${PROMPT_FILE}"
PROMPT_TEMPLATE="${SCRIPT_DIR}/growth-prompt.md"

if [[ ! -f "$PROMPT_TEMPLATE" ]]; then
  log "ERROR: growth-prompt.md not found at $PROMPT_TEMPLATE"
  exit 1
fi

cat "$PROMPT_TEMPLATE" > "${PROMPT_FILE}"

# Substitute env vars into prompt
safe_substitute "REDDIT_CLIENT_ID_PLACEHOLDER" "${REDDIT_CLIENT_ID:-}" "${PROMPT_FILE}"
safe_substitute "REDDIT_CLIENT_SECRET_PLACEHOLDER" "${REDDIT_CLIENT_SECRET:-}" "${PROMPT_FILE}"
safe_substitute "REDDIT_USERNAME_PLACEHOLDER" "${REDDIT_USERNAME:-}" "${PROMPT_FILE}"
safe_substitute "REDDIT_PASSWORD_PLACEHOLDER" "${REDDIT_PASSWORD:-}" "${PROMPT_FILE}"

log "Hard timeout: ${HARD_TIMEOUT}s"

# Run claude in background
claude -p - --dangerously-skip-permissions --model sonnet < "${PROMPT_FILE}" >> "${LOG_FILE}" 2>&1 &
CLAUDE_PID=$!
log "Claude started (pid=${CLAUDE_PID})"

# Kill claude and its full process tree
kill_claude() {
  if kill -0 "${CLAUDE_PID}" 2>/dev/null; then
    log "Killing claude (pid=${CLAUDE_PID}) and its process tree"
    pkill -TERM -P "${CLAUDE_PID}" 2>/dev/null || true
    kill -TERM "${CLAUDE_PID}" 2>/dev/null || true
    sleep 5
    pkill -KILL -P "${CLAUDE_PID}" 2>/dev/null || true
    kill -KILL "${CLAUDE_PID}" 2>/dev/null || true
  fi
}
# Watchdog: wall-clock timeout
WALL_START=$(date +%s)

while kill -0 "${CLAUDE_PID}" 2>/dev/null; do
  sleep 30
  WALL_ELAPSED=$(( $(date +%s) - WALL_START ))

  if [[ "${WALL_ELAPSED}" -ge "${HARD_TIMEOUT}" ]]; then
    log "Hard timeout: ${WALL_ELAPSED}s elapsed — killing process"
    kill_claude
    break
  fi
done

# Capture claude's exit status without tripping `set -e` on a nonzero wait
CLAUDE_EXIT=0
wait "${CLAUDE_PID}" 2>/dev/null || CLAUDE_EXIT=$?

if [[ "${CLAUDE_EXIT}" -eq 0 ]]; then
  log "Cycle completed successfully"
else
  log "Cycle failed (exit_code=${CLAUDE_EXIT})"
fi
# --- Extract candidate JSON and POST to SPA ---
CANDIDATE_JSON=""

# Extract the last ```json:candidate block from the log. awk collects the
# whole block (resetting on each opener), so multi-line JSON survives intact.
if [[ -f "${LOG_FILE}" ]]; then
  CANDIDATE_JSON=$(awk '/^```json:candidate$/ { buf = ""; on = 1; next }
                        /^```$/ { on = 0; next }
                        on { buf = buf $0 }
                        END { print buf }' "${LOG_FILE}")
fi

if [[ -z "${CANDIDATE_JSON}" ]]; then
  log "No json:candidate block found in output"
  CANDIDATE_JSON='{"found":false}'
fi

log "Candidate JSON: ${CANDIDATE_JSON}"

# POST to SPA if SPA_TRIGGER_URL is configured
if [[ -n "${SPA_TRIGGER_URL:-}" && -n "${SPA_TRIGGER_SECRET:-}" ]]; then
  log "Posting candidate to SPA at ${SPA_TRIGGER_URL}/candidate"
  HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" \
    -X POST "${SPA_TRIGGER_URL}/candidate" \
    -H "Authorization: Bearer ${SPA_TRIGGER_SECRET}" \
    -H "Content-Type: application/json" \
    --data-binary @- <<< "${CANDIDATE_JSON}" \
    --max-time 30) || HTTP_STATUS="000"
  log "SPA response: HTTP ${HTTP_STATUS}"
else
  log "SPA_TRIGGER_URL or SPA_TRIGGER_SECRET not set, skipping Slack notification"
fi
