Merged
51 changes: 51 additions & 0 deletions .github/workflows/auto-merge-dependabot.yml
@@ -0,0 +1,51 @@
name: Auto-merge Dependabot PRs

on:
pull_request:
types: [opened, synchronize, reopened]

permissions:
contents: write
pull-requests: write

jobs:
auto-merge:
runs-on: ubuntu-latest
# Only run for Dependabot PRs
if: github.actor == 'dependabot[bot]'
steps:
- name: Dependabot metadata
id: metadata
uses: dependabot/fetch-metadata@v2
with:
github-token: "${{ secrets.GITHUB_TOKEN }}"

- name: Wait for CI checks
uses: lewagon/wait-on-check-action@v1.3.4
with:
ref: ${{ github.event.pull_request.head.sha }}
check-name: 'build'
repo-token: ${{ secrets.GITHUB_TOKEN }}
wait-interval: 10

- name: Auto-approve for patch and minor updates
if: steps.metadata.outputs.update-type == 'version-update:semver-patch' || steps.metadata.outputs.update-type == 'version-update:semver-minor'
run: gh pr review --approve "$PR_URL"
env:
PR_URL: ${{ github.event.pull_request.html_url }}
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

- name: Enable auto-merge for patch and minor updates
if: steps.metadata.outputs.update-type == 'version-update:semver-patch' || steps.metadata.outputs.update-type == 'version-update:semver-minor'
run: gh pr merge --auto --squash "$PR_URL"
env:
PR_URL: ${{ github.event.pull_request.html_url }}
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

- name: Comment on major updates
if: steps.metadata.outputs.update-type == 'version-update:semver-major'
run: |
gh pr comment "$PR_URL" --body "⚠️ This is a major version update. Please review manually before merging."
env:
PR_URL: ${{ github.event.pull_request.html_url }}
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
5 changes: 4 additions & 1 deletion .github/workflows/ci.yml
@@ -3,7 +3,7 @@ name: Test
on:
pull_request:
branches: [ main ]
push:
push:
branches: [ main, dev, 'feature/*' ]

jobs:
@@ -17,6 +17,9 @@ jobs:
with:
toolchain: stable
override: true
components: clippy
- name: Run clippy
run: cargo clippy --all-targets --all-features -- -D warnings
- name: Build dev
run: |
cargo install -q worker-build
170 changes: 170 additions & 0 deletions CLAUDE.md
@@ -0,0 +1,170 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

**tul** is a lightweight Cloudflare Worker proxy written in Rust/WASM that provides multiple proxy modes:
- Trojan over WebSocket protocol for secure proxying
- Universal API proxy for routing any HTTP/HTTPS requests
- Docker registry proxy (defaults to Docker Hub)
- DNS over HTTPS (DoH) proxy with Cloudflare IP detection
- Website mirroring with content rewriting

The project compiles Rust to WebAssembly and deploys to Cloudflare Workers using the `worker` crate.

## Development Commands

### Build and Deploy
```bash
# Build and deploy to Cloudflare Workers
make deploy
# or
npx wrangler deploy

# Run locally for development
make dev
# or
npx wrangler dev -c .wrangler.dev.toml
```

### Testing
```bash
# Run all tests
cargo test

# Run specific test
cargo test test_parse_path

# Run tests without executing (compile only)
cargo test --no-run
```

### Build Configuration
The project uses `worker-build` to compile Rust to WASM:
```bash
cargo install -q worker-build && worker-build --release
```

## Architecture

### Request Routing (src/lib.rs)
The main entry point uses a simple router that directs all requests to a single `handler` function in `src/proxy/mod.rs`. The handler performs path-based routing to different proxy modes.

### Proxy Modes (src/proxy/mod.rs)

The main `handler` function routes requests based on path patterns:

1. **Trojan WebSocket** (`/tj` or custom PREFIX): Routes to `tj()` function
- Establishes WebSocket connection
- Parses Trojan protocol (password hash validation)
- Performs DNS lookup with CF IP detection
- Proxies bidirectional traffic between WebSocket and TCP socket

2. **DNS over HTTPS** (`/dns-query`): Routes to `dns::resolve_handler()`
- Proxies DNS queries to upstream DoH server (default: 1.1.1.1)
- Checks if resolved IPs belong to Cloudflare network
- Uses prefix trie for efficient CF IP range matching

3. **Docker Registry** (`/v2/*`): Routes to `api::image_handler()`
- Supports multiple registries via `ns` query parameter (docker.io, gcr.io, quay.io, ghcr.io, registry.k8s.io)
- Defaults to Docker Hub (registry-1.docker.io)

4. **Website Mirroring/API Proxy** (all other paths): Routes to `api::handler()`
- Parses path as `/{domain}[:{port}][/path]`
- Uses cookie-based domain persistence for multi-request sessions
- Rewrites HTML content to replace absolute URLs with proxied versions
- Removes hop-by-hop headers before forwarding
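The dispatch above can be sketched as a plain path match. This is an illustrative std-only stand-in: the mode labels and the `route()` function are assumptions for this example, while the real `handler` calls into the `tj`, `dns`, and `api` modules directly.

```rust
// Illustrative sketch of handler()'s path-based dispatch; the returned
// labels are placeholders for the real module calls.
fn route(path: &str, prefix: &str) -> &'static str {
    if path == prefix {
        // e.g. "/tj" or the custom PREFIX secret
        "trojan-websocket"
    } else if path == "/dns-query" {
        "doh"
    } else if path == "/v2" || path.starts_with("/v2/") {
        "docker-registry"
    } else {
        // everything else is treated as /{domain}[:{port}][/path]
        "mirror-or-api"
    }
}

fn main() {
    assert_eq!(route("/tj", "/tj"), "trojan-websocket");
    assert_eq!(route("/v2/library/nginx/manifests/latest", "/tj"), "docker-registry");
    assert_eq!(route("/example.com/page", "/tj"), "mirror-or-api");
    println!("ok");
}
```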

### Key Components

**src/proxy/tj.rs**: Trojan protocol parser
- Validates 56-byte SHA224 password hash
- Parses SOCKS5-like address format (IPv4 or domain)
- Returns target hostname and port
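The header layout that this parser consumes can be sketched as follows. The field offsets follow the Trojan protocol's published framing, but the function name, signature, and the simplified byte-equality hash check are assumptions for illustration; the real `tj.rs` code may differ.

```rust
// Hypothetical sketch of Trojan request-header parsing: 56 hex chars of
// SHA224(password), CRLF, CMD, then a SOCKS5-like address.
fn parse_trojan_head(buf: &[u8], expected_hash: &[u8]) -> Option<(String, u16)> {
    // 56-byte hex digest terminated by CRLF.
    if buf.len() < 58 || &buf[..56] != expected_hash || &buf[56..58] != b"\r\n" {
        return None;
    }
    let rest = &buf[58..];
    // CMD (0x01 = CONNECT), then the address type.
    match (*rest.first()?, *rest.get(1)?) {
        (0x01, 0x01) => {
            // ATYP 0x01: 4-byte IPv4 address + big-endian port.
            let ip = rest.get(2..6)?;
            let port = u16::from_be_bytes([*rest.get(6)?, *rest.get(7)?]);
            Some((format!("{}.{}.{}.{}", ip[0], ip[1], ip[2], ip[3]), port))
        }
        (0x01, 0x03) => {
            // ATYP 0x03: length-prefixed domain + big-endian port.
            let len = *rest.get(2)? as usize;
            let host = std::str::from_utf8(rest.get(3..3 + len)?).ok()?;
            let port = u16::from_be_bytes([*rest.get(3 + len)?, *rest.get(4 + len)?]);
            Some((host.to_string(), port))
        }
        _ => None,
    }
}

fn main() {
    let mut buf = vec![b'a'; 56]; // stand-in for the real SHA224 hex digest
    buf.extend_from_slice(b"\r\n\x01\x03\x05a.com\x01\xbb"); // domain "a.com", port 443
    assert_eq!(parse_trojan_head(&buf, &[b'a'; 56]), Some(("a.com".to_string(), 443)));
    println!("ok");
}
```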

**src/proxy/dns.rs**: DNS resolution and CF IP detection
- Maintains prefix trie of Cloudflare IP ranges
- Queries DoH endpoint and parses JSON responses
- Returns whether target is behind Cloudflare

**src/proxy/api.rs**: HTTP/HTTPS proxy handler
- Forwards requests with header manipulation
- Rewrites HTML content for website mirroring
- Handles content-type specific processing

**src/proxy/websocket.rs**: WebSocket stream wrapper
- Implements AsyncRead/AsyncWrite for WebSocket
- Enables bidirectional copying with tokio::io::copy_bidirectional

### Configuration via Cloudflare Secrets

The application reads configuration from Cloudflare Worker secrets:
- `PASSWORD`: Trojan password (hashed with SHA224)
- `PREFIX`: Trojan WebSocket path prefix (default: `/tj`)
- `PROXY_DOMAINS`: Comma-separated domains for special handling (currently unused)
- `FORWARD_HOST`: Optional host for forwarding (currently unused)
- `DOH_HOST`: DoH server hostname (default: `1.1.1.1`)

These are set via `npx wrangler secret put <NAME>` or through GitHub Actions during deployment.

### Path Parsing Logic

The `parse_path()` function extracts domain, port, and path from URL patterns:
- `/{domain}` → domain only
- `/{domain}:{port}` → domain and port
- `/{domain}/path` → domain and path
- `/{domain}:{port}/path` → all three components
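The four patterns above can be sketched with plain string splitting. This is a std-only illustration assuming a `(domain, port, path)` tuple return; the real `parse_path()` signature may differ.

```rust
// Sketch of the parse_path() logic described above (hypothetical signature).
fn parse_path(path: &str) -> Option<(String, Option<u16>, String)> {
    let rest = path.strip_prefix('/')?;
    if rest.is_empty() {
        return None;
    }
    // Split the first segment ("{domain}[:{port}]") from the remaining path.
    let (head, tail) = match rest.find('/') {
        Some(i) => (&rest[..i], &rest[i..]),
        None => (rest, ""),
    };
    // Peel an optional ":{port}" suffix; if it is not a number, treat the
    // whole segment as the domain.
    let (domain, port) = match head.rsplit_once(':') {
        Some((d, p)) => match p.parse::<u16>() {
            Ok(n) => (d, Some(n)),
            Err(_) => (head, None),
        },
        None => (head, None),
    };
    Some((domain.to_string(), port, tail.to_string()))
}

fn main() {
    assert_eq!(
        parse_path("/example.com:8443/api/v1"),
        Some(("example.com".to_string(), Some(8443), "/api/v1".to_string()))
    );
    println!("ok");
}
```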

### Cloudflare IP Detection

The DNS module maintains a prefix trie of CF IP ranges and checks if resolved IPs belong to Cloudflare. This is critical for the Trojan proxy mode - if the target is behind CF, the connection is closed with a message to use DoH and connect directly (to avoid CF blocking CF-to-CF connections).
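The containment test behind that decision reduces to prefix matching on IPv4 addresses. As a simplified stand-in for the trie, the sketch below does a linear scan over `(network, prefix-length)` pairs with u32 masks; the two sample ranges are well-known Cloudflare IPv4 blocks, and the function name is an assumption.

```rust
use std::net::Ipv4Addr;

// Linear-scan stand-in for the longest-prefix trie lookup described above.
fn in_ranges(ip: Ipv4Addr, ranges: &[(Ipv4Addr, u8)]) -> bool {
    let ip = u32::from(ip);
    ranges.iter().any(|&(net, len)| {
        // Build a mask of `len` leading one-bits and compare network prefixes.
        let mask = if len == 0 { 0 } else { u32::MAX << (32 - len) };
        (ip & mask) == (u32::from(net) & mask)
    })
}

fn main() {
    let cf = [
        (Ipv4Addr::new(104, 16, 0, 0), 13),
        (Ipv4Addr::new(172, 64, 0, 0), 13),
    ];
    assert!(in_ranges(Ipv4Addr::new(104, 16, 132, 229), &cf));
    assert!(!in_ranges(Ipv4Addr::new(8, 8, 8, 8), &cf));
    println!("ok");
}
```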

### Header Handling

The `get_hop_headers()` function defines headers that must be removed when proxying:
- Standard hop-by-hop headers (Connection, Upgrade, etc.)
- Proxy-specific headers (X-Forwarded-*, Via, etc.)
- Cloudflare headers (CF-Ray, CF-IPCountry, etc.)
- **Exception**: `cf-connecting-ip` is preserved to avoid CF CDN blocking
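The filtering rule, including the `cf-connecting-ip` exception, can be sketched with a case-insensitive lookup. The list below is a representative subset, not the crate's full `get_hop_headers()` set, and the predicate name is an assumption.

```rust
// Sketch of hop-by-hop filtering: returns true if a header should be
// stripped before forwarding (subset of the real list for illustration).
fn is_hop_header(name: &str) -> bool {
    const HOP: &[&str] = &[
        "connection", "upgrade", "keep-alive", "transfer-encoding",
        "x-forwarded-for", "via", "cf-ray", "cf-ipcountry",
    ];
    let name = name.to_ascii_lowercase();
    // Exception noted above: cf-connecting-ip is preserved.
    name != "cf-connecting-ip" && HOP.contains(&name.as_str())
}

fn main() {
    assert!(is_hop_header("Connection"));
    assert!(is_hop_header("CF-Ray"));
    assert!(!is_hop_header("CF-Connecting-IP"));
    assert!(!is_hop_header("Content-Type"));
    println!("ok");
}
```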

## Deployment

### GitHub Actions Workflows

**Deployment** (`.github/workflows/cf.yml`):
1. Installs Rust toolchain and wrangler
2. Checks for existing secrets and creates them if needed
3. Runs `npx wrangler deploy`
4. Redacts worker URLs in output for security

**CI Testing** (`.github/workflows/ci.yml`):
- Runs on PRs to main and pushes to main/dev/feature branches
- Builds the project in dev mode using `worker-build --dev`

**Dependabot Auto-merge** (`.github/workflows/auto-merge-dependabot.yml`):
- Automatically merges Dependabot PRs for patch and minor version updates
- Waits for CI checks to pass before merging
- Uses squash merge strategy
- For major version updates, adds a comment requesting manual review
- Requires `contents: write` and `pull-requests: write` permissions

### Manual Deployment
1. Set `CLOUDFLARE_API_TOKEN` in `.env` file
2. Run `make deploy`

### Required Secrets
Configure in GitHub repository settings under Secrets and variables → Actions:
- `CLOUDFLARE_API_TOKEN`: Cloudflare API token with Workers permissions
- `PASSWORD`: Trojan password
- `PREFIX`: Trojan path prefix
- `PROXY_DOMAINS`: (optional) Comma-separated proxy domains
- `FORWARD_HOST`: (optional) Forward host configuration

## Important Notes

- The project uses aggressive optimization for WASM: `opt-level = "z"`, LTO, and wasm-opt with `-Oz`
- WebSocket early data is not supported by Cloudflare Workers
- Cloudflare-to-Cloudflare connections may be blocked, hence the CF IP detection logic
- The 10-second read/write timeout may truncate large file downloads; use resume-capable tools (`curl -C -`, `wget -c`)
- Cookie-based domain persistence (`tul_host` cookie) enables multi-request website mirroring sessions
4 changes: 2 additions & 2 deletions Cargo.lock


4 changes: 2 additions & 2 deletions Cargo.toml
@@ -17,7 +17,7 @@ worker-macros = { version = "0.7.2" }
futures = "0.3.31"
wasm-bindgen-futures = "0.4.56"

tokio = { version = "1.48.0", features = ["io-util", "sync"], default-features = false }
tokio = { version = "1.49.0", features = ["io-util", "sync"], default-features = false }
regex = "1.12.2"
getrandom = { version = "0.3", features = ["wasm_js"] }
sha2 = "0.10.9"
@@ -38,4 +38,4 @@ codegen-units = 1


[package.metadata.wasm-pack.profile.release]
wasm-opt = ["-Oz", "--enable-bulk-memory", "--all-features"]
wasm-opt = ["-Oz", "--enable-bulk-memory", "--all-features"]
44 changes: 21 additions & 23 deletions src/proxy/api.rs
@@ -7,12 +7,12 @@ use regex::Regex;
static REGISTRY: &str = "registry-1.docker.io";


fn replace_host(content: &mut String, src: &str, dest: &str) -> Result<String> {
fn replace_host(content: &mut str, src: &str, dest: &str) -> Result<String> {

let re = Regex::new(r#"(?P<attr>src|href)(?P<eq>=)(?P<quote>['"]?)(?P<url>(//|https://))"#)
.map_err(|_e| worker::Error::BadEncoding)?;

let result = re.replace_all(&content, |caps: &regex::Captures| {
let result = re.replace_all(content, |caps: &regex::Captures| {
let attr = &caps["attr"];
let eq = &caps["eq"];
let quote = &caps["quote"];
@@ -24,8 +24,8 @@ fn replace_host(content: &mut String, src: &str, dest: &str) -> Result<String> {
caps[0].to_string()
}
});
return Ok(result.into_owned()
.replace(&format!("//{}", src), &format!("//{}/{}", dest, src)));
Ok(result.into_owned()
.replace(&format!("//{}", src), &format!("//{}/{}", dest, src)))
}

pub async fn image_handler(req: Request, query: Option<HashMap<String, String>>) -> Result<Response> {
@@ -42,9 +42,10 @@ pub async fn image_handler(req: Request, query: Option<HashMap<String, String>>)

let full_url = format!("https://{}{}", domain, req_url.path());
if let Ok(url) = Url::parse(&full_url) {
return handler(req, url, domain).await;
handler(req, url, domain).await
} else {
Response::error( "Not Found",404)
}
return Response::error( "Not Found",404);
}

pub async fn handler(mut req: Request, uri: Url, dst_host: &str) -> Result<Response> {
@@ -79,7 +80,7 @@ pub async fn handler(mut req: Request, uri: Url, dst_host: &str) -> Result<Respo
req_init.body = Some(wasm_bindgen::JsValue::from(body));
}
}
let new_req = Request::new_with_init(&uri.to_string(), &req_init)?;
let new_req = Request::new_with_init(uri.as_ref(), &req_init)?;

// send request
let mut response = Fetch::Request(new_req).send().await?;
@@ -98,7 +99,7 @@ pub async fn handler(mut req: Request, uri: Url, dst_host: &str) -> Result<Respo
format!("/{}{}", uri.host().unwrap(), value)
} else if value.starts_with("https://") {
if let Ok(url) = Url::parse(&value) {
if url.host_str().map_or(false, |host| host.contains("cloudflarestorage")) {
if url.host_str().is_some_and(|host| host.contains("cloudflarestorage")) {
value
} else {
value.replace("https://", &format!("https://{}/", my_host))
@@ -118,20 +119,17 @@ pub async fn handler(mut req: Request, uri: Url, dst_host: &str) -> Result<Respo
}
let _ = resp_header.delete("content-security-policy");
let _ = resp_header.set("access-control-allow-origin", "*");
match resp_header.get("content-type")? {
Some(s) => {
if s.contains("text/html") {
let mut body = response.text().await?;
let newbody = replace_host(&mut body, dst_host, &my_host)?;
let _ = resp_header.delete("content-encoding");
let resp = Response::builder()
.with_headers(resp_header)
.with_status(status)
.body(ResponseBody::Body(newbody.into_bytes()));
return Ok(resp);
}
},
_ => {}
if let Some(s) = resp_header.get("content-type")? {
if s.contains("text/html") {
let mut body = response.text().await?;
let newbody = replace_host(&mut body, dst_host, &my_host)?;
let _ = resp_header.delete("content-encoding");
let resp = Response::builder()
.with_headers(resp_header)
.with_status(status)
.body(ResponseBody::Body(newbody.into_bytes()));
return Ok(resp);
}
}

let resp = match response.stream() {
@@ -145,6 +143,6 @@ pub async fn handler(mut req: Request, uri: Url, dst_host: &str) -> Result<Respo
.from_stream(stream)?,
};

return Ok(resp);
Ok(resp)
}

10 changes: 5 additions & 5 deletions src/proxy/dns.rs
@@ -88,11 +88,11 @@ pub async fn is_cf_address<T: AsRef<str>>(addr: &super::Address<T>) -> Result<(b
get_cf_trie().await
}).await;
let v4fn = |ip: &Ipv4Addr| -> Result<(bool, Ipv4Addr)> {
let ipnet = Ipv4Net::new(ip.clone(), 32).or_else(|e|{
let ipnet = Ipv4Net::new(*ip, 32).map_err(|e|{
console_error!("parse ipv4 failed: {}", e);
Err(worker::Error::RustError(e.to_string()))
worker::Error::RustError(e.to_string())
})?;
return Ok((trie.get_lpm(&ipnet).is_some(), ip.clone()));
Ok((trie.get_lpm(&ipnet).is_some(), *ip))
};
// TODO: only 1.1.1.1 support RFC 8484 and JSON API
let resolve = "1.1.1.1";
@@ -121,9 +121,9 @@ pub async fn is_cf_address<T: AsRef<str>>(addr: &super::Address<T>) -> Result<(b
if let Some(records) = dns_record.answer {
for answer in records {
if answer.rtype == 1 {
let ip = answer.data.parse::<Ipv4Addr>().or_else(|e| {
let ip = answer.data.parse::<Ipv4Addr>().map_err(|e| {
console_error!("parse ipv4 failed: {}", e);
Err(worker::Error::RustError(e.to_string()))
worker::Error::RustError(e.to_string())
})?;
return v4fn(&ip);
}