feat: Add --random-src-port flag for censorship evasion #12
base: main
Conversation
…ce ports, configurable via a new CLI argument.
Hi, thanks for the contribution. What is the traffic pattern produced by this change, and what is the rationale for choosing it?
Hi, the rationale for this feature is to evade traffic fingerprinting and censorship by firewalls like the GFW or GFI, which can easily identify and block DNS tunnels that use a single, static source port. By spawning worker tasks that create fresh UDP sockets for each query, the tool mimics the behavior of standard DNS clients, which naturally use ephemeral ports, making the tunnel's traffic pattern indistinguishable from legitimate DNS activity and significantly harder to detect.
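For illustration, a minimal sketch (not code from this PR) of the underlying mechanism: binding a UDP socket to port 0 asks the OS to pick an ephemeral source port, so every freshly bound socket sends from a different port.

```rust
use tokio::net::UdpSocket;

// Each socket bound to port 0 gets an OS-assigned ephemeral source port.
async fn ephemeral_port_demo() -> std::io::Result<()> {
    let first = UdpSocket::bind("[::]:0").await?;
    let second = UdpSocket::bind("[::]:0").await?;
    // Two sockets, two (almost certainly) different source ports.
    println!("{} vs {}", first.local_addr()?.port(), second.local_addr()?.port());
    Ok(())
}
```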
Pull request overview
This PR adds a --random-src-port N CLI flag that enables DNS tunnel traffic to evade censorship by using random ephemeral source ports per DNS query, mimicking normal DNS client behavior instead of the identifiable single-port pattern.
Changes:
- Adds new worker pool implementation that creates ephemeral UDP sockets per DNS query
- Introduces DnsTransport enum to abstract over shared socket (existing) vs. pool-based (new) transport modes
- Updates runtime select loop to handle both transport modes with different receive patterns
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| crates/slipstream-ffi/src/lib.rs | Adds random_src_port_workers field to ClientConfig |
| crates/slipstream-client/src/main.rs | Adds --random-src-port CLI argument and passes it to config |
| crates/slipstream-client/src/dns.rs | Exports new DnsTransport and QueryResponse types |
| crates/slipstream-client/src/dns/pool.rs | New module implementing worker pool with ephemeral sockets and transport abstraction |
| crates/slipstream-client/src/dns/poll.rs | Updates send_poll_queries to use DnsTransport abstraction |
| crates/slipstream-client/src/runtime.rs | Updates main loop with dual select! branches for Shared vs Pool transport modes |
| crates/slipstream-client/src/runtime/setup.rs | Removes bind_udp_socket function (moved to pool.rs) |
    Ok(())
}
DnsTransport::Pool(pool) => {
    pool.send(packet.to_vec(), dest).await;
Copilot AI · Jan 19, 2026
The DnsTransport::send error handling is asymmetric between variants. For the Shared variant it returns errors from send_to, but for the Pool variant it silently ignores send failures in pool.send() and always returns Ok. This inconsistency could confuse callers who expect uniform error propagation. Consider making both variants handle errors consistently.
Suggested change:

if let Err(e) = pool.send(packet.to_vec(), dest).await {
    return Err(ClientError::new(e.to_string()));
}
    *local_addr_storage = addr_from;
}
resolver.local_addr_storage = Some(unsafe { std::ptr::read(local_addr_storage) });
Copilot AI · Jan 19, 2026
When using DnsTransport::Pool, local_addr_storage is not updated from addr_from (lines 81-83 guard this), but line 84 still reads from local_addr_storage to populate resolver.local_addr_storage. This pattern differs from runtime.rs line 367, where resolver.local_addr_storage is set directly from addr_from. For Pool transport with ephemeral sockets, resolver.local_addr_storage will contain stale or zeroed data. While this might be intentional, since each pool query uses a different ephemeral socket, the inconsistent pattern and lack of documentation make the correctness unclear. Consider either using addr_from directly for Pool mode or adding comments explaining why stale data is acceptable.
Suggested change:

    // In Shared mode, keep the externally managed local_addr_storage in sync
    *local_addr_storage = addr_from;
    resolver.local_addr_storage = Some(unsafe { std::ptr::read(local_addr_storage) });
} else {
    // In Pool mode, each query uses its own ephemeral socket; use the current addr_from directly
    resolver.local_addr_storage = Some(addr_from);
}
debug_poll: bool,
#[arg(long = "debug-streams")]
debug_streams: bool,
#[arg(long = "random-src-port", default_value_t = 0)]
Copilot AI · Jan 19, 2026
The random_src_port argument accepts any usize value without bounds checking. An excessively large value (e.g., 1000000) would spawn that many worker tasks, potentially exhausting system resources. Consider adding validation to limit the maximum worker count to a reasonable value (e.g., 1000) or documenting safe ranges in the help text.
Suggested change:

#[arg(
    long = "random-src-port",
    default_value_t = 0,
    value_parser = clap::value_parser!(usize).range(0..=1000),
    help = "Number of random source port workers (0 disables, max 1000)"
)]
let request_rx = std::sync::Arc::new(tokio::sync::Mutex::new(request_rx));
let (response_tx, response_rx) = mpsc::unbounded_channel();

for worker_id in 0..workers {
    let rx = request_rx.clone();
    let tx = response_tx.clone();
    tokio::spawn(async move {
        loop {
            let request = {
                let mut guard = rx.lock().await;
                guard.recv().await
Copilot AI · Jan 19, 2026
The shared Mutex for request_rx (line 33) means all workers contend for the same lock when trying to receive requests. This creates a bottleneck where workers must serialize their access to the queue. Consider using an alternative pattern such as: (1) using multiple channels with one per worker and round-robin distribution, or (2) using a lock-free channel like crossbeam-channel, or (3) accepting this design if the DNS query latency dominates over lock contention.
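As a sketch of alternative (1), assuming the QueryRequest shape from this PR: give each worker its own bounded channel and dispatch requests round-robin, so no worker blocks on a shared receiver lock.

```rust
use std::net::SocketAddr;
use std::sync::atomic::{AtomicUsize, Ordering};
use tokio::sync::mpsc;

// Same shape as the PR's QueryRequest, repeated so the sketch stands alone.
struct QueryRequest {
    packet: Vec<u8>,
    dest: SocketAddr,
}

struct RoundRobinDispatcher {
    senders: Vec<mpsc::Sender<QueryRequest>>,
    next: AtomicUsize,
}

impl RoundRobinDispatcher {
    /// Hand the request to the next worker's private channel; each worker owns
    /// its receiver exclusively, so receiving needs no Mutex.
    async fn dispatch(&self, request: QueryRequest) {
        let idx = self.next.fetch_add(1, Ordering::Relaxed) % self.senders.len();
        let _ = self.senders[idx].send(request).await;
    }
}
```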
match result {
    Ok(Ok((size, peer))) => {
        buf.truncate(size);
        let _ = response_tx.send(QueryResponse { data: buf, peer });
Copilot AI · Jan 19, 2026
The response channel send failures are silently ignored with let _ = .... If the response_rx has been dropped (e.g., if the main loop panics or exits), responses will be silently discarded. Consider logging when responses fail to send, as this could indicate a programming error or resource exhaustion.
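A sketch of what that logging could look like, using the names from this PR (the report_response helper is hypothetical):

```rust
use std::net::SocketAddr;
use tokio::sync::mpsc::UnboundedSender;
use tracing::debug;

// Same shape as the PR's QueryResponse, repeated so the sketch stands alone.
struct QueryResponse {
    data: Vec<u8>,
    peer: SocketAddr,
}

// Hypothetical helper: make a dropped response receiver visible instead of silent.
fn report_response(response_tx: &UnboundedSender<QueryResponse>, data: Vec<u8>, peer: SocketAddr) {
    if response_tx.send(QueryResponse { data, peer }).is_err() {
        debug!("response receiver dropped; discarding DNS response from {}", peer);
    }
}
```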
/// Send a DNS query through the pool.
pub(crate) async fn send(&self, packet: Vec<u8>, dest: SocketAddr) {
    let request = QueryRequest { packet, dest };
    let _ = self.request_tx.send(request).await;
Copilot AI · Jan 19, 2026
The send method silently ignores errors when the channel is full or closed. If the request_tx channel fails to send (returns Err), queries will be silently dropped. Consider logging this error or propagating it to the caller so failed sends can be detected and handled appropriately.
Suggested change:

    if let Err(e) = self.request_tx.send(request).await {
        debug!("Failed to enqueue DNS query request: {}", e);
    }
    response = pool.response_rx_mut().recv() => {
        if let Some(QueryResponse { data, peer }) = response {
            let mut response_ctx = DnsResponseContext {
                quic,
                local_addr_storage: &local_addr_storage,
                resolvers: &mut resolvers,
            };
            handle_dns_response(&data, peer, &mut response_ctx)?;
        }
    }
    _ = sleep(timeout) => {}
}
Copilot AI · Jan 19, 2026
The Pool transport path does not implement batch receiving like the Shared socket path does. The Shared path uses a loop to call try_recv_from up to packet_loop_recv_max times after the initial receive, allowing it to process multiple queued responses efficiently. The Pool path only processes one response per select! iteration, which could impact performance when multiple DNS responses arrive simultaneously. Consider adding batch processing for pool responses to maintain performance parity.
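One way to add that parity, sketched as a standalone helper (drain_ready is hypothetical; QueryResponse mirrors the PR's type): drain whatever responses are already queued, bounded by the same batch limit the Shared path uses.

```rust
use std::net::SocketAddr;
use tokio::sync::mpsc::UnboundedReceiver;

// Same shape as the PR's QueryResponse, repeated so the sketch stands alone.
struct QueryResponse {
    data: Vec<u8>,
    peer: SocketAddr,
}

/// Drain up to `max` already-queued responses without awaiting, mirroring the
/// Shared path's bounded try_recv_from loop.
fn drain_ready(rx: &mut UnboundedReceiver<QueryResponse>, max: usize) -> Vec<QueryResponse> {
    let mut batch = Vec::new();
    while batch.len() < max {
        match rx.try_recv() {
            Ok(response) => batch.push(response),
            Err(_) => break, // channel empty (or closed)
        }
    }
    batch
}
```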
use crate::error::ClientError;
use std::net::{Ipv6Addr, SocketAddr, SocketAddrV6};
use std::time::Duration;
use tokio::net::UdpSocket as TokioUdpSocket;
use tokio::sync::mpsc;
use tracing::debug;

const QUERY_TIMEOUT: Duration = Duration::from_secs(5);
const MAX_RECV_BUF: usize = 4096;

/// Response from a worker after completing a DNS query.
pub(crate) struct QueryResponse {
    pub(crate) data: Vec<u8>,
    pub(crate) peer: SocketAddr,
}

struct QueryRequest {
    packet: Vec<u8>,
    dest: SocketAddr,
}

/// A pool of workers that each create ephemeral UDP sockets for DNS queries.
/// This ensures each query uses a random source port assigned by the OS.
pub(crate) struct DnsQueryPool {
    request_tx: mpsc::Sender<QueryRequest>,
    response_rx: mpsc::UnboundedReceiver<QueryResponse>,
}

impl DnsQueryPool {
    /// Create a new pool with `workers` concurrent query handlers.
    pub(crate) fn new(workers: usize) -> Self {
        let (request_tx, request_rx) = mpsc::channel::<QueryRequest>(workers * 2);
        let request_rx = std::sync::Arc::new(tokio::sync::Mutex::new(request_rx));
        let (response_tx, response_rx) = mpsc::unbounded_channel();

        for worker_id in 0..workers {
            let rx = request_rx.clone();
            let tx = response_tx.clone();
            tokio::spawn(async move {
                loop {
                    let request = {
                        let mut guard = rx.lock().await;
                        guard.recv().await
                    };
                    let Some(request) = request else {
                        debug!("Worker {} shutting down", worker_id);
                        break;
                    };
                    Self::handle_query(request, &tx).await;
                }
            });
        }

        Self {
            request_tx,
            response_rx,
        }
    }

    async fn handle_query(
        request: QueryRequest,
        response_tx: &mpsc::UnboundedSender<QueryResponse>,
    ) {
        // Create ephemeral socket - OS assigns random source port
        let socket = match TokioUdpSocket::bind("[::]:0").await {
            Ok(s) => s,
            Err(e) => {
                debug!("Failed to bind ephemeral socket: {}", e);
                return;
            }
        };

        if let Err(e) = socket.send_to(&request.packet, request.dest).await {
            debug!("Failed to send query: {}", e);
            return;
        }

        let mut buf = vec![0u8; MAX_RECV_BUF];
        let result = tokio::time::timeout(QUERY_TIMEOUT, socket.recv_from(&mut buf)).await;

        match result {
            Ok(Ok((size, peer))) => {
                buf.truncate(size);
                let _ = response_tx.send(QueryResponse { data: buf, peer });
            }
            Ok(Err(e)) => {
                debug!("Failed to receive response: {}", e);
            }
            Err(_) => {
                // Timeout - normal for DNS, don't log
            }
        }
        // Socket dropped here - OS will recycle the port
    }

    /// Send a DNS query through the pool.
    pub(crate) async fn send(&self, packet: Vec<u8>, dest: SocketAddr) {
        let request = QueryRequest { packet, dest };
        let _ = self.request_tx.send(request).await;
    }

    /// Get mutable reference to the response receiver for tokio::select!
    pub(crate) fn response_rx_mut(&mut self) -> &mut mpsc::UnboundedReceiver<QueryResponse> {
        &mut self.response_rx
    }
}

/// Abstraction over DNS transport - either a shared socket or worker pool
pub(crate) enum DnsTransport {
    /// Single shared UDP socket (current behavior when --random-src-port 0)
    Shared {
        socket: TokioUdpSocket,
        local_addr_storage: libc::sockaddr_storage,
    },
    /// Worker pool with ephemeral sockets (when --random-src-port N > 0)
    Pool(DnsQueryPool),
}

impl DnsTransport {
    pub(crate) async fn new(workers: usize) -> Result<Self, ClientError> {
        if workers == 0 {
            let bind_addr = SocketAddr::V6(SocketAddrV6::new(Ipv6Addr::UNSPECIFIED, 0, 0, 0));
            let socket = TokioUdpSocket::bind(bind_addr)
                .await
                .map_err(|e| ClientError::new(e.to_string()))?;
            let local_addr = socket
                .local_addr()
                .map_err(|e| ClientError::new(e.to_string()))?;
            let local_addr_storage = slipstream_ffi::socket_addr_to_storage(local_addr);
            Ok(DnsTransport::Shared {
                socket,
                local_addr_storage,
            })
        } else {
            Ok(DnsTransport::Pool(DnsQueryPool::new(workers)))
        }
    }

    pub(crate) async fn send(&self, packet: &[u8], dest: SocketAddr) -> Result<(), ClientError> {
        match self {
            DnsTransport::Shared { socket, .. } => {
                socket
                    .send_to(packet, dest)
                    .await
                    .map_err(|e| ClientError::new(e.to_string()))?;
                Ok(())
            }
            DnsTransport::Pool(pool) => {
                pool.send(packet.to_vec(), dest).await;
                Ok(())
            }
        }
    }

    pub(crate) fn local_addr_storage(&self) -> Option<libc::sockaddr_storage> {
        match self {
            DnsTransport::Shared {
                local_addr_storage, ..
            } => Some(*local_addr_storage),
            DnsTransport::Pool(_) => None,
        }
    }
}
Copilot AI · Jan 19, 2026
The new DnsQueryPool and DnsTransport implementations lack test coverage. Since other modules in this project have unit tests (e.g., dns/resolver.rs, main.rs), consider adding tests for the pool functionality. Key behaviors to test include: worker task spawning and shutdown, query request/response flow, timeout handling, ephemeral socket creation, and transport mode selection based on worker count.
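As a starting point, a sketch of a round-trip test against a local UDP echo listener (the echo socket stands in for a DNS resolver; assumes the DnsQueryPool API shown in this PR):

```rust
#[cfg(test)]
mod tests {
    use super::*;
    use tokio::net::UdpSocket;

    #[tokio::test]
    async fn pool_round_trips_a_query() {
        // Stand-in "resolver" that echoes whatever it receives.
        let server = UdpSocket::bind("[::1]:0").await.unwrap();
        let server_addr = server.local_addr().unwrap();
        tokio::spawn(async move {
            let mut buf = [0u8; 512];
            let (n, peer) = server.recv_from(&mut buf).await.unwrap();
            server.send_to(&buf[..n], peer).await.unwrap();
        });

        let mut pool = DnsQueryPool::new(2);
        pool.send(b"probe".to_vec(), server_addr).await;

        let response = tokio::time::timeout(
            std::time::Duration::from_secs(2),
            pool.response_rx_mut().recv(),
        )
        .await
        .expect("timed out waiting for pool response")
        .expect("response channel closed");
        assert_eq!(response.data, b"probe".to_vec());
    }
}
```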
    tokio::spawn(async move {
        loop {
            let request = {
                let mut guard = rx.lock().await;
                guard.recv().await
            };
            let Some(request) = request else {
                debug!("Worker {} shutting down", worker_id);
                break;
            };
            Self::handle_query(request, &tx).await;
        }
    });
}
Copilot AI · Jan 19, 2026
The spawned worker tasks are not tracked or joined on shutdown. When DnsQueryPool is dropped, the request_tx will be dropped which will cause workers to exit, but there's no mechanism to wait for graceful shutdown or track task handles. Consider storing JoinHandles and implementing Drop to await worker completion, preventing potential resource leaks or incomplete operations during shutdown.
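A sketch of that approach (TrackedDnsQueryPool and shutdown are hypothetical additions; QueryRequest and QueryResponse are the pool.rs types shown above):

```rust
use tokio::sync::mpsc;
use tokio::task::JoinHandle;

// Hypothetical variant of the pool that retains its worker handles.
pub(crate) struct TrackedDnsQueryPool {
    request_tx: mpsc::Sender<QueryRequest>,
    response_rx: mpsc::UnboundedReceiver<QueryResponse>,
    workers: Vec<JoinHandle<()>>,
}

impl TrackedDnsQueryPool {
    /// Dropping request_tx closes the channel, so each worker's recv() returns
    /// None and its loop exits; awaiting the handles then gives a clean shutdown.
    pub(crate) async fn shutdown(self) {
        drop(self.request_tx);
        drop(self.response_rx);
        for handle in self.workers {
            let _ = handle.await;
        }
    }
}
```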
Adds a --random-src-port N CLI flag to use random source ports per DNS query.

Why

DNS tunnels using a single source port create an identifiable traffic pattern. Firewalls (GFW, GFI) can fingerprint this behavior since normal DNS clients typically use ephemeral ports per query. This change makes tunnel traffic indistinguishable from regular DNS.

How

When N > 0, the client spawns N worker tasks that each create a fresh UDP socket per query. The OS assigns a random ephemeral port to each socket, mimicking normal DNS client behavior. With --random-src-port 0 (the default), the existing single shared socket is used; with --random-src-port 100, a pool of 100 workers handles queries.

Usage
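A sketch of possible invocations (the binary name and the placeholder for other options are assumptions; only --random-src-port itself is defined by this PR):

```sh
# Default: one shared socket, a single source port (existing behavior)
slipstream-client --random-src-port 0 <other-options>

# Pool of 100 workers; each DNS query leaves from a fresh OS-assigned source port
slipstream-client --random-src-port 100 <other-options>
```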