feat: Add --random-src-port flag for censorship evasion #12
base: main
Conversation
Hi, thanks for the contribution. What is the traffic pattern produced by this change, and what is the rationale for choosing it?
Hi! The rationale for this feature is to evade traffic fingerprinting and censorship by firewalls like the GFW or GFI, which can easily identify and block DNS tunnels that use a single, static source port. By spawning worker tasks that create a fresh UDP socket for each query, the tool mimics the behavior of standard DNS clients, which naturally use ephemeral ports, thereby making the tunnel's traffic pattern indistinguishable from legitimate DNS activity and significantly harder to detect.
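For context, the mechanism this reply describes fits in a few lines (a standalone illustration, not code from this PR): binding a UDP socket to port 0 asks the OS for a fresh ephemeral port each time.

```rust
use tokio::net::UdpSocket;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Port 0 means "let the OS pick an ephemeral port".
    let a = UdpSocket::bind("[::]:0").await?;
    let b = UdpSocket::bind("[::]:0").await?;
    // Two live sockets can never share a port, so each concurrent
    // query gets its own OS-assigned source port.
    println!("{} vs {}", a.local_addr()?.port(), b.local_addr()?.port());
    Ok(())
}
```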
Pull request overview
This PR adds a --random-src-port N CLI flag that enables DNS tunnel traffic to evade censorship by using random ephemeral source ports per DNS query, mimicking normal DNS client behavior instead of the identifiable single-port pattern.
Changes:
- Adds a new worker pool implementation that creates ephemeral UDP sockets per DNS query
- Introduces a DnsTransport enum to abstract over shared-socket (existing) vs. pool-based (new) transport modes
- Updates the runtime select loop to handle both transport modes with different receive patterns
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| crates/slipstream-ffi/src/lib.rs | Adds random_src_port_workers field to ClientConfig |
| crates/slipstream-client/src/main.rs | Adds --random-src-port CLI argument and passes it to config |
| crates/slipstream-client/src/dns.rs | Exports new DnsTransport and QueryResponse types |
| crates/slipstream-client/src/dns/pool.rs | New module implementing worker pool with ephemeral sockets and transport abstraction |
| crates/slipstream-client/src/dns/poll.rs | Updates send_poll_queries to use DnsTransport abstraction |
| crates/slipstream-client/src/runtime.rs | Updates main loop with dual select! branches for Shared vs Pool transport modes |
| crates/slipstream-client/src/runtime/setup.rs | Removes bind_udp_socket function (moved to pool.rs) |
```rust
    *local_addr_storage = addr_from;
}
resolver.local_addr_storage = Some(unsafe { std::ptr::read(local_addr_storage) });
```
Copilot AI commented on Jan 19, 2026:
When using DnsTransport::Pool, local_addr_storage is not updated from addr_from (lines 81-83 guard this), but line 84 still reads from local_addr_storage to populate resolver.local_addr_storage. This pattern differs from runtime.rs line 367, where resolver.local_addr_storage is set directly from addr_from. For Pool transport with ephemeral sockets, resolver.local_addr_storage will contain stale or zeroed data. While this might be intentional, since each pool query uses a different ephemeral socket, the inconsistent pattern and lack of documentation make the correctness unclear. Consider either using addr_from directly for Pool mode or adding comments explaining why stale data is acceptable.
Suggested change:

```diff
-    *local_addr_storage = addr_from;
-}
-resolver.local_addr_storage = Some(unsafe { std::ptr::read(local_addr_storage) });
+    // In Shared mode, keep the externally managed local_addr_storage in sync
+    *local_addr_storage = addr_from;
+    resolver.local_addr_storage = Some(unsafe { std::ptr::read(local_addr_storage) });
+} else {
+    // In Pool mode, each query uses its own ephemeral socket; use the current addr_from directly
+    resolver.local_addr_storage = Some(addr_from);
+}
```
I didn't understand, what should I do for this comment?
```rust
let request_rx = std::sync::Arc::new(tokio::sync::Mutex::new(request_rx));
let (response_tx, response_rx) = mpsc::unbounded_channel();

for worker_id in 0..workers {
    let rx = request_rx.clone();
    let tx = response_tx.clone();
    tokio::spawn(async move {
        loop {
            let request = {
                let mut guard = rx.lock().await;
                guard.recv().await
```
Copilot AI commented on Jan 19, 2026:
The shared Mutex for request_rx (line 33) means all workers contend for the same lock when trying to receive requests. This creates a bottleneck where workers must serialize their access to the queue. Consider using an alternative pattern such as: (1) using multiple channels with one per worker and round-robin distribution, or (2) using a lock-free channel like crossbeam-channel, or (3) accepting this design if the DNS query latency dominates over lock contention.
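For illustration, alternative (1) could look roughly like this: one bounded channel per worker with round-robin dispatch, so no lock is shared. This is a hypothetical sketch, not part of the PR; RoundRobinPool and its methods are illustrative names, and it assumes it runs inside a tokio runtime.

```rust
use std::net::SocketAddr;
use tokio::sync::mpsc;

struct QueryRequest {
    packet: Vec<u8>,
    dest: SocketAddr,
}

/// One bounded channel per worker; the dispatcher rotates through them.
struct RoundRobinPool {
    worker_txs: Vec<mpsc::Sender<QueryRequest>>,
    next: usize,
}

impl RoundRobinPool {
    fn new(workers: usize) -> Self {
        let mut worker_txs = Vec::with_capacity(workers);
        for _worker_id in 0..workers {
            let (tx, mut rx) = mpsc::channel::<QueryRequest>(2);
            tokio::spawn(async move {
                // Each worker owns its receiver exclusively; no Mutex needed.
                while let Some(request) = rx.recv().await {
                    // A per-query ephemeral-socket handler, as in the PR,
                    // would process `request` here.
                    drop(request);
                }
            });
            worker_txs.push(tx);
        }
        Self { worker_txs, next: 0 }
    }

    async fn send(&mut self, request: QueryRequest) {
        // Rotate through workers; a full worker channel applies backpressure.
        let idx = self.next % self.worker_txs.len();
        self.next = self.next.wrapping_add(1);
        let _ = self.worker_txs[idx].send(request).await;
    }
}
```

The trade-off: round-robin can queue a request behind one slow worker while others sit idle, which the shared queue avoids, so accepting option (3) is a reasonable defense of the simpler design.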
Workers share a Mutex-wrapped receiver. While this serializes request dispatch, the lock is only held briefly during recv(). DNS network I/O (10-1000ms) vastly dominates the lock acquisition time (microseconds), making contention negligible. Alternative approaches (per-worker channels, crossbeam) add complexity without meaningful throughput improvement for this I/O-bound workload.
```rust
use crate::error::ClientError;
use std::net::{Ipv6Addr, SocketAddr, SocketAddrV6};
use std::time::Duration;
use tokio::net::UdpSocket as TokioUdpSocket;
use tokio::sync::mpsc;
use tracing::debug;

const QUERY_TIMEOUT: Duration = Duration::from_secs(5);
const MAX_RECV_BUF: usize = 4096;

/// Response from a worker after completing a DNS query.
pub(crate) struct QueryResponse {
    pub(crate) data: Vec<u8>,
    pub(crate) peer: SocketAddr,
}

struct QueryRequest {
    packet: Vec<u8>,
    dest: SocketAddr,
}

/// A pool of workers that each create ephemeral UDP sockets for DNS queries.
/// This ensures each query uses a random source port assigned by the OS.
pub(crate) struct DnsQueryPool {
    request_tx: mpsc::Sender<QueryRequest>,
    response_rx: mpsc::UnboundedReceiver<QueryResponse>,
}

impl DnsQueryPool {
    /// Create a new pool with `workers` concurrent query handlers.
    pub(crate) fn new(workers: usize) -> Self {
        let (request_tx, request_rx) = mpsc::channel::<QueryRequest>(workers * 2);
        let request_rx = std::sync::Arc::new(tokio::sync::Mutex::new(request_rx));
        let (response_tx, response_rx) = mpsc::unbounded_channel();

        for worker_id in 0..workers {
            let rx = request_rx.clone();
            let tx = response_tx.clone();
            tokio::spawn(async move {
                loop {
                    let request = {
                        let mut guard = rx.lock().await;
                        guard.recv().await
                    };
                    let Some(request) = request else {
                        debug!("Worker {} shutting down", worker_id);
                        break;
                    };
                    Self::handle_query(request, &tx).await;
                }
            });
        }

        Self {
            request_tx,
            response_rx,
        }
    }

    async fn handle_query(
        request: QueryRequest,
        response_tx: &mpsc::UnboundedSender<QueryResponse>,
    ) {
        // Create ephemeral socket - OS assigns random source port
        let socket = match TokioUdpSocket::bind("[::]:0").await {
            Ok(s) => s,
            Err(e) => {
                debug!("Failed to bind ephemeral socket: {}", e);
                return;
            }
        };

        if let Err(e) = socket.send_to(&request.packet, request.dest).await {
            debug!("Failed to send query: {}", e);
            return;
        }

        let mut buf = vec![0u8; MAX_RECV_BUF];
        let result = tokio::time::timeout(QUERY_TIMEOUT, socket.recv_from(&mut buf)).await;

        match result {
            Ok(Ok((size, peer))) => {
                buf.truncate(size);
                let _ = response_tx.send(QueryResponse { data: buf, peer });
            }
            Ok(Err(e)) => {
                debug!("Failed to receive response: {}", e);
            }
            Err(_) => {
                // Timeout - normal for DNS, don't log
            }
        }
        // Socket dropped here - OS will recycle the port
    }

    /// Send a DNS query through the pool.
    pub(crate) async fn send(&self, packet: Vec<u8>, dest: SocketAddr) {
        let request = QueryRequest { packet, dest };
        let _ = self.request_tx.send(request).await;
    }

    /// Get mutable reference to the response receiver for tokio::select!
    pub(crate) fn response_rx_mut(&mut self) -> &mut mpsc::UnboundedReceiver<QueryResponse> {
        &mut self.response_rx
    }
}

/// Abstraction over DNS transport - either a shared socket or worker pool
pub(crate) enum DnsTransport {
    /// Single shared UDP socket (current behavior when --random-src-port 0)
    Shared {
        socket: TokioUdpSocket,
        local_addr_storage: libc::sockaddr_storage,
    },
    /// Worker pool with ephemeral sockets (when --random-src-port N > 0)
    Pool(DnsQueryPool),
}

impl DnsTransport {
    pub(crate) async fn new(workers: usize) -> Result<Self, ClientError> {
        if workers == 0 {
            let bind_addr = SocketAddr::V6(SocketAddrV6::new(Ipv6Addr::UNSPECIFIED, 0, 0, 0));
            let socket = TokioUdpSocket::bind(bind_addr)
                .await
                .map_err(|e| ClientError::new(e.to_string()))?;
            let local_addr = socket
                .local_addr()
                .map_err(|e| ClientError::new(e.to_string()))?;
            let local_addr_storage = slipstream_ffi::socket_addr_to_storage(local_addr);
            Ok(DnsTransport::Shared {
                socket,
                local_addr_storage,
            })
        } else {
            Ok(DnsTransport::Pool(DnsQueryPool::new(workers)))
        }
    }

    pub(crate) async fn send(&self, packet: &[u8], dest: SocketAddr) -> Result<(), ClientError> {
        match self {
            DnsTransport::Shared { socket, .. } => {
                socket
                    .send_to(packet, dest)
                    .await
                    .map_err(|e| ClientError::new(e.to_string()))?;
                Ok(())
            }
            DnsTransport::Pool(pool) => {
                pool.send(packet.to_vec(), dest).await;
                Ok(())
            }
        }
    }

    pub(crate) fn local_addr_storage(&self) -> Option<libc::sockaddr_storage> {
        match self {
            DnsTransport::Shared {
                local_addr_storage, ..
            } => Some(*local_addr_storage),
            DnsTransport::Pool(_) => None,
        }
    }
}
```
Copilot AI commented on Jan 19, 2026:
The new DnsQueryPool and DnsTransport implementations lack test coverage. Since other modules in this project have unit tests (e.g., dns/resolver.rs, main.rs), consider adding tests for the pool functionality. Key behaviors to test include: worker task spawning and shutdown, query request/response flow, timeout handling, ephemeral socket creation, and transport mode selection based on worker count.
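As an illustration, one such test might look like this. This is a sketch only, assuming the DnsTransport API shown above and the tokio test runtime; it covers transport mode selection based on worker count.

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[tokio::test]
    async fn transport_mode_follows_worker_count() {
        // workers == 0 selects the single shared socket.
        let shared = DnsTransport::new(0).await.expect("bind should succeed");
        assert!(matches!(shared, DnsTransport::Shared { .. }));
        assert!(shared.local_addr_storage().is_some());

        // workers > 0 selects the ephemeral-socket pool.
        let pool = DnsTransport::new(4).await.expect("pool creation should succeed");
        assert!(matches!(pool, DnsTransport::Pool(_)));
        assert!(pool.local_addr_storage().is_none());
    }
}
```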
```rust
    tokio::spawn(async move {
        loop {
            let request = {
                let mut guard = rx.lock().await;
                guard.recv().await
            };
            let Some(request) = request else {
                debug!("Worker {} shutting down", worker_id);
                break;
            };
            Self::handle_query(request, &tx).await;
        }
    });
}
```
Copilot AI commented on Jan 19, 2026:
The spawned worker tasks are not tracked or joined on shutdown. When DnsQueryPool is dropped, the request_tx will be dropped which will cause workers to exit, but there's no mechanism to wait for graceful shutdown or track task handles. Consider storing JoinHandles and implementing Drop to await worker completion, preventing potential resource leaks or incomplete operations during shutdown.
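For reference, the suggested pattern might look roughly like this (a hypothetical sketch, not part of the PR; it reuses the pool's field names from above and adds an illustrative shutdown method, since Drop cannot await):

```rust
use tokio::task::JoinHandle;

pub(crate) struct DnsQueryPool {
    request_tx: mpsc::Sender<QueryRequest>,
    response_rx: mpsc::UnboundedReceiver<QueryResponse>,
    workers: Vec<JoinHandle<()>>, // handles retained so shutdown can be awaited
}

impl DnsQueryPool {
    /// Explicit shutdown: dropping the sender closes the channel, so each
    /// worker's recv() returns None and its loop exits; then await the tasks.
    pub(crate) async fn shutdown(mut self) {
        drop(self.request_tx);
        for handle in self.workers.drain(..) {
            let _ = handle.await;
        }
    }
}
```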
I don't think graceful shutdown is necessary for this project right now, and handling it adds more complexity than needed.
Force-pushed 0b00844 to 81641a2.
@codex review
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 81641a2420
Force-pushed 98508e4 to d14c2f1.
Force-pushed 2071e9f to b5cf441.
…ce ports, configurable via a new CLI argument.
Force-pushed 7ee8a45 to bce2a0d.
@Mygod Would you please review this? I guess I'm finished.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: bce2a0dadf
```rust
if let Err(e) = socket.send_to(&request.packet, request.dest).await {
    debug!("Failed to send query: {}", e);
    return;
```
Surface pool send failures to callers
In the pool transport, worker send failures are only logged and the worker returns, but the caller already received Ok(()) from DnsQueryPool::send when it enqueued the request. That means bind/send failures (e.g., ephemeral port exhaustion or temporary network errors) are silently dropped while the caller proceeds as if the query was transmitted, updating inflight counters and pacing. This can stall or heavily degrade the tunnel because no retry is triggered for these failed sends.
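One way to surface these failures, sketched under the assumption that the runtime's select! loop could poll a combined event channel (WorkerEvent and send_or_report are illustrative names, not part of the PR):

```rust
use std::net::SocketAddr;
use tokio::net::UdpSocket;
use tokio::sync::mpsc;

/// Illustrative event type: the worker channel carries failures as well as
/// responses, so the runtime can observe them and trigger retries.
pub(crate) enum WorkerEvent {
    Response { data: Vec<u8>, peer: SocketAddr },
    Failed { dest: SocketAddr, error: std::io::Error },
}

async fn send_or_report(
    packet: Vec<u8>,
    dest: SocketAddr,
    events: &mpsc::UnboundedSender<WorkerEvent>,
) {
    // Bind failures (e.g., ephemeral port exhaustion) are reported, not swallowed.
    let socket = match UdpSocket::bind("[::]:0").await {
        Ok(s) => s,
        Err(error) => {
            let _ = events.send(WorkerEvent::Failed { dest, error });
            return;
        }
    };
    if let Err(error) = socket.send_to(&packet, dest).await {
        let _ = events.send(WorkerEvent::Failed { dest, error });
        return;
    }
    // ...then receive with timeout and emit WorkerEvent::Response, as in the PR.
}
```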
Is this suggestion valid? I'm not sure.
Force-pushed bce2a0d to 06ca4e3.
Force-pushed 06ca4e3 to 3cf948f.
Adds a --random-src-port N CLI flag to use random source ports per DNS query.

Why

DNS tunnels using a single source port create an identifiable traffic pattern. Firewalls (GFW, GFI) can fingerprint this behavior since normal DNS clients typically use ephemeral ports per query. This change makes tunnel traffic indistinguishable from regular DNS.

How

When N > 0, spawns N worker tasks that each create a fresh UDP socket per query. The OS assigns a random ephemeral port to each socket, mimicking normal DNS client behavior. --random-src-port 0 (the default) keeps the current shared-socket behavior; --random-src-port 100, for example, runs a pool of 100 workers.

Usage
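A hypothetical invocation, assuming the slipstream-client binary name from this repo and eliding its other arguments:

```sh
# Default: one shared socket, single static source port
slipstream-client --random-src-port 0 ...

# Evasion mode: 100 workers, each query sent from a fresh ephemeral port
slipstream-client --random-src-port 100 ...
```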