VPN Under the Microscope: How Packets Really Travel in a Tunnel and Where Bytes Get Lost
Contents
- What VPN really does to packets
- Layered encapsulation: nested dolls
- Headers and their structure: from bits to meaning
- Overhead made simple: what a tunnel costs you
- MTU, MSS and real speed
- Hands-on: traffic captures and analysis in Wireshark
- Cryptography and security at the packet level
- Optimization and 2026 cases
- Packet-level tunnel diagnostics checklist
- FAQ: quick and to the point
What VPN Really Does to Packets
The Path of an IP Packet Without VPN
Let's start simple. Picture a regular IP packet traveling from your laptop to a server. It gets the local MAC address of the gateway, hops onto the provider's router, makes a couple of jumps, and arrives safely at the destination. Minimal magic, pure routing. No extra wrapping—just the original IPv4 or IPv6 header, a TCP or UDP transport header, and the payload. Beautifully straightforward.
Now, from a switch's perspective: the frame arrives, the Layer 2 table points to the correct port, and the frame is sent out. At Layer 3, the router consults the routing table, updates the TTL, and maybe fragments if the MTU is too small. That's it. This is how you operate until you need a secure channel or remote network access. That's when VPN steps in and starts packing this IP packet like a Russian nesting doll.
The Path of an IP Packet Through a Tunnel
With VPN, things get more interesting. Your original IP packet no longer travels directly over the internet. The client encapsulates it into a new packet: it adds an outer IP header (to the VPN server), then a UDP or ESP header on top, and sometimes an additional control header for the tunnel protocol itself. The inner packet becomes the "payload." It’s encrypted and hidden—like a letter inside an envelope placed within another, more opaque envelope.
On the other end, the VPN server removes the outer layer, verifies authentication, decrypts the packet, and releases the original inner IP packet from the tunnel. This packet then gets its second chance to travel the internet normally—but now appearing as if it's coming from the server-side node or routed into a corporate network. Expensive? Yes. But secure and manageable if you correctly account for overhead and MTU.
Transport vs. Tunnel: Where’s the Line?
Remember this simple rule: transport is how we deliver packets (UDP, TCP, QUIC), and tunnel is how we package them and where we unpack them. Some VPNs use UDP as transport (WireGuard, OpenVPN over UDP), while others run directly at the IP layer with their own protocol number (ESP in IPsec, IP protocol 50). But fundamentally, both do the same thing: they encapsulate your original IP packet inside another.
What interests us is the technical boundary between the inner payload and the outer transport. This is where overhead happens. This is where PMTUD breaks down. This is where encryption adds delay. We'll focus on this line, layer by layer, to understand why packets suddenly fragment and TCP connections slow down, even if you have a 1 Gbps plan.
Where Cryptography Hides
Encryption sits between the inner packet and the outer transport. In IPsec ESP, it's ciphertext over the inner packet plus ESP fields and an integrity check value (ICV). In WireGuard, the data message’s payload is encrypted, while the UDP header and outer IP stay visible. OpenVPN encrypts its data channel over UDP or TCP, with HMAC (or AEAD) authentication and a TLS-protected control channel.
Important: cryptography enforces alignment, adds authentication tags (usually 16 bytes), and often requires IVs or nonces. These bytes don’t disappear—they increase each packet's size. So you have to reduce the MTU on the tunnel interface, or clamp MSS for inner TCP connections, to avoid fragmentation and the packet loss that follows.
Layered Encapsulation: Nested Dolls
Outer and Inner IP
Here’s the baseline: the inner IP packet is built by the application, then the VPN client places it inside an outer container. That container is a new IP header addressed to the tunnel server. So, we now have two IPs: the inner IP says "where to" within the protected logic, the outer IP says "how to reach the VPN concentrator." In Wireshark, you'll see two IP layers, but the inner one usually remains hidden until you decrypt at the endpoint.
This dual-IP concept is what distinguishes IPsec’s “tunnel mode” from “transport mode.” In tunnel mode, a full new IP header hides the inner one completely. In transport mode, only the payload (transport layer and up) is encrypted, and the original IP header is reused. Tunnel mode is the common choice for corporate routing and remote access.
UDP or TCP as the Tunnel Envelope
Often, tunnels run over UDP. Why? Because UDP is simpler and more stable for NAT traversal, introduces less overhead for congestion control, and reduces delay from packet loss. WireGuard uses UDP over the outer IP with encrypted packets inside. OpenVPN in UDP mode behaves similarly. Meanwhile, OpenVPN-TCP builds a "VPN-over-TCP," which can work behind strict proxies but suffers from TCP-over-TCP meltdown—overlapping congestion controls that punish you with latency.
When ESP (IP protocol 50) is used, outer UDP might be absent. In practice, though, NAT-T encapsulates ESP inside UDP port 4500 to get through NAT devices. This adds 8 bytes of UDP header per data packet (the 4-byte Non-ESP Marker appears only on IKE packets sharing port 4500, to tell the two apart) but reliably passes home routers and provider gear.
Protocol Marking: ESP, GRE, L2TP
Tunnels are recognized by protocol numbers: ESP is 50, AH 51, GRE 47, L2TP UDP port 1701, WireGuard typically UDP 51820 (but often changed for obfuscation), OpenVPN defaults to UDP 1194, TCP 443 for conservative setups. These numbers matter for tcpdump, Wireshark filters, and firewall policies.
This tagging tells you what you're looking at: esp means IPsec, gre means there's an additional IP or L3 protocol inside, udp.port==51820 likely means WireGuard. Fun fact: in 2026, some providers began aggressive DPI on UDP with unusual patterns, boosting QUIC-based VPNs and those hiding tunnels inside HTTP/3 traffic—though that's more about transport-level obfuscation.
Fragmentation and Reassembly
Encapsulation increases packet size. If the final size exceeds the link's MTU, routers fragment the packet—or, if the DF bit is set, drop it and send an ICMP "Fragmentation Needed" message. VPNs often cause invisible fragmentation of outer packets, which destroys performance: each fragment adds work and risks TCP retransmits and timeouts.
The right approach: calculate the final size ahead and lower the tunnel’s MTU or clamp MSS for TCP sessions, so segments fit inside the MTU with room for headers. Even in 2026, most providers stick to 1500-byte MTUs, so aiming MSS at 1360–1380 for IPsec NAT-T is practical, not snobby.
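The fragment arithmetic is easy to sketch. Below is a minimal, illustrative Python model (the function name and structure are mine, not from any library) of how an IPv4 router slices an oversized packet: every fragment except the last must carry a payload that is a multiple of 8 bytes, because the Fragment Offset field counts in 8-byte units.

```python
def ipv4_fragments(total_len: int, mtu: int, ihl: int = 20) -> list:
    """On-wire sizes of the fragments produced for an IPv4 packet of
    total_len bytes crossing a link with the given MTU (header = ihl)."""
    if total_len <= mtu:
        return [total_len]                # fits: no fragmentation
    payload = total_len - ihl
    max_chunk = (mtu - ihl) // 8 * 8      # largest 8-byte-aligned payload
    sizes = []
    while payload > max_chunk:
        sizes.append(ihl + max_chunk)     # full fragment, offset-aligned
        payload -= max_chunk
    sizes.append(ihl + payload)           # last fragment, any size
    return sizes

# A tunnel that overshoots a 1500-byte MTU by just 20 bytes doubles
# the packet count: one full 1500-byte fragment plus a tiny 40-byte one.
print(ipv4_fragments(1520, 1500))   # [1500, 40]
```

Note how a marginal overshoot doubles the packet rate for that flow—exactly the "invisible fragmentation" penalty described above.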
Headers and Their Structure: From Bits to Meaning
IPv4 and IPv6: Critical Fields for VPN
IPv4 fields of concern include Total Length, Identification, Flags (DF, MF), Fragment Offset, TTL, Protocol, and Header Checksum. DF means "do not fragment," and ICMP Type 3 Code 4 signals that a link's MTU is smaller. IPv6 works differently: no header checksum, fragmentation is the sender's responsibility, and routers never fragment. So PMTUD—driven by ICMPv6 "Packet Too Big" (Type 2)—is mandatory with IPv6; otherwise, tunnels silently drop large packets.
Another detail: Traffic Class and Flow Label affect QoS and priority behavior. Inside VPN tunnels, outer and inner IPs may have different QoS markings, which is why many admins in 2026 copy DSCP from inner to outer IPsec headers—to preserve voice and video priority.
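The copy operation itself is pure bit masking—DSCP occupies the top 6 bits of the IPv4 TOS / IPv6 Traffic Class byte, while the bottom 2 bits are ECN, which belongs to the outer packet. A simplified sketch (the function is illustrative, not any vendor's API):

```python
EF = 46  # Expedited Forwarding, the usual voice DSCP

def copy_dscp(inner_tos: int, outer_tos: int) -> int:
    """Return the outer TOS byte with DSCP copied from the inner packet
    and the outer packet's own ECN bits (low 2 bits) preserved."""
    return (inner_tos & 0xFC) | (outer_tos & 0x03)

# Inner packet marked EF, outer packet carrying ECN ECT(1):
print(hex(copy_dscp(EF << 2, 0x01)))   # 0xb9
```

Keeping the outer ECN bits intact matters: the tunnel endpoints, not the inner flow, negotiate congestion marking on the outer path.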
UDP and TCP: Control Numbers and Hidden Effects
UDP headers are simple—Source Port, Destination Port, Length, Checksum—only 8 bytes overhead, ideal for tunnels. TCP is more complex: a 20-byte header plus options (MSS, SACK, Timestamps). Inner TCP with MSS 1460 fits well on bare Ethernet but struggles inside tunnels. That’s why MSS Clamping—forcibly reducing MSS in SYN packets—is practiced.
TCP has a subtle issue called "TCP-over-TCP meltdown." Outer TCP (e.g., OpenVPN over TCP) and inner TCP both respond to loss and delay, doubling retransmissions and congestion control. The result: latency spikes, jitter, and sluggish traffic under load. This isn’t myth—avoid TCP-over-TCP unless absolutely necessary.
ESP: SPI, Sequence, IV, Padding, ICV
An ESP packet consists of an ESP header (4-byte SPI, 4-byte sequence number), encrypted payload (inner IP and transport), ESP trailer (padding, pad length, next header), and authentication data (ICV, usually 16 bytes for GCM). With AES-GCM, you often see an 8-byte explicit IV and 16-byte authentication tag. NAT-T adds 8 bytes for UDP and 4 for Non-ESP Marker, increasing overall size.
In tunnel mode, IPsec adds another outer IP header (20 bytes for IPv4, 40 for IPv6). Padding depends on block cipher alignment and can consume several bytes per packet. Bottom line: ESP adds stable and variable overhead. Production calculations typically assume 50–70 bytes overhead with NAT-T for IPv4, and 70–90 bytes for IPv6.
OpenVPN and WireGuard: Packet-Level Differences
OpenVPN over UDP adds a small header, HMAC, and optionally IV/nonce, totaling roughly 36–60 bytes overhead plus outer IP and UDP. In TCP mode, you add TCP header and TLS overhead on control channels, boosting stability on flaky networks but hurting latency and throughput under loss.
WireGuard keeps it minimal. Its data message carries a 16-byte header (message type, receiver index, 8-byte counter) and an encrypted payload ending in a 16-byte Poly1305 tag—32 bytes in total. Add 8 bytes of UDP and 20/40 bytes of IP, and you get roughly 60–80 bytes per packet. Not perfect, but predictable and fast, especially with hardware acceleration for ChaCha20-Poly1305.
Overhead Made Simple: What a Tunnel Costs You
Basic Formula and Examples
Overhead = Outer IP + Outer Transport (UDP/TCP) + Tunnel Header (ESP, WG, OpenVPN, GRE, etc.) + Crypto Tag + Padding/IV/Nonce + Alignment. Sounds complex, but it’s just arithmetic.
Example: a 1400-byte inner IP packet (a full TCP segment) over IPsec NAT-T with AES-GCM. Outer IPv4: 20 bytes; UDP: 8; ESP header: 8; IV: 8; ICV: 16; ESP trailer plus padding: 2–5 bytes. Total ~62–65 bytes. Final packet size ~1464 bytes. On a 1500 MTU link, this fits with minimal margin. Any TCP or IPv6 option can push it over the edge, causing fragmentation.
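The same arithmetic as a small Python sketch, so you can plug in your own numbers. Field sizes follow the example above; note that per RFC 3948, UDP-encapsulated ESP data packets carry no Non-ESP Marker (that 4-byte marker is only on IKE packets sharing port 4500), so it is omitted. The function name and the 4-byte alignment assumption for AES-GCM are mine:

```python
OUTER_IP, UDP, ESP_HDR, GCM_IV, ICV, TRAILER = 20, 8, 8, 8, 16, 2

def esp_natt_size(inner_packet: int) -> int:
    """Final on-wire size of an inner IP packet sent through
    IPsec ESP tunnel mode over NAT-T (IPv4 outer, AES-GCM)."""
    pad = -(inner_packet + TRAILER) % 4   # align payload + trailer to 4 bytes
    return OUTER_IP + UDP + ESP_HDR + GCM_IV + inner_packet + pad + TRAILER + ICV

print(esp_natt_size(1400))          # 1464 — fits a 1500 MTU
print(esp_natt_size(1400) - 1400)   # 64 bytes of overhead
```

Swap the constants for your own stack (IPv6 outer: 40 instead of 20; CBC ciphers: 16-byte IV and block-sized padding) and the function tells you instantly whether a given inner size survives a 1500-byte underlay.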
IPsec ESP: Transport vs. Tunnel, NAT-T and Sizing
In transport mode, IPsec doesn’t add an outer IP header, saving 20/40 bytes. Remote access usually requires tunnel mode—with an outer IP header, overhead gets noticeable: IPv4 ESP without NAT-T adds roughly 42–60 bytes; with NAT-T, 50–68 bytes depending on cipher fields and padding. For IPv6, add another 20 bytes due to the longer header.
Rule of thumb: with IPsec NAT-T, set tunnel MTU at 1400–1420 and MSS clamp at 1360–1380. These values reflect typical header sizes and give a margin to avoid fragmentation. Always test with ping -M do using large packets to ensure PMTUD works through all firewalls.
WireGuard, OpenVPN UDP and TCP: Byte Guidelines
WireGuard over IPv4 usually adds about 60 bytes overhead, about 80 bytes with IPv6. This gives the common advice: use MTU 1420 on wg0 interface. OpenVPN UDP overhead varies with encryption and HMAC, typically 50–80 bytes on IP/UDP—MTU 1400–1450 and MSS clamp 1360–1420 solve most issues.
OpenVPN TCP is another story. Besides 20 bytes TCP header (without options), congestion control mechanisms overlap. In noisy or narrow channels, TCP-over-TCP struggles. TCP Fast Open or careful buffer tuning can help, but when possible stay on UDP and handle proxy traversal or use QUIC transport.
GRE, L2TP, VXLAN: Quick Comparison
GRE adds a 4-byte base header, often extended with optional Key and Checksum fields to 8–12 bytes; with the outer IP header, total overhead easily reaches 24–32 bytes. L2TPv2 runs over UDP (8 bytes), adding a 6–12 byte L2TP header plus a few bytes of PPP—roughly 16–24 bytes before any encryption, or more when carried atop IPsec.
VXLAN targets Layer 2 data center encapsulation: UDP 8 + VXLAN 8 + outer IP and MAC on data link. Less common in VPNs but principles hold: every header byte reduces payload space. More nested wrappers mean MTU tuning and fragment blocking get critically important.
MTU, MSS and Real Speed
How to Calculate MTU for Your Tunnel
The process is straightforward: 1) Determine total overhead for your stack (e.g., about 68 bytes for IPsec NAT-T IPv4). 2) Subtract it from 1500 if your underlay is Ethernet without jumbo frames. 3) Add a 10–20 byte safety margin for options or unexpected fields. 4) Set the resulting MTU on your tunnel interface and test using ping with DF set and incrementally larger packets.
Example: WireGuard on a home router has roughly 60–64 bytes overhead on IPv4. So, 1500 - 64 = 1436. Round down to 1420 (recommended) for headroom. Then configure MSS clamp at 1360–1380 and test large file downloads and VoIP. If you see no freezes or loss, congratulations—you nailed it.
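Steps 1–3 condense to a few lines of arithmetic. The overhead values below are the rule-of-thumb figures from this section (not protocol constants), and rounding down to a multiple of 10 is just a convenient safety habit, not a requirement:

```python
OVERHEAD = {                      # rough per-packet overhead, IPv4 underlay
    "wireguard": 60,
    "ipsec-natt": 68,
    "openvpn-udp": 70,            # varies with cipher and HMAC choices
}

def tunnel_mtu(stack: str, underlay_mtu: int = 1500, margin: int = 16) -> int:
    """Underlay MTU minus stack overhead and a safety margin,
    rounded down to a multiple of 10."""
    return (underlay_mtu - OVERHEAD[stack] - margin) // 10 * 10

print(tunnel_mtu("wireguard"))    # 1420 — the usual wg0 recommendation
```

The margin parameter is where you encode paranoia: raise it if your path crosses PPPoE (subtract 8 on the underlay) or other unknown encapsulations.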
MSS Clamping: The Quick Fix for TCP
MSS (Maximum Segment Size) is the maximum TCP payload size in one segment. If your tunnel reduces MTU, you need to reduce MSS so segments fit without fragmentation. The method is to rewrite MSS in SYN packets on passing TCP flows at the network edge. Almost all modern gateways and even SOHO routers can do this with two clicks in 2026.
Practically: for MTU 1420, set MSS to 1360. For MTU 1400, 1360 is also common, considering unpredictable TCP options. Verify with tcpdump that SYN packets carry the expected MSS. If you see odd retransmissions or RTT spikes, lower MSS by 10–20 bytes and retest.
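One conservative way to derive the clamp value: tunnel MTU minus the base IP and TCP headers, minus headroom for TCP options. The 20-byte option slack is my assumption, chosen so the result matches the presets above:

```python
def clamp_mss(tunnel_mtu: int, ipv6: bool = False, option_slack: int = 20) -> int:
    """MSS to clamp in SYN packets: tunnel MTU minus base IP+TCP
    headers minus headroom for TCP options (timestamps, SACK)."""
    base_headers = 60 if ipv6 else 40     # IP (20/40) + TCP (20)
    return tunnel_mtu - base_headers - option_slack

print(clamp_mss(1420))   # 1360 — matches the WireGuard preset
```

If retransmissions persist at the computed value, increasing option_slack by 10–20 bytes is the code equivalent of "lower MSS and retest."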
Jumbo Frames, PMTUD and the DF Bit
Jumbo frames (MTUs >1500) make life easier but aren't always available. Datacenters use them; the internet, rarely. PMTUD theoretically works flawlessly: senders adjust packet size to the smallest link MTU. In practice, ICMP messages often get dropped, so senders don't learn about insufficient MTUs, causing hanging connections and mysterious stalls.
If you control both ends, enable PMTUD and don't block ICMP Type 3 Code 4. If not, take matters into your own hands: conservative MTU, MSS clamp, and explicit ping tests with DF. It’s boring, agreed, but it works and saves debugging hours.
2026 Practical Presets
The market boils down to a short list: WireGuard IPv4 — MTU 1420, MSS 1360; IPsec NAT-T IPv4 — MTU 1400–1420, MSS 1360–1380; OpenVPN-UDP — MTU 1400–1450, MSS 1360–1420; IPv6 stacks deduct another 20 bytes. Don’t forget jitter: consistently low jitter almost always trumps abstract MTU bumps of 20–30 bytes.
In 2026, many providers implement QoS Per-Hop Behavior on backbones, and corporate VPNs routinely copy DSCP from inner to outer IP headers. If your voice quality improved after this, don’t be surprised—packets finally got their rightful priority.
Hands-On: Traffic Captures and Analysis in Wireshark
Filters for tcpdump and Wireshark
Need quick filters? For IPsec: use esp or udp port 4500 (NAT-T), plus isakmp on udp 500 for IKEv2. For WireGuard: udp port 51820 (or your custom port). For OpenVPN-UDP: udp port 1194. For L2TP: udp port 1701. For GRE: ip proto 47. Add host filters for the VPN server IP to avoid city-wide noise.
Recipe: on client, run tcpdump -ni eth0 udp port 51820 and host X.X.X.X to show just WireGuard traffic to the target. On server, monitor external interfaces to check for packet loss and the internal tunnel interface (wg0, tun0, ipsecX) to compare flows. Differences in counters indicate pain points: lost frames or PMTUD hell.
Reading Packet Fields Manually
In Wireshark, expand an ESP packet. See SPI and Sequence? SPI identifies the Security Association; Sequence increments with each packet, serving as a loss and replay marker. In WireGuard, watch the Counter—a monotonic sequence preventing replays and ensuring order. OpenVPN’s header is simpler but still provides a Key ID and message type.
Check outer IP: TTL, DF bit, size. Inner IP is usually hidden except at endpoints after decryption. If outer packets fragment, look for something breaking MTU. Rising ICV errors mean key issues, desyncs, or in-transit corruption. A simple logic that saves hours every time.
Debugging MTU and Fragments: Quick Tips
Use ping -M do -s 1472 8.8.8.8 (Linux) to find the max packet size without fragmentation on a 1500 MTU path (1472 payload + 28 IP+ICMP). Check inside tunnel with pings between internal addresses. See big-packet losses? Lower MTU or MSS. Simple. Effective.
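Stepping sizes by hand gets old fast; a binary search finds the maximum in about ten probes. Here is a sketch where `probe` stands in for one `ping -M do -s SIZE host` attempt and returns True on success—the harness is illustrative, not a real pinger:

```python
def max_df_payload(probe, lo: int = 0, hi: int = 1472) -> int:
    """Largest ICMP payload that passes with DF set; path MTU is
    the result plus 28 bytes (20 IP + 8 ICMP)."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if probe(mid):
            lo = mid          # mid passed: the answer is mid or larger
        else:
            hi = mid - 1      # mid dropped: the answer is smaller
    return lo

# Simulate a path whose real MTU is 1420:
payload = max_df_payload(lambda size: size + 28 <= 1420)
print(payload, payload + 28)   # 1392 1420
```

Wire `probe` to a subprocess call running the actual ping and you have a crude but serviceable PMTUD tester for paths where ICMP feedback is filtered.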
Another trick: enable "Fragmentation needed" logging on border routers or firewalls. If bursts appear, PMTUD is failing. Temporarily ease TCP MSS clamping, then investigate where ICMP is lost. Sometimes it’s a long-standing ACL copied from a template that now breaks your life.
Safe Dumps and Masking Sensitive Data
Capture on the external interface if you want to hide inner IPs and app ports. Outer traffic is encrypted, so the payload stays private. But metadata—who talks to whom, when, and packet sizes—remains visible. When sharing dumps with contractors, trim pcap by address and time, and apply anonymization in Wireshark (Replace MAC/IP).
In 2026, “privacy by design” policies for debugging are standard. Keep dumps short-term, encrypt archives, delete keys after incident closure, and write small README files with filters, client versions, and MTU. In a month, you’ll thank yourself.
Cryptography and Security at the Packet Level
Authentication and Replay Protection
Every secured packet undergoes integrity and authenticity checks. In IPsec ESP, sequence numbers and a sliding replay window prevent replays. WireGuard’s nonce counter and receive window do the same. Any counter mismatch drops packets, so watch for replay-error spikes as signs of connection issues or restarts without rekey.
SA identification (SPI in IPsec) directs which keys to use and how to verify tags. Rekeying happens by time or traffic volume. Delayed rekeys increase nonce reuse risk. Timers matter. Good logs show key changes like smooth gear shifts.
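The replay check itself is a sliding bitmap anchored to the highest sequence number seen, in the spirit of RFC 4303. A simplified sketch (fixed 64-entry window, no extended sequence numbers, not production code):

```python
class ReplayWindow:
    """Sliding-window anti-replay check, simplified from RFC 4303."""
    def __init__(self, size: int = 64):
        self.size, self.top, self.bitmap = size, 0, 0

    def accept(self, seq: int) -> bool:
        if seq > self.top:                      # new highest: slide the window
            self.bitmap = (self.bitmap << (seq - self.top)) | 1
            self.bitmap &= (1 << self.size) - 1
            self.top = seq
            return True
        behind = self.top - seq
        if behind >= self.size:                 # too old to verify
            return False
        if (self.bitmap >> behind) & 1:         # already seen: a replay
            return False
        self.bitmap |= 1 << behind              # in-window, first sighting
        return True

w = ReplayWindow()
print(w.accept(1), w.accept(3), w.accept(3), w.accept(2))  # True True False True
```

This is why out-of-order delivery within the window is harmless, while a peer restart without rekey floods the counters with "old" sequence numbers and shows up as a replay-error spike.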
GCM vs. ChaCha20-Poly1305
AES-GCM is the de facto standard in IPsec and TLS thanks to hardware acceleration (AES-NI, ARMv8 crypto). It's fast and parallelizable. ChaCha20-Poly1305 shines where AES acceleration is lacking and predictable performance is needed—increasing WireGuard’s appeal. Both provide AEAD: encryption and authentication in one pass.
Latency-wise, ChaCha20 offers a smooth profile even on budget ARM routers. AES-GCM dominates on servers with hardware blocks. Hybrid schemes are rare in 2026 but post-quantum pilots already happen at key exchange layers (IKEv2, TLS 1.3)—interesting to know but not yet part of every packet.
PFS, Rekey and Key Lifetimes
Perfect Forward Secrecy means compromising long-term keys won’t reveal past sessions. Invisible in packets, but PFS mandates periodic key changes. SA lifetimes are limited by time and traffic. Logs show planned SPI rotations and counter resets. Long timers risk nonce reuse.
Practically, high-load tunnels rekey every 30–60 minutes or 1–2GB traffic, depending on risk. Short timers add handshake overhead but improve crypto hygiene. Find your balance and keep telemetry ready.
Metadata and Pattern Leaks
While content is encrypted, metadata leaks: endpoint IPs, ports, packet sizes, intervals. DPI learns to fingerprint WireGuard or OpenVPN by size stats and keepalive frequency. The fight is about traffic obfuscation: QUIC transport, variable ports, padding, HTTP/3 mimicry.
Engineering-wise, diverse behavior is best defense. Regular rekeys, variable keepalive, no anomalies like perfectly fixed packet sizes. It’s like masquerading—move naturally in a crowd, and DPI guards find it harder to single you out.
Optimization and 2026 Cases
Home Router and WireGuard: Quick Win
Case: ARM-based home router with 500 Mbps ISP. Set up WireGuard, MTU 1420, enable flow offload and fq_codel queueing, MSS clamp 1360. Result: 400–480 Mbps VPN throughput with only 2–4 ms added latency. No fancy tricks, just sharp tuning. Even 4K streaming flows smoothly through this tunnel.
Add monitoring: every 5 minutes short iperf3 runs, RTT logs via pings to various regions, interface drop counters. After a week, you'll have a live quality map and a clear baseline. When evening slowdowns hit, you see it—no guessing games with the ISP.
Corporate IPsec with NAT-T: MTU vs. Reality
Case: Branch offices using IPsec (IKEv2, AES-GCM) with inevitable NAT-T. Initial complaints of “slow RDP and weird Zoom issues.” First discovery: occasional fragmentation of outer packets and blocked ICMP. Solution: set MTU to 1400, MSS clamp 1360, allow ICMP Type 3 Code 4 across perimeter, enable DSCP copy for EF (voice). Within hours, graphs stabilized and voice cleared up.
Final touch: load balancing tunnels across two providers and health-check with BFD. On the packet level, we simply maintained size stability and priority instead of trying to "fix" apps. Sometimes, genius is just solid engineering.
OpenVPN in the Cloud and TCP Meltdown: Surviving It
Case: OpenVPN over TCP in a cloud segment behind proxy. 0.2–0.5% loss, but RTT swings 20–40 ms. Inner and outer TCP both react to loss, creating standing waves. Web app struggles. Quick fix: increase TCP window, enable BBRv2 on the outer channel, remove unnecessary TLS renegotiations. Long-term: migrate to UDP transport or QUIC wrapping if policy allows.
Meanwhile, apply a logical trick: lower MTU and MSS to reduce retransmission risk for large packets. Not a cure-all but essential. Also test alternate ports and HTTP/3 masking—modern DPI is quite tolerant of QUIC in 2026.
SASE, QUIC VPN and 2026 Trends
In 2026, SASE and SDP solutions grow: clients connect to the nearest PoP, then traffic travels over private backbones with prioritization. Packet level often shows QUIC transport carrying tunnels and encryption. This bypasses corporate proxies and reduces complex exceptions.
Another trend is edge hardware acceleration: SmartNICs offloading IPsec, eBPF/XDP for fast processing, ARMv9 with SVE2 delivering steady ChaCha20 on branch routers. Post-quantum hybrids for IKEv2 and TLS 1.3 are in pilots, edging closer to reality. The world is preparing keys for the future, and it's exciting.
Packet-Level Tunnel Diagnostics Checklist
Symptoms and Quick Tests
Symptom: pages load choppily, video stutters, RDP feels "rubbery." Test 1: ping with DF set and brute-force packet size to find safe max. Test 2: tcpdump external interface—look for fragments and losses. Test 3: check MSS in SYN packets to confirm clamp. Test 4: watch replay and ICV error counters on IPsec/WG.
If clamping and MTU fixes don’t help, check for jitter from the provider or QoS issues. Simple test: iperf3 to various ports and DSCP marks to watch stability. Sometimes just swapping last-mile provider works wonders. That’s life.
Decision Map During Analysis
Fixes are stepped: 1) Confirm overhead and set tunnel MTU 80–100 bytes below 1500 with margin. 2) Enable MSS clamp. 3) Restore ICMP "Fragmentation Needed" on perimeter. 4) Switch to UDP transport if needed. 5) Configure prioritization for voice and interactive streams. 6) Monitor rekeys and update client versions.
If none help, investigate DPI and proxies. Traffic might be flagged as "suspicious" and throttled. Testing QUIC wrappers or port 443 UDP sometimes opens all doors. This is hypothesis testing, not policy evasion.
Commands for Different OS
Linux: ip link set dev wg0 mtu 1420; iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu; tcpdump -ni eth0 udp port 51820. Windows: netsh interface ipv4 set subinterface "Ethernet" mtu=1500 store=persistent and configure MTU on VPN adapter via GUI or PowerShell; packet capture via pktmon or Wireshark.
BSD and pfSense: The WireGuard/OpenVPN interface has MTU and MSS fields. Don’t forget to enable pf scrub to normalize packets and allow necessary ICMP types. On routers with hardware offload, ensure encryption doesn’t fall into software due to exotic options. One wrong flag can cost hundreds of Mbps.
Common Beginner Mistakes and How to Avoid Them
Classic: leaving MTU at 1500 in tunnels without MSS clamping and then wondering about fragmentation. Second mistake: disabling ICMP "for safety," then spending a month chasing "slow internet." Third: using TCP-over-TCP when it isn't absolutely necessary. Fourth: ignoring ESP and WireGuard error counters that clearly signal replays or corruption.
Fifth: neglecting to test with small packets and interactive services. Throughput is only half the picture. Jitter and latency make up the other half. When both are in order, you feel it instantly—clicks are crisp, video is smooth, files fly like bullets.
FAQ: Quick and To the Point
How to Guess MTU for My VPN Without Knowing Exact Overhead
Start with 1500 and subtract 100 bytes for a conservative estimate. Set tunnel MTU to 1400 and MSS to 1360. Test ping with DF and payload sizes from 1300 to 1472 to find the max. If all passes, raise MTU in 10-byte steps until failure, then step back by 20–30 bytes. It’s a rough but effective method, perfect for unknown networks with ICMP filtering and no documentation.
Why Is WireGuard Often Faster Than OpenVPN on the Same Servers?
Two reasons. First, smaller and stable overhead with predictable ChaCha20-Poly1305 crypto. Second, core protocol design: fewer copies, fewer context switches, simpler implementation. On budget ARM and x86 without AES-NI, WireGuard usually outperforms OpenVPN by 1.5–2x in throughput and maintains steadier RTT under load. Exceptions exist but are rare.
How Harmful Is TCP-over-TCP and When Is It Acceptable?
Harmful where there's loss and jitter. Two layers of congestion control interfere, causing latency spikes. Acceptable if you must run traffic only over TCP 443 due to policies or proxies. Then, careful buffer tuning, BBR on the outer TCP, and proper caching help. But if possible, switch to UDP or QUIC. It’s physics, not preference.
Why Does IPsec Drop Connections on Large Files but Ping Is Fine?
Most likely MTU/PMTUD issues. Small pings are ~64 bytes; large files push segments to max size. If ICMP "Fragmentation Needed" messages get blocked, sender won’t reduce size, leading to timeouts, retransmissions, and drops. The cure: correct MTU on the tunnel, MSS clamping, and allowing necessary ICMP. Easy to verify with ping DF on large sizes plus tcpdump.
Is It Worth Copying DSCP From Inner IP to Outer IP in VPN?
Yes, if you have QoS on the path and want voice, video, and interactive traffic prioritized not just inside your network but also on the outbound channel. Many operators in 2026 honor DSCP in transport cores. Key: coordinate values and don’t overuse EF/CS5 to avoid harsh policing. Pilot first, then scale—golden rule.
Should I Switch to Post-Quantum Algorithms in VPN Already?
For daily traffic, it’s too early. Real post-quantum schemes appear in IKEv2/TLS as hybrids during handshakes. You won’t notice packet-level differences yet, and overhead and compatibility issues remain. For sensitive data with long lifetime, pilots make sense. Track updates in your implementations and keep a migration plan.
How to Tell If My Network Bottleneck Is VPN, Not ISP?
Compare benchmarks on the same channel: iperf3 direct vs. tunneled, plus RTT and jitter on small packets. If throughput drops 30–40% and CPU on encryption rises, VPN stack or MTU/MSS settings are the cause. If the drop is equal without VPN and latency rises under no load, the provider line is the issue. Run short continuous monitoring for a day, and the picture becomes clear.