Discussion on low-level data flow and home networks. Found a solution that lets a 4MB max chunk size flow as smoothly as 1/2 MB: it's a setting in QUIC

Sorry for posting a direct link to AI material. I tried to use Claude to get a picture of how this will be handled when the Saorsa components go live.

Here is what Claude gave in reply to:

Summarize the flow control and congestion control of ant_quic:

And a few follow up questions, such as:

Evaluate how this would work in a P2P network, where:

  • All nodes are in devices behind your typical, and lower grade consumer router.

  • The P2P network consists of millions of nodes

And:

How would it change things if receive_window is set at 5MB?

(5MB was chosen as per this PR.)

A couple of highlights from the replies:

The core issue is that this config inherits quinn’s defaults, which are tuned for a 100Mbps / 100ms datacenter-adjacent link. A P2P network of millions of consumer nodes needs defaults tuned for asymmetric, high-latency, bufferbloated, NAT-heavy paths — which pulls in almost the opposite direction on most parameters.
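To see why that reference link matters, the bandwidth-delay product (BDP) of a 100 Mbps / 100 ms path works out to about 1.25 MB, which is exactly the per-stream window discussed below. A quick sketch of the arithmetic (illustrative numbers, not measurements of any real node):

```rust
// Bandwidth-delay product: the amount of data that must be "in flight"
// for a window to keep the pipe full. bandwidth in bits/s, RTT in ms.
fn bdp_bytes(bandwidth_bps: u64, rtt_ms: u64) -> u64 {
    bandwidth_bps / 8 * rtt_ms / 1000
}

fn main() {
    // The datacenter-adjacent reference link: 100 Mbps at 100 ms RTT.
    println!("100 Mbps / 100 ms BDP: {} bytes", bdp_bytes(100_000_000, 100)); // 1_250_000 (~1.25 MB)

    // A typical consumer uplink: 20 Mbps at a bufferbloated 150 ms RTT.
    println!("20 Mbps / 150 ms BDP: {} bytes", bdp_bytes(20_000_000, 150)); // 375_000 (~375 KB)
}
```

The same window that merely fills a datacenter pipe is several times larger than a consumer uplink ever needs, which is why the defaults pull in the wrong direction here.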

Flow Control in a P2P Context

The receive_window: VarInt::MAX default is dangerous at scale. In a client-server setting this is fine because the server is well-resourced. In P2P, every node is simultaneously a receiver for potentially many peers. If VarInt::MAX is truly unconstrained and a node has dozens of active connections, memory pressure becomes a real concern on low-end consumer hardware. The per-stream window of ~1.25MB is the operative constraint in practice, but this should be explicitly bounded at the connection level rather than relying on stream windows to save you.
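A connection-level bound could look roughly like this, assuming ant_quic exposes quinn's TransportConfig (the numbers are placeholders for illustration, not ant_quic's shipped values):

```rust
use std::sync::Arc;

use quinn::{TransportConfig, VarInt};

// Hypothetical bounds for a low-end consumer node, not ant_quic's real config.
fn bounded_transport() -> Arc<TransportConfig> {
    let mut t = TransportConfig::default();
    // Cap total buffered data per connection at 5 MB (instead of VarInt::MAX),
    // and keep the ~1.25 MB per-stream window as the per-stream limit.
    t.receive_window(VarInt::from_u32(5_000_000))
        .stream_receive_window(VarInt::from_u32(1_250_000));
    Arc::new(t)
}
```

With an explicit connection-level cap, a node with dozens of peers has a worst-case receive-buffer footprint of roughly peers × 5 MB rather than something effectively unbounded.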

Congestion Control in a P2P Context

CUBIC is a poor default for this environment. CUBIC was designed for high-bandwidth, low-loss datacenter and backbone links. Consumer NAT environments have very different characteristics:

  • Bufferbloat is endemic. Cheap routers have large, unmanaged buffers. CUBIC will aggressively fill them, causing latency spikes that hurt RTT estimates and trigger spurious loss detection. The time_threshold of 9/8 RTT becomes meaningless when RTT is itself inflated by bloated router queues.

  • Asymmetric bandwidth. Home connections are typically heavily asymmetric (e.g. 500 Mbps down / 20 Mbps up). CUBIC’s window growth assumes reasonably symmetric paths; in practice the upload bottleneck will cause loss long before the download side is stressed.

  • BBR or even NewReno would be more appropriate. BBR models bandwidth and RTT explicitly rather than reacting purely to loss, making it far more resilient to bufferbloat. This is likely the biggest single misconfiguration for this use case.
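If ant_quic inherits quinn's congestion module, the swap suggested above could be sketched like this (quinn ships an experimental BBR implementation; the API names here are from recent quinn releases and may differ in ant_quic's fork):

```rust
use std::sync::Arc;

use quinn::{congestion::BbrConfig, TransportConfig};

// Replace the default CUBIC controller with quinn's BBR implementation,
// which models bandwidth/RTT instead of reacting only to loss.
fn bbr_transport() -> Arc<TransportConfig> {
    let mut t = TransportConfig::default();
    t.congestion_controller_factory(Arc::new(BbrConfig::default()));
    Arc::new(t)
}
```

The same one-line factory change works for NewReno via `congestion::NewRenoConfig`, so the choice of controller can be made without touching the rest of the transport setup.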

https://claude.ai/chat/c059bb5a-2e86-4aa2-bec0-9188c2365f8d