You're not maxing out your bandwidth.

You're being throttled. Not by your ISP, and not by your infrastructure, but by TCP congestion control designed in 1988, when links were measured in kilobits. You're running 2026 networks on 1988 code.

TCP: a protocol designed for a different era

TCP was designed in an era of dial-up modems and kilobit links. Its congestion control prioritises stability and fairness, which made sense when bandwidth was scarce and networks were small. But those same mechanisms become bottlenecks on modern infrastructure: over long-distance or high-latency links, TCP's conservative approach to sending data leaves available bandwidth unused.

Latency and round-trip time (RTT)

TCP relies on acknowledgements to confirm data delivery. As round-trip time increases, the sender must wait longer to receive acknowledgements before transmitting additional data. This directly limits throughput, even when sufficient bandwidth is available.

TCP pacing over long RTT
Over long distances, ACK feedback arrives later (higher RTT), which limits how quickly TCP can increase its sending rate and recover from loss.
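
The arithmetic behind this is worth making concrete: a single TCP flow can never exceed its window size divided by the RTT, no matter how fast the link is. A minimal sketch, with illustrative numbers:

    # A single TCP flow's throughput is bounded by window / RTT,
    # regardless of how much bandwidth the link itself offers.
    def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
        return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

    window = 64 * 1024  # a classic 64 KB window
    for rtt_ms in (1, 20, 100, 200):  # LAN, regional, transcontinental, intercontinental
        print(f"RTT {rtt_ms:>3} ms -> at most {max_throughput_mbps(window, rtt_ms):7.1f} Mbit/s")
    # RTT 1 ms allows ~524 Mbit/s; RTT 200 ms allows only ~2.6 Mbit/s.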

Congestion control behaviour

TCP congestion control algorithms increase sending rates gradually and reduce them sharply in response to packet loss. On long-distance paths, a single loss event can cause significant throughput reduction, followed by slow recovery due to increased RTT.
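
A toy additive-increase, multiplicative-decrease loop shows why recovery is slow. This is a simplified Reno-style model, not Tillered's implementation:

    # Toy AIMD model: the window halves on loss, then regains one segment
    # per RTT. Each loop iteration is one round trip, so the same loss event
    # costs far more wall-clock time on a long path.
    def rtts_to_recover(cwnd_before_loss: int) -> int:
        cwnd = cwnd_before_loss // 2      # multiplicative decrease on loss
        rtts = 0
        while cwnd < cwnd_before_loss:    # additive increase, one segment per RTT
            cwnd += 1
            rtts += 1
        return rtts

    cwnd = 1000  # segments in flight before the loss
    for rtt_ms in (10, 100, 200):
        seconds = rtts_to_recover(cwnd) * rtt_ms / 1000
        print(f"RTT {rtt_ms:>3} ms: {rtts_to_recover(cwnd)} RTTs = {seconds:.1f} s to regain full rate")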

Bandwidth-delay product

High-latency links require large congestion windows to fully utilise available bandwidth: the window must cover the bandwidth-delay product, the link bandwidth multiplied by the round-trip time. In practice, default TCP configurations often underutilise these long fat networks, resulting in lower-than-expected throughput.
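
As a rough sketch, assuming a 10 Gbit/s path with 150 ms RTT (numbers illustrative):

    # Bandwidth-delay product: the bytes that must be in flight to fill the pipe.
    def bdp_bytes(bandwidth_bps: float, rtt_ms: float) -> float:
        return bandwidth_bps / 8 * (rtt_ms / 1000)

    bdp = bdp_bytes(10e9, 150)                 # 10 Gbit/s link, 150 ms RTT
    print(f"Window needed: {bdp / 1e6:.0f} MB")          # ~188 MB in flight
    default_max = 6 * 1024 * 1024              # a common TCP buffer ceiling of 6 MB
    print(f"Achievable share with defaults: {default_max / bdp:.1%}")  # ~3%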

Over distance, standard TCP utilises only a fraction of available bandwidth. Tillered keeps throughput high across the same infrastructure, closing the gap between what you pay for and what you actually use. Curves are representative.

Tillered's approach

Tillered addresses these limitations through a layered architecture. TCP session segmentation provides the foundation, but the system extends well beyond it with purpose-built transport protocols, adaptive quality-of-service, and continuous measurement. A simple proxy or raw UDP tunnel cannot deliver the same results.

No single layer is responsible for the performance improvement. Segmentation reduces the effective distance of each hop, selectable transport modes govern how data moves between nodes for different path conditions, automation removes operational burden, and QoS layers tune traffic handling beyond what standard TCP provides. The combination is what delivers measurable results.

Reducing effective transport distance
A single long-haul flow must retransmit end-to-end when loss or corruption occurs. Segmenting the path through Tillered nodes means each segment can detect and correct errors locally with shorter RTT, recovering faster without impacting the full path.
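
A back-of-the-envelope model makes the difference visible. Assuming a 200 ms end-to-end path split into four roughly even segments (illustrative, not a measurement):

    # A lost packet is retransmitted across whichever scope detects it.
    # End to end, detection and resend both pay the full-path RTT;
    # segmented, only the segment where the loss occurred pays its local RTT.
    full_path_rtt_ms = 200
    segments = 4
    segment_rtt_ms = full_path_rtt_ms / segments

    print(f"End-to-end recovery : ~{full_path_rtt_ms} ms per lost packet")
    print(f"Per-segment recovery: ~{segment_rtt_ms:.0f} ms per lost packet")
    # The other three segments keep streaming while one segment retransmits.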

Transport layer operation

Tillered operates at layer 4 of the network stack. It accelerates TCP traffic transparently, meaning existing applications and transfer tools continue to work without modification. If you already use specialised transfer software, Tillered provides additional acceleration underneath, complementing application-level optimisations. VPNs and encrypted tunnels that run over TCP also benefit from the same acceleration without any changes. Non-TCP IP traffic is not optimised, but it is carried between nodes through IP tunnels so that all routed traffic follows the same path.

Non-disruptive integration

Deployment uses policy-based routing configured on your firewall or network hardware to steer selected traffic through Tillered nodes. If a node becomes unavailable, the firewall falls back to the direct path and can load-balance across links. No traffic is dropped. Tillered nodes do not need to sit on your LAN. They can be deployed in a DMZ or isolated VLAN, with routing handled entirely by the firewall.

Under the hood

The approach above relies on several interconnected systems working together. Path segmentation, transport selection, adaptive QoS, and automated orchestration each solve a distinct problem, but the performance gains come from their combination.

Segmented paths and locality

Shorter segments also allow throughput to ramp up faster, reaching higher sustained rates in less time than a conventional end-to-end connection. The result is more predictable performance across varying path conditions, without requiring changes to the applications or protocols running over the link.
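
The ramp-up effect can be sketched with slow start, where the window doubles every RTT: time to reach a target rate scales with RTT times log2 of the target window. Illustrative model:

    import math

    # Slow start doubles the congestion window once per RTT, so reaching
    # a target window takes roughly log2(target) round trips.
    def ramp_time_ms(target_segments: int, rtt_ms: float) -> float:
        return math.ceil(math.log2(target_segments)) * rtt_ms

    target = 4096  # segments needed to fill the pipe
    print(f"One 200 ms path : {ramp_time_ms(target, 200):.0f} ms to full rate")
    print(f"Four 50 ms hops : {ramp_time_ms(target, 50):.0f} ms per segment, ramping in parallel")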

See measured results from production deployments →

Transports

Between the entry and exit nodes described above, data moves over a purpose-built transport protocol that handles reliability, congestion control, and path-aware optimisations. Non-TCP traffic routed through Tillered is forwarded via IP tunnels, following the same path but without TCP-level optimisation. The transport layer is extensible: two transports are available today, TCP and UDP, with more under active development. The choice of transport affects only the inter-node link; it does not change what the application or destination sees.
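
In sketch form, an extensible transport layer amounts to a small, fixed contract between nodes. The interface below is hypothetical, written to illustrate the idea rather than Tillered's actual API:

    from abc import ABC, abstractmethod

    # Hypothetical inter-node transport contract (not Tillered's real API).
    # Each transport owns reliability and congestion control on the link
    # between two nodes; both ends of the connection still see plain TCP.
    class InterNodeTransport(ABC):
        @abstractmethod
        def send(self, payload: bytes) -> None:
            """Queue payload for reliable, in-order delivery to the peer node."""

        @abstractmethod
        def recv(self) -> bytes:
            """Block until the next in-order payload arrives from the peer node."""

    # New transports plug in by implementing the same two methods,
    # which is what keeps the layer extensible as protocols are added.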

TCP transport is the default and delivers the highest peak throughput. Hardware offloading in modern network interfaces gives TCP an advantage at the packet level that UDP cannot match, making it the best choice for large, continuous transfers where sustained throughput matters most.

UDP transport is built on a modified KCP protocol and designed for workloads that send many small bursts rather than one large continuous stream, such as camera networks or IoT devices. TCP's congestion control treats each burst as a new ramp-up cycle, adding delay across many concurrent streams. UDP transport avoids this overhead, delivering more responsive handling for bursty traffic. Despite running over UDP, this is a reliable protocol with its own congestion handling and delivery guarantees. TCP transport remains the right choice for most environments.

Diagram showing TCP and UDP as two selectable inter-node transport options, with standard TCP connections on both the application and destination sides
Both transports provide reliable data delivery between Tillered nodes. Applications and destinations always see standard TCP connections. Operators select the transport per service or link, and the transport layer is extensible as new protocols are added.
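
Per-link selection could be as simple as a mapping like the one below; the names and structure are invented for illustration:

    # Hypothetical per-link transport selection (names invented).
    # Bulk transfer links favour the TCP transport for sustained throughput;
    # bursty device traffic favours the KCP-based UDP transport.
    LINK_TRANSPORTS = {
        "datacenter-a -> datacenter-b": "tcp",  # large continuous transfers
        "hq -> camera-network":         "udp",  # many small concurrent bursts
    }

    def transport_for(link: str) -> str:
        return LINK_TRANSPORTS.get(link, "tcp")  # TCP remains the default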

Adaptive QoS and transport optimisation

Static QoS rules are a common approach to traffic management, but they assume network conditions stay the same. In practice, link quality shifts over time: a path that performed well in the morning may show increased loss or jitter by afternoon. Fixed priority rules cannot account for this, and manual re-tuning does not scale across distributed networks.

Tillered takes a different approach. Each segment provides local visibility into latency, loss, and throughput, and the system uses this per-segment information to adjust traffic handling in real time. Critical flows receive appropriate resources based on observed conditions rather than assumptions.
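
In sketch form, such a loop re-weights traffic classes from live per-segment measurements instead of static rules. Everything below is hypothetical and illustrative only:

    from dataclasses import dataclass

    # Hypothetical per-segment measurement and a toy adjustment rule;
    # Tillered's actual policy logic is not shown here.
    @dataclass
    class SegmentStats:
        loss_pct: float
        jitter_ms: float

    def priority_boost(stats: SegmentStats) -> int:
        """Give critical flows more headroom as the segment degrades."""
        boost = 0
        if stats.loss_pct > 1.0:   # sustained loss: protect critical traffic
            boost += 2
        if stats.jitter_ms > 20:   # rising jitter: protect latency-sensitive flows
            boost += 1
        return boost

    # Re-evaluated continuously, so the afternoon's degraded link is handled
    # differently from the morning's clean one, without manual re-tuning.
    print(priority_boost(SegmentStats(loss_pct=2.3, jitter_ms=35)))  # -> 3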

Automation and control plane

Managing a distributed network of optimisation nodes introduces real operational complexity. Provisioning nodes, establishing links, configuring routing, and applying QoS policies all need to happen consistently, and keeping them consistent manually does not scale.

Tillered's control plane handles this automatically. Operators define a control manifest describing the desired link topology and policies. From that manifest, nodes provision network links, configure routing, and apply transport and QoS settings without manual intervention. Operators are given a gateway address for each link and route traffic through it. When an operator changes QoS rules, transport options, or link configuration, the control plane propagates those changes to the relevant nodes automatically. The gateway and external routes remain stable.
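
A manifest of this kind might look like the following; the keys and structure are invented for illustration, not Tillered's actual schema:

    # Hypothetical control manifest (keys invented for illustration).
    # The control plane reads a description like this and provisions links,
    # routing, transport, and QoS on the relevant nodes automatically.
    MANIFEST = {
        "links": [
            {
                "name": "eu-west -> ap-south",
                "nodes": ["node-eu-west", "node-ap-south"],
                "transport": "tcp",                       # per-link transport selection
                "qos": {"critical_ports": [443], "mode": "adaptive"},
            },
        ],
    }
    # Operators route traffic at the gateway address each link exposes;
    # editing the manifest changes node behaviour, never the gateway or routes.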

To make this work cleanly, Tillered maintains a strict separation between control-plane and data-plane traffic. The control plane coordinates configuration and topology, while data-plane traffic flows directly between nodes without passing through external services. This separation allows the same core technology to operate in both centrally managed and fully decentralised deployment models. Tillered Cloud and Tillered Self-Hosted implement the control plane differently, with the self-hosted variant providing additional operational capabilities.

Control plane versus data plane
The control plane handles configuration and topology. Data-plane traffic flows directly between nodes.

What Tillered does not do

Tillered is intentionally focused in scope. The system does not:

  • Inspect or modify application payloads
  • Add transport-layer encryption; application encryption passes through unchanged
  • Proxy traffic through external services
  • Require application-level integration
  • Depend on continuous internet connectivity

These constraints simplify deployment, reduce trust assumptions, and make behaviour easier to reason about in regulated environments.

Relationship to deployment models

The technology described here is shared across all Tillered deployment models. Differences between Tillered Cloud and Tillered Self-Hosted relate to control-plane placement and operational responsibility, not core functionality.

Read more about deployment models →