When discussing network performance, bandwidth is often the primary focus. Higher bandwidth is equated with faster speeds, allowing for quicker file transfers and increased productivity. However, another crucial aspect of network performance, latency, has gained significant attention recently.
Latency refers to the time delay within a network, specifically the time it takes for data packets to travel between nodes. While it might seem related to bandwidth, latency is a distinct characteristic.
Consider transmitting a live video feed to a satellite in geosynchronous orbit. A noticeable delay occurs, as seen in news broadcasts where anchors pause for reporters overseas to respond. This delay is latency, representing the time taken for the signal to travel to the satellite and back down to Earth.
Doubling the bandwidth in this scenario wouldn’t reduce the delay. While insufficient bandwidth can cause packet queues, once that’s addressed, increasing bandwidth won’t make the signal travel faster.
This is because the signal, traveling at the speed of light, is already moving as fast as physics allows. The speed of light (roughly 186,000 miles per second, or about 300,000 km per second) sets a hard ceiling on how quickly data can travel between two points. While we can't exceed that limit, networks can certainly fall short of it.
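As a back-of-the-envelope check, the satellite delay can be computed directly from the orbit's altitude and the speed of light. The sketch below assumes the satellite sits directly overhead at geosynchronous altitude; a real ground station sees a slightly longer slant path.

```python
# Rough delay to a geosynchronous satellite. Assumes the satellite is
# directly overhead at ~35,786 km altitude; a real ground station at an
# angle sees a slightly longer slant path.

SPEED_OF_LIGHT_KM_S = 299_792   # ~186,000 miles per second
GEO_ALTITUDE_KM = 35_786        # geosynchronous orbit altitude

one_way_s = GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S
up_and_down_s = 2 * one_way_s   # up to the satellite and back down

print(f"One leg:     {one_way_s * 1000:.0f} ms")      # ~119 ms
print(f"Up and down: {up_and_down_s * 1000:.0f} ms")  # ~239 ms
```

Nearly a quarter of a second just to relay the signal once, before any equipment adds its own delay, is why those on-air pauses are so noticeable, and a two-way conversation doubles it again.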
Satellite internet users experience this delay firsthand. VoIP calls become impractical unless conducted in half-duplex mode, like walkie-talkies, with only one person speaking at a time. This contrasts with the full-duplex communication we're accustomed to, where both parties can speak, and interrupt, naturally.
Low latency was inherent in the analog telephone system, where each call occupied a dedicated electrical path from end to end. Similarly, TDM (Time Division Multiplexed) circuits, such as T1 lines, maintain low latency because the signal's speed is limited only by propagation through the physical medium (copper wire or fiber optic cable).
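To put numbers on this, the propagation delay of a dedicated circuit depends only on distance and the medium. Here is a minimal sketch, assuming a typical velocity factor of about 0.68 for light in single-mode fiber (an assumed value; copper pairs have a comparable factor):

```python
# One-way propagation delay over a dedicated point-to-point circuit.
# The 0.68 velocity factor is an assumed typical value for light in
# single-mode fiber, not a universal constant.

SPEED_OF_LIGHT_KM_S = 299_792

def propagation_delay_ms(distance_km: float, velocity_factor: float = 0.68) -> float:
    """Time for a signal to cross `distance_km` of cable, in milliseconds."""
    return distance_km / (SPEED_OF_LIGHT_KM_S * velocity_factor) * 1000

# A 1,000 km circuit adds about 5 ms each way, regardless of bandwidth.
print(f"{propagation_delay_ms(1000):.1f} ms one way over 1,000 km")
```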
The primary culprit for internet latency is the number of routers between the source and destination. Each router adds a small processing and queuing delay, and with many routers in the path, the accumulated latency becomes noticeable.
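A toy model makes the contrast clear: end-to-end latency is the propagation delay plus whatever each router contributes. The per-hop figure below is an illustrative assumption, not a measurement; real values vary with hardware and load.

```python
# Toy model: end-to-end latency as fiber propagation plus a fixed
# per-router overhead. The 0.5 ms per hop is an illustrative assumption;
# real per-hop delays vary with hardware and congestion.

def path_latency_ms(distance_km: float, hops: int, per_hop_ms: float = 0.5) -> float:
    propagation = distance_km / (299_792 * 0.68) * 1000  # one way through fiber
    return propagation + hops * per_hop_ms

# Same 1,000 km of distance: a direct circuit vs. an internet-style path.
print(f"Direct circuit, 0 hops:  {path_latency_ms(1000, 0):.1f} ms")   # ~4.9 ms
print(f"Internet path, 15 hops: {path_latency_ms(1000, 15):.1f} ms")   # ~12.4 ms
```

On a path like this, the routers contribute more delay than the physics does.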
The significance of latency depends on the application. Latency doesn’t significantly impact applications like email or file transfers, which were central to the internet’s initial design. However, real-time interactive applications have highlighted the importance of low latency.
VoIP telephony is highly sensitive to latency, although a one-way delay of up to roughly 100ms is generally considered acceptable. Delays of that size are noticeable in online gaming, and in high-frequency financial trading even milliseconds can mean the difference between profit and loss. The rise of automated trading systems has further emphasized the need for low latency in high-speed networks.
When reducing latency is the goal, the public internet and similar architectures are not ideal. The internet's design prioritizes reliability over speed, routing packets along whatever paths will get them to their destination.
The most effective low-latency networks run high-speed fiber optic cable along the straightest possible path between locations, with as little equipment between the endpoints as possible. Signal regeneration may be unavoidable over long distances, but switches and routers should be kept to a minimum, and any that remain should be optimized to forward traffic with minimal processing time.
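One way to sanity-check a quoted latency figure is to compute the physical floor: the great-circle distance between the two sites divided by the speed of light in fiber. The sketch below uses New York and Chicago as illustrative endpoints (assumed coordinates); any real fiber route is longer than the great circle, so this is a lower bound, not an estimate.

```python
# Physical floor on one-way latency: great-circle distance through
# fiber at an assumed 0.68c. New York and Chicago are illustrative
# endpoints; a real route is always longer than the great circle.

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points on Earth, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(h))

dist_km = great_circle_km(40.71, -74.01, 41.88, -87.63)  # NYC -> Chicago
floor_ms = dist_km / (299_792 * 0.68) * 1000
print(f"{dist_km:.0f} km great circle -> {floor_ms:.2f} ms one-way floor")
```

A quote far above that floor suggests a longer route or extra equipment in the path.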
When low latency is critical, it’s important to be specific with your requirements. Simply requesting a high-bandwidth connection doesn’t guarantee low latency. Many major carriers understand the importance of latency for financial institutions and other latency-sensitive businesses. They offer specialized low-latency network connections tailored to these needs.
If your business depends on low latency, make sure to prioritize this requirement when comparing high-bandwidth network service prices and availability for your locations.