In this blog post, we attempt to demystify the topic of network latency. While the concept of bandwidth is easily understood by most IT engineers, latency has always been something of a mystery. We take a look at the relationship between latency, bandwidth, and packet drops, and the impact of all this on "the speed of the internet". Next, we examine the factors that contribute to latency, including propagation, serialization, queuing, and switching delays. Finally, we consider the more advanced topic of the relationship between latency and common bandwidth limitation techniques.

At first glance, it might seem that it doesn't matter how long it takes for a chunk of data to cross the Atlantic Ocean, as long as we have enough bandwidth between the source and destination. What's the big deal if our 100GE link is one kilometer or one thousand kilometers long? It's still 100GE, isn't it? This would be correct if we never experienced any packet loss in the network. In fact, packet drops are part of the very nature of IP networks.

Latency impact is all about flow and congestion control (which these days is mainly handled by TCP). When one end sends a chunk of data to the other, it usually needs an acknowledgment that the data has been received by the other end. If no acknowledgment is received, the source end must resend the data. The greater the latency, the longer it takes to detect a data loss.