Latency metrics

Latency metrics are essential for evaluating how well your applications and services perform. Latency is the total time a piece of data takes to travel from its source to its destination, typically across a network, and it is one of the primary indicators of service quality. It is usually measured in milliseconds; the lower the latency, the better the user experience.

By tracking P90, P95, and P99 latencies, you can identify potential bottlenecks, improve the user experience, and ensure your systems perform well for the vast majority of requests.

Metrics

P99 (99th percentile): The P99 metric indicates that 99% of requests are completed within the recorded latency. As an example, if we say that our application has a P99 latency of less than or equal to 5 milliseconds, then we mean that 99% of calls are serviced with a response under 5 milliseconds.

P95 (95th percentile): P95 latency indicates that 95% of requests complete below the specified threshold, while the slowest 5% take longer.

P90 (90th percentile): This metric signifies that 90% of requests are completed within the given latency value, while the remaining 10% took longer.
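The definitions above can be sketched in code. This is a minimal illustration using the nearest-rank method on a hypothetical list of latency samples; real monitoring systems typically compute percentiles from histograms or sketches rather than raw samples.

```python
# Compute P90, P95, and P99 from a list of request latencies.
# The latency samples below are hypothetical, in milliseconds.

def percentile(samples, p):
    """Return the value below which p% of samples fall (nearest-rank method)."""
    ordered = sorted(samples)
    # Nearest rank: the ceiling of p% of the sample count, 1-indexed.
    rank = max(1, -(-len(ordered) * p // 100))  # ceiling division
    return ordered[rank - 1]

latencies_ms = [12, 8, 15, 110, 9, 14, 10, 250, 11, 13]

for p in (90, 95, 99):
    print(f"P{p}: {percentile(latencies_ms, p)} ms")
# With only 10 samples, P95 and P99 both land on the single slowest request,
# which is why tail percentiles are only meaningful with enough traffic.
```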

P99 is the metric most commonly used to monitor and improve overall network latency or application response time. Percentiles distinguish unusual occurrences from typical performance patterns, which is why network administrators focus on optimizing P99 latency to improve overall responsiveness, particularly during periods of high demand.
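A short example of why percentiles separate outliers from typical behavior better than an average does. The sample distribution below is hypothetical: a skewed workload where a few slow requests barely move the mean but dominate the tail.

```python
# Illustrates why percentiles are more informative than the mean
# on skewed latency distributions. Values are hypothetical, in ms.
import statistics

latencies_ms = [10] * 98 + [900, 1000]  # 98 fast requests, 2 slow outliers

mean = statistics.mean(latencies_ms)
p99 = sorted(latencies_ms)[98]  # nearest rank: 99th of 100 sorted samples

print(f"mean = {mean:.1f} ms")  # the mean barely registers the outliers
print(f"P99  = {p99} ms")       # the percentile exposes the slow tail
```

Here the mean (28.8 ms) suggests the service is fast, while P99 (900 ms) reveals that one request in a hundred is nearly two orders of magnitude slower.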
