gRPC performance testing - 3 ways to serve your customer better
How to improve the quality of your gRPC microservices?
Response time is one of the three key metrics in performance testing, and the requirements for business-critical applications are demanding.
For example, the average response time requirement for essential microservices is often 20-100 milliseconds.
Total response time has three components: client time, network time, and server time.
Formula: Total response time = Client time + Network time + Server time
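The formula can be illustrated with a small sketch; the millisecond values below are illustrative assumptions, not measurements from any real service:

```python
# Break total response time into its three components (values in milliseconds;
# real numbers come from your own measurements).
client_time_ms = 5    # serialization and connection handling on the client
network_time_ms = 10  # round-trip latency on the wire
server_time_ms = 45   # handler logic, database and external-service calls

total_response_time_ms = client_time_ms + network_time_ms + server_time_ms
print(total_response_time_ms)  # 60
```

With a 20-100 ms budget, a breakdown like this makes it obvious where to look first: here the server component dominates the total.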
While great client and network performance are among gRPC's main benefits, server time is trickier, as the server usually relies on databases or external services to get the job done.
The first way to serve your customer is by testing real-world scenarios.
A good example of a real-world scenario is Black Friday. For many microservices, a performance testing tool is the only way to generate a realistic number of users and the load they produce. Usually these tests are executed in test or staging environments, but with careful planning, testing in production may also be possible. Luckily, performance testing tools for gRPC have improved lately, and testing gRPC services is now as straightforward as testing traditional web stores and REST APIs.
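The core of such a scenario is driving many concurrent requests and recording latencies. Below is a minimal sketch of that idea; `call_service` is a hypothetical placeholder standing in for a real gRPC stub call (in practice, a dedicated tool such as ghz or k6 would generate the load and report these statistics for you):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_service():
    """Placeholder for a real gRPC call, e.g. stub.Checkout(request).
    Here we just sleep to simulate ~10 ms of server work."""
    time.sleep(0.01)

def timed_call(_):
    start = time.perf_counter()
    call_service()
    return (time.perf_counter() - start) * 1000  # latency in milliseconds

NUM_REQUESTS = 200  # total requests in the scenario
CONCURRENCY = 20    # simulated simultaneous users

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_call, range(NUM_REQUESTS)))

avg = sum(latencies) / len(latencies)
p95 = latencies[int(len(latencies) * 0.95)]
print(f"avg={avg:.1f} ms, p95={p95:.1f} ms")
```

Comparing the average and the 95th percentile against your requirement (e.g. 20-100 ms) tells you whether the Black Friday scenario passes.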
The second way to serve your customer is by knowing your maximum capacity and bottlenecks.
To serve your customer, you must know the maximum throughput of your service. You also need exact, or at least estimated, requirements to compare against that maximum: for example, a measured maximum of 200 transactions per second versus a required 500 transactions per second. With this information you know that improvements are needed, and after making them, it is easy to run exactly the same test again and see the difference.
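The comparison itself is simple arithmetic; this sketch uses the numbers from the example above:

```python
# Compare the measured maximum throughput against the requirement.
measured_max_tps = 200  # maximum transactions/second found in a load test
required_tps = 500      # transactions/second required by the business

shortfall_tps = max(0, required_tps - measured_max_tps)
meets_requirement = shortfall_tps == 0
print(meets_requirement, shortfall_tps)  # False 300
```

A shortfall of 300 transactions per second quantifies exactly how much headroom the improvements (and the re-run of the same test) need to deliver.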
The third way to serve your customer is an adequately tested scaling solution.
When you know a big performance peak is coming, it is important to provision enough hardware and network capacity and to enable automated scaling. However, automated scaling does not always work: it may be that adding more hardware does not increase your performance or throughput at all. Sometimes automated scaling is also too slow with default settings, and problems become severe before the cavalry (more capacity) arrives to help. Once you have tested and know that scalability works as expected, your customers will always be served well.
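The cost of slow autoscaling can be estimated with a toy calculation; all numbers below are illustrative assumptions, not measurements:

```python
# Toy model: a traffic spike arrives, but autoscaling only adds
# capacity after a delay. How many requests pile up in the meantime?
capacity_tps = 200      # current capacity, transactions/second
spike_tps = 500         # incoming load during the peak
scale_up_delay_s = 120  # seconds before autoscaling brings new capacity online

# Requests that must queue or be rejected until the new capacity arrives.
unserved = max(0, spike_tps - capacity_tps) * scale_up_delay_s
print(unserved)  # 36000
```

Even a two-minute scale-up delay leaves tens of thousands of requests unserved in this model, which is why the scaling behavior itself, not just the steady state, needs to be load tested.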