Many of the mobile solutions we build rely on backend systems. When the user base of such a mobile application is growing rapidly, it is important to know the weak spots of the environment in which the application lives. This article explains how we set up and performed a stress test at one of our clients.
For this example, we are looking at a mobile application that heavily relies on a backend environment provided by our client. The app does not connect directly to these systems; our client provides an enterprise service bus (ESB) through which they are made available. We have also placed a middleware server between the app and the ESB that adds functionality not available in the backend systems, allowing the app to be smart rather than just an interface to those systems.
Preparing the test
Besides constant user research and application updates, adoption of this app is actively stimulated within the client’s company. We have already seen 500% growth in daily active users over the last year, and usage is expected to keep growing considerably. Growth in usage means growth in traffic. Given the setup described above, there are several components that should be prepared to handle such an increase in traffic. To identify whether there is work to be done (and if so, where), we organized a stress test with all involved parties.
By observing the server logs of our middleware, we determined a baseline consisting of three parameters:
1. the average usage (during working hours) expressed in requests per second,
2. the peak usage,
3. the peak concurrency.
This last parameter describes the number of requests that are being processed simultaneously.
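The three parameters above can be derived from ordinary access logs. The sketch below assumes each log entry has been parsed into a start timestamp and a request duration; the log format and parsing are your own, and the average is taken over seconds that actually saw traffic.

```python
# Derive the three baseline parameters from parsed middleware access logs.
# `entries` is a list of (start: datetime, duration_s: float) tuples;
# how you parse these out of your log format is up to you.
from collections import Counter
from datetime import datetime

def baseline(entries):
    # Bucket requests per wall-clock second.
    per_second = Counter(start.replace(microsecond=0) for start, _ in entries)
    seconds_covered = max(len(per_second), 1)
    avg_rps = len(entries) / seconds_covered   # 1. average usage (req/s)
    peak_rps = max(per_second.values())        # 2. peak usage (req/s)

    # 3. peak concurrency: sweep over request start/end events and
    # track how many requests are in flight at any moment.
    events = []
    for start, duration in entries:
        events.append((start.timestamp(), 1))                # request starts
        events.append((start.timestamp() + duration, -1))    # request ends
    in_flight = peak_concurrency = 0
    for _, delta in sorted(events):
        in_flight += delta
        peak_concurrency = max(peak_concurrency, in_flight)

    return avg_rps, peak_rps, peak_concurrency
```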
Usage is estimated to grow by a factor of 6 over the next year. As a target for the stress test we proposed to double this. If all involved components could handle that load, we knew the environment would keep functioning even on days with an exceptional load.
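In other words, the test target is baseline × growth × safety margin. The baseline figures below are made-up placeholders, not our client's real numbers:

```python
# Illustrative target calculation; the baseline figures are placeholders.
GROWTH_FACTOR = 6        # expected usage growth over the next year
SAFETY_FACTOR = 2        # double the expected growth for the test target

baseline_avg_rps = 40    # hypothetical: average requests per second
baseline_peak_rps = 120  # hypothetical: peak requests per second

target_avg_rps = baseline_avg_rps * GROWTH_FACTOR * SAFETY_FACTOR    # 480
target_peak_rps = baseline_peak_rps * GROWTH_FACTOR * SAFETY_FACTOR  # 1440
```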
The stress test
To simulate an increasing load that would reach all involved components, we set up a dedicated test server with sufficient resources to ensure it would not be the weakest link. On this server we installed an adapted version of ApacheBench that could perform requests to multiple URLs, read randomly from an input file. The input file was constructed such that the variety of URLs resembled production traffic: some backend systems are queried less often than others, and this was reflected in how many request URLs for each system were included in the input file.
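Constructing such a weighted input file can be as simple as the sketch below. The URL paths and weights are illustrative; in practice the weights should mirror the request distribution you observe in production logs.

```python
# Build a load-test input file whose URL mix resembles production traffic.
# Paths and weights are illustrative placeholders.
import random

WEIGHTED_URLS = [
    ("/api/news", 50),     # queried most often in production
    ("/api/profile", 30),
    ("/api/reports", 20),  # hits a heavier backend system, queried least
]

def build_input_file(path, total=1000, seed=42):
    # Expand each URL according to its weight, then sample uniformly.
    pool = [url for url, weight in WEIGHTED_URLS for _ in range(weight)]
    rng = random.Random(seed)  # fixed seed keeps test runs reproducible
    with open(path, "w") as f:
        for _ in range(total):
            f.write(rng.choice(pool) + "\n")
```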
An accompanying script was written to run ApacheBench in consecutive configurations that gradually increased the number of requests and their concurrency.
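The driver script amounted to little more than a loop over increasingly demanding configurations. The sketch below uses stock ApacheBench (`ab`) against a single URL for illustration; our adapted version read its URLs from the input file described above, and the step values here are placeholders.

```python
# Ramp-up driver: run the load tool in consecutive configurations with
# gradually increasing request totals and concurrency levels.
import subprocess

STEPS = [
    # (total requests, concurrency) -- placeholder values
    (1_000, 10),
    (5_000, 50),
    (10_000, 100),
    (20_000, 200),
]

def build_cmd(url, requests, concurrency):
    # Stock ApacheBench flags: -n total requests, -c concurrency.
    return ["ab", "-n", str(requests), "-c", str(concurrency), url]

def run_ramp(url, steps=STEPS):
    for requests, concurrency in steps:
        cmd = build_cmd(url, requests, concurrency)
        print("running:", " ".join(cmd))
        subprocess.run(cmd, check=True)  # abort the ramp if a step fails
```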
A stress test like this is not successful simply because no component falls over. The main goal is to expose any component that needs attention, and this test did so clearly. Under an increasing number of concurrent requests, our middleware turned out to be the bottleneck: the number of concurrent requests it could handle was limited by what we had configured in our application server. As a result, the load passed on by our middleware to the ESB and backend systems never exceeded what the middleware itself could process.
An easy fix would be to increase the maximum number of application processes, but in recent years we have been working with Docker on new projects. With Docker Swarm we can easily configure a cluster of nodes over which we deploy services, effectively spreading the load. This also allows us to scale out more flexibly whenever usage of the app grows further.
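In a Swarm setup the number of middleware instances becomes a deployment parameter rather than an application server setting. A minimal compose sketch, in which the service name, image, and replica count are illustrative:

```yaml
# docker-compose.yml (Swarm stack) -- names and counts are placeholders
version: "3.8"
services:
  middleware:
    image: registry.example.com/middleware:latest
    deploy:
      replicas: 4          # spread the load over 4 containers
      update_config:
        parallelism: 1     # roll out updates one replica at a time
    ports:
      - "8080:8080"
```

Scaling up later is then a single command, e.g. `docker service scale <stack>_middleware=8`, without touching the application server configuration inside each container.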
The next step
As soon as we have made the necessary changes, we will repeat this stress test to verify that our middleware is no longer a bottleneck. This will also increase the pressure on the other systems, which will help us identify which component needs attention next.
If you are encountering load issues with your app, or want to share thoughts on how to set up and perform a stress test, don’t hesitate to contact us.