
Performance testing


We used Apache JMeter 3.1 to generate and execute performance tests.

Configuration

Tests were executed against the v2.0.0 tagged release running on the live test environment (https://potatonet.ee).

The test plan configuration is shown in the following screenshot.

[Screenshot: Test plan configuration]

Thread Group

Allows configuring the number of threads (users) that will execute the tasks defined within it.

HTTP Request Defaults

Defines the web context address, which is set to potatonet.ee in the current test scenario.

HTTP Cookie Manager

This element is responsible for storing the JSESSIONID cookie received upon authentication and sending it with subsequent requests.
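
For illustration only, the sketch below shows the same idea outside of JMeter, using Java 11's java.net.http client (which is not part of our test plan): a CookieManager attached to the client plays the same role as the HTTP Cookie Manager, keeping the session cookie between requests. The class name is made up for the example.

    import java.net.CookieManager;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class SessionCookieDemo {
        public static void main(String[] args) throws Exception {
            // The cookie handler stores the JSESSIONID set by the first response
            // and attaches it to every subsequent request automatically.
            HttpClient client = HttpClient.newBuilder()
                    .cookieHandler(new CookieManager())
                    .build();

            HttpRequest login = HttpRequest.newBuilder(URI.create("https://potatonet.ee/login"))
                    .GET()
                    .build();
            client.send(login, HttpResponse.BodyHandlers.ofString());

            // The session cookie from the previous response is sent along here.
            HttpRequest feed = HttpRequest.newBuilder(URI.create("https://potatonet.ee/"))
                    .GET()
                    .build();
            HttpResponse<String> response = client.send(feed, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        }
    }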

Recording Controller

This controller was used to record requests from a browsing session through a local proxy. It contains the mappings that each thread will request.

Once Only Controller

Elements inside this controller are executed only once per thread, at the beginning of the test. This guarantees that each thread authenticates only once.

Regular Expression Extractor

This is necessary to obtain the CSRF (Cross-Site Request Forgery) token from the response of the initial HTTP GET request to /login, so that the token can be used in the authentication HTTP POST request to /login. Without the CSRF token, authentication would fail.

The following regular expression is applied to the HTTP GET /login request's response in order to extract the CSRF token value:

<input type="hidden" name="_csrf" value="(.+?)"/>

The extracted value is then passed as a parameter of the HTTP POST /login request along with the username and password.
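
The extraction performed by this element is equivalent to the following Java sketch (purely illustrative, not part of the test plan; the class and method names are made up for the example). Group 1 of the match corresponds to the (.+?) capture group in the expression above.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class CsrfTokenExtractor {
        // Same pattern as configured in the Regular Expression Extractor.
        private static final Pattern CSRF_PATTERN =
                Pattern.compile("<input type=\"hidden\" name=\"_csrf\" value=\"(.+?)\"/>");

        public static String extractToken(String loginPageHtml) {
            Matcher matcher = CSRF_PATTERN.matcher(loginPageHtml);
            if (matcher.find()) {
                // Group 1 is the captured token value.
                return matcher.group(1);
            }
            throw new IllegalStateException("CSRF token not found in login page");
        }
    }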

Constant Throughput Timer

This timer limits throughput to at most 120 requests per minute, which allows us to fulfil the performance testing requirement:

    The solution must be covered with performance tests simulating 20 concurrent users,
    each of whom performs a non-state-changing operation every 10 seconds.

20 user agents are configured in the Thread Group. 120 requests per minute divided by 20 users results in 6 requests per user per minute, so each user makes a request every 10 seconds on average.
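
Conceptually, the Thread Group combined with the Constant Throughput Timer behaves roughly like the sketch below (purely illustrative; JMeter's actual scheduling is more sophisticated, and performRequest() is a hypothetical stand-in for one recorded, non-state-changing request):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class LoadSketch {
        private static final int USERS = 20;              // threads in the Thread Group
        private static final long THINK_TIME_MS = 10_000;  // 120 req/min / 20 users = 1 request per 10 s

        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(USERS);
            for (int i = 0; i < USERS; i++) {
                pool.submit(() -> {
                    try {
                        // Runs until the process is stopped, like a manually stopped JMeter test.
                        while (true) {
                            performRequest();
                            TimeUnit.MILLISECONDS.sleep(THINK_TIME_MS);
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
        }

        private static void performRequest() {
            // Placeholder for an HTTP GET to one of the recorded mappings.
        }
    }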

Summary Report

After letting the test run for two hours, we gathered the following results:

[Screenshot: Summary Report]

Analysis

From the summary report we can see that the average request times for / and /users/1 exceed 0.5 seconds, at 514 and 554 milliseconds respectively. / is the mapping for the Feed view, and /users/1 maps to Tiit Oja's profile. The longer response times for these mappings can be explained by the need to load 50 posts for both views; other profiles did not contain that many posts.

Initial authentication requests were made only 20 times. Due to the small sample size, we cannot draw any conclusions from them.

The average request time under load is 394 milliseconds, which is acceptable considering that the application is running on a single-core, 1 GB RAM Amazon EC2 micro instance, hardware roughly comparable to a Raspberry Pi.

Response Time Graph

Data points are plotted at 300-second intervals.

[Screenshot: Response Time Graph]

Inspecting the graph, we can see that the response time fluctuates from time to time. This is most likely due to the modest hardware the server is running on, as described in the previous section.

Conclusion

In order to improve performance, we should lower the number of posts initially loaded in user profiles and the feed.

All in all, 20 concurrent users should not cause any noticeable performance problems for our server/webapp.

Optimizations

The following screenshot shows the stats of another two-hour test after changing the number of posts initially shown in the feed and profiles from 50 to 15.

[Screenshot: Retest stats]

After the optimization, all requests take roughly 300 milliseconds; on average, the response time improved by about 100 milliseconds.
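
The change itself amounts to lowering the page size used when the feed and profile views are first rendered. Assuming a Spring Data style repository (an assumption for illustration only; the actual code and class names in the application may differ), it would look roughly like this:

    import org.springframework.data.domain.PageRequest;
    import org.springframework.data.domain.Pageable;

    public class FeedPageSize {
        // Before the optimization the first page of the feed/profile loaded 50 posts.
        private static final int INITIAL_POST_COUNT = 15;

        public Pageable firstPage() {
            // Sorting and the actual repository call are omitted; this only
            // illustrates shrinking the initial page size from 50 to 15.
            return PageRequest.of(0, INITIAL_POST_COUNT);
        }
    }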
