This repository was archived by the owner on Aug 26, 2021. It is now read-only.

Load Balancing

sufferingtrout edited this page Jun 19, 2014 · 6 revisions

Janus Load Balancing Strategies

In Janus, load balancing is handled by implementations of the interface com.kixeye.core.janus.loadbalancer.LoadBalancer, and several implementations are provided out of the box.

Random Load Balancer

This is the default load balancing strategy used by the Janus.Builder class when constructing a Janus instance. Create a Janus instance using this strategy:

Janus.builder("UserService")
    .build();

or

Janus.builder("UserService")
    .withRandomLoadBalancing()
    .build();

or

Janus.builder("UserService")
    .withLoadBalancer(new RandomLoadBalancer())
    .build();
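
Under the hood, a random strategy simply picks a uniformly random instance from the currently available list. A minimal, self-contained sketch of that selection (RandomChoiceSketch and pickRandom are illustrative names, not Janus classes):

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class RandomChoiceSketch {
    // Pick a uniformly random element, as a random load balancer would
    // among the currently available server instances.
    static <T> T pickRandom(List<T> instances) {
        return instances.get(ThreadLocalRandom.current().nextInt(instances.size()));
    }

    public static void main(String[] args) {
        List<String> instances = List.of("user-svc-1", "user-svc-2", "user-svc-3");
        System.out.println(instances.contains(pickRandom(instances))); // prints true
    }
}
```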

Least Used Server Instance

This Eureka-specific load balancing strategy selects the server instance that is currently the least used. Janus determines each server instance's current usage from a custom Eureka meta-data field called sessions, which each server instance must keep up to date with its current session count. The meaning of the sessions field is defined by the server instances themselves: for example, it could be the number of concurrent connections a server instance has, or the number of concurrent HTTP requests it is serving.
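
For illustration, a server instance might track its session count and expose it as the sessions meta-data entry roughly like this. SessionReporterSketch and its method names are hypothetical, and the actual Eureka publishing call is omitted (in eureka-client that is typically ApplicationInfoManager.registerAppMetadata, but verify against your Eureka version):

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

public class SessionReporterSketch {
    // Live session counter; its value is what gets published to Eureka
    // as the "sessions" meta-data field.
    private final AtomicInteger sessions = new AtomicInteger();

    void onSessionOpened() { sessions.incrementAndGet(); }
    void onSessionClosed() { sessions.decrementAndGet(); }

    // The meta-data map a server instance would push to Eureka
    // (the publishing call itself is omitted here).
    Map<String, String> eurekaMetadata() {
        return Map.of("sessions", Integer.toString(sessions.get()));
    }

    public static void main(String[] args) {
        SessionReporterSketch reporter = new SessionReporterSketch();
        reporter.onSessionOpened();
        reporter.onSessionOpened();
        reporter.onSessionClosed();
        System.out.println(reporter.eurekaMetadata()); // prints {sessions=1}
    }
}
```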

Janus.builder("UserService")
    .withSessionLoadBalancing()
    .build();

or

Janus.builder("UserService")
    .withLoadBalancer(new SessionLoadBalancer())
    .build();

As stated earlier, this load balancing strategy requires that the configured service instance discovery strategy be integrated with Eureka. Specifically, the ServerList implementation must return List<EurekaServerInstance> from the getListOfServers() method (see Server-Instance-Discovery for more information).
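
The selection rule itself amounts to choosing the instance that reports the smallest sessions value. A self-contained sketch of that comparison, using illustrative Instance and pickLeastUsed names (not part of the Janus API):

```java
import java.util.Comparator;
import java.util.List;

public class LeastUsedSketch {
    // Illustrative stand-in for a Eureka instance with its "sessions" meta-data value.
    record Instance(String id, int sessions) {}

    // Pick the instance reporting the fewest sessions, mirroring the strategy above.
    static Instance pickLeastUsed(List<Instance> instances) {
        return instances.stream()
                .min(Comparator.comparingInt(Instance::sessions))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<Instance> instances = List.of(
                new Instance("user-svc-1", 42),
                new Instance("user-svc-2", 7),
                new Instance("user-svc-3", 19));
        System.out.println(pickLeastUsed(instances).id()); // prints user-svc-2
    }
}
```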

Closest Server Instance

This Eureka-specific load balancing strategy selects the server instance nearest to the client running Janus.

Each server instance is given a score based on its location relative to the load balancer instance (the client instance running Janus) and on its current load. The server instances' scores are then compared to each other, and the server with the best score is selected.

There are three location buckets into which a server instance can be placed: Availability Zone, Region, and Area. Preference is given to server instances nearest to the client running Janus, starting with Availability Zone, then Region, and finally Area. Each server instance reports its Availability Zone to Eureka using the meta-data field availability-zone.

A server instance may be "downgraded" to a lower location bucket if its load factor is considered too high relative to that of the other servers. The load factor of a server instance is calculated from the amount of traffic sent to it BY THIS CLIENT ONLY. Ultimately, the nearest server instance with the lowest load will be selected. Server instances which are unavailable (short circuited, offline, etc.) are not considered candidates for selection.

Janus.builder("UserService")
    .withZoneAwareLoadBalancing("us-west-2a") // us-west-2a is the availability zone that the client running Janus is in
    .build();

or

Janus.builder("UserService")
    .withLoadBalancer(new ZoneAwareLoadBalancer("UserService","us-west-2a", metricRegistry))
    .build();
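
Setting aside the load-based downgrading, the bucket preference can be pictured as a two-level sort: first by location bucket (Availability Zone before Region before Area), then by the client-observed load within a bucket. A simplified, self-contained sketch (Bucket, Candidate, and pickClosest are illustrative names, not Janus classes):

```java
import java.util.Comparator;
import java.util.List;

public class ZonePreferenceSketch {
    // Location buckets in order of preference, as described above.
    enum Bucket { AVAILABILITY_ZONE, REGION, AREA }

    // Illustrative stand-in for a server instance with its bucket and
    // the load this client has observed for it.
    record Candidate(String id, Bucket bucket, double load) {}

    // Prefer the nearest bucket; break ties with the lowest load.
    static Candidate pickClosest(List<Candidate> candidates) {
        return candidates.stream()
                .min(Comparator.comparing(Candidate::bucket)
                        .thenComparingDouble(Candidate::load))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<Candidate> candidates = List.of(
                new Candidate("a", Bucket.REGION, 0.1),
                new Candidate("b", Bucket.AVAILABILITY_ZONE, 0.8),
                new Candidate("c", Bucket.AVAILABILITY_ZONE, 0.3));
        System.out.println(pickClosest(candidates).id()); // prints c
    }
}
```

Note this sketch omits the downgrading step: in Janus, a heavily loaded instance can be demoted to a lower bucket before the comparison is made.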

Use Your Own Implementation

If you'd like to implement your own load-balancing strategy, implement the com.kixeye.core.janus.loadbalancer.LoadBalancer interface, and build your Janus instance with it:

public class MyCustomLoadBalancer implements LoadBalancer {
    @Override
    public ServerStats choose(Collection<ServerStats> serverStats) {
        // your selection logic here; must return one of the supplied instances
        return serverStats.iterator().next();
    }
}

Janus.builder("UserService")
    .withLoadBalancer(new MyCustomLoadBalancer())
    .build();
