Setting Up a Load Balancer

You can configure a load balancer on your Sisense Linux cloud instances to automatically distribute traffic across multiple nodes. The load balancer provides an externally accessible IP address that routes traffic to the correct port on your cluster nodes.

Sisense supports load balancing for Google GKE, Microsoft AKS, and Amazon EKS.

You can enable a load balancer when configuring your deployment, as described below, or you can update your configuration file and then run the configuration script again.

To implement load balancing:

  1. In the cloud_config.yaml file, set the following parameters (see the example excerpt after these steps):

    • For an internet-facing load balancer, set:
      • cloud_load_balancer to true
      • cloud_load_balancer_internal to false
    • For an internal load balancer, set:
      • kubernetes_cloud_provider to aws
      • cloud_load_balancer to true
      • cloud_load_balancer_internal to true
  2. After you have set the remaining parameters, run the configuration script with the following command:

    ./sisense.sh cloud_config.yaml
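
For reference, the relevant section of a cloud_config.yaml file for an internal load balancer on Amazon EKS might look like the excerpt below. The parameter names come from the steps above; the rest of your configuration file is omitted here, and the exact layout may differ between Sisense versions.

    kubernetes_cloud_provider: aws        # set for an internal load balancer (see steps above)
    cloud_load_balancer: true             # create a cloud load balancer
    cloud_load_balancer_internal: true    # set to false for an internet-facing load balancer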

To switch an internet-facing load balancer to an internal load balancer, or vice versa:

  1. Delete the old load balancer by either:
    • deleting the Kubernetes service that creates the load balancer (that is, api-gateway-external), as shown in the example below
    • deleting the load balancer resource directly from the cloud provider dashboard
  2. In the cloud_config.yaml file, change cloud_load_balancer_internal to the new value (true for an internal load balancer, false for an internet-facing one), and then run the configuration script again as described above.
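
For example, assuming Sisense is deployed in a namespace named sisense (replace with your own namespace), the api-gateway-external service can be deleted with kubectl; the cloud provider then removes the underlying load balancer resource.

    kubectl get services -n sisense                          # confirm the api-gateway-external service exists
    kubectl delete service api-gateway-external -n sisense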

To handle Sisense traffic in an existing Kubernetes cluster on Amazon EKS, Sisense supports using an AWS Load Balancer Controller to manage the cluster's Elastic Load Balancers. For details, see Using an AWS Load Balancer Controller with Sisense on Amazon EKS.