Installing Sisense using Helm Charts
You can deploy Sisense in your Kubernetes cluster in the form of a set of container images in a package provided by Sisense as a Helm chart. This page describes how to install Sisense using Helm charts on both single and multi-node deployments.
Note:
The Helm Charts installation option can be used to install Sisense up to version L2021.1. For later Sisense versions, use Provisioner to install, upgrade, and/or maintain Sisense on an existing Kubernetes environment. See Installing Sisense using Provisioner and Helm.
Prerequisites
For Single-node deployments:
- Ability to add disks and mount them to the designated node
For Multi-node deployments:
- RWX and RWO persistent storage class available in the cluster for Sisense deployment
For both deployments:
- Sisense version L8.2.6.582 or later
- Helm 3.4.1 installed. If you are currently using Helm 2 charts, you can migrate them to Helm 3 charts. For more information, see Migrating Helm v2 to v3.
- The Helm charts included as part of the deployment package. Download the latest version, extract the archive, and retrieve the charts from: kubespray/roles/sisense/files (see the extraction sketch after this list)
- ALB Controller Helm chart path: kubespray/roles/sisense/setupalbcontroller/files/
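A minimal sketch of retrieving the charts; the archive name sisense-package.tar.gz is an assumption, so substitute the file you actually downloaded:
# Extract the Sisense deployment package (archive name is an assumption)
tar -xzf sisense-package.tar.gz
# Sisense umbrella chart:
ls kubespray/roles/sisense/files
# ALB Controller chart:
ls kubespray/roles/sisense/setupalbcontroller/files/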
Deploying Sisense
To deploy Sisense with Helm charts:
-
Verify that your kubectl is connected to the correct K8S cluster.
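For example, confirm the current context and that the nodes you expect are listed:
kubectl config current-context
kubectl get nodes -o wide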
-
For single-node deployments, add another disk to the designated node and map it to /opt/sisense. See Mounting on a Dedicated Disk for Sisense Single Nodes for more information.
Create the following directories with the commands below:
install -d -m 0755 -o 999 -g 999 /opt/sisense/mongodb
install -d -m 0755 -o ${your_linux_user} /opt/sisense/config/umbrella-chart
install -d -m 0755 -o ${your_linux_user} /opt/sisense/config/logging-monitoring
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage
install -d -m 0755 -o ${your_linux_user} /opt/sisense/zookeeper
install -d -m 0755 -o ${your_linux_user} /opt/sisense/dgraph-io
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/nlq
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/plugins
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/external-plugins
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/external-plugins/apiPlugins
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/external-plugins/apiPlugins/plugins
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/external-plugins/apiPlugins/swaggerDefs
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/backups
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/connectors
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/data
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/emails
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/exports
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/reports
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/farms
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/samples
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/translations
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/licensing
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/system_backups
install -d -m 0755 -o ${your_linux_user} /opt/sisense/storage/serverSidePlugins
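To confirm that the directories were created with the expected ownership and permissions, you can run, for example:
find /opt/sisense -type d -exec ls -ld {} +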
-
Label the nodes in your deployment.
For single-node deployments:
kubectl label node ${NODE_NAME} node=single --overwrite
For multi-node deployments, run the commands below, replacing the values ${NODE_NAME} and ${NAMESPACE}.
On application+query nodes:
kubectl label node ${NODE_NAME} node-${NAMESPACE}-Application=true --overwrite
kubectl label node ${NODE_NAME} node-${NAMESPACE}-Query=true --overwrite
On build nodes:
kubectl label node ${NODE_NAME} node-${NAMESPACE}-Build=true --overwrite
Note:
If Sisense is live (already serving users), Sisense recommends that you label all of your nodes as Application, Query, and Build, as in the sketch below.
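For example, a short loop that applies all three role labels to one node (same placeholders as above):
for ROLE in Application Query Build; do
  kubectl label node ${NODE_NAME} node-${NAMESPACE}-${ROLE}=true --overwrite
done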
-
Create a Sisense namespace in your cluster, with a namespace.yaml file that contains:
${NAMESPACE} - namespace name
Example namespace.yaml:
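A minimal, standard Kubernetes Namespace manifest is enough here:
apiVersion: v1
kind: Namespace
metadata:
  name: ${NAMESPACE}   # the namespace name placeholder defined above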
-
Apply the namespace file with the following command:
kubectl apply -f namespace.yaml
-
For multi-node deployments only, deploy a RWX PersistentVolumeClaim for Sisense.
(For FSx only) Deploy the FSx driver and storage class object included in the original installation package.
kubectl apply -f kubespray/roles/storage/files/fsx-latest-driver.yaml
kubectl apply -f kubespray/roles/storage/templates/fsx-sc.yaml.j2
-
Create a pvc.yaml file that contains the following:
- ${SISENSE_DISK_SIZE} - PersistentVolumeClaim disk size (the recommended default is 70Gi)
- ${STORAGE_CLASS_NAME} - A StorageClass that supports ReadWriteMany (aws-fsx, for example)
- ${NAMESPACE} - Namespace name
- You can download an example file here, or use the sketch below.
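A minimal pvc.yaml sketch; the claim name storage is taken from the claimRef in the persistent volume example below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage              # matches the claimRef in the PV example below
  namespace: ${NAMESPACE}
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ${STORAGE_CLASS_NAME}
  resources:
    requests:
      storage: ${SISENSE_DISK_SIZE}Gi   # numeric size (e.g. 70), as in the PV example below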
If you are using AWS FSx as your shared storage, you should also generate a persistent volume, pv.yaml, that contains the following:
- ${fsx_file_system_id} - AWS FSx file system ID
- ${fsx_region} - AWS FSx region
- ${fsx_mount_name} - AWS FSx mount name
- ${NAMESPACE} - Namespace name
For example:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ${NAMESPACE}-storage-pv
spec:
  capacity:
    storage: ${SISENSE_DISK_SIZE}Gi
  volumeMode: Filesystem
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: storage
    namespace: ${NAMESPACE}
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  mountOptions:
    - flock
  storageClassName: aws-fsx
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: ${fsx_file_system_id}
    volumeAttributes:
      dnsname: ${fsx_file_system_id}.fsx.${fsx_region}.amazonaws.com
      mountname: ${fsx_mount_name}
Create the resources:
kubectl apply -f pvc.yaml
kubectl apply -f pv.yaml
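Confirm that the claim has bound to the volume:
kubectl get pvc storage -n ${NAMESPACE}
kubectl get pv ${NAMESPACE}-storage-pv
# Both should report STATUS: Bound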
-
Create value files for your Helm charts according to your deployment type. (A sketch of filling in the placeholders appears after the parameter list below.)
Single-node:
You can download examples here:
version L2021.1.0
version L2021.1.1
version L2021.10
Multi-node:
You can download examples here:
version L2021.1.0
version L2021.1.1
version L2021.10
Description of all the parameters:
- ${IS_CLOUD_LOAD_BALANCER_ENABLED} - Set to "true" for a load balancer to be deployed as part of the deployment. When "true", the load balancer becomes available after the Helm installation completes; update the value of ${API_SERVER_ADDRESS} and re-run the Helm upgrade.
- ${K8S_NODE_1} - The DNS name or, if there is no DNS, the external IP of the first/only cluster node. This parameter is used to configure fluentd (logging) - the location of the combined logs file.
- ${APPLICATION_DNS_NAME} - Set this if a DNS name exists for the relevant nodes; otherwise, leave it blank.
- ${GATEWAY_PORT} - The ${API_SERVER_ADDRESS} port. Sisense recommends port 30845, but it can be changed to any port in the range 30000-32000.
- ${API_SERVER_ADDRESS} - This is composed of three parts: http_protocol://address:port
  * http_protocol - If ${SSL_ENABLED} is true, "https://"; otherwise, "http://"
  * address - This is one of the following:
    * If a DNS exists, set the master-node name.
    * If a load balancer exists, set its address.
    * Otherwise, set the external IP of a master node in the cluster.
  * port - This is one of the following:
    * If a DNS exists, leave it empty.
    * If a load balancer exists and ${SSL_ENABLED} is false, set it to 80; otherwise, leave it empty.
    * Otherwise, set it to ${GATEWAY_PORT}.
- ${NAMESPACE} - The name of your Sisense deployed namespace.
- ${IS_K8S_CLOUD} - If this is a cloud Kubernetes cluster, set this to "true", otherwise, set it to "false".
- ${IS_ALB} - If using AWS Load balancer controller, set this to "true", otherwise, set it to "false".
- ${KUBE_API_PORT} - If ${IS_K8S_CLOUD} is "true", set this to 443; otherwise, set it to 6443.
- ${DOCKER_REGISTRY} - For an online installation, set this to "quay.io/sisense"; otherwise, set this to the address of your private registry.
- ${EXPOSE_NODE_PORTS} - To expose third-party node ports, set this to "true"; otherwise, set it to "false".
- ${KUBE_NETWORK_PLUGIN} - Set your network plugin name (CNI).
- ${MONITORING_OWNER_ID} - Set your username in logz.io. The default should be devops@sisense.com. (Note: This will be deprecated soon.)
- ${ENABLE_INTERNAL_MONITORING} - Set this to true to deploy internal monitoring (Prometheus and Grafana); otherwise, set it to false. (Note: This will be deprecated soon.)
- ${ENABLE_EXTERNAL_MONITORING} - Set this to true if you allow uploading logs to logz.io to allow monitoring for Sisense. (Note: This will soon be changed to LOGZIO_MONITORING.)
- ${STORAGE_CLASS} - Your RWO storage class name.
- ${K8S_CLOUD_PROVIDER} - If you are deploying Sisense on the cloud, the value should be the name of your provider: AWS, GKE, or Azure.
- ${DEPLOYMENT_TIMEZONE} - Enter the system time zone (default: UTC). This applies to the time zone of relative date-time filters. Use tz database names (see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones), e.g., UTC, US/Central, Asia/Tokyo, Etc/GMT+6.
- Multi-Node only parameters:
- ${ZERO_DISK_SIZE} - The default should be "1GB", but it can be increased
- ${ALPHA_DISK_SIZE} - The default should be "1GB", but it can be increased
- ${MONGO_DB_DISK_SIZE} - The default should be "3GB", but it can be increased
- ${ZK_DISK_SIZE} - The default should be "2GB", but it can be increased.
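One way to fill in all of the ${...} placeholders is the standard envsubst tool, assuming you keep the downloaded example values file as a template (the file name values.yaml.template is hypothetical):
# Export each parameter for your environment (illustrative values only)
export NAMESPACE=sisense
export GATEWAY_PORT=30845
export API_SERVER_ADDRESS="http://203.0.113.10:30845"
# ...export the remaining parameters the same way, then render:
envsubst < values.yaml.template > values.yaml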
-
Deploy SSL if required. If you deploy SSL, you must provide your SSL key and certificate files. Enter the following variables:
- ${SSL_KEY_PATH} - The path to your SSL key file
- ${SSL_CERT_PATH} - The path to your SSL certificate file
- ${NAMESPACE} - Your Sisense installation namespace
- Delete any existing certificates that might already exist with this name:
kubectl delete secret -n ${NAMESPACE} sisense-tls --ignore-not-found
- Create a new certificate secret for each namespace:
kubectl create secret tls sisense-tls -n ${NAMESPACE} --key ${SSL_KEY_PATH} --cert ${SSL_CERT_PATH}
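To verify that the secret was created with the expected type:
kubectl get secret sisense-tls -n ${NAMESPACE} -o jsonpath='{.type}'
# Expected output: kubernetes.io/tls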
-
For multi-node deployments only, deploy the descheduler (evicts pods so that they can be rescheduled onto more suitable nodes) and Nginx ingress controller charts.
Use the following parameters when running the Helm commands below:
* ${DESCHEDULER_CHART_PATH} - Sisense Helm chart path
* ${NGINX_CHART_PATH} - Sisense Helm chart path
* ${UTILS_NAMESPACE} - Cluster utils namespace
helm upgrade descheduler --install --namespace ${UTILS_NAMESPACE} ${DESCHEDULER_CHART_PATH}
helm upgrade nginx-ingress --install --namespace ${UTILS_NAMESPACE} ${NGINX_CHART_PATH}
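Confirm that both releases deployed successfully:
helm list -n ${UTILS_NAMESPACE}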
-
Create a ServiceAccount for the management service by creating a management-admin.yaml file that contains:
* ${NAMESPACE} - The namespace name.
Example management-admin.yaml:
kind: ServiceAccount
apiVersion: v1
metadata:
  name: management
  namespace: ${NAMESPACE}
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-namespace: ${NAMESPACE}
    meta.helm.sh/release-name: ${NAMESPACE}
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: management-${NAMESPACE}
subjects:
  - kind: ServiceAccount
    name: management
    namespace: ${NAMESPACE}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: management
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: management
rules:
  - apiGroups: ["", "apps", "extensions"]
    resources:
      - configmaps
      - endpoints
      - persistentvolumeclaims
      - replicationcontrollers
      - secrets
      - serviceaccounts
      - services
      - pods
      - crontabs
      - deployments
      - statefulsets
      - daemonsets
      - replicasets
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["batch", "extensions"]
    resources: ["jobs", "cronjobs"]
    verbs: ["get", "list", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch", "patch"]
Then run:
kubectl apply -f management-admin.yaml
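To verify that the ClusterRole is in effect for the new ServiceAccount, you can use the standard kubectl auth can-i check:
kubectl auth can-i list pods -n ${NAMESPACE} --as=system:serviceaccount:${NAMESPACE}:management
# Expected output: yes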
-
Deploy your Sisense chart using the following parameters when running the Helm command below:
helm upgrade ${RELEASE_NAME} --install --force --namespace ${NAMESPACE} --values ${VALUES_YAML_PATH} ${SISENSE_CHART_PATH}
where:
- ${RELEASE_NAME} - Helm release name (unless explicitly defined otherwise, the name of the Helm release is sisense)
- ${NAMESPACE} - Namespace name
- ${VALUES_YAML_PATH} - The values.yaml path
- ${SISENSE_CHART_PATH} - Sisense helm chart path
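For example, with the defaults noted above (release name sisense) and the charts retrieved in the prerequisites; the exact chart directory name inside kubespray/roles/sisense/files is an assumption, so adjust it to what your archive contains:
helm upgrade sisense --install --force --namespace sisense --values ./values.yaml ./kubespray/roles/sisense/files/sisense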
-
Monitor the Sisense Helm installation by running:
kubectl get po -n ${NAMESPACE} -w
where:
- ${NAMESPACE} - Namespace name
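The installation is complete when all pods report Running or Completed. To block until the pods are ready, you can also use kubectl wait (note that pods created by one-off jobs never report Ready, so this check suits the long-running pods):
kubectl wait --for=condition=Ready pods --all -n ${NAMESPACE} --timeout=30m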