Troubleshooting Rancher Kubernetes Engine (RKE) Deployment and Installation on Linux

Introduction

The purpose of this guide is to give the reader an in-depth understanding of the process of deploying, installing, and rolling out a Sisense Linux-based platform. The document provides best-practice details about the kind of infrastructure to deploy, and walks you through the installation of the platform, including troubleshooting and overcoming installation problems.

Intended audience

Anyone responsible for deploying and installing the Sisense platform, and for supporting and troubleshooting installation issues.

Sisense Version Consideration

This document was written based on release L2022.03, using the Ansible-based installation approach for Linux, with RKE as the method of deploying Kubernetes.

Installation Logic and Log

The following section outlines in detail the entire installation logic for deploying Sisense on top of the supported Linux infrastructure, and includes:

  1. Single-Node
  2. Multi-Node (Cluster)
  3. Offline Installation
  4. Cloud (Amazon AWS, Google, Azure)
  5. Openshift

Note: The shell script is the code that is executed when an installation is initiated. It includes several preparation and other steps, and is responsible for calling and executing the given Ansible playbooks. The shell script code is divided into several functions, each performing tasks that are based on the input configuration (e.g., single_config.yaml). The Ansible playbooks are invoked by the shell script functions and execute various tasks.

How to Use This Document

This section of the document can be used to trace what the installation is performing, given the deployment approach (e.g., multi-node) and configuration, and identify the step (task) at which the installation failed/stopped. A failure can occur either in the shell script or in Ansible tasks.

You can identify which task was executed or failed via the log that is generated as part of the installation. Any failure will list the task in which the failure occurred, and the shell script function name will be displayed. You can use that to search through the document below and find which shell script section it failed in, by looking at the “Function” column. Then, trace through the steps (either shell script or playbooks) in the given function.

For example, the following highlights that a failure occurred in the “install_sisense” shell script function. Go to the applicable function and step through it to see which log entry executed prior to the failure; in this case it was monitoring_stack : Logging | Install chart logging monitoring.

TASK [ <monitoring_stack : Logging | Install chart logging monitoring>]
*****************************************************************************************************************************************
changed: [node1]
Wednesday 18 August 2021  10:59:47 +0300 (0:00:01.044)       0:04:30.703

******PLAY [kube-master[0]]
****************************************************************************************************************************************************************************************
Wait for databases migrations get finished .
Wait for databases migrations get finished ..
Wait for databases migrations get finished ...
Wait for databases migrations get finished ....
Wait for databases migrations get finished .....
Wait for databases migrations get finished ......
Wait for databases migrations get finished .......
Wait for databases migrations get finished ........
Error occurred during
<install_sisense> section
Exiting Installation ...
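As a quick way to locate the failing section and the last task that ran before it, you can search the installation log directly. The following is a minimal sketch; the log file name assumes the default log_path set in installer/ansible.cfg and may differ in your environment.

# Show where the installer reported a failing section
grep -n "Error occurred during" ./sisense-ansible.log
# Show the last few Ansible tasks that started before the failure
grep -n "TASK \[" ./sisense-ansible.log | tail -n 5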

Sometimes an explicit message is written by the shell script exception handling, or as part of the steps performed inside the tasks of the Ansible playbooks. Note that success/notification messages and exception messages are indicated in the corresponding columns (either at the shell script level or as part of Ansible tasks).

For example, the following occurred as part of a misconfiguration of the given YAML file that was caught by the checks of the given shell script function. In this example, a user that does not exist (sisense2) was configured as the linux_user. Therefore, the installation fails in the validate_user_config function section of the shell script. Searching for this function below illustrates the checks performed to validate the user.

Preparing System ...
Detecting Host OS ...
OS: Ubuntu, Version: 18.04
Validation OS and version
Fail to find linux user <sisense2> ... Please set an existing user with sudo permission in the configuration file and try again...
Error occurred during validate_user_config section
Exiting Installation ...

After correcting this to a valid existing user, the log shows that the validation succeeded.

Preparing System ...
Detecting Host OS ...
OS: Ubuntu, Version: 18.04
Validation OS and version
Linux user: sisense
Validating Sudo permissions for user sisense …
User sisense has sufficient sudo permissions

Shell Script and Ansible Playbooks

The following is the walkthrough of the shell script/Ansible playbooks that are used to deploy Sisense, where, in this case, RKE is used (as opposed to Kubespray) to deploy the Kubernetes infrastructure.

The document outlines the installation logic, including Linux/other commands, the configuration usage, and the decision making used in each step of the installation. This includes the messages that are written to the installation log/console for the steps being executed, and any exception/error messages that are raised for a problem during the installation.

Note that not all messages are outlined in the document, as some are not code-based but are automatically generated by Ansible.

The walkthrough below is organized into the following columns: Script Logic | Logic Description | Success/Notification Message | Exception Message | Shell Script Function.
The following outlines the sisense.sh shell script, which is responsible for performing information gathering, validations, and installing required components on the installation server (the server from which Sisense is deployed by running the installation package).
Load and parse the configuration files All the configuration files are loaded and parsed:
  1. The YAML config file passed to the script (e.g., single_config.yaml), or config.yaml if none is passed.
  2. “./installer/default.yaml”
  3. “./installer/extra_values/installer/installer-values.yaml”
Default values are loaded from the config file located in “./installer/default.yaml”. The installer's additional “extra values” config file is also loaded, from “./installer/extra_values/installer/installer-values.yaml”. The host file is located at “installer/inventory/sisense/sisense.hosts”, and will include the various nodes and their roles involved in the cluster being installed. The Ansible configuration is located in “./installer/ansible.cfg”. This configuration contains the settings for how to run the Ansible playbooks. Note that this configuration sets the log path and name for the install log (e.g., log_path = ../sisense-ansible.logs). The default.yaml file is located under the given installation directory, in the “kubespray” directory, and contains:
  1. The Sisense version for the installation package
  2. The docker registry and location
    1. docker_registry: quay.io/sisense
    2. sisense_helm_repo: data.sisense.com/linux/helm/charts/incubator
  3. logz_key used in order to enable monitoring
The configuration file used can be either of the following, given the deployment approach being used:
  • single_config.yaml - used for single node installation
  • openshift_config.yaml - used for OpenShift installation
  • cluster_config.yaml - used for multi-node installation
  • cloud_config.yaml - used to install on cloud services (Amazon, Google, Azure)
  • config.yaml - contains all the configuration options for all deployment approaches. This is used for backwards-compatibility purposes.
The chosen config file is passed into the script as part of the install command. For example, in the command below, single_config.yaml is passed as a parameter to the script and will be used by the installation.
./sisense.sh <configuration file>.yaml
From this point forward, values set in parameters in this file will be used by the installation.
Preparing System ... The YAML file will be validated for correctness. Any problem with the YAML file structure will fail the installation. For example:
yq: Error running jq: ScannerError: while scanning a simple key
  in "<stdin>", line <line number where the problem is>, column <location in the line>.
You can use tools or websites that can validate the yaml file (e.g., http://www.yamllint.com)
Make sure values set in the configuration are valid (example, a valid ip for internal/external ip address).
load_parse_yaml
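Before running the installer, you can pre-validate the configuration file with the same yq tooling the installer itself uses. A minimal sketch (the file name single_config.yaml is an example, and this assumes a yq binary is available on the path):

yq . single_config.yaml > /dev/null && echo "YAML OK" || echo "YAML is invalid"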
Set debug mode based on configuration If the installer value “debug_mode” is set to true, debug mode for the installation will be enabled, which means that extra information is written to the log. This includes writing the commands that are performed during the execution of the installation steps.
Check installer User value This step will use the configured “linux_user” and validate that the user is set up on the server by running the command “getent passwd <the supplied user name in the config yaml file>”. Linux user: <name of configured user> Fail to find linux user <the supplied user name in the config yaml file> ... Please set an existing user with sudo permission in the configuration file and try again… This could also be the result of an invalid running mode for the installation, including:
  • You are running the installation with sudo (and you should not), for example:
  • sudo ./sisense.sh <yaml file>
  • Please also make sure you are using the installation user previously used to deploy, in the case you are running an upgrade.
  • Another reason for a potential failure could be if the installation is running on a modified OS (such as security hardening) that is causing compatibility issues.
validate_user_config
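A minimal pre-check sketch of the same validation, where "sisense" stands in for the value configured in linux_user:

getent passwd sisense > /dev/null && echo "user exists" || echo "user not found"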
Check the user has proper permission This step will check that the user has the sudo rights required to run the installation by running the following command:
timeout -k 2 2 sudo /bin/chmod --help
To further check permissions you can run the following:
sudo -l -U "<configured user>"
The resulting permission is to run “ALL” commands, as follows:
User sisense may run the following commands on <node used for installation>:    (ALL : ALL) NOPASSWD: ALL
Validating Sudo permissions for user <configured user>...
User <configured user> has sufficient sudo permissions
Validating Sudo permissions for user <configured user>...
User <configured user> has insufficient sudo permissions
The sudo configuration should include NOPASSWD for the installation; it can be removed at the end of the installation.
validate_sudo_permissions
Determine if an installation mode should be set to Upgrade. Check if the installation config is set to run an “update” mode, which means the installation will assume Sisense is deployed already and is being upgraded.
If config is set to “update = true”, then an upgrade is assumed.
If it is set to false, the installation will double-check whether a Sisense deployment is already in place by doing the following:
Run the command “kubectl api-resources”. If this returns a Kubernetes resource list, it will run “kubectl get nodes” and look for a node returned with status “Ready”. If one is found, update mode will be enabled regardless of the configuration.
Nodes are not ready. In order to run the installation your nodes must be in ready state
set_update_parameter
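A minimal sketch of the check described above, useful for confirming manually whether update mode will be assumed (requires kubectl access on the installation node):

if kubectl api-resources > /dev/null 2>&1; then
  # List any node already reporting Ready; if one exists, update mode is enabled.
  kubectl get nodes --no-headers | awk '$2 == "Ready"'
fi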
Determine if the installation is configured to be offline (air-gapped). The installation will run in offline mode given that:
  1. Not a Cloud installation
  2. Not an Openshift installation
  3. The installation is configured for offline installation
If the installation is a cluster (multi-node) installation, cluster mode will be set based on whether a Kubernetes cluster is already in place.
Detect OS Gather the OS and version that the server is running. The step executes various Linux commands, reading the relevant file for the given OS and checking the returned parameters to determine the OS name and release.
OS | File to Read / Command | Returned Parameters | Example
  • Linux (freedesktop.org and systemd) | cat /etc/os-release | NAME, VERSION | Name="Ubuntu", Version ID="18.04.4 LTS (Bionic Beaver)"
  • Linux (linuxbase.org) | lsb_release -si, lsb_release -sr | | Name=Ubuntu, Version ID=18.04
  • Debian/Ubuntu without the lsb_release command | cat /etc/lsb-release | DISTRIB_ID, DISTRIB_RELEASE | Name=Ubuntu, Version ID=18.04
  • Older Debian/Ubuntu OS | cat /etc/debian_version | | Name="Debian GNU/Linux", Version ID="9"
  • SUSE | cat /etc/SuSE-release | NAME, VERSION | Name=openSUSE 13.1 (x86_64), Version ID=13.1
  • Red Hat | cat /etc/redhat-release | NAME, VERSION | Name=Red Hat Enterprise Linux Server, Version ID="7.6 (Maipo)"
  • Other | uname -s, uname -r | | Name=Linux, Version ID=4.15.0-76-generic
Detecting Host OS … OS: $OS, Version: $VER Potential issues for this failure could be:
  • You are running the installation with sudo (and you should not), for example:
  • sudo ./sisense.sh <yaml file>
  • Please also make sure you are using the installation user previously used to deploy, in the case you are running an upgrade.
  • Another reason for a potential failure could be if the installation is running on a modified OS (such as security hardening) that is causing compatibility issues.
detect_os
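On a systemd-based distribution you can reproduce the detection manually. A minimal sketch using /etc/os-release (the first row of the table above):

# Source the os-release file and print the same fields the installer reports.
. /etc/os-release
echo "OS: $NAME, Version: $VERSION_ID"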
Validate the OS/Version The script validates that the OS detected in the previous step is a supported version. If the OS version is valid, it installs “gawk”. If gawk is already installed, the step is skipped.
OS | Valid Versions | If Valid | Exception
  • Red Hat | “7.” or “8.” | sudo yum install -y gawk | Unsupported version has been detected, Sisense Linux deployment is certified to run on Red Hat 7.x/8.x only
  • Ubuntu | “18.04" or "20.04" | sudo apt install -y gawk | Unsupported version has been detected, Sisense Linux deployment is certified to run on Ubuntu 18.04 / 20.04 only
  • CentOS | “8” | sudo yum install -y gawk | Unsupported version has been detected, Sisense Linux deployment is certified to run on CentOS 8.x only
  • Amazon | “2” | sudo yum install -y gawk | Unsupported version has been detected, Sisense Linux deployment is certified to run on Amazon Linux 2 only
Validation OS and version. If no supported OS is detected: OS is not supported, please visit https://documentation.sisense.com/latest/linux/step1.htm for more information
validate_os_version
Checks configuration of K8s nodes Check that a value is assigned to the “k8s_nodes” configuration. check_k8s_array
Check which packages are installed on the OS If the OS is Ubuntu or Debian, run: lsof /var/lib/dpkg/lock. If the OS is Red Hat or CentOS, remove any stale yum PID lock file (use the yum command to see installed packages): sudo rm -rf /var/run/yum.pid. This determines whether yum or apt will be used to perform repository installations. check_dpkg
Check Password Check if there is a password configured for SSH operation. For a non-offline installation, the password is read from “password” in the yaml file; if configured, the message “Using password for Ansible Installation …” will appear to indicate that a password is used for the installation. If the password does not exist, the script will check whether the “ssh_key” configuration contains an SSH key. If configured: “Using private key path $config_ssh_key for Ansible Installation…”. If not, a password prompt is displayed for the installer to enter: "Enter the password for user $config_linux_user and press [ENTER]: " Using password for Ansible Installation … OR Enter the password for user $config_linux_user and press [ENTER]: OR Using private key path $config_ssh_key for Ansible Installation... SSH key path, <configured path> is not a valid path get_sshpass
Determine which Ansible version is deployed NOTE: THIS STEP REMOVES THE OLDER VERSION OF ANSIBLE USED BY KUBESPRAY OR AN OLDER INSTALLATION, WHICH MEANS THAT THE OLDER PACKAGES WILL NOT RUN UNLESS THE OLDER VERSION OF ANSIBLE IS REINSTALLED. Remove the old ansible version if it exists: run “pip show ansible” to determine the version of Ansible that is deployed. If a version older than 4.10 is deployed (e.g., 2.9.16), it will be uninstalled: pip uninstall -y ansible. Remove the old ansible-core version if it exists: run “sudo pip show ansible-core” to determine the version that is deployed. If a version older than 2.11 is deployed, it will be uninstalled: sudo pip uninstall -y ansible-core. Removing old ansible version <version name> Removing old ansible-core version <version name>
Deploy some of the packages required for the installation, including Python. The following step confirms whether Python is already deployed on the server. If not, it installs it, downloading the packages from the repositories that Sisense is configured to use. This step is not run for an offline installation. Install all of the packages outlined below for each given OS.
OS | Action Performed
Ubuntu | Run the following:
  1. sudo apt-get update
  2. sudo apt-get install -y python3 netcat sshpass python3-apt python3-pip dbus
  3. sudo python3 -m pip install --upgrade --force-reinstall pip==21.1.3
  4. sudo ln -sf /usr/local/bin/pip /usr/bin/pip
  5. sudo apt-get install -y jq
CentOS | Run the following:
  1. sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
  2. sudo yum install -y epel-release --enablerepo="epel" python3-pip nc sshpass jq libselinux-python3
  3. sudo python3 -m pip install --upgrade --force-reinstall pip==21.1.3
  4. sudo ln -sf /usr/local/bin/pip /usr/bin/pip
  5. sudo python3 -m pip install configparser zipp
Amazon | Run the following:
  1. sudo amazon-linux-extras install epel -y
  2. sudo yum reinstall -y python3 python3-pip || sudo yum install -y python3 python3-pip
  3. sudo yum install -y nc sshpass jq libselinux-python3
  4. sudo python3 -m pip install --upgrade --force-reinstall pip==21.1.3
  5. sudo python3 -m pip install selinux
  6. sudo ln -sf /usr/local/bin/pip /usr/bin/pip
Red Hat | Run the following:
  1. sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
  2. sudo yum install -y --enablerepo="epel" python3-pip nc sshpass jq libselinux-python3
  3. sudo python3 -m pip install --upgrade --force-reinstall pip==21.1.3
  4. sudo ln -sf /usr/local/bin/pip /usr/bin/pip
  5. sudo python3 -m pip install configparser zipp
If none of the above was executed, the following installation will be done:
  1. sudo python3 -m pip install virtualenv
  2. sudo python3 -m virtualenv sisense-installer
  3. source sisense-installer/bin/activate
  4. sudo python3 -m pip install -r installer/requirements.txt --ignore-installed
Python is installed based on the version requirements set under “installer/requirements.txt”.
Verifying Python packages exist ... generic_pre_install
Disable OS from being auto-updated This step disables automatic kernel updates on the OS. This is required to protect the Sisense platform from being affected by such tasks being performed automatically and uncontrolled. The commands to hold kernel updates will be executed with either the linux user password supplied in the config, or will request the password to be entered if it is not supplied. To point to the specific kernel to hold updates for, “uname -r” will be executed to get the kernel version.
OS | Command
  • Debian | sudo -S apt-mark hold <kernel version>
  • Ubuntu | sudo -S apt-mark hold <kernel version>
  • Red Hat | sudo tee -a /etc/yum.conf
  • CentOS | sudo tee -a /etc/yum.conf
  • Fedora | sudo -S apt-mark hold <kernel version>
  • All other | sudo -S apt-mark hold <kernel version>
For example, the following output on Ubuntu shows updates held on all of the specified kernel packages:
  • linux-headers-4.15.0-76-generic was already set on hold.
  • linux-image-4.15.0-76-generic was already set on hold.
  • linux-modules-4.15.0-76-generic was already set on hold.
  • linux-modules-extra-4.15.0-76-generic was already set on hold.
  • linux-buildinfo-4.15.0-76-generic set on hold.
  • linux-cloud-tools-4.15.0-76-generic set on hold.
  • linux-image-unsigned-4.15.0-76-generic set on hold.
  • linux-tools-4.15.0-76-generic set on hold.
  • linux-modules-nvidia-390-4.15.0-76-generic set on hold.
This step will also not be performed if GlusterFS is set as the storage in the configuration file, as the installation is assumed not to run because GlusterFS is no longer supported for new installations. This step is also not performed for an offline installation.
disable_kernel_upgrade
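A minimal sketch of the Debian/Ubuntu branch described above. The package selection glob is illustrative and may differ from the installer's exact logic:

KERNEL_VERSION=$(uname -r)
# Hold every installed package that carries the running kernel version.
dpkg -l | awk -v v="$KERNEL_VERSION" '$2 ~ v {print $2}' | xargs -r sudo apt-mark hold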
Prompt for user/password to connect to registry for offline installation. For an offline installation with a configured Docker registry, the user is prompted to enter a username and password to log into the Docker registry. Enter the username with permissions to pull images from <configured docker registry> and press [ENTER]: Enter your password and press [ENTER]: get_registry_credentials
Validate K8s Node Configuration Validate the k8s_nodes configuration for each configured node:
  1. Node: has a value assigned to it
  2. A valid IP is set for internal_ip
  3. A valid IP is set for external_ip
k8s_nodes:
  - { node: node1, internal_ip: 0.0.0.0, external_ip: 0.0.0.0 }
For more than one node configured:
  1. Check that the disk volume device path is set.
  2. Check that the “roles” for the node are set, to have application, query, and/or build installed.
k8s_nodes:
  - { node: node1, internal_ip: 0.0.0.0, external_ip: 0.0.0.0, disk_volume_device: /dev/sdb, roles: "application, query" }
  - { node: node2, internal_ip: 0.0.0.0, external_ip: 0.0.0.0, disk_volume_device: /dev/sdb, roles: "application, query" }
  - { node: node3, internal_ip: 0.0.0.0, external_ip: 0.0.0.0, disk_volume_device: /dev/sdb, roles: "build" }
The array of configured nodes must be valid, with the proper format outlined in the configuration. Wrong syntax was found at <configuration file> section .k8s_nodes. Please verify the following:
  • node: "<value>" must not contain '_'
  • internal_ip: "<value>" must follow IPv4 structure x.x.x.x or be different from 127.0.0.0 and 0.0.0.0
  • external_ip: "<value>" must follow IPv4 structure x.x.x.x or be different from 127.0.0.0 and 0.0.0.0
  • disk_volume_device: "<value>" must follow /dev/xxxx syntax
  • roles: "<value>" possible values are "application", "query", "build", "query, build" or "application, query, build"
(Each of the above messages is emitted by the script when the corresponding validation check fails for that field.)
validate_k8s_configuration_syntax
Determine whether to upgrade Kubernetes. The Kubernetes upgrade will be skipped if any of the following is true:
  • The file used is “cloud_config.yaml”
  • The file used is “openshift_config.yaml” ;
  • The configuration “offline_installer” is set to true
  • The configuration “uninstall_cluster” is set to true
Given that Kubernetes is installed, check which version it is by running the command “kubectl version”. For example:
clientVersion:
  buildDate: "2020-05-20T13:16:24Z"
  compiler: gc
  gitCommit: d32e40e20d167e103faf894261614c5b45c44198
  gitTreeState: clean
  gitVersion: v1.17.6
  goVersion: go1.13.9
  major: "1"
  minor: "17"
  platform: linux/amd64
serverVersion:
  buildDate: "2020-05-20T13:08:34Z"
  compiler: gc
  gitCommit: d32e40e20d167e103faf894261614c5b45c44198
  gitTreeState: clean
  gitVersion: v1.17.6
  goVersion: go1.13.9
  major: "1"
  minor: "17"
  platform: linux/amd64
If the existing Kubernetes minor version is lower than 17, the installation will not run and/or K8s will not be upgraded.
Error due to the installed K8s not being supported by the given release:
Error: Current Kubernetes minor version is: <existing minor version of Kubernetes>
This version is earlier than the current version: <supported minor version of Kubernetes>
Updating Sisense from <existing minor version> to <supported minor version> is not supported"
To upgrade Kubernetes to the supported version, go to the following link: https://documentation.sisense.com/latest/linux/upgrade_k8s.htm#gsc.tab=0
validate_k8s_version_on_update
Determine what other configurations are set that will be used throughout the install Check if the following configurations have either a true or false value:
  • internal_monitoring
  • external_monitoring
  • uninstall_cluster
  • uninstall_sisense
  • remove_user_data
Toggle value can be true or false validate_common_config_toggles_syntax
Validates configured docker registry For an offline installation, the Docker registry will be used for the installation and is validated in this step. If the registry is configured with “private_docker_registry: true”, the docker registry value is validated. ** Docker Registry enabled but not defined.. Exiting. ** validate_general_configuration_syntax
Check SSH Connectivity to the configured Node servers Given that this is not an offline installation, or a cloud/OpenShift install, the script checks SSH connectivity to each of the configured IPs/nodes, using the command: nc -zv -w 10 <configured_ip> 22. If a password is used (sshpass): sshpass -p <configured_password> ssh -o "StrictHostKeyChecking no" -q <configured linux_user>. Or, if an SSH key is used: ssh -o "StrictHostKeyChecking no" -q -i "configured_key" <config_linux_user>. Validating ssh connectivity … Validating ssh connection to <node target ip>.. Connection OK Validating ssh authentication … ...Authentication OK ...Fail to connect <node target ip>:22 ERROR - Fail to connect, Please make sure port 22 is open or ip addresses are correct ...Fail to authenticate <configured linux user> ERROR - Fail to authenticate user <configured linux user> via SSH, Please make sure user, password or ssh_key are correct validate_network_connectivity
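A minimal sketch of the same connectivity test, run manually against each configured node (the IP addresses below are placeholders):

for ip in 10.0.0.1 10.0.0.2 10.0.0.3; do
  # Verify that SSH (port 22) is reachable within 10 seconds.
  nc -zv -w 10 "$ip" 22
done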
Check connectivity to all required docker sites Check connectivity to the required Docker/package sites, in the case this is not an offline installation or an uninstallation of Sisense/the cluster. The list of sites is grouped into:
  • common_sites=("quay.io" "docker.io" "pypi.org" "github.com" "auth.cloud.sisense.com" "bitbucket.org" "download.docker.com" "github.com" "gcr.io" "kubernetes.io" "l.sisense.com" "ppa.launchpad.net" "quay.io" "registry-1.docker.io" "storage.googleapis.com")
  • ubuntu_sites=("ubuntu.com")
  • yum_sites=("dl.fedoraproject.org")
  • yum_http_sites=("mirror.centos.org")
If the OS is Ubuntu, the required sites are “common_sites” and “ubuntu_sites”. If another OS is used, “yum_sites” and “common_sites”. HTTPS port 443 connectivity is tested for each of the “ubuntu_sites”, “common_sites”, and “yum_sites” as follows: nc -zv -w 10 <site> 443. HTTP port 80 is tested for the “yum_http_sites”.
Validating connection to <each http site required>…[OK] Validating connection to <each https site required>…[OK] …[Failed] ERROR - outband validation failed for one or more sites ... Please make sure it not being blocked by firewall or proxy settings validate_outbound_connectivity
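A minimal sketch for testing outbound HTTPS connectivity manually against a subset of the common_sites list above:

for site in quay.io docker.io pypi.org github.com; do
  nc -zv -w 10 "$site" 443 && echo "[OK] $site" || echo "[Failed] $site"
done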
Validate connection to Sisense docker. This step is run only if the configured registry is the Sisense Docker registry. If it is, the connection to “docker.artifactory.sisense.com” on port 80 is tested: nc -zv -w 10 <docker url> 80 Validating connection to <configured docker registry> ... [OK] [Failed] validate_private_registry_connectivity
Check SSL cert/key exists Check that the certificate and path specified in the configuration exist in the given directories. Make sure the paths and cert/key files are in place. Error occurred during validation_ssl_certs section validate_ssl_certs
Check that installation is running from within node1. If this is not an offline installation, or cloud/OpenShift, then check that the installation is running on the first node configured in the yaml configuration (based on comparing the server IP to the configured internal_ip). Sisense Installer running IP Address for k8s node has been verified. Sisense Installer cannot verify Host IP. ** Please make sure installing from the first k8s node ** Sisense Installer must run from the first k8s node of the cluster ** Sisense Installer local IP address for k8s node does not match ** Please check k8s_nodes IP address in config file ** Installer will exit... ** validate_first_node_installer
Display yaml configuration content if requested to confirm. The content of the yaml file will be displayed if the parameter passed to the script requested that the configuration file be shown before the installation starts.
Shell Script Function: generate_inventory

Log Entry: Generating Ansible inventory...

The function calls the Ansible “create_inventory” playbook.
If the installation is an offline installation, the playbook is run with a GUI of “1000” and is run locally.
Playbook: “create_inventory”

This creates a host file, based on the type of installation being performed, which will be used throughout the installation to determine which installation parts to run on which node(s).

[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
The playbook walkthroughs below are organized into the following columns: Task/Role | Logic | Success Message | Fail Message | Root Cause/Workaround/Solution.
inventory : Create ansible inventory for Compute installation This step will run for a single or cluster node installation that does not have rwx_cs_name/rwo_sc_name defined. It creates the host file that will be used throughout the installation to determine which server (and its IP) will have what installed and configured on it, and which servers Kubernetes will register and manage. The file will be created as “../installer/inventory/sisense/sisense.hosts” and will be based on the “../installer/roles/inventory/templates/sisense.hosts.j2" template. This file will be used by the system as a layout of the server(s) in the installation. It identifies which node is the Kubernetes master (main node) and lists out the rest of the nodes. Below is an example of a simple single-node inventory; the host types defined in the inventory include:
  • [kube-master] designates the host that runs Kubernetes orchestration.
  • [kube-node] lists the various configured nodes on which Sisense will be deployed.
  • [k8s-cluster:children] lists all the hosts that are managed by Kubernetes.
  • ansible_ssh_host indicates the server that is used to run the installation.
  • [etcd] indicates which node the Kubernetes etcd is running on.
Other types of nodes: [heketi_node] determines which server runs the Heketi software to manage storage.
Single-Node Inventory Example: The example below has one node (node1), which is used to run the installation and Kubernetes, with all of Sisense running on the same child node.
[all]
node1 ansible_ssh_host=0.0.0.0
[kube-master]
node1
[etcd]
node1
[kube-node]

node1
[k8s-cluster:children]

kube-node
kube-master
Multi-Node Inventory Example: The example below illustrates that the installation is running from node1 and node1 is running Kubernetes. Node 2 is running Application and Query. Node 3 is running Build. Node 1 is the master node.
[k8s-cluster]
node1 ansible_ssh_host=0.0.0.0

[etcd]

node1

[kube-node]

node1
node2
node3

[application]
node2

[query]
node2

[build]
node3

[k8s-cluster:children]

kube-node
kube-master
inventory : Create ansible inventory for Cloud managed or OpenShift For a Cloud, OpenShift, or offline installation, the inventory is based on the “../installer/roles/inventory/templates/cloud.hosts.j2" template.
inventory : Create ansible inventory for CI system If a CI system is used for deployment, the inventory file will be based on the “../installer/roles/inventory/templates/jenkins.hosts.j2" template.
Shell Script Function: sisense_validations

Runs the playbook to validate the node(s) involved in the infrastructure setup.
Playbook: sisense-validation
Perform various validations, configuration, and installations on the infrastructure. The validation will be executed only for a single-node or multi-node cluster setup.

Note that validation values used throughout this section are configured by default in the following file:
..\Kubespray\sisense_validation\defaults\main.yml.

Note that some validations will run on each node, and the log will indicate the NODE name that succeeded or failed.
sisense_validation : Loading installer values from extra_values folder If the installation is not offline, cloud, or OpenShift, and NOT an update (update=true): Load installer values from the extra_values folder if it exists (../extra_values/installer/installer-values.yaml). Refer to the section of the document that details the extra values configuration. fatal: [node1]: FAILED! => {"ansible_facts": {}, "ansible_included_var_files": [], "changed": false, "message": "Could not find or access '/installer/playbooks/../extra_values/installer/installer-values.yaml' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"} File does not exist or is invalid. This file is required.
sisense_validation : Validate storage type is supported Validate that the storage defined in the installation configuration file is supported, which includes: rook-ceph, efs, nfs, nfs-server, fsx, glusterfs, azurefile, cephfs, trident, and portworx. Unsupported Storage Type: <value set> Check in the installation config that you correctly set up the storage.
Refer to the task column for the specific task name. On each node (meaning not the master), the following will be performed, depending on the installed OS, for a new installation that is not offline:
  • Install EPEL8 repo
  • Install Python 3 (including simplejson)
  • Install Chrony
  • Install sshpass
OS/Version | Task | Command
  • CentOS 8 | Install EPEL8 repo - CentOS8, Install Python3 - CentOS8 | yum -y -q install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
  • Red Hat 8 | Install EPEL8 repo - RedHat8, Install Python3 - RedHat8 | yum -y -q install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm; yum -y install chrony sshpass python3 python36-simplejson.x86_64
  • Red Hat 7 | Install EPEL7 repo - RedHat, Install Python3 - RedHat | yum -y -q install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm; yum -y install chrony sshpass python3 python3-simplejson
  • Debian | Install Python3 (Debian) | apt update && apt --yes install chrony sshpass python3-simplejson
Check that the repositories being installed are accessible for an online install: https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
sisense_validation : Test user for sufficient sudo permissions Test/validate user for sufficient sudo permissions (only required for Single and Multi-node cluster install) by running the following command which confirms sudo access:
timeout -k 2 2 sudo /bin/chmod --help
sisense_validation : Validate user has sufficient sudo permissions Write to log if user permission is sufficient and valid. user <configured user> has sufficient sudo permissions user <configured user> has insufficient sudo permissions Check that the user has the right permission as required by the installation.
sisense_validation : get kernel version Execute the following to get the kernel version number: uname -r
sisense_validation : check if kernel exist in yum.conf (RedHat) Check if the kernel version is in the conf file if Red Hat is the OS: grep <kernel version> /etc/yum.conf
sisense_validation : hold kernel package version (RedHat) Executed only for new single or multi-node cluster installations. Given the OS, disable automatic OS kernel updates, as they could create a problem for platform operations. It is assumed kernel updates are performed manually, in a controlled manner, outside of business operating hours. For Red Hat: exclude=kernel-<kernel version> | sudo tee -a /etc/yum.conf
sisense_validation : hold kernel package version (Debian) Executed only for new single or multi-node cluster installations. Given the OS, disable automatic OS kernel updates, as they could create a problem for platform operations. It is assumed kernel updates are performed manually, in a controlled manner, outside of business operating hours. For Debian: find out the kernel version with uname -r, then run apt-mark hold <kernel version returned from the uname -r command>. For example: sudo apt-mark hold 4.15.0-76-generic
sisense_validation : Mask and disable firewall service (RedHat) - Accepted failure Disable the OS firewall only in the case of a new install for a single or multi-node cluster installation. For CentOS/Red Hat/Oracle Linux/Amazon: systemctl disable firewalld
sisense_validation : Mask and disable firewall service (Debian) - Accepted failure Disable the OS firewall only in the case of a new install for a single or multi-node cluster installation. For Ubuntu/Debian: systemctl disable ufw. As long as the user is root and has permission to disable this service, there should be no error.
sisense_validation : Check memory ansible Check the memory allocated for the node running the installation. This is performed only for a new single or multi-node cluster installation. It uses Ansible's built-in ability to get the available memory, similar to running: cat /proc/meminfo | grep MemTotal. Total memory by ansible is <memory>MB
sisense_validation : Check cpu ansible Check the number of CPUs allocated for the node running the installation. This is performed only for a new single or multi-node cluster installation. It uses Ansible's built-in ability to get the available CPU count, similar to running: lscpu. Amount of vcpu cores is: <# of CPUs found>
sisense_validation : Check cpu and ram for single For a single-node install. Check CPU and RAM against the minimum requirements set in the default configuration “./installer/roles/sisense_validation/defaults/main.yml", specifically for a single-node installation. Check that the node has enough CPU and memory based on the minimum requirements set by default: min_cpu_single: 8, min_ram_single: 16384. All assertions passed "The system requires at least <min required> cpu (on system: <actual>), And <min required> GB RAM (on system: <actual> GB) Increase the CPU/memory allocated to the minimum required.
sisense_validation : Check cpu and ram for cluster For a multi-node cluster install. Check CPU and RAM against the minimum requirements set in the default configuration “./installer/roles/sisense_validation/defaults/main.yml", specifically for a cluster installation. Check that the node has enough CPU and memory based on the minimum requirements set by default: min_cpu_cluster: 8, min_ram_cluster: 32768. All assertions passed "The system requires at least <min required> cpu (on system: <actual>), And <min required> GB RAM (on system: <actual> GB) Increase the CPU/memory allocated to the minimum required.
sisense_validation : Check Kubernetes ports are available Check that the following ports are available and not occupied by another application. The check is performed only for a single or multi-node new installation:
  • 80 #HTTP
  • 8181 #NGINX Ingress Controller
  • 443 #Rancher agent
  • 9099 #Canal/Flannel/Calico-node livenessProbe/readinessProbe
  • 2380 #etcd peer communication
  • 2376 #Docker daemon TLS port used by Docker Machine
  • 2379 #etcd client requests
  • 179 #calico BGP port
  • 6783 #Weave Port
  • 6784 #Weave UDP Port
  • 6443 #Kubernetes apiserver
  • 10250 #kubelet
  • 10255 #kubelet read-only port
  • 10248 #kubelet healthz
  • 10249 #kube-proxy
  • 10251 #kube-scheduler
  • 10252 #kube-controller
  • 10254 #Ingress controller livenessProbe/readinessProbe
  • 10256 #kube-proxy
  • 10257 #kube-controller
  • 10259 #kube-scheduler
ok: [<node>] => (item=<port #>) Port <port> is already in used on <node>, Sisense may already be installed and running on <node> Should this step fail, check the port availability using the Linux command: sudo lsof -i -P -n. Note it is assumed no other application is running on the server that might affect the Sisense platform, in this case by occupying a required port. Make sure to shut down all conflicting applications permanently.
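A minimal sketch of the suggested manual check, using port 6443 as an example of one of the required ports:

# List any process already listening on the Kubernetes apiserver port.
sudo lsof -i -P -n | grep LISTEN | grep ":6443"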
sisense_validation : Check Glusterfs ports are available For a GlusterFS installation, check that the following ports are available: 24007 #TCP for the Gluster Daemon, and 49152 through 49163 #glusterfsd. Glusterfs port <port> is already in used on <node>, Glusterfs may already be installed and running on <node>
sisense_validation : Check Rook-ceph ports are available For rook-ceph, check the following ports: 9080 #CSI_RBD_LIVENESS_METRICS_PORT, 9081 #CSI_CEPHFS_LIVENESS_METRICS_PORT. Rook-ceph port <port> is already in used on <node>, Rook-ceph may already be installed and running on <node> Same as above.
sisense_validation : Check api-gateway port is available Given the following:
  1. Non-SSL, and the configured gateway_port is not 80
  2. with no cloud load balancer used
  3. with no alb controller used
Check the port that is configured in gateway_port.
api-gateway port <port> is already in used on <node>, Sisense may already be installed and running on <node> Same as above.
sisense_validation : Check Node Ports are available For a non-multi-node installation where the expose_nodeports configuration from the installer extra values is set to true, check the following ports:
  • 30017 #Mongodb nodePort
  • 31234 #Adminmongo nodePort
  • 30086 #Build app nodePort
  • 30096 #Build debug nodePort
  • 30555 #Connectors debug nodePort
  • 30082 #Management app nodePort
  • 30092 #Management debug nodePort
  • 30084 #Query app nodePort
  • 30094 #Query debug nodePort
  • 30870 #Translation app nodePort
  • 30022 #Translation debug nodePort
Node Port <port> is already in used on <node>, Sisense may already be installed and running on <node>
sisense_validation : debug Gather and print out the defined mount(s) information.
sisense_validation : Gathering mount details / sisense_validation : Setting mount details (note this will be written as 7 steps in the install log) Retain the values for: root_mount_point (“/”), var_sisense_mount_point (“/var”), docker_sisense_mount_point (“/var/lib/docker”)
sisense_validation : Verify /var disk size ( At least 150GB ) Check that /var meets the required disk size and available space. This uses Ansible's built-in ability to get disk mount details, similar to running: df -m /var. The value will be compared against the given configuration. Default values from “..\kubespray\roles\sisense_validation\defaults\main.yml” that set the minimum requirement: root_disk_limit_mb: 153600, root_disk_avail_limit_mb: 30720, delta_root_disk_limit_mb: 15360. The required mount size is derived from: root_disk_limit_mb - delta_root_disk_limit_mb. /var mounted on <required>, The partition size is: <mount size> GB, When Sisense requires <mount size required> GB /var mounted on <required>, The partition size is: <mount size> GB, When Sisense requires <mount size required> GB Increase the size of the disk in order to meet the minimum requirements.
Check directories, mounts, and storage size - Note this is going to be performed only for a new installation of a single node deployment.
sisense_validation : Verify /var available size Check the amount of disk space available on “/var”. The required size is derived from: root_disk_avail_limit_mb: 30720. /var mounted on <required>, The available size is: <mount size> GB, When Sisense requires <Required size> GB /var mounted on <required>, The available size is: <mount size> GB, When Sisense requires <Required size> GB Make sure a mount is in place.
sisense_validation : Gathering mount details Confirm that a dedicated mount is in place: opt_mount_point “/opt”, opt_sisense_mount_point “/opt/sisense”. Sisense has a dedicated mount point on /opt or /opt/sisense Sisense has no dedicated mount point on /opt or /opt/sisense Make sure a mount is in place.
sisense_validation : Verify Sisense has a dedicated mount point Check to see if a mount is defined already. Dedicated Disk size is: <available> MB, and it is met with the requirements (>=50GB). Sisense has no dedicated mount point on /opt or /opt/sisense Dedicated Disk size is: <available> MB, and Sisense requires at least (>=50GB)
sisense_validation : Verify disk size of dedicated mount point ( At least 50GB ) Determine if the disk size meets the requirement for the “/opt” mount. Default values from “..\kubespray\roles\sisense_validation\defaults\main.yml”: second_disk_limit_mb: 51200. Dedicated Disk size is: <size> MB, and it is met with the requirements (>=50GB). Dedicated Disk size is: <size> MB, and Sisense requires at least <amount set in second_disk_limit_mb> MB
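A minimal sketch for checking the same mount sizes manually, in megabytes, matching the df-based checks above:

df -m /var /opt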
OPTIONAL STEP

Perform disk performance test: Check read and write performance to the root directory and then to opt.

This step will be performed only in a case where the configuration has “check_disk_utilization” = true and the installation is new. The check will be performed only for a single and cluster node installation. The configuration is located in “./installer/extra_values/installer/installer-values.yaml”.

The check is executed only on single or multi-node cluster deployment.
sisense_validation : Create {{root_disk_path}} directory Create a directory that will be used to read/write to for the performance test under root.
sisense_validation : {{root_disk_path}} - WRITE Command to assess performance: dd if=/dev/zero of={{root_disk_path}}/test.img bs=512 count=2000 oflag=dsync. Results need to be lower than the minimum required in seconds. Default values from “..\kubespray\roles\sisense_validation\defaults\main.yml”, stated in seconds: root_disk_performance_write_time: 15. The value designates how long it took to write the file. If it takes longer, the disk performance check will fail and result in an aborted installation. For example: dd if=/dev/zero of=/var/lib/kubelet/sisense_disk_test/test.img bs=512 count=2000 oflag=dsync. The output for this is: 2000+0 records in 2000+0 records out 1024000 bytes (1.0 MB, 1000 KiB) copied, 21.1628 s, 48.4 kB/s. The value of how long it took to copy is compared to the configuration. root disk WRITE performance validation success root disk WRITE performance validation failed. does not meet requirements {{ root_disk_performance_write_time }}s If the disk performance requirement is not met, a switch to lower-latency storage is needed.
sisense_validation : Validate Disk Performance {{root_disk_path}} - READ Command to assess performance: dd if={{root_disk_path}}/test.img of=/dev/null bs=512. Results need to be lower than the minimum required in seconds. Default values from “..\kubespray\roles\sisense_validation\defaults\main.yml”, stated in seconds: root_disk_performance_read_time: 10. For example: dd if=/var/lib/kubelet/sisense_disk_test/test.img of=/dev/null bs=512. root disk READ performance validation success root disk READ performance validation failed. does not meet requirements {{ root_disk_performance_read_time|int }}s If the disk performance requirement is not met, a switch to lower-latency storage is needed.
sisense_validation : Delete performance file root Delete the test file that was created for the performance test.
sisense_validation : Create {{second_disk_path}} directory Create a directory that will be used to read/write to for the performance test under /opt.
sisense_validation : Validate Disk Performance {{second_disk_path}} - WRITE Command to assess performance: dd if=/dev/zero of={{second_disk_path}}test.img bs=512 count=2000. Results need to be lower than the minimum required in seconds. Default values from “..\kubespray\roles\sisense_validation\defaults\main.yml”: second_disk_performance_write_time: 15. /opt disk WRITE performance validation success /opt disk WRITE performance validation failed. does not meet requirements {{ second_disk_performance_write_time|int }}s If the disk performance requirement is not met, a switch to lower-latency storage is needed.
sisense_validation : Validate Disk Performance {{second_disk_path}} - READ Command to assess performance: dd if={{second_disk_path}}test.img of=/dev/null bs=512. Results need to be lower than the minimum required in seconds. Default values from “..\kubespray\roles\sisense_validation\defaults\main.yml”: second_disk_performance_read_time: 10. {{second_disk_path}} disk READ performance validation success {{second_disk_path}} disk READ performance validation failed. does not meet requirements {{ second_disk_performance_read_time }}s If the disk performance requirement is not met, a switch to lower-latency storage is needed.
sisense_validation : Print Disk Utilisation Print out the results: First Disk Read/Write: {{ root_disk_performance_output_read.stdout_lines|first }}/{{ root_disk_performance_output_write.stdout_lines|first }}" Second Disk Read/Write: {{ opt_disk_performance_output_read.stdout_lines|first }}/{{ opt_disk_performance_output_write.stdout_lines|first }}"
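A minimal sketch of the same dd-based performance test, using the documented default test path (which may differ in your configuration):

TEST_DIR=/var/lib/kubelet/sisense_disk_test
sudo mkdir -p "$TEST_DIR"
# Write test: should complete in under root_disk_performance_write_time seconds.
sudo dd if=/dev/zero of="$TEST_DIR/test.img" bs=512 count=2000 oflag=dsync
# Read test: should complete in under root_disk_performance_read_time seconds.
sudo dd if="$TEST_DIR/test.img" of=/dev/null bs=512
sudo rm -f "$TEST_DIR/test.img"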
Only for the heketi node of a new multi-node cluster installation: Perform disk size validation ===> Storage is ‘glusterfs’ or ‘rook-ceph’

Note that if no heketi node is configured, the following message will be displayed in the log:
[WARNING]: Could not match supplied host pattern, ignoring: heketi-node
skipping: no hosts matched
sisense_validation : validate disk is exist Validate if a disk exists.
sisense_validation : validate unpartitioned disk Validate that disk is not partitioned The disk is partitioned, has <partition> partitions, please make sure to have unpartitioned disk Remove partition
sisense_validation : Checking For active mounts Check the mount is active, and flag if it is not. The partition <device> is mounted to <mount> Activate the mount.
sisense_validation : sum up total size configured The formula to determine the total amount required is: total_configured_size = (mongodb_disk_size x 3) + (zookeeper_disk_size x 3) + sisense_disk_size + second_disk_metadata_gb. The defaults configured in the installation's cluster_config.yaml: sisense_disk_size: 70, mongodb_disk_size: 20, zookeeper_disk_size: 2. This is defined in “..\kubespray\roles\sisense_validation\defaults\main.yml”: second_disk_metadata_gb: 3. A message summary will be displayed as follows: In <node installation is performed on>, The disk is <device volume> and his size is <storage size> GB The MongoDB replica size is: <mongodb_disk_size> GB The Zookeeper replica size is: <zookeeper_disk_size> GB The Sisense Persistence volume size is: <sisense_disk_size> GB The Metadata size is: <second_disk_metadata_gb> GB The total configured size is <total_configured_size> GB
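With the default values above, the required total works out as follows:

total_configured_size = (20 GB x 3) + (2 GB x 3) + 70 GB + 3 GB = 139 GB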
sisense_validation : validate disk size really meets the size specified in the configuration Validate that the actual storage size meets the calculated total size required, and if not fail installation. The read disk size is <storage size> GB and total configured size is < total_configured_size> GB Increase storage size
Check if storage packages are deployed on the Kubernetes node
Gather package facts Use Ansible's built-in capability to get which packages are deployed.
Validate Glusterfs packages in RedHat/CentOS/Amazon Check that “glusterfs-fuse” is deployed for storage “glusterfs”, given a Red Hat, CentOS, or Amazon OS installation. glusterfs-fuse package is not installed, please install it and rerun installation
Validate Glusterfs packages in Debian/Ubuntu Check that “glusterfs-client” is deployed for storage “glusterfs”, given an Ubuntu or Debian installation. glusterfs-client package is not installed, please install it and rerun installation
Validate Rook-ceph packages Check that “lvm2” is deployed for storage “rook-ceph”. lvm2 package is not installed, please install it and rerun installation
Validate NFS/EFS packages in RedHat/CentOS Check that “nfs-utils” is deployed for storage “efs” or “nfs”, given a CentOS or Red Hat installation. nfs-utils package is not installed, please install it and rerun installation
Validate NFS/EFS packages in Debian/Ubuntu Check that “nfs-common” is deployed for storage “efs” or “nfs”, given a Debian or Ubuntu installation. nfs-common package is not installed, please install it and rerun installation
Shell Script Function: uninstall_cluster_wrapper

This will be performed if uninstallation is configured for either a cluster or Sisense completely.

Successful Log Entry: Uninstall has finished successfully
Playbook: uninstall-sisense.yml
Shell Script Function: recover_kuberentes

This will be performed if the installation was configured to recover the Kubernetes deployment that might have been broken and needs to be recovered.

Successful Log Entry: Uninstall has finished successfully
Shell Script Function: upgrade_rke

This step of the installation will run if “config_update_k8s_version” is set to true, which means the installation will look to upgrade the currently deployed Kubernetes version to the specific version supported by the release being deployed.

NOTE: This assumes that the Kubernetes deployment was already performed on the infrastructure using the RKE installation.


Successful Msg: The RKE cluster has been upgraded successfully
Parse out the stored Kubernetes manifest information Parse out the information that was stored in “../installer/inventory/sisense/group_vars/all/all.yml”, which details the Kubernetes infrastructure manifest. The installation is assumed to be running from the same bastion (or node) that was used to deploy Sisense and Kubernetes previously. A check is performed to verify that the RKE path is configured as stated in “../installer/roles/rke/defaults/main.yml”, in the “config_rke_bin_path” setting. RKE CLI is missing, exiting… In order to Upgrade RKE Cluster, you need to execute the installer from the last known bastion... upgrade_rke
Run the command to determine the deployed RKE version: rke -v. Check it against the version currently configured for the deployment, located at “../installer/roles/rke/defaults/main.yml”. Kubernetes will upgraded to: RKE: <RKE install version> Kubernetes: <Kubernetes version>
Check that all the nodes are in Ready state: kubectl get nodes Checking all nodes are healthy...
"<config_rke_bin_path>" etcd snapshot-save --config "<config_rke_config_folder>"/"<config_rke_config_file>" \ --name pre-k8s-upgrade-$(date '+%Y%m%d%H%M%S'). All values are configured in “../installer/roles/rke/defaults/main.yml”. Taking an etcd snapshot...
Jump to RKE Common Section >>
Confirm upgrade ran successfully Check that RKE has been configured and is up: <config_rke_bin_path> up --config <config_rke_config_folder>/<config_rke_config_file>. If the upgrade fails, the etcd snapshot taken before will be restored: <config_rke_bin_path> etcd snapshot-restore --config <config_rke_config_folder>/<config_rke_config_file> \ --name <config_rke_config_folder>/<etcd_snapshot_filename>, and the installation will exit. The RKE cluster has been upgraded successfully or The RKE already up to date. RKE Upgrade has failed, you can view the logs at ./sisense-ansible.log Restoring an etcd from snapshot
Jump to section >> “Install Provisioner”
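If the upgrade fails partway, the same etcd snapshot and restore operations can be run by hand from the bastion for troubleshooting. A minimal sketch, using illustrative paths for the RKE binary and cluster config (substitute the values of config_rke_bin_path, config_rke_config_folder, and config_rke_config_file from “../installer/roles/rke/defaults/main.yml”):

# Take a fresh snapshot before retrying the upgrade
/usr/local/bin/rke etcd snapshot-save --config ~/rke/cluster.yml --name pre-k8s-upgrade-manual
# Restore the pre-upgrade snapshot if the cluster is left broken
/usr/local/bin/rke etcd snapshot-restore --config ~/rke/cluster.yml --name pre-k8s-upgrade-manual
# Bring the cluster back up against the same configuration
/usr/local/bin/rke up --config ~/rke/cluster.yml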
Playbook: ; ;sisense-post-cluster.yml
Configure Kubernetes
Get users homedir Get the home directory for the user with the following command: echo ~
Registering users home directory Retain the value of the directory for later on.
Create .kube folder in home directory Create the .kube folder in the home directory: <user home directory>/.kube Mode=0755; Owner/Group = the user used for the installation.
Copy kubernetes admin file to kube folder install -m 0600 -o <install user> -g <install user> /etc/kubernetes/admin.conf <user home directory>/.kube/config For example: install -m 0600 -o sisense -g sisense /etc/kubernetes/admin.conf /home/sisense/.kube/config This configuration file contains certificates used for Kubernetes.
Add labels to the nodes (only if the skip_labeling configuration is not enabled)
Add single label For a single-node deployment, and given that the option to disable labeling is not on, add the label to the node: kubectl label node <The only one Node> node=single --overwrite=true For example: kubectl label node node1 node=single --overwrite=true
Add application node label For a cluster deployment, add the label to each node that is configured to run the application components: kubectl label node <node> node-<namespace>-Application=true --overwrite=true
Add build node label For a cluster deployment, add the label to each node that is configured to run the build components: kubectl label node <node> node-<namespace>-Build=true --overwrite=true
Add query node label For a cluster deployment, add the label to each node that is configured to run the query components: kubectl label node <node> node-<namespace>-Query=true --overwrite=true
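After the labeling tasks complete, it is worth confirming the labels landed on the intended nodes before workloads are scheduled. A minimal check, with sisense standing in for your configured namespace:

# Show every node with all of its labels
kubectl get nodes --show-labels
# Or list only the nodes carrying a specific role label, for example the build nodes
kubectl get nodes -l node-sisense-Build=true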
Remove master NoSchedule taint (Ignoring not found...) In case labeling is off: For a cluster deployment, taint all the nodes assigned to the deployed Kubernetes cluster, which means all pods not associated with the node roles will be repelled. This ensures that only pods set to run on a given node are allowed to; for example, only build pods will run on the build node. This command runs for each node individually and applies only to nodes assigned as “kube-node” (for example, as specified in the inventory file "../installer/inventory/sisense/sisense.hosts"): kubectl taint nodes <node> node-role.kubernetes.io/master:NoSchedule
Remove master NoSchedule taint (Ignoring not found...) In case labeling is on: For a cluster deployment, taint all the nodes assigned to the deployed Kubernetes cluster, which means all pods not associated with the node roles will be repelled. This ensures that only pods set to run on a given node are allowed to; for example, only build pods will run on the build node. This command taints all nodes at once: kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule
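If pods are later stuck in Pending, inspecting the taints applied by this step is a quick sanity check. A minimal sketch (assuming kubectl access from the bastion):

# List every node together with its taints
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
# Remove a NoSchedule taint by hand if it was applied to the wrong node (note the trailing "-")
kubectl taint nodes <node> node-role.kubernetes.io/master:NoSchedule-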
Install storage utility for a cluster installation with storage = ‘efs’, ‘nfs’, or ‘nfs-server’ defined.
Install nfs mount utils (RedHat) Install NFS utils for RedHat OS: yum install nfs-utils An issue can occur if the package is not available to be installed; make sure it is.
EFS/NFS | Install nfs mount utils (Debian) Install NFS utils for Debian-type OS: apt install nfs-common An issue can occur if the package is not available to be installed; make sure it is.
Install storage utility for cluster deployment with storage = ‘glusterfs’
Add glusterfs-4 repository (Ubuntu 18.04, Debian) Register the proper repo. If ansible_distribution is Ubuntu/Debian and the major version is 18, install from repo ppa:gluster/glusterfs-4.0 on each node.
Add bionic repo key (needed for gluster4.0 support) - Ubuntu 20 Register the proper repo key. If ansible_distribution is Ubuntu and the major version is 20, install the key from keyserver.ubuntu.com, key id: F7C73FCC930AC9F83B387A5613E01B7B3FE869A9
Add glusterfs-4 repository (Ubuntu 20.04) Register the proper repo. If ansible_distribution is Ubuntu and the major version is 20, install from repo: deb http://ppa.launchpad.net/gluster/glusterfs-4.0/ubuntu bionic main
Update apt cache Update the install cache by running apt update on Ubuntu/Debian. An issue can occur if the packages are not available to be installed; make sure they are.
Add glusterfs-4 repository (CentOS, RedHat) If CentOS or RedHat, for all nodes (associated as kube-node), get all RPMs available via the following repo site: http://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0 and perform a yum install of all RPMs on all nodes (associated as kube-node). For example: yum install -y http://glusterfs-4.0.0-0.1.rc0.el7.x86_64.rpm An issue can occur if the package is not available to be installed; make sure it is.
Install glusterfs-4 mount utils (Amazon, CentOS) If the OS is Amazon or CentOS, install the GlusterFS utility RPMs on all nodes (associated as kube-node) from all of the following repos:
yum install -y https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/glusterfs-4.0.2-1.el7.x86_64.rpm \
  https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/glusterfs-client-xlators-4.0.2-1.el7.x86_64.rpm \
  https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/glusterfs-extra-xlators-4.0.2-1.el7.x86_64.rpm \
  https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/glusterfs-fuse-4.0.2-1.el7.x86_64.rpm \
  https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/glusterfs-libs-4.0.2-1.el7.x86_64.rpm \
  https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/python2-gluster-4.0.2-1.el7.x86_64.rpm
An issue can occur if the packages are not available to be installed; make sure they are.
Install RedHat Dependencies Check whether the following packages are installed, if the OS is RedHat: yum install -y psmisc and yum install -y attr An issue can occur if the packages are not available to be installed; make sure they are.
Install openssl10 (Redhat 8) For RedHat version 8, install the following: dnf install -y compat-openssl10 An issue can occur if the package is not available to be installed; make sure it is.
Install glusterfs-4 mount utils (RedHat) For RedHat, install the following: glusterfs-fuse-4.0.2-1.el7 An issue can occur if the package is not available to be installed; make sure it is.
Install glusterfs-4 mount utils (Ubuntu, Debian) For Ubuntu/Debian, install: glusterfs-client=4.0.2-ubuntu2~bionic1 An issue can occur if the package is not available to be installed; make sure it is.
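When one of these package tasks fails, checking what is actually installed on the affected node narrows down the cause. A minimal check, run directly on the node that failed:

# Debian/Ubuntu
dpkg -s glusterfs-client nfs-common
# RedHat/CentOS/Amazon
rpm -q glusterfs-fuse nfs-utils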
Shell Script Function:
install_heketi

Msg: Installing GlusterFS && Heketi ...
Playbook:
installer/contrib/network-storage/heketi/heketi.yml

This will run on the heketi node and install the utilities required for managing heketi storage.

Note that at this point this playbook is not detailed here.
Shell Script Function: kubernetes_cloud

If cloud install then: “Installing Sisense on Kubernetes Cloud provider ..” will appear.
If single node: “Installing Sisense on a single node cluster …” will appear
If multi-node (cluster): “Installing Sisense on a multi node cluster …” will appear.
Playbook: installer/playbooks/kubernetes-cloud.yml
Load values from extra values folder Load extra install configuration values (refer to the section regarding configuration): ../extra_values/installer/installer-values.yaml
Get users homedir Determine user home directory.
Cloud | Set Tiller namespace For cloud install that is not OpenShift:TILLER_NAMESPACE=kube-system cacheable=true
OpenShift | Set Tiller namespace For OpenShift:TILLER_NAMESPACE={{tiller_namespace | default('default')}} cacheable=true
Environment | Remove Tiller namespace dest=/etc/environment regexp='^TILLER_NAMESPACE*' state=absent
Cloud | Set Tiller namespace dest=/etc/environment regexp='^TILLER_NAMESPACE*' state=present line="TILLER_NAMESPACE=kube-system" create=yes
OpenShift | Set Tiller namespace dest=/etc/environment regexp='^TILLER_NAMESPACE*' state=present line="TILLER_NAMESPACE={{TILLER_NAMESPACE}}" create=yes
Environment | Adding the environment in the Profile files dest={{user_home_dir}}/.bashrc line='source /etc/environment' state=present create=yes
Environment | Source the bash_profile file source {{user_home_dir}}/.bashrc
Make sure RWX and RWO StorageClass names are defined For a non-OpenShift install, in the case of storage_type=”portworx”: Check that “rwx_sc_name” and “rwo_sc_name” are defined. Both RWX and RWO StorageClass names must be defined to use a manually deployed StorageClass
Make sure the correct storage type is defined If rwx_sc_name and rwo_sc_name are defined, the storage must be ‘portworx’. Incorrect storage type: {{ storage_type }} Must be equal to 'portworx' when storage class name is defined
Check if RWX StorageClass exists Check that the RWX StorageClass configured in the installation exists: kubectl get storageclasses <configured rwx_sc_name>
Check if RWO StorageClass exists Check that the RWO StorageClass configured in the installation exists: kubectl get storageclasses <configured rwo_sc_name>
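If either StorageClass check fails, listing the classes visible to the cluster shows whether the configured names match what is actually deployed. A minimal sketch, using px-rwx-sc purely as an example class name:

# List all StorageClasses and their provisioners
kubectl get storageclasses
# Inspect the class referenced by rwx_sc_name or rwo_sc_name
kubectl describe storageclass px-rwx-sc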
folders : Ensure Sisense directories exist For non-cluster/cloud installs, check that the following directories exist with the specified path/owner/group/mode:
{ path: "/dgraph-io", owner: "1000", group: "1000", mode: "0755" }
{ path: "/storage", owner: "1000", group: "1000", mode: "0755" }
{ path: "/mongodb", owner: "999", group: "999", mode: "0755" }
{ path: "/zookeeper", owner: "1001", group: "1001", mode: "0755" }
ok: [<given node>] => (item={'path': 'dgraph-io', 'owner': '1000', 'group': '1000', 'mode': '0755'})
ok: [<given node>] => (item={'path': 'storage', 'owner': '1000', 'group': '1000', 'mode': '0755'})
ok: [<given node>] => (item={'path': 'mongodb', 'owner': '999', 'group': '999', 'mode': '0755'})
ok: [<given node>] => (item={'path': 'zookeeper', 'owner': '1001', 'group': '1001', 'mode': '0755'})
Fail: [<given node>] => (item={'path': 'dgraph-io', 'owner': '1000', 'group': '1000', 'mode': '0755'})
Fail: [<given node>] => (item={'path': 'storage', 'owner': '1000', 'group': '1000', 'mode': '0755'})
Fail: [<given node>] => (item={'path': 'mongodb', 'owner': '999', 'group': '999', 'mode': '0755'})
Fail: [<given node>] => (item={'path': 'zookeeper', 'owner': '1001', 'group': '1001', 'mode': '0755'})
Make sure the directories are there; create them manually if needed.
folders : Clean dgraph folder Clean the “/dgraph” directory in Sisense when the installation is not a multi-node cluster.
Install GKE Cloud - kubernetes_cloud_provider=’gke’
configure gcloud to the given cluster Set gcloud to use the configured cluster in GKE: gcloud config set compute/zone <configured kubernetes_cluster_name>
get credentials and configure kubectl Get the credentials for the given zone in GKE: gcloud container clusters get-credentials <configured kubernetes_cluster_name> --zone <configured kubernetes_cluster_location>
Install EKS Cloud - kubernetes_cloud_provider=’eks’ and kubernetes_cluster_name is not defined as “kops”
EKS | Update kubeconfig Update the configuration for the given cluster and location:aws eks --region {{kubernetes_cluster_location}} update-kubeconfig --name {{kubernetes_cluster_name}}
Install AKS Cloud - kubernetes_cloud_provider=’aks’
AKS | Update kubeconfig Update the configuration for the given cluster and location:az aks get-credentials --overwrite-existing --resource-group <configured kubernetes_cluster_location> --name <configured kubernetes_cluster_name>
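For any of the cloud providers above, a quick way to confirm the kubeconfig update worked before the installation continues is to check the active context and the node list. A minimal sketch, assuming kubectl is already installed on the bastion:

# Confirm the context now points at the managed cluster
kubectl config current-context
# Confirm API access and that the nodes report Ready
kubectl get nodes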
Install OpenShift when is_openshift = true
OpenShift | Login system admin user Login as system admin user:oc login -u system:admin
OpenShift | Create Sisense namespace before applying security contexts Create the configured name space:oc create ns <namespace> --dry-run=client
OpenShift | Managing Security Context Constraints Set the security context for each of these:
oc adm policy add-scc-to-group anyuid system:authenticated
oc adm policy add-scc-to-user privileged system:serviceaccount:<namespace>:default
oc adm policy add-role-to-user admin system:serviceaccount:<namespace>:default
oc adm policy add-scc-to-user privileged system:serviceaccount:<namespace>:<namespace>-droppy
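If pods later fail on OpenShift with permission errors, verifying that these constraints were actually applied is a useful first step. A minimal sketch using oc (the namespace is whatever you configured):

# List the security context constraints known to the cluster
oc get scc
# Check which users, groups, and service accounts are attached to one of them
oc describe scc privileged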
Install Kubernetes Client (not applicable for offline install)
Download kubernetes-client archive Download the package. Values are set in “..\kubespray\roles\kubernetes-cloud\kubectl\defaults\main.yml”: https://dl.k8s.io/v1.17.6/kubernetes-client-linux-amd64.tar.gz with the checksum provided. Place it into the following directory: “/tmp” or “TMPDIR”. Access to the given repository is required.
Unarchive kubernetes-client Unarchive to tmp directory.
Copy kubectl binary to destination directory Copy to bin directory:“/usr/local/bin”
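If the download step fails (for example, behind a proxy), the client can be fetched and placed manually before re-running the installer. A minimal sketch, assuming the v1.17.6 archive named above:

# Download and unpack the client archive into /tmp
curl -L -o /tmp/kubernetes-client-linux-amd64.tar.gz https://dl.k8s.io/v1.17.6/kubernetes-client-linux-amd64.tar.gz
tar -xzf /tmp/kubernetes-client-linux-amd64.tar.gz -C /tmp
# Copy the kubectl binary to the expected destination and verify it
sudo install -m 0755 /tmp/kubernetes/client/bin/kubectl /usr/local/bin/kubectl
kubectl version --client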
Jump to section >> Common Section: ; Helm
Validate Pods are Sufficient
Dockerized | Gather list of nodes kubectl get no --no-headers
Validate Current Pods settings Loop through all nodes to validate pod capacity:kubectl get nodes
Validate Kubectl capacity nodes configured Determine whether the pod capacity setting for each given node is equal to or greater than what is configured in the “kubernetes_minimum_pods” parameter in “installer/roles/sisense_validation/defaults/main.yml” (for example, 58 pods). OK, {{ pod_capacity }} Kuberenetes pods does meet with the requirements ERROR, {{ pod_capacity }} Kubernetes pods doest not meet with the requirements."
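To see the per-node pod capacity that this validation compares against kubernetes_minimum_pods, it can be queried directly. A minimal sketch:

# Print each node name with its maximum pod capacity
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.capacity.pods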
Jump to section >> Common Section: Storage
Jump to section >> Common Section: ; Monitoring - Internal & External
Jump to section >> Common Section: ; Sisense Installation, Logging Monitoring, and Wait for Sisense to come up
Playbook: sisense.yml
Loading installer values from extra_values folder. Load installation extra values:../extra_values/installer/installer-values.yaml
Get users homedir Run the following command to determine the home directory for the installation user:echo ~
Set users homedir Set the value of the user_home_dir to be used throughout the installation.
Tiller | Set Tiller namespace Tiller namespace = kube-system and will be set.
Tiller | Remove Tiller namespace Remove any tiller setting currently set in “/etc/environment”.
Tiller | Set Tiller namespace Set environment tiller in “/etc/environment” as follows:TILLER_NAMESPACE=kube-system
Environment | Adding the environment in the Profile files Add the tiller line in the environment profile file.
Environment | Source the bash_profile file Run command to execute the bash to add tiller line:source <user_home_dir>/.bashrc
Jump to the “sisense_post_cluster” section >>
Jump to section >> Common Section: ; Helm
Jump to section >> Common Section: Storage
Jump to section >> Common Section: ; Monitoring - Internal & External
Jump to section >> Common Section: ; Sisense Installation, Logging Monitoring, and Wait for Sisense to come up
Print Endpoints A more detailed message will be displayed if expose_nodeports is true:
- "Sisense App: {{ base_proto }}{{ print_ip }}{{ print_port }} {% if cloud_load_balancer|default(false)|bool -%} Load balancer address: {{ load_balancer_address }} {% endif -%}"
- "Kubernetes dashboard: https://{{ print_ip }}:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy"
- "Management Swagger UI at {{ base_proto }}{{ print_ip }}:30082/swagger-ui.html"
- "Query Swagger UI at {{ base_proto }}{{ print_ip }}:30084/swagger-ui.html"
- "Build Swagger UI at {{ base_proto }}{{ print_ip }}:30086/swagger-ui.html"
- "App Test UI at {{ base_proto }}{{ print_ip }}{{ print_port }}/app/test"
Print Disk Utilisation A debug level message will be printed out to show disk utilization
A success message will be displayed if installation is completed fully and successfully.

For example:
Sisense L2021.5.0.180 has been installed successfully!
sisense ~/installation/sisense-L2021.5.0.180
Sisense <version number> has been installed successfully! print_endpoints

Common Sections

Common Section: RKE
Shell Script Function: install_rke
This function will run for single or multi-node installations. It runs the rke.yml playbook in two modes: once as the main installation and then as a post-RKE installation.

Message: Installing Rancher Kubernetes Engine Cluster…

Success will be predicated on the following command running successfully:
<config_rke_bin_path> up --config <config_rke_config_folder>/<config_rke_config_file>
Successful Message: Rancher Kubernetes Engine Cluster installed successfully
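If this step fails, the same command can be re-run by hand from the bastion to see the full RKE output. A minimal sketch with example paths (substitute your configured config_rke_bin_path, config_rke_config_folder, and config_rke_config_file):

# Re-run the cluster bring-up against the generated configuration
/usr/local/bin/rke up --config ~/rke/cluster.yml
# RKE writes the kubeconfig next to the config as kube_config_cluster.yml; use it to check the nodes
kubectl --kubeconfig ~/rke/kube_config_cluster.yml get nodes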
Playbook: rke.yml

This playbook will either deploy Kubernetes using RKE for the first time, or will be used to upgrade Kubernetes.
Task Logic Description Success Message Fail Message Root Cause / Workaround / Solution
Check ansible version <minimal_ansible_version> Check that the minimal version of Ansible is deployed on the server running the installation. The version is set to 2.10.0, for example, and will be stated in the log entry for the task. Ansible must be <minimal_ansible_version> or higher
ssh_key : Prepare SSH Keys Run for the node running the installation and for all nodes designated as “kube-node” in the inventory file, and prepare and test the ssh session.
ssh_key : Copy SSH key to host Look for the ssh key in “~/.ssh/id_rsa.pub” in any of the kube-node servers.
ssh_key : Test ssh connection Test the connection for each of the nodes in “kube-node” using the following command:ssh <linux_user>@<node ip> Success Fail Make sure ip is configured and node is up.
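When the ssh_key tasks fail, reproducing the key copy and the connection test by hand usually reveals the problem (wrong user, missing key, or a blocked port). A minimal sketch, using the same linux_user and node IPs defined in the inventory file:

# Generate a key pair on the bastion if ~/.ssh/id_rsa.pub does not exist yet
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Copy the public key to a kube-node and test the connection
ssh-copy-id <linux_user>@<node ip>
ssh <linux_user>@<node ip> "hostname"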
rke-defaults : Configure defaults Gather the configuration set in “../installer/roles/rke-defaults/defaults/main.yaml”. Check roles/rke-defaults/defaults/main.yml
rke-defaults: Gather ansible_default_ipv4 from all hosts Gather the ips for all nodes.
rke-defaults : create fallback_ips_base Assign the fallback ips.
rke-defaults : set fallback_ips Set the values for the fallback ips to be used later in the installation.
rke-defaults : Set no_proxy to all assigned cluster IPs and hostnames Set no_proxy in the case http_proxy or https_proxy is configured and no_proxy is not defined.
rke-defaults : Populates no_proxy to all hosts Assign to each host no proxy.
bootstrap-os : Fetch /etc/os-release Get the OS and version to determine which OS bootstrap to perform, based on the values returned for the OS: CentOS, Amazon, RedHat, Clear Linux, Fedora CoreOS, Flatcar, Debian, Fedora, openSUSE
Bootstrap Debian (When ID=”ubuntu” or “debian”)
bootstrap-os : Check if bootstrap is needed Check to see if python3 needs to be installed.
bootstrap-os : Check http::proxy in apt configuration files Check whether an http proxy is configured by running apt-config dump | grep -qsi 'Acquire::http::proxy'.
bootstrap-os : Add http_proxy to /etc/apt/apt.conf if http_proxy is defined If an http proxy is required, add it to “/etc/apt/apt.conf”.
bootstrap-os : Install python3 Install python3 if required: apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y python3-minimal
bootstrap-os : Update Apt cache For OS ID Debian version 10 or 11, run: apt-get update --allow-releaseinfo-change
bootstrap-os : Set the ansible_python_interpreter fact Set location for python3:/usr/bin/python3
bootstrap-os : Install dbus for the hostname module Install dbus: apt update && apt install dbus
Bootstrap CentOS (When ID=”centos” or “ol” or “almalinux”)
bootstrap-os : Gather host facts to get ansible_distribution_version ansible_distribution_major_version Determine the OS distribution version.
bootstrap-os : Add proxy to yum.conf or dnf.conf if http_proxy is defined Check whether an http proxy is defined, and if so add it to the yum.conf or dnf.conf config files.
bootstrap-os : Download Oracle Linux public yum repo Download https://yum.oracle.com/public-yum-ol7.repo to “/etc/yum.repos.d/public-yum-ol7.repo”.
bootstrap-os : Enable Oracle Linux repo Deploy the following: ol7_latest, ol7_addons, ol7_developer_EPEL
bootstrap-os : Install EPEL for Oracle Linux repo package Install the Oracle package:"oracle-epel-release-el<ansible version>”
bootstrap-os : Enable Oracle Linux repo Configure the Oracle Linux package
bootstrap-os : Enable Centos extra repo for Oracle Linux Deploy addition repo.
bootstrap-os : Check presence of fastestmirror.conf Determine if the fastestmirror plugin is configured.
bootstrap-os : Disable fastestmirror plugin if requested If the fastestmirror plugin is configured, disable it, as it slows down Ansible deployments.
bootstrap-os : Install libselinux python package Install the proper python package if needed
Bootstrap Amazon OS (When ID=”amzn”)
bootstrap-os : Enable EPEL repo for Amazon Linux Download and install the EPEL repo if not deployed already: “http://download.fedoraproject.org/pub/epel/7/$basearch”
bootstrap-os : Enable Docker repo Enable docker as follows:amazon-linux-extras enable docker=latest
Bootstrap Red Hat (When ID=”rhel”)
bootstrap-os : Gather host facts to get ansible_distribution_version ansible_distribution_major_version Determine the OS distribution version.
bootstrap-os : Add proxy to yum.conf or dnf.conf if http_proxy is defined Check whether an http proxy is defined, and if so add it to the yum.conf or dnf.conf config files.
bootstrap-os : Check RHEL subscription-manager status Check subscription manager for status by running command:/sbin/subscription-manager status
bootstrap-os : RHEL subscription Organization ID/Activation Key registration Retain subscription organization ID and activation key.
bootstrap-os : RHEL subscription Username/Password registration Check if username is defined.
bootstrap-os : Check presence of fastestmirror.conf Check if fastestmirror plugin is active by checking:/etc/yum/pluginconf.d/fastestmirror.conf
bootstrap-os : Disable fastestmirror plugin if requested Disable the fastestmirror plugin because it slows down Ansible deployments.
bootstrap-os : Install libselinux python package Install libselinux python package.
Bootstrap Clear Linux (When ID=”clear-linux-os”)
bootstrap-os : Install basic package to run containers install “containers-basic” in order to be able to run containers.
bootstrap-os : Make sure docker service is enabled Ensure that the docker service is enabled by checking the systemd docker service for the started status.
Bootstrap clear Fedora Coreos (When ID=”fedora” or When Variant_ID = “coreos”)
bootstrap-os : Check if bootstrap is needed install “containers-basic” in order to be able to run containers.
bootstrap-os : Remove podman network cni Run command:podman network rm podman
bootstrap-os : Clean up possible pending packages on fedora coreos Run command:rpm-ostree cleanup -p
bootstrap-os : Temporary disable fedora updates repo because of base packages conflicts Disable repo updates by running command and turning it to disabled:sed -i /etc/yum.repos.d/fedora-updates.repo
bootstrap-os : Install required packages on fedora coreos Run command to install ostree:rpm-ostree install --allow-inactive
bootstrap-os : Enable fedora updates repo Reenable repo updates by configuring to enabled = 1:sed -i /etc/yum.repos.d/fedora-updates.repo
bootstrap-os : Reboot immediately for updated ostree, please run playbook again if failed first time. Run command to reboot to apply updated ostree:nohup bash -c 'sleep 5s && shutdown -r now'
bootstrap-os : Wait for the reboot to complete Wait by retrying the connection to the host 240 times with a 5-second delay between retries. This step must succeed.
bootstrap-os : Store the fact if this is an fedora core os host Set the fact that the OS was bootstrapped.
Bootstrap Flatcar (When ID=”flatcar”)
bootstrap-os : Check if bootstrap is needed Gather facts from “/opt/bin/.bootstrapped”
bootstrap-os : Force binaries directory for Flatcar Container Linux by Kinvolk Set directory to “/opt/bin”
bootstrap-os : Run bootstrap.sh Run the script bootstrap.sh located in “../installer/roles/bootstrap-os/files/bootstrap.sh”This downloads the proper python package from “https://downloads.python.org/pypy/pypy3.6-v${PYPY_VERSION}-linux64.tar.bz2” and installs to “/opt/bin” directory.
bootstrap-os : Set the ansible_python_interpreter fact Set directory for python “/opt/bin/python”.
bootstrap-os : Disable auto-upgrade Disable auto-upgrade by disabling service “locksmithd.service”.
Bootstrap Fedora Classic (When ID=”fedora” and NOT Variant_ID = “coreos” )
bootstrap-os : Check if bootstrap is needed Determine if python needs to be deployed.
bootstrap-os : Add proxy to dnf.conf if http_proxy is defined Add dnf.conf entry for proxy if http_proxy is defined in file:/etc/dnf/dnf.conf
bootstrap-os : Install python3 on fedora Run the command to install python:dnf install --assumeyes --quiet python3
bootstrap-os : Install libselinux-python3 Install package called “libselinux-python3”.
Bootstrap openSUSE (When ID=”opensuse-leap” or ID=”opensuse-tumbleweed”)
bootstrap-os : Check that /etc/sysconfig/proxy file exists Check the file “/etc/sysconfig/proxy” for facts.
bootstrap-os : Create the /etc/sysconfig/proxy empty file Create a config file in “/etc/sysconfig/proxy” if http_proxy or https_proxy is defined.
bootstrap-os : Set the http_proxy in /etc/sysconfig/proxy Set the configured http_proxy setting in file “/etc/sysconfig/proxy” if http_proxy is defined.
bootstrap-os : Set the https_proxy in /etc/sysconfig/proxy Set the configured https_proxy setting in file “/etc/sysconfig/proxy” if https_proxy is defined.
bootstrap-os : Enable proxies Enable the proxies by setting parameter PROXY_ENABLED="yes"
bootstrap-os : Install python-xml Run the installation via command:zypper refresh && zypper --non-interactive install python-xml
bootstrap-os : Install python-cryptography Install the package “python-cryptography” if it is not installed already.
Finish up Bootstrap OS
bootstrap-os : Create remote_tmp for it is used by another module Create a temporary directory for Ansible; for example, the default is “~/.ansible/tmp”
bootstrap-os : Gather host facts to get ansible_os_family Gather ansible facts.
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS, non-Flatcar, Suse and ClearLinux) Set inventory host name for non-CoreOS, non-Flatcar, Suse and ClearLinux.
bootstrap-os : Assign inventory name to unconfigured hostnames (CoreOS, Flatcar, Suse and ClearLinux only) Set inventory host name for CoreOS, Flatcar, Suse and ClearLinux only.hostnamectl set-hostname <inventory_hostname>
bootstrap-os : Update hostname fact (CoreOS, Flatcar, Suse and ClearLinux only) Update facts.
bootstrap-os : Install ceph-commmon package Install package “ceph-common”
bootstrap-os : Ensure bash_completion.d folder exists Ensure that a directory “/etc/bash_completion.d/” exists.
Gather ansible facts
Gather minimal facts Gather basic facts from Ansible, which include several details such as installer user information, Ansible version, machine architecture, and Linux kernel version. Full details are available by enabling debug mode for the installation and looking for the entry titled “ansible_facts” in the generated log.
Gather necessary facts (network) Gather network facts from Ansible, which include: ansible_default_ipv4, ansible_default_ipv6, ansible_all_ipv4_addresses, ansible_all_ipv6_addresses
Gather necessary facts (hardware) Gather the available memory from Ansible facts.
Configure defaults Get configuration defaults from “../installer/roles/rke-defaults/defaults/main.yml”
Install container engine on all nodes
container-engine/containerd-common : containerd-common | check if fedora coreos Determine if installation is on Fedora Coreos
container-engine/containerd-common : containerd-common | set is_ostree Set the fact if ostree is on given Fedora Coreos
container-engine/containerd-common : containerd-common | gather os specific variables Gather the OS specific details to be used by ansible to install, which includes determining the packages that will be installed given the OS.
container-engine/docker : check if fedora coreos Determine if installation is on Fedora Coreos
container-engine/docker : set is_ostree Set the fact if ostree is on given Fedora Coreos
container-engine/docker : gather os specific variables Gather the OS specific details to be used by ansible to install, which includes determining the packages that will be installed given the OS.
container-engine/docker : Warn about Docker version on SUSE This task will appear in case the OS is SUSE: “SUSE distributions always install Docker from the distro repos”
container-engine/docker : set dns server for docker Get value for docker_dns_servers
container-engine/docker : show docker_dns_servers Write the DNS server value in the log. <DNS Server Value>
container-engine/docker : add upstream dns servers Add docker dns address to variable.
container-engine/docker : add global searchdomains Add docker_dns_search_domains (e.g., "default.svc.rke-cluster", "svc.rke-cluster")
container-engine/docker : check system nameservers Check system nameservers in /etc/resolv.conf
container-engine/docker : check system search domains Get the search domains from /etc/resolv.conf An entry is required to be set in the conf file.
container-engine/docker : add system nameservers to docker options Retain value for later use.
container-engine/docker : add system search domains to docker options Add the system search value to the docker options.
container-engine/docker : check number of nameservers Check the number of nameservers that are set. Too many nameservers. You can relax this check by set docker_dns_servers_strict=false in docker.yml and we will only use the first 3.
container-engine/docker : rtrim number of nameservers to 3 If more than 3, set only to 3 values.
container-engine/docker : check number of search domains A warning message will appear when the number of search domains exceeds 6. Too many search domains
container-engine/docker : check length of search domains A warning message will appear when the number of characters set for the search domains exceeds 256. Search domains exceeded limit of 256 characters
container-engine/docker : disable unified_cgroup_hierarchy in Fedora 31+ Run the following command if OS is Fedora:grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
container-engine/docker : Reboot in Fedora 31+ Reboot to apply the systemd.unified_cgroup_hierarchy change if the OS is Fedora.
container-engine/docker : Remove legacy docker repo file Remove old repo if it exists if using the RedHat OS:<yum_repo_dir>/docker.repo
container-engine/docker : Ensure old versions of Docker are not installed. | Debian For OS Debian remove via apt the following packages:
  • docker
  • docker-engine
  • docker.io
container-engine/docker : Ensure podman not installed. | RedHat For OS RedHat, remove package if exists:
  • podman
container-engine/docker : Ensure old versions of Docker are not installed. | RedHat For OS RedHat, remove the following packages if they exist:
  • docker
  • docker-common
  • docker-engine
  • docker-selinux.noarch
  • docker-client
  • docker-client-latest
  • docker-latest
  • docker-latest-logrotate
  • docker-logrotate
  • docker-engine-selinux.noarch
container-engine/docker : ensure docker-ce repository public key is installed Ensure the key is available for install.
container-engine/docker : ensure docker-ce repository is enabled Ensure repo is enabled for install.
Configure docker repository on Fedora For OS “Fedora” create a configuration file in <yum_repo_dir>/docker.repo ; based on “fedora_docker.repo.j2".
Configure docker repository on RedHat/CentOS/Oracle/AlmaLinux Linux For OS “RedHat”, create a configuration file in <yum_repo_dir>/docker-ce.repo based on “rh_docker.repo.j2”.
container-engine/docker : Remove dpkg hold Remove dkpg hold for OS using apt for deployment.
container-engine/docker : ensure docker packages are installed Ensure proper packages are installed.
container-engine/docker : Tell Debian hosts not to change the docker version with apt upgrade Ensure that version is not changed for following packages during apt upgrade:
  • containerd.io
  • docker-ce
  • docker-ce-cli
container-engine/docker : ensure service is started if docker packages are already present This will be executed once; if docker does not start, remove our config and try to start again. Docker start failed. Try to remove our config
container-engine/docker : Install Docker plugin Install Docker plugin:docker plugin install --grant-all-permissions
container-engine/docker : Create docker service systemd directory if it doesn't exist Create a docker service if it does not exist already.
container-engine/docker : Write docker proxy drop-in If http_proxy or https_proxy is defined, create a config file “http-proxy.conf” based on “http-proxy.conf.j2” and place it to “/etc/systemd/system/docker.service.d/”.
container-engine/docker : get systemd version Get the systemd version: systemctl --version
container-engine/docker : Write docker.service systemd file Create a docker configuration file “docker.service” based on “docker.service.j2” and place it into “/etc/systemd/system/”. This will not be performed on Flatcar OS or Fedora CoreOS.
Write docker options systemd drop-in Create a docker configuration file “docker-options.conf” based on “docker-options.conf.j2” and place it into “/etc/systemd/system/docker.service.d/”.
container-engine/docker : Write docker dns systemd drop-in Create a docker configuration file “docker-dns.conf” based on “docker-dns.conf.j2” and place it into “/etc/systemd/system/docker.service.d/”.
container-engine/docker : Copy docker orphan clean up script to the node Copy the script “cleanup-docker-orphans.sh” into “<bin_dir>/cleanup-docker-orphans.sh”.
container-engine/docker : Write docker orphan clean up systemd drop-in Create a docker configuration file “docker-orphan-cleanup.conf” based on “docker-orphan-cleanup.conf.j2” and place it into “/etc/systemd/system/docker.service.d/”.
container-engine/docker : restart docker Restart the docker.
container-engine/docker : Docker | reload systemd Reload systemd docker service.
container-engine/docker : Docker | reload docker Reload docker.
container-engine/docker : Docker | wait for docker Wait for docker to be up.
container-engine/docker : ensure docker service is started and enabled Check that docker service is up.
container-engine/docker : adding user '<configured install user>' to docker group Add configured install user to the docker group.
Install Kubernetes
rke : Install packages requirements Determine packages required for installation (not for Flatcar or ClearLinux OS).
rke : Set User Namespaces sysctl value Set the value “user.max_user_namespaces” value in /etc/sysctl.d/00-namespaces.conf
rke : Configure Kernel Runtime Parameters Set /etc/sysctl.d/90-kubelet.conf values for the following parameters: vm.overcommit_memory, vm.panic_on_oom, kernel.panic, kernel.panic_on_oops, kernel.keys.root_maxbytes
rke : Add etcd group Add a user group “etcd”
rke : Add etcd user Add a user to the group, which is the etcd service user account.
rke : Add Rancher user Add a user to group “docker” called “rancher”
rke : Create kubernetes manifests folder Add a directory for Kubernetes manifests (e.g., /etc/kubernetes/manifests)
rke : Enable kernel modules Check that the required OS kernel modules are enabled, based on the list outlined in the “../installer/roles/rke/defaults/main.yml” configuration under default_kernel_modules.
rke : Enable kernel system modules Check that the required kernel system modules are enabled, based on the list outlined in the “../installer/roles/rke/defaults/main.yml” configuration under extended_kernel_modules.
rke : Enable ipvs kernel modules Check that the required ipvs kernel modules are enabled, based on the list outlined in the “../installer/roles/rke/defaults/main.yml” configuration under ipvs_kernel_modules.
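If one of the kernel module tasks fails, you can check on the node whether a given module is loaded and load it by hand. A minimal sketch using br_netfilter as an illustrative module name (see the defaults file above for the actual lists):

# Check whether the module is currently loaded
lsmod | grep br_netfilter
# Load it now and persist it across reboots
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf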
rke : Disable SWAP since kubernetes cannot work with swap enabled (1/2) Disable swap by running the command:swapoff -a
rke : Disable SWAP in fstab since kubernetes cannot work with swap enabled (2/2) Disable the swap entries in /etc/fstab.
rke : Modify sysctl entries Modify sysctl entries for: net.bridge.bridge-nf-call-ip6tables, net.bridge.bridge-nf-call-iptables, net.ipv4.ip_forward
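To confirm these host-preparation steps took effect on a node, the swap status and sysctl values can be checked directly. A minimal sketch:

# Swap should report 0 used and 0 total
free -h | grep -i swap
# The bridge and forwarding settings should all be 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward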
rke : Hosts | create list from inventory Create inventory for hosts for deployment of Kubernetes.
rke : Hosts | populate inventory into hosts file Populate the values of hosts into /etc/hosts file.
rke : Hosts | populate kubernetes loadbalancer address into hosts file Populate the load balancer address into /etc/hosts, given a load balancer is defined.
rke : Hosts | Retrieve hosts file content Load values from /etc/hosts file.
rke : Hosts | Extract existing entries for localhost from hosts file Extract values from hosts file for localhost.
rke : Hosts | Update target hosts file entries dict with required entries Update the hosts file entries based on the list outlined in the “../installer/roles/rke/defaults/main.yml” configuration under “etc_hosts_localhost_entries”.
rke : Hosts | Update (if necessary) hosts file Update the /etc/hosts file with the localhost entries.
rke : RKE | Prepare RKE folder Create folder for the rke configuration based on configuration in “../installer/roles/rke/defaults/main.yml” under rke_config_folder.
rke : Check if RKE binary exists Check if the folder contains binary to install rke, where the folder is located based on configuration in “../installer/roles/rke/defaults/main.yml” under rke_config_folder.
rke : Get the RKE version Check the version of the rke binary by running the binary with --version.
rke : Download the RKE Binary If no binary exists in the given folder, download it from the location defined in “rke_binary_download_url”, based on the configuration in “../installer/roles/rke/defaults/main.yml”.
rke : RKE | Generate Weave password If weave_enabled is true, generate a weave password by running the command:tr </dev/urandom -dc A-Za-z0-9 | head -c9
rke : RKE | Calculate Kube Reserved based node size Run the shell script “rke-node-size.sh” located in “../installer/roles/rke/files/” to assess the reserved memory available for RKE.
rke : RKE | Setting facts for Kube Reserved based node size Store returned memory values in variable.
rke : RKE | Create RKE cluster configuration file Create the configuration file based on the template “rke.j2” located in “../installer/roles/rke/templates”.
rke-defaults : Configure defaults Load default configuration for fallback ips.
rke-defaults : create fallback_ips_base Define fallback ip for accessing Sisense.
rke-defaults : set fallback_ips Set the value.
kubernetes-cloud/kubectl : Kubectl | Check if Kubectl binary exists. Check if directory exists “/usr/local/bin/kubectl”.
kubernetes-cloud/kubectl : Kubectl | Check Kubectl version. Check the installed version by running: kubectl version
kubernetes-cloud/kubectl : Kubectl | Download kubernetes-client archive If not installed already, download the kubectl client from: https://dl.k8s.io/v1.21.6/kubernetes-client-linux-amd64.tar.gz
kubernetes-cloud/kubectl : Kubectl | Copy kubectl binary to destination directory Copy binary from tmp directory to “/usr/local/bin”.
kubernetes-cloud/kubectl : Kubectl | Install kubectl bash completion Generate the bash file into “/etc/bash_completion.d/kubectl.sh” by running:kubectl completion bash
kubernetes-cloud/kubectl : Kubectl | Set kubectl bash completion file permissions Assign proper permission to generate bash file.
provisioner/helm : Helm | Check if Helm binary exists. Check if the binary exists at “/usr/local/bin/helm”.
provisioner/helm : Helm | Check Helm version. Run the command to determine version:helm version
provisioner/helm : Helm | Download helm. If not in place already, download the given helm from the following based on the values defined in “../installer/roles/provisioner/helm/defaults/main.yaml”:https://get.helm.sh/helm-{{ helm_version }}-{{ helm_platform }}-{{ helm_arch }}.tar.gz
provisioner/helm : Helm | Copy helm binary into place. Copy downloaded binary from temporary to the /usr/local/bin/helm directory.
provisioner/helm : Helm | Check if bash_completion.d folder exists Check if completion bash files exists in “/etc/bash_completion.d/”.
provisioner/helm : Helm | Get helm completion If it does not exist, generate the completion bash file by running: /usr/local/bin/helm completion bash
provisioner/helm : Helm | Install helm completion Copy generated completion file to:“/etc/bash_completion.d/helm.sh”
rke : Create .kube folder in home directory Create the ~/.kube directory.
rke : Copy kubeconfig Copy the configuration file “kube_config_cluster.yml” into the kube directory.
rke : Update kubeconfig IP address on cluster Update the ip address to config file.
rke : Kubernetes Apps | Lay down dashboard template Define the configuration file from the template located in “../installer/roles/rke/templates/dashboard.yml.j2”.
rke : Kubernetes Apps | Start dashboard Start the dashboard.


Common Section: ; Provisioner
Shell Script Function: install_provisioner
Message: ?
Successful Message: ?
Playbook: provisioner.yml
This playbook installs the Sisense provisioner: it prepares the required directories, validates Helm, creates SSL secrets if required, and installs or upgrades the Sisense provisioner Helm chart.
Task Logic Description Success Message Fail Message Root Cause / Workaround / Solution
Load install values from extra values ./extra_values/installer/installer-values.yaml
folders : Ensure Sisense directories exist Check that all of the following directories exist: config/umbrella-chart, config/ssl, config/logging-monitoring ok: [<given node>] => (item={'path': 'config/umbrella-chart'}) ok: [<given node>] => (item={'path': 'config/ssl'}) ok: [<given node>] => (item={'path': 'config/logging-monitoring'}) Fail: [<given node>] => (item={'path': 'config/umbrella-chart'}) Fail: [<given node>] => (item={'path': 'config/ssl'}) Fail: [<given node>] => (item={'path': 'config/logging-monitoring'}) Make sure the directories exist; create them manually if necessary.
folders : Clean dgraph folder Clean the “/dgraph” directory in Sisense, for single node deployment only.
folders: Ensure that applications directories exist in single mode For non-cluster/cloud installations, check that the following directories exist with the specified path/owner/group/mode:
{ path: "/dgraph-io", owner: "1000", group: "1000", mode: "0755" }
{ path: "/storage", owner: "1000", group: "1000", mode: "0755" }
{ path: "/mongodb", owner: "999", group: "999", mode: "0755" }
{ path: "/zookeeper", owner: "1001", group: "1001", mode: "0755" }
ok: [<given node>] => (item={'path': 'dgraph-io', 'owner': '1000', 'group': '1000', 'mode': '0755'})
ok: [<given node>] => (item={'path': 'storage', 'owner': '1000', 'group': '1000', 'mode': '0755'})
ok: [<given node>] => (item={'path': 'mongodb', 'owner': '999', 'group': '999', 'mode': '0755'})
ok: [<given node>] => (item={'path': 'zookeeper', 'owner': '1001', 'group': '1001', 'mode': '0755'})
Fail: [<given node>] => (item={'path': 'dgraph-io', 'owner': '1000', 'group': '1000', 'mode': '0755'})
Fail: [<given node>] => (item={'path': 'storage', 'owner': '1000', 'group': '1000', 'mode': '0755'})
Fail: [<given node>] => (item={'path': 'mongodb', 'owner': '999', 'group': '999', 'mode': '0755'})
Fail: [<given node>] => (item={'path': 'zookeeper', 'owner': '1001', 'group': '1001', 'mode': '0755'})
If need be, create a directory manually with the correct settings.
Check whether the Helm binary exists Check that the helm bin directory exists (e.g., /usr/local/bin/helm) changed: [<given node>] failed: [<given node>]
Check the Helm version Check which version is installed by running the command:
/usr/local/bin/helm version
Download Helm If the Helm version is not 3 or Helm does not exist, download the package: https://get.helm.sh/helm-{{ helm_version }}-{{ helm_platform }}-{{ helm_arch }}.tar.gz
“..\kubespray\roles\provisioner\helm\defaults\main.yaml”:
helm_version: 'v3.4.1'
helm_platform: linux
helm_arch: amd64
helm_bin_path: /usr/local/bin/helm

Helm releases are available at: https://github.com/helm/helm/releases/
Copy the Helm binary to its correct location Once downloaded, Helm will be copied from /tmp/{{ helm_platform }}-{{ helm_arch}}/helm to: /usr/local/bin/helm
If SSL is on (is_ssl = true)
Create a namespace if one does not exist kubectl create namespace {{ namespace_name }} --dry-run=client -o yaml | kubectl apply -f -
SSL | Ensure that a TLS Secret does not exist kubectl delete secret -n <namespace_name> {{ tls_secret_name }} --ignore-not-found
SSL | Create a Sisense Kubernetes TLS Secret kubectl create secret tls {{ tls_secret_name }} -n <namespace_name> \ --key <ssl_key_path> \ --cert <ssl_cer_path>
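If SSL problems appear after installation, verifying the secret created here (and recreating it manually when the key or certificate path was wrong) is straightforward. A minimal sketch, with sisense and sisense-tls used only as example namespace and secret names:

# Confirm the TLS secret exists and is of type kubernetes.io/tls
kubectl get secret sisense-tls -n sisense -o jsonpath='{.type}'
# Recreate it by hand from a key/certificate pair if needed
kubectl delete secret sisense-tls -n sisense --ignore-not-found
kubectl create secret tls sisense-tls -n sisense --key /path/to/tls.key --cert /path/to/tls.crt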
Install Provisioner
HELM | Generate Provisioner Values Create the configuration file for provisioner: /config/umbrella-chart/{{ namespace_name}}-provisioner-values.yaml
HELM | Install/Upgrade Sisense Provisioner For an offline installation, a Helm chart will be installed from the /config/umbrella-chart/ directory, where the file will be: provisioner-<sisense version>.tgz
For an online installation, the installation will be downloaded from the Sisense repo:
helm upgrade prov-<namespace_name> \
    https://<sisense help rep>/provisioner-<sisense version>.tgz \
    --install \
    --namespace={{ namespace_name }} \
    --values={{ sisense_dir }}/config/umbrella-chart/{{ namespace_name }}-provisioner-values.yaml \
    --set-file sisense={{ playbook_dir }}/../extra_values/helm/sisense.yml \
    --set-file prometheus-operator={{ playbook_dir }}/../extra_values/helm/prometheus-operator.yml \
    --set-file cluster-metrics={{ playbook_dir }}/../extra_values/helm/cluster-metrics.yml \
    --set-file logging={{ playbook_dir }}/../extra_values/helm/logging.yml \
    --set-file nginx-ingress={{ playbook_dir }}/../extra_values/helm/nginx-ingress.yml \
    --set-file alb-controller={{ playbook_dir }}/../extra_values/helm/alb-controller.yml \
    --set-file cluster-autoscaler={{ playbook_dir }}/../extra_values/helm/cluster-autoscaler.yml \
    --set-file nfs-client={{ playbook_dir }}/../extra_values/helm/nfs-client.yml \
    --set-file nfs-server={{ playbook_dir }}/../extra_values/helm/nfs-server.yml \
    --set-file aws-ebs-csi={{ playbook_dir }}/../extra_values/helm/aws-ebs-csi.yml \
    --set-file descheduler={{ playbook_dir }}/../extra_values/helm/descheduler.yml \
    --set-file efs={{ playbook_dir }}/../extra_values/helm/efs.yml \
    --create-namespace \
    --cleanup-on-fail
Bash | Add completion file kubectl get cm --namespace <namespace_name> add-completion -ojsonpath='{.data.*}' > ~/add_completion-<namespace_name>.sh
chmod u+x ~/add_completion-<namespace_name>.sh
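After this function, the provisioner release can be inspected with Helm to confirm it deployed, which helps when the installation log stops around this point. A minimal sketch, with sisense standing in for your configured namespace:

# List Helm releases in the Sisense namespace and check their status
helm ls -n sisense
helm status prov-sisense -n sisense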

Common Section: Helm
The following section will run and perform an upgrade to Helm if Helm v2 is still deployed.
The below is not performed for offline installations.
Install/Upgrade Helm to v3 (not applicable for offline install)
Task Logic Success Message Fail Message Root Cause/Workaround/Solution
Migration | Check whether Helm is installed Check whether Helm is installed by running the command that returns the Helm path “/usr/local/bin/helm”:
command -v helm
Migration | Get Tiller pod Find the Helm deployment, for example, in the tiller namespace “kube-system”:
kubectl get po --namespace=<tiller namespace> --selector=app=helm
Empty KUBERNETES_EXEC_INFO, defaulting to client.authentication.k8s.io/v1alpha1. This is likely a bug in your Kubernetes client. Please update your Kubernetes client. This points to the fact that the Kubernetes client being installed by Sisense is not compatible with the server side version of Kubernetes. For example, a client attempts to install Kubernetes 1.17 client where the server has 1.21 installed on it.
Check the version by typing the command kubectl version and upgrade the client to the given server version.
Note Kubespray is configured by default to install a certain client version. Navigate to “ ../installer/roles/kubernetes-cloud/kubectl/defaults/main.yml” to find the version and change its parameters appropriately in “kubectl_version” for the compatible version with the server.
Migration | Get the Helm version Check the Helm version deployed:
helm version --short --client
Helm3 | Install migration plugin dependencies Determine whether yum or apt will be used to install Git:
If /usr/bin/apt exists: apt update && apt install -y git
If /usr/bin/yum exists: yum -y install git
Helm3 | Install the migration plugin Prepare the plugin that will be used to migrate to Helm 3 by first removing any existing copy:
helm plugin remove 2to3
Helm3 | Create plugins directory Create the directory under:
<user home directory>/.helm/plugins
Helm3| Install plugins Install the plugin: helm plugin install https://github.com/helm/helm-2to3.git Access to GitHub is required to install the plug-in.
Get installed releases Get the release of the Helm deployed: helm ls --short --deployed --all
Registering installed releases Retain the value of the release.
Get installed releases Find out the installed versions: helm ls --short -A --deployed
Registering installed releases Retain the value of the release.
Helm3 | Migrate Helm v2 releases Convert releases from helm 2 to 3:
helm 2to3 convert <for each released item>
--release-versions-max=3 -t <TILLER_NAMESPACE>
Helm3 | Clean up Helm v2 data Perform cleanup:
helm 2to3 cleanup --tiller-cleanup --skip-confirmation -t <TILLER_NAMESPACE>
Helm3 | Validate Helm2 configmap Validate the Helm configuration:
kubectl get cm --namespace=<TILLER_NAMESPACE> -l OWNER=TILLER,STATUS=DEPLOYED
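If the migration fails on a particular release, the 2to3 plugin can be exercised manually; its convert command supports a dry run so nothing is changed while you investigate. A minimal sketch, with my-release used only as an example release name:

# Simulate converting one release from Helm v2 to v3 (no changes applied)
helm 2to3 convert my-release --dry-run
# List what Helm v3 now sees across all namespaces
helm ls -A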
Helm3 | Get all namespaced resources Check all the resources deployed in the namespace deployed (e.g., sisense):
kubectl api-resources --namespaced=true -oname
And find the following resources:
pods.metrics.k8s.io
localsubjectaccessreviews.authorization.k8s.io
Helm3 | Get all namespaced resources Check all the resources deployed in the namespace deployed (e.g., sisense):
kubectl api-resources --namespaced=true -oname
Determine monitoring namespace resources:
localsubjectaccessreviews.authorization.k8s.io
Helm3 | Setting facts Set the Helm chart log to “logmon-<namespace_name>”.
Prometheus | Label all resources for Helm (1/2) Label Prometheus resources if internal_monitoring is enabled:
kubectl label --overwrite <each resource> \
    -l 'app.kubernetes.io/instance=sisense-prom-operator' \
    app.kubernetes.io/managed-by=Helm \
    -n kube-system
kubectl label --overwrite <each resource> \
    -l 'release=sisense-prom-operator' \
    app.kubernetes.io/managed-by=Helm \
    -n monitoring
Prometheus | Annotate all resources for Helm (2/2) Annotate all resources accordingly:

kubectl annotate --overwrite <each resource> \
    -l 'release=sisense-prom-operator' \
    meta.helm.sh/release-namespace=monitoring \
    meta.helm.sh/release-name=sisense-prom-operator \
    -n monitoring

kubectl annotate --overwrite <each resource> \
    -l 'app.kubernetes.io/instance=sisense-prom-operator' \
    meta.helm.sh/release-namespace=monitoring \
    meta.helm.sh/release-name=sisense-prom-operator \
    -n monitoring

kubectl annotate --overwrite <each resource> \
    -l 'release=sisense-prom-operator' \
    meta.helm.sh/release-namespace=monitoring \
    meta.helm.sh/release-name=sisense-prom-operator \
    -n kube-system

kubectl annotate --overwrite <each resource> \
    -l 'app.kubernetes.io/instance=sisense-prom-operator' \
    meta.helm.sh/release-namespace=monitoring \
    meta.helm.sh/release-name=sisense-prom-operator \
    -n kube-system
Nginx | Annotate all resources for Helm (1/2) Annotate Nginx resources:

kubectl annotate --overwrite <each resource> \
    -l 'release=nginx-ingress' \
    meta.helm.sh/release-namespace=kube-system \
    meta.helm.sh/release-name=nginx-ingress \
    -n kube-system
Nginx | Label all resources for Helm (2/2) Label Nginx resources:

kubectl label --overwrite <each resource> \
    -l 'release=nginx-ingress' \
    app.kubernetes.io/managed-by=Helm
Descheduler | Annotate all resources for Helm (1/2) Annotate descheduler resources:

kubectl annotate --overwrite <each resource> \
    -n kube-system \
    -l 'release=descheduler' \
    meta.helm.sh/release-namespace=kube-system \
    meta.helm.sh/release-name=descheduler
Descheduler | Label all resources for Helm (2/2) Label descheduler resources:

kubectl label --overwrite <each resource> \
    -n kube-system \
    -l 'release=descheduler' \
    app.kubernetes.io/managed-by=Helm
Cluster-autoscaler | Annotate all resources for Helm (1/2) Annotate auto-scaler resources:

kubectl annotate --overwrite <each resource> \
    -n kube-system \
    -l 'app.kubernetes.io/instance=cluster-autoscaler' \
    meta.helm.sh/release-namespace=kube-system \
    meta.helm.sh/release-name=cluster-autoscaler
Cluster-autoscaler | Label all resources for Helm (2/2) Label auto-scaler resources:

kubectl label --overwrite <each resource> \
    -n kube-system \
    -l 'app.kubernetes.io/instance=cluster-autoscaler' \
    app.kubernetes.io/managed-by=Helm
NFS-CLIENT | Annotate all resources for Helm (1/2) Annotate NFS client resources:

kubectl annotate --overwrite <each resource> \
    -n default \
    -l 'release=nfs-client-provisioner' \
    meta.helm.sh/release-namespace=default \
    meta.helm.sh/release-name=nfs-client-provisioner
NFS-CLIENT | Label all resources for Helm (2/2) Label NFS client resources:

kubectl label --overwrite <each resource> \
    -n default \
    -l 'release=nfs-client-provisioner' \
    app.kubernetes.io/managed-by=Helm
NFS-SERVER | Annotate all namespaced resources for Helm (1/2) Annotate NFS server resources:

kubectl annotate --overwrite <each resource> \
    -n default \
    -l 'release=nfs-server-provisioner' \
    meta.helm.sh/release-namespace=default \
    meta.helm.sh/release-name=nfs-server-provisioner
NFS-SERVER | Label all namespaced resources for Helm (2/2) Label NFS server resources:

kubectl label --overwrite <each resource> \
    -n default \
    -l 'release=nfs-server-provisioner' \
    app.kubernetes.io/managed-by=Helm
Sisense | Annotate resources namespace for Helm (1/3) Annotate Sisense resources:

kubectl annotate --overwrite <each resource> \
    -l 'app.kubernetes.io/instance={{ namespace_name }}' \
    meta.helm.sh/release-namespace={{ namespace_name }} \
    meta.helm.sh/release-name={{ namespace_name }} \
    -n {{ namespace_name }}

kubectl annotate --overwrite {{ item }} \
    --all -n {{ namespace_name }} \
    meta.helm.sh/release-namespace={{ namespace_name }} \
    meta.helm.sh/release-name={{ namespace_name }}
Sisense | Label resources namespace for Helm (2/3) Label Sisense resources:

kubectl label --overwrite <each resource> \
    -l 'app.kubernetes.io/instance={{ namespace_name }}' \
    app.kubernetes.io/managed-by=Helm \
    -n {{ namespace_name }}

kubectl label --overwrite <each resource> \
    --all -n {{ namespace_name }} \
    app.kubernetes.io/managed-by=Helm
Sisense | Annotate monitoring resources (3/3) Annotate monitoring resources:

kubectl annotate --overwrite ServiceMonitor --all \
    -n monitoring \
    meta.helm.sh/release-namespace={{ namespace_name }} \
    meta.helm.sh/release-name={{ namespace_name }}

kubectl annotate --overwrite PrometheusRule --all \
    -n monitoring \
    meta.helm.sh/release-namespace={{ namespace_name }} \
    meta.helm.sh/release-name={{ namespace_name }}

kubectl annotate --overwrite ConfigMap cm-logmon-sisense-env \
    -n monitoring \
    meta.helm.sh/release-namespace={{ namespace_name }} \
    meta.helm.sh/release-name={{ namespace_name }}
Logging | Get all namespaced resources kind (1/3) Get all the monitoring resources when internal_monitoring and external_monitoring are enabled:

kubectl get all -n kube-system -l 'k8s-app in (logrotate, fluentd-logzio, fluent-bit, fluentd, sysinfo, metricbeat)'
Logging | Label all namespaced resources in system for Helm (2/3) kubectl label --overwrite <for each item below> \
    -n kube-system \
    app.kubernetes.io/managed-by=Helm \

For each of these:
- "{{ all_logging_resources.stdout_lines }}"
- "cm,logrotate-config"
- "cm,cm-logz-key"
- "cm,fluentd-config-template"
- "clusterrole,fluent-bit-read"
- "clusterrole,fluentd"
- "pdb,fluentd"
- "sa,fluentd"
- "clusterrolebinding,fluentd"
- "Role,fluentd-kinesis"
- "Secret,fluentd-kinesis"
- "RoleBinding,fluentd-kinesis"
- "cj,cronjob-logrotate"
- "ServiceMonitor,fluent-bit-metrics"
- "ServiceMonitor,fluentd-metrics"
Logging | Annotate namespaced resources in system for Helm (3/3) kubectl annotate --overwrite {{ item.split(',')[0] }} {{ item.split(',')[1] }} \
    -n kube-system \
    meta.helm.sh/release-namespace={{ namespace_name }} \
    meta.helm.sh/release-name=logmon-{{ namespace_name }} \

For each of these:
- "{{ all_logging_resources.stdout_lines }}"
- "cm,logrotate-config"
- "cm,cm-logz-key"
- "cm,fluentd-config-template"
- "clusterrole,fluent-bit-read"
- "clusterrole,fluentd"
- "pdb,fluentd"
- "sa,fluentd"
- "clusterrolebinding,fluentd"
- "Role,fluentd-kinesis"
- "Secret,fluentd-kinesis"
- "RoleBinding,fluentd-kinesis"
- "cj,cronjob-logrotate"
- "ServiceMonitor,fluent-bit-metrics"
- "ServiceMonitor,fluentd-metrics"
resolve platform specific vars Determine the specific platform variables based on the operating system.
kubernetes-cloud/helm : download... For non-offline installations:

If Helm is not installed, or is installed with version 2, download Helm version 3.
Download the helm package. The configuration resides under “../installer/roles/kubernetes-cloud/helm/defaults/main.yaml”.

The default site set is https://get.helm.sh/helm-v3.4.1-linux-amd64.tar.gz.
The checksum will be used to validate the download. It will be downloaded to a /tmp directory (see the verification example after the configuration below).

kubernetes_helm_mirror: https://get.helm.sh
kubernetes_helm_ver: v3.4.1
kubernetes_helm_platform: linux-amd64

kubernetes_helm_checksums:
  v2.17.0:
    linux-amd64: sha256:f3bec3c7c55f6a9eb9e6586b8c503f370af92fe987fcbf741f37707606d70296
  v3.4.1:
    linux-amd64: sha256:538f85b4b73ac6160b30fd0ab4b510441aa3fa326593466e8bf7084a9c288420
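To verify the Helm prerequisite manually, you can check the installed version and compare the downloaded archive against the configured checksum; a minimal sketch, assuming the default v3.4.1 linux-amd64 values above and that the archive was saved under its original name in /tmp:

helm version --short
sha256sum /tmp/helm-v3.4.1-linux-amd64.tar.gz

The sha256sum output should match the value listed under kubernetes_helm_checksums for the downloaded version.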
kubernetes-cloud/helm : make install dir Set the installation directory to the default set in the config file:
“../installer/roles/kubernetes-cloud/helm/defaults/main.yaml”.
“/usr/local/bin/helm-v3.4.1”
kubernetes_helm_bin_dir: /usr/local/bin
kubernetes-cloud/helm : unarchive Unarchive the downloaded TGZ file to the directory:
“ /usr/local/bin/helm-v3.4.1/linux-amd64/”
kubernetes-cloud/helm : cleanup... Delete the TGZ file from the tmp directory.
kubernetes-cloud/helm : remove link Remove the link to the old ”/usr/local/bin/helm” directory
kubernetes-cloud/helm : link Link to the new directory “/usr/local/bin/helm-v3.4.1/linux-amd64/” [<node>] => (item=helm)
kubernetes-cloud/helm : Helm3 | Switch stable repo Update repo:
helm repo add stable https://charts.helm.sh/stable --force-update
changed: [<node>]

Common Section: Storage
Task Logic Success Message Fail Message Root Cause/Workaround/Solution
Install Rook-Ceph if configured storage_type = ‘rook-ceph’
storage : Rook-Ceph | Install lvm utils (RedHat) For each node yum install lvm2 given “RedHat” OS.
storage : Rook-Ceph | Install lvm utils (Debian) For each node apt install lvm2 given “Debian” OS.
Install glusterfs if configured storage_type = ‘glusterfs’
storage : Add glusterfs-4 repository (Ubuntu 18.04, Debian) install from repo: ppa:gluster/glusterfs-4.0 repo access is required.
storage : Add bionic repo key (needed for gluster4.0 support) - Ubuntu 20 Install from repo: keyserver.ubuntu.com
id is: F7C73FCC930AC9F83B387A5613E01B7B3FE869A9
repo access is required.
storage : Add glusterfs-4 repository (Ubuntu 20.04) install from repo:
deb http://ppa.launchpad.net/gluster/glusterfs-4.0/ubuntu bionic main
repo access is required.
storage : Update apt cache update apt cache.
storage : Add glusterfs-4 repository (CentOS, RedHat) Install from:
http://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0
repo access is required.
storage : Install glusterfs-4 mount utils (Amazon, CentOS) yum install -y https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/glusterfs-4.0.2-1.el7.x86_64.rpm \
    https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/glusterfs-client-xlators-4.0.2-1.el7.x86_64.rpm \
    https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/glusterfs-extra-xlators-4.0.2-1.el7.x86_64.rpm \
    https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/glusterfs-fuse-4.0.2-1.el7.x86_64.rpm \
    https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/glusterfs-libs-4.0.2-1.el7.x86_64.rpm \
    https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/python2-gluster-4.0.2-1.el7.x86_64.rpm
storage : Install RedHat Dependencies (Redhat) Install packages:

psmisc
attr
storage : Install openssl10 (Redhat 8) dnf install -y compat-openssl10
storage : Install glusterfs-4 mount utils (RedHat) glusterfs-fuse-4.0.2-1.el7
storage : Install glusterfs-4 mount utils (Ubuntu, Debian) glusterfs-client=4.0.2-ubuntu2~bionic1
storage : Load lvm kernel modules Load the lvm kernel modules (see the example after the list):
    - "dm_snapshot"
    - "dm_mirror"
    - "dm_thin_pool"
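If this task fails, the same modules can be loaded and verified manually on the affected node; a minimal sketch, assuming standard modprobe/lsmod tooling:

modprobe dm_snapshot
modprobe dm_mirror
modprobe dm_thin_pool
lsmod | grep dm_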
storage : Install glusterfs mount utils (Redhat) Install glusterfs-fuse.
storage : Install glusterfs mount utils (Debian) Install glusterfs-client
Install NFS if configured storage_type = ‘nfs’
Install NFS mount utils (RedHat) Install NFS utils for Redhat OS:
yum install nfs-utils
An issue can occur if the package is not available to be installed. Make sure it is.
EFS/NFS | Install NFS mount utils (Debian) Install NFS utils for Debian type OS:
apt install nfs-common
An issue can occur if the package is not available to be installed. Make sure it is.
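Before rerunning the installer, you can confirm that the packages are resolvable from the node's configured repositories; for example:

yum info nfs-utils        # RedHat/CentOS/Amazon
apt-cache policy nfs-common        # Ubuntu/Debian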

Common Section: Monitoring - Internal & External
Install internal monitoring if it is configured to be installed <monitoring_stack>
Task Logic Success Message Fail Message Root Cause/Workaround/Solution
monitoring_stack : Get Prometheus-Operator status Check the Helm chart installed for “prometheus-operator”:
helm ls -A
monitoring_stack : Get current installed version Check the current version installed of Sisense:
helm ls -n <namespace>
monitoring_stack : Get New Sisense Version Get the new Sisense version number:
echo <sisense version>
monitoring_stack : Copy logging-monitoring extra values files Copy extra value configuration files for monitoring from “../installer/extra_values/helm/” to “{{ sisense_dir }}/config/logging-monitoring/”
Each of the following:
    - { src: "cluster-metrics.yml", dest: "cluster-metrics-override.yml" }
    - { src: "logging.yml", dest: "logging-override.yml" }
    - { src: "prometheus-operator.yml", dest: "prometheus-operator-override.yml" }
monitoring_stack : Uninstall | Remove previous installation of Monitoring Stack If this installation is running in update and has internal monitoring selected, uninstall older release monitoring components:
helm del -n monitoring prom-operator
monitoring_stack : Uninstall | Remove previous installation of Monitoring Stack Remove previous installation for Grafana:
helm del -n <namespace> grafana-<namespace>
monitoring_stack : Uninstall | Remove previously installed CRDs Run the command to delete all CRD monitoring packages:

kubectl delete crd

For each:
    - alertmanagers.monitoring.coreos.com
    - prometheuses.monitoring.coreos.com
    - prometheusrules.monitoring.coreos.com
    - servicemonitors.monitoring.coreos.com
monitoring_stack : Ensure prometheus/alertmanager directories exist Ensure that the directories exist for a single_node or cloud installation where storage is not glusterfs or rook-ceph:
/opt/sisense/prometheus
/opt/sisense/alertmanager
/opt/sisense/grafana
monitoring_stack : Template files for Prometheus/Alertmanager volumes Create the configuration file for each of the following, given a single_node or cloud installation where storage is not glusterfs or rook-ceph:

<sisense opt directory>/config/logging-monitoring/prom-volumes.yaml

<sisense opt directory>/config/logging-monitoring/alertmanager-volumes.yaml
monitoring_stack : Copy CRDs for prometheus-operator Copy the CRD YAML files from “../installer/roles/monitoring_stack/files/crds” to “{{ sisense_dir }}/config/logging-monitoring/”
monitoring_stack : Install CRDs for prometheus-operator Run the command to install the CRDs:
kubectl apply -f <sisense opt directory>/config/logging-monitoring/crds --wait=true

Any errors generated here will be ignored.
monitoring_stack : Check that CoreOS CRDs are created Check that all 6 CRDs were installed:

kubectl get crd
For each:
alertmanagers.monitoring.coreos.com
podmonitors.monitoring.coreos.com
prometheuses.monitoring.coreos.com
prometheusrules.monitoring.coreos.com
servicemonitors.monitoring.coreos.com
thanosrulers.monitoring.coreos.com
Retry 12 times with a 5 second delay between each retry.
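If this check keeps failing, the same verification can be run by hand to see which of the six CRDs are missing, for example:

kubectl get crd | grep monitoring.coreos.com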
monitoring_stack : Fetch list of namespaces Get the names of all namespaces that are defined:

kubectl get ns
monitoring_stack : Fetch list of internal_monitoring Find all the namespaces that are monitoring:

kubectl get -n kube-system configmap cm-logmon-<each namespace returned>-env
monitoring_stack : Copy the TGZ umbrella Helm package prometheus-operator Copy file "../installer/roles/monitoring_stack/files/prometheus-operator-< sisense_version>.tgz" to “<sisense opt directory>/config/logging-monitoring/"
monitoring_stack : Create HostPath Volumes Apply configuration files for single node or cloud installation where storage is not glusterfs or rook-ceph:

kubectl apply -f <sisense opt directory>/config/logging-monitoring/prom-volumes.yaml"

kubectl apply -f <sisense opt directory>/config/logging-monitoring/alertmanager-volumes.yaml"
monitoring_stack : Create the values file for the prometheus-operator Helm chart Create the configuration file with file template “prometheus-operator-values.j2”.

<sisense opt directory>/config/logging-monitoring/prometheus-operator-values.yaml"
monitoring_stack : Monitoring | Install prometheus-operator chart Install the monitoring Helm chart. The extra values path is set in “./installer/roles/monitoring_stack/defaults/main.yml”.

helm upgrade sisense-prom-operator \
    <sisense opt directory>/config/logging-monitoring/prometheus-operator-<sisense_version>.tgz \
    --values <sisense opt directory>/config/logging-monitoring/prometheus-operator-values.yaml \
    --values <prometheus_operator_extra_values_path> \
    --namespace <monitoring_namespace> \
    --install \
    --create-namespace \
    --cleanup-on-fail
Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress", "stderr_lines": ["Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress"], "stdout": "", "stdout_lines": [] This issue occurs due to a previously failed installation or a stale Prometheus pod that is still deployed, resulting in a duplicate release operation. You can either ignore it (the installation skips it), or uninstall and reinstall Sisense to ensure a clean deployment.
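A minimal sketch of how the stuck release can be inspected and, where possible, rolled back before retrying, assuming the sisense-prom-operator release in the monitoring namespace shown above (helm rollback only applies if a previously deployed revision exists):

helm ls -n monitoring --all
helm history sisense-prom-operator -n monitoring
helm rollback sisense-prom-operator <last deployed revision> -n monitoring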
monitoring_stack : Wait for prometheus/alertmanager pods to be running Check that the readiness of the monitoring services, including alertmanager and prometheus, is “Ready”:

kubectl get sts -n <monitoring_namespace>

Wait for 120 retries with a 3 second delay between each retry.
monitoring_stack : Prometheus Storage Resizing | Gather PVC Details Get the capacity of Prometheus:

kubectl -n monitoring get pvc -l app=prometheus

Note: The namespace used here is hardcoded (e.g., to “monitoring”), and therefore will not utilize the configuration set in the installer extra values file. Therefore, this can fail if the value has been changed.
monitoring_stack : PVC | Check if PersistentVolumeClaim is Bound For cluster installation only, check if monitoring services are bound:

kubectl -n monitoring get pvc

Note: The namespace used here is hardcoded, and therefore will not utilize the configuration set in the installer extra values file. Therefore, this can fail if the value has been changed.
monitoring_stack : Prometheus Storage Resizing | Check PVC size Get the PVC for monitoring:

kubectl -n monitoring get pvc

The size must be at least what is set in “installer/roles/monitoring_stack/defaults/main.yml”:
prometheus_disk_size: 10

Note: The namespace used here is hardcoded (e.g., to “monitoring”), and therefore will not utilize the configuration set in the installer extra values file. Therefore, this can fail if the value has been changed.
monitoring_stack : Prometheus Storage Resizing | Storage input validation The message will appear if the size does not meet the requirement configured. Prometheus Storage size cannot be lowered than the actual size. Actual size is <prometheus_pvc_size>, the requested size is <configured prometheus_disk_size>
monitoring_stack : Prometheus Storage Resizing | Storage input validation Get the prometheus CRD name, applicable for non-cloud installations:

kubectl -n <monitoring_namespace> get prometheuses.monitoring.coreos.com -l app=prometheus-operator-prometheus
monitoring_stack : Prometheus Storage Resizing | Get prometheus crd name Get the prometheus number of replicas, applicable for non-cloud installations:

kubectl -n <monitoring_namespace> get prometheus.monitoring.coreos.com
monitoring_stack : Prometheus Storage Resizing | Get prometheus number of replicas Find out how many replicas exist for the following:

kubectl -n <monitoring_namespace> patch prometheuses.monitoring.coreos.com <pvc name>
monitoring_stack : Prometheus Storage Resizing | Scale down prometheus Scale down Prometheus to 0 replicas:

kubectl -n <monitoring_namespace> patch prometheuses.monitoring.coreos.com <pvc name> -p=0
monitoring_stack : Prometheus Storage Resizing | Delete prometheus PVC Delete each PVC, applicable for non-cloud installations:

kubectl -n <monitoring_namespace> delete <pvc name>
monitoring_stack : Prometheus Storage Resizing | Gather PV Details Gather PVC details:

kubectl -n <monitoring_namespace> get pv -l app=prometheus
monitoring_stack : Prometheus Storage Resizing | Release prometheus PV (Delete uid) Delete PVC:

kubectl patch pv <pvc name>
monitoring_stack : Prometheus Storage Resizing | Scale up prometheus Size the PVC based on configuration:

kubectl -n monitoring patch pvc <each PVC> -p '{"spec":{"resources":{"requests":{"storage":"<prometheus_disk_size>Gi"}}}}'

The configuration is located in “../installer/roles/monitoring_stack/defaults/main.yml”
For example: prometheus_disk_size: 10
monitoring_stack : Prometheus Storage Resizing |Check PVC status Check the PVC that was resized:

kubectl -n <monitoring_namespace> describe pvc
monitoring_stack : Prometheus Storage Resizing | Check for resize limitation For cloud installations, expansion might fail due to storage limitation settings on the cloud provider side. You've reached the maximum modification rate per volume limit. Wait at least 6 hours between modifications per EBS/gp2 volume
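On AWS, one way to confirm whether the underlying EBS volume is still mid-modification (and therefore still subject to the 6-hour limit) is the AWS CLI; a sketch, assuming the AWS CLI is configured and the volume ID backing the PVC is known:

aws ec2 describe-volumes-modifications --volume-ids <EBS volume id>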
monitoring_stack : Prometheus Storage Resizing | Restarting Restart prometheus if an expansion was performed:

kubectl -n <monitoring_namespace> delete po -l app=prometheus
monitoring_stack : Wait for prometheus pods to be running Check on the readiness:

kubectl -n <monitoring_namespace> get sts --selector=app=prometheus-operator-prometheus --ignore-not-found=true

The wait will be 120 retries with a delay of 3 seconds between each retry.
Install Elasticsearch monitoring (if is_efk is true)
monitoring_stack : create values files for Helm chart elastic and curator Create configuration files:

<sisense opt directory>/config/logging-monitoring/elasticsearch-values.yaml

<sisense opt directory>/config/logging-monitoring/elasticsearch-curator-values.yaml
monitoring_stack : deploy elastic with curator Install both Helm charts for elasticsearch:

helm upgrade elasticsearch \
    ./installer/roles/monitoring_stack/files/elasticsearch-7.4.1.tgz \
    -f <sisense opt directory>/config/logging-monitoring/elasticsearch-values.yaml \
    --namespace <monitoring_namespace> \
    --install \
    --create-namespace \
    --cleanup-on-fail

helm upgrade elasticsearch-curator \
    ./installer/roles/monitoring_stack/files/elasticsearch-curator-2.0.1.tgz \
    -f <sisense opt directory>/config/logging-monitoring/elasticsearch-curator-values.yaml \
    --namespace <monitoring_namespace> \
    --install \
    --create-namespace \
    --cleanup-on-fail
monitoring_stack : deploy kibana Install the Kibana package:

helm upgrade kibana ./installer/roles/monitoring_stack/files/kibana-3.2.3.tgz \
    --set env.ELASTICSEARCH_HOSTS=http://elasticsearch-master.monitoring:9200 \
    --set service.type=NodePort \
    --set service.nodePort=30561 \
    --set replicaCount=1 \
    --namespace <monitoring_namespace> \
    --install \
    --create-namespace \
    --cleanup-on-fail
monitoring_stack : deploy es-exporter Deploy es-exporter

helm upgrade es-exporter \
    --set es.uri="http://elasticsearch-master.monitoring:9200",serviceMonitor.enabled=true \
    ./installer/roles/monitoring_stack/files/elasticsearch-exporter-1.10.1.tgz \
    --namespace <monitoring_namespace> \
    --install \
    --create-namespace \
    --cleanup-on-fail
Install Loki monitoring when Loki is enabled
monitoring_stack : Copy Loki chart Copy package from "./installer/roles/monitoring_stack/files/loki-2.5.0.tgz" to "<sisense opt directory>/config/logging-monitoring/"
monitoring_stack : Setup Loki values file Create configuration file “<sisense opt directory>/config/logging-monitoring/loki-values.yaml”
monitoring_stack : Configure Loki volume Create configuration file “<sisense opt directory>/config/logging-monitoring/loki-volume.yaml”
monitoring_stack : Create hostpath dir for Loki Create directory: <sisense opt directory>/loki
monitoring_stack : Create Loki Volume Apply configuration:
kubectl apply -f <sisense opt directory>/config/logging-monitoring/loki-volume.yaml -n <monitoring_namespace>
monitoring_stack : Install Loki chart Run the Helm chart for Loki:

helm upgrade loki \
    <opt sisense directory>/config/logging-monitoring/loki-2.5.0.tgz \
    --values <opt sisense directory>/config/logging-monitoring/loki-values.yaml \
    --namespace <monitoring_namespace> \
    --install \
    --create-namespace \
    --cleanup-on-fail

Common Section: Sisense Installation, Logging Monitoring, and Wait for Sisense to Come Up
This section runs for single-node, multi-node (cluster), and cloud installations.
Task Logic Success Message Fail Message Root Cause/Workaround/Solution
sisense : Copy SSL Certificates If SSL is configured, copy the key and certificate from the configured source directory in the installation configuration file to: “<sisense opt directory>/config/ssl/”
(both the ssl_key_path and ssl_cert_path)
sisense : Copy Sisense extra values files Copy extra value files:

'src': 'sisense.yml', 'dest': 'sisense-override.yml'
'src': 'descheduler.yml', 'dest': 'descheduler-override.yml'
'src': 'nginx-ingress.yml', 'dest': 'nginx-ingress-override.yml'
'src': 'alb-controller.yml', 'dest': 'alb-controller-override.yml'
'src': 'cluster-autoscaler.yml', 'dest': 'cluster-autoscaler-override.yml'
sisense : Copy Helm packages Copy each of the following packages:

"sisense-<sisense_version>.tgz"
"ingress-nginx-<nginx_ingress_version>.tgz"
"descheduler-<descheduler_chart_version>.tgz"
"cluster-autoscaler-<cluster_autoscaler_chart_version>.tgz"

to: "<sisense opt directory>/config/umbrella-chart/"
sisense : Template files Create configuration files for each in the package in directory: “<sisense opt directory>/config/umbrella-chart/”
ok: [node1] => (item={'src': 'cluster-autoscaler-values.yaml', 'dest': 'cluster-autoscaler-values.yaml'})

'nginx-values.yaml'
'sisense-pvc.yaml'
'sisense-namespace.yaml'
'sisense-management-admin.yaml'
'sisense-grafana-volume.yaml'
sisense : Kubernetes | Get Kubernetes Version Check the version of Kubernetes that is deployed:

kubectl version --short
sisense : Check for existing namespace Check the namespace that was configured for Sisense:

kubectl get ns <namespace_name>
sisense : Global Prometheus | Get Prometheus Service Find the Prometheus service:

kubectl get svc --all-namespaces
sisense : Global Prometheus | Get Prometheus Namespace Find the namespace in which Prometheus is deployed:

kubectl get svc --all-namespaces
sisense : Global Prometheus | Get Prometheus Service Monitor Find the Prometheus service monitor resource name for the service monitor:

kubectl api-resources
sisense : Global Prometheus | Setting facts Assign values for later use.
sisense : Global Prometheus | Setting facts Assign values for later use.
Install Proxy - for an installation that is not OpenShift, or on the cloud:
sisense: Check Kube-Proxy label Determine the proxy label:

kubectl get pods -n kube-system -l k8s-app=kube-proxy
sisense : Check if Kube-System has limits Determine whether there is a set CPU limit for the cluster:

kubectl get limitrange cpu-limit-range -n kube-system
sisense : Copy LimitRange Copy the configuration file for the proxy from “./installer/roles/sisense/files/limitrange.yaml” to “<sisense opt directory>/config/umbrella-chart/”
sisense : Apply limits for the Kube-System namespace Apply the limit config file:

kubectl apply -f <sisense opt directory>/config/umbrella-chart/limitrange.yaml
sisense : Restart Kube-Proxy Restart the Kubernetes proxy:

kubectl delete po -n kube-system -l k8s-app=kube-proxy --grace-period=60
sisense : Check for Kube-Proxy availability Check if the proxy is up, and wait until its status = Running:

kubectl get pods -n kube-system -l k8s-app=kube-proxy
Create the Namespace, Set Up SSL, Check Storage
sisense : Create a sisense namespace Create the configured namespace:

kubectl apply -f <sisense opt directory>/config/umbrella-chart/<namespace>-namespace.yaml
sisense :SSL | Ensure that a TLS Secret does not exist Delete the current secret to ensure that a new one will be applied:

kubectl delete secret -n <namespace> <tls_secret_name>
sisense :SSL | Create the Kubernetes TLS Secret Assign the SSL certificate:

kubectl create secret tls <tls secretname> -n <namespace> --key /opt/sisense/config/ssl/<ssl key_filename> --cert /opt/sisense/config/ssl/<ssl certificate>
failed to load key pair tls: failed to parse private key.failed to load key pair tls: failed to parse cert.. The key file is not found or cannot be parsed because it is encrypted (set as a private key). Either the location specified for the certificate is invalid or the certificate is not valid.
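To rule out a bad key/certificate pair before rerunning this step, the files can be inspected locally with openssl; a sketch, assuming PEM-encoded files at the paths used above:

openssl rsa -in /opt/sisense/config/ssl/<ssl key_filename> -check -noout
openssl x509 -in /opt/sisense/config/ssl/<ssl certificate> -noout -subject -dates

If the first command prompts for a passphrase, the key is encrypted and must be replaced with (or converted to) an unencrypted key before it can be used here.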
sisense :PVC | Gather PVC Details If it's a cluster installation, get the storage name:

kubectl get pvc -n <namespace>
sisense :Sisense Storage | Check Sisense Storage Retain the value returned in “Capacity” for Sisense storage:

kubectl get pvc -n <namespace>
Apply FSx storage for a cluster installation where storage is ‘fsx’. If the storage is not yet bound, the FSx subpath job is run first.
sisense :FSx Subpath | Template Job Create the configuration file for FSx. Use “./installer/roles/sisense/templates/fsx-subpath.yaml.j2” as the template for the configuration and place it in <sisense opt directory>/config/umbrella-chart/<namespace>-fsx-subpath.yaml
sisense :FSx Subpath | Deploy the FSx subpath job Invoke the job:

kubectl apply -n <namespace> -f <sisense opt directory>/config/umbrella-chart/<namespace>-fsx-subpath.yaml
sisense :FSx Subpath | Waiting for the FSx subpath job to complete Wait for the job to complete, and its status is “Succeeded”:

kubectl get pod -n <namespace> -l app.kubernetes.io/name=fsx-subpath

Will retry 20 times with a 2 second delay between each retry, to check the job status.
sisense :FSx Subpath | Terminate FSx subpath job Delete the job:

kubectl delete -n <namespace> --force --grace-period=0 -f <sisense opt directory>/config/umbrella-chart/<namespace>-fsx-subpath.yaml
sisense :PVC | Deploy RWX PersistentVolumeClaim for Sisense Deploy the PVC for RWX:

kubectl apply -n <namespace> -f <sisense opt directory>/config/umbrella-chart/<namespace>-pvc.yaml
sisense : Grafana | Ensure that the Grafana directory exists Ensure that the directory exists if the installation is single node, cloud, or the storage is “glusterfs” or “rook-ceph”:

/opt/sisense/grafana/grafana-<namespace>
sisense : Grafana | Create HostPath Volume for Grafana Create the Grafana volume:

kubectl apply -n <namespace> -f <sisense opt directory>/config/umbrella-chart/<namespace>-grafana-volume.yaml
sisense : PVC | Check if PersistentVolumeClaim is Bound If it's a cluster mode installation, wait for PVC to have the status “Bound”:

kubectl get pvc -n <namespace>
Zookeeper installation - Cluster Multi-Node
This section is performed when the installation is an update on a multi-node cluster; the Zookeeper data is upgraded accordingly.
sisense : Zookeeper | Check for an existing cluster Determine whether Zookeeper is deployed on the given cluster:

kubectl get po -n <namespace> -l app=zookeeper
sisense : Zookeeper | Setting Snapshot default condition
sisense : Zookeeper | Setting deployment status Determining whether the status returned is “Running” by parsing out the returned value from the “Status” column.
sisense : Zookeeper | Check for existing snapshots Determine whether there is a Zookeeper snapshot in place:

kubectl exec -i -n <namespace> <zk pod name> -- ls bitnami/zookeeper/data/version-2
sisense : Zookeeper | DataDir contents Zookeeper files found: <outlines the snapshot data files returned>
sisense : Zookeeper | Setting Snapshot condition Retain the fact that snapshots are found. If a snapshot is found, a clean upgrade will be performed. If no snapshot is found, a manual upgrade will be performed for the Zookeeper data.
sisense : Retrieve existing PV for Zookeeper Find a PV for Zookeeper:

kubectl get pv
sisense : Patch ZK Persistent Volumes pre-upgrade Patch the Zookeeper persistent volume:

kubectl patch pv <zookeeper pv name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain" }}'

kubectl patch pv <pv name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain" }}'
sisense : Pre-Upgrade Zookeeper to match Helm3 1/2" Scale down the Zookeeper deployment:

kubectl scale sts -n <namespace> <namespace>-zookeeper --replicas=0
sisense : Pre-Upgrade Zookeeper to match Helm3 2/2 Remove the Zookeeper deployment:

kubectl delete sts -n <namespace> <namespace>-zookeeper
Zookeeper installation - Single Node

This section is performed when the installation is an update on a single node; the Zookeeper data is upgraded accordingly.
sisense : Zookeeper Single | Check the current configuration Determine whether Zookeeper is deployed on the given single node:

kubectl get deploy -n <namespace> -l app=sisense-zookeeper
sisense : Zookeeper Single| Check for an existing cluster Determine the name of the pod:

kubectl get po -n <namespace> -l app=sisense-zookeeper
sisense : Zookeeper Single | Get the Zookeeper pod Save returned values.
sisense : Zookeeper Single | Check for existing snapshots Check if a Zookeeper snapshot exists in the given opt directory, where files have the pattern “snapshot.*”:
/opt/sisense/zookeeper/version-2/snapshot
sisense : Zookeeper Single | Get the Zookeeper snapshot condition Save returned values.
sisense : Zookeeper Single| Prepare Volume Create a config file template called zookeeper-pvc.yaml.j2 in the given directory, containing the volume setup for zookeeper:

<opt sisense directory>/config/umbrella-chart/zookeeper-pvc.yaml
sisense : Zookeeper Single| Create a Persistent Volume Create the Zookeeper volume:

kubectl apply -f <opt sisense directory>/config/umbrella-chart/zookeeper-pvc.yaml
sisense : Zookeeper Single | Check if PersistentVolumeClaim is Bound Wait for status to be “Bound” for the Zookeeper volume:

kubectl get pvc -n <namespace> -l app=zookeeper

The wait will last for 20 retries with a 5 second delay between each retry.
sisense : Zookeeper Single| Restore Data Copy data files from the old to the new folder:
From: <opt sisense directory>/zookeeper/version-2/
To: ; <opt sisense directory>/zookeeper/data/version-2/
sisense : Zookeeper Single| Remove old data Delete old data from <opt sisense directory>/zookeeper/version-2'
MongoDB Backup and Cleanup DGraph
sisense : Check for an existing MongoDB 3.x Find out whether MongoDB 3.x is installed:

kubectl get po -n <namespace> -l 'app in (mongodb-replicaset, mongo)’
sisense : MongoDB Single | Create a Persisent Volume Create the MongoDB volume, only for a single node deploy:

kubectl apply -f <opt sisense directory>/config/umbrella-chart/<namespace>-mongodb-pvc.yaml
sisense : MongoDB Single | Check if PersistentVolumeClaim is Bound For a single node, wait until the volume has status “Bound”:

kubectl get pvc -n <namespace> -l app.kubernetes.io/name=mongodb

Wait for 20 retries with a 5 second delay between each retry.
If the volume does not come up, installation will fail.
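If the claim stays in Pending, describing it and checking recent events in the namespace usually shows the reason (for example, a missing storage class or no matching persistent volume):

kubectl describe pvc -n <namespace> -l app.kubernetes.io/name=mongodb
kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp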
sisense : MongoDB Upgrade | Get Utils image Run the command to determine the “utils_image_tag” value under the “management” section of the returned values, as set in the Helm chart package being deployed:

helm show values <opt sisense directory>/config/umbrella-chart/sisense-<version of sisense being installed>.tgz
sisense : MongoDB Upgrade | Generate migration job names Defines the names for MongoDB migration jobs that will be used to upgrade MongoDB during the installation.

The jobs:
  1. Backup job name will start with “manual-mongodb-backup-”
  2. Restore job name will start with “manual-mongodb-restore-”
sisense : MongoDB Upgrade | Template files Create configuration files for each job, which will be placed under “<opt sisense directory>/config/umbrella-chart/”:

<namespace>-mongodb_migration_permissions.yaml
<namespace>-mongodb-backup-job.yaml
sisense : MongoDB Upgrade | Apply permissions to the migration job Apply the configuration for the migration job:

kubectl apply -f <opt sisense directory>/config/umbrella-chart/<namespace>-mongodb_migration_permissions.yaml"
sisense : MongoDB Upgrade | Backup the old MongoDB Run the backup job:

kubectl apply -f <opt sisense directory>/config/umbrella-chart/<namespace>-mongodb-backup-job.yaml
sisense : MongoDB Upgrade | Wait for the backup job to complete Wait for the job to complete in the time allowed, until its status is Completed:

kubectl get job --namespace <namespace> <name of the backup job>

Wait for 400 retries with a 5 second delay between each retry.
sisense : MongoDB Upgrade | Fail installation on backup job failure A MongoDB backup job failure will fail the installation. Investigate the logs to find the reason for the failure. MongoDB backup job failed. For more information run: kubectl logs -n <namespace> -l job-name=<name of the backup job>
sisense : MongoDB Upgrade | Delete old MongoDB pods Delete the old version of the previously deployed MongoDB:

kubectl scale statefulset -n <namespace> -l 'app in (mongodb-replicaset, <namespace>-mongod)' --replicas=0
sisense : MongoDB Upgrade | Delete old MongoDB PVCs Delete MongoDB when a multi-node cluster is deployed:

kubectl delete pvc -n <namespace> -l app=mongodb-replicaset --force --grace-period=0 --ignore-not-found
sisense : Check Dgraph installation Check if an older version of Dgraph is installed (e.g., app name is “dgraph”):

kubectl get sts -n <namespace> -l app=dgraph
sisense: Single | Check Dgraph installation If installation is single node, check pod STATUS:

kubectl -n <namespace> get pods --selector=dgraph=dgraph --ignore-not-found
sisense: Get current installed version Check the Helm Sisense version deployed, package Name = sisense:

helm ls -n <namespace>
sisense : HA | Delete old Dgraph installation Delete the old Dgraph installation for a multi-node cluster deployment:

kubectl delete -n <namespace> sts -l app=dgraph --ignore-not-found --force --grace-period=0
sisense : HA |Delete existing PVC for Dgraph Delete the old Dgraph PVC for a multi-node cluster deployment:

kubectl delete -n <namespace> pvc -l app=dgraph
sisense : Delete old Dgraph installation Delete the old Dgraph installation for a single node:

kubectl delete -n <namespace> deployment,sts -l dgraph=dgraph --ignore-not-found --force --grace-period=0
sisense : Single | Delete old dgraph data Delete the old Dgraph data for the single node:

Delete all files from /opt/sisense/dgraph-io/
sisense: Dgraph Single| Create Persisent Volume Configure the volume for Dgraph:

kubectl apply -f <opt sisense directory>/config/umbrella-chart/dgraph-pvc.yaml
sisense: Dgraph Single | Check if PersistentVolumeClaim is Bound Wait for the volume to be bound:

kubectl get pvc -n <namespace> -l dgraph=dgraph

Wait for 20 retries with a 5 second delay between retries until the volume's status is “Bound”.
Check the storage and expand it if needed for the cluster installation, including resizing MongoDB/Zookeeper/Dgraph
sisense : HELM | Get chart details For an update type installation, get the name of the service for RabbitMQ:

kubectl get svc -n <namespace> <namespace>-rabbitmq-ha
sisense : HELM | Set chart details Retain the cluster IP for the RabbitMQ PVC.
sisense : HELM | Pod resource limits Template Values Copy the pod system resource limitation configuration file, based on whether the deployment size is small or large:

“./installer/extra_values/helm/pod-resource-limits-small-deployment-size.yaml”
or
“./installer/extra_values/helm/pod-resource-limits-large-deployment-size.yaml”
to: “<opt sisense directory>/config/umbrella-chart/”.
Sisense Storage | Check Sisense Storage For a cluster installation, check whether the storage size requirement is met by checking the CAPACITY value:

kubectl get pvc -n <namespace> storage
Sisense Storage | Storage input validation An error will appear if the storage size does not meet the minimum required from the calculated sisense_disk_size. Sisense Storage size cannot be lowered than the actual size. Actual size is <the storage capacity>, the requested size is <sisense_disk_size> Additional storage will be required.
sisense : MongoDB Storage | Setting resize condition Start with resize not required, and assess below if needed.
sisense : MongoDB Storage | Gather PVC Details Gather details on the PVC for MongoDB:

kubectl get pvc -n <namespace> -l app=mongodb-replicaset

kubectl get pvc -n sisense -l app=mongodb-replicaset
sisense : MongoDB Storage | Check PVC size For each PVC, get the capacity value:

kubectl -n <namespace> get pvc <pvc name>
sisense : MongoDB Storage | Setting resize condition A resize is required as the capacity is less than the required configured <mongodb_disk_size>Gi in the YAML file located in “..\kubespray\roles\sisense\defaults\main.yml”.
sisense : Check if chart is deployed Find out if the Sisense Helm chart is deployed, STATUS = deployed:

helm ls -n <namespace>
sisense : PVC | Check if PersistentVolumeClaim is Bound Check if the PVC is bound:

kubectl get pvc -n <namespace> <Name of mongodb PV>

Wait for 5 retries, with a 5 second delay between each retry.
sisense : MongoDB Storage | Storage input validation MongoDB Storage size cannot be lowered than the actual size. Actual size is <pvc capacity>, the requested size is <mongodb_disk_size>
sisense : MongoDB Storage | Expansion of PVC size Expand storage based on the installation configuration YAML file:

kubectl -n <namespace> patch pvc <mongodb PVC name> -p <mongodb_disk_size>Gi
sisense : MongoDB Storage |Check PVC status kubectl -n <namespace> describe pvc <mongodb PVC name>
sisense : MongoDB Storage | Check for resize limitation This message will appear when resizing has not yet been achieved. You've reached the maximum modification rate per volume limit. Wait at least 6 hours between modifications per EBS volume
sisense : MongoDB Storage | Restarting (rollout) Restart the service:

kubectl rollout restart -n <namespace> sts <namespace>-mongodb-replicaset
sisense : MongoDB Storage | Delete statefulset (keep pods running) Delete the old pod:

kubectl delete sts -n <namespace> -l app=mongodb-replicaset --cascade=false
Resize the Zookeeper volume if needed for a multi-node cluster installation
sisense : Zookeeper | Set resize condition Initially, set the assumption that a volume resize is not needed.
sisense : ZK Storage | Gather PVC Details Determine the name of the Zookeeper volume:

kubectl get pvc -n <namespace> -l app=zookeeper
sisense : ZK Storage | Check the PVC size Determine the capacity value set for the Zookeeper volume:

kubectl -n <namespace> get pvc <zookeeper volume name>
sisense : Zookeeper | Set resize condition Set the installation to resize the volume if the returned size is smaller than the required size in the installation YAML configuration file (e.g., cloud_config.yaml or cluster_config.yaml or openshift_config.yaml) in the zookeeper_disk_size parameter.
sisense : Check if the chart is deployed Check if the Sisense Helm is already deployed. If so, retain the chart name that is deployed. Look at chart NAME = sisense:

helm ls -n <sisense>
sisense : PVC | Check if PersistentVolumeClaim is Bound Determine whether the PVC deployed for Zookeeper is bound:

kubectl get pvc -n <sisense> data-zookeeper-0

Wait for 5 retries, with a 5 second delay between each retry, until PVC is bound.
sisense : Zookeeper Storage | Storage input validation This message will appear if the storage size does not meet the configured size requirement. The Zookeeper storage size cannot be lower than the actual size. Actual size is <Capacity size>; the requested size is <the configured value zookeeper_disk_size>
sisense : Zookeeper Storage | Expansion of the PVC size Expand the volume size for Zookeeper to the required disk size:

kubectl -n <namespace> patch pvc data-zookeeper-0 -p '{"spec":{"resources":{"requests":{"storage":"5Gi"}}}}'
sisense : Zookeeper Storage | Check the PVC status Check that the size was set correctly by checking the status:

kubectl -n <namespace> describe pvc data-<namespace>-zookeeper-0
sisense : Zookeeper Storage | Check for a resize limitation If it returns that the maximum modification is reached, it could mean that the volume has not been scaled up yet. You've reached the maximum modification rate per volume limit. Wait at least 6 hours between modifications per EBS volume
sisense : Zookeeper Storage | Restarting (rollout) Restart the Zookeeper app:

kubectl rollout restart -n <namespace> sts <namespace>-zookeeper
sisense : Zookeeper Storage | Delete the statefulset (keep pods running) Delete Zookeeper statefulset:

kubectl delete sts -n <namespace> -l app=zookeeper --cascade=false
Resize the Dgraph Alpha and Zero storage volume if needed for a multi-node cluster installation
sisense : Dgraph alpha Storage | Setting the Alpha default condition Assumes that no resize will be required for Dgraph Alpha storage.
sisense : Dgraph zero Storage | Setting the Zero B default condition Assumes that no resize will be required for Dgraph Zero storage.
sisense : Dgraph alpha Storage | Gather the PVC Details Gather details about PVC for Alpha storage:

kubectl get pvc -n <namespace> -l component=alpha
sisense : Dgraph alpha Storage | Check the PVC size Check the PVC size for Alpha storage:

kubectl -n <namespace> get pvc <pvc name for alpha storage>
sisense : Dgraph zero Storage | Gather the PVC details Gather details about PVC for Zero storage:

kubectl get pvc -n <namespace> -l component=zero
sisense : Dgraph zero Storage | Check the PVC size Check the PVC size for Zero storage:

kubectl -n <namespace> get pvc <pvc name for zero storage>
sisense : Dgraph alpha Storage | Setting the Alpha condition If the Alpha storage size is less than the required storage defined in “alpha_disk_size” in the “./installer/roles/sisense/defaults/main.yml” configuration file, then the condition is set to resize the Alpha storage.
sisense : Dgraph zero Storage | Setting the Zero condition If the Zero storage size is less than the required storage defined in “zero_disk_size” in the “./installer/roles/sisense/defaults/main.yml” configuration file, then the condition is set to resize the Zero storage.
Check that the chart is deployed Check the Helm chart deployed for Sisense:

helm ls -n <namespace>
sisense : PVC | Check if PersistentVolumeClaim is Bound Check if the PVC for Dgraph for both Alpha and Zero has status “Bound”:

kubectl get pvc -n <namespace><pvc name>
sisense : Dgraph alpha Storage | Storage input validation This message will appear if the Dgraph storage for Alpha storage does not meet configured requirements. Dgraph alpha Storage size cannot be lowered than the actual size. Actual size is <Capacity for Alpha PVC>, the requested size is <alpha_disk_size>
sisense : Dgraph alpha | Expansion of the PVC size Expand PVC to the configured required size:

kubectl -n <namespace> patch pvc <Alpha PVC name> -p '{"spec":{"resources":{"requests":{"storage":"<alpha_disk_size>Gi" }}}}

where alpha_disk_size = 3Gi by default, as configured in “..\kubespray\roles\sisense\defaults\main.yml”.
sisense : Dgraph alpha | Check the PVC status Check and retain the status for PVC:

kubectl -n <namespace> describe pvc datadir-<namespace>-dgraph-alpha-0
sisense : Dgraph alpha Storage | Check for resize limitation If the status returned from PVC has the value of “maximum modification” then display this message. You've reached the maximum modification rate per volume limit. Wait at least 6 hours between modifications per EBS volume
sisense : Dgraph alpha Storage| Restarting (rollout) Restart Dgraph Alpha:

kubectl rollout restart -n <namespace> sts <namespace>-dgraph-alpha
sisense : Dgraph alpha Storage| Delete the statefulset (keep the pods running) Delete the Alpha statefulset:

kubectl delete sts -n <namespace> -l component=alpha --cascade=false
sisense : Dgraph zero Storage | Storage input validation If the Dgraph Zero PVC storage size is less than the required storage defined in zero_disk_size then this message will appear. Dgraph zero Storage size cannot be lowered than the actual size. Actual size is <Capacity of Zero PVC> the requested size is {<zero_disk_size>
sisense : Dgraph zero | Expansion of PVC size Expand the size of the Dgraph Zero PVC per the configuration:

kubectl -n <namespace> patch pvc <PVC Name for Zero storage> -p '{"spec":{"resources":{"requests":{"storage":"<zero_disk_size>Gi" }}}}

where zero_disk_size = 3Gi by default, as configured in “..\kubespray\roles\sisense\defaults\main.yml”.
sisense : Dgraph zero | Check the PVC status Check and retain status for PVC:

kubectl -n <namespace> describe pvc datadir-<namespace>-dgraph-zero-0
sisense : Dgraph zero Storage | Check for resize limitation If the status returned from PVC has the value of “maximum modification” then display this message. You've reached the maximum modification rate per volume limit. Wait at least 6 hours between modifications per EBS volume.
sisense : Dgraph zero Storage| Restarting (rollout) Restart Dgraph Zero:

kubectl rollout restart -n <namespace> sts <namespace>-dgraph-zero
sisense : Dgraph zero Storage| Delete the statefulset (keep the pods running) Delete the Zero statefulset:

kubectl delete sts -n <namespace> -l component=zero --cascade=false
Install Nginx when:
  • SSL is activated, or SSL is off and the gateway_port is set to 80
  • No load balancer is selected
  • Not installed on OpenShift
sisense : Nginx-Ingress | Install Nginx Ingress Chart Check if the Nginx chart is installed (“nginx-ingress”):

helm ls -A
sisense : Nginx-Ingress | Delete the old release Uninstall the old Nginx from the current namespace:

helm del -n <current_namespace> nginx-ingress
sisense : Nginx-Ingress | Install the Nginx Ingress Chart Install the Helm chart for Nginx. The extra values path is defined in “./installer/roles/sisense/defaults/main.yml”.

helm upgrade nginx-ingress \
    <sisense opt directory>/config/umbrella-chart/ingress-nginx-3.27.0.tgz \
    --values <sisense opt directory>/config/umbrella-chart/nginx-values.yaml \
    --values <nginx_ingress_extra_values_path> \
    --namespace <utils_namespace> \
    --install \
    --create-namespace \
    --cleanup-on-fail
sisense : Nginx-Ingress | Delete pending upgrade secrets Delete any pending upgrade secrets:
kubectl delete secret --namespace <utils_namespace> --ignore-not-found -l status=pending-upgrade,name=nginx-ingress"
sisense : Nginx-Ingress | Install the Nginx Ingress Chart Install the Helm chart for Nginx. The extra values path is defined in “./installer/roles/sisense/defaults/main.yml”.

helm upgrade nginx-ingress \
    <sisense opt directory>/config/umbrella-chart/ingress-nginx-3.27.0.tgz \
    --values <sisense opt directory>/config/umbrella-chart/nginx-values.yaml \
    --values <nginx_ingress_extra_values_path> \
    --namespace <utils_namespace> \
    --install \
    --create-namespace \
    --cleanup-on-fail
sisense : Nginx-Ingress | Wait for the Nginx controller to run Wait for Nginx to come up (status = “Running”):

kubectl get po -n <utils_namespace> -l app.kubernetes.io/name=ingress-nginx

Wait for 30 retries with a 5 second delay between each retry.
Install Sisense Helm Chart - Install Chart
sisense : HELM | Get the Sisense release history Check which release is deployed for Sisense:

helm history -n <namespace> <namespace>
sisense : Set the isUpgrade flag Determine whether upgrade is activated in the installation configuration YAML file.
sisense: Template maintenance service Create a configuration file based on the template “maintenance-service.yaml.j2” as follows:

<sisense opt directory>/config/umbrella-chart/maintenance-service.yaml
sisense: Maintenance | get api-gateway replicacount If the installation is an upgrade, check if the api-gateway is up and ready:

kubectl -n <namespace> get deployment api-gateway
sisense: Maintenance | scale down the api-gateway for maintenance service Scale down the api-gateway deployment to take down access to Sisense in preparation for the installation:

kubectl -n <namespace> scale deployment api-gateway --replicas=0
sisense: Apply maintenance service on upgrade If the installation is an update, activate the maintenance service, which is responsible for alerting users that the system is being upgraded:

kubectl apply -f <sisense opt directory>/config/umbrella-chart/maintenance-service.yaml
sisense: Wait for the maintenance service to be ready Wait for the maintenance service to come up if the installation is an update:

kubectl wait --namespace=<namespace> --for=condition=Ready pods --selector=maintenance=true --timeout=1s
Retry 20 times with a delay of 5 seconds between each try.
sisense : HELM | Template Values Create the configuration file.

<sisense opt directory>/config/umbrella-chart/<namespace>-values.yaml
Helm3 | Remove immutable resources on update For an update installation:

kubectl delete all -n <namespace_name> -l app.kubernetes.io/name=migration --force --grace-period=0 --ignore-not-found
sisense : HELM | Install/Upgrade Sisense Chart Run the Helm chart package to install the Sisense version. The extra values path is set in “./installer/roles/sisense/defaults/main.yml”.

helm upgrade <namespace> \
    <sisense opt directory>/config/umbrella-chart/sisense-<sisense_version>.tgz \
    --values <sisense opt directory>/config/umbrella-chart/<namespace>-values.yaml \
    --values <sisense opt directory>/config/umbrella-chart/<namespace>-pod-resource-limits-<deployment_size>-deployment-size.yaml \
    --values <sisense_extra_values_path> \
    --namespace <namespace> \
    --install \
    --create-namespace \
    --cleanup-on-fail
TASK [sisense : HELM | Install/Upgrade Sisense Chart]changed: [<node helm chart installed on>] Error: failed pre-install: timed out waiting for the condition"], "stdout": "Release \"sisense\" does not exist. Installing it now.", "stdout_lines": ["Release \"sisense\" does not exist. Installing it now."]Error: failed to create resource: Internal error occurred: failed calling webhook \"vingress.elbv2.k8s.aws\": Post \"https://aws-load-balancer-webhook-service.default.svc:443/validate-networking-v1beta1-ingress?timeout=10s\": no endpoints available for service \"aws-load-balancer-webhook-service\"", "stderr_lines": ["W0426 13:29:28.843927 ; 12118 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress", "W0426 13:29:35.767347 ; 12118 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress", "W0426 13:29:35.772969 ; 12118 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress", "Error: failed to create resource: Internal error occurred: failed calling webhook \"vingress.elbv2.k8s.aws\": Post \"https://aws-load-balancer-webhook-service.default.svc:443/validate-networking-v1beta1-ingress?timeout=10s\": no endpoints available for service \"aws-load-balancer-webhook-service\""], "stdout": "Release \"sisense\" does not exist. Installing it now.", "stdout_lines": ["Release \"sisense\" does not exist. Installing it now." The issue in this case is a misconfiguration that is supplied to the Helm chart in the “<namespace>-values.yaml” file that is used as an input for running the Helm chart (e.g., the default location “../installer/roles/sisense/templates”).
Review the content and make corrections, especially if the Helm chart was installed directly (i.e., not via the Sisense installer, which validates the configuration).
Note that the values.yaml file is populated with values based on the installation configuration file; therefore, double check that the values are correctly set.

For example:
  1. The node(s) name configured in the file might be invalid (or does not exist) or is not accessible. You can use kubectl get nodes to ensure that the node names exist and are correctly configured (see the validation sketch after this list).
  2. For a cloud load balancer, there is a wrong configuration and nodes are not reachable based on the configuration.
  3. If EKS ALB is used, then cloud_config is turned on by mistake and should be turned off (i.e. cloud_load_balancer is set to true and needs to be set to false).
If EKS load balancer is involved in the installation, the Helm will fail if the ALB load balancer is not configured properly and/or the values of the DNS entries are not configured correctly in the install config file.
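A quick way to validate the inputs before rerunning the chart is to list the nodes and render the chart locally without applying it; a sketch, not part of the installer itself, using the same paths referenced above:

kubectl get nodes -o wide
helm template <namespace> <sisense opt directory>/config/umbrella-chart/sisense-<sisense_version>.tgz \
    --values <sisense opt directory>/config/umbrella-chart/<namespace>-values.yaml

Rendering the templates locally surfaces malformed values without touching the cluster.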
sisense : Scale down the maintenance service on upgrade failure If the installation fails when the installation is an update, bring down the api-gateway-maintenance deployment:

kubectl -n <namespace> scale deployment api-gateway-maintenance --replicas=0
sisense : Scale up the api-gateway on upgrade failure If the installation fails when the installation is an update, set back the api-gateway deployment:

kubectl -n <namespace> scale deployment api-gateway --replicas=<number of replicas set previously>
sisense : Fail on Error Display failure if the installation fails. Sisense installation has failed.
Install ALB Controller if defined for an ‘aws’ installation
sisense : ALB Controller | Get service details Get the load balance service details:

kubectl get svc aws-load-balancer-webhook-service \
    -n kube-system
sisense : ALB Controller | Copy the AWS ALB Controller Helm chart Copy the package from: "./installer/roles/sisense/setup_alb_controller/files/aws-load-balancer-controller-1.2.6.tgz" to "<sisense opt directory>/config/umbrella-chart/"
sisense : ALB Controller | Template files Create configuration file from the “alb-controller-values.yaml.j2” template file:

<sisense opt directory>/config/umbrella-chart/alb-controller-values.yaml
sisense : ALB Controller | Check whether the release is installed Check if the release is installed already with Helm chart name "aws-load-balancer-controller”:

helm ls -A
sisense : ALB Controller | Delete the old release Delete the current installation:

helm del -n <current_namespace> aws-load-balancer-controller
sisense : ALB Controller | Install/Upgrade the AWS Load Balancer Controller Install the ALB controller Helm chart. The extra values path is set in “./installer/roles/sisense/defaults/main.yml”.

helm upgrade --install --force aws-load-balancer-controller \
    <sisense opt directory>/config/umbrella-chart/aws-load-balancer-controller-1.2.6.tgz \
    --values <sisense opt directory>/config/umbrella-chart/alb-controller-values.yaml \
    --values <alb_controller_extra_values_path> \
    --namespace <utils_namespace> \
    --install \
    --cleanup-on-fail
sisense : ALB Controller | Delete pending upgrades secrets For a failure, delete pending upgrade secrets:

kubectl delete secret --namespace <utils_namespace>
and rerun the Helm chart installation.
sisense : LoadBalancer | Wait for AWS ALB address allocatation If alb_controller.enabled is true and cloud_load_balancer is true, get the address allocation for the ALB load balancer:

kubectl get ing sisense-ingress -n <namespace>
This will be retried 150 times with a 5 second delay between retries. Any failure here is not critical and installation will continue.
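To check the allocated address manually while the installer is retrying, the ingress status can be queried directly; a sketch using the ingress name shown above (for an ALB the address is normally reported as a hostname):

kubectl get ing sisense-ingress -n <namespace> -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'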
sisense : LoadBalancer | Set the AWS ALB address Retain the address value for the ALB load balancer.
sisense : LoadBalancer | Wait for NGINX LoadBalancer address allocation Wait for the address allocation:

kubectl get svc nginx-ingress-ingress-nginx-controller -n <utils_namespace>
sisense : LoadBalancer | Wait for LoadBalancer address allocation If gateway_port = 80 and no SSL installation is set, get the address allocation for the gateway:

kubectl get svc api-gateway-external -n <namespace>
sisense : LoadBalancer | Template Values Create the configuration file based on the “values.yaml.j2” template.

<sisense opt directory>/config/umbrella-chart/<namespace>-values.yaml"
sisense : LoadBalancer | Wait for databases migrations to finish Wait for migration jobs to complete:

kubectl get job/migration --namespace=<namespace>
Wait for 600 retries with a 5 second delay between retries until the condition is “Complete”.
sisense : LoadBalancer | Upgrade Sisense Chart Run the Helm chart package to install the Sisense version for the load balancer address allocated. The extra values path is set in “./installer/roles/sisense/defaults/main.yml”.

helm upgrade <namespace> \
    <sisense opt directory>/config/umbrella-chart/sisense-<sisense_version>.tgz \
    --values <sisense opt directory>/config/umbrella-chart/<namespace>-values.yaml \
    --values <sisense opt directory>/config/umbrella-chart/<namespace>-pod-resource-limits-<deployment_size>-deployment-size.yaml \
    --values <sisense_extra_values_path> \
    --namespace <namespace> \
    --install \
    --create-namespace \
    --cleanup-on-fail
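
To confirm that the release was recorded after this step, a quick sanity check (in this installer the release name matches the namespace):

# Show the deployed release and its revision history
helm ls -n <namespace>
helm history <namespace> -n <namespace>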
Install Descheduler for a multi-node cluster installation
sisense : Descheduler | Gather Sisense Namespaces Find the sisense installation, with any name matching “sisense-”:
helm ls -A
sisense : Descheduler | Template descheduler values Create a config file based on the "descheduler-values.yaml.j2” template:

<sisense opt directory>/config/umbrella-chart/descheduler-values.yaml
sisense : Descheduler | Check whether the release is installed Find any Helm deployed with the name “descheduler”:

helm ls -A
sisense : Descheduler | Delete the old release Delete the current descheduler version:

helm del -n <current namespace> descheduler
sisense : Descheduler | Install/Upgrade Kubernetes Descheduler Run the Descheduler Helm chart. The extra Values path is set in “./installer/roles/sisense/defaults/main.yml”.

helm upgrade descheduler \
    <sisense opt directory>/config/umbrella-chart/descheduler-0.20.0.tgz \
    --values <sisense opt directory>/config/umbrella-chart/descheduler-values.yaml \
    --values <descheduler_extra_values_path> \
    --namespace <utils_namespace> \
    --install \
    --create-namespace \
    --cleanup-on-fail
sisense : Descheduler | Delete pending upgrade secrets If the installation fails, delete the pending upgrade secret and rerun the Helm chart:

kubectl delete secret --namespace <utils_namespace>
Install Auto-Scaler if cloud_auto_scaler is true, and the installation is on kubernetes_cloud_provider = ‘aws’
sisense : Cluster-AutoScaler | Check whether the release is installed Find any Helm chart deployed with name “cluster-autoscaler”:

helm ls -A
sisense : Cluster-AutoScaler | Delete the old release Delete the old version of autoscaler:

helm del -n <current namespace> cluster-autoscaler
sisense : Cluster-AutoScaler | Install/Upgrade Kubernetes Cluster AutoScaler Run the Helm chart to install autoscaler. The extra Values path is set in “./installer/roles/sisense/defaults/main.yml”

helm upgrade cluster-autoscaler \
    <sisense opt directory>/config/umbrella-chart/cluster-autoscaler-9.9.2.tgz \
    --values <sisense opt directory>/config/umbrella-chart/cluster-autoscaler-values.yaml \
    --values <cluster_autoscaler_extra_values_path> \
    --namespace <utils_namespace> \
    --install \
    --create-namespace \
    --cleanup-on-fail
sisense : Cluster-AutoScaler | Delete pending upgrade secrets If the installation fails, delete pending upgrade secrets and rerun the Helm chart:

kubectl delete secret --namespace <utils_namespace>
Restore MongoDB
sisense : MongoDB Upgrade | Wait for the MongoDB cluster If cluster_mode = true, check the status of MongoDB:

kubectl rollout status --namespace <namespace> --watch --timeout=1s statefulset/<namespace>-mongodb
The wait will be retried 320 times with a 5 minute delay between retries.
sisense : MongoDB Upgrade | Wait for MongoDB single If cluster_mode = false, check for the MongoDB deployment to have the condition “available”:

kubectl wait --for=condition=available --namespace <namespace> --timeout=1s deployment/<namespace>-mongodb
The wait will be retried 70 times with a 5 minute delay between retries.
sisense : MongoDB Upgrade | Get the MongoDB pod name If cluster_mode = false, get the pod name for MongoDB:

kubectl get po --namespace <namespace> -l app.kubernetes.io/name=mongodb
sisense : MongoDB Upgrade | Template restore job Create the MongoDB restore job configuration file based on the “mongodb-restore-job.yaml.j2” template:

<sisense opt directory>/config/umbrella-chart/<namespace>-mongodb-restore-job.yaml
sisense : MongoDB Upgrade | Restore MongoDB data Restore MongoDB:

kubectl apply -f <sisense opt directory>/config/umbrella-chart/<namespace>-mongodb-restore-job.yaml
sisense : MongoDB Upgrade | Wait for the restore job to complete Wait for the job to complete:

kubectl get job --namespace <namespace> <restore_job_name>
Wait for 400 retries with a 5 second delay between each retry.
sisense : MongoDB Upgrade | Fail the installation on restore job failure If, after all retries, the status of the job is still NOT “Complete”, the installation fails with the message: “MongoDB restore job failed. For more information run: kubectl logs -n {{ namespace_name }} -l job-name={{ restore_job_name }}”. Review the job to see what caused it to take longer than the allowed time.
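
A sketch of the commands typically used to investigate a failed or slow restore job:

# Job-level status and events (backoff limit, deadline exceeded, image pull errors)
kubectl describe job <restore_job_name> -n <namespace>

# Logs from the restore pod(s), as suggested by the failure message
kubectl logs -n <namespace> -l job-name=<restore_job_name>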
sisense : Copy K8S completion Generate the completion file that can be used to launch the Sisense CI command line:

<user_home_dir>/add_completion-ns<namespace>.sh
Install Log Monitoring (external_monitoring = true)
monitoring_stack : copy tgz umbrella helm package logging-monitoring Copy the package from "./installer/roles/monitoring_stack/files/logging-monitoring-<sisense_version>.tgz" to “<sisense opt directory>/config/logging-monitoring/”
monitoring_stack : Copy the Cluster-Metrics chart Copy the package from "./installer/roles/monitoring_stack/files/cluster-metrics-<sisense_version>.tgz" to "<sisense opt directory>/config/logging-monitoring/"
monitoring_stack : Kubernetes | Get the certificate expiration date Check the certificate content to get the expiration date. This will be performed only when all of the following conditions apply:

  • is_kubernetes_cloud = false
  • is_openshift = false
  • offline_installer=false
  • rwx_sc_name or rwo_sc_name not defined
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text
monitoring_stack : Kubernetes expiration date | Setting facts Retain the certificate expiration date returned in the section “Validity Not After”.
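
To read the same expiration date manually, a sketch:

# Print only the "Not After" (expiration) date of the API server certificate
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate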
monitoring_stack : Create a configmap with JSON with gathered facts logging-monitoring Create configuration files based on the templates “cm-gather_facts.yaml.j2”, “logmon-values.yaml.j2”, and “cluster-metrics-values.yaml”:

<sisense opt directory>/config/logging-monitoring/cm-gather_facts.yaml
<sisense opt directory>/config/logging-monitoring/logmon-values.yaml
<sisense opt directory>/config/logging-monitoring/cluster-metrics-values.yaml
monitoring_stack : Deploy cm-gathered_facts.yaml Deploy the gathered facts ConfigMap:

kubectl apply -f <sisense opt directory>/config/logging-monitoring/cm-gathered_facts.yaml
monitoring_stack : Logging | Install chart logging monitoring The extra values path is set in “./installer/roles/monitoring_stack/defaults/main.yml”.

helm upgrade logmon-<namespace> \
    <sisense opt directory>/config/logging-monitoring/logging-monitoring-<sisense_version>.tgz \
    --values <sisense opt directory>/config/logging-monitoring/logmon-values.yaml \
    --values <logging_extra_values_path> \
    --namespace <namespace> \
    --install \
    --create-namespace \
    --cleanup-on-fail
monitoring_stack : Logging | Get all namespaced resources kind (1/3) Apply the Helm adoption fix for the logging chart. Get all monitoring resources:

kubectl get all -n kube-system -l 'k8s-app in (logrotate, fluentd-logzio, fluent-bit, fluentd, sysinfo, metricbeat)'
monitoring_stack : Logging | Label all namespaced resources in the system for Helm (2/3) Label all resources:

kubectl label --overwrite <each resource name returned> \
    -n kube-system \
    app.kubernetes.io/managed-by=Helm

Resources also include:
  • "cm,logrotate-config"
  • "cm,cm-logz-key"
  • "cm,fluentd-config-template"
  • "clusterrole,fluent-bit-read"
  • "clusterrole,fluentd"
  • "pdb,fluentd"
  • "sa,fluentd"
  • "clusterrolebinding,fluentd"
  • "Role,fluentd-kinesis"
  • "Secret,fluentd-kinesis"
  • "RoleBinding,fluentd-kinesis"
  • "cj,cronjob-logrotate"
  • "ServiceMonitor,fluent-bit-metrics"
  • "ServiceMonitor,fluentd-metrics"
Logging | Annotate namespaced resources in the system for Helm (3/3) Annotate resources (a combined sketch of the label and annotate steps follows the resource list below):

kubectl annotate --overwrite <each resource name returned> \
    -n kube-system \
    meta.helm.sh/release-namespace=<namespace> \
    meta.helm.sh/release-name=logmon-<namespace>


Resources also include:
  • "cm,logrotate-config"
  • "cm,cm-logz-key"
  • "cm,fluentd-config-template"
  • "clusterrole,fluent-bit-read"
  • "clusterrole,fluentd"
  • "pdb,fluentd"
  • "sa,fluentd"
  • "clusterrolebinding,fluentd"
  • "Role,fluentd-kinesis"
  • "Secret,fluentd-kinesis"
  • "RoleBinding,fluentd-kinesis"
  • "cj,cronjob-logrotate"
  • "ServiceMonitor,fluent-bit-metrics"
  • "ServiceMonitor,fluentd-metrics"
monitoring_stack : Logging | Install chart logging monitoring Install the Helm chart for logmon:

helm upgrade logmon-<namespace> \
    <sisense opt directory>/config/logging-monitoring/logging-monitoring-<sisense_version>.tgz \
    --values <sisense opt directory>/config/logging-monitoring/logmon-values.yaml \
    --values <logging_extra_values_path> \
    --namespace <namespace> \
    --install \
    --create-namespace \
    --cleanup-on-fail
monitoring_stack : Remove the Metricbeat Role Remove the Metricbeat role from the cluster:

kubectl delete clusterrole metricbeat --ignore-not-found
monitoring_stack : Install Cluster-Metrics When external_monitoring = true and metricbeat_enabled = true, install the Helm chart for the cluster metrics. The extra values path is specified in “./installer/roles/monitoring_stack/defaults/main.yml”.
metricbeat_enabled is configured in the installer extra values file.

helm upgrade cluster-metrics \
    <sisense opt directory>/config/logging-monitoring/cluster-metrics-<sisense_version>.tgz \
    --values <sisense opt directory>/config/logging-monitoring/cluster-metrics-values.yaml \
    --values <cluster_metrics_extra_values_path> \
    --namespace <monitoring_namespace> \
    --install \
    --create-namespace \
    --cleanup-on-fail
File: ../installer/roles/wait_for_sisense/tasks/wait_for_sisense.yml
File: ../installer/roles/wait_for_sisense/tasks/main.yaml

Description: Wait for the migration jobs to complete successfully (they upgrade MongoDB to the release being installed), and wait for all necessary pods to come up.
wait_for_sisense : Wait for databases migrations to finish This task waits until the migration job that upgrades the MongoDB database is complete. On each attempt it waits 1s for a response from Kubernetes that the job has reached the “Complete” condition.

The command is:
kubectl wait --namespace=<namespace> --for=condition=Complete job --selector=app.kubernetes.io/name=migration --timeout=1s

Will return:

job.batch/migration condition met

The task will wait until the condition is met. It will attempt 600 retries with a 5 second delay between each retry, and perform the command every retry. If the condition is not met after the number of retries is reached, the installation will fail.

For example:

kubectl wait --namespace=sisense --for=condition=Complete job --selector=app.kubernetes.io/name=migration --timeout=1s

Will return:

job.batch/migration condition met
Wait for databases migrations get finished .....
TASK [wait_for_sisense : Wait for databases migrations get finished]
changed: [<node running databases>]

This step might fail if the job could not be started, or if it fails while running. The job might also take a long time to run, which might indicate that it is not running successfully.

This will require investigation on the job itself to understand why it is failing.
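
A sketch of commands commonly used for that investigation:

# Overall job status, retries, and related events
kubectl describe job/migration -n <namespace>

# Logs from the migration pod(s), selected with the same label the installer uses
kubectl logs -n <namespace> -l app.kubernetes.io/name=migration

# Namespace events, sorted by time, to spot scheduling or image pull problems
kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp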
wait_for_sisense : Downloading Sisense Docker images... This step waits for the pods that are running the given Sisense apps to come up. Each of the pods is required to have Running status for installation to proceed. The command that will run for each of the apps is:

kubectl get po -n <namespace> --selector=app=<each of the items below>

It will return:

NAME         READY   STATUS    RESTARTS   AGE
<pod name>   1/1     Running   1          1d


The command will be executed repeatedly, checking for the pod status to be “Running”.

The apps list is:
  • "filebrowser"
  • "connectors"
  • "configuration"
  • "translation"
  • "oxygen"
  • "model-logspersistence"
  • "analyticalengine"
  • "usage"
  • "jobs"
  • "storage"
  • "intelligence"
  • "blox"
  • "build"
  • "external-plugins"
  • "management"
The installation will retry the command 150 times with a delay of 5 seconds between tries, checking to see if the status is “Running”. If the retries limit is reached, the installation will stop and fail.

For example:

kubectl get po -n sisense --selector=app=filebrowser

NAME                           READY   STATUS    RESTARTS   AGE
filebrowser-59659b5f69-llw4l   1/1     Running   1          1d
TASK [wait_for_sisense : Downloading Sisense Docker images...] *****************

The following will be written for every pod that has successfully come up:

changed: [<node label running the pod>] => (item=<application name>)

For example:

changed: [node1] => (item=filebrowser)

The task executes the get pod command and parses the status out of the returned string. If there is a problem with the returned value, you will see:

error: error executing jsonpath \"{.items[0].status.phase}\": Error executing template: array index out of bounds: index 0, length 0. Printing more information for debugging the template

The following will appear if time has run out and the pod is still not in a Running status:

failed: [<node label running the pod>] (item=<application name>) => {"ansible_loop_var": "item", "attempts": 150, "changed": true, "cmd": "kubectl get po -n sisense --selector=app=<application name> -o jsonpath={.items[0].status.phase}", "delta": "<process end>", "end": "<time of failure>", "item": "<application name>", "rc": 0, "start": "<process start>", "stderr": "", "stderr_lines": [], "stdout": "Pending", "stdout_lines": ["Pending"]}

If you can see that the process is stuck or working slowly, run a Kubernetes describe command on the given pod to check what the issue is (see the sketch after the list of potential issues below).

Potential issues:
  1. Slow connectivity leading to slow download
  2. Server resource issues not allowing the pod to come up properly
  3. Kubernetes-specific issues
  4. Potentially, another pod is stuck and affecting this pod from coming up
  5. The pod is using something that has been deprecated or is not compatible with the upgrade from the existing to the new version. For example, external-plugins installed currently are no longer supported and therefore the pod is not able to come up.
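
A sketch of the describe command mentioned above, plus related checks that help narrow down which of these issues applies:

# Detailed pod state and events (image pulls, scheduling, resource pressure)
kubectl describe po -n <namespace> --selector=app=<application name>

# Where the pod was scheduled and how many times it restarted
kubectl get po -n <namespace> --selector=app=<application name> -o wide

# If the pod is stuck in Pending, check node capacity and allocations
kubectl describe node <node name>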
wait_for_sisense : Wait for Deployment pods to be ready This will only be run for an “Upgrade” installation. This task will execute the following command and wait for 1s to receive a response that the condition is met.

kubectl wait --namespace=<namespace> --for=condition=Ready pods --selector=app=<each pod item below> --timeout=1s

The command should return:

pod/<pod> condition met

The installation will execute this against each of the following:
  • "filebrowser"
  • "connectors"
  • "configuration"
  • "translation"
  • "oxygen"
  • "model-logspersistence"
  • "analyticalengine"
  • "usage"
  • "jobs"
  • "storage"
  • "intelligence"
  • "blox"
  • "build"
  • "external-plugins"
  • "management"
The installation will retry the command 150 times with a delay of 5 seconds between tries, checking whether the “Ready” condition is met. If the retries limit is reached, the installation will stop and fail.

For example:

kubectl wait --namespace=sisense --for=condition=Ready pods --selector=app=filebrowser --timeout=1s

Will return:

pod/filebrowser-59659b5f69-llw4l condition met
Wait for Deployment pods become ready .........
changed: [<node label running the pod>] => (item=<each application pod>)

For example:

changed: [node1] => (item=filebrowser)
maintenance | scale down maintenance-service This will only run for “Upgrade” installations. The task will run a command to shut down the api-gateway-maintenance deployment in Kubernetes. If you run kubectl -n <namespace> get deployment api-gateway-maintenance you will see the deployment's running status.

The installation will run the following command, which sets the replica count to 0, thus shutting down the deployment:

kubectl -n <namespace> scale deployment api-gateway-maintenance --replicas=0
deployment.apps/api-gateway-maintenance scaled


Note that this action is not validated to be successful.
TASK [wait_for_sisense : Maintenance | scale down maintenance-service] *********
changed: [<node label running the pod>]

An error could occur if kubectl is not able to scale down this service, or if for some reason it does not exist. A workaround is to run the scale-down command manually prior to installation, as shown below.
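
A sketch of that workaround, run before starting the upgrade:

# Confirm whether the maintenance deployment exists in the namespace
kubectl -n <namespace> get deployment api-gateway-maintenance

# If it exists, scale it down manually, then rerun the installation
kubectl -n <namespace> scale deployment api-gateway-maintenance --replicas=0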
wait_for_sisense : Wait for the api-gateway pod to become ready This will only run for “Upgrade” installations. This task will run the following command to check that the API gateway pod is up.

kubectl wait --namespace=<namespace> --for=condition=Ready pods --selector=app=api-gateway --timeout=1s

The command should return:
pod/api-gateway-<pod ID> condition met

The installation will retry the command 60 times with a delay of 5 seconds between tries, checking to see if the condition returned is met. If the retries limit is reached, the installation will stop and fail.

For example:

kubectl wait --namespace=sisense --for=condition=Ready pods --selector=app=api-gateway --timeout=1s

Will return:

pod/api-gateway-c8b8446dc-z26l9 condition met
Wait for api-gateway pod become ready ....
changed: [<node label running the pod>]

For example:

changed: [node1]
wait_for_sisense : Waiting for internal logging system to run This task will only run when internal_monitoring is set to true in the installation YAML config file. This step waits for the “fluentd” and “fluent-bit” pods, both used for monitoring the platform, to come up. The following command will be executed to check the status of each pod:

kubectl get po -n <namespace_name> --selector=component=fluentd

The installation will wait until all of the following are in status “Running”:
  • fluentd
  • fluent-bit
For example, the command returning the below string will be parsed to check for status = “Running”:

NAME         READY   STATUS    RESTARTS   AGE
<Pod name>   1/1     Running   1          1d


It will wait for 150 retries with a 5 second delay between retries, for pods to be ready, meaning that they have the status “Running”. This is NOT a required step in installation and, if it fails, installation will continue.

For example:

kubectl get po -n sisense --selector=component=fluent-bit

Will return:

NAME               READY   STATUS    RESTARTS   AGE
fluent-bit-nx2zm   1/1     Running   1          1d
TASK [wait_for_sisense : Waiting for internal logging system to run] ***********
changed: [<node label running the pod>] => (item=<application pod>)

For example:

changed: [node1] => (item=fluent-bit)

The task executes the get pod command and parses the status out of the returned string. If there is a problem with the returned value, you will see:

error: error executing jsonpath \"{.items[0].status.phase}\": Error executing template: array index out of bounds: index 0, length 0. Printing more information for debugging the template

The task waits in a loop until the status is “Running”. If it runs out of retries, the installation continues:

failed: [<node label running the pod>] (item=<application pod name>) => {"ansible_loop_var": "item", "attempts": 150, "changed": true, "cmd": "kubectl get po -n sisense --selector=component=fluent-bit -o jsonpath={.items[0].status.phase}", "delta": "0:00:00.800941", "end": "<time of failure>", "item": "fluent-bit", "msg": "non-zero return code", "rc": 0, "start": "<process start>", "stderr": "", "stderr_lines": [], "stdout": "Pending", "stdout_lines": ["Pending"]}

If this step fails, it is possible that the get pod command itself is failing and/or returning text in an unexpected format. Run the command manually and check whether what is returned is as expected. If the error is not a failure in executing the step, check the pod itself (fluentd.log and/or fluent-bit.log) for further details on potential application-level issues.
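
A sketch of the pod-level checks suggested above:

# Why is the pod not Running? (events, image pulls, volume mounts)
kubectl describe po -n <namespace> --selector=component=fluent-bit

# Application-level errors from the logging pods themselves
kubectl logs -n <namespace> --selector=component=fluentd --tail=100
kubectl logs -n <namespace> --selector=component=fluent-bit --tail=100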
wait_for_sisense : Waiting for external logging system to run This task will only run if external_monitoring is true in the installation YAML config file, and metricbeat_enabled is set to true in the extra_value config file located in "../installer/extra_values/installer/installer-values.yaml".

kubectl get po -n <namespace of monitoring> --selector=k8s-app=<application pod name>

Will return:

NAME         READY   STATUS    RESTARTS   AGE
<pod name>   1/1     Running   1          1h


Wait until all are “Running”:
  • fluentd
  • metricbeat
It will wait for 150 retries with a 5 second delay between retries for pods to be ready, meaning that their status is “Running”. This is NOT a required step in the installation and if it fails, the installation will continue.

For example:

kubectl get po -n monitoring --selector=k8s-app=metricbeat

Will return:

NAME                       READY   STATUS    RESTARTS   AGE
metricbeat-cluster-5p4n2   1/1     Running   1          1h
TASK [wait_for_sisense : Waiting for external logging system to run] *********
changed: [<node label running the pod>] => (item=<application pod>)

For example:

changed: [node1] => (item=metricbeat)

The task executes the get pod command and parses the status out of the returned string. If there is a problem with the returned value, you will see:

error: error executing jsonpath \"{.items[0].status.phase}\": Error executing template: array index out of bounds: index 0, length 0. Printing more information for debugging the template

The task waits in a loop until the status is “Running”. If it runs out of retries, the installation continues:

failed: [<node label running the pod>] (item=<application pod name>) => {"ansible_loop_var": "item", "attempts": 150, "changed": true, "cmd": "kubectl get po -n <namespace of monitoring> --selector=k8s-app=<application pod name> -o jsonpath={.items[0].status.phase}", "delta": "0:00:00.800941", "end": "<time of failure>", "item": "<application pod name>", "msg": "non-zero return code", "rc": 0, "start": "<process start>", "stderr": "", "stderr_lines": [], "stdout": "Pending", "stdout_lines": ["Pending"]}

Run the command manually and check whether what is returned is as expected, and check the reason that the pod does not come up. If the error is not a failure in executing the step, check the pod's logs.
wait_for_sisense : Waiting for monitoring pods to run This task will only run if internal_monitoring is true in the installation YAML config file and prometheus_enabled is set to true in the extra_value config file located in "../installer/extra_values/installer/installer-values.yaml".

The command:

kubectl get po -n <namespace of monitoring> --selector=app=<application pod name>

Will return:

NAME         READY   STATUS    RESTARTS   AGE
<pod name>   3/3     Running   1          1h


Wait until all pods are in “Running” status:
  • prometheus
  • prometheus-operator-operator
  • prometheus-node-exporter
It will wait for 30 retries with a 5 second delay between retries, for pods to be ready, meaning that they have the status “Running”. This is NOT a required step in installation and, if it fails, the installation will continue.

For example:

kubectl get po -n monitoring --selector=app=prometheus

Will return:

NAME         READY   STATUS    RESTARTS   AGE
<pod name>   3/3     Running   1          1h
TASK [wait_for_sisense : Waiting for monitoring pods to run] *******
changed: [<node label running the pod>] => (item=<application pod>)

For example:

changed: [node1] => (item=prometheus)

The task executes the get pod command and parses the status out of the returned string. If there is a problem with the returned value, you will see:

error: error executing jsonpath \"{.items[0].status.phase}\": Error executing template: array index out of bounds: index 0, length 0. Printing more information for debugging the template

The task waits in a loop until the status is “Running”. If it runs out of retries, the installation continues:

failed: [<node label running the pod>] (item=<application pod name>) => {"ansible_loop_var": "item", "attempts": 30, "changed": true, "cmd": "kubectl get po -n <namespace of monitoring> --selector=app=<application pod name> -o jsonpath={.items[0].status.phase}", "delta": "0:00:00.800941", "end": "<time of failure>", "item": "<application pod name>", "msg": "non-zero return code", "rc": 0, "start": "<process start>", "stderr": "", "stderr_lines": [], "stdout": "Pending", "stdout_lines": ["Pending"]}

Run the command manually and check whether what is returned is as expected, and check the reason that the pod is not coming up. If the error is not a failure in executing the step, check the pod's logs.
wait_for_sisense : Waiting for Grafana pods to run This task will only run if internal_monitoring is true in the installation YAML config file and prometheus_enabled is set to true in the extra_value config file located in "../installer/extra_values/installer/installer-values.yaml".

The command:

kubectl get po -n <namespace of monitoring> --selector=app.kubernetes.io/name=grafana

Will return:

NAME         READY   STATUS    RESTARTS   AGE
<pod name>   2/2     Running   1          1h


It will wait for 30 retries with a 5 second delay between retries, for pods to be ready, meaning that they have status “Running”. This is a required step and therefore failure to be in “Running” status will lead to installation failure.

For example:

kubectl get po -n monitoring --selector=app.kubernetes.io/name=grafana

Will return:

NAME                                             READY   STATUS    RESTARTS   AGE
sisense-prom-operator-grafana-59ff94cfb6-4fsvd   2/2     Running   46         31d
TASK [wait_for_sisense : Waiting for Grafana pods to run]
changed: [<node label running the pod>]

For example:

changed: [node1]

The task executes the get pod command and parses the status out of the returned string. If there is a problem with the returned value, you will see:

error: error executing jsonpath \"{.items[0].status.phase}\": Error executing template: array index out of bounds: index 0, length 0. Printing more information for debugging the template

The task waits in a loop until the status is “Running”. If it runs out of retries, the result is reported as follows:

failed: [<node label running the pod>] (item=<application pod name>) => {"ansible_loop_var": "item", "attempts": 30, "changed": true, "cmd": "kubectl get po -n <namespace of monitoring> --selector=app=<application pod name> -o jsonpath={.items[0].status.phase}", "delta": "0:00:00.800941", "end": "<time of failure>", "item": "<application pod name>", "msg": "non-zero return code", "rc": 0, "start": "<process start>", "stderr": "", "stderr_lines": [], "stdout": "Pending", "stdout_lines": ["Pending"]}

Run the command manually and check whether what is returned is as expected, and check the reason that the pod is not coming up. If the error is not a failure in executing the step, check the pod's logs.
File: installer/playbooks/kubernetes-cloud.yml
Description: Prepare the information that will be printed at the end of installation.
wait_for_sisense : Set the HTTPS base protocol If SSL is configured, set the protocol in the URL to be “https://”. This will appear at the end of the installation summary, showing the URL to run to launch the platform, which, in this case, will have the IP of the node running the application.

wait_for_sisense : Set HTTPS base protocol
ok: [<node label>]
wait_for_sisense : Set the HTTP base protocol If SSL is NOT configured, set the protocol in the URL to be “http://”. This will appear at the end of the installation summary, showing the URL to run to launch the platform, which, in this case, will have the IP of the node running the application.

wait_for_sisense : Set HTTP base protocol
ok: [<node label>]
wait_for_sisense : Set the HTTP base protocol If application_dns is configured in the installation, the URL that will be set is based on the DNS entered. This will appear at the end of the installation summary, showing the URL to run to launch the platform.

wait_for_sisense : Set HTTP base protocol
ok: [<node label>]
wait_for_sisense : Register the IP Determine the IP of the nodes running. This will be performed several times to list the nodes, and the Kubernetes dashboard.

TASK [wait_for_sisense : Print IP]
ok: [<node label>]
wait_for_sisense : Print the IP Assign IPs to the variables that will be printed.

TASK [wait_for_sisense : Print IP]
ok: [<node label>]
wait_for_sisense : Register the Port Determine the port that will be displayed in the access URL for the Sisense application, given that SSL is not turned on. This will be done for the following configuration:
  • is_ssl = false
  • cloud_load_balancer = false
  • application_dns_name not assigned
TASK [wait_for_sisense : Register port]
ok: [<node label>]
wait_for_sisense : Print the Port Assign a port to a variable that will be printed.

TASK [wait_for_sisense : Print Port]
ok: [<node label>]
wait_for_sisense : Print the Endpoint The URL for launching the Sisense app and the URL for launching the Kubernetes Dashboard will be printed.

ok: [node1] => {
    "msg": [
        "Sisense App: <http or https>://<ip or dns name>",
        "Kubernetes dashboard: https://<master node running kubernetes>:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy"
    ]
}
Print Endpoints A more detailed message will be displayed if “expose_nodeports” is set to true:
  • "Sisense App: {{ base_proto }}{{ print_ip }}{{ print_port }} ; ; ; ; ; ;{% if cloud_load_balancer|default(false)|bool -%} ; ; ; ; ; ;Load balancer address: {{ load_balancer_address }} ; ; ; ; ; ;{% endif -%}"
  • "Kubernetes dashboard: https://{{ print_ip }}:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy"
  • "Management Swagger UI at {{ base_proto }}{{ print_ip }}:30082/swagger-ui.html"
  • "Query Swagger UI at {{ base_proto }}{{ print_ip }}:30084/swagger-ui.html
  • "Build Swagger UI at {{ base_proto }}{{ print_ip }}:30086/swagger-ui.html"
  • "App Test UI at {{ base_proto }}{{ print_ip }}{{ print_port }}/app/test"
Print Disk Utilisation A debug level message will be printed to show disk utilization if the configuration was set to run this.
If the installation fails
TASK [Fail on Error] This task will be performed if the installation failed (e.g., certain pods that must come up did not come up in time).

fatal: [node1]: FAILED! => {"changed": false, "msg": "Sisense installation has failed."}
Scale down maintenance service on upgrade failure If the installation was performed as an upgrade and it fails, the api-gateway-maintenance deployment will be scaled down to disable access to the platform:

kubectl -n <namespace> scale deployment api-gateway-maintenance --replicas=0

A success message will be displayed if the installation completed fully and successfully.

For example:

Sisense L2021.5.0.180 has been installed successfully!    sisense    ~/installation/sisense-L2021.5.0.180

The message “Sisense <version number> has been installed successfully!” is printed by the print_endpoints function.