Migrating from Kubespray to RKE
Note:
This topic covers only migration from Kubespray to RKE. It does not apply to managed cloud Kubernetes services such as AKS, EKS, etc.
Migrating from Kubespray to RKE consists of the following phases:
- Pre-upgrade steps
- Uninstalling the cluster
- Cleaning the environment
- Installing Sisense with RKE
- Restoring the application
- Pre-Upgrade Steps
- Create a backup. This backup procedure is mandatory, or you will lose the configuration of your Sisense application. See Backing up and Restoring Sisense.
- From the CLI, execute the CLI activation command:
source add_completion-ns-sisense.sh
- Create the backup. It is stored in the management pod, in the /opt/sisense/storage/system_backups/ directory.
si system backup -include-farm true
- Extract the name of the management pod.
kubectl get po -n sisense -l "app=management" -oname | awk -F '/' '{print $2}'
Identify the backup file name:
Get a list of the backup files on the shared storage using kubectl:
kubectl -n <sisense namespace> exec -it -c management <management pod> -- find /opt/sisense/storage/system_backups/ -type f -name "*.tar.gz"
Find the relevant backup created above by the date and time listed in the file name. For reference, the backup file name pattern is: sisense_assets_collector_<YYYY-MM-DD>_<HH>_<MM>_<SS>.tar.gz
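Because the timestamp in the file name is zero-padded, lexicographic order matches chronological order, so the newest backup can be picked with a plain sort. A minimal sketch (the file names below are made-up examples, not output from your system):

```shell
# Pick the newest backup from a list of backup file names.
# The pattern sisense_assets_collector_<YYYY-MM-DD>_<HH>_<MM>_<SS>.tar.gz
# is zero-padded, so sorting the names sorts them chronologically.
backups="sisense_assets_collector_2023-01-05_09_12_33.tar.gz
sisense_assets_collector_2023-03-17_22_01_05.tar.gz
sisense_assets_collector_2023-03-17_08_45_10.tar.gz"

latest=$(printf '%s\n' "$backups" | sort | tail -n 1)
echo "$latest"   # sisense_assets_collector_2023-03-17_22_01_05.tar.gz
```

On a live system you would feed in the `find` output from the previous step instead of a hard-coded list.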
- Copy the backup file you created from the pod to the host (VM) machine.
kubectl cp <sisense namespace>/<management pod>:/opt/sisense/storage/system_backups/<backup file name> ~/<backup file name>
It is recommended to also store this file somewhere else, in case something happens to the VM.
- Validate that the backup file is not corrupted. Run the following tar command and wait until it produces its output:
tar -tf ~/<backup file name>
The output lists the files contained in the backup archive. If the backup file is valid, no errors are reported at the end of the output. If the backup file is corrupted, the tar command prints one of the following errors at the end of the output:
tar: This does not look like a tar archive: The file does not have the format of a tar archive.
tar: Unexpected EOF in archive: The tar file is incomplete or truncated.
tar: Error is not recoverable: exiting now: A generic error that can occur for a variety of reasons, such as severe file corruption.
tar: Child returned status 1: An error was encountered, often related to corruption or an unrecognized file format.
If the file is corrupted, create a new backup file and verify it again. If the tar command consistently reports that a backup file is corrupted, do not proceed with the migration and contact customer support.
- Ensure that your external plugins support the Sisense version to which you are upgrading.
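The validation step above can also be scripted: tar -tf exits with a non-zero status on a corrupt archive, so the exit code alone is enough to decide whether to proceed. A small self-contained sketch using a throwaway archive (not your real backup):

```shell
# Demonstrate that `tar -tf` distinguishes a valid archive from a
# truncated one by exit code alone.
workdir=$(mktemp -d)
echo "hello" > "$workdir/file.txt"
tar -czf "$workdir/good.tar.gz" -C "$workdir" file.txt

# Truncate a copy to simulate a corrupted/incomplete backup.
head -c 10 "$workdir/good.tar.gz" > "$workdir/bad.tar.gz"

tar -tf "$workdir/good.tar.gz" > /dev/null 2>&1 && good=valid || good=corrupt
tar -tf "$workdir/bad.tar.gz"  > /dev/null 2>&1 && bad=valid  || bad=corrupt
echo "good.tar.gz: $good"
echo "bad.tar.gz: $bad"
rm -rf "$workdir"
```

For the real backup, `tar -tf ~/<backup file name> > /dev/null && echo valid` gives the same yes/no answer without printing the full file list.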
- Uninstalling the cluster
- Save the application's scheduled builds.
kubectl -n sisense get cronjobs.batch -o json > cronjobs.json
- Download the Sisense tar.gz file.
wget [sisense-linux-deployment-link-current-version]
- Extract the tar.gz file into the sisense-version folder:
tar zxf [sisense-linux-deployment-package-name]
- Navigate to the sisense-version directory.
cd sisense-[sisense-version]
- Edit the single_config.yaml file.
vim single_config.yaml
Set the following parameters:
uninstall_cluster: true
k8s_nodes
deployment_size
linux_user
ssh_key
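Taken together, the relevant portion of single_config.yaml for this step looks roughly like the sketch below. The values are placeholders, and the per-line comments are this guide's reading of the parameters; keep the node definitions, deployment size, user, and key from your existing installation.

```yaml
uninstall_cluster: true   # remove the existing Kubespray-based cluster
k8s_nodes: ...            # keep your existing node definitions unchanged
deployment_size: ...      # keep the deployment size of the current installation
linux_user: ...           # the Linux user used to install Sisense
ssh_key: ...              # path to that user's private SSH key
```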
- Uninstall the cluster.
./sisense.sh single_config.yaml -y
- Cleaning the environment
Run this cleanup process on each of your machines.
Free up the space used by unused images. Caution: This command removes all unused images! Be sure you are not doing any development on this server.
sudo docker image prune -a -f
Remove Kubernetes installation leftovers.
sudo reboot
for mount in $(mount | grep tmpfs | grep '/var/lib/kubelet' | awk '{ print $3 }') /var/lib/kubelet /var/lib/rancher; do sudo umount $mount 2>/dev/null; done
sudo rm -rf /etc/ceph \
/etc/cni \
/etc/kubernetes \
/opt/cni \
/opt/rke \
/run/secrets/kubernetes.io \
/run/calico \
/run/flannel \
/var/lib/calico \
/var/lib/etcd \
/var/lib/cni \
/var/lib/kubelet \
/var/lib/rancher/rke/log \
/var/log/containers \
/var/log/kube-audit \
/var/log/pods \
/var/run/calico
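After the cleanup, a quick loop can confirm that nothing was left behind. The helper below simply reports which of the given paths still exist; it is a sketch, demonstrated here on throwaway paths rather than the real directories:

```shell
# Report which of the given paths still exist after cleanup.
report_leftovers() {
  leftovers=""
  for path in "$@"; do
    [ -e "$path" ] && leftovers="$leftovers $path"
  done
  if [ -n "$leftovers" ]; then
    echo "still present:$leftovers"
  else
    echo "clean"
  fi
}

# On a real node you would pass the directories removed above, e.g.:
#   report_leftovers /etc/kubernetes /var/lib/kubelet /var/lib/etcd /opt/cni
report_leftovers /tmp/definitely-missing-a /tmp/definitely-missing-b
```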
- Installing Sisense with RKE
- Download the Sisense tar.gz file.
wget [sisense-linux-deployment-link]
- Extract the tar.gz file into the sisense-version folder:
tar zxf [sisense-linux-deployment-package-name]
- Navigate to the sisense-version directory.
cd sisense-[sisense-version]
- Edit the single_config.yaml file.
vim single_config.yaml
Set the following parameters:
uninstall_cluster: false
uninstall_sisense: false
update: false
remove_user_data: false
k8s_nodes
deployment_size
linux_user
ssh_key
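As a sketch, the relevant portion of single_config.yaml for the installation looks like the fragment below. The values are placeholders and the comments are this guide's reading of the parameters; reuse the node definitions, deployment size, user, and key from the uninstall step.

```yaml
uninstall_cluster: false  # do not uninstall; the cluster is already removed
uninstall_sisense: false  # do not uninstall Sisense
update: false             # this is a fresh installation, not an update
remove_user_data: false   # keep user data so it survives the migration
k8s_nodes: ...            # same node definitions as in the uninstall step
deployment_size: ...      # same deployment size as before
linux_user: ...           # the Linux user used to install Sisense
ssh_key: ...              # path to that user's private SSH key
```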
- Run the installation.
./sisense.sh single_config.yaml -y
- Recreate the scheduled builds that you saved previously.
kubectl create -f cronjobs.json -o yaml | kubectl apply -f -
- Restoring the application
- From the CLI, execute the CLI activation command:
source add_completion-ns-sisense.sh
- Extract the name of the new management pod.
kubectl get po -n sisense -l "app=management" -oname | awk -F '/' '{print $2}'
- Store the backup in the management pod in the /opt/sisense/storage/system_backups/ directory.
kubectl cp <backup path> <sisense namespace>/<management pod>:/opt/sisense/storage/system_backups/<backup file name> -c management
- Restore the application.
si system restore -name <backup file name>