Deployment Script for Sisense on Amazon EKS
This page describes how to configure Amazon AWS for Sisense and includes an example script to deploy Sisense on Amazon EKS. The example can be downloaded and customized for your specific needs.
The example script below installs the AWS CLI (awscli), labels your worker nodes, sets up an EKS cluster with the EKS command line tool (eksctl), and sets up FSx with the associated IAM role.
After you have configured Amazon AWS, you can then deploy Sisense as described in Deploying Sisense on Amazon EKS.
Prerequisites
- Amazon Linux 2 OS
- The bastion host should be an "Amazon Linux" instance. The default Linux user is ec2-user.
- The bastion's AWS profile must be configured as the default profile, and its region must be the same as the provisioned region. You can verify this with:
- cat ~/.aws/credentials
- cat ~/.aws/config
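For example, cat ~/.aws/config should show the default profile pointing at the provisioned region (the values below are illustrative):
[default]
region = us-east-1
output = json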
Auto Scaling
Sisense supports auto-scaling for your EKS nodes using AWS EKS auto-scaling capabilities. You can configure when to add or remove nodes in the following section of the AWS script:
eksctl create nodegroup \
In this section, you specify the node type, the number of nodes, the minimum number of nodes, and so on. You must also define the labels that new nodes receive when the autoscaler creates them. This is required because it enables Sisense to determine what type of node should be created: build, query, or application. To define the labels, set the value of --node-labels to the labels for the new nodes, for example:
--node-labels "node-sisense-Application=true,node-sisense-Query=true" \
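After the node groups are created, you can confirm that the labels are present on the nodes (a quick sanity check; node names and label sets will vary by cluster):
kubectl get nodes --show-labels | grep node-sisense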
To configure Amazon AWS for Sisense:
- In Linux, download the sample Sisense AWS Configuration script prepared by Sisense.
curl -O https://data.sisense.com/linux/scripts/sisense_full_eks_fsx_v2.sh
- Edit the script to match your use case. Below you can see a copy of the full script. After you have configured Amazon AWS, you can then deploy Sisense as described in Deploying Sisense on Amazon EKS.
Note:
Sisense labels are based on the default namespace, sisense. For a different namespace name, change the labels as shown in the following example:
node-NAMESPACE_NAME-Application / node-NAMESPACE_NAME-Query / node-NAMESPACE_NAME-Build
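For example, if you deploy Sisense into a hypothetical namespace named analytics, the label argument in the script becomes:
--node-labels "node-analytics-Application=true,node-analytics-Query=true" \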
- Run the script using the following command:
./sisense_full_eks_fsx_v2.sh <cluster-name> [eks-version]
where cluster-name is a required parameter and eks-version is an optional parameter whose default value is 1.23.
Example:
./sisense_full_eks_fsx_v2.sh my-cluster
./sisense_full_eks_fsx_v2.sh my-cluster 1.23

sisense_full_eks_fsx_v2.sh:
#!/usr/bin/env bash
CLUSTER=$1
EKS_VERSION=$2
usage() {
echo
echo -e "Usage:"
echo -e "\t $0 <cluster-name> [eks-version]"
echo
echo -e "Parameters:"
echo -e "\t cluster-name - requiered"
echo -e "\t eks-version - optional (default=1.23)"
echo
echo -e "Examples:"
echo -e "\t $0 my-cluster"
echo -e "\t $0 my-cluster 1.23"
}
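# Validate arguments: the cluster name is required and must not itself look like
# a version number; the EKS version, if supplied, must be numeric (e.g. 1.23).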
re='^[0-9]+([.][0-9]+)?$'
if [[ -z $CLUSTER || $CLUSTER =~ $re ]]; then
echo "[ERROR] EKS cluster name $CLUSTER not provided correctly!"
echo "[ERROR] Must provide eks cluster name!"
usage
exit 1
fi
if [[ -z $EKS_VERSION ]]; then
echo "[INFO] EKS_VERSION was not supplied."
EKS_VERSION=1.23
fi
if [[ ! $EKS_VERSION =~ $re ]]; then
echo "[ERROR] EKS version $EKS_VERSION is not valid."
usage
exit 1
fi
echo "[INFO] Using EKS version $EKS_VERSION"
echo "[INFO] EKS clustner name is $CLUSTER"
DIR=$(dirname $0)
# FSxType -- SCRATCH_1, SCRATCH_2 , PERSISTENT_1
FSxType=PERSISTENT_1
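# PERSISTENT_1 requires PerUnitStorageThroughput (set in the create-file-system call below); SCRATCH types do not use it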
## Installing pip
if ! hash python pip unzip jq yq &> /dev/null; then
if [ -f /usr/bin/yum ]; then sudo yum -y -q install python-pip unzip jq && sudo pip install yq; fi
if [ -f /usr/bin/apt ]; then sudo apt update && sudo apt install --yes python-pip unzip jq && sudo pip install yq; fi
fi
## Installing awscli
if ! command -v aws &> /dev/null; then
curl --silent --location "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip -q awscliv2.zip
sudo ./aws/install --update
rm -fr awscliv2* aws*/
fi
## Installing eksctl
if ! command -v eksctl &> /dev/null; then
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
source <(eksctl completion bash) 2>/dev/null
fi
## Installing aws-iam-authenticator
if ! command -v aws-iam-authenticator &> /dev/null; then
curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.23.7/2022-06-29/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
mkdir -p $HOME/.bin && mv ./aws-iam-authenticator $HOME/.bin/aws-iam-authenticator && export PATH=$HOME/.bin:$PATH
echo 'export PATH=$HOME/.bin:$PATH' >> ~/.bashrc
fi
## Installing kubectl
if ! command -v kubectl &> /dev/null; then
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.23.7/2022-06-29/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
fi
## aws configure
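# Prompts interactively for the AWS access key, secret key, default region, and output format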
aws configure
AWS_REGION=$(aws configure get region)
## Provisioning SSH KeyPair
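# Delete any stale key pair with the same name, then save a fresh private key locally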
rm -fr ./${CLUSTER}-KeyPair.pem
aws ec2 delete-key-pair --key-name "${CLUSTER}-KeyPair"
aws ec2 create-key-pair --key-name "${CLUSTER}-KeyPair" --query 'KeyMaterial' --output text > ./${CLUSTER}-KeyPair.pem
## Provisioning EKS
eksctl create cluster \
--name "${CLUSTER}-EKS" \
--version ${EKS_VERSION} \
--zones=${AWS_REGION}a,${AWS_REGION}b,${AWS_REGION}c \
--without-nodegroup
eksctl create nodegroup \
--name "${CLUSTER}-workers-APP-QRY1" \
--cluster "${CLUSTER}-EKS" \
--asg-access \
--managed \
--node-labels "node-sisense-Application=true,node-sisense-Query=true" \
--node-type m5a.2xlarge \
--nodes 1 \
--nodes-min 1 \
--nodes-max 3 \
--node-volume-size 150 \
--ssh-access \
--node-private-networking \
--node-zones=${AWS_REGION}a \
--ssh-public-key "${CLUSTER}-KeyPair"
eksctl create nodegroup \
--name "${CLUSTER}-workers-APP-QRY2" \
--cluster "${CLUSTER}-EKS" \
--asg-access \
--managed \
--node-labels "node-sisense-Application=true,node-sisense-Query=true" \
--node-type m5a.2xlarge \
--nodes 1 \
--nodes-min 1 \
--nodes-max 3 \
--node-volume-size 150 \
--ssh-access \
--node-private-networking \
--node-zones=${AWS_REGION}b \
--ssh-public-key "${CLUSTER}-KeyPair"
eksctl create nodegroup \
--name "${CLUSTER}-workers-BLD" \
--cluster "${CLUSTER}-EKS" \
--asg-access \
--managed \
--node-labels "node-sisense-Build=true" \
--node-type m5a.2xlarge \
--nodes 1 \
--nodes-min 1 \
--nodes-max 2 \
--node-volume-size 150 \
--ssh-access \
--node-private-networking \
--node-zones=${AWS_REGION}c \
--ssh-public-key "${CLUSTER}-KeyPair"
## Getting SG,SUBNET
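# Read the cluster's security group IDs, VPC ID, and a subnet ID; the subnet and cluster SG are reused for FSx below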
SG=$(aws eks describe-cluster --name "${CLUSTER}-EKS" --query "cluster.resourcesVpcConfig.securityGroupIds[0]" | sed 's/\"//g')
CSG=$(aws eks describe-cluster --name "${CLUSTER}-EKS" --query "cluster.resourcesVpcConfig.clusterSecurityGroupId" | sed 's/\"//g')
VPC=$(aws eks describe-cluster --name "${CLUSTER}-EKS" --query "cluster.resourcesVpcConfig.vpcId" | sed 's/\"//g')
SUBNET=$(aws eks describe-cluster --name "${CLUSTER}-EKS" --query "cluster.resourcesVpcConfig.subnetIds[1]"| sed 's/\"//g')
## Configuring SG
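# FSx for Lustre traffic uses TCP port 988; allow it on both security groups for the relevant VPC CIDR ranges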
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 988 --cidr 172.31.0.0/16
aws ec2 authorize-security-group-ingress --group-id $CSG --protocol tcp --port 988 --cidr 172.31.0.0/16
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 988 --cidr 192.168.0.0/16
aws ec2 authorize-security-group-ingress --group-id $CSG --protocol tcp --port 988 --cidr 192.168.0.0/16
## Create FSx
aws fsx create-file-system \
--client-request-token "$CLUSTER" \
--file-system-type LUSTRE \
--storage-capacity 1200 \
--tags Key="Name",Value="Lustre-${CLUSTER}" \
--lustre-configuration "DeploymentType=${FSxType},PerUnitStorageThroughput=200" \
--subnet-ids "$SUBNET" \
--security-group-ids $CSG
## Getting FSx
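# Look up the DNS name and Lustre mount name of the file system tagged Lustre-<cluster>; both are needed to mount FSx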
FSX_DNS_NAME=$(aws fsx describe-file-systems --query 'FileSystems[*].{DNSName:DNSName,Tags:Tags[0].Value==`'Lustre-${CLUSTER}'`}' --output text | grep True| awk '{print $1}')
FSX_MOUNT_NAME=$(aws fsx describe-file-systems --query 'FileSystems[*].{MountName:LustreConfiguration.MountName,Tags:Tags[0].Value==`'Lustre-${CLUSTER}'`}' --output text | grep True| awk '{print $1}')
## Gathering EKS kubeconfig
aws eks update-kubeconfig --region "${AWS_REGION}" --name "${CLUSTER}-EKS"
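# From EKS 1.23, EBS volumes are provisioned via the EBS CSI driver, so set up its service account using Sisense's helper script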
if (( $(echo "${EKS_VERSION} >= 1.23" | bc -l) )); then
curl --create-dirs --output ${DIR}/ebs-csi-driver/create_ebs_driver_sa.sh https://data.sisense.com/linux/scripts/ebs-csi-driver/create_ebs_driver_sa.sh
chmod 755 ${DIR}/ebs-csi-driver/create_ebs_driver_sa.sh
${DIR}/ebs-csi-driver/create_ebs_driver_sa.sh ${CLUSTER}
fi
## Output
echo -e "ssh_key path is: ~/${CLUSTER}-KeyPair.pem"
echo -e "kubernetes_cluster_name: ${CLUSTER}-EKS"
echo -e "kubernetes_cluster_location: ${AWS_REGION}"
echo -e "kubernetes_cloud_provider: aws"
echo -e "fsx_dns_name is: ${FSX_DNS_NAME}"
echo -e "fsx_mount_name is: ${FSX_MOUNT_NAME}"