Spotfire Cloud Deployment Kit with Azure Kubernetes Service Quick Start Guide

Article ID: KB0071128

Products: Spotfire Server
Versions: 12.4.0

Description

This guidance is for those who want to quickly spin up a new Spotfire Server in an Azure Kubernetes Service (AKS) cluster to test issues when using the Spotfire Cloud Deployment Kit (CDK). The example in this article uses an AlmaLinux 9 machine (mymachine.company.com), but the commands shown below are also expected to work with other yum/dnf-compatible Linux distributions such as RHEL, Fedora, and CentOS.

Issue/Introduction

Details the steps needed to deploy a new Spotfire Server in an Azure Kubernetes Service (AKS) cluster using the Spotfire Cloud Deployment Kit (CDK).

Environment

Kubernetes 1.23+

Resolution

Some actions, detailed in the steps below, may need to be accomplished by your Azure administrator. If you are not the Owner of the Azure Resource Group, you should have your Azure administrator review the steps below and assist as needed.

This procedure assumes that the person tasked with installing Spotfire Server (herein referred to as "the SSO user" or "you") has SSO access to a shared Azure team account. The SSO user has the following Azure roles assigned at the Azure Resource Group scope:

A role that allows the SSO user to create new role assignments for the Spotfire Server Application Service Principal (as shown in step 4). Assign one of the following built-in Azure roles to enable the SSO user to create new role assignments:
  • Role Based Access Control Administrator
  • User Access Administrator
  • Owner
A role that allows the SSO user to create an Azure Container Registry (ACR), and an Azure Kubernetes Service (AKS) cluster. Assign one of the following built-in Azure roles to enable the SSO user to create these resources:
  • User Access Administrator
  • Owner
A role that allows the Spotfire Server Application Service Principal to install Spotfire Server. Assign all of the following built-in Azure roles to enable the Spotfire Server Application Service Principal to complete the installation:
  • AcrPull
  • AcrPush
  • Contributor

Installation Steps:

1. Install Azure CLI, Docker, Kubectl, Git and Helm.

Azure CLI:
$ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
$ sudo dnf install -y https://packages.microsoft.com/config/rhel/9.0/packages-microsoft-prod.rpm
$ sudo dnf install azure-cli

Docker:
$ sudo dnf -y upgrade
$ sudo dnf -y install yum-utils
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ sudo dnf -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin --allowerasing
$ sudo systemctl enable docker
$ sudo systemctl start docker
$ sudo usermod -aG docker $USER
$ newgrp docker
$ exit
$ exit

We exit twice here: the first 'exit' leaves the subshell started by 'newgrp', and the second logs out of the session. After logging back in, the new group membership takes effect and docker commands no longer need to be prefixed with 'sudo'.

Kubectl:
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
$ echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
kubectl: OK
$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
$ kubectl version --client

Git and Helm:
$ sudo yum -y install git
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh
$ helm --help

2. Choose a name for your new Spotfire Server application, and other values that will be needed to complete the installation. Assign environment variables in your command prompt for each value.
$ export AZ_APP_DISPLAY_NAME=SpotfireAzApp
$ export AZ_GROUP_NAME=CompanyResourceGroup

In addition to the above values, your Spotfire Server application will need access to an AKS cluster and ACR. Check with the Owner of your Azure Resource Group to see if these resources are already created and available. If they are not yet available, you will need to create them using the commands shown in steps 5 through 7. Again, set environment variables for these (and related) values:
$ export AZ_ACR_NAME=sfazacr
$ export AKS_CLUSTER_NAME=azapps

3. Use the Azure command-line interface (az cli) to create the Azure application, the service principal, and a secret:
$ rm -rf az-login-info.json
$ az logout
$ az login -u "sso-user@company.com" > az-login-info.json
$ export AZ_TENANT_ID=$(jq .[].tenantId az-login-info.json | tr -d '"')
$ export AZ_SUBSCRIPTION_ID=$(jq .[].id az-login-info.json | tr -d '"')
$ rm -rf app-role-manifest.json

$ echo -e \
"[{ \
    \"allowedMemberTypes\": [ \
        \"User\" \
    ], \
    \"description\": \"Admin App Role for Spotfire Server.\", \
    \"displayName\": \"SpotfireAdmin\", \
    \"isEnabled\": \"true\", \
    \"value\": \"admin\" \
}]" | jq . > app-role-manifest.json

$ rm -rf az-app-manifest.json
$ az ad app create \
  --display-name $AZ_APP_DISPLAY_NAME \
  --app-roles "@app-role-manifest.json" \
  --enable-access-token-issuance true \
  --enable-id-token-issuance true > az-app-manifest.json

$ rm -rf az-sp-info.json
$ az ad sp create-for-rbac \
  --name $AZ_APP_DISPLAY_NAME \
  --role "Contributor" \
  --scopes /subscriptions/$AZ_SUBSCRIPTION_ID/resourceGroups/$AZ_GROUP_NAME > az-sp-info.json

$ export AZ_APP_SECRET=$(jq .password az-sp-info.json | tr -d '"')
$ export AZ_SERVICE_PRINCIPAL_ID=$(jq .appId az-sp-info.json | tr -d '"')

4. Now assign the following additional Azure roles to your application service principal (at the Resource Group level):
  • AcrPush
  • AcrPull
  • User Access Administrator

Note: The 'User Access Administrator' role is only required if you need to create the ACR, AKS cluster and node pool. If these resources have already been created, you may exclude 'User Access Administrator', and skip steps 5 through 7.

Also Note: To perform the role assignments in this step, your currently logged-in Azure user (used in step 3) needs (at least) the permissions enabled by the built-in 'Role Based Access Control Administrator' role. Other built-in Azure roles like 'User Access Administrator' or 'Owner' will also enable your Azure user to perform the below role assignments. If your currently logged-in Azure user does not have an assignment to one of these roles, ask your Azure administrator (or whoever is the Owner of the Azure Resource Group) to add these assignments for you.
$ az role assignment create \
  --role "AcrPush" \
  --scope /subscriptions/$AZ_SUBSCRIPTION_ID/resourceGroups/$AZ_GROUP_NAME \
  --assignee $AZ_SERVICE_PRINCIPAL_ID

$ az role assignment create \
  --role "AcrPull" \
  --scope /subscriptions/$AZ_SUBSCRIPTION_ID/resourceGroups/$AZ_GROUP_NAME \
  --assignee $AZ_SERVICE_PRINCIPAL_ID

$ az role assignment create \
  --role "User Access Administrator" \
  --scope /subscriptions/$AZ_SUBSCRIPTION_ID/resourceGroups/$AZ_GROUP_NAME \
  --assignee $AZ_SERVICE_PRINCIPAL_ID

Alternatively, you can create these role assignments in the Azure portal under Home > Resource Groups > azgroup > Access Control (IAM) > Add > Add role assignment. On the 'Add role assignment' screen, select the option to assign access to 'User, group, or service principal'. Then click '+ Select members', and search for your application name (i.e. the value of $AZ_APP_DISPLAY_NAME). Then click 'Review + assign' to complete the assignment.
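To confirm the assignments took effect, you can run a read-only check that lists the role names now held by the service principal at the Resource Group scope (the output should include Contributor, AcrPush, AcrPull, and, if assigned, User Access Administrator):

```shell
# List the roles assigned to the service principal at the Resource Group scope.
az role assignment list \
  --assignee $AZ_SERVICE_PRINCIPAL_ID \
  --scope /subscriptions/$AZ_SUBSCRIPTION_ID/resourceGroups/$AZ_GROUP_NAME \
  --query "[].roleDefinitionName" \
  --output tsv
```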

5. Login using the service principal you created in step 3, and create the ACR:
$ az login \
  --service-principal \
  --username=$AZ_SERVICE_PRINCIPAL_ID \
  --password=$AZ_APP_SECRET \
  --tenant=$AZ_TENANT_ID

$ az acr create \
  --name=$AZ_ACR_NAME \
  --resource-group=$AZ_GROUP_NAME \
  --sku=basic --output=json

6. Get the public IP address of the machine you're working with:
$ export AKS_IP_RANGE=$(curl ifconfig.me)
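The authorized-range flag used in the next step accepts CIDR notation, so a single machine can also be written explicitly as a /32, or widened to admit other clients. A minimal sketch, with a placeholder IP standing in for the curl lookup above:

```shell
# Sketch (placeholder IP): express the authorized range in CIDR notation.
# A /32 covers only this machine; something wider like 203.0.113.0/24
# would also admit teammates on the same network.
MY_IP="203.0.113.7"                  # in practice: MY_IP=$(curl -s ifconfig.me)
export AKS_IP_RANGE="${MY_IP}/32"
echo "$AKS_IP_RANGE"                 # 203.0.113.7/32
```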

7. Create the AKS cluster and attach the container registry you created in step 5:
$ az aks create \
  --name $AKS_CLUSTER_NAME \
  --resource-group $AZ_GROUP_NAME \
  --os-sku AzureLinux \
  --api-server-authorized-ip-ranges $AKS_IP_RANGE \
  --generate-ssh-key

Note: This restricts access to the AKS cluster's API server to your IP address. You can also authorize a range of IP addresses; your Azure administrator can help you define an appropriate range that includes other client machines.
$ az aks update \
  --name $AKS_CLUSTER_NAME \
  --resource-group $AZ_GROUP_NAME \
  --attach-acr $AZ_ACR_NAME
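To verify the attachment, the Azure CLI provides a connectivity check that confirms the cluster's nodes can pull from the registry (this can take a minute to run):

```shell
# Validate that the AKS cluster can authenticate to and pull from the ACR.
az aks check-acr \
  --name $AKS_CLUSTER_NAME \
  --resource-group $AZ_GROUP_NAME \
  --acr $AZ_ACR_NAME.azurecr.io
```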

8. Verify the nodes are ready:
$ az aks get-credentials \
  --overwrite-existing \
  --resource-group=$AZ_GROUP_NAME \
  --name=$AKS_CLUSTER_NAME

$ kubectl get node

NAME                                STATUS   ROLES   AGE    VERSION
aks-nodepool1-10599113-vmss000000   Ready    agent   2m4s   v1.25.6
aks-nodepool1-10599113-vmss000001   Ready    agent   2m1s   v1.25.6
aks-nodepool1-10599113-vmss000002   Ready    agent   2m7s   v1.25.6

9. Install the PostgreSQL server:
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install vanilla-tssdb bitnami/postgresql
$ export POSTGRES_PASSWORD=$(kubectl get secret \
    --namespace default vanilla-tssdb-postgresql \
    -o jsonpath="{.data.postgres-password}" | base64 -d)
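If you want to confirm the database is reachable before installing Spotfire, you can start a throwaway psql client pod, along the lines of what the Bitnami chart's own post-install notes suggest (service name and image follow the chart's conventions):

```shell
# Launch a temporary PostgreSQL client pod and open a psql session against
# the vanilla-tssdb service; the pod is deleted when you exit psql.
kubectl run vanilla-tssdb-postgresql-client --rm -it --restart='Never' \
  --namespace default \
  --image docker.io/bitnami/postgresql \
  --env="PGPASSWORD=$POSTGRES_PASSWORD" \
  --command -- psql --host vanilla-tssdb-postgresql -U postgres -d postgres -p 5432
```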

10. Obtain the latest official release of the Spotfire CDK (which is version 1.4.0 at the time of this writing):
$ git clone https://github.com/TIBCOSoftware/spotfire-cloud-deployment-kit.git -b v1.4.0

11. Copy the Spotfire Server installation files to the CDK's 'downloads' directory. These files may be obtained from edelivery.tibco.com. It should look like this when you're done copying:
$ ls -al ~/spotfire-cloud-deployment-kit/containers/downloads/

total 1868370
drwxr-xr-x  2 myuser svc_dev_str_server_admins        10 Jul 10 09:31 .
drwxr-xr-x 13 myuser svc_dev_str_server_admins        15 Jul 10 09:31 ..
-rw-r--r--  1 myuser svc_dev_str_server_admins        52 Jul 10 09:31 .gitignore
-rwxr-xr-x  1 myuser svc_dev_str_server_admins 248434933 Jul 10 09:31 Spotfire.Dxp.netcore-linux.sdn
-rwxr-xr-x  1 myuser svc_dev_str_server_admins 150882007 Jul 10 09:31 Spotfire.Dxp.PythonServiceLinux.sdn
-rwxr-xr-x  1 myuser svc_dev_str_server_admins 419967725 Jul 10 09:31 Spotfire.Dxp.sdn
-rwxr-xr-x  1 myuser svc_dev_str_server_admins 128760600 Jul 10 09:31 Spotfire.Dxp.TerrServiceLinux.sdn
-rwxr-xr-x  1 myuser svc_dev_str_server_admins 318626713 Jul 10 09:31 TIB_sfire_server_12.4.0_languagepack-multi.zip
-rwxr-xr-x  1 myuser svc_dev_str_server_admins 267634757 Jul 10 09:31 tsnm-12.4.0.x86_64.tar.gz
-rwxr-xr-x  1 myuser svc_dev_str_server_admins 376060615 Jul 10 09:32 tss-12.4.0.x86_64.tar.gz

12. Navigate to the CDK's 'containers' directory and build the container images:
$ cd ~/spotfire-cloud-deployment-kit/containers
$ make build

This will take a few minutes to complete. If you run into errors, check the open Issues for the CDK on GitHub; sometimes a manual change to a Dockerfile is needed and is corrected in a later CDK release.

13. After the images are built, you can push them to your K8s container registry:
$ az acr login -n $AZ_ACR_NAME
$ make REGISTRY=$AZ_ACR_NAME.azurecr.io push

This will take a few minutes to complete.
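You can confirm the push succeeded by listing the image repositories now present in the registry; the Spotfire CDK images should appear in the output:

```shell
# List the repositories in the ACR after the push completes.
az acr repository list --name $AZ_ACR_NAME --output table
```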

14. Navigate to the CDK's 'helm' directory, and build the charts:
$ cd ../helm/
$ make

15. Navigate to the CDK's spotfire-server helm directory, authenticate with the Azure Container Registry, then deploy the Spotfire Server:
$ cd charts/spotfire-server/
$ TOKEN=$(az acr login --name $AZ_ACR_NAME --expose-token --output tsv --query accessToken)
$ docker login $AZ_ACR_NAME.azurecr.io --username 00000000-0000-0000-0000-000000000000 --password-stdin <<< $TOKEN
$ helm install tss1240azure . \
    --set acceptEUA=true \
    --set global.spotfire.image.registry="$AZ_ACR_NAME.azurecr.io" \
    --set global.spotfire.image.pullPolicy="Always" \
    --set database.bootstrap.databaseUrl="jdbc:postgresql://vanilla-tssdb-postgresql.default.svc.cluster.local/" \
    --set database.create-db.databaseUrl="jdbc:postgresql://vanilla-tssdb-postgresql.default.svc.cluster.local/" \
    --set database.create-db.adminUsername="postgres" \
    --set database.create-db.adminPassword="$POSTGRES_PASSWORD" \
    --set database.create-db.enabled=true \
    --set configuration.site.publicAddress="http://localhost"


Here, we have set the helm release name to something descriptive: 'tss1240azure' (to indicate that this is a Spotfire Server 12.4.0 deployment, and it runs in Azure).
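The release takes a few minutes to come up. You can watch pod status until everything reaches Running (pod names are prefixed with the release name chosen above):

```shell
# Watch the pods created by the helm release; press Ctrl+C to stop watching.
kubectl get pods --namespace default --watch
```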


16. Obtain the Spotfire 'admin' user password:
$ export SPOTFIREADMIN_PASSWORD=$(kubectl get secrets \
    --namespace default tss1240azure-spotfire-server \
    -o jsonpath="{.data.SPOTFIREADMIN_PASSWORD}" | base64 --decode)
$ echo $SPOTFIREADMIN_PASSWORD
i036eadkxlfJ
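The password is stored base64-encoded in the Kubernetes Secret, which is why the pipeline above ends in 'base64 --decode'. The decoding step can be sanity-checked locally with a placeholder value (not the real secret):

```shell
# Encode a placeholder string and decode it again, mirroring the
# jsonpath + base64 pipeline used above.
ENCODED=$(printf 'examplePassw0rd' | base64)
printf '%s' "$ENCODED" | base64 --decode    # prints: examplePassw0rd
```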

17. In a separate terminal, forward local TCP port 8081 to the K8s port configured for the *-haproxy service (which is 80 by default):
$ kubectl port-forward --address 0.0.0.0 --namespace default service/tss1240azure-haproxy 8081:80
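With the port-forward running, a quick curl from the same machine confirms the proxy is answering before you open a browser (an HTTP redirect or 200 status is expected; the exact code may vary with your configuration):

```shell
# Print only the HTTP status code returned by the forwarded haproxy port.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8081/
```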

18. In a web browser, navigate to the server's public address that you configured above:

http://mymachine.company.com:8081

Login with the username 'admin' and the password you extracted and decoded in the previous step.
Additional Information

GitHub: Cloud Deployment Kit for TIBCO Spotfire