1. Install Docker, Minikube, Kubectl, Git and Helm.
Docker:
$ sudo dnf -y upgrade
$ sudo dnf -y install yum-utils
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ sudo dnf -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin --allowerasing
$ sudo systemctl enable docker
$ sudo systemctl start docker
$ sudo usermod -aG docker $USER
$ newgrp docker
$ exit
$ exit
We exit twice here to force a full logout: the first 'exit' leaves the subshell started by 'newgrp', and the second ends the login session. After logging back in, the new group membership is active and docker commands no longer need to be prefixed with 'sudo'.
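After logging back in, you can verify that the group change took effect with a quick check (a minimal sketch; the message strings are illustrative):

```shell
# Verify the current session picked up the 'docker' group; if it did,
# docker commands will work without sudo.
if id -nG | grep -qw docker; then
  echo "docker group active"
else
  echo "docker group missing: log out and back in"
fi
```

If the group is active, a plain 'docker run hello-world' should succeed without 'sudo'.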
Kubectl:
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
$ echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
kubectl: OK
$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
$ kubectl version --client
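The checksum step above works because 'sha256sum --check' reads lines of the form '<hash>  <filename>' (note the two spaces) and recomputes each file's hash. A local round-trip with a scratch file shows the mechanism:

```shell
# Create a scratch file, generate a checksum line for it, then verify it.
printf 'hello\n' > /tmp/sha-demo.txt
hash=$(sha256sum /tmp/sha-demo.txt | awk '{print $1}')
echo "${hash}  /tmp/sha-demo.txt" | sha256sum --check
# prints: /tmp/sha-demo.txt: OK
rm /tmp/sha-demo.txt
```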
Git and Helm:
$ sudo dnf -y install git
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh
$ helm --help
Minikube:
$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ sudo install minikube-linux-amd64 /usr/local/bin/minikube
2. Start a new Kubernetes cluster using Minikube:
$ minikube start --cpus 4 --memory 8192
It is recommended to provision at least 8 GB of memory for Minikube. Next, configure your terminal to build Docker images directly inside Minikube:
$ eval $(minikube docker-env)
This avoids the need for a container registry, which speeds up testing. Be careful, however, to build your Docker images only AFTER running the 'eval' command above, and in the same terminal session, since the setting applies only to the current shell.
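Because 'minikube docker-env' works by exporting DOCKER_HOST (among other variables) into the current shell, a small guard can prevent accidentally building into the host daemon. This is a sketch; the message text is illustrative:

```shell
# Refuse to build if the shell is not pointed at minikube's Docker daemon.
if [ -z "${DOCKER_HOST:-}" ]; then
  echo "DOCKER_HOST not set: run 'eval \$(minikube docker-env)' first"
else
  echo "building against ${DOCKER_HOST}"
fi
```

You could place a check like this at the top of a build script so images never land in the wrong daemon by mistake.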
3. Install the PostgreSQL server:
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install vanilla-tssdb bitnami/postgresql
$ export POSTGRES_PASSWORD=$(kubectl get secret \
--namespace default vanilla-tssdb-postgresql \
-o jsonpath="{.data.postgres-password}" | base64 --decode)
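Kubernetes stores Secret values base64-encoded, which is why the command pipes the jsonpath output through 'base64 --decode'. A local round-trip (with a made-up sample value, not your real password) illustrates the encoding:

```shell
# Encode a sample value the way Kubernetes stores it, then decode it back.
encoded=$(printf 'examplePassw0rd' | base64)
echo "Stored in the Secret: ${encoded}"
echo "Decoded: $(echo "${encoded}" | base64 --decode)"
```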
4. Obtain the latest official release of the Spotfire CDK (which is version 2.4.0 at the time of this writing):
$ git clone https://github.com/TIBCOSoftware/spotfire-cloud-deployment-kit.git -b v2.4.0
5. Copy the Spotfire Server installation files to the CDK's 'downloads' directory. It should look like this when you're done copying:
$ ls -al ~/spotfire-cloud-deployment-kit/containers/downloads/
total 1972496
drwxr-xr-x 2 mysuer svc_dev_str_server_admins 4096 Jun 20 19:02 .
drwxr-xr-x 4 mysuer svc_dev_str_server_admins 92 Jun 20 18:52 ..
-rwxr-xr-x 1 mysuer svc_dev_str_server_admins 52 Jun 20 18:52 .gitignore
-rwxr-xr-x 1 mysuer svc_dev_str_server_admins 261242423 May 22 12:21 Spotfire.Dxp.netcore-linux.sdn
-rwxr-xr-x 1 mysuer svc_dev_str_server_admins 192013178 May 22 18:07 Spotfire.Dxp.PythonServiceLinux.sdn
-rwxr-xr-x 1 mysuer svc_dev_str_server_admins 63531645 May 22 18:07 Spotfire.Dxp.RServiceLinux.sdn
-rwxr-xr-x 1 mysuer svc_dev_str_server_admins 465164397 May 22 12:21 Spotfire.Dxp.sdn
-rwxr-xr-x 1 mysuer svc_dev_str_server_admins 141876963 May 22 18:07 Spotfire.Dxp.TerrServiceLinux.sdn
-rwxr-xr-x 1 mysuer svc_dev_str_server_admins 249628793 May 22 14:39 spotfirenodemanager-14.4.0.x86_64.tar.gz
-rwxr-xr-x 1 mysuer svc_dev_str_server_admins 350211594 May 22 14:41 spotfireserver-14.4.0.x86_64.tar.gz
-rwxr-xr-x 1 mysuer svc_dev_str_server_admins 296146764 Jun 20 18:56 SPOT_sfire_server_14.4.0_languagepack-multi.zip
You can obtain these files from edelivery.tibco.com.
6. Navigate to the CDK's 'containers' directory and build the container images:
$ cd ~/spotfire-cloud-deployment-kit/containers/
$ make build
This will take a few minutes to complete. If you run into errors, check the CDK's Issues on GitHub; sometimes a manual change to a Dockerfile is needed and will be corrected in a future CDK release.
7. Navigate to the CDK's 'helm' directory, and build the charts:
$ cd ~/spotfire-cloud-deployment-kit/helm/
$ make
8. Navigate to the CDK's spotfire-server helm directory, and deploy the Spotfire Server:
$ cd ~/spotfire-cloud-deployment-kit/helm/charts/spotfire-server/
$ helm install tss1440 . \
--set acceptEUA=true \
--set global.spotfire.image.pullPolicy="Never" \
--set database.bootstrap.databaseUrl="jdbc:postgresql://vanilla-tssdb-postgresql.default.svc.cluster.local/" \
--set database.create-db.databaseUrl="jdbc:postgresql://vanilla-tssdb-postgresql.default.svc.cluster.local/" \
--set database.create-db.adminUsername="postgres" \
--set database.create-db.adminPassword="$POSTGRES_PASSWORD" \
--set database.create-db.enabled=true \
--set configuration.site.publicAddress="http://mymachine.company.com:8081"
Here, we have set the Helm release name to something descriptive, 'tss1440', to indicate that this is a Spotfire Server 14.4.0 deployment. We have also set the Spotfire Server's public address to use port 8081; if that port is not available on your machine, choose one that is.
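The databaseUrl values above follow the standard in-cluster DNS form <service>.<namespace>.svc.cluster.local. If you installed PostgreSQL under a different release name or namespace, rebuild the URL accordingly; a sketch with the values used in this guide:

```shell
# Compose the JDBC URL from the PostgreSQL service name and its namespace.
SVC=vanilla-tssdb-postgresql
NS=default
echo "jdbc:postgresql://${SVC}.${NS}.svc.cluster.local/"
# prints: jdbc:postgresql://vanilla-tssdb-postgresql.default.svc.cluster.local/
```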
You should now see the following pods running in your K8s cluster:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
tss1440-cli-6f857f99c4-s7rxl 1/1 Running 0 15s
tss1440-config-job-1-7z9k9 1/1 Running 0 15s
tss1440-haproxy-64cb78b47b-tgtbz 1/1 Running 0 15s
tss1440-log-forwarder-59ccfc874b-ntmdl 1/1 Running 0 15s
tss1440-spotfire-server-679599b4cb-5b6gq 1/2 Running 0 15s
vanilla-tssdb-postgresql-0 1/1 Running 0 46s
9. In a separate terminal, forward local TCP port 8081 to the service port of the *-haproxy service (which is 80 by default):
$ kubectl port-forward --address 0.0.0.0 --namespace default service/tss1440-haproxy 8081:80
This will allow remote clients to connect to the Spotfire Server over port 8081.
10. Extract the Spotfire Server admin user's password by decoding the K8s secret:
$ export SPOTFIREADMIN_PASSWORD=$(kubectl get secrets \
--namespace default tss1440-spotfire-server \
-o jsonpath="{.data.SPOTFIREADMIN_PASSWORD}" | base64 --decode)
$ echo $SPOTFIREADMIN_PASSWORD
wXjiuqoDet9G
11. In a web browser, navigate to the server's public address that you configured above:
http://mymachine.company.com:8081
Log in with the username 'admin', using the password you extracted and decoded in the previous step.