
Cambria Cluster / FTC 5.5.0

Akamai Cloud Kubernetes Help Documentation

Terraform Installation


Document History

Version | Date       | Description
5.4.0   | 10/03/2024 | Updated for release 5.4.0.21627 (Linux)
5.5.0   | 04/11/2025 | Updated for release 5.5.0.23529 (Linux)

Download the online version of this document for the latest information, and always use the latest files.

Do not move forward with the installation process if you do not agree with the End User License Agreement (EULA) for our products.
You can download and read the EULA for Cambria FTC, Cambria Cluster, and Cambria License Manager from the links below:

Limitations and Security Information

Cambria FTC, Cambria Cluster, and Cambria License Manager are installed in Linux Docker containers. The limitations and security checks performed for this version are covered in the general Linux documents below:

Note: These documents are for informational use only. The setup for Kubernetes starts in section 2. Create Kubernetes Cluster.

Note: This document references Kubernetes version 1.32 only.


⚠️ Important: Before You Begin

PDF documents have a known copy/paste issue. For best results, download this document and any PDF documents it references, and open them in a PDF viewer such as Adobe Acrobat.

For commands that span more than one line, copy each line individually and verify that the copied command matches the one in the document.


⚠️ Critical Information: Read Before Proceeding with Installation

Before starting the installation, carefully review the following considerations. Skipping this section may result in errors, failed deployments, or misconfigurations.

Read only the Critical Information: Read Before Proceeding with Installation sections of the following documents:


1. Prerequisites

1.1. X11 Forwarding for User Interface

On Windows and macOS only, special tools must be installed to use the user interface of Capella's Terraform installer.
If using Linux, the machine must have a graphical user interface.

1.1.1. Option 1: Microsoft Windows Tools

  1. Download and install the X11 forwarding tool VcXsrv:
    Download VcXsrv

  2. Also download and install PuTTY or a similar tool that supports SSH with X11 forwarding:
    Download PuTTY

  3. Open XLaunch and do the following:

    • Window 1: Choose Multiple windows and set Display number to 0
    • Window 2: Choose Start no client
    • Window 3: Enable all checkboxes: Clipboard, Primary Selection, Native OpenGL, and Disable access control
    • Window 4: Click on Save configuration and save this somewhere to reuse in the future

1.1.2. Option 2: Apple macOS Tools

  1. Download and install the X11 Forwarding Tool XQuartz:
    Download XQuartz
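
Regardless of which option you use, you can confirm that X11 forwarding is working once you SSH into the deployment server (see section 3. Installation). A minimal check from the remote shell (a sketch; the exact DISPLAY value varies):

    # If X11 forwarding is active, the SSH server sets DISPLAY
    # (typically to something like localhost:10.0).
    if [ -n "$DISPLAY" ]; then
      echo "X11 forwarding appears active: DISPLAY=$DISPLAY"
    else
      echo "DISPLAY is not set; check your X server and SSH X11 settings" >&2
    fi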


2. Prepare Deployment Server

  1. On the Akamai Dashboard, create a new Ubuntu Linode to be used for the Terraform deployment.
    For best performance, choose a machine with 8 GB of RAM or more, as X11 forwarding uses a significant amount of memory on the Linode instance.

  2. SSH into the new Linode and install general tools:

    sudo apt update
    sudo apt upgrade
    sudo apt install curl unzip libice6 libsm6 dbus libgtk-3-0
  3. Download the Terraform package:

    curl -o terraform_LKE_CambriaCluster_1_0.zip -L "https://www.dropbox.com/scl/fi/t8pgwr6otlg4lesgf3xbq/terraform_LKE_CambriaCluster_1_0.zip?rlkey=82rjt9l3at2gfcztt7sy0zsz1&st=pwzmzhpb&dl=1"
  4. Unzip the package and make the shell scripts executable:

    unzip terraform_LKE_CambriaCluster_1_0.zip
    chmod +x *.sh
    chmod +x ./TerraformVariableEditor
  5. Install the tools needed for deployment:

    ./setupTools.sh
    ./installLogcli.sh
  6. Verify the tools are installed (a scripted alternative is sketched after this list):

    kubectl version --client
    helm version
    terraform -v
    logcli --version
  7. Exit from the SSH session.
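
Before exiting, you can optionally run the version checks from step 6 as a single scripted check. This is only a sketch and assumes the four tools listed above are the complete set required:

    # Report any required deployment tool that is missing from PATH
    for tool in kubectl helm terraform logcli; do
      command -v "$tool" >/dev/null 2>&1 && echo "found: $tool" || echo "MISSING: $tool" >&2
    done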

3. Installation

  1. SSH into the instance created in section 2. Prepare Deployment Server using one of the following methods (depending on the OS being used as the SSH client):

Option 1: Windows

  1. Open PuTTY or similar tool. Enable X11 Forwarding in the configuration. On PuTTY, this can be found under:
    • Connection > SSH > X11 > X11 forwarding
  2. SSH into the instance with the created user. Usually, the user is root.

Option 2: Unix (Linux, macOS)

  1. Open a terminal window and SSH into the Linode instance using the -Y option and one of the created users. This is usually root.
    Example:
    ssh -Y -i "mysshkey" root@123.123.123.123
  2. Run the following script to set secrets (credentials, license keys, etc.) as environment variables. Reference the following table:

    source ./setEnvVariablesCambriaCluster.sh

Environment Variable Explanation

Variable | Explanation
Linode API Token | API token from Akamai Cloud's Dashboard. See guide.
PostgreSQL Password | The password for the PostgreSQL database that Cambria Cluster uses. General password rules apply.
Cambria API Token | A token needed for making calls to the Cambria FTC web server. General token rules apply (e.g., 1234-5678-90abcdefg).
Web UI Users | Login credentials for the Cambria WebUIs for the Kubernetes cluster. Format: role,username,password. Allowed roles: admin (full access and user management), superuser (full access), user (view-only). Example: admin,admin,changethispassword1234,user,guest,password123
Argo Events Webhook Token | Token for specific Argo Events calls. General token rules apply (e.g., 1234-5678-90abcdefg).
Cambria FTC License Key | Provided by Capella. Starts with a '2' (e.g., 2AB122-11123A-ABC890-DEF345-ABC321-543A21).
Grafana Password | Password for the Grafana Dashboard.
Access Key | S3-compatible log storage access key (e.g., AWS_ACCESS_KEY_ID).
Secret Key | S3-compatible log storage secret key (e.g., AWS_SECRET_ACCESS_KEY).
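
Because the script is run with source, the exported variables persist in your current shell session and remain available to the tools run later in this section. The actual variable names are defined inside setEnvVariablesCambriaCluster.sh; the sketch below uses a hypothetical name (CAMBRIA_LICENSE_KEY) purely to show how to confirm a value is set without printing the secret:

    # Hypothetical variable name for illustration only; substitute the names
    # actually exported by setEnvVariablesCambriaCluster.sh
    if [ -n "$CAMBRIA_LICENSE_KEY" ]; then
      echo "License key is set (value not shown)"
    else
      echo "License key is NOT set; re-run: source ./setEnvVariablesCambriaCluster.sh" >&2
    fi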
  3. Run the Terraform editor UI:

    ./TerraformVariableEditor
  4. Click on Open Terraform File and choose the CambriaCluster_LKE.tf file.

3.1. Terraform UI Editor Configuration

Using the UI, edit the fields accordingly. Reference the following table for values that should be changed:

  • Blue: values in blue do not need to be changed unless you need to enable or disable the specific feature
  • Red: values in red are environment-specific and must be changed to match your deployment
Variable | Explanation
lke_cluster_name = CambriaFTCCluster | The name of the Kubernetes cluster
lk_region = us-mia | The region code where the Kubernetes cluster should be deployed
lke_manager_pool_node_count = 3 | The number of Cambria Cluster nodes to create
workers_can_use_manager_nodes = false | Whether the Cambria Cluster nodes should also handle encoding tasks
workersUseGPU = false | Set to true to enable NVENC capabilities
nbGPUs = 1 | Max number of GPUs to use from encoding machines
manager_instance_type = g6-dedicated-4 | Instance type of the Cambria Cluster nodes
ftc_enable_auto_scaler = true | Enable Cambria FTC's autoscaler
ftc_instance_type = g6-dedicated-8 | Instance type for autoscaled encoders
max_ftc_instances = 5 | Max number of encoder instances
cambria_cluster_replicas = 3 | Max number of management and replica nodes
expose_capella_service_externally = true | Create load balancers to expose Capella services
enable_ingress = true | Create ingress for Capella applications
host_name = myhost.com | Public domain name
acme_registration_email = test@example.com | Email for Let's Encrypt registration
acme_server = https://acme-staging-v02.api.letsencrypt.org/directory | ACME server URL
enable_eventing = true | Enable Argo eventing features
expose_grafana = true | Publicly expose the Grafana dashboard
loki_storage_type = s3_embedcred | Storage type for Loki logs
loki_local_storage_size_gi = 100 | Size of local storage in Gi for Loki
loki_s3_bucket_name = "" | S3 bucket name for Loki
loki_s3_region = "" | S3 bucket region for Loki
loki_replicas = 2 | Number of Loki replicas
loki_max_unavailable = 3 | Max unavailable Loki pods during upgrades
loki_log_retention_period = 7 | Days to retain logs
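
For reference, the values in the table correspond to variable assignments inside CambriaCluster_LKE.tf. The excerpt below is only an illustrative sketch assembled from the table above; the real file's layout and surrounding blocks may differ, so edit it through TerraformVariableEditor as described:

    # Illustrative Terraform excerpt assembled from the table above;
    # not a verbatim copy of CambriaCluster_LKE.tf
    lke_cluster_name              = "CambriaFTCCluster"
    lke_manager_pool_node_count   = 3
    workers_can_use_manager_nodes = false
    manager_instance_type         = "g6-dedicated-4"
    ftc_enable_auto_scaler        = true
    ftc_instance_type             = "g6-dedicated-8"
    max_ftc_instances             = 5
    enable_ingress                = true
    host_name                     = "myhost.com"
    acme_registration_email       = "test@example.com"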
  5. When done, click Save Changes and close the UI.
  6. Run the following commands to create a Terraform plan:

    terraform init && terraform plan -out lke-plan.tfplan
  7. Apply the Terraform plan to create the Kubernetes cluster:

    terraform apply -auto-approve lke-plan.tfplan
  8. Set the KUBECONFIG environment variable (a quick sanity check is sketched at the end of this section):

    export KUBECONFIG=kubeconfig.yaml
  9. Save the following files securely for future changes or redeployment:
  • .tfstate file
  • lke-plan.tfplan
  • CambriaCluster_LKE.tf
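
Before moving on to the full verification in section 4, you can run a quick sanity check and bundle the files from step 9 for safekeeping. A minimal sketch, assuming the state file uses Terraform's default name terraform.tfstate and that kubeconfig.yaml was written to the current directory:

    # Confirm the new cluster is reachable with the kubeconfig exported above
    kubectl get nodes
    kubectl get pods -A

    # Archive the deployment artifacts (adjust file names as needed)
    tar czf cambria-lke-deploy-backup.tar.gz terraform.tfstate lke-plan.tfplan CambriaCluster_LKE.tf kubeconfig.yaml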

4. Verification and Testing

For verification, follow the steps in section 5. Verify Cambria FTC / Cluster Installation of the main Cambria Cluster Kubernetes Installation Guide; for testing with Cambria FTC jobs, follow section 6. Testing Cambria FTC / Cluster:

Cambria Cluster and FTC 5.5.0 on Akamai Kubernetes (PDF)


5. Upgrading

5.1. Option 1: Normal Upgrade via Terraform Apply

This upgrade method is best for changing version numbers; secrets such as the license key or Web UI users; and Cambria FTC / Cluster-specific settings such as the maximum number of pods or replicas.

⚠️ Warning – Known Issues:

  • pgClusterPassword cannot currently be updated via this method.
  • Changing the PostgreSQL version is not supported via this method.
  • The region of the cluster cannot be changed.

Steps:

  1. Follow the steps in section 3: Installation.
  2. Follow the verification steps in section 4: Verification and Testing to ensure the updates were applied.

5.2. Option 2: Upgrade via Cambria Cluster Reinstallation

This is the most reliable upgrade option. It uninstalls all Cambria FTC and Cluster components and reinstalls them using a new Helm chart and values file. This will delete the database and remove all jobs from the Cambria Cluster UI.

Steps:

  1. Follow section 4.2: Creating and Editing Helm Configuration File to prepare your new cambriaClusterConfig.yaml (the values file referenced in step 3 below).

  2. Uninstall the current deployment:

    helm uninstall capella-cluster --wait
  3. Reinstall using the updated values file:

    helm upgrade --install capella-cluster capella-cluster.tgz --values cambriaClusterConfig.yaml
  4. Wait a few minutes for the Kubernetes pods to come up (a way to watch their progress is sketched after this list).

  5. Verify the installation using section 4: Verification and Testing.
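
For step 4, instead of waiting blindly you can watch the pods come up. A minimal sketch, assuming the release is installed into the default namespace as in the commands above:

    # Watch pod status until everything reports Running or Completed (Ctrl+C to stop)
    kubectl get pods -n default --watch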


6. Cleanup

To clean up the environment, ensure the following steps are followed in order. If using FTC's autoscaler, verify that no leftover Cambria FTC nodes remain running.

Steps:

  1. Remove Helm deployments:

    helm uninstall capella-cluster -n default --wait
  2. If persistent volumes remain, patch their reclaim policy to Delete so they are removed along with their claims:

    kubectl get pv -o name | awk -F'/' '{print $2}' | xargs -I{} kubectl patch pv {} -p='{"spec": {"persistentVolumeReclaimPolicy": "Delete"}}'
  3. Delete the monitoring namespace and everything in it (Prometheus, Grafana, Loki, etc.):

    kubectl delete namespace monitoring
  4. If ingress-nginx was deployed, delete its namespace:

    kubectl delete namespace ingress-nginx
  5. Destroy the Kubernetes cluster (an optional pre-destroy check is sketched after this list):

    terraform destroy -auto-approve
  6. In the NodeBalancers section of the cloud dashboard, delete any leftover balancers created by the Kubernetes cluster.

  7. In the Volumes section, delete any remaining volumes created by the Kubernetes cluster.
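
As an optional check before running terraform destroy in step 5, you can confirm that steps 1 through 4 left nothing behind. A minimal sketch, run while KUBECONFIG still points at the cluster:

    # Expect no leftover persistent volumes and no monitoring or ingress-nginx namespaces
    kubectl get pv
    kubectl get namespaces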