
Administration Guide

Kubernetes compliance integration using Terraform

Use the FortiCNAPP terraform-kubernetes-agent module to create a Secret and a DaemonSet that deploy the Node and Cluster collectors in your Kubernetes cluster.

A DaemonSet runs a copy of a pod on every node in the cluster, which makes it a natural fit for node-level monitoring tools such as FortiCNAPP.
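For context, this is roughly what a minimal DaemonSet looks like when declared through the Terraform Kubernetes provider. The names and image are illustrative only; the FortiCNAPP module creates its own resources, so you do not write this yourself:

```hcl
# Illustrative only: a DaemonSet schedules one copy of this pod
# on every node in the cluster.
resource "kubernetes_daemonset" "example_agent" {
  metadata {
    name      = "example-agent"
    namespace = "monitoring"
  }
  spec {
    selector {
      match_labels = { app = "example-agent" }
    }
    template {
      metadata {
        labels = { app = "example-agent" }
      }
      spec {
        container {
          name  = "agent"
          image = "example/agent:latest"
        }
      }
    }
  }
}
```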

If you are new to the FortiCNAPP Terraform Provider or FortiCNAPP Terraform Modules, read the Terraform for FortiCNAPP Overview article to learn the basics of configuring the provider.

This topic assumes familiarity with the Terraform Provider for Kubernetes maintained by HashiCorp on the Terraform Registry.
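If the lacework provider is not yet configured in your environment, a minimal provider block looks like the following sketch. The profile name is illustrative; credentials can also be supplied through LW_* environment variables or explicit account, api_key, and api_secret arguments (see the Terraform for FortiCNAPP Overview article for details):

```hcl
# Reads credentials from the "default" profile in ~/.lacework.toml,
# as created by the lacework CLI.
provider "lacework" {
  profile = "default"
}
```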

Run Terraform

The following code snippets deploy the DaemonSet to the Kubernetes cluster being managed with Terraform.

Before running this code, adjust the following values to match the intended configuration for your deployment:

  • <kubernetes-config-path>: The filesystem path to your kubeconfig file. Example: ~/.kube/config
  • <kubernetes-config-context>: The context to use within your kubeconfig file. Example: my-context
  • <agent-access-token-name>: The name of your FortiCNAPP Agent access token. Example: prod_token
  • <agent-server-url>: Your FortiCNAPP Agent server URL. Examples: https://api.lacework.net, https://aprodus2.agent.lacework.net, https://api.fra.lacework.net, https://auprodn1.agent.lacework.net
  • <kubernetes-cluster-name>: Your Kubernetes cluster name as it is defined in your cloud provider (for example, Amazon EKS or GKE). See also How FortiCNAPP Derives the Kubernetes Cluster Name. Example: prod
  • <config-compliance-only>: Set to true for a Configuration Compliance integration only. Set to false, or omit the variable, to integrate both Configuration Compliance and Workload Security: either you want the FortiCNAPP Agent to monitor both configuration compliance and workload security of your Kubernetes cluster, or the Agent is already installed and monitoring workload security and you also want configuration compliance. Example: false
Amazon EKS:
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
    lacework = {
      source = "lacework/lacework"
      version = "~> 1.0"
    }
  }
}

provider "kubernetes" {
  config_path    = "<kubernetes-config-path>"
  config_context = "<kubernetes-config-context>"
}

data "aws_region" "current" {}

# Use the resource below if you intend to generate a new access
# token for this integration. If so, remove the data source below.
resource "lacework_agent_access_token" "k8s" {
  name = "<agent-access-token-name>"
}

# Use the data source below to reference an existing access token.
# If so, remove the resource above.
data "lacework_agent_access_token" "k8s" {
  name = "<agent-access-token-name>"
}

module "lacework_k8s_datacollector" {
  source  = "lacework/agent/kubernetes"
  version = "~> 2.0"

  # Use one of the lacework_access_token options below depending
  # on whether you are generating a new token or using an existing one.

  # Option 1: Generate a new access token
  #lacework_access_token = lacework_agent_access_token.k8s.token
  
  # Option 2: Use an existing access token
  #lacework_access_token = data.lacework_agent_access_token.k8s.token

  # The lacework_server_url property is optional if your FortiCNAPP tenant
  # is deployed in the US, but mandatory for non-US tenants.
  # https://docs.lacework.net/onboarding/agent-server-url#agent-server-url
  #lacework_server_url   = "<agent-server-url>"

  # Provide your Kubernetes cluster name as it is defined in your Cloud Provider.
  lacework_cluster_name   = "<kubernetes-cluster-name>"

  # Set lacework_cluster_exclusive to true if you only want a Configuration Compliance integration.
  # Default is false.

  #lacework_cluster_exclusive = <config-compliance-only>

  enable_cluster_agent    = true
  lacework_cluster_region = data.aws_region.current.name
  lacework_cluster_type   = "eks"
}

GKE:
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
    lacework = {
      source = "lacework/lacework"
      version = "~> 1.0"
    }
  }
}

provider "kubernetes" {
  config_path    = "<kubernetes-config-path>"
  config_context = "<kubernetes-config-context>"
}

data "google_client_config" "current" {}

# Use the resource below if you intend to generate a new access
# token for this integration. If so, remove the data source below.
resource "lacework_agent_access_token" "k8s" {
  name = "<agent-access-token-name>"
}

# Use the data source below to reference an existing access token.
# If so, remove the resource above.
data "lacework_agent_access_token" "k8s" {
  name = "<agent-access-token-name>"
}

module "lacework_k8s_datacollector" {
  source  = "lacework/agent/kubernetes"
  version = "~> 2.0"

  # Use one of the lacework_access_token options below depending
  # on whether you are generating a new token or using an existing one.

  # Option 1: Generate a new access token
  #lacework_access_token = lacework_agent_access_token.k8s.token
  
  # Option 2: Use an existing access token
  #lacework_access_token = data.lacework_agent_access_token.k8s.token

  # The lacework_server_url property is optional if your FortiCNAPP tenant
  # is deployed in the US, but mandatory for non-US tenants.
  # https://docs.lacework.net/onboarding/agent-server-url#agent-server-url
  #lacework_server_url   = "<agent-server-url>"

  # Provide your Kubernetes cluster name as it is defined in your Cloud Provider.
  lacework_cluster_name   = "<kubernetes-cluster-name>"

  # Set lacework_cluster_exclusive to true if you only want a Configuration Compliance integration.
  # Default is false.

  #lacework_cluster_exclusive = <config-compliance-only>

  enable_cluster_agent    = true
  lacework_cluster_region = data.google_client_config.current.region
  lacework_cluster_type   = "gke"
}

  1. Open an editor and create a file called main.tf.
  2. Copy and paste the appropriate code snippet above into main.tf and save the file.
  3. Run terraform init to initialize the working directory and install the required providers.
  4. Run terraform plan and review the changes that will be applied.
  5. When you are satisfied with the planned changes, run terraform apply -auto-approve to execute Terraform.

Validate the Changes

After Terraform runs, use kubectl or the FortiCNAPP console to validate that the DaemonSet deployed successfully:

  • Run the following kubectl command:

    kubectl get pods -n lacework -o wide
    
  • Go to Workloads > Kubernetes in the FortiCNAPP console.

    In the Behavior section, click Pod network and then Pod activity.

Node Collector pods are named lacework-agent-* and Cluster Collector pods are named lacework-agent-cluster-*.

Troubleshooting

See Kubernetes Troubleshooting for help with this integration.